Mirror of https://github.com/dani-garcia/vaultwarden.git, synchronized 2024-05-19 23:40:06 +02:00

Compare commits


71 Commits
1.30.0 ... main

Author SHA1 Message Date
FDHoho007 753a9e0bae
Fix public api for domains with path prefix (#4500) 2024-05-19 20:33:31 +02:00
Stefan Melmuk f5fb69b64f
also delete organization_api_key (#4557) 2024-05-19 20:33:00 +02:00
Daniel 3261534438
Optimize Dockerfiles (#4532)
Move some ARGs closer to the build stage (potentially improving caching)
Remove redundant COPY commands
Remove redundant RUN command
Move CARGO_HOME's "&&" operator to the first line (improves consistency)
2024-05-19 20:32:36 +02:00
Rich Purnell 46762d9fde
Improve commentary aesthetics (#4549) 2024-05-19 20:30:57 +02:00
Mathijs van Veluw 6cadb2627a
Update Rust, crates and web-vault (#4558)
* Update Rust and crates

- Updated Rust to v1.78.0
- Updated crates

* Update web-vault to v2024.5.0
2024-05-19 20:30:34 +02:00
Daniel García 0fe93edea6
Some fixes for the new mobile apps (#4526) 2024-04-27 23:24:04 +02:00
Stefan Melmuk e9aa5a545e
fix emergency access invites (#4337)
* fix emergency access invites with no mail

when mail is disabled, instead of automatically accepting emergency access
for all invited users, we only accept if the user already exists

on registration of a new account, any open emergency access invitations
will be accepted if mail is disabled

also prevent invited emergency access contacts from registering if
emergency access is disabled (this is only relevant when mail is enabled;
if mail is disabled they should have an Invitation entry)

* delete emergency access invitations

if an invited user is deleted in the /admin panel, their emergency
access invitation will remain in the database, which causes
the to_json_grantee_details fn to panic

* improve missing emergency access grantees

instead of returning an empty emergency access contact the entry should
not be added to the list. also the error handling can be improved a bit.
2024-04-27 22:16:05 +02:00
Stefan Melmuk 9dcc738f85
improve access to collections via groups (#4441)
* refactor get_org_collections_details

* improve access to collection check

* fix get_org_collection_detail too
2024-04-27 22:09:00 +02:00
Kristof Mattei 84a7c7da5d
Pass in collection ids to notifier when sharing cipher. (#4517) 2024-04-27 21:53:10 +02:00
Mathijs van Veluw ca9234ed86
Add extra (unsupported) container build arch's (#4524)
There was a PR (#4370) to add i686/i386 support for Vaultwarden.
That specific PR was not a viable way of adding this.

This PR adds extra architectures for Debian-based containers which we
will not support by default. Those images will not be built and pushed
to our container registries.

Added the following architectures:
 - linux/386
 - linux/ppc64le
 - linux/s390x

Again, there will be no major support for these architectures, but it
will allow people who use these architectures to build a Debian-based
binary more easily.
2024-04-27 21:51:14 +02:00
Daniel García 27dc67fadd
Implement custom DNS resolver (#3988) 2024-04-27 20:25:34 +02:00
Mathijs van Veluw 2ad33ec97f
Update Crate and Rust (#4522)
* Update Crate and Rust

- Updated all crates
- Updated Rust to the latest patch version

* Updated GitHub Actions
2024-04-27 00:53:42 +02:00
Mathijs van Veluw e1a8df96db
Update Key Rotation web-vault v2024.3.x (#4446)
Key rotation was changed since 2024.1.x.
Several other items, like password-reset and emergency-access data, were added to the rotation, so everything is part of a single POST instead of multiple requests.

See: https://github.com/dani-garcia/bw_web_builds/pull/157
2024-04-06 14:42:53 +02:00
Mathijs van Veluw e42a37c6c1
Update crates and some Clippy fixes (#4475)
- Updated all crates including reqwest
- Fixed some clippy lints reported by nightly Rust
2024-04-06 13:55:10 +02:00
Stefan Melmuk 129b835ac7
update web-vault to v2024.3.1 (new vertical layout) (#4468)
* update web-vault to v2024.3.0

* update web-vault to v2024.3.1
2024-04-06 11:45:25 +02:00
Daniel García 2d98aa3045
Use async verify for Yubikey (#4448) 2024-03-23 16:03:17 +01:00
Mathijs van Veluw 93636eb3c3
Update Rust and crates (#4445)
- Updated Rust to v1.77.0
- Updated several crates
  The `reqwest` update included `trust-dns` > `hickory-dns` changes.
  Also, `reqwest` v0.12 is not working correctly for us; that is something to investigate.
- Fixed a new clippy warning
2024-03-23 15:40:34 +01:00
Mathijs van Veluw 1e42755187
Update chrono and sqlite (#4436)
- Updated sqlite crate
- Updated chrono crate

The latter needed a lot of changes, mostly `Duration` to `TimeDelta`,
and some changes in how the `Naive*` types are used.
2024-03-19 19:47:30 +01:00
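
The gist of the `Duration` to `TimeDelta` migration, as a minimal sketch (assuming chrono >= 0.4.32, where the struct was renamed to `TimeDelta` with `Duration` kept as an alias; this is not Vaultwarden's actual code):

```rust
use chrono::{NaiveDateTime, TimeDelta, Utc};

fn main() {
    let now: NaiveDateTime = Utc::now().naive_utc();

    // Old style: chrono::Duration::days(30), which panics when out of range.
    // New style: the try_* constructors return Option<TimeDelta> instead.
    let ttl = TimeDelta::try_days(30).expect("30 days is a valid TimeDelta");

    let expires = now + ttl;
    println!("now: {now}, expires: {expires}");
}
```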
guangwu ce8efcc48f
fix: typos (#4440)
Signed-off-by: guoguangwu <guoguangwug@gmail.com>
2024-03-19 19:47:14 +01:00
Stefan Melmuk 79ce5b49bc
automatically use email address as 2fa provider (#4317) 2024-03-17 22:35:02 +01:00
Matlink 7c3cad197c
Fix #3624: fix manager permission within groups (#3754)
* Fix #3624: fix manager permission within groups

* Query returns UUID only

* Fix issue when user is manager and in a group having access to all collections

* optimize condition check

* fix(groups): renaming and optimizations

* fix: wrong organization group membership detection

* Simplify group membership check

Co-authored-by: Stefan Melmuk <509385+stefan0xC@users.noreply.github.com>

* Remove unused statement

* improve check if the user has access via groups

instead of returning the two lists of member ids and later checking if
they contain the uuid of the current user, we really only care if
the current user has full access via a group or if they have
access to a given collection via a group

* improve comments for get_org_collections_details

* small refactor to make it easier to review

* fix(groups): query full access via group only when necessary

Co-authored-by: Mathijs van Veluw <black.dex@gmail.com>

* chore(fmt): apply rustfmt

---------

Co-authored-by: Stefan Melmuk <509385+stefan0xC@users.noreply.github.com>
Co-authored-by: Stefan Melmuk <stefan.melmuk@gmail.com>
Co-authored-by: Mathijs van Veluw <black.dex@gmail.com>
2024-03-17 22:11:34 +01:00
gzfrozen 000c606029
Change timestamp data type. (#4355)
Co-authored-by: Daniel García <dani-garcia@users.noreply.github.com>
2024-03-17 22:04:37 +01:00
Jacques B 29144b2ce0
Small improvements around email change (#4415) 2024-03-17 19:55:03 +01:00
Helmut K. C. Tessarek ea04b6f151
refactor: replace panic with a graceful exit (#4402)
* refactor: replace panic with a graceful exit

* fix: clippy errors

* fix: typo

* Update src/main.rs

Co-authored-by: Stefan Melmuk <509385+stefan0xC@users.noreply.github.com>

---------

Co-authored-by: Stefan Melmuk <509385+stefan0xC@users.noreply.github.com>
2024-03-17 19:53:41 +01:00
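
The pattern is roughly as follows; a minimal sketch with a hypothetical startup check (the real change is in `src/main.rs` and validates the actual configuration), not the actual code:

```rust
use std::process::exit;

// Hypothetical startup check used only for illustration.
fn check_data_folder(path: &str) -> Result<(), String> {
    if std::path::Path::new(path).is_dir() {
        Ok(())
    } else {
        Err(format!("data folder '{path}' does not exist"))
    }
}

fn main() {
    if let Err(e) = check_data_folder("data") {
        // A panic! here prints a backtrace hint and looks like a crash;
        // a plain message plus a non-zero exit code is friendlier to operators.
        eprintln!("Error: {e}");
        exit(1);
    }
    println!("startup checks passed");
}
```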
Mathijs van Veluw 3427217686
Remove custom WebSocket code (#4001)
* Remove custom WebSocket code

Remove our custom WebSocket code and only use the Rocket code.
Removed all options in regards to WebSockets
Added a new option `WEBSOCKET_DISABLED`, which defaults to `false`.
This can be used to disable WebSockets if you really do not want to use it.

* Addressed remarks given and some updates

- Addressed comments given during review
- Updated crates, including Rocket to the latest merged v0.5 changes
- Removed an extra header which should not be sent for websocket connections

* Updated suggestions and crates

- Addressed the suggestions
- Updated Rocket to latest rc4
  Also made the needed code changes
- Updated all other crates
  Pinned `openssl` and `openssl-sys`

---------

Co-authored-by: Daniel García <dani-garcia@users.noreply.github.com>
2024-03-17 19:52:55 +01:00
Daniel García a1fbd6d729
Improve JWT key initialization and avoid saving public key (#4085) 2024-03-17 15:11:20 +01:00
Krapp 2cbfe6fa5b
Fix comment in events.rs (#4408)
I think the comment
` // Collection events`
appeared twice
2024-03-17 14:29:31 +01:00
one230six d86c4f2c23
Signed-off-by: one230six <723682061@qq.com> (#4422)
Signed-off-by: one230six <723682061@qq.com>
2024-03-17 14:28:10 +01:00
Daniel García 6d73f30b4f
Update crates 2024-03-17 14:25:49 +01:00
Calvin Li d0c22b9fc9
fix: web API call for jquery 3.7.1 (#4400) 2024-03-02 19:09:36 +01:00
Mathijs van Veluw d6b97090fa
Update crates, GHA and Python/JS scripts (#4357)
- Update all crates
- Update GHA
- Update Global Domains script to use main instead of master
  Also fixed some Python linting warnings
- Updated Admin JS and CSS libraries
2024-02-25 23:26:46 +01:00
seiuneko 94b077cb2d
Fix env template to ensure compatibility with systemd's EnvironmentFile parsing (#4315)
* fix: update env template for systemd compatibility

Adjust env template to ensure compatibility with systemd's EnvironmentFile parsing, which only recognizes line-starting comment symbols.

* Refactor SMTP and Rocket settings in .env.template

- Simplify the SMTP_SECURITY and SMTP_PORT options by providing a list of choices and default values
- Clarify the ROCKET_PORT default value depending on the environment (Docker or not)
2024-02-19 16:29:53 +01:00
Mathijs van Veluw bb2412d033
Change the codegen-units for low resources (#4336)
It seems (as discussed in #4320) a single codegen unit still makes it
crash. This sets it back to the default of 16 that Rust uses for the release profile.
2024-02-10 13:04:08 +01:00
Mathijs van Veluw b9bdc9b8e2
Update Rust, crates and web-vault (#4328)
- Updated Rust to v1.76.0
- Updated crates
- Updated web-vault to v2024.1.2b
- Fixed some Clippy lints
- Moved lint check configuration to Cargo.toml
- Fixed issue with Reset Password Enrollment when logged-in via device
2024-02-08 22:16:29 +01:00
Mathijs van Veluw 897bdf8343
Update GHA Workflows (#4309)
- Update the workflow GH Actions.
- Configured the release workflow to always run on main/tag as discussed
  in #4226

Closes #4226
2024-02-03 16:41:25 +01:00
Mathijs van Veluw 569add453d
Add Kubernetes environment detection (#4290)
Also check if we are running within a Kubernetes environment.
These do not always run using Docker or Podman of course.

Also renamed all the functions and variables to use `container` instead
of `docker`.
2024-02-02 21:44:19 +01:00
Mathijs van Veluw 77cd5b5954
Update crates to fix new builds (#4308)
Because handlebars yanked a version which had been there for a few days, we
need to downgrade this crate. In the process, all the other crates were updated.

Fixes #4307
2024-02-02 18:30:54 +01:00
Mathijs van Veluw 4438da39f9
Fix healthcheck when using .env file (#4299)
It seems Debian-based images see the `.env` file in the `pwd` path, but
sourcing it via `. .env` breaks. It does work if you provide the full
path `/.env`. Changed the default to `/.env`.

Alpine does not have an issue with both ways.
2024-01-31 22:31:47 +01:00
Stefan Melmuk 0b2383ab56
fix push device registration (#4297)
don't try to register a push device when the device is new;
it will be registered when the push token is saved

fixes #4296
2024-01-31 22:31:22 +01:00
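
The described flow, as a hypothetical sketch (simplified types, not the actual Vaultwarden code): registration with the push relay happens when a push token is saved, not when the device row is first created.

```rust
struct Device {
    push_token: Option<String>,
}

fn register_push_device(_device: &Device) {
    // Would call the Bitwarden push relay API here.
    println!("registering device with push relay");
}

fn save_push_token(device: &mut Device, token: String) {
    // Register only when the token is new or changed; a freshly created
    // device has no token yet, so it is skipped at creation time.
    let needs_registration = device.push_token.as_deref() != Some(token.as_str());
    device.push_token = Some(token);
    if needs_registration {
        register_push_device(device);
    }
}

fn main() {
    let mut device = Device { push_token: None };
    save_push_token(&mut device, "token-1".into()); // registers here
    save_push_token(&mut device, "token-1".into()); // unchanged token, skipped
}
```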
gzfrozen ad1d65bdf8
Update env template file (#4276)
* update env template to fit the config.rs

* Categorize env template settings

* Fix a wrong setting

* Fix wrong icon redirect code

* Fix ICON_DOWNLOAD_TIMEOUT default value

Co-authored-by: Daniel <daniel.barabasa@gmail.com>

* Move related settings together.
Merge Yubikey, Duo, Email 2FA sections into one.
Other minor fixes.

* Minor fix of some settings position

* Add some comment

* Minor fix.

---------

Co-authored-by: Daniel <daniel.barabasa@gmail.com>
2024-01-30 19:15:37 +01:00
Stefan Melmuk 3b283c289e
register missing push devices at login (#3792)
save the push token of a new device even if push notifications are not
enabled, and provide a way to register the push device at login

unregister the device if there is already a push token saved, unless the
new token has already been registered.

also, the `unregister_push_device` function used the wrong argument;
cf. 08d380900b/src/Core/Services/Implementations/RelayPushRegistrationService.cs (L43)
2024-01-30 19:14:25 +01:00
Stefan Melmuk 4b9384cb2b
err on invalid feature flag (#4263)
* err on invalid feature flag

* print all invalid flags and improve error message
2024-01-28 23:36:27 +01:00
Mathijs van Veluw 0f39d96518
Fix attachment upload size check (#4282)
The min/max were reversed with the `add` and `sub` functions.
This caused the files to always be out of bounds in the check.

Fixes #4281
2024-01-28 23:32:09 +01:00
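
A hypothetical sketch of the bug class (illustrative only, not the actual code): the lower bound must be computed with `sub` and the upper bound with `add`; with the two swapped, min exceeds max and every file fails the check.

```rust
fn size_within_bounds(actual: i64, declared: i64, leeway: i64) -> bool {
    let min = declared.saturating_sub(leeway); // the bug used `add` here
    let max = declared.saturating_add(leeway); // ...and `sub` here, so min > max
    (min..=max).contains(&actual)
}

fn main() {
    assert!(size_within_bounds(1_000_000, 1_000_000, 1024));
    assert!(!size_within_bounds(2_000_000, 1_000_000, 1024));
    println!("bounds check behaves as expected");
}
```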
Daniel García edf7484a70
Improve file limit handling (#4242)
* Improve file limit handling

* Oops

* Update PostgreSQL migration

* Review comments

---------

Co-authored-by: BlackDex <black.dex@gmail.com>
2024-01-27 02:43:26 +01:00
Jacques B 8b66e34415
Return 404 when user public_key is empty (#4271) 2024-01-26 20:34:36 +01:00
Mathijs van Veluw 1d00e34bbb
Update crates, web-vault and GHA (#4275)
- Update GitHub Actions
- Updated crates
- Updated web-vault to v2024.1.2
2024-01-26 20:19:53 +01:00
Stefan Melmuk 1b801406d6
prevent side effects if groups are disabled (#4265) 2024-01-25 22:02:07 +01:00
Helmut K. C. Tessarek 5e46a43306
fix: use black text for update badge (better contrast) (#4245) 2024-01-25 21:58:05 +01:00
Mathijs van Veluw 5c77431c2d
Fix bulk collection deletion (#4257)
The bulk collection delete seems to have removed the extra org_id in the
posted data. Now we only use the org_id from the path.

Fixes #4253
2024-01-25 21:57:35 +01:00
dependabot[bot] 2775c6ce8a
Bump h2 from 0.3.23 to 0.3.24 (#4260)
Bumps [h2](https://github.com/hyperium/h2) from 0.3.23 to 0.3.24.
- [Release notes](https://github.com/hyperium/h2/releases)
- [Changelog](https://github.com/hyperium/h2/blob/v0.3.24/CHANGELOG.md)
- [Commits](https://github.com/hyperium/h2/compare/v0.3.23...v0.3.24)

---
updated-dependencies:
- dependency-name: h2
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-25 21:56:33 +01:00
Mathijs van Veluw 890e668071
Update crates and fix icon issue (#4237)
- Fix icon download issue by removing the deflate feature
- Updated all the crates
- Updated Handlebars code

Fixes #4224
2024-01-12 20:44:37 +01:00
Stefan Melmuk 596c167312
improve emergency access when not enabled (#4227)
* improve emergency access when not enabled

* display note that emergency access is disabled
2024-01-10 19:02:36 +01:00
Daniel García ae3a153bdb
Update README.md 2024-01-01 19:44:52 +01:00
Stefan Melmuk 2c36993792
enforce 2FA policy on removal of second factor and login (#3803)
* enforce 2fa policy on removal of second factor

users should be revoked when their second factors are removed.

we want to revoke users so they don't have to be invited again and
organization admins and owners are aware that they no longer have
access.

we make an exception for non-confirmed users to speed up the invitation
process as they would have to be restored before they can accept their
invitation or be confirmed.

if email is enabled, invited users have to add a second factor before
they can accept the invitation to an organization with 2fa policy.
and if it is not enabled that check is done when confirming the user.

* use &str instead of String in log_event()

* enforce the 2fa policy on login

if a user doesn't have a second factor check if they are in an
organization that has the 2fa policy enabled to revoke their access
2024-01-01 19:41:40 +01:00
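
The login-time check amounts to something like the following hypothetical sketch (illustrative types; the real implementation queries the database):

```rust
struct Membership {
    org_enforces_2fa: bool,
    confirmed: bool,
    revoked: bool,
}

fn enforce_2fa_policy_on_login(user_has_2fa: bool, memberships: &mut [Membership]) {
    if user_has_2fa {
        return;
    }
    for m in memberships.iter_mut() {
        // Non-confirmed users are exempt to keep the invitation flow simple.
        if m.org_enforces_2fa && m.confirmed && !m.revoked {
            m.revoked = true; // an admin can restore them once 2FA is set up
        }
    }
}

fn main() {
    let mut memberships =
        vec![Membership { org_enforces_2fa: true, confirmed: true, revoked: false }];
    enforce_2fa_policy_on_login(false, &mut memberships);
    assert!(memberships[0].revoked);
}
```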
THONY d672ad3f76
US or EU Data Region Selection (#3752)
* add selection of data region for push

* fix cargo check + rewrite config + add check url

* fix clippy error

* add comment in .env.template, adapt config.rs

* Update .env.template

Co-authored-by: William Desportes <williamdes@wdes.fr>

* Update .env.template

Co-authored-by: William Desportes <williamdes@wdes.fr>

* Revert "Update .env.template"

This reverts commit 5bed974ba7.

* Revert "Update .env.template"

This reverts commit 0760eff95d.

* fix /connect/token to push identity

* fix /connect/token to push identity

* Fixed formatting when solving merge conflicts

---------

Co-authored-by: William Desportes <williamdes@wdes.fr>
Co-authored-by: Daniel García <dani-garcia@users.noreply.github.com>
2024-01-01 16:01:57 +01:00
Matlink a641b48884
Fix #3413: push to users accessing the collections using groups (#3757)
* Fix #3413: push to users accessing the collections using groups

* Notify groups only when enabled
2024-01-01 15:46:03 +01:00
Philipp Kolberg 98b2178c7d
Allow customizing the featureStates (#4168)
* Allow customizing the featureStates

Use a comma-separated list of features to enable using the FEATURE_FLAGS env variable

* Move feature flag parsing to util

* Fix formatting

* Update supported feature flags

* Rename feature_flags to experimental_client_feature_flags

Additionally, use a caret (^) instead of an exclamation mark (!) to disable features

* Fix formatting issue.

* Add documentation to env template

* Remove functionality to disable feature flags

* Fix JSON key for feature states

* Convert error to warning when feature flag is unrecognized

* Simplify parsing of feature flags

* Fix default value of feature flags in env template

* Fix formatting
2024-01-01 15:44:02 +01:00
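
A sketch of the parsing approach the bullets above describe (illustrative names, not the actual Vaultwarden code): split the comma-separated value, trim each entry, and warn on unrecognized flags instead of erroring.

```rust
use std::collections::HashSet;

// Assumed flag list; the names come from the .env.template section further down.
const KNOWN_FLAGS: &[&str] =
    &["autofill-overlay", "autofill-v2", "browser-fileless-import", "fido2-vault-credentials"];

fn parse_feature_flags(raw: &str) -> HashSet<String> {
    raw.split(',')
        .map(str::trim)
        .filter(|flag| !flag.is_empty())
        .filter(|flag| {
            let known = KNOWN_FLAGS.contains(flag);
            if !known {
                // A warning instead of a hard error, per the final revision.
                eprintln!("warning: unrecognized feature flag `{flag}` ignored");
            }
            known
        })
        .map(String::from)
        .collect()
}

fn main() {
    let flags = parse_feature_flags("fido2-vault-credentials, autofill-v2,bogus-flag");
    println!("enabled flags: {flags:?}");
}
```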
Mathijs van Veluw 76a3f0f531
Fix Single Org Policy check (#4207)
There was an error in the single org policy check to determine how many
users there are in an org. The `or` check was at the wrong location in
the DSL.

This is now fixed.

Fixes #4205
2024-01-01 15:42:57 +01:00
Mathijs van Veluw c5665e7b77
Update Rust and Crates (#4211)
- Updated Rust to v1.75.0
- Updated all the crates
- Fixed warning generated by latest version of Rust
2024-01-01 15:41:54 +01:00
Mathijs van Veluw cbdcf8ef9f
Update web-vault to v2023.12.0 (#4201) 2023-12-24 15:50:58 +01:00
Chris 3337594d60
Add additional build target which optimizes for size (#4096)
OpenWRT is a project which builds and distributes firmware for
embedded devices like routers, access points, and so on. These
devices are usually very limited in terms of storage. Therefore,
optimizing binaries for size at the cost of execution speed is
usually desired.

This PR adds an additional build-target, namely "release-micro",
which implements several parameters which optimize in favor of
binary size.

The following parameters were chosen:
- opt-level "z": Optimize for size with disabled loop vectorization
- strip "symbols": Strip debuginfo and symbols from binary
- lto "fat": Enable link-time optimizations across all crates
- codegen-units 1: Disable parallelization of code generation to
  allow for additional optimizations
- panic "abort": Abort on Panic() instead of unwinding

All these build parameters significantly reduce the binary size
from >40MB to <15MB - the actual amount depends on the target
architecture.

We would like to upstream this new build target to keep our build
environment simple. Other projects which deploy vaultwarden on
size-constrained environments may benefit from this change too.

Signed-off-by: Christian Lachner <gladiac@gmail.com>
2023-12-18 21:46:53 +01:00
Mathijs van Veluw 2daa8be1f1
Update crates (#4173)
Update all crates instead of only the zerocopy from dependabot.
Closes #4170
2023-12-18 21:45:54 +01:00
Mathijs van Veluw eccb3ab947
Decrease JWT Refresh/Auth token (#4163)
Large JWTs could cause issues because the header or body size of the
HTTP request could get too large when you are a member of a lot of organizations.

This PR removes these specific keys since they are not used either
client side or server side.

Because Bitwarden does add these to their JWTs, I would suggest keeping
the code we had, but commented out, as a reference.

Removing it and searching for it when needed would be a waste of time.

Fixes #4156
2023-12-13 17:49:35 +01:00
Mathijs van Veluw 3246251f29
Fix the version string (#4153)
For some reason still not known, the `.git` directory was not copied
into the container. I think buildkit (buildx) did this by default before, and
stopped this with newer versions.

This PR fixes this by also touching `build.rs` besides `src/main.rs`.

This PR also updates Rust to v1.74.1 and some crates, including the
latest version of Alpine 3.19.

Fixes #4150
2023-12-09 23:04:33 +01:00
Mathijs van Veluw 8ab200224e
Several small fixes for open issues (#4143)
* Fix BWDC when re-run with cleared cache

Using the BWDC with a cleared cache caused invited users to be converted
to accepted users.

The problem was a wrong check for the `restore` function.

Fixes #4114

* Remove useless variable

During some refactoring this seems to have been overlooked.
This variable gets filled but isn't used at all afterwards.

Fixes #4105

* Check some `.git` paths to force a rebuild

When a checked-out repo switches to a specific tag, and that tag does
not change anything in the files except the tag itself, it could
happen that the build process doesn't see any changes, even though the
version string needs to be different.

This commit ensures that if some specific paths are changed within the
.git directory, cargo will be triggered to rebuild.

Fixes #4087

* Do not delete dir on file delete

Previously, during a `delete_file` call we also tried to delete the
parent directory, ignoring all errors, such as it not being empty.

Since this function is called `delete_file` and says nothing about
directories, I have removed that code; it will now only delete the
file and leave the rest as-is.

If this is somehow still needed or wanted, which I do not think it is,
then we should create a new function.

Fixes #4081

* Fix healthcheck when using an ENV file

If someone is using a `.env` file or has configured the `ENV_FILE` variable
to use that as its configuration, this was missed by the healthcheck.

So, `DOMAIN` and `ROCKET_TLS` were not seen, and not used in these cases.

This commit fixes this by checking for this file; if it exists, those
variables will be loaded first.

Fixes #4112

* Add missing route

While there was a function and a derive, this endpoint wasn't part of
the routes. Since Bitwarden does have this endpoint, I'll add the route
instead of deleting it.

Fixes #4076
Fixes #4144

* Update crates to update the openssl crate

Because of a bug in the openssl-sys crate we had pinned it to an
older version. This issue has been fixed, and the fix was released 2 days ago.

This commit updates the openssl crates including others.
This should also fix the issues with building Vaultwarden using newer
versions of LibreSSL.

Fixes #4051
2023-12-09 01:21:14 +01:00
Mathijs van Veluw 34e00e1478
Update Rust, Crates, Profile and Actions (#4126)
- Updated Rust to v1.74.0
- Updated all crates (where possible)
- Changed release profile to use
  * fat lto
  * 1 codegen-unit
  This should optimize a bit for speed and a lot for size (~15 MB smaller)
- Updated Github actions to use caching for the bake process
- Added a schedule to clean the cache every week to prevent stale Debian/Alpine base images
- During the release action, the Alpine/static binaries are added as artifacts.
  Later we could perhaps also add them to the releases automatically.
- Added CODEOWNERS to prevent unchecked changes to GitHub Actions workflows
2023-12-04 20:26:11 +01:00
Mathijs van Veluw 0fdda3bc2f
Prevent generating an error during ws close (#4127)
When a WebSocket connection was closing, it was still sending a message
after it had already closed, which generated an error in the logs. While
this error didn't harm any functionality of Vaultwarden, it isn't nice
to see, of course.

This PR fixes this by catching the close message and breaking the loop at
that point. This prevents the `_` catch-all from echoing the close
message back to the client, which was what caused the error message.

Fixes #4090
2023-12-04 20:20:13 +01:00
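
The fix pattern looks roughly like this (message enum modeled loosely on tungstenite/rocket_ws; a sketch, not the actual Vaultwarden code):

```rust
#[derive(Debug)]
enum Message {
    Text(String),
    Close,
}

fn pump(incoming: Vec<Message>) {
    for msg in incoming {
        match msg {
            // Catch Close explicitly and stop; previously the `_` catch-all
            // echoed it back over the already-closed socket.
            Message::Close => break,
            other => println!("forwarding {other:?}"),
        }
    }
    // The connection is dropped here without writing to a closed socket.
}

fn main() {
    pump(vec![Message::Text("hello".into()), Message::Close]);
}
```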
Mathijs van Veluw 48836501bf
Update crates (#4074)
* Remove another header for websocket connections

* Fix small bake issue

* Update crates

Updated crates and adjusted code where needed.
One major update is Rocket rc4; no need anymore (again) for crates.io patching.

The only item still pending is openssl/openssl-sys, for which we need to
wait and see if https://github.com/sfackler/rust-openssl/pull/2094 will be
merged. If it is, we can remove the pinned versions for the openssl crate.
2023-11-15 10:41:14 +01:00
Mathijs van Veluw f863ffb89a
Add Protected Actions Check (#4067)
Since the `Login with device` feature, some actions done via the
web-vault need to be verified via an OTP instead of providing the MasterPassword.

This only happens if a user used `Login with device` on a device
which uses either Biometrics login or a PIN. Those login methods prevent the
authorizing device from sending the MasterPasswordHash. When this happens, the
web-vault requests an OTP to be filled in, and this OTP is sent to the
user's email address, which is the same as the email address used to log in.

The only way to bypass this is by logging in with your password; in
those cases a password is requested instead of an OTP.

In case SMTP is not enabled, an error message is shown telling the
user to log in using their password.

Fixes #4042
2023-11-12 22:15:44 +01:00
Mathijs van Veluw 03c6ed2e07
Disable autofill-v2 (#4056)
Disabled autofill-v2 as it seems to cause strange issues as reported
here: https://github.com/dani-garcia/vaultwarden/discussions/4052

Also added the Vaultwarden server version back again but at a different
location.

Fixes #4052
2023-11-09 00:16:27 +01:00
Mathijs van Veluw efc6eb0073
Fix missing alpine tag during buildx bake (#4043)
The bake recipe was missing the single `:alpine` tag for the Alpine
builds when we were releasing a `stable/latest` version of Vaultwarden.

This PR fixes this by checking for those conditions and adding the
`:alpine` tag too.

We will also keep `:latest-alpine`, which I find even nicer than just
`:alpine`.

Fixes #4035
2023-11-07 10:50:58 +01:00
90 changed files with 9112 additions and 9685 deletions

.env.template

@@ -10,39 +10,13 @@
## variable ENV_FILE can be set to the location of this file prior to starting
## Vaultwarden.
####################
### Data folders ###
####################
## Main data folder
# DATA_FOLDER=data
## Database URL
## When using SQLite, this is the path to the DB file, defaults to %DATA_FOLDER%/db.sqlite3
# DATABASE_URL=data/db.sqlite3
## When using MySQL, specify an appropriate connection URI.
## Details: https://docs.diesel.rs/diesel/mysql/struct.MysqlConnection.html
# DATABASE_URL=mysql://user:password@host[:port]/database_name
## When using PostgreSQL, specify an appropriate connection URI (recommended)
## or keyword/value connection string.
## Details:
## - https://docs.diesel.rs/diesel/pg/struct.PgConnection.html
## - https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING
# DATABASE_URL=postgresql://user:password@host[:port]/database_name
## Database max connections
## Define the size of the connection pool used for connecting to the database.
# DATABASE_MAX_CONNS=10
## Database timeout
## Timeout when acquiring database connection
# DATABASE_TIMEOUT=30
## Database connection initialization
## Allows SQL statements to be run whenever a new database connection is created.
## This is mainly useful for connection-scoped pragmas.
## If empty, a database-specific default is used:
## - SQLite: "PRAGMA busy_timeout = 5000; PRAGMA synchronous = NORMAL;"
## - MySQL: ""
## - PostgreSQL: ""
# DATABASE_CONN_INIT=""
## Individual folders, these override %DATA_FOLDER%
# RSA_KEY_FILENAME=data/rsa_key
# ICON_CACHE_FOLDER=data/icon_cache
@@ -52,65 +26,85 @@
## Templates data folder, by default uses embedded templates
## Check source code to see the format
# TEMPLATES_FOLDER=/path/to/templates
# TEMPLATES_FOLDER=data/templates
## Automatically reload the templates for every request, slow, use only for development
# RELOAD_TEMPLATES=false
## Client IP Header, used to identify the IP of the client, defaults to "X-Real-IP"
## Set to the string "none" (without quotes), to disable any headers and just use the remote IP
# IP_HEADER=X-Real-IP
## Cache time-to-live for successfully obtained icons, in seconds (0 is "forever")
# ICON_CACHE_TTL=2592000
## Cache time-to-live for icons which weren't available, in seconds (0 is "forever")
# ICON_CACHE_NEGTTL=259200
## Web vault settings
# WEB_VAULT_FOLDER=web-vault/
# WEB_VAULT_ENABLED=true
## Enables websocket notifications
# WEBSOCKET_ENABLED=false
#########################
### Database settings ###
#########################
## Controls the WebSocket server address and port
# WEBSOCKET_ADDRESS=0.0.0.0
# WEBSOCKET_PORT=3012
## Database URL
## When using SQLite, this is the path to the DB file, defaults to %DATA_FOLDER%/db.sqlite3
# DATABASE_URL=data/db.sqlite3
## When using MySQL, specify an appropriate connection URI.
## Details: https://docs.diesel.rs/2.1.x/diesel/mysql/struct.MysqlConnection.html
# DATABASE_URL=mysql://user:password@host[:port]/database_name
## When using PostgreSQL, specify an appropriate connection URI (recommended)
## or keyword/value connection string.
## Details:
## - https://docs.diesel.rs/2.1.x/diesel/pg/struct.PgConnection.html
## - https://www.postgresql.org/docs/current/libpq-connect.html#LIBPQ-CONNSTRING
# DATABASE_URL=postgresql://user:password@host[:port]/database_name
## Enable WAL for the DB
## Set to false to avoid enabling WAL during startup.
## Note that if the DB already has WAL enabled, you will also need to disable WAL in the DB,
## this setting only prevents Vaultwarden from automatically enabling it on start.
## Please read project wiki page about this setting first before changing the value as it can
## cause performance degradation or might render the service unable to start.
# ENABLE_DB_WAL=true
## Database connection retries
## Number of times to retry the database connection during startup, with 1 second delay between each retry, set to 0 to retry indefinitely
# DB_CONNECTION_RETRIES=15
## Database timeout
## Timeout when acquiring database connection
# DATABASE_TIMEOUT=30
## Database max connections
## Define the size of the connection pool used for connecting to the database.
# DATABASE_MAX_CONNS=10
## Database connection initialization
## Allows SQL statements to be run whenever a new database connection is created.
## This is mainly useful for connection-scoped pragmas.
## If empty, a database-specific default is used:
## - SQLite: "PRAGMA busy_timeout = 5000; PRAGMA synchronous = NORMAL;"
## - MySQL: ""
## - PostgreSQL: ""
# DATABASE_CONN_INIT=""
#################
### WebSocket ###
#################
## Enable websocket notifications
# ENABLE_WEBSOCKET=true
##########################
### Push notifications ###
##########################
## Enables push notifications (requires key and id from https://bitwarden.com/host)
# PUSH_ENABLED=true
## If you choose "European Union" Data Region, uncomment PUSH_RELAY_URI and PUSH_IDENTITY_URI then replace .com by .eu
## Details about mobile client push notification:
## - https://github.com/dani-garcia/vaultwarden/wiki/Enabling-Mobile-Client-push-notification
# PUSH_ENABLED=false
# PUSH_INSTALLATION_ID=CHANGEME
# PUSH_INSTALLATION_KEY=CHANGEME
## Don't change this unless you know what you're doing.
# PUSH_RELAY_URI=https://push.bitwarden.com
# PUSH_IDENTITY_URI=https://identity.bitwarden.com
## Controls whether users are allowed to create Bitwarden Sends.
## This setting applies globally to all users.
## To control this on a per-org basis instead, use the "Disable Send" org policy.
# SENDS_ALLOWED=true
## Controls whether users can enable emergency access to their accounts.
## This setting applies globally to all users.
# EMERGENCY_ACCESS_ALLOWED=true
## Controls whether event logging is enabled for organizations
## This setting applies to organizations.
## Disabled by default. Also check the EVENT_CLEANUP_SCHEDULE and EVENTS_DAYS_RETAIN settings.
# ORG_EVENTS_ENABLED=false
## Controls whether users can change their email.
## This setting applies globally to all users
# EMAIL_CHANGE_ALLOWED=true
## Number of days to retain events stored in the database.
## If unset (the default), events are kept indefinitely and the scheduled job is disabled!
# EVENTS_DAYS_RETAIN=
## BETA FEATURE: Groups
## Controls whether group support is enabled for organizations
## This setting applies to organizations.
## Disabled by default because this is a beta feature, it contains known issues!
## KNOW WHAT YOU ARE DOING!
# ORG_GROUPS_ENABLED=false
#####################
### Schedule jobs ###
#####################
## Job scheduler settings
##
@@ -151,60 +145,69 @@
## Cron schedule of the job that cleans old events from the event table.
## Defaults to daily. Set blank to disable this job. Also without EVENTS_DAYS_RETAIN set, this job will not start.
# EVENT_CLEANUP_SCHEDULE="0 10 0 * * *"
## Enable extended logging, which shows timestamps and targets in the logs
# EXTENDED_LOGGING=true
## Timestamp format used in extended logging.
## Format specifiers: https://docs.rs/chrono/latest/chrono/format/strftime
# LOG_TIMESTAMP_FORMAT="%Y-%m-%d %H:%M:%S.%3f"
## Logging to file
# LOG_FILE=/path/to/log
## Logging to Syslog
## This requires extended logging
# USE_SYSLOG=false
## Log level
## Change the verbosity of the log output
## Valid values are "trace", "debug", "info", "warn", "error" and "off"
## Setting it to "trace" or "debug" would also show logs for mounted
## routes and static file, websocket and alive requests
# LOG_LEVEL=Info
## Enable WAL for the DB
## Set to false to avoid enabling WAL during startup.
## Note that if the DB already has WAL enabled, you will also need to disable WAL in the DB,
## this setting only prevents Vaultwarden from automatically enabling it on start.
## Please read project wiki page about this setting first before changing the value as it can
## cause performance degradation or might render the service unable to start.
# ENABLE_DB_WAL=true
## Database connection retries
## Number of times to retry the database connection during startup, with 1 second delay between each retry, set to 0 to retry indefinitely
# DB_CONNECTION_RETRIES=15
## Icon service
## The predefined icon services are: internal, bitwarden, duckduckgo, google.
## To specify a custom icon service, set a URL template with exactly one instance of `{}`,
## which is replaced with the domain. For example: `https://icon.example.com/domain/{}`.
## Number of days to retain events stored in the database.
## If unset (the default), events are kept indefinitely and the scheduled job is disabled!
# EVENTS_DAYS_RETAIN=
##
## `internal` refers to Vaultwarden's built-in icon fetching implementation.
## If an external service is set, an icon request to Vaultwarden will return an HTTP
## redirect to the corresponding icon at the external service. An external service may
## be useful if your Vaultwarden instance has no external network connectivity, or if
## you are concerned that someone may probe your instance to try to detect whether icons
## for certain sites have been cached.
# ICON_SERVICE=internal
## Cron schedule of the job that cleans old auth requests from the auth request table.
## Defaults to every minute. Set blank to disable this job.
# AUTH_REQUEST_PURGE_SCHEDULE="30 * * * * *"
## Icon redirect code
## The HTTP status code to use for redirects to an external icon service.
## The supported codes are 301 (legacy permanent), 302 (legacy temporary), 307 (temporary), and 308 (permanent).
## Temporary redirects are useful while testing different icon services, but once a service
## has been decided on, consider using permanent redirects for cacheability. The legacy codes
## are currently better supported by the Bitwarden clients.
# ICON_REDIRECT_CODE=302
########################
### General settings ###
########################
## Domain settings
## The domain must match the address from where you access the server
## It's recommended to configure this value, otherwise certain functionality might not work,
## like attachment downloads, email links and U2F.
## For U2F to work, the server must use HTTPS, you can use Let's Encrypt for free certs
## To use HTTPS, the recommended way is to put Vaultwarden behind a reverse proxy
## Details:
## - https://github.com/dani-garcia/vaultwarden/wiki/Enabling-HTTPS
## - https://github.com/dani-garcia/vaultwarden/wiki/Proxy-examples
## For development
# DOMAIN=http://localhost
## For public server
# DOMAIN=https://vw.domain.tld
## For public server (URL with port number)
# DOMAIN=https://vw.domain.tld:8443
## For public server (URL with path)
# DOMAIN=https://domain.tld/vw
## Controls whether users are allowed to create Bitwarden Sends.
## This setting applies globally to all users.
## To control this on a per-org basis instead, use the "Disable Send" org policy.
# SENDS_ALLOWED=true
## HIBP Api Key
## HaveIBeenPwned API Key, request it here: https://haveibeenpwned.com/API/Key
# HIBP_API_KEY=
## Per-organization attachment storage limit (KB)
## Max kilobytes of attachment storage allowed per organization.
## When this limit is reached, organization members will not be allowed to upload further attachments for ciphers owned by that organization.
# ORG_ATTACHMENT_LIMIT=
## Per-user attachment storage limit (KB)
## Max kilobytes of attachment storage allowed per user.
## When this limit is reached, the user will not be allowed to upload further attachments.
# USER_ATTACHMENT_LIMIT=
## Per-user send storage limit (KB)
## Max kilobytes of send storage allowed per user.
## When this limit is reached, the user will not be allowed to upload further sends.
# USER_SEND_LIMIT=
## Number of days to wait before auto-deleting a trashed item.
## If unset (the default), trashed items are not auto-deleted.
## This setting applies globally, so make sure to inform all users of any changes to this setting.
# TRASH_AUTO_DELETE_DAYS=
## Number of minutes to wait before a 2FA-enabled login is considered incomplete,
## resulting in an email notification. An incomplete 2FA login is one where the correct
## master password was provided but the required 2FA step was not completed, which
## potentially indicates a master password compromise. Set to 0 to disable this check.
## This setting applies globally to all users.
# INCOMPLETE_2FA_TIME_LIMIT=3
## Disable icon downloading
## Set to true to disable icon downloading in the internal icon service.
@@ -213,38 +216,6 @@
## will be deleted eventually, but won't be downloaded again.
# DISABLE_ICON_DOWNLOAD=false
## Icon download timeout
## Configure the timeout value when downloading the favicons.
## The default is 10 seconds, but this could be too low on slower network connections
# ICON_DOWNLOAD_TIMEOUT=10
## Icon blacklist Regex
## Any domains or IPs that match this regex won't be fetched by the icon service.
## Useful to hide other servers in the local network. Check the WIKI for more details
## NOTE: Always enclose this regex within single quotes!
# ICON_BLACKLIST_REGEX='^(192\.168\.0\.[0-9]+|192\.168\.1\.[0-9]+)$'
## Any IP which is not defined as a global IP will be blacklisted.
## Useful to secure your internal environment: See https://en.wikipedia.org/wiki/Reserved_IP_addresses for a list of IPs which it will block
# ICON_BLACKLIST_NON_GLOBAL_IPS=true
## Disable 2FA remember
## Enabling this would force the users to use a second factor to login every time.
## Note that the checkbox would still be present, but ignored.
# DISABLE_2FA_REMEMBER=false
## Maximum attempts before an email token is reset and a new email will need to be sent.
# EMAIL_ATTEMPTS_LIMIT=3
## Token expiration time
## Maximum time in seconds a token is valid. The time the user has to open email client and copy token.
# EMAIL_EXPIRATION_TIME=600
## Email token size
## Number of digits in an email 2FA token (min: 6, max: 255).
## Note that the Bitwarden clients are hardcoded to mention 6 digit codes regardless of this setting!
# EMAIL_TOKEN_SIZE=6
## Controls if new users can register
# SIGNUPS_ALLOWED=true
@@ -266,6 +237,11 @@
## even if SIGNUPS_ALLOWED is set to false
# SIGNUPS_DOMAINS_WHITELIST=example.com,example.net,example.org
## Controls whether event logging is enabled for organizations
## This setting applies to organizations.
## Disabled by default. Also check the EVENT_CLEANUP_SCHEDULE and EVENTS_DAYS_RETAIN settings.
# ORG_EVENTS_ENABLED=false
## Controls which users can create new orgs.
## Blank or 'all' means all users can create orgs (this is the default):
# ORG_CREATION_USERS=
@@ -274,6 +250,122 @@
## A comma-separated list means only those users can create orgs:
# ORG_CREATION_USERS=admin1@example.com,admin2@example.com
## Allow org admins to invite users, even when signups are disabled
# INVITATIONS_ALLOWED=true
## Name shown in the invitation emails that don't come from a specific organization
# INVITATION_ORG_NAME=Vaultwarden
## The number of hours after which an organization invite token, emergency access invite token,
## email verification token and deletion request token will expire (must be at least 1)
# INVITATION_EXPIRATION_HOURS=120
## Controls whether users can enable emergency access to their accounts.
## This setting applies globally to all users.
# EMERGENCY_ACCESS_ALLOWED=true
## Controls whether users can change their email.
## This setting applies globally to all users
# EMAIL_CHANGE_ALLOWED=true
## Number of server-side password hashing iterations for the password hash.
## The default for new users. If changed, it will be updated during login for existing users.
# PASSWORD_ITERATIONS=600000
## Controls whether users can set password hints. This setting applies globally to all users.
# PASSWORD_HINTS_ALLOWED=true
## Controls whether a password hint should be shown directly in the web page if
## SMTP service is not configured. Not recommended for publicly-accessible instances
## as this provides unauthenticated access to potentially sensitive data.
# SHOW_PASSWORD_HINT=false
#########################
### Advanced settings ###
#########################
## Client IP Header, used to identify the IP of the client, defaults to "X-Real-IP"
## Set to the string "none" (without quotes), to disable any headers and just use the remote IP
# IP_HEADER=X-Real-IP
## Icon service
## The predefined icon services are: internal, bitwarden, duckduckgo, google.
## To specify a custom icon service, set a URL template with exactly one instance of `{}`,
## which is replaced with the domain. For example: `https://icon.example.com/domain/{}`.
##
## `internal` refers to Vaultwarden's built-in icon fetching implementation.
## If an external service is set, an icon request to Vaultwarden will return an HTTP
## redirect to the corresponding icon at the external service. An external service may
## be useful if your Vaultwarden instance has no external network connectivity, or if
## you are concerned that someone may probe your instance to try to detect whether icons
## for certain sites have been cached.
# ICON_SERVICE=internal
## Icon redirect code
## The HTTP status code to use for redirects to an external icon service.
## The supported codes are 301 (legacy permanent), 302 (legacy temporary), 307 (temporary), and 308 (permanent).
## Temporary redirects are useful while testing different icon services, but once a service
## has been decided on, consider using permanent redirects for cacheability. The legacy codes
## are currently better supported by the Bitwarden clients.
# ICON_REDIRECT_CODE=302
## Cache time-to-live for successfully obtained icons, in seconds (0 is "forever")
## Default: 2592000 (30 days)
# ICON_CACHE_TTL=2592000
## Cache time-to-live for icons which weren't available, in seconds (0 is "forever")
## Default: 259200 (3 days)
# ICON_CACHE_NEGTTL=259200
## Icon download timeout
## Configure the timeout value when downloading the favicons.
## The default is 10 seconds, but this could be too low on slower network connections
# ICON_DOWNLOAD_TIMEOUT=10
## Icon blacklist Regex
## Any domains or IPs that match this regex won't be fetched by the icon service.
## Useful to hide other servers in the local network. Check the WIKI for more details
## NOTE: Always enclose this regex within single quotes!
# ICON_BLACKLIST_REGEX='^(192\.168\.0\.[0-9]+|192\.168\.1\.[0-9]+)$'
## Any IP which is not defined as a global IP will be blacklisted.
## Useful to secure your internal environment: See https://en.wikipedia.org/wiki/Reserved_IP_addresses for a list of IPs which it will block
# ICON_BLACKLIST_NON_GLOBAL_IPS=true
## Client Settings
## Enable experimental feature flags for clients.
## This is a comma-separated list of flags, e.g. "flag1,flag2,flag3".
##
## The following flags are available:
## - "autofill-overlay": Add an overlay menu to form fields for quick access to credentials.
## - "autofill-v2": Use the new autofill implementation.
## - "browser-fileless-import": Directly import credentials from other providers without a file.
## - "fido2-vault-credentials": Enable the use of FIDO2 security keys as second factor.
# EXPERIMENTAL_CLIENT_FEATURE_FLAGS=fido2-vault-credentials
## Require new device emails. When a user logs in an email is required to be sent.
## If sending the email fails the login attempt will fail!!
# REQUIRE_DEVICE_EMAIL=false
## Enable extended logging, which shows timestamps and targets in the logs
# EXTENDED_LOGGING=true
## Timestamp format used in extended logging.
## Format specifiers: https://docs.rs/chrono/latest/chrono/format/strftime
# LOG_TIMESTAMP_FORMAT="%Y-%m-%d %H:%M:%S.%3f"
## Logging to Syslog
## This requires extended logging
# USE_SYSLOG=false
## Logging to file
# LOG_FILE=/path/to/log
## Log level
## Change the verbosity of the log output
## Valid values are "trace", "debug", "info", "warn", "error" and "off"
## Setting it to "trace" or "debug" would also show logs for mounted
## routes and static file, websocket and alive requests
# LOG_LEVEL=info
## Token for the admin interface, preferably an Argon2 PCH string
## Vaultwarden has a built-in generator by calling `vaultwarden hash`
## For details see: https://github.com/dani-garcia/vaultwarden/wiki/Enabling-admin-page#secure-the-admin_token
@@ -289,54 +381,13 @@
## meant to be used with the use of a separate auth layer in front
# DISABLE_ADMIN_TOKEN=false
## Allow org admins to invite users, even when signups are disabled
# INVITATIONS_ALLOWED=true
## Name shown in the invitation emails that don't come from a specific organization
# INVITATION_ORG_NAME=Vaultwarden
## Number of seconds, on average, between admin login requests from the same IP address before rate limiting kicks in.
# ADMIN_RATELIMIT_SECONDS=300
## Allow a burst of requests of up to this size, while maintaining the average indicated by `ADMIN_RATELIMIT_SECONDS`.
# ADMIN_RATELIMIT_MAX_BURST=3
## The number of hours after which an organization invite token, emergency access invite token,
## email verification token and deletion request token will expire (must be at least 1)
# INVITATION_EXPIRATION_HOURS=120
## Per-organization attachment storage limit (KB)
## Max kilobytes of attachment storage allowed per organization.
## When this limit is reached, organization members will not be allowed to upload further attachments for ciphers owned by that organization.
# ORG_ATTACHMENT_LIMIT=
## Per-user attachment storage limit (KB)
## Max kilobytes of attachment storage allowed per user.
## When this limit is reached, the user will not be allowed to upload further attachments.
# USER_ATTACHMENT_LIMIT=
## Number of days to wait before auto-deleting a trashed item.
## If unset (the default), trashed items are not auto-deleted.
## This setting applies globally, so make sure to inform all users of any changes to this setting.
# TRASH_AUTO_DELETE_DAYS=
## Number of minutes to wait before a 2FA-enabled login is considered incomplete,
## resulting in an email notification. An incomplete 2FA login is one where the correct
## master password was provided but the required 2FA step was not completed, which
## potentially indicates a master password compromise. Set to 0 to disable this check.
## This setting applies globally to all users.
# INCOMPLETE_2FA_TIME_LIMIT=3
## Number of server-side password hashing iterations for the password hash.
## The default for new users. If changed, it will be updated during login for existing users.
# PASSWORD_ITERATIONS=350000
## Controls whether users can set password hints. This setting applies globally to all users.
# PASSWORD_HINTS_ALLOWED=true
## Controls whether a password hint should be shown directly in the web page if
## SMTP service is not configured. Not recommended for publicly-accessible instances
## as this provides unauthenticated access to potentially sensitive data.
# SHOW_PASSWORD_HINT=false
## Domain settings
## The domain must match the address from where you access the server
## It's recommended to configure this value, otherwise certain functionality might not work,
## like attachment downloads, email links and U2F.
## For U2F to work, the server must use HTTPS, you can use Let's Encrypt for free certs
# DOMAIN=https://vw.domain.tld:8443
## Set the lifetime of admin sessions to this value (in minutes).
# ADMIN_SESSION_LIFETIME=20
## Allowed iframe ancestors (Know the risks!)
## https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/frame-ancestors
@@ -351,13 +402,16 @@
## Note that this applies to both the login and the 2FA, so it's recommended to allow a burst size of at least 2.
# LOGIN_RATELIMIT_MAX_BURST=10
## Number of seconds, on average, between admin login requests from the same IP address before rate limiting kicks in.
# ADMIN_RATELIMIT_SECONDS=300
## Allow a burst of requests of up to this size, while maintaining the average indicated by `ADMIN_RATELIMIT_SECONDS`.
# ADMIN_RATELIMIT_MAX_BURST=3
## BETA FEATURE: Groups
## Controls whether group support is enabled for organizations
## This setting applies to organizations.
## Disabled by default because this is a beta feature, it contains known issues!
## KNOW WHAT YOU ARE DOING!
# ORG_GROUPS_ENABLED=false
## Set the lifetime of admin sessions to this value (in minutes).
# ADMIN_SESSION_LIFETIME=20
########################
### MFA/2FA settings ###
########################
## Yubico (Yubikey) Settings
## Set your Client ID and Secret Key for Yubikey OTP
@@ -378,6 +432,30 @@
## After that, you should be able to follow the rest of the guide linked above,
## ignoring the fields that ask for the values that you already configured beforehand.
## Email 2FA settings
## Email token size
## Number of digits in an email 2FA token (min: 6, max: 255).
## Note that the Bitwarden clients are hardcoded to mention 6 digit codes regardless of this setting!
# EMAIL_TOKEN_SIZE=6
##
## Token expiration time
## Maximum time in seconds a token is valid. The time the user has to open email client and copy token.
# EMAIL_EXPIRATION_TIME=600
##
## Maximum attempts before an email token is reset and a new email will need to be sent.
# EMAIL_ATTEMPTS_LIMIT=3
##
## Setup email 2FA regardless of any organization policy
# EMAIL_2FA_ENFORCE_ON_VERIFIED_INVITE=false
## Automatically setup email 2FA as fallback provider when needed
# EMAIL_2FA_AUTO_FALLBACK=false
## Other MFA/2FA settings
## Disable 2FA remember
## Enabling this would force the users to use a second factor to login every time.
## Note that the checkbox would still be present, but ignored.
# DISABLE_2FA_REMEMBER=false
##
## Authenticator Settings
## Disable authenticator time drifted codes to be valid.
## TOTP codes of the previous and next 30 seconds will be invalid
@@ -390,12 +468,9 @@
## In any case, if a code has been used it can not be used again, also codes which predates it will be invalid.
# AUTHENTICATOR_DISABLE_TIME_DRIFT=false
## Rocket specific settings
## See https://rocket.rs/v0.4/guide/configuration/ for more details.
# ROCKET_ADDRESS=0.0.0.0
# ROCKET_PORT=80 # Defaults to 80 in the Docker images, or 8000 otherwise.
# ROCKET_WORKERS=10
# ROCKET_TLS={certs="/path/to/certs.pem",key="/path/to/key.pem"}
###########################
### SMTP Email settings ###
###########################
## Mail specific settings, set SMTP_FROM and either SMTP_HOST or USE_SENDMAIL to enable the mail service.
## To make sure the email links are pointing to the correct host, set the DOMAIN variable.
@@ -403,12 +478,19 @@
# SMTP_HOST=smtp.domain.tld
# SMTP_FROM=vaultwarden@domain.tld
# SMTP_FROM_NAME=Vaultwarden
# SMTP_SECURITY=starttls # ("starttls", "force_tls", "off") Enable a secure connection. Default is "starttls" (Explicit - ports 587 or 25), "force_tls" (Implicit - port 465) or "off", no encryption (port 25)
# SMTP_PORT=587 # Ports 587 (submission) and 25 (smtp) are standard without encryption and with encryption via STARTTLS (Explicit TLS). Port 465 (submissions) is used for encrypted submission (Implicit TLS).
# SMTP_USERNAME=username
# SMTP_PASSWORD=password
# SMTP_TIMEOUT=15
## Choose the type of secure connection for SMTP. The default is "starttls".
## The available options are:
## - "starttls": The default port is 587.
## - "force_tls": The default port is 465.
## - "off": The default port is 25.
## Ports 587 (submission) and 25 (smtp) are standard without encryption and with encryption via STARTTLS (Explicit TLS). Port 465 (submissions) is used for encrypted submission (Implicit TLS).
# SMTP_SECURITY=starttls
# SMTP_PORT=587
# Whether to send mail via the `sendmail` command
# USE_SENDMAIL=false
# Which sendmail command to use. The one found in the $PATH is used if not specified.
@@ -417,7 +499,7 @@
## Defaults for SSL is "Plain" and "Login" and nothing for Non-SSL connections.
## Possible values: ["Plain", "Login", "Xoauth2"].
## Multiple options need to be separated by a comma ','.
# SMTP_AUTH_MECHANISM="Plain"
# SMTP_AUTH_MECHANISM=
## Server name sent during the SMTP HELO
## By default this value is the machine's hostname,
@@ -425,30 +507,34 @@
# HELO_NAME=
## Embed images as email attachments
# SMTP_EMBED_IMAGES=false
# SMTP_EMBED_IMAGES=true
## SMTP debugging
## When set to true this will output very detailed SMTP messages.
## WARNING: This could contain sensitive information like passwords and usernames! Only enable this during troubleshooting!
# SMTP_DEBUG=false
## Accept Invalid Hostnames
## DANGEROUS: This option introduces significant vulnerabilities to man-in-the-middle attacks!
## Only use this as a last resort if you are not able to use a valid certificate.
# SMTP_ACCEPT_INVALID_HOSTNAMES=false
## Accept Invalid Certificates
## DANGEROUS: This option introduces significant vulnerabilities to man-in-the-middle attacks!
## Only use this as a last resort if you are not able to use a valid certificate.
## If the Certificate is valid but the hostname doesn't match, please use SMTP_ACCEPT_INVALID_HOSTNAMES instead.
# SMTP_ACCEPT_INVALID_CERTS=false
## Require new device emails. When a user logs in an email is required to be sent.
## If sending the email fails the login attempt will fail!!
# REQUIRE_DEVICE_EMAIL=false
## Accept Invalid Hostnames
## DANGEROUS: This option introduces significant vulnerabilities to man-in-the-middle attacks!
## Only use this as a last resort if you are not able to use a valid certificate.
# SMTP_ACCEPT_INVALID_HOSTNAMES=false
#######################
### Rocket settings ###
#######################
## Rocket specific settings
## See https://rocket.rs/v0.5/guide/configuration/ for more details.
# ROCKET_ADDRESS=0.0.0.0
## The default port is 8000, unless running in a Docker container, in which case it is 80.
# ROCKET_PORT=8000
# ROCKET_TLS={certs="/path/to/certs.pem",key="/path/to/key.pem"}
## HIBP Api Key
## HaveIBeenPwned API Key, request it here: https://haveibeenpwned.com/API/Key
# HIBP_API_KEY=
# vim: syntax=ini

.github/CODEOWNERS (vendored, normal file; 3 lines added)

@@ -0,0 +1,3 @@
/.github @dani-garcia @BlackDex
/.github/CODEOWNERS @dani-garcia @BlackDex
/.github/workflows/** @dani-garcia @BlackDex

.github/workflows/build.yml

@@ -46,7 +46,7 @@ jobs:
steps:
# Checkout the repo
- name: "Checkout"
uses: actions/checkout@8ade135a41bc03ea155e62e844d188df1ea18608 # v4.1.0
uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b #v4.1.4
# End Checkout the repo
@@ -74,7 +74,7 @@ jobs:
# Only install the clippy and rustfmt components on the default rust-toolchain
- name: "Install rust-toolchain version"
uses: dtolnay/rust-toolchain@439cf607258077187679211f12aa6f19af4a0af7 # master @ 2023-09-19 - 05:31 PM GMT+2
uses: dtolnay/rust-toolchain@bb45937a053e097f8591208d8e74c90db1873d07 # master @ Apr 14, 2024, 9:02 PM GMT+2
if: ${{ matrix.channel == 'rust-toolchain' }}
with:
toolchain: "${{steps.toolchain.outputs.RUST_TOOLCHAIN}}"
@@ -84,7 +84,7 @@ jobs:
# Install any other channel to be used, for which we do not execute clippy and rustfmt
- name: "Install MSRV version"
uses: dtolnay/rust-toolchain@439cf607258077187679211f12aa6f19af4a0af7 # master @ 2023-09-19 - 05:31 PM GMT+2
uses: dtolnay/rust-toolchain@bb45937a053e097f8591208d8e74c90db1873d07 # master @ Apr 14, 2024, 9:02 PM GMT+2
if: ${{ matrix.channel != 'rust-toolchain' }}
with:
toolchain: "${{steps.toolchain.outputs.RUST_TOOLCHAIN}}"
@ -106,7 +106,7 @@ jobs:
# End Show environment
# Enable Rust Caching
- uses: Swatinem/rust-cache@a95ba195448af2da9b00fb742d14ffaaf3c21f43 # v2.7.0
- uses: Swatinem/rust-cache@23bce251a8cd2ffc3c1075eaa2367cf899916d84 # v2.7.3
with:
# Use a custom prefix-key to force a fresh start. This is sometimes needed with bigger changes.
# Like changing the build host from Ubuntu 20.04 to 22.04 for example.

.github/workflows/hadolint.yml

@ -13,7 +13,7 @@ jobs:
steps:
# Checkout the repo
- name: Checkout
uses: actions/checkout@8ade135a41bc03ea155e62e844d188df1ea18608 # v4.1.0
uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b # v4.1.4
# End Checkout the repo
# Download hadolint - https://github.com/hadolint/hadolint/releases

.github/workflows/release.yml

@ -2,21 +2,10 @@ name: Release
on:
push:
paths:
- ".github/workflows/release.yml"
- "src/**"
- "migrations/**"
- "docker/**"
- "Cargo.*"
- "build.rs"
- "diesel.toml"
- "rust-toolchain.toml"
branches: # Only on paths above
branches:
- main
- release-build-revision
tags: # Always, regardless of paths above
tags:
- '*'
jobs:
@ -31,7 +20,7 @@ jobs:
steps:
- name: Skip Duplicates Actions
id: skip_check
uses: fkirc/skip-duplicate-actions@12aca0a884f6137d619d6a8a09fcc3406ced5281 # v5.3.0
uses: fkirc/skip-duplicate-actions@f75f66ce1886f00957d99748a42c724f4330bdcf # v5.3.1
with:
cancel_others: 'true'
# Only run this when not creating a tag
@ -42,12 +31,12 @@ jobs:
timeout-minutes: 120
needs: skip_check
if: ${{ needs.skip_check.outputs.should_skip != 'true' && github.repository == 'dani-garcia/vaultwarden' }}
# TODO: Start a local docker registry to be used to extract the final Alpine static build images
# services:
# registry:
# image: registry:2
# ports:
# - 5000:5000
# Start a local docker registry to extract the final Alpine static build binaries
services:
registry:
image: registry:2
ports:
- 5000:5000
env:
SOURCE_COMMIT: ${{ github.sha }}
SOURCE_REPOSITORY_URL: "https://github.com/${{ github.repository }}"
@ -69,7 +58,7 @@ jobs:
steps:
# Checkout the repo
- name: Checkout
uses: actions/checkout@8ade135a41bc03ea155e62e844d188df1ea18608 # v4.1.0
uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b # v4.1.4
with:
fetch-depth: 0
@ -80,11 +69,11 @@ jobs:
# Start Docker Buildx
- name: Setup Docker Buildx
uses: docker/setup-buildx-action@f95db51fddba0c2d1ec667646a06c2ce06100226 # v3.0.0
uses: docker/setup-buildx-action@d70bba72b1f3fd22344832f00baa16ece964efeb # v3.3.0
# https://github.com/moby/buildkit/issues/3969
# Also set max parallelism to 2, as the default of 4 breaks GitHub Actions
with:
config-inline: |
buildkitd-config-inline: |
[worker.oci]
max-parallelism = 2
driver-opts: |
@ -113,7 +102,7 @@ jobs:
# Login to Docker Hub
- name: Login to Docker Hub
uses: docker/login-action@343f7c4344506bcbf9b4de18042ae17996df046d # v3.0.0
uses: docker/login-action@e92390c5fb421da1463c202d546fed0ec5c39f20 # v3.1.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
@ -127,7 +116,7 @@ jobs:
# Login to GitHub Container Registry
- name: Login to GitHub Container Registry
uses: docker/login-action@343f7c4344506bcbf9b4de18042ae17996df046d # v3.0.0
uses: docker/login-action@e92390c5fb421da1463c202d546fed0ec5c39f20 # v3.1.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
@ -140,9 +129,15 @@ jobs:
run: |
echo "CONTAINER_REGISTRIES=${CONTAINER_REGISTRIES:+${CONTAINER_REGISTRIES},}${{ vars.GHCR_REPO }}" | tee -a "${GITHUB_ENV}"
- name: Add registry for ghcr.io
if: ${{ env.HAVE_GHCR_LOGIN == 'true' }}
shell: bash
run: |
echo "CONTAINER_REGISTRIES=${CONTAINER_REGISTRIES:+${CONTAINER_REGISTRIES},}${{ vars.GHCR_REPO }}" | tee -a "${GITHUB_ENV}"
# Login to Quay.io
- name: Login to Quay.io
uses: docker/login-action@343f7c4344506bcbf9b4de18042ae17996df046d # v3.0.0
uses: docker/login-action@e92390c5fb421da1463c202d546fed0ec5c39f20 # v3.1.0
with:
registry: quay.io
username: ${{ secrets.QUAY_USERNAME }}
@ -155,8 +150,28 @@ jobs:
run: |
echo "CONTAINER_REGISTRIES=${CONTAINER_REGISTRIES:+${CONTAINER_REGISTRIES},}${{ vars.QUAY_REPO }}" | tee -a "${GITHUB_ENV}"
- name: Configure build cache from/to
shell: bash
run: |
#
# Check if there is a GitHub Container Registry Login and use it for caching
if [[ -n "${HAVE_GHCR_LOGIN}" ]]; then
echo "BAKE_CACHE_FROM=type=registry,ref=${{ vars.GHCR_REPO }}-buildcache:${{ matrix.base_image }}" | tee -a "${GITHUB_ENV}"
echo "BAKE_CACHE_TO=type=registry,ref=${{ vars.GHCR_REPO }}-buildcache:${{ matrix.base_image }},mode=max" | tee -a "${GITHUB_ENV}"
else
echo "BAKE_CACHE_FROM="
echo "BAKE_CACHE_TO="
fi
#
- name: Add localhost registry
if: ${{ matrix.base_image == 'alpine' }}
shell: bash
run: |
echo "CONTAINER_REGISTRIES=${CONTAINER_REGISTRIES:+${CONTAINER_REGISTRIES},}localhost:5000/vaultwarden/server" | tee -a "${GITHUB_ENV}"
- name: Bake ${{ matrix.base_image }} containers
uses: docker/bake-action@511fde2517761e303af548ec9e0ea74a8a100112 # v4.0.0
uses: docker/bake-action@73b0efa7a0e8ac276e0a8d5c580698a942ff10b5 # v4.4.0
env:
BASE_TAGS: "${{ env.BASE_TAGS }}"
SOURCE_COMMIT: "${{ env.SOURCE_COMMIT }}"
@ -168,3 +183,76 @@ jobs:
push: true
files: docker/docker-bake.hcl
targets: "${{ matrix.base_image }}-multi"
set: |
*.cache-from=${{ env.BAKE_CACHE_FROM }}
*.cache-to=${{ env.BAKE_CACHE_TO }}
# Extract the Alpine binaries from the containers
- name: Extract binaries
if: ${{ matrix.base_image == 'alpine' }}
shell: bash
run: |
# Check which main tag we are going to build determined by github.ref_type
if [[ "${{ github.ref_type }}" == "tag" ]]; then
EXTRACT_TAG="latest"
elif [[ "${{ github.ref_type }}" == "branch" ]]; then
EXTRACT_TAG="testing"
fi
# After each extraction the image is removed.
# This is needed because using different platforms doesn't trigger a new pull/download
# Extract amd64 binary
docker create --name amd64 --platform=linux/amd64 "vaultwarden/server:${EXTRACT_TAG}-alpine"
docker cp amd64:/vaultwarden vaultwarden-amd64
docker rm --force amd64
docker rmi --force "vaultwarden/server:${EXTRACT_TAG}-alpine"
# Extract arm64 binary
docker create --name arm64 --platform=linux/arm64 "vaultwarden/server:${EXTRACT_TAG}-alpine"
docker cp arm64:/vaultwarden vaultwarden-arm64
docker rm --force arm64
docker rmi --force "vaultwarden/server:${EXTRACT_TAG}-alpine"
# Extract armv7 binary
docker create --name armv7 --platform=linux/arm/v7 "vaultwarden/server:${EXTRACT_TAG}-alpine"
docker cp armv7:/vaultwarden vaultwarden-armv7
docker rm --force armv7
docker rmi --force "vaultwarden/server:${EXTRACT_TAG}-alpine"
# Extract armv6 binary
docker create --name armv6 --platform=linux/arm/v6 "vaultwarden/server:${EXTRACT_TAG}-alpine"
docker cp armv6:/vaultwarden vaultwarden-armv6
docker rm --force armv6
docker rmi --force "vaultwarden/server:${EXTRACT_TAG}-alpine"
# Upload artifacts to Github Actions
- name: "Upload amd64 artifact"
uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
if: ${{ matrix.base_image == 'alpine' }}
with:
name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-amd64
path: vaultwarden-amd64
- name: "Upload arm64 artifact"
uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
if: ${{ matrix.base_image == 'alpine' }}
with:
name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-arm64
path: vaultwarden-arm64
- name: "Upload armv7 artifact"
uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
if: ${{ matrix.base_image == 'alpine' }}
with:
name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-armv7
path: vaultwarden-armv7
- name: "Upload armv6 artifact"
uses: actions/upload-artifact@65462800fd760344b1a7b4382951275a0abb4808 # v4.3.3
if: ${{ matrix.base_image == 'alpine' }}
with:
name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-armv6
path: vaultwarden-armv6
# End Upload artifacts to Github Actions
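Once a release run has finished, the uploaded binaries can be fetched again with the GitHub CLI; a sketch, where the run id and exact artifact name are placeholders that depend on the actual run:

# List recent runs to find the id, then download a single artifact from it
gh run list --workflow release.yml --limit 5
gh run download <run-id> --name vaultwarden-<version>-linux-amd64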

.github/workflows/releasecache-cleanup.yml (new file)

@ -0,0 +1,26 @@
on:
workflow_dispatch:
inputs:
manual_trigger:
description: "Manual trigger buildcache cleanup"
required: false
default: ""
schedule:
- cron: '0 1 * * FRI'
name: Cleanup
jobs:
releasecache-cleanup:
name: Releasecache Cleanup
runs-on: ubuntu-22.04
continue-on-error: true
timeout-minutes: 30
steps:
- name: Delete vaultwarden-buildcache containers
uses: actions/delete-package-versions@e5bc658cc4c965c472efe991f8beea3981499c55 # v5.0.0
with:
package-name: 'vaultwarden-buildcache'
package-type: 'container'
min-versions-to-keep: 0
delete-only-untagged-versions: 'false'
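Besides the Friday cron schedule, the workflow_dispatch trigger above makes the cleanup runnable on demand; a sketch using the GitHub CLI (the input value is free-form):

gh workflow run releasecache-cleanup.yml -f manual_trigger="manual cleanup"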

.github/workflows/trivy.yml

@ -4,7 +4,6 @@ on:
push:
branches:
- main
- release-build-revision
tags:
- '*'
pull_request:
@ -26,10 +25,10 @@ jobs:
actions: read
steps:
- name: Checkout code
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 #v4.1.1
uses: actions/checkout@0ad4b8fadaa221de15dcec353f45205ec38ea70b #v4.1.4
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@f78e9ecf42a1271402d4f484518b9313235990e1 # v0.13.1
uses: aquasecurity/trivy-action@d710430a6722f083d3b36b8339ff66b32f22ee55 # v0.19.0
with:
scan-type: repo
ignore-unfixed: true
@ -38,6 +37,6 @@ jobs:
severity: CRITICAL,HIGH
- name: Upload Trivy scan results to GitHub Security tab
uses: github/codeql-action/upload-sarif@bad341350a2f5616f9e048e51360cedc49181ce8 # v2.22.4
uses: github/codeql-action/upload-sarif@2bbafcdd7fbf96243689e764c2f15d9735164f33 # v3.25.3
with:
sarif_file: 'trivy-results.sarif'

Cargo.lock (generated): file diff suppressed because it is too large.

Cargo.toml

@ -3,7 +3,7 @@ name = "vaultwarden"
version = "1.0.0"
authors = ["Daniel García <dani-garcia@users.noreply.github.com>"]
edition = "2021"
rust-version = "1.71.1"
rust-version = "1.76.0"
resolver = "2"
repository = "https://github.com/dani-garcia/vaultwarden"
@ -36,11 +36,11 @@ unstable = []
[target."cfg(not(windows))".dependencies]
# Logging
syslog = "6.1.0"
syslog = "6.1.1"
[dependencies]
# Logging
log = "0.4.20"
log = "0.4.21"
fern = { version = "0.6.2", features = ["syslog-6", "reopen-1"] }
tracing = { version = "0.1.40", features = ["log"] } # Needed to have lettre and webauthn-rs trace logging to work
@ -48,63 +48,62 @@ tracing = { version = "0.1.40", features = ["log"] } # Needed to have lettre and
dotenvy = { version = "0.15.7", default-features = false }
# Lazy initialization
once_cell = "1.18.0"
once_cell = "1.19.0"
# Numerical libraries
num-traits = "0.2.17"
num-derive = "0.4.1"
num-traits = "0.2.19"
num-derive = "0.4.2"
bigdecimal = "0.4.3"
# Web framework
rocket = { version = "0.5.0-rc.3", features = ["tls", "json"], default-features = false }
# rocket_ws = { version ="0.1.0-rc.3" }
rocket_ws = { git = 'https://github.com/SergioBenitez/Rocket', rev = "ce441b5f46fdf5cd99cb32b8b8638835e4c2a5fa" } # v0.5 branch
rocket = { version = "0.5.0", features = ["tls", "json"], default-features = false }
rocket_ws = { version ="0.1.0" }
# WebSockets libraries
tokio-tungstenite = "0.19.0"
rmpv = "1.0.1" # MessagePack library
rmpv = "1.3.0" # MessagePack library
# Concurrent HashMap used for WebSocket messaging and favicons
dashmap = "5.5.3"
# Async futures
futures = "0.3.28"
tokio = { version = "1.33.0", features = ["rt-multi-thread", "fs", "io-util", "parking_lot", "time", "signal"] }
futures = "0.3.30"
tokio = { version = "1.37.0", features = ["rt-multi-thread", "fs", "io-util", "parking_lot", "time", "signal", "net"] }
# A generic serialization/deserialization framework
serde = { version = "1.0.189", features = ["derive"] }
serde_json = "1.0.107"
serde = { version = "1.0.202", features = ["derive"] }
serde_json = "1.0.117"
# A safe, extensible ORM and Query builder
diesel = { version = "2.1.3", features = ["chrono", "r2d2"] }
diesel = { version = "2.1.6", features = ["chrono", "r2d2", "numeric"] }
diesel_migrations = "2.1.0"
diesel_logger = { version = "0.3.0", optional = true }
# Bundled/Static SQLite
libsqlite3-sys = { version = "0.26.0", features = ["bundled"], optional = true }
libsqlite3-sys = { version = "0.28.0", features = ["bundled"], optional = true }
# Crypto-related libraries
rand = { version = "0.8.5", features = ["small_rng"] }
ring = "0.17.5"
ring = "0.17.8"
# UUID generation
uuid = { version = "1.5.0", features = ["v4"] }
uuid = { version = "1.8.0", features = ["v4"] }
# Date and time libraries
chrono = { version = "0.4.31", features = ["clock", "serde"], default-features = false }
chrono-tz = "0.8.3"
time = "0.3.30"
chrono = { version = "0.4.38", features = ["clock", "serde"], default-features = false }
chrono-tz = "0.9.0"
time = "0.3.36"
# Job scheduler
job_scheduler_ng = "2.0.4"
job_scheduler_ng = "2.0.5"
# Data encoding library Hex/Base32/Base64
data-encoding = "2.4.0"
data-encoding = "2.6.0"
# JWT library
jsonwebtoken = "9.0.0"
jsonwebtoken = "9.3.0"
# TOTP library
totp-lite = "2.0.0"
totp-lite = "2.0.1"
# Yubico Library
yubico = { version = "0.11.0", features = ["online-tokio"], default-features = false }
@ -113,70 +112,65 @@ yubico = { version = "0.11.0", features = ["online-tokio"], default-features = f
webauthn-rs = "0.3.2"
# Handling of URL's for WebAuthn and favicons
url = "2.4.1"
url = "2.5.0"
# Email libraries
lettre = { version = "0.11.0", features = ["smtp-transport", "sendmail-transport", "builder", "serde", "tokio1-native-tls", "hostname", "tracing", "tokio1"], default-features = false }
percent-encoding = "2.3.0" # URL encoding library used for URL's in the emails
lettre = { version = "0.11.7", features = ["smtp-transport", "sendmail-transport", "builder", "serde", "tokio1-native-tls", "hostname", "tracing", "tokio1"], default-features = false }
percent-encoding = "2.3.1" # URL encoding library used for URL's in the emails
email_address = "0.2.4"
# HTML Template library
handlebars = { version = "4.4.0", features = ["dir_source"] }
handlebars = { version = "5.1.2", features = ["dir_source"] }
# HTTP client (Used for favicons, version check, DUO and HIBP API)
reqwest = { version = "0.11.22", features = ["stream", "json", "deflate", "gzip", "brotli", "socks", "cookies", "trust-dns", "native-tls-alpn"] }
reqwest = { version = "0.12.4", features = ["native-tls-alpn", "stream", "json", "gzip", "brotli", "socks", "cookies"] }
hickory-resolver = "0.24.1"
# Favicon extraction libraries
html5gum = "0.5.7"
regex = { version = "1.10.2", features = ["std", "perf", "unicode-perl"], default-features = false }
data-url = "0.3.0"
bytes = "1.5.0"
regex = { version = "1.10.4", features = ["std", "perf", "unicode-perl"], default-features = false }
data-url = "0.3.1"
bytes = "1.6.0"
# Cache function results (Used for version check and favicon fetching)
cached = { version = "0.46.0", features = ["async"] }
cached = { version = "0.51.3", features = ["async"] }
# Used for custom short lived cookie jar during favicon extraction
cookie = "0.16.2"
cookie_store = "0.19.1"
cookie = "0.18.1"
cookie_store = "0.21.0"
# Used by U2F, JWT and PostgreSQL
openssl = "0.10.57"
# Set openssl-sys fixed to v0.9.92 to prevent building issues with musl, arm and 32bit pointer width
# It will force add a dynamically linked library which prevents the build from being static
openssl-sys = "=0.9.92"
openssl = "0.10.64"
# CLI argument parsing
pico-args = "0.5.0"
# Macro ident concatenation
paste = "1.0.14"
governor = "0.6.0"
paste = "1.0.15"
governor = "0.6.3"
# Check client versions for specific features.
semver = "1.0.20"
semver = "1.0.23"
# Allow overriding the default memory allocator
# Mainly used for the musl builds, since the default musl malloc is very slow
mimalloc = { version = "0.1.39", features = ["secure"], default-features = false, optional = true }
which = "5.0.0"
mimalloc = { version = "0.1.41", features = ["secure"], default-features = false, optional = true }
which = "6.0.1"
# Argon2 library with support for the PHC format
argon2 = "0.5.2"
argon2 = "0.5.3"
# Reading a password from the cli for generating the Argon2id ADMIN_TOKEN
rpassword = "7.2.0"
[patch.crates-io]
rocket = { git = 'https://github.com/SergioBenitez/Rocket', rev = 'ce441b5f46fdf5cd99cb32b8b8638835e4c2a5fa' } # v0.5 branch
# rocket_ws = { git = 'https://github.com/SergioBenitez/Rocket', rev = 'ce441b5f46fdf5cd99cb32b8b8638835e4c2a5fa' } # v0.5 branch
rpassword = "7.3.1"
# Strip debuginfo from the release builds
# Also enable thin LTO for some optimizations
# The symbols are there to provide better panic traces
# Also enable fat LTO and use 1 codegen unit for optimizations
[profile.release]
strip = "debuginfo"
lto = "thin"
lto = "fat"
codegen-units = 1
# A little bit of a speedup
@ -187,3 +181,69 @@ split-debuginfo = "unpacked"
# This is a huge speed improvement during testing
[profile.dev.package.argon2]
opt-level = 3
# Optimize for size
[profile.release-micro]
inherits = "release"
opt-level = "z"
strip = "symbols"
lto = "fat"
codegen-units = 1
panic = "abort"
# Profile for systems with low resources
# It will use less resources during build
[profile.release-low]
inherits = "release"
strip = "symbols"
lto = "thin"
codegen-units = 16
# Linting config
[lints.rust]
# Forbid
unsafe_code = "forbid"
non_ascii_idents = "forbid"
# Deny
future_incompatible = { level = "deny", priority = -1 }
noop_method_call = "deny"
pointer_structural_match = "deny"
rust_2018_idioms = { level = "deny", priority = -1 }
rust_2021_compatibility = { level = "deny", priority = -1 }
trivial_casts = "deny"
trivial_numeric_casts = "deny"
unused = { level = "deny", priority = -1 }
unused_import_braces = "deny"
unused_lifetimes = "deny"
deprecated_in_future = "deny"
[lints.clippy]
# Allow
# We need this since Rust v1.76+, as it has some bugs
# https://github.com/rust-lang/rust-clippy/issues/12016
blocks_in_conditions = "allow"
# Deny
cast_lossless = "deny"
clone_on_ref_ptr = "deny"
equatable_if_let = "deny"
float_cmp_const = "deny"
inefficient_to_string = "deny"
iter_on_empty_collections = "deny"
iter_on_single_items = "deny"
linkedlist = "deny"
macro_use_imports = "deny"
manual_assert = "deny"
manual_instant_elapsed = "deny"
manual_string_new = "deny"
match_wildcard_for_single_variants = "deny"
mem_forget = "deny"
needless_lifetimes = "deny"
string_add_assign = "deny"
string_to_string = "deny"
unnecessary_join = "deny"
unnecessary_self_imports = "deny"
unused_async = "deny"
verbose_file_reads = "deny"
zero_sized_map_values = "deny"
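The two new profiles defined above are opt-in and selected with Cargo's --profile flag; a short sketch (the sqlite feature is only an example, and the binary lands in target/<profile-name>/):

# Smallest possible binary; panics abort instead of unwinding
cargo build --features sqlite --profile release-micro

# Lighter settings for resource-constrained build hosts
cargo build --features sqlite --profile release-low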

View file

@ -92,4 +92,11 @@ Thanks for your contribution to the project!
</a>
</td>
</tr>
<tr>
<td align="center">
<a href="https://github.com/IQ333777" style="width: 75px">
<sub><b>IQ333777</b></sub>
</a>
</td>
</tr>
</table>

build.rs

@ -17,6 +17,13 @@ fn main() {
"You need to enable one DB backend. To build with previous defaults do: cargo build --features sqlite"
);
// Rerun when these paths are changed.
// Someone could have checked out a tag or specific commit, but no other files changed.
println!("cargo:rerun-if-changed=.git");
println!("cargo:rerun-if-changed=.git/HEAD");
println!("cargo:rerun-if-changed=.git/index");
println!("cargo:rerun-if-changed=.git/refs/tags");
#[cfg(all(not(debug_assertions), feature = "query_logger"))]
compile_error!("Query Logging is only allowed during development, it is not intended for production usage!");
@ -42,11 +49,11 @@ fn run(args: &[&str]) -> Result<String, std::io::Error> {
/// This method reads info from Git, namely tags, branch, and revision
/// To access these values, use:
/// - env!("GIT_EXACT_TAG")
/// - env!("GIT_LAST_TAG")
/// - env!("GIT_BRANCH")
/// - env!("GIT_REV")
/// - env!("VW_VERSION")
/// - `env!("GIT_EXACT_TAG")`
/// - `env!("GIT_LAST_TAG")`
/// - `env!("GIT_BRANCH")`
/// - `env!("GIT_REV")`
/// - `env!("VW_VERSION")`
fn version_from_git_info() -> Result<String, std::io::Error> {
// The exact tag for the current commit, can be empty when
// the current commit doesn't have an associated tag

docker/DockerSettings.yaml

@ -1,12 +1,12 @@
---
vault_version: "v2023.10.0"
vault_image_digest: "sha256:419e4976921f98f1124f296ed02e68bf7f8ff29b3f1fba59e7e715228a065935"
# Cross Compile Docker Helper Scripts v1.3.0
vault_version: "v2024.3.1"
vault_image_digest: "sha256:784838b15c775c81b29e8979aaac36dc5ef44ea18ff0adb7fc56c7c62886319b"
# Cross Compile Docker Helper Scripts v1.4.0
# We use the linux/amd64 platform shell scripts since there is no difference between the different platform scripts
xx_image_digest: "sha256:c9609ace652bbe51dd4ce90e0af9d48a4590f1214246da5bc70e46f6dd586edc"
rust_version: 1.73.0 # Rust version to be used
xx_image_digest: "sha256:0cd3f05c72d6c9b038eb135f91376ee1169ef3a330d34e418e65e2a5c2e9c0d4"
rust_version: 1.78.0 # Rust version to be used
debian_version: bookworm # Debian release name to be used
alpine_version: 3.18 # Alpine version to be used
alpine_version: 3.19 # Alpine version to be used
# For which platforms/architectures will we try to build images
platforms: ["linux/amd64", "linux/arm64", "linux/arm/v7", "linux/arm/v6"]
# Determine the build images per OS/Arch

docker/Dockerfile.alpine

@ -18,23 +18,23 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull docker.io/vaultwarden/web-vault:v2023.10.0
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2023.10.0
# [docker.io/vaultwarden/web-vault@sha256:419e4976921f98f1124f296ed02e68bf7f8ff29b3f1fba59e7e715228a065935]
# $ docker pull docker.io/vaultwarden/web-vault:v2024.3.1
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2024.3.1
# [docker.io/vaultwarden/web-vault@sha256:784838b15c775c81b29e8979aaac36dc5ef44ea18ff0adb7fc56c7c62886319b]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:419e4976921f98f1124f296ed02e68bf7f8ff29b3f1fba59e7e715228a065935
# [docker.io/vaultwarden/web-vault:v2023.10.0]
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:784838b15c775c81b29e8979aaac36dc5ef44ea18ff0adb7fc56c7c62886319b
# [docker.io/vaultwarden/web-vault:v2024.3.1]
#
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:419e4976921f98f1124f296ed02e68bf7f8ff29b3f1fba59e7e715228a065935 as vault
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:784838b15c775c81b29e8979aaac36dc5ef44ea18ff0adb7fc56c7c62886319b as vault
########################## ALPINE BUILD IMAGES ##########################
## NOTE: The Alpine Base Images do not support platforms other than linux/amd64
## And for Alpine we define all build images here, they will only be loaded when actually used
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:x86_64-musl-stable-1.73.0 as build_amd64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:aarch64-musl-stable-1.73.0 as build_arm64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:armv7-musleabihf-stable-1.73.0 as build_armv7
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:arm-musleabi-stable-1.73.0 as build_armv6
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:x86_64-musl-stable-1.78.0 as build_amd64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:aarch64-musl-stable-1.78.0 as build_arm64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:armv7-musleabihf-stable-1.78.0 as build_armv7
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:arm-musleabi-stable-1.78.0 as build_armv6
########################## BUILD IMAGE ##########################
# hadolint ignore=DL3006
@ -58,33 +58,31 @@ ENV DEBIAN_FRONTEND=noninteractive \
# Create CARGO_HOME folder and don't download rust docs
RUN mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
RUN mkdir -pv "${CARGO_HOME}" && \
rustup set profile minimal
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
# Shared variables across Debian and Alpine
# Environment variables for Cargo on Alpine based builds
RUN echo "export CARGO_TARGET=${RUST_MUSL_CROSS_TARGET}" >> /env-cargo && \
# To be able to build the armv6 image with mimalloc we need to tell the linker to also look for libatomic
if [[ "${TARGETARCH}${TARGETVARIANT}" == "armv6" ]] ; then echo "export RUSTFLAGS='-Clink-arg=-latomic'" >> /env-cargo ; fi && \
# Output the current contents of the file
cat /env-cargo
# Enable MiMalloc to improve performance on Alpine builds
ARG DB=sqlite,mysql,postgresql,enable_mimalloc
RUN source /env-cargo && \
rustup target add "${CARGO_TARGET}"
ARG CARGO_PROFILE=release
ARG VW_VERSION
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain.toml ./rust-toolchain.toml
COPY ./build.rs ./build.rs
COPY ./Cargo.* ./rust-toolchain.toml ./build.rs ./
ARG CARGO_PROFILE=release
# Configure the DB ARG as late as possible to not invalidate the cached layers above
# Enable MiMalloc to improve performance on Alpine builds
ARG DB=sqlite,mysql,postgresql,enable_mimalloc
# Builds your dependencies and removes the
# dummy project, except the target folder
@ -97,10 +95,13 @@ RUN source /env-cargo && \
# To avoid copying unneeded files, use .dockerignore
COPY . .
ARG VW_VERSION
# Builds again, this time it will be the actual source files being built
RUN source /env-cargo && \
# Make sure that we actually build the project by updating the src/main.rs timestamp
touch src/main.rs && \
# Also do this for build.rs to ensure the version is rechecked
touch build.rs src/main.rs && \
# Create a symlink to the binary target folder to easily copy the binary in the final stage
cargo build --features ${DB} --profile "${CARGO_PROFILE}" --target="${CARGO_TARGET}" && \
if [[ "${CARGO_PROFILE}" == "dev" ]] ; then \
@ -126,7 +127,7 @@ RUN source /env-cargo && \
# To uninstall: docker run --privileged --rm tonistiigi/binfmt --uninstall 'qemu-*'
#
# We need to add `--platform` here, because of a podman bug: https://github.com/containers/buildah/issues/4742
FROM --platform=$TARGETPLATFORM docker.io/library/alpine:3.18
FROM --platform=$TARGETPLATFORM docker.io/library/alpine:3.19
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
@ -149,8 +150,7 @@ EXPOSE 3012
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
COPY docker/healthcheck.sh docker/start.sh /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/final/vaultwarden .

docker/Dockerfile.debian

@ -18,24 +18,24 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull docker.io/vaultwarden/web-vault:v2023.10.0
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2023.10.0
# [docker.io/vaultwarden/web-vault@sha256:419e4976921f98f1124f296ed02e68bf7f8ff29b3f1fba59e7e715228a065935]
# $ docker pull docker.io/vaultwarden/web-vault:v2024.3.1
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2024.3.1
# [docker.io/vaultwarden/web-vault@sha256:784838b15c775c81b29e8979aaac36dc5ef44ea18ff0adb7fc56c7c62886319b]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:419e4976921f98f1124f296ed02e68bf7f8ff29b3f1fba59e7e715228a065935
# [docker.io/vaultwarden/web-vault:v2023.10.0]
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:784838b15c775c81b29e8979aaac36dc5ef44ea18ff0adb7fc56c7c62886319b
# [docker.io/vaultwarden/web-vault:v2024.3.1]
#
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:419e4976921f98f1124f296ed02e68bf7f8ff29b3f1fba59e7e715228a065935 as vault
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:784838b15c775c81b29e8979aaac36dc5ef44ea18ff0adb7fc56c7c62886319b as vault
########################## Cross Compile Docker Helper Scripts ##########################
## We use the linux/amd64 no matter which Build Platform, since these are all bash scripts
## And these bash scripts do not have any significant difference if at all
FROM --platform=linux/amd64 docker.io/tonistiigi/xx@sha256:c9609ace652bbe51dd4ce90e0af9d48a4590f1214246da5bc70e46f6dd586edc AS xx
FROM --platform=linux/amd64 docker.io/tonistiigi/xx@sha256:0cd3f05c72d6c9b038eb135f91376ee1169ef3a330d34e418e65e2a5c2e9c0d4 AS xx
########################## BUILD IMAGE ##########################
# hadolint ignore=DL3006
FROM --platform=$BUILDPLATFORM docker.io/library/rust:1.73.0-slim-bookworm as build
FROM --platform=$BUILDPLATFORM docker.io/library/rust:1.78.0-slim-bookworm as build
COPY --from=xx / /
ARG TARGETARCH
ARG TARGETVARIANT
@ -64,32 +64,40 @@ RUN apt-get update && \
"libc6-$(xx-info debian-arch)-cross" \
"libc6-dev-$(xx-info debian-arch)-cross" \
"linux-libc-dev-$(xx-info debian-arch)-cross" && \
# Run xx-cargo early, since it sometimes seems to break when run at a later stage
echo "export CARGO_TARGET=$(xx-cargo --print-target-triple)" >> /env-cargo
RUN xx-apt-get install -y \
xx-apt-get install -y \
--no-install-recommends \
gcc \
libmariadb3 \
libpq-dev \
libpq5 \
libssl-dev && \
libssl-dev \
zlib1g-dev && \
# Force install arch-dependent mariadb dev packages
# Installing them the normal way breaks several other packages (again)
apt-get download "libmariadb-dev-compat:$(xx-info debian-arch)" "libmariadb-dev:$(xx-info debian-arch)" && \
dpkg --force-all -i ./libmariadb-dev*.deb
dpkg --force-all -i ./libmariadb-dev*.deb && \
# Run xx-cargo early, since it sometimes seems to break when run at a later stage
echo "export CARGO_TARGET=$(xx-cargo --print-target-triple)" >> /env-cargo
# Create CARGO_HOME folder and don't download rust docs
RUN mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
RUN mkdir -pv "${CARGO_HOME}" && \
rustup set profile minimal
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
# Environment variables for cargo across Debian and Alpine
# Environment variables for Cargo on Debian based builds
ARG ARCH_OPENSSL_LIB_DIR \
ARCH_OPENSSL_INCLUDE_DIR
RUN source /env-cargo && \
if xx-info is-cross ; then \
# Some special variables if needed to override some build paths
if [[ -n "${ARCH_OPENSSL_LIB_DIR}" && -n "${ARCH_OPENSSL_INCLUDE_DIR}" ]]; then \
echo "export $(echo "${CARGO_TARGET}" | tr '[:lower:]' '[:upper:]' | tr - _)_OPENSSL_LIB_DIR=${ARCH_OPENSSL_LIB_DIR}" >> /env-cargo && \
echo "export $(echo "${CARGO_TARGET}" | tr '[:lower:]' '[:upper:]' | tr - _)_OPENSSL_INCLUDE_DIR=${ARCH_OPENSSL_INCLUDE_DIR}" >> /env-cargo ; \
fi && \
# We can't use xx-cargo since that uses clang, which doesn't work for our libraries.
# Because of this we generate the needed environment variables here which we can load in the needed steps.
echo "export CC_$(echo "${CARGO_TARGET}" | tr '[:upper:]' '[:lower:]' | tr - _)=/usr/bin/$(xx-info)-gcc" >> /env-cargo && \
@ -102,19 +110,16 @@ RUN source /env-cargo && \
# Output the current contents of the file
cat /env-cargo
# Configure the DB ARG as late as possible to not invalidate the cached layers above
ARG DB=sqlite,mysql,postgresql
RUN source /env-cargo && \
rustup target add "${CARGO_TARGET}"
ARG CARGO_PROFILE=release
ARG VW_VERSION
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain.toml ./rust-toolchain.toml
COPY ./build.rs ./build.rs
COPY ./Cargo.* ./rust-toolchain.toml ./build.rs ./
ARG CARGO_PROFILE=release
# Configure the DB ARG as late as possible to not invalidate the cached layers above
ARG DB=sqlite,mysql,postgresql
# Builds your dependencies and removes the
# dummy project, except the target folder
@ -127,10 +132,13 @@ RUN source /env-cargo && \
# To avoid copying unneeded files, use .dockerignore
COPY . .
ARG VW_VERSION
# Builds again, this time it will be the actual source files being built
RUN source /env-cargo && \
# Make sure that we actually build the project by updating the src/main.rs timestamp
touch src/main.rs && \
# Also do this for build.rs to ensure the version is rechecked
touch build.rs src/main.rs && \
# Create a symlink to the binary target folder to easily copy the binary in the final stage
cargo build --features ${DB} --profile "${CARGO_PROFILE}" --target="${CARGO_TARGET}" && \
if [[ "${CARGO_PROFILE}" == "dev" ]] ; then \
@ -183,8 +191,7 @@ EXPOSE 3012
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
COPY docker/healthcheck.sh docker/start.sh /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/final/vaultwarden .

docker/Dockerfile.j2

@ -82,34 +82,42 @@ RUN apt-get update && \
"libc6-$(xx-info debian-arch)-cross" \
"libc6-dev-$(xx-info debian-arch)-cross" \
"linux-libc-dev-$(xx-info debian-arch)-cross" && \
# Run xx-cargo early, since it sometimes seems to break when run at a later stage
echo "export CARGO_TARGET=$(xx-cargo --print-target-triple)" >> /env-cargo
RUN xx-apt-get install -y \
xx-apt-get install -y \
--no-install-recommends \
gcc \
libmariadb3 \
libpq-dev \
libpq5 \
libssl-dev && \
libssl-dev \
zlib1g-dev && \
# Force install arch-dependent mariadb dev packages
# Installing them the normal way breaks several other packages (again)
apt-get download "libmariadb-dev-compat:$(xx-info debian-arch)" "libmariadb-dev:$(xx-info debian-arch)" && \
dpkg --force-all -i ./libmariadb-dev*.deb
dpkg --force-all -i ./libmariadb-dev*.deb && \
# Run xx-cargo early, since it sometimes seems to break when run at a later stage
echo "export CARGO_TARGET=$(xx-cargo --print-target-triple)" >> /env-cargo
{% endif %}
# Create CARGO_HOME folder and don't download rust docs
RUN mkdir -pv "${CARGO_HOME}" \
&& rustup set profile minimal
RUN mkdir -pv "${CARGO_HOME}" && \
rustup set profile minimal
# Creates a dummy project used to grab dependencies
RUN USER=root cargo new --bin /app
WORKDIR /app
{% if base == "debian" %}
# Environment variables for cargo across Debian and Alpine
# Environment variables for Cargo on Debian based builds
ARG ARCH_OPENSSL_LIB_DIR \
ARCH_OPENSSL_INCLUDE_DIR
RUN source /env-cargo && \
if xx-info is-cross ; then \
# Some special variables if needed to override some build paths
if [[ -n "${ARCH_OPENSSL_LIB_DIR}" && -n "${ARCH_OPENSSL_INCLUDE_DIR}" ]]; then \
echo "export $(echo "${CARGO_TARGET}" | tr '[:lower:]' '[:upper:]' | tr - _)_OPENSSL_LIB_DIR=${ARCH_OPENSSL_LIB_DIR}" >> /env-cargo && \
echo "export $(echo "${CARGO_TARGET}" | tr '[:lower:]' '[:upper:]' | tr - _)_OPENSSL_INCLUDE_DIR=${ARCH_OPENSSL_INCLUDE_DIR}" >> /env-cargo ; \
fi && \
# We can't use xx-cargo since that uses clang, which doesn't work for our libraries.
# Because of this we generate the needed environment variables here which we can load in the needed steps.
echo "export CC_$(echo "${CARGO_TARGET}" | tr '[:upper:]' '[:lower:]' | tr - _)=/usr/bin/$(xx-info)-gcc" >> /env-cargo && \
@ -122,30 +130,30 @@ RUN source /env-cargo && \
# Output the current contents of the file
cat /env-cargo
# Configure the DB ARG as late as possible to not invalidate the cached layers above
ARG DB=sqlite,mysql,postgresql
{% elif base == "alpine" %}
# Shared variables across Debian and Alpine
# Environment variables for Cargo on Alpine based builds
RUN echo "export CARGO_TARGET=${RUST_MUSL_CROSS_TARGET}" >> /env-cargo && \
# To be able to build the armv6 image with mimalloc we need to tell the linker to also look for libatomic
if [[ "${TARGETARCH}${TARGETVARIANT}" == "armv6" ]] ; then echo "export RUSTFLAGS='-Clink-arg=-latomic'" >> /env-cargo ; fi && \
# Output the current contents of the file
cat /env-cargo
# Enable MiMalloc to improve performance on Alpine builds
ARG DB=sqlite,mysql,postgresql,enable_mimalloc
{% endif %}
RUN source /env-cargo && \
rustup target add "${CARGO_TARGET}"
ARG CARGO_PROFILE=release
ARG VW_VERSION
# Copies over *only* your manifests and build files
COPY ./Cargo.* ./
COPY ./rust-toolchain.toml ./rust-toolchain.toml
COPY ./build.rs ./build.rs
COPY ./Cargo.* ./rust-toolchain.toml ./build.rs ./
ARG CARGO_PROFILE=release
# Configure the DB ARG as late as possible to not invalidate the cached layers above
{% if base == "debian" %}
ARG DB=sqlite,mysql,postgresql
{% elif base == "alpine" %}
# Enable MiMalloc to improve performance on Alpine builds
ARG DB=sqlite,mysql,postgresql,enable_mimalloc
{% endif %}
# Builds your dependencies and removes the
# dummy project, except the target folder
@ -158,10 +166,13 @@ RUN source /env-cargo && \
# To avoid copying unneeded files, use .dockerignore
COPY . .
ARG VW_VERSION
# Builds again, this time it will be the actual source files being built
RUN source /env-cargo && \
# Make sure that we actually build the project by updating the src/main.rs timestamp
touch src/main.rs && \
# Also do this for build.rs to ensure the version is rechecked
touch build.rs src/main.rs && \
# Create a symlink to the binary target folder to easily copy the binary in the final stage
cargo build --features ${DB} --profile "${CARGO_PROFILE}" --target="${CARGO_TARGET}" && \
if [[ "${CARGO_PROFILE}" == "dev" ]] ; then \
@ -226,8 +237,7 @@ EXPOSE 3012
# and the binary from the "build" stage to the current stage
WORKDIR /
COPY docker/healthcheck.sh /healthcheck.sh
COPY docker/start.sh /start.sh
COPY docker/healthcheck.sh docker/start.sh /
COPY --from=vault /web-vault ./web-vault
COPY --from=build /app/target/final/vaultwarden .

docker/README.md

@ -11,6 +11,11 @@ With just these two files we can build both Debian and Alpine images for the fol
- armv7 (linux/arm/v7)
- armv6 (linux/arm/v6)
There are also some unsupported platforms for Debian-based images. These are not built and tested by default and are only provided to make it easier for users to build for these architectures.
- 386 (linux/386)
- ppc64le (linux/ppc64le)
- s390x (linux/s390x)
To build these containers you need to enable QEMU binfmt support to be able to run/emulate architectures which are different from your host.<br>
This ensures the container build process can run binaries from other architectures.<br>
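A sketch of registering the QEMU emulators on the build host; this is the install counterpart of the uninstall command quoted in the Dockerfile comments:

# Register binfmt handlers for all foreign architectures (privileged, run once)
docker run --privileged --rm tonistiigi/binfmt --install all

# Running the image without arguments shows which emulators are registered
docker run --privileged --rm tonistiigi/binfmt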

docker/docker-bake.hcl

@ -88,7 +88,7 @@ target "debian" {
inherits = ["_default_attributes"]
dockerfile = "docker/Dockerfile.debian"
tags = generate_tags("", platform_tag())
output = [join(",", flatten([["type=docker"], image_index_annotations()]))]
output = ["type=docker"]
}
// Multi Platform target, will build one tagged manifest with all supported architectures
@ -125,6 +125,40 @@ target "debian-armv6" {
tags = generate_tags("", "-armv6")
}
// ==== Start of unsupported Debian architecture targets ===
// These are provided just to help users build for these rare platforms
// They will not be built by default
target "debian-386" {
inherits = ["debian"]
platforms = ["linux/386"]
tags = generate_tags("", "-386")
args = {
ARCH_OPENSSL_LIB_DIR = "/usr/lib/i386-linux-gnu"
ARCH_OPENSSL_INCLUDE_DIR = "/usr/include/i386-linux-gnu"
}
}
target "debian-ppc64le" {
inherits = ["debian"]
platforms = ["linux/ppc64le"]
tags = generate_tags("", "-ppc64le")
args = {
ARCH_OPENSSL_LIB_DIR = "/usr/lib/powerpc64le-linux-gnu"
ARCH_OPENSSL_INCLUDE_DIR = "/usr/include/powerpc64le-linux-gnu"
}
}
target "debian-s390x" {
inherits = ["debian"]
platforms = ["linux/s390x"]
tags = generate_tags("", "-s390x")
args = {
ARCH_OPENSSL_LIB_DIR = "/usr/lib/s390x-linux-gnu"
ARCH_OPENSSL_INCLUDE_DIR = "/usr/include/s390x-linux-gnu"
}
}
// ==== End of unsupported Debian architecture targets ===
// A Group to build all platforms individually for local testing
group "debian-all" {
targets = ["debian-amd64", "debian-arm64", "debian-armv7", "debian-armv6"]
@ -138,7 +172,7 @@ target "alpine" {
inherits = ["_default_attributes"]
dockerfile = "docker/Dockerfile.alpine"
tags = generate_tags("-alpine", platform_tag())
output = [join(",", flatten([["type=docker"], image_index_annotations()]))]
output = ["type=docker"]
}
// Multi Platform target, will build one tagged manifest with all supported architectures
@ -216,7 +250,13 @@ function "generate_tags" {
result = flatten([
for registry in get_container_registries() :
[for base_tag in get_base_tags() :
concat(["${registry}:${base_tag}${suffix}${platform}"])]
concat(
# If the base_tag is "latest" and the suffix is `-alpine`, add a `:alpine` tag too
base_tag == "latest" ? suffix == "-alpine" ? ["${registry}:alpine${platform}"] : [] : [],
# The default tagging strategy
["${registry}:${base_tag}${suffix}${platform}"]
)
]
])
}
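With QEMU binfmt support in place, the unsupported targets above can be baked locally; a sketch, with registries and tags left at the bake file's defaults:

# Build one of the unsupported Debian architectures via the bake file
docker buildx bake --file docker/docker-bake.hcl debian-s390x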

docker/healthcheck.sh

@ -1,12 +1,20 @@
#!/bin/sh
#!/usr/bin/env sh
# Use the value of the corresponding env var (if present),
# or a default value otherwise.
: "${DATA_FOLDER:="data"}"
: "${DATA_FOLDER:="/data"}"
: "${ROCKET_PORT:="80"}"
: "${ENV_FILE:="/.env"}"
CONFIG_FILE="${DATA_FOLDER}"/config.json
# Check if the $ENV_FILE file exist and is readable
# If that is the case, load it into the environment before running any check
if [ -r "${ENV_FILE}" ]; then
# shellcheck disable=SC1090
. "${ENV_FILE}"
fi
# Given a config key, return the corresponding config value from the
# config file. If the key doesn't exist, return an empty string.
get_config_val() {

migrations/mysql/.../up.sql

@ -0,0 +1 @@
ALTER TABLE attachments MODIFY file_size BIGINT NOT NULL;

migrations/mysql/.../up.sql

@ -0,0 +1 @@
ALTER TABLE twofactor MODIFY last_used BIGINT NOT NULL;

migrations/postgresql/.../up.sql

@ -0,0 +1,3 @@
ALTER TABLE attachments
ALTER COLUMN file_size TYPE BIGINT,
ALTER COLUMN file_size SET NOT NULL;

migrations/postgresql/.../up.sql

@ -0,0 +1,3 @@
ALTER TABLE twofactor
ALTER COLUMN last_used TYPE BIGINT,
ALTER COLUMN last_used SET NOT NULL;

migrations/sqlite/.../up.sql

@ -0,0 +1 @@
-- Integer size in SQLite is already i64, so we don't need to do anything

migrations/sqlite/.../up.sql

@ -0,0 +1 @@
-- Integer size in SQLite is already i64, so we don't need to do anything
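Taken together, these migrations widen both counters to 64-bit integers on every supported backend; a sketch for spot-checking the column types after upgrading (the database name is illustrative):

# MySQL/MariaDB: file_size should now report bigint
mysql -D vaultwarden -e "SHOW COLUMNS FROM attachments LIKE 'file_size';"

# PostgreSQL: \d prints the column types; expect bigint for last_used
psql -d vaultwarden -c "\d twofactor"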

rust-toolchain.toml

@ -1,4 +1,4 @@
[toolchain]
channel = "1.73.0"
channel = "1.78.0"
components = [ "rustfmt", "clippy" ]
profile = "minimal"
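Because the channel is pinned in this file, rustup resolves it automatically for any command run inside the repository; a quick way to confirm:

# Prints the pinned toolchain (1.78.0) when run from the repository root
rustup show active-toolchain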

src/api/admin.rs

@ -13,14 +13,18 @@ use rocket::{
};
use crate::{
api::{core::log_event, unregister_push_device, ApiResult, EmptyResult, JsonResult, Notify, NumberOrString},
api::{
core::{log_event, two_factor},
unregister_push_device, ApiResult, EmptyResult, JsonResult, Notify,
},
auth::{decode_admin, encode_jwt, generate_admin_claims, ClientIp},
config::ConfigBuilder,
db::{backup_database, get_sql_server_version, models::*, DbConn, DbConnType},
error::{Error, MapResult},
mail,
util::{
docker_base_image, format_naive_datetime_local, get_display_size, get_reqwest_client, is_running_in_docker,
container_base_image, format_naive_datetime_local, get_display_size, get_reqwest_client,
is_running_in_container, NumberOrString,
},
CONFIG, VERSION,
};
@ -184,12 +188,11 @@ fn post_admin_login(data: Form<LoginForm>, cookies: &CookieJar<'_>, ip: ClientIp
let claims = generate_admin_claims();
let jwt = encode_jwt(&claims);
let cookie = Cookie::build(COOKIE_NAME, jwt)
let cookie = Cookie::build((COOKIE_NAME, jwt))
.path(admin_path())
.max_age(rocket::time::Duration::minutes(CONFIG.admin_session_lifetime()))
.same_site(SameSite::Strict)
.http_only(true)
.finish();
.http_only(true);
cookies.add(cookie);
if let Some(redirect) = redirect {
@ -313,7 +316,7 @@ async fn test_smtp(data: Json<InviteData>, _token: AdminToken) -> EmptyResult {
#[get("/logout")]
fn logout(cookies: &CookieJar<'_>) -> Redirect {
cookies.remove(Cookie::build(COOKIE_NAME, "").path(admin_path()).finish());
cookies.remove(Cookie::build(COOKIE_NAME).path(admin_path()));
Redirect::to(admin_path())
}
@ -343,7 +346,7 @@ async fn users_overview(_token: AdminToken, mut conn: DbConn) -> ApiResult<Html<
let mut usr = u.to_json(&mut conn).await;
usr["cipher_count"] = json!(Cipher::count_owned_by_user(&u.uuid, &mut conn).await);
usr["attachment_count"] = json!(Attachment::count_by_user(&u.uuid, &mut conn).await);
usr["attachment_size"] = json!(get_display_size(Attachment::size_by_user(&u.uuid, &mut conn).await as i32));
usr["attachment_size"] = json!(get_display_size(Attachment::size_by_user(&u.uuid, &mut conn).await));
usr["user_enabled"] = json!(u.enabled);
usr["created_at"] = json!(format_naive_datetime_local(&u.created_at, DT_FMT));
usr["last_active"] = match u.last_active(&mut conn).await {
@ -391,7 +394,7 @@ async fn delete_user(uuid: &str, token: AdminToken, mut conn: DbConn) -> EmptyRe
EventType::OrganizationUserRemoved as i32,
&user_org.uuid,
&user_org.org_uuid,
String::from(ACTING_ADMIN_USER),
ACTING_ADMIN_USER,
14, // Use UnknownBrowser type
&token.ip.ip,
&mut conn,
@ -410,7 +413,7 @@ async fn deauth_user(uuid: &str, _token: AdminToken, mut conn: DbConn, nt: Notif
if CONFIG.push_enabled() {
for device in Device::find_push_devices_by_user(&user.uuid, &mut conn).await {
match unregister_push_device(device.uuid).await {
match unregister_push_device(device.push_uuid).await {
Ok(r) => r,
Err(e) => error!("Unable to unregister devices from Bitwarden server: {}", e),
};
@ -446,9 +449,10 @@ async fn enable_user(uuid: &str, _token: AdminToken, mut conn: DbConn) -> EmptyR
}
#[post("/users/<uuid>/remove-2fa")]
async fn remove_2fa(uuid: &str, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
async fn remove_2fa(uuid: &str, token: AdminToken, mut conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(uuid, &mut conn).await?;
TwoFactor::delete_all_by_user(&user.uuid, &mut conn).await?;
two_factor::enforce_2fa_policy(&user, ACTING_ADMIN_USER, 14, &token.ip.ip, &mut conn).await?;
user.totp_recover = None;
user.save(&mut conn).await
}
@ -506,7 +510,11 @@ async fn update_user_org_type(data: Json<UserOrgTypeData>, token: AdminToken, mu
match OrgPolicy::is_user_allowed(&user_to_edit.user_uuid, &user_to_edit.org_uuid, true, &mut conn).await {
Ok(_) => {}
Err(OrgPolicyErr::TwoFactorMissing) => {
err!("You cannot modify this user to this type because it has no two-step login method activated");
if CONFIG.email_2fa_auto_fallback() {
two_factor::email::find_and_activate_email_2fa(&user_to_edit.user_uuid, &mut conn).await?;
} else {
err!("You cannot modify this user to this type because they have not setup 2FA");
}
}
Err(OrgPolicyErr::SingleOrgEnforced) => {
err!("You cannot modify this user to this type because it is a member of an organization which forbids it");
@ -518,7 +526,7 @@ async fn update_user_org_type(data: Json<UserOrgTypeData>, token: AdminToken, mu
EventType::OrganizationUserUpdated as i32,
&user_to_edit.uuid,
&data.org_uuid,
String::from(ACTING_ADMIN_USER),
ACTING_ADMIN_USER,
14, // Use UnknownBrowser type
&token.ip.ip,
&mut conn,
@ -546,7 +554,7 @@ async fn organizations_overview(_token: AdminToken, mut conn: DbConn) -> ApiResu
org["group_count"] = json!(Group::count_by_org(&o.uuid, &mut conn).await);
org["event_count"] = json!(Event::count_by_org(&o.uuid, &mut conn).await);
org["attachment_count"] = json!(Attachment::count_by_org(&o.uuid, &mut conn).await);
org["attachment_size"] = json!(get_display_size(Attachment::size_by_org(&o.uuid, &mut conn).await as i32));
org["attachment_size"] = json!(get_display_size(Attachment::size_by_org(&o.uuid, &mut conn).await));
organizations_json.push(org);
}
@ -604,7 +612,7 @@ use cached::proc_macro::cached;
/// Cache this function to prevent API call rate limit. Github only allows 60 requests per hour, and we use 3 here already.
/// It will cache this function for 300 seconds (5 minutes) which should prevent the exhaustion of the rate limit.
#[cached(time = 300, sync_writes = true)]
async fn get_release_info(has_http_access: bool, running_within_docker: bool) -> (String, String, String) {
async fn get_release_info(has_http_access: bool, running_within_container: bool) -> (String, String, String) {
// If the HTTP Check failed, do not even attempt to check for new versions since we were not able to connect with github.com anyway.
if has_http_access {
(
@ -621,9 +629,9 @@ async fn get_release_info(has_http_access: bool, running_within_docker: bool) ->
}
_ => "-".to_string(),
},
// Do not fetch the web-vault version when running within Docker.
// Do not fetch the web-vault version when running within a container.
// The web-vault version is embedded within the container itself, and should not be updated manually
if running_within_docker {
if running_within_container {
"-".to_string()
} else {
match get_json_api::<GitRelease>(
@ -677,7 +685,7 @@ async fn diagnostics(_token: AdminToken, ip_header: IpHeader, mut conn: DbConn)
};
// Execute some environment checks
let running_within_docker = is_running_in_docker();
let running_within_container = is_running_in_container();
let has_http_access = has_http_access().await;
let uses_proxy = env::var_os("HTTP_PROXY").is_some()
|| env::var_os("http_proxy").is_some()
@ -691,12 +699,9 @@ async fn diagnostics(_token: AdminToken, ip_header: IpHeader, mut conn: DbConn)
};
let (latest_release, latest_commit, latest_web_build) =
get_release_info(has_http_access, running_within_docker).await;
get_release_info(has_http_access, running_within_container).await;
let ip_header_name = match &ip_header.0 {
Some(h) => h,
_ => "",
};
let ip_header_name = &ip_header.0.unwrap_or_default();
let diagnostics_json = json!({
"dns_resolved": dns_resolved,
@ -706,11 +711,11 @@ async fn diagnostics(_token: AdminToken, ip_header: IpHeader, mut conn: DbConn)
"web_vault_enabled": &CONFIG.web_vault_enabled(),
"web_vault_version": web_vault_version.version.trim_start_matches('v'),
"latest_web_build": latest_web_build,
"running_within_docker": running_within_docker,
"docker_base_image": if running_within_docker { docker_base_image() } else { "Not applicable" },
"running_within_container": running_within_container,
"container_base_image": if running_within_container { container_base_image() } else { "Not applicable" },
"has_http_access": has_http_access,
"ip_header_exists": &ip_header.0.is_some(),
"ip_header_match": ip_header_name == CONFIG.ip_header(),
"ip_header_exists": !ip_header_name.is_empty(),
"ip_header_match": ip_header_name.eq(&CONFIG.ip_header()),
"ip_header_name": ip_header_name,
"ip_header_config": &CONFIG.ip_header(),
"uses_proxy": uses_proxy,
@ -786,16 +791,16 @@ impl<'r> FromRequest<'r> for AdminToken {
if requested_page.is_empty() {
return Outcome::Forward(Status::Unauthorized);
} else {
return Outcome::Failure((Status::Unauthorized, "Unauthorized"));
return Outcome::Error((Status::Unauthorized, "Unauthorized"));
}
}
};
if decode_admin(access_token).is_err() {
// Remove admin cookie
cookies.remove(Cookie::build(COOKIE_NAME, "").path(admin_path()).finish());
cookies.remove(Cookie::build(COOKIE_NAME).path(admin_path()));
error!("Invalid or expired admin JWT. IP: {}.", &ip.ip);
return Outcome::Failure((Status::Unauthorized, "Session expired"));
return Outcome::Error((Status::Unauthorized, "Session expired"));
}
Outcome::Success(Self {

src/api/core/accounts.rs

@ -5,13 +5,16 @@ use serde_json::Value;
use crate::{
api::{
core::log_user_event, register_push_device, unregister_push_device, AnonymousNotify, EmptyResult, JsonResult,
JsonUpcase, Notify, NumberOrString, PasswordData, UpdateType,
core::{log_user_event, two_factor::email},
register_push_device, unregister_push_device, AnonymousNotify, EmptyResult, JsonResult, JsonUpcase, Notify,
PasswordOrOtpData, UpdateType,
},
auth::{decode_delete, decode_invite, decode_verify_email, ClientHeaders, Headers},
crypto,
db::{models::*, DbConn},
mail, CONFIG,
mail,
util::NumberOrString,
CONFIG,
};
use rocket::{
@ -102,6 +105,19 @@ fn enforce_password_hint_setting(password_hint: &Option<String>) -> EmptyResult
}
Ok(())
}
async fn is_email_2fa_required(org_user_uuid: Option<String>, conn: &mut DbConn) -> bool {
if !CONFIG._enable_email_2fa() {
return false;
}
if CONFIG.email_2fa_enforce_on_verified_invite() {
return true;
}
if org_user_uuid.is_some() {
return OrgPolicy::is_enabled_by_org(&org_user_uuid.unwrap(), OrgPolicyType::TwoFactorAuthentication, conn)
.await;
}
false
}
#[post("/accounts/register", data = "<data>")]
async fn register(data: JsonUpcase<RegisterData>, conn: DbConn) -> JsonResult {
@ -150,7 +166,8 @@ pub async fn _register(data: JsonUpcase<RegisterData>, mut conn: DbConn) -> Json
}
user
} else if CONFIG.is_signup_allowed(&email)
|| EmergencyAccess::find_invited_by_grantee_email(&email, &mut conn).await.is_some()
|| (CONFIG.emergency_access_allowed()
&& EmergencyAccess::find_invited_by_grantee_email(&email, &mut conn).await.is_some())
{
user
} else {
@ -201,14 +218,25 @@ pub async fn _register(data: JsonUpcase<RegisterData>, mut conn: DbConn) -> Json
if let Err(e) = mail::send_welcome_must_verify(&user.email, &user.uuid).await {
error!("Error sending welcome email: {:#?}", e);
}
user.last_verifying_at = Some(user.created_at);
} else if let Err(e) = mail::send_welcome(&user.email).await {
error!("Error sending welcome email: {:#?}", e);
}
if verified_by_invite && is_email_2fa_required(data.OrganizationUserId, &mut conn).await {
let _ = email::activate_email_2fa(&user, &mut conn).await;
}
}
user.save(&mut conn).await?;
// accept any open emergency access invitations
if !CONFIG.mail_enabled() && CONFIG.emergency_access_allowed() {
for mut emergency_invite in EmergencyAccess::find_all_invited_by_grantee_email(&user.email, &mut conn).await {
let _ = emergency_invite.accept_invite(&user.uuid, &user.email, &mut conn).await;
}
}
Ok(Json(json!({
"Object": "register",
"CaptchaBypassToken": "",
@ -279,8 +307,9 @@ async fn put_avatar(data: JsonUpcase<AvatarData>, headers: Headers, mut conn: Db
#[get("/users/<uuid>/public-key")]
async fn get_public_keys(uuid: &str, _headers: Headers, mut conn: DbConn) -> JsonResult {
let user = match User::find_by_uuid(uuid, &mut conn).await {
Some(user) => user,
None => err!("User doesn't exist"),
Some(user) if user.public_key.is_some() => user,
Some(_) => err_code!("User has no public_key", Status::NotFound.code),
None => err_code!("User doesn't exist", Status::NotFound.code),
};
Ok(Json(json!({
@ -417,24 +446,46 @@ async fn post_kdf(data: JsonUpcase<ChangeKdfData>, headers: Headers, mut conn: D
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct UpdateFolderData {
Id: String,
// There is a bug in 2024.3.x which adds a `null` item.
// To bypass this we allow an Option here, but skip it during the updates
// See: https://github.com/bitwarden/clients/issues/8453
Id: Option<String>,
Name: String,
}
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct UpdateEmergencyAccessData {
Id: String,
KeyEncrypted: String,
}
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct UpdateResetPasswordData {
OrganizationId: String,
ResetPasswordKey: String,
}
use super::ciphers::CipherData;
use super::sends::{update_send_from_data, SendData};
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct KeyData {
Ciphers: Vec<CipherData>,
Folders: Vec<UpdateFolderData>,
Sends: Vec<SendData>,
EmergencyAccessKeys: Vec<UpdateEmergencyAccessData>,
ResetPasswordKeys: Vec<UpdateResetPasswordData>,
Key: String,
PrivateKey: String,
MasterPasswordHash: String,
PrivateKey: String,
}
#[post("/accounts/key", data = "<data>")]
async fn post_rotatekey(data: JsonUpcase<KeyData>, headers: Headers, mut conn: DbConn, nt: Notify<'_>) -> EmptyResult {
// TODO: See if we can wrap everything within a SQL Transaction. If something fails it should revert everything.
let data: KeyData = data.into_inner().data;
if !headers.user.check_valid_password(&data.MasterPasswordHash) {
@ -451,37 +502,83 @@ async fn post_rotatekey(data: JsonUpcase<KeyData>, headers: Headers, mut conn: D
// Update folder data
for folder_data in data.Folders {
let mut saved_folder = match Folder::find_by_uuid(&folder_data.Id, &mut conn).await {
Some(folder) => folder,
None => err!("Folder doesn't exist"),
// Skip `null` folder id entries.
// See: https://github.com/bitwarden/clients/issues/8453
if let Some(folder_id) = folder_data.Id {
let mut saved_folder = match Folder::find_by_uuid(&folder_id, &mut conn).await {
Some(folder) => folder,
None => err!("Folder doesn't exist"),
};
if &saved_folder.user_uuid != user_uuid {
err!("The folder is not owned by the user")
}
saved_folder.name = folder_data.Name;
saved_folder.save(&mut conn).await?
}
}
// Update emergency access data
for emergency_access_data in data.EmergencyAccessKeys {
let mut saved_emergency_access = match EmergencyAccess::find_by_uuid(&emergency_access_data.Id, &mut conn).await
{
Some(emergency_access) => emergency_access,
None => err!("Emergency access doesn't exist"),
};
if &saved_folder.user_uuid != user_uuid {
err!("The folder is not owned by the user")
if &saved_emergency_access.grantor_uuid != user_uuid {
err!("The emergency access is not owned by the user")
}
saved_folder.name = folder_data.Name;
saved_folder.save(&mut conn).await?
saved_emergency_access.key_encrypted = Some(emergency_access_data.KeyEncrypted);
saved_emergency_access.save(&mut conn).await?
}
// Update reset password data
for reset_password_data in data.ResetPasswordKeys {
let mut user_org =
match UserOrganization::find_by_user_and_org(user_uuid, &reset_password_data.OrganizationId, &mut conn)
.await
{
Some(reset_password) => reset_password,
None => err!("Reset password doesn't exist"),
};
user_org.reset_password_key = Some(reset_password_data.ResetPasswordKey);
user_org.save(&mut conn).await?
}
// Update send data
for send_data in data.Sends {
let mut send = match Send::find_by_uuid(send_data.Id.as_ref().unwrap(), &mut conn).await {
Some(send) => send,
None => err!("Send doesn't exist"),
};
update_send_from_data(&mut send, send_data, &headers, &mut conn, &nt, UpdateType::None).await?;
}
// Update cipher data
use super::ciphers::update_cipher_from_data;
for cipher_data in data.Ciphers {
let mut saved_cipher = match Cipher::find_by_uuid(cipher_data.Id.as_ref().unwrap(), &mut conn).await {
Some(cipher) => cipher,
None => err!("Cipher doesn't exist"),
};
if cipher_data.OrganizationId.is_none() {
let mut saved_cipher = match Cipher::find_by_uuid(cipher_data.Id.as_ref().unwrap(), &mut conn).await {
Some(cipher) => cipher,
None => err!("Cipher doesn't exist"),
};
if saved_cipher.user_uuid.as_ref().unwrap() != user_uuid {
err!("The cipher is not owned by the user")
if saved_cipher.user_uuid.as_ref().unwrap() != user_uuid {
err!("The cipher is not owned by the user")
}
// Prevent triggering cipher updates via WebSockets by setting UpdateType::None
// The user sessions are invalidated because all the ciphers were re-encrypted, so triggering an update could cause issues.
// We force the users to log out after the user has been saved to try and prevent these issues.
update_cipher_from_data(&mut saved_cipher, cipher_data, &headers, None, &mut conn, &nt, UpdateType::None)
.await?
}
// Prevent triggering cipher updates via WebSockets by setting UpdateType::None
// The user sessions are invalidated because all the ciphers were re-encrypted, so triggering an update could cause issues.
// We force the users to log out after the user has been saved to try and prevent these issues.
update_cipher_from_data(&mut saved_cipher, cipher_data, &headers, false, &mut conn, &nt, UpdateType::None)
.await?
}
// Update user data
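One way the TODO at the top of post_rotatekey could be tackled is a database transaction, so a failure mid-rotation rolls everything back. A rough sketch with Diesel's synchronous API (an assumption for illustration; Vaultwarden's pooled async DbConn wrapper would need its own plumbing):

use diesel::prelude::*;
use diesel::result::Error;

fn rotate_all(conn: &mut SqliteConnection) -> Result<(), Error> {
    conn.transaction::<_, Error, _>(|conn| {
        // update folders, emergency access keys, reset-password keys,
        // sends and ciphers here; returning Err(..) rolls everything back
        let _ = conn; // placeholder for the actual updates
        Ok(())
    })
}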
@ -503,17 +600,15 @@ async fn post_rotatekey(data: JsonUpcase<KeyData>, headers: Headers, mut conn: D
#[post("/accounts/security-stamp", data = "<data>")]
async fn post_sstamp(
data: JsonUpcase<PasswordData>,
data: JsonUpcase<PasswordOrOtpData>,
headers: Headers,
mut conn: DbConn,
nt: Notify<'_>,
) -> EmptyResult {
let data: PasswordData = data.into_inner().data;
let data: PasswordOrOtpData = data.into_inner().data;
let mut user = headers.user;
if !user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password")
}
data.validate(&user, true, &mut conn).await?;
Device::delete_all_by_user(&user.uuid, &mut conn).await?;
user.reset_security_stamp();
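The recurring `data.validate(&user, true, &mut conn)` calls replace the old inline password check. Conceptually (an illustrative reconstruction, not Vaultwarden's actual helper), the validation accepts either the master password hash or a one-time password:

// Sketch only: succeed on a valid master password hash, otherwise fall
// back to an OTP check; `totp_code_is_valid` is a hypothetical helper.
async fn validate(data: &PasswordOrOtpData, user: &User, conn: &mut DbConn) -> bool {
    if let Some(hash) = &data.MasterPasswordHash {
        return user.check_valid_password(hash);
    }
    if let Some(otp) = &data.Otp {
        return totp_code_is_valid(user, otp, conn).await;
    }
    false
}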
@ -558,6 +653,8 @@ async fn post_email_token(data: JsonUpcase<EmailTokenData>, headers: Headers, mu
if let Err(e) = mail::send_change_email(&data.NewEmail, &token).await {
error!("Error sending change-email email: {:#?}", e);
}
} else {
debug!("Email change request for user ({}) to email ({}) with token ({})", user.uuid, data.NewEmail, token);
}
user.email_new = Some(data.NewEmail);
@ -736,25 +833,23 @@ async fn post_delete_recover_token(data: JsonUpcase<DeleteRecoverTokenData>, mut
}
#[post("/accounts/delete", data = "<data>")]
async fn post_delete_account(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> EmptyResult {
async fn post_delete_account(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, conn: DbConn) -> EmptyResult {
delete_account(data, headers, conn).await
}
#[delete("/accounts", data = "<data>")]
async fn delete_account(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
let data: PasswordData = data.into_inner().data;
async fn delete_account(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
let data: PasswordOrOtpData = data.into_inner().data;
let user = headers.user;
if !user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password")
}
data.validate(&user, true, &mut conn).await?;
user.delete(&mut conn).await
}
#[get("/accounts/revision-date")]
fn revision_date(headers: Headers) -> JsonResult {
let revision_date = headers.user.updated_at.timestamp_millis();
let revision_date = headers.user.updated_at.and_utc().timestamp_millis();
Ok(Json(json!(revision_date)))
}
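The extra `and_utc()` hop follows chrono's deprecation of the timestamp helpers on `NaiveDateTime` (around 0.4.35): the naive value is first given an explicit UTC timezone, then converted. In isolation:

use chrono::NaiveDateTime;

fn millis(updated_at: NaiveDateTime) -> i64 {
    // Previously: updated_at.timestamp_millis() (now deprecated)
    updated_at.and_utc().timestamp_millis()
}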
@ -854,20 +949,13 @@ fn verify_password(data: JsonUpcase<SecretVerificationRequest>, headers: Headers
Ok(())
}
async fn _api_key(
data: JsonUpcase<SecretVerificationRequest>,
rotate: bool,
headers: Headers,
mut conn: DbConn,
) -> JsonResult {
async fn _api_key(data: JsonUpcase<PasswordOrOtpData>, rotate: bool, headers: Headers, mut conn: DbConn) -> JsonResult {
use crate::util::format_date;
let data: SecretVerificationRequest = data.into_inner().data;
let data: PasswordOrOtpData = data.into_inner().data;
let mut user = headers.user;
if !user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password")
}
data.validate(&user, true, &mut conn).await?;
if rotate || user.api_key.is_none() {
user.api_key = Some(crypto::generate_api_key());
@ -882,12 +970,12 @@ async fn _api_key(
}
#[post("/accounts/api-key", data = "<data>")]
async fn api_key(data: JsonUpcase<SecretVerificationRequest>, headers: Headers, conn: DbConn) -> JsonResult {
async fn api_key(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, conn: DbConn) -> JsonResult {
_api_key(data, false, headers, conn).await
}
#[post("/accounts/rotate-api-key", data = "<data>")]
async fn rotate_api_key(data: JsonUpcase<SecretVerificationRequest>, headers: Headers, conn: DbConn) -> JsonResult {
async fn rotate_api_key(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, conn: DbConn) -> JsonResult {
_api_key(data, true, headers, conn).await
}
@ -921,26 +1009,23 @@ impl<'r> FromRequest<'r> for KnownDevice {
let email_bytes = match data_encoding::BASE64URL_NOPAD.decode(email_b64.as_bytes()) {
Ok(bytes) => bytes,
Err(_) => {
return Outcome::Failure((
Status::BadRequest,
"X-Request-Email value failed to decode as base64url",
));
return Outcome::Error((Status::BadRequest, "X-Request-Email value failed to decode as base64url"));
}
};
match String::from_utf8(email_bytes) {
Ok(email) => email,
Err(_) => {
return Outcome::Failure((Status::BadRequest, "X-Request-Email value failed to decode as UTF-8"));
return Outcome::Error((Status::BadRequest, "X-Request-Email value failed to decode as UTF-8"));
}
}
} else {
return Outcome::Failure((Status::BadRequest, "X-Request-Email value is required"));
return Outcome::Error((Status::BadRequest, "X-Request-Email value is required"));
};
let uuid = if let Some(uuid) = req.headers().get_one("X-Device-Identifier") {
uuid.to_string()
} else {
return Outcome::Failure((Status::BadRequest, "X-Device-Identifier value is required"));
return Outcome::Error((Status::BadRequest, "X-Device-Identifier value is required"));
};
Outcome::Success(KnownDevice {
@ -963,26 +1048,33 @@ async fn post_device_token(uuid: &str, data: JsonUpcase<PushToken>, headers: Hea
#[put("/devices/identifier/<uuid>/token", data = "<data>")]
async fn put_device_token(uuid: &str, data: JsonUpcase<PushToken>, headers: Headers, mut conn: DbConn) -> EmptyResult {
if !CONFIG.push_enabled() {
return Ok(());
}
let data = data.into_inner().data;
let token = data.PushToken;
let mut device = match Device::find_by_uuid_and_user(&headers.device.uuid, &headers.user.uuid, &mut conn).await {
Some(device) => device,
None => err!(format!("Error: device {uuid} should be present before a token can be assigned")),
};
device.push_token = Some(token);
if device.push_uuid.is_none() {
device.push_uuid = Some(uuid::Uuid::new_v4().to_string());
// if the device has already been registered
if device.is_registered() {
// check if the new token is the same as the registered token
if device.push_token.is_some() && device.push_token.unwrap() == token.clone() {
debug!("Device {} is already registered and token is the same", uuid);
return Ok(());
} else {
// Try to unregister already registered device
let _ = unregister_push_device(device.push_uuid).await;
}
// clear the push_uuid
device.push_uuid = None;
}
device.push_token = Some(token);
if let Err(e) = device.save(&mut conn).await {
err!(format!("An error occurred while trying to save the device push token: {e}"));
}
if let Err(e) = register_push_device(headers.user.uuid, device).await {
err!(format!("An error occurred while proceeding registration of a device: {e}"));
}
register_push_device(&mut device, &mut conn).await?;
Ok(())
}
@ -999,7 +1091,7 @@ async fn put_clear_device_token(uuid: &str, mut conn: DbConn) -> EmptyResult {
if let Some(device) = Device::find_by_uuid(uuid, &mut conn).await {
Device::clear_push_token_by_uuid(uuid, &mut conn).await?;
unregister_push_device(device.uuid).await?;
unregister_push_device(device.push_uuid).await?;
}
Ok(())

View file

@ -1,6 +1,7 @@
use std::collections::{HashMap, HashSet};
use chrono::{NaiveDateTime, Utc};
use num_traits::ToPrimitive;
use rocket::fs::TempFile;
use rocket::serde::json::Json;
use rocket::{
@ -9,8 +10,9 @@ use rocket::{
};
use serde_json::Value;
use crate::util::NumberOrString;
use crate::{
api::{self, core::log_event, EmptyResult, JsonResult, JsonUpcase, Notify, PasswordData, UpdateType},
api::{self, core::log_event, EmptyResult, JsonResult, JsonUpcase, Notify, PasswordOrOtpData, UpdateType},
auth::Headers,
crypto,
db::{models::*, DbConn, DbPool},
@ -204,7 +206,7 @@ pub struct CipherData {
// Folder id is not included in import
FolderId: Option<String>,
// TODO: Some of these might appear all the time, no need for Option
OrganizationId: Option<String>,
pub OrganizationId: Option<String>,
Key: Option<String>,
@ -320,7 +322,7 @@ async fn post_ciphers(data: JsonUpcase<CipherData>, headers: Headers, mut conn:
data.LastKnownRevisionDate = None;
let mut cipher = Cipher::new(data.Type, data.Name.clone());
update_cipher_from_data(&mut cipher, data, &headers, false, &mut conn, &nt, UpdateType::SyncCipherCreate).await?;
update_cipher_from_data(&mut cipher, data, &headers, None, &mut conn, &nt, UpdateType::SyncCipherCreate).await?;
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, &mut conn).await))
}
@ -351,7 +353,7 @@ pub async fn update_cipher_from_data(
cipher: &mut Cipher,
data: CipherData,
headers: &Headers,
shared_to_collection: bool,
shared_to_collections: Option<Vec<String>>,
conn: &mut DbConn,
nt: &Notify<'_>,
ut: UpdateType,
@ -390,7 +392,7 @@ pub async fn update_cipher_from_data(
match UserOrganization::find_by_user_and_org(&headers.user.uuid, &org_id, conn).await {
None => err!("You don't have permission to add item to organization"),
Some(org_user) => {
if shared_to_collection
if shared_to_collections.is_some()
|| org_user.has_full_access()
|| cipher.is_write_accessible_to_user(&headers.user.uuid, conn).await
{
@ -510,15 +512,22 @@ pub async fn update_cipher_from_data(
event_type as i32,
&cipher.uuid,
org_uuid,
headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
conn,
)
.await;
}
nt.send_cipher_update(ut, cipher, &cipher.update_users_revision(conn).await, &headers.device.uuid, None, conn)
.await;
nt.send_cipher_update(
ut,
cipher,
&cipher.update_users_revision(conn).await,
&headers.device.uuid,
shared_to_collections,
conn,
)
.await;
}
Ok(())
}
@ -579,7 +588,7 @@ async fn post_ciphers_import(
cipher_data.FolderId = folder_uuid;
let mut cipher = Cipher::new(cipher_data.Type, cipher_data.Name.clone());
update_cipher_from_data(&mut cipher, cipher_data, &headers, false, &mut conn, &nt, UpdateType::None).await?;
update_cipher_from_data(&mut cipher, cipher_data, &headers, None, &mut conn, &nt, UpdateType::None).await?;
}
let mut user = headers.user;
@ -647,7 +656,7 @@ async fn put_cipher(
err!("Cipher is not write accessible")
}
update_cipher_from_data(&mut cipher, data, &headers, false, &mut conn, &nt, UpdateType::SyncCipherUpdate).await?;
update_cipher_from_data(&mut cipher, data, &headers, None, &mut conn, &nt, UpdateType::SyncCipherUpdate).await?;
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, &mut conn).await))
}
@ -791,7 +800,7 @@ async fn post_collections_admin(
EventType::CipherUpdatedCollections as i32,
&cipher.uuid,
&cipher.organization_uuid.unwrap(),
headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
&mut conn,
@ -849,7 +858,6 @@ async fn put_cipher_share_selected(
nt: Notify<'_>,
) -> EmptyResult {
let mut data: ShareSelectedCipherData = data.into_inner().data;
let mut cipher_ids: Vec<String> = Vec::new();
if data.Ciphers.is_empty() {
err!("You must select at least one cipher.")
@ -860,10 +868,9 @@ async fn put_cipher_share_selected(
}
for cipher in data.Ciphers.iter() {
match cipher.Id {
Some(ref id) => cipher_ids.push(id.to_string()),
None => err!("Request missing ids field"),
};
if cipher.Id.is_none() {
err!("Request missing ids field")
}
}
while let Some(cipher) = data.Ciphers.pop() {
@ -899,7 +906,7 @@ async fn share_cipher_by_uuid(
None => err!("Cipher doesn't exist"),
};
let mut shared_to_collection = false;
let mut shared_to_collections = vec![];
if let Some(organization_uuid) = &data.Cipher.OrganizationId {
for uuid in &data.CollectionIds {
@ -908,7 +915,7 @@ async fn share_cipher_by_uuid(
Some(collection) => {
if collection.is_writable_by_user(&headers.user.uuid, conn).await {
CollectionCipher::save(&cipher.uuid, &collection.uuid, conn).await?;
shared_to_collection = true;
shared_to_collections.push(collection.uuid);
} else {
err!("No rights to modify the collection")
}
@ -924,7 +931,7 @@ async fn share_cipher_by_uuid(
UpdateType::SyncCipherCreate
};
update_cipher_from_data(&mut cipher, data.Cipher, headers, shared_to_collection, conn, nt, ut).await?;
update_cipher_from_data(&mut cipher, data.Cipher, headers, Some(shared_to_collections), conn, nt, ut).await?;
Ok(Json(cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, conn).await))
}
@ -958,7 +965,7 @@ async fn get_attachment(uuid: &str, attachment_id: &str, headers: Headers, mut c
struct AttachmentRequestData {
Key: String,
FileName: String,
FileSize: i32,
FileSize: NumberOrString,
AdminRequest: Option<bool>, // true when attaching from an org vault view
}
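`NumberOrString` lets clients send the size either as a JSON number or as a string. A sketch of how such a type can be modeled with an untagged serde enum (Vaultwarden's own version lives in `util` and may differ in detail):

use serde::Deserialize;

#[derive(Deserialize)]
#[serde(untagged)]
enum NumberOrString {
    Number(i64),
    String(String),
}

impl NumberOrString {
    // The numeric form passes through; the string form is parsed on demand.
    fn into_i64(self) -> Result<i64, std::num::ParseIntError> {
        match self {
            NumberOrString::Number(n) => Ok(n),
            NumberOrString::String(s) => s.parse(),
        }
    }
}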
@ -987,10 +994,15 @@ async fn post_attachment_v2(
err!("Cipher is not write accessible")
}
let attachment_id = crypto::generate_attachment_id();
let data: AttachmentRequestData = data.into_inner().data;
let file_size = data.FileSize.into_i64()?;
if file_size < 0 {
err!("Attachment size can't be negative")
}
let attachment_id = crypto::generate_attachment_id();
let attachment =
Attachment::new(attachment_id.clone(), cipher.uuid.clone(), data.FileName, data.FileSize, Some(data.Key));
Attachment::new(attachment_id.clone(), cipher.uuid.clone(), data.FileName, file_size, Some(data.Key));
attachment.save(&mut conn).await.expect("Error saving attachment");
let url = format!("/ciphers/{}/attachment/{}", cipher.uuid, attachment_id);
@ -1030,6 +1042,15 @@ async fn save_attachment(
mut conn: DbConn,
nt: Notify<'_>,
) -> Result<(Cipher, DbConn), crate::error::Error> {
let mut data = data.into_inner();
let Some(size) = data.data.len().to_i64() else {
err!("Attachment data size overflow");
};
if size < 0 {
err!("Attachment size can't be negative")
}
let cipher = match Cipher::find_by_uuid(cipher_uuid, &mut conn).await {
Some(cipher) => cipher,
None => err!("Cipher doesn't exist"),
@ -1042,19 +1063,29 @@ async fn save_attachment(
// In the v2 API, the attachment record has already been created,
// so the size limit needs to be adjusted to account for that.
let size_adjust = match &attachment {
None => 0, // Legacy API
Some(a) => i64::from(a.file_size), // v2 API
None => 0, // Legacy API
Some(a) => a.file_size, // v2 API
};
let size_limit = if let Some(ref user_uuid) = cipher.user_uuid {
match CONFIG.user_attachment_limit() {
Some(0) => err!("Attachments are disabled"),
Some(limit_kb) => {
let left = (limit_kb * 1024) - Attachment::size_by_user(user_uuid, &mut conn).await + size_adjust;
let already_used = Attachment::size_by_user(user_uuid, &mut conn).await;
let left = limit_kb
.checked_mul(1024)
.and_then(|l| l.checked_sub(already_used))
.and_then(|l| l.checked_add(size_adjust));
let Some(left) = left else {
err!("Attachment size overflow");
};
if left <= 0 {
err!("Attachment storage limit reached! Delete some attachments to free up space")
}
Some(left as u64)
Some(left)
}
None => None,
}
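The quota arithmetic is now overflow-safe: each step uses the checked integer operations, so an extreme configured limit yields an error instead of a wrapped value. The pattern in isolation:

// Remaining attachment quota in bytes, or None on i64 overflow.
fn remaining_quota(limit_kb: i64, already_used: i64, size_adjust: i64) -> Option<i64> {
    limit_kb
        .checked_mul(1024)
        .and_then(|l| l.checked_sub(already_used))
        .and_then(|l| l.checked_add(size_adjust))
}
// e.g. remaining_quota(i64::MAX, 0, 0) == None rather than a silent wrap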
@ -1062,11 +1093,21 @@ async fn save_attachment(
match CONFIG.org_attachment_limit() {
Some(0) => err!("Attachments are disabled"),
Some(limit_kb) => {
let left = (limit_kb * 1024) - Attachment::size_by_org(org_uuid, &mut conn).await + size_adjust;
let already_used = Attachment::size_by_org(org_uuid, &mut conn).await;
let left = limit_kb
.checked_mul(1024)
.and_then(|l| l.checked_sub(already_used))
.and_then(|l| l.checked_add(size_adjust));
let Some(left) = left else {
err!("Attachment size overflow");
};
if left <= 0 {
err!("Attachment storage limit reached! Delete some attachments to free up space")
}
Some(left as u64)
Some(left)
}
None => None,
}
@ -1074,10 +1115,8 @@ async fn save_attachment(
err!("Cipher is neither owned by a user nor an organization");
};
let mut data = data.into_inner();
if let Some(size_limit) = size_limit {
if data.data.len() > size_limit {
if size > size_limit {
err!("Attachment storage limit exceeded with this file");
}
}
@ -1087,20 +1126,19 @@ async fn save_attachment(
None => crypto::generate_attachment_id(), // Legacy API
};
let folder_path = tokio::fs::canonicalize(&CONFIG.attachments_folder()).await?.join(cipher_uuid);
let file_path = folder_path.join(&file_id);
tokio::fs::create_dir_all(&folder_path).await?;
let size = data.data.len() as i32;
if let Some(attachment) = &mut attachment {
// v2 API
// Check the actual size against the size initially provided by
// the client. Upstream allows +/- 1 MiB deviation from this
// size, but it's not clear when or why this is needed.
const LEEWAY: i32 = 1024 * 1024; // 1 MiB
let min_size = attachment.file_size - LEEWAY;
let max_size = attachment.file_size + LEEWAY;
const LEEWAY: i64 = 1024 * 1024; // 1 MiB
let Some(max_size) = attachment.file_size.checked_add(LEEWAY) else {
err!("Invalid attachment size max")
};
let Some(min_size) = attachment.file_size.checked_sub(LEEWAY) else {
err!("Invalid attachment size min")
};
if min_size <= size && size <= max_size {
if size != attachment.file_size {
@ -1115,6 +1153,10 @@ async fn save_attachment(
}
} else {
// Legacy API
// SAFETY: This value is only stored in the database and is not used to access the file system.
// As a result, the conditions specified by Rocket [0] are met and this is safe to use.
// [0]: https://docs.rs/rocket/latest/rocket/fs/struct.FileName.html#-danger-
let encrypted_filename = data.data.raw_name().map(|s| s.dangerous_unsafe_unsanitized_raw().to_string());
if encrypted_filename.is_none() {
@ -1124,10 +1166,14 @@ async fn save_attachment(
err!("No attachment key provided")
}
let attachment =
Attachment::new(file_id, String::from(cipher_uuid), encrypted_filename.unwrap(), size, data.key);
Attachment::new(file_id.clone(), String::from(cipher_uuid), encrypted_filename.unwrap(), size, data.key);
attachment.save(&mut conn).await.expect("Error saving attachment");
}
let folder_path = tokio::fs::canonicalize(&CONFIG.attachments_folder()).await?.join(cipher_uuid);
let file_path = folder_path.join(&file_id);
tokio::fs::create_dir_all(&folder_path).await?;
if let Err(_err) = data.data.persist_to(&file_path).await {
data.data.move_copy_to(file_path).await?
}
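`persist_to` relies on a rename, which fails when Rocket's temp dir and the attachments folder sit on different filesystems; `move_copy_to` then copies the data and removes the original. The fallback, shown on its own (path handling assumed):

use std::path::PathBuf;
use rocket::fs::TempFile;

async fn store(mut data: TempFile<'_>, file_path: PathBuf) -> std::io::Result<()> {
    if data.persist_to(&file_path).await.is_err() {
        // rename failed (e.g. EXDEV across devices): copy instead
        data.move_copy_to(file_path).await?;
    }
    Ok(())
}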
@ -1147,7 +1193,7 @@ async fn save_attachment(
EventType::CipherAttachmentCreated as i32,
&cipher.uuid,
org_uuid,
headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
&mut conn,
@ -1457,19 +1503,15 @@ struct OrganizationId {
#[post("/ciphers/purge?<organization..>", data = "<data>")]
async fn delete_all(
organization: Option<OrganizationId>,
data: JsonUpcase<PasswordData>,
data: JsonUpcase<PasswordOrOtpData>,
headers: Headers,
mut conn: DbConn,
nt: Notify<'_>,
) -> EmptyResult {
let data: PasswordData = data.into_inner().data;
let password_hash = data.MasterPasswordHash;
let data: PasswordOrOtpData = data.into_inner().data;
let mut user = headers.user;
if !user.check_valid_password(&password_hash) {
err!("Invalid password")
}
data.validate(&user, true, &mut conn).await?;
match organization {
Some(org_data) => {
@ -1485,7 +1527,7 @@ async fn delete_all(
EventType::OrganizationPurgedVault as i32,
&org_data.org_id,
&org_data.org_id,
user.uuid,
&user.uuid,
headers.device.atype,
&headers.ip.ip,
&mut conn,
@ -1566,16 +1608,8 @@ async fn _delete_cipher_by_uuid(
false => EventType::CipherDeleted as i32,
};
log_event(
event_type,
&cipher.uuid,
&org_uuid,
headers.user.uuid.clone(),
headers.device.atype,
&headers.ip.ip,
conn,
)
.await;
log_event(event_type, &cipher.uuid, &org_uuid, &headers.user.uuid, headers.device.atype, &headers.ip.ip, conn)
.await;
}
Ok(())
@ -1635,7 +1669,7 @@ async fn _restore_cipher_by_uuid(uuid: &str, headers: &Headers, conn: &mut DbCon
EventType::CipherRestored as i32,
&cipher.uuid.clone(),
org_uuid,
headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
conn,
@ -1719,7 +1753,7 @@ async fn _delete_cipher_attachment_by_id(
EventType::CipherAttachmentDeleted as i32,
&cipher.uuid,
&org_uuid,
headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
conn,
@ -1802,15 +1836,22 @@ impl CipherSyncData {
.collect();
// Generate a HashMap with the collections_uuid as key and the CollectionGroup record
let user_collections_groups: HashMap<String, CollectionGroup> = CollectionGroup::find_by_user(user_uuid, conn)
.await
.into_iter()
.map(|collection_group| (collection_group.collections_uuid.clone(), collection_group))
.collect();
let user_collections_groups: HashMap<String, CollectionGroup> = if CONFIG.org_groups_enabled() {
CollectionGroup::find_by_user(user_uuid, conn)
.await
.into_iter()
.map(|collection_group| (collection_group.collections_uuid.clone(), collection_group))
.collect()
} else {
HashMap::new()
};
// Get all organizations that the user has full access to via group assignment
let user_group_full_access_for_organizations: HashSet<String> =
Group::gather_user_organizations_full_access(user_uuid, conn).await.into_iter().collect();
let user_group_full_access_for_organizations: HashSet<String> = if CONFIG.org_groups_enabled() {
Group::gather_user_organizations_full_access(user_uuid, conn).await.into_iter().collect()
} else {
HashSet::new()
};
Self {
cipher_attachments,

View file

@ -1,15 +1,17 @@
use chrono::{Duration, Utc};
use chrono::{TimeDelta, Utc};
use rocket::{serde::json::Json, Route};
use serde_json::Value;
use crate::{
api::{
core::{CipherSyncData, CipherSyncType},
EmptyResult, JsonResult, JsonUpcase, NumberOrString,
EmptyResult, JsonResult, JsonUpcase,
},
auth::{decode_emergency_access_invite, Headers},
db::{models::*, DbConn, DbPool},
mail, CONFIG,
mail,
util::NumberOrString,
CONFIG,
};
pub fn routes() -> Vec<Route> {
@ -18,6 +20,7 @@ pub fn routes() -> Vec<Route> {
get_grantees,
get_emergency_access,
put_emergency_access,
post_emergency_access,
delete_emergency_access,
post_delete_emergency_access,
send_invite,
@ -37,45 +40,66 @@ pub fn routes() -> Vec<Route> {
// region get
#[get("/emergency-access/trusted")]
async fn get_contacts(headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
async fn get_contacts(headers: Headers, mut conn: DbConn) -> Json<Value> {
if !CONFIG.emergency_access_allowed() {
return Json(json!({
"Data": [{
"Id": "",
"Status": 2,
"Type": 0,
"WaitTimeDays": 0,
"GranteeId": "",
"Email": "",
"Name": "NOTE: Emergency Access is disabled!",
"Object": "emergencyAccessGranteeDetails",
}],
"Object": "list",
"ContinuationToken": null
}));
}
let emergency_access_list = EmergencyAccess::find_all_by_grantor_uuid(&headers.user.uuid, &mut conn).await;
let mut emergency_access_list_json = Vec::with_capacity(emergency_access_list.len());
for ea in emergency_access_list {
emergency_access_list_json.push(ea.to_json_grantee_details(&mut conn).await);
if let Some(grantee) = ea.to_json_grantee_details(&mut conn).await {
emergency_access_list_json.push(grantee)
}
}
Ok(Json(json!({
Json(json!({
"Data": emergency_access_list_json,
"Object": "list",
"ContinuationToken": null
})))
}))
}
#[get("/emergency-access/granted")]
async fn get_grantees(headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let emergency_access_list = EmergencyAccess::find_all_by_grantee_uuid(&headers.user.uuid, &mut conn).await;
async fn get_grantees(headers: Headers, mut conn: DbConn) -> Json<Value> {
let emergency_access_list = if CONFIG.emergency_access_allowed() {
EmergencyAccess::find_all_by_grantee_uuid(&headers.user.uuid, &mut conn).await
} else {
Vec::new()
};
let mut emergency_access_list_json = Vec::with_capacity(emergency_access_list.len());
for ea in emergency_access_list {
emergency_access_list_json.push(ea.to_json_grantor_details(&mut conn).await);
}
Ok(Json(json!({
Json(json!({
"Data": emergency_access_list_json,
"Object": "list",
"ContinuationToken": null
})))
}))
}
#[get("/emergency-access/<emer_id>")]
async fn get_emergency_access(emer_id: &str, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
check_emergency_access_enabled()?;
match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emergency_access) => Ok(Json(emergency_access.to_json_grantee_details(&mut conn).await)),
Some(emergency_access) => Ok(Json(
emergency_access.to_json_grantee_details(&mut conn).await.expect("Grantee user should exist but does not!"),
)),
None => err!("Emergency access not valid."),
}
}
@ -103,7 +127,7 @@ async fn post_emergency_access(
data: JsonUpcase<EmergencyAccessUpdateData>,
mut conn: DbConn,
) -> JsonResult {
check_emergency_access_allowed()?;
check_emergency_access_enabled()?;
let data: EmergencyAccessUpdateData = data.into_inner().data;
@ -133,7 +157,7 @@ async fn post_emergency_access(
#[delete("/emergency-access/<emer_id>")]
async fn delete_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> EmptyResult {
check_emergency_access_allowed()?;
check_emergency_access_enabled()?;
let grantor_user = headers.user;
@ -169,7 +193,7 @@ struct EmergencyAccessInviteData {
#[post("/emergency-access/invite", data = "<data>")]
async fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
check_emergency_access_allowed()?;
check_emergency_access_enabled()?;
let data: EmergencyAccessInviteData = data.into_inner().data;
let email = data.Email.to_lowercase();
@ -189,7 +213,7 @@ async fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Heade
err!("You can not set yourself as an emergency contact.")
}
let grantee_user = match User::find_by_mail(&email, &mut conn).await {
let (grantee_user, new_user) = match User::find_by_mail(&email, &mut conn).await {
None => {
if !CONFIG.invitations_allowed() {
err!(format!("Grantee user does not exist: {}", &email))
@ -206,9 +230,10 @@ async fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Heade
let mut user = User::new(email.clone());
user.save(&mut conn).await?;
user
(user, true)
}
Some(user) => user,
Some(user) if user.password_hash.is_empty() => (user, true),
Some(user) => (user, false),
};
if EmergencyAccess::find_by_grantor_uuid_and_grantee_uuid_or_email(
@ -236,15 +261,9 @@ async fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Heade
&grantor_user.email,
)
.await?;
} else {
// Automatically mark user as accepted if no email invites
match User::find_by_mail(&email, &mut conn).await {
Some(user) => match accept_invite_process(&user.uuid, &mut new_emergency_access, &email, &mut conn).await {
Ok(v) => v,
Err(e) => err!(e.to_string()),
},
None => err!("Grantee user not found."),
}
} else if !new_user {
// if mail is not enabled, immediately accept the invitation for existing users
new_emergency_access.accept_invite(&grantee_user.uuid, &email, &mut conn).await?;
}
Ok(())
@ -252,7 +271,7 @@ async fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Heade
#[post("/emergency-access/<emer_id>/reinvite")]
async fn resend_invite(emer_id: &str, headers: Headers, mut conn: DbConn) -> EmptyResult {
check_emergency_access_allowed()?;
check_emergency_access_enabled()?;
let mut emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer,
@ -288,17 +307,12 @@ async fn resend_invite(emer_id: &str, headers: Headers, mut conn: DbConn) -> Emp
&grantor_user.email,
)
.await?;
} else {
if Invitation::find_by_mail(&email, &mut conn).await.is_none() {
let invitation = Invitation::new(&email);
invitation.save(&mut conn).await?;
}
// Automatically mark user as accepted if no email invites
match accept_invite_process(&grantee_user.uuid, &mut emergency_access, &email, &mut conn).await {
Ok(v) => v,
Err(e) => err!(e.to_string()),
}
} else if !grantee_user.password_hash.is_empty() {
// accept the invitation for an existing user
emergency_access.accept_invite(&grantee_user.uuid, &email, &mut conn).await?;
} else if CONFIG.invitations_allowed() && Invitation::find_by_mail(&email, &mut conn).await.is_none() {
let invitation = Invitation::new(&email);
invitation.save(&mut conn).await?;
}
Ok(())
@ -312,7 +326,7 @@ struct AcceptData {
#[post("/emergency-access/<emer_id>/accept", data = "<data>")]
async fn accept_invite(emer_id: &str, data: JsonUpcase<AcceptData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
check_emergency_access_allowed()?;
check_emergency_access_enabled()?;
let data: AcceptData = data.into_inner().data;
let token = &data.Token;
@ -347,10 +361,7 @@ async fn accept_invite(emer_id: &str, data: JsonUpcase<AcceptData>, headers: Hea
&& grantor_user.name == claims.grantor_name
&& grantor_user.email == claims.grantor_email
{
match accept_invite_process(&grantee_user.uuid, &mut emergency_access, &grantee_user.email, &mut conn).await {
Ok(v) => v,
Err(e) => err!(e.to_string()),
}
emergency_access.accept_invite(&grantee_user.uuid, &grantee_user.email, &mut conn).await?;
if CONFIG.mail_enabled() {
mail::send_emergency_access_invite_accepted(&grantor_user.email, &grantee_user.email).await?;
@ -362,26 +373,6 @@ async fn accept_invite(emer_id: &str, data: JsonUpcase<AcceptData>, headers: Hea
}
}
async fn accept_invite_process(
grantee_uuid: &str,
emergency_access: &mut EmergencyAccess,
grantee_email: &str,
conn: &mut DbConn,
) -> EmptyResult {
if emergency_access.email.is_none() || emergency_access.email.as_ref().unwrap() != grantee_email {
err!("User email does not match invite.");
}
if emergency_access.status == EmergencyAccessStatus::Accepted as i32 {
err!("Emergency contact already accepted.");
}
emergency_access.status = EmergencyAccessStatus::Accepted as i32;
emergency_access.grantee_uuid = Some(String::from(grantee_uuid));
emergency_access.email = None;
emergency_access.save(conn).await
}
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct ConfirmData {
@ -395,7 +386,7 @@ async fn confirm_emergency_access(
headers: Headers,
mut conn: DbConn,
) -> JsonResult {
check_emergency_access_allowed()?;
check_emergency_access_enabled()?;
let confirming_user = headers.user;
let data: ConfirmData = data.into_inner().data;
@ -444,7 +435,7 @@ async fn confirm_emergency_access(
#[post("/emergency-access/<emer_id>/initiate")]
async fn initiate_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
check_emergency_access_enabled()?;
let initiating_user = headers.user;
let mut emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
@ -484,7 +475,7 @@ async fn initiate_emergency_access(emer_id: &str, headers: Headers, mut conn: Db
#[post("/emergency-access/<emer_id>/approve")]
async fn approve_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
check_emergency_access_enabled()?;
let mut emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer,
@ -522,7 +513,7 @@ async fn approve_emergency_access(emer_id: &str, headers: Headers, mut conn: DbC
#[post("/emergency-access/<emer_id>/reject")]
async fn reject_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
check_emergency_access_enabled()?;
let mut emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer,
@ -565,7 +556,7 @@ async fn reject_emergency_access(emer_id: &str, headers: Headers, mut conn: DbCo
#[post("/emergency-access/<emer_id>/view")]
async fn view_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
check_emergency_access_enabled()?;
let emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer,
@ -602,7 +593,7 @@ async fn view_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn
#[post("/emergency-access/<emer_id>/takeover")]
async fn takeover_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
check_emergency_access_enabled()?;
let requesting_user = headers.user;
let emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
@ -645,7 +636,7 @@ async fn password_emergency_access(
headers: Headers,
mut conn: DbConn,
) -> EmptyResult {
check_emergency_access_allowed()?;
check_emergency_access_enabled()?;
let data: EmergencyAccessPasswordData = data.into_inner().data;
let new_master_password_hash = &data.NewMasterPasswordHash;
@ -722,9 +713,9 @@ fn is_valid_request(
&& emergency_access.atype == requested_access_type as i32
}
fn check_emergency_access_allowed() -> EmptyResult {
fn check_emergency_access_enabled() -> EmptyResult {
if !CONFIG.emergency_access_allowed() {
err!("Emergency access is not allowed.")
err!("Emergency access is not enabled.")
}
Ok(())
}
@ -746,7 +737,7 @@ pub async fn emergency_request_timeout_job(pool: DbPool) {
for mut emer in emergency_access_list {
// The find_all_recoveries_initiated already checks if the recovery_initiated_at is not null (None)
let recovery_allowed_at =
emer.recovery_initiated_at.unwrap() + Duration::days(i64::from(emer.wait_time_days));
emer.recovery_initiated_at.unwrap() + TimeDelta::try_days(i64::from(emer.wait_time_days)).unwrap();
if recovery_allowed_at.le(&now) {
// Only update the access status
// Updating the whole record could cause issues when the emergency_notification_reminder_job is also active
@ -802,10 +793,10 @@ pub async fn emergency_notification_reminder_job(pool: DbPool) {
// The find_all_recoveries_initiated already checks if the recovery_initiated_at is not null (None)
// Calculate the day before the recovery will become active
let final_recovery_reminder_at =
emer.recovery_initiated_at.unwrap() + Duration::days(i64::from(emer.wait_time_days - 1));
emer.recovery_initiated_at.unwrap() + TimeDelta::try_days(i64::from(emer.wait_time_days - 1)).unwrap();
// Check whether a day has passed since the previous notification; if none was ever sent, a reminder is due immediately
let next_recovery_reminder_at = if let Some(last_notification_at) = emer.last_notification_at {
last_notification_at + Duration::days(1)
last_notification_at + TimeDelta::try_days(1).unwrap()
} else {
now
};
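`TimeDelta::try_days` is the fallible replacement for chrono's old panicking `Duration::days` constructor; it returns `None` on out-of-range input, which is why the call sites unwrap. In isolation:

use chrono::TimeDelta;

fn wait_period(wait_time_days: i32) -> TimeDelta {
    // Only absurd values overflow TimeDelta's range, so unwrap is acceptable here
    TimeDelta::try_days(i64::from(wait_time_days)).unwrap()
}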

View file

@ -125,7 +125,7 @@ async fn get_user_events(
})))
}
fn get_continuation_token(events_json: &Vec<Value>) -> Option<&str> {
fn get_continuation_token(events_json: &[Value]) -> Option<&str> {
// When the length of the slice equals the max page_size, there is probably more data
// When it is less, all events have been loaded.
if events_json.len() as i64 == Event::PAGE_SIZE {
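Taking `&[Value]` instead of `&Vec<Value>` is the usual slice-borrow idiom: the function needs no `Vec`-specific capability, and a `&Vec<Value>` argument coerces to `&[Value]` automatically at the call site. A tiny sketch:

fn last_id(events: &[serde_json::Value]) -> Option<&str> {
    events.last().and_then(|e| e["Id"].as_str())
}
// Callers keep working unchanged: last_id(&events_vec)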
@ -263,7 +263,7 @@ pub async fn log_event(
event_type: i32,
source_uuid: &str,
org_uuid: &str,
act_user_uuid: String,
act_user_uuid: &str,
device_type: i32,
ip: &IpAddr,
conn: &mut DbConn,
@ -271,7 +271,7 @@ pub async fn log_event(
if !CONFIG.org_events_enabled() {
return;
}
_log_event(event_type, source_uuid, org_uuid, &act_user_uuid, device_type, None, ip, conn).await;
_log_event(event_type, source_uuid, org_uuid, act_user_uuid, device_type, None, ip, conn).await;
}
#[allow(clippy::too_many_arguments)]
@ -289,7 +289,7 @@ async fn _log_event(
let mut event = Event::new(event_type, event_date);
match event_type {
// 1000..=1099 Are user events, they need to be logged via log_user_event()
// Collection Events
// Cipher Events
1100..=1199 => {
event.cipher_uuid = Some(String::from(source_uuid));
}

View file

@ -13,7 +13,6 @@ pub use ciphers::{purge_trashed_ciphers, CipherData, CipherSyncData, CipherSyncT
pub use emergency_access::{emergency_notification_reminder_job, emergency_request_timeout_job};
pub use events::{event_cleanup_job, log_event, log_user_event};
pub use sends::purge_sends;
pub use two_factor::send_incomplete_2fa_notifications;
pub fn routes() -> Vec<Route> {
let mut eq_domains_routes = routes![get_eq_domains, post_eq_domains, put_eq_domains];
@ -47,15 +46,14 @@ pub fn events_routes() -> Vec<Route> {
//
// Move this somewhere else
//
use rocket::{serde::json::Json, Catcher, Route};
use serde_json::Value;
use rocket::{serde::json::Json, serde::json::Value, Catcher, Route};
use crate::{
api::{JsonResult, JsonUpcase, Notify, UpdateType},
auth::Headers,
db::DbConn,
error::Error,
util::get_reqwest_client,
util::{get_reqwest_client, parse_experimental_client_feature_flags},
};
#[derive(Serialize, Deserialize, Debug)]
@ -193,17 +191,22 @@ fn version() -> Json<&'static str> {
#[get("/config")]
fn config() -> Json<Value> {
let domain = crate::CONFIG.domain();
let mut feature_states =
parse_experimental_client_feature_flags(&crate::CONFIG.experimental_client_feature_flags());
// Force the new key rotation feature
feature_states.insert("key-rotation-improvements".to_string(), true);
Json(json!({
// Note: The clients use this version to handle backwards compatibility concerns
// This means they expect a version that closely matches the Bitwarden server version
// We should make sure that we keep this updated when we support the new server features
// Version history:
// - Individual cipher key encryption: 2023.9.1
"version": "2023.9.1",
"version": "2024.2.0",
"gitHash": option_env!("GIT_REV"),
"server": {
"name": "Vaultwarden",
"url": "https://github.com/dani-garcia/vaultwarden"
"url": "https://github.com/dani-garcia/vaultwarden",
"version": crate::VERSION
},
"environment": {
"vault": domain,
@ -212,13 +215,7 @@ fn config() -> Json<Value> {
"notifications": format!("{domain}/notifications"),
"sso": "",
},
"featureStates": {
// Any feature flags that we want the clients to use
// Can check the enabled ones at:
// https://vault.bitwarden.com/api/config
"autofill-v2": true,
"fido2-vault-credentials": true
},
"featureStates": feature_states,
"object": "config",
}))
}
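A plausible reading of `parse_experimental_client_feature_flags` (the real helper sits in `util`; its exact behavior here is an assumption) is a comma-separated list turned into a map of enabled flags:

use std::collections::HashMap;

// Hypothetical reconstruction for illustration only.
fn parse_flags(raw: &str) -> HashMap<String, bool> {
    raw.split(',')
        .map(str::trim)
        .filter(|f| !f.is_empty())
        .map(|f| (f.to_string(), true))
        .collect()
}
// parse_flags("autofill-v2,fido2-vault-credentials") enables both flags;
// the endpoint then force-inserts "key-rotation-improvements" on top.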

View file

@ -5,14 +5,14 @@ use serde_json::Value;
use crate::{
api::{
core::{log_event, CipherSyncData, CipherSyncType},
EmptyResult, JsonResult, JsonUpcase, JsonUpcaseVec, JsonVec, Notify, NumberOrString, PasswordData, UpdateType,
core::{log_event, two_factor, CipherSyncData, CipherSyncType},
EmptyResult, JsonResult, JsonUpcase, JsonUpcaseVec, JsonVec, Notify, PasswordOrOtpData, UpdateType,
},
auth::{decode_invite, AdminHeaders, Headers, ManagerHeaders, ManagerHeadersLoose, OwnerHeaders},
db::{models::*, DbConn},
error::Error,
mail,
util::convert_json_key_lcase_first,
util::{convert_json_key_lcase_first, NumberOrString},
CONFIG,
};
@ -186,16 +186,13 @@ async fn create_organization(headers: Headers, data: JsonUpcase<OrgData>, mut co
#[delete("/organizations/<org_id>", data = "<data>")]
async fn delete_organization(
org_id: &str,
data: JsonUpcase<PasswordData>,
data: JsonUpcase<PasswordOrOtpData>,
headers: OwnerHeaders,
mut conn: DbConn,
) -> EmptyResult {
let data: PasswordData = data.into_inner().data;
let password_hash = data.MasterPasswordHash;
let data: PasswordOrOtpData = data.into_inner().data;
if !headers.user.check_valid_password(&password_hash) {
err!("Invalid password")
}
data.validate(&headers.user, true, &mut conn).await?;
match Organization::find_by_uuid(org_id, &mut conn).await {
None => err!("Organization not found"),
@ -206,7 +203,7 @@ async fn delete_organization(
#[post("/organizations/<org_id>/delete", data = "<data>")]
async fn post_delete_organization(
org_id: &str,
data: JsonUpcase<PasswordData>,
data: JsonUpcase<PasswordOrOtpData>,
headers: OwnerHeaders,
conn: DbConn,
) -> EmptyResult {
@ -228,7 +225,7 @@ async fn leave_organization(org_id: &str, headers: Headers, mut conn: DbConn) ->
EventType::OrganizationUserRemoved as i32,
&user_org.uuid,
org_id,
headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
&mut conn,
@ -281,7 +278,7 @@ async fn post_organization(
EventType::OrganizationUpdated as i32,
org_id,
org_id,
headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
&mut conn,
@ -296,7 +293,7 @@ async fn post_organization(
async fn get_user_collections(headers: Headers, mut conn: DbConn) -> Json<Value> {
Json(json!({
"Data":
Collection::find_by_user_uuid(headers.user.uuid.clone(), &mut conn).await
Collection::find_by_user_uuid(headers.user.uuid, &mut conn).await
.iter()
.map(Collection::to_json)
.collect::<Value>(),
@ -323,9 +320,29 @@ async fn get_org_collections_details(org_id: &str, headers: ManagerHeadersLoose,
None => err!("User is not part of organization"),
};
// get all collection memberships for the current organization
let coll_users = CollectionUser::find_by_organization(org_id, &mut conn).await;
// check if current user has full access to the organization (either directly or via any group)
let has_full_access_to_org = user_org.access_all
|| (CONFIG.org_groups_enabled()
&& GroupUser::has_full_access_by_member(org_id, &user_org.uuid, &mut conn).await);
for col in Collection::find_by_organization(org_id, &mut conn).await {
// check whether the current user has access to the given collection
let assigned = has_full_access_to_org
|| CollectionUser::has_access_to_collection_by_user(&col.uuid, &user_org.user_uuid, &mut conn).await
|| (CONFIG.org_groups_enabled()
&& GroupUser::has_access_to_collection_by_member(&col.uuid, &user_org.uuid, &mut conn).await);
// get the users assigned directly to the given collection
let users: Vec<Value> = coll_users
.iter()
.filter(|collection_user| collection_user.collection_uuid == col.uuid)
.map(|collection_user| SelectionReadOnly::to_collection_user_details_read_only(collection_user).to_json())
.collect();
// get the group details for the given collection
let groups: Vec<Value> = if CONFIG.org_groups_enabled() {
CollectionGroup::find_by_collection(&col.uuid, &mut conn)
.await
@ -335,29 +352,9 @@ async fn get_org_collections_details(org_id: &str, headers: ManagerHeadersLoose,
})
.collect()
} else {
// The Bitwarden clients seem to call this API regardless of whether groups are enabled,
// so just act as if there are no groups.
Vec::with_capacity(0)
};
let mut assigned = false;
let users: Vec<Value> = coll_users
.iter()
.filter(|collection_user| collection_user.collection_uuid == col.uuid)
.map(|collection_user| {
// Remember `user_uuid` is swapped here with the `user_org.uuid` with a join during the `CollectionUser::find_by_organization` call.
// We check here if the current user is assigned to this collection or not.
if collection_user.user_uuid == user_org.uuid {
assigned = true;
}
SelectionReadOnly::to_collection_user_details_read_only(collection_user).to_json()
})
.collect();
if user_org.access_all {
assigned = true;
}
let mut json_object = col.to_json();
json_object["Assigned"] = json!(assigned);
json_object["Users"] = json!(users);
@ -398,7 +395,7 @@ async fn post_organization_collections(
EventType::CollectionCreated as i32,
&collection.uuid,
org_id,
headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
&mut conn,
@ -479,7 +476,7 @@ async fn post_organization_collection_update(
EventType::CollectionUpdated as i32,
&collection.uuid,
org_id,
headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
&mut conn,
@ -567,7 +564,7 @@ async fn _delete_organization_collection(
EventType::CollectionDeleted as i32,
&collection.uuid,
org_id,
headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
conn,
@ -613,7 +610,6 @@ async fn post_organization_collection_delete(
#[allow(non_snake_case)]
struct BulkCollectionIds {
Ids: Vec<String>,
OrganizationId: String,
}
#[delete("/organizations/<org_id>/collections", data = "<data>")]
@ -624,9 +620,6 @@ async fn bulk_delete_organization_collections(
mut conn: DbConn,
) -> EmptyResult {
let data: BulkCollectionIds = data.into_inner().data;
if org_id != data.OrganizationId {
err!("OrganizationId mismatch");
}
let collections = data.Ids;
@ -671,24 +664,16 @@ async fn get_org_collection_detail(
Vec::with_capacity(0)
};
let mut assigned = false;
let users: Vec<Value> =
CollectionUser::find_by_collection_swap_user_uuid_with_org_user_uuid(&collection.uuid, &mut conn)
.await
.iter()
.map(|collection_user| {
// Remember `user_uuid` is swapped here with the `user_org.uuid` with a join during the `find_by_collection_swap_user_uuid_with_org_user_uuid` call.
// We check here if the current user is assigned to this collection or not.
if collection_user.user_uuid == user_org.uuid {
assigned = true;
}
SelectionReadOnly::to_collection_user_details_read_only(collection_user).to_json()
})
.collect();
if user_org.access_all {
assigned = true;
}
let assigned = Collection::can_access_collection(&user_org, &collection.uuid, &mut conn).await;
let mut json_object = collection.to_json();
json_object["Assigned"] = json!(assigned);
@ -948,7 +933,7 @@ async fn send_invite(
EventType::OrganizationUserInvited as i32,
&new_user.uuid,
org_id,
headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
&mut conn,
@ -1078,7 +1063,7 @@ async fn accept_invite(
let claims = decode_invite(&data.Token)?;
match User::find_by_mail(&claims.email, &mut conn).await {
Some(_) => {
Some(user) => {
Invitation::take(&claims.email, &mut conn).await;
if let (Some(user_org), Some(org)) = (&claims.user_org_id, &claims.org_id) {
@ -1102,7 +1087,11 @@ async fn accept_invite(
match OrgPolicy::is_user_allowed(&user_org.user_uuid, org_id, false, &mut conn).await {
Ok(_) => {}
Err(OrgPolicyErr::TwoFactorMissing) => {
err!("You cannot join this organization until you enable two-step login on your user account");
if CONFIG.email_2fa_auto_fallback() {
two_factor::email::activate_email_2fa(&user, &mut conn).await?;
} else {
err!("You cannot join this organization until you enable two-step login on your user account");
}
}
Err(OrgPolicyErr::SingleOrgEnforced) => {
err!("You cannot join this organization because you are a member of an organization which forbids it");
@ -1227,10 +1216,14 @@ async fn _confirm_invite(
match OrgPolicy::is_user_allowed(&user_to_confirm.user_uuid, org_id, true, conn).await {
Ok(_) => {}
Err(OrgPolicyErr::TwoFactorMissing) => {
err!("You cannot confirm this user because it has no two-step login method activated");
if CONFIG.email_2fa_auto_fallback() {
two_factor::email::find_and_activate_email_2fa(&user_to_confirm.user_uuid, conn).await?;
} else {
err!("You cannot confirm this user because they have not setup 2FA");
}
}
Err(OrgPolicyErr::SingleOrgEnforced) => {
err!("You cannot confirm this user because it is a member of an organization which forbids it");
err!("You cannot confirm this user because they are a member of an organization which forbids it");
}
}
}
@ -1242,7 +1235,7 @@ async fn _confirm_invite(
EventType::OrganizationUserConfirmed as i32,
&user_to_confirm.uuid,
org_id,
headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
conn,
@ -1358,10 +1351,14 @@ async fn edit_user(
match OrgPolicy::is_user_allowed(&user_to_edit.user_uuid, org_id, true, &mut conn).await {
Ok(_) => {}
Err(OrgPolicyErr::TwoFactorMissing) => {
err!("You cannot modify this user to this type because it has no two-step login method activated");
if CONFIG.email_2fa_auto_fallback() {
two_factor::email::find_and_activate_email_2fa(&user_to_edit.user_uuid, &mut conn).await?;
} else {
err!("You cannot modify this user to this type because they have not setup 2FA");
}
}
Err(OrgPolicyErr::SingleOrgEnforced) => {
err!("You cannot modify this user to this type because it is a member of an organization which forbids it");
err!("You cannot modify this user to this type because they are a member of an organization which forbids it");
}
}
}
@ -1404,7 +1401,7 @@ async fn edit_user(
EventType::OrganizationUserUpdated as i32,
&user_to_edit.uuid,
org_id,
headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
&mut conn,
@ -1496,7 +1493,7 @@ async fn _delete_user(
EventType::OrganizationUserRemoved as i32,
&user_to_delete.uuid,
org_id,
headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
conn,
@ -1605,7 +1602,7 @@ async fn post_org_import(
let mut ciphers = Vec::new();
for cipher_data in data.Ciphers {
let mut cipher = Cipher::new(cipher_data.Type, cipher_data.Name.clone());
update_cipher_from_data(&mut cipher, cipher_data, &headers, false, &mut conn, &nt, UpdateType::None).await.ok();
update_cipher_from_data(&mut cipher, cipher_data, &headers, None, &mut conn, &nt, UpdateType::None).await.ok();
ciphers.push(cipher);
}
@ -1699,38 +1696,16 @@ async fn put_policy(
None => err!("Invalid or unsupported policy type"),
};
// When enabling the TwoFactorAuthentication policy, remove this org's members that do not have 2FA
// When enabling the TwoFactorAuthentication policy, revoke all members that do not have 2FA
if pol_type_enum == OrgPolicyType::TwoFactorAuthentication && data.enabled {
for member in UserOrganization::find_by_org(org_id, &mut conn).await.into_iter() {
let user_twofactor_disabled = TwoFactor::find_by_user(&member.user_uuid, &mut conn).await.is_empty();
// Policy only applies to non-Owner/non-Admin members who have accepted joining the org
// Invited users still need to accept the invite and will get an error when they try to accept the invite.
if user_twofactor_disabled
&& member.atype < UserOrgType::Admin
&& member.status != UserOrgStatus::Invited as i32
{
if CONFIG.mail_enabled() {
let org = Organization::find_by_uuid(&member.org_uuid, &mut conn).await.unwrap();
let user = User::find_by_uuid(&member.user_uuid, &mut conn).await.unwrap();
mail::send_2fa_removed_from_org(&user.email, &org.name).await?;
}
log_event(
EventType::OrganizationUserRemoved as i32,
&member.uuid,
org_id,
headers.user.uuid.clone(),
headers.device.atype,
&headers.ip.ip,
&mut conn,
)
.await;
member.delete(&mut conn).await?;
}
}
two_factor::enforce_2fa_policy_for_org(
org_id,
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
&mut conn,
)
.await?;
}
// When enabling the SingleOrg policy, remove this org's members that are members of other orgs
@ -1755,7 +1730,7 @@ async fn put_policy(
EventType::OrganizationUserRemoved as i32,
&member.uuid,
org_id,
headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
&mut conn,
@ -1780,7 +1755,7 @@ async fn put_policy(
EventType::PolicyUpdated as i32,
&policy.uuid,
org_id,
headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
&mut conn,
@ -1897,7 +1872,7 @@ async fn import(org_id: &str, data: JsonUpcase<OrgImportData>, headers: Headers,
EventType::OrganizationUserRemoved as i32,
&user_org.uuid,
org_id,
headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
&mut conn,
@ -1927,7 +1902,7 @@ async fn import(org_id: &str, data: JsonUpcase<OrgImportData>, headers: Headers,
EventType::OrganizationUserInvited as i32,
&new_org_user.uuid,
org_id,
headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
&mut conn,
@ -1963,7 +1938,7 @@ async fn import(org_id: &str, data: JsonUpcase<OrgImportData>, headers: Headers,
EventType::OrganizationUserRemoved as i32,
&user_org.uuid,
org_id,
headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
&mut conn,
@ -2076,7 +2051,7 @@ async fn _revoke_organization_user(
EventType::OrganizationUserRevoked as i32,
&user_org.uuid,
org_id,
headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
conn,
@ -2180,10 +2155,14 @@ async fn _restore_organization_user(
match OrgPolicy::is_user_allowed(&user_org.user_uuid, org_id, false, conn).await {
Ok(_) => {}
Err(OrgPolicyErr::TwoFactorMissing) => {
err!("You cannot restore this user because it has no two-step login method activated");
if CONFIG.email_2fa_auto_fallback() {
two_factor::email::find_and_activate_email_2fa(&user_org.user_uuid, conn).await?;
} else {
err!("You cannot restore this user because they have not setup 2FA");
}
}
Err(OrgPolicyErr::SingleOrgEnforced) => {
err!("You cannot restore this user because it is a member of an organization which forbids it");
err!("You cannot restore this user because they are a member of an organization which forbids it");
}
}
}
@ -2195,7 +2174,7 @@ async fn _restore_organization_user(
EventType::OrganizationUserRestored as i32,
&user_org.uuid,
org_id,
headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
conn,
@ -2252,7 +2231,7 @@ impl GroupRequest {
}
pub fn update_group(&self, mut group: Group) -> Group {
group.name = self.Name.clone();
group.name.clone_from(&self.Name);
group.access_all = self.AccessAll.unwrap_or(false);
// Group Updates do not support changing the external_id
// These input fields are in a disabled state, and can only be updated/added via ldap_import
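`clone_from` is the clippy-suggested alternative to `group.name = self.Name.clone()`: it clones into the existing value and may reuse its allocation. Generically:

fn refresh<T: Clone>(dst: &mut T, src: &T) {
    // Semantically like `*dst = src.clone()`, but lets types such as
    // String reuse the buffer dst already owns.
    dst.clone_from(src);
}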
@ -2324,7 +2303,7 @@ async fn post_groups(
EventType::GroupCreated as i32,
&group.uuid,
org_id,
headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
&mut conn,
@ -2361,7 +2340,7 @@ async fn put_group(
EventType::GroupUpdated as i32,
&updated_group.uuid,
org_id,
headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
&mut conn,
@ -2394,7 +2373,7 @@ async fn add_update_group(
EventType::OrganizationUserUpdatedGroups as i32,
&assigned_user_id,
org_id,
headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
conn,
@ -2449,7 +2428,7 @@ async fn _delete_group(org_id: &str, group_id: &str, headers: &AdminHeaders, con
EventType::GroupDeleted as i32,
&group.uuid,
org_id,
headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
conn,
@ -2540,7 +2519,7 @@ async fn put_group_users(
EventType::OrganizationUserUpdatedGroups as i32,
&assigned_user_id,
org_id,
headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
&mut conn,
@ -2618,7 +2597,7 @@ async fn put_user_groups(
EventType::OrganizationUserUpdatedGroups as i32,
org_user_id,
org_id,
headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
&mut conn,
@ -2673,7 +2652,7 @@ async fn delete_group_user(
EventType::OrganizationUserUpdatedGroups as i32,
org_user_id,
org_id,
headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
&mut conn,
@ -2688,6 +2667,7 @@ async fn delete_group_user(
struct OrganizationUserResetPasswordEnrollmentRequest {
ResetPasswordKey: Option<String>,
MasterPasswordHash: Option<String>,
Otp: Option<String>,
}
#[derive(Deserialize)]
@ -2762,7 +2742,7 @@ async fn put_reset_password(
EventType::OrganizationUserAdminResetPassword as i32,
org_user_id,
org_id,
headers.user.uuid.clone(),
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
&mut conn,
@ -2870,14 +2850,12 @@ async fn put_reset_password_enrollment(
}
if reset_request.ResetPasswordKey.is_some() {
match reset_request.MasterPasswordHash {
Some(password) => {
if !headers.user.check_valid_password(&password) {
err!("Invalid or wrong password")
}
}
None => err!("No password provided"),
};
PasswordOrOtpData {
MasterPasswordHash: reset_request.MasterPasswordHash,
Otp: reset_request.Otp,
}
.validate(&headers.user, true, &mut conn)
.await?;
}
org_user.reset_password_key = reset_request.ResetPasswordKey;
@ -2889,8 +2867,7 @@ async fn put_reset_password_enrollment(
EventType::OrganizationUserResetPasswordWithdraw as i32
};
log_event(log_id, org_user_id, org_id, headers.user.uuid.clone(), headers.device.atype, &headers.ip.ip, &mut conn)
.await;
log_event(log_id, org_user_id, org_id, &headers.user.uuid, headers.device.atype, &headers.ip.ip, &mut conn).await;
Ok(())
}
@ -2945,18 +2922,16 @@ async fn get_org_export(org_id: &str, headers: AdminHeaders, mut conn: DbConn) -
async fn _api_key(
org_id: &str,
data: JsonUpcase<PasswordData>,
data: JsonUpcase<PasswordOrOtpData>,
rotate: bool,
headers: AdminHeaders,
conn: DbConn,
mut conn: DbConn,
) -> JsonResult {
let data: PasswordData = data.into_inner().data;
let data: PasswordOrOtpData = data.into_inner().data;
let user = headers.user;
// Validate the admin users password
if !user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password")
}
// Validate the admin users password/otp
data.validate(&user, true, &mut conn).await?;
let org_api_key = match OrganizationApiKey::find_by_org_uuid(org_id, &conn).await {
Some(mut org_api_key) => {
@@ -2983,14 +2958,14 @@ async fn _api_key(
}
#[post("/organizations/<org_id>/api-key", data = "<data>")]
async fn api_key(org_id: &str, data: JsonUpcase<PasswordData>, headers: AdminHeaders, conn: DbConn) -> JsonResult {
async fn api_key(org_id: &str, data: JsonUpcase<PasswordOrOtpData>, headers: AdminHeaders, conn: DbConn) -> JsonResult {
_api_key(org_id, data, false, headers, conn).await
}
#[post("/organizations/<org_id>/rotate-api-key", data = "<data>")]
async fn rotate_api_key(
org_id: &str,
data: JsonUpcase<PasswordData>,
data: JsonUpcase<PasswordOrOtpData>,
headers: AdminHeaders,
conn: DbConn,
) -> JsonResult {

View file

@@ -209,19 +209,15 @@ impl<'r> FromRequest<'r> for PublicToken {
Err(_) => err_handler!("Invalid claim"),
};
// Check if time is between claims.nbf and claims.exp
let time_now = Utc::now().naive_utc().timestamp();
let time_now = Utc::now().timestamp();
if time_now < claims.nbf {
err_handler!("Token issued in the future");
}
if time_now > claims.exp {
err_handler!("Token expired");
}
// Check if claims.iss is host|claims.scope[0]
let host = match auth::Host::from_request(request).await {
Outcome::Success(host) => host,
_ => err_handler!("Error getting Host"),
};
let complete_host = format!("{}|{}", host.host, claims.scope[0]);
// Check if claims.iss is domain|claims.scope[0]
let complete_host = format!("{}|{}", CONFIG.domain_origin(), claims.scope[0]);
if complete_host != claims.iss {
err_handler!("Token not issued by this server");
}
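The issuer is now derived from the configured domain origin instead of the request's Host header, which is what makes public-API tokens work for installations behind a path prefix. A worked example with illustrative values:

fn main() {
    // If DOMAIN is https://vw.example.com/vault, domain_origin() drops the
    // path prefix, so the expected issuer for the organization scope is:
    let domain_origin = "https://vw.example.com"; // stand-in for CONFIG.domain_origin()
    let complete_host = format!("{}|{}", domain_origin, "api.organization");
    assert_eq!(complete_host, "https://vw.example.com|api.organization");
}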

View file

@@ -1,6 +1,7 @@
use std::path::Path;
use chrono::{DateTime, Duration, Utc};
use chrono::{DateTime, TimeDelta, Utc};
use num_traits::ToPrimitive;
use rocket::form::Form;
use rocket::fs::NamedFile;
use rocket::fs::TempFile;
@@ -8,17 +9,17 @@ use rocket::serde::json::Json;
use serde_json::Value;
use crate::{
api::{ApiResult, EmptyResult, JsonResult, JsonUpcase, Notify, NumberOrString, UpdateType},
api::{ApiResult, EmptyResult, JsonResult, JsonUpcase, Notify, UpdateType},
auth::{ClientIp, Headers, Host},
db::{models::*, DbConn, DbPool},
util::SafeString,
util::{NumberOrString, SafeString},
CONFIG,
};
const SEND_INACCESSIBLE_MSG: &str = "Send does not exist or is no longer available";
// The max file size allowed by Bitwarden clients, plus an extra 5% to avoid issues
const SIZE_525_MB: u64 = 550_502_400;
const SIZE_525_MB: i64 = 550_502_400;
pub fn routes() -> Vec<rocket::Route> {
routes![
@@ -48,7 +49,7 @@ pub async fn purge_sends(pool: DbPool) {
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct SendData {
pub struct SendData {
Type: i32,
Key: String,
Password: Option<String>,
@@ -64,6 +65,9 @@ struct SendData {
Text: Option<Value>,
File: Option<Value>,
FileLength: Option<NumberOrString>,
// Used for key rotations
pub Id: Option<String>,
}
/// Enforces the `Disable Send` policy. A non-owner/admin user belonging to
@@ -118,7 +122,7 @@ fn create_send(data: SendData, user_uuid: String) -> ApiResult<Send> {
err!("Send data not provided");
};
if data.DeletionDate > Utc::now() + Duration::days(31) {
if data.DeletionDate > Utc::now() + TimeDelta::try_days(31).unwrap() {
err!(
"You cannot have a Send with a deletion date that far into the future. Adjust the Deletion Date to a value less than 31 days from now and try again."
);
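chrono 0.4.35 deprecated the panicking Duration constructors; TimeDelta::try_days returns an Option, hence the unwrap in the bound check above. A small standalone illustration:

use chrono::{TimeDelta, Utc};

fn main() {
    // In range: 31 days always fits in a TimeDelta.
    let month = TimeDelta::try_days(31).expect("31 days is in range");
    println!("latest allowed deletion date: {}", Utc::now() + month);
    // Out of range: the fallible constructor returns None instead of panicking.
    assert!(TimeDelta::try_days(i64::MAX).is_none());
}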
@@ -216,30 +220,41 @@ async fn post_send_file(data: Form<UploadData<'_>>, headers: Headers, mut conn:
} = data.into_inner();
let model = model.into_inner().data;
let Some(size) = data.len().to_i64() else {
err!("Invalid send size");
};
if size < 0 {
err!("Send size can't be negative")
}
enforce_disable_hide_email_policy(&model, &headers, &mut conn).await?;
let size_limit = match CONFIG.user_attachment_limit() {
let size_limit = match CONFIG.user_send_limit() {
Some(0) => err!("File uploads are disabled"),
Some(limit_kb) => {
let left = (limit_kb * 1024) - Attachment::size_by_user(&headers.user.uuid, &mut conn).await;
let Some(already_used) = Send::size_by_user(&headers.user.uuid, &mut conn).await else {
err!("Existing sends overflow")
};
let Some(left) = limit_kb.checked_mul(1024).and_then(|l| l.checked_sub(already_used)) else {
err!("Send size overflow");
};
if left <= 0 {
err!("Attachment storage limit reached! Delete some attachments to free up space")
err!("Send storage limit reached! Delete some sends to free up space")
}
std::cmp::Ord::max(left as u64, SIZE_525_MB)
i64::clamp(left, 0, SIZE_525_MB)
}
None => SIZE_525_MB,
};
if size > size_limit {
err!("Send storage limit exceeded with this file");
}
let mut send = create_send(model, headers.user.uuid)?;
if send.atype != SendType::File as i32 {
err!("Send content is not a file");
}
let size = data.len();
if size > size_limit {
err!("Attachment storage limit exceeded with this file");
}
let file_id = crate::crypto::generate_send_id();
let folder_path = tokio::fs::canonicalize(&CONFIG.sends_folder()).await?.join(&send.uuid);
let file_path = folder_path.join(&file_id);
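The quota computation above changes in two ways: limit_kb * 1024 and the subtraction are now overflow-checked, and the old std::cmp::Ord::max(left as u64, SIZE_525_MB), which could never lower the cap below 525 MB, becomes a clamp. The same arithmetic in isolation:

const SIZE_525_MB: i64 = 550_502_400;

/// Remaining send quota in bytes; None signals an overflow (treated as an error above).
fn remaining_quota(limit_kb: i64, already_used: i64) -> Option<i64> {
    let left = limit_kb.checked_mul(1024)?.checked_sub(already_used)?;
    Some(left.clamp(0, SIZE_525_MB))
}

fn main() {
    assert_eq!(remaining_quota(1024, 512 * 1024), Some(524_288)); // half the 1 MiB quota left
    assert_eq!(remaining_quota(i64::MAX, 0), None);               // multiplication would overflow
}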
@@ -253,7 +268,7 @@ async fn post_send_file(data: Form<UploadData<'_>>, headers: Headers, mut conn:
if let Some(o) = data_value.as_object_mut() {
o.insert(String::from("Id"), Value::String(file_id));
o.insert(String::from("Size"), Value::Number(size.into()));
o.insert(String::from("SizeName"), Value::String(crate::util::get_display_size(size as i32)));
o.insert(String::from("SizeName"), Value::String(crate::util::get_display_size(size)));
}
send.data = serde_json::to_string(&data_value)?;
@@ -285,24 +300,32 @@ async fn post_send_file_v2(data: JsonUpcase<SendData>, headers: Headers, mut con
enforce_disable_hide_email_policy(&data, &headers, &mut conn).await?;
let file_length = match &data.FileLength {
Some(m) => Some(m.into_i32()?),
_ => None,
Some(m) => m.into_i64()?,
_ => err!("Invalid send length"),
};
if file_length < 0 {
err!("Send size can't be negative")
}
let size_limit = match CONFIG.user_attachment_limit() {
let size_limit = match CONFIG.user_send_limit() {
Some(0) => err!("File uploads are disabled"),
Some(limit_kb) => {
let left = (limit_kb * 1024) - Attachment::size_by_user(&headers.user.uuid, &mut conn).await;
let Some(already_used) = Send::size_by_user(&headers.user.uuid, &mut conn).await else {
err!("Existing sends overflow")
};
let Some(left) = limit_kb.checked_mul(1024).and_then(|l| l.checked_sub(already_used)) else {
err!("Send size overflow");
};
if left <= 0 {
err!("Attachment storage limit reached! Delete some attachments to free up space")
err!("Send storage limit reached! Delete some sends to free up space")
}
std::cmp::Ord::max(left as u64, SIZE_525_MB)
i64::clamp(left, 0, SIZE_525_MB)
}
None => SIZE_525_MB,
};
if file_length.is_some() && file_length.unwrap() as u64 > size_limit {
err!("Attachment storage limit exceeded with this file");
if file_length > size_limit {
err!("Send storage limit exceeded with this file");
}
let mut send = create_send(data, headers.user.uuid)?;
@@ -312,8 +335,8 @@ async fn post_send_file_v2(data: JsonUpcase<SendData>, headers: Headers, mut con
let mut data_value: Value = serde_json::from_str(&send.data)?;
if let Some(o) = data_value.as_object_mut() {
o.insert(String::from("Id"), Value::String(file_id.clone()));
o.insert(String::from("Size"), Value::Number(file_length.unwrap().into()));
o.insert(String::from("SizeName"), Value::String(crate::util::get_display_size(file_length.unwrap())));
o.insert(String::from("Size"), Value::Number(file_length.into()));
o.insert(String::from("SizeName"), Value::String(crate::util::get_display_size(file_length)));
}
send.data = serde_json::to_string(&data_value)?;
send.save(&mut conn).await?;
@@ -529,6 +552,19 @@ async fn put_send(
None => err!("Send not found"),
};
update_send_from_data(&mut send, data, &headers, &mut conn, &nt, UpdateType::SyncSendUpdate).await?;
Ok(Json(send.to_json()))
}
pub async fn update_send_from_data(
send: &mut Send,
data: SendData,
headers: &Headers,
conn: &mut DbConn,
nt: &Notify<'_>,
ut: UpdateType,
) -> EmptyResult {
if send.user_uuid.as_ref() != Some(&headers.user.uuid) {
err!("Send is not owned by user")
}
@@ -537,6 +573,12 @@ async fn put_send(
err!("Sends can't change type")
}
if data.DeletionDate > Utc::now() + TimeDelta::try_days(31).unwrap() {
err!(
"You cannot have a Send with a deletion date that far into the future. Adjust the Deletion Date to a value less than 31 days from now and try again."
);
}
// When updating a file Send, we receive nulls in the File field, as it's immutable,
// so we only need to update the data field in the Text case
if data.Type == SendType::Text as i32 {
@@ -549,11 +591,6 @@ async fn put_send(
send.data = data_str;
}
if data.DeletionDate > Utc::now() + Duration::days(31) {
err!(
"You cannot have a Send with a deletion date that far into the future. Adjust the Deletion Date to a value less than 31 days from now and try again."
);
}
send.name = data.Name;
send.akey = data.Key;
send.deletion_date = data.DeletionDate.naive_utc();
@@ -571,17 +608,11 @@ async fn put_send(
send.set_password(Some(&password));
}
send.save(&mut conn).await?;
nt.send_send_update(
UpdateType::SyncSendUpdate,
&send,
&send.update_users_revision(&mut conn).await,
&headers.device.uuid,
&mut conn,
)
.await;
Ok(Json(send.to_json()))
send.save(conn).await?;
if ut != UpdateType::None {
nt.send_send_update(ut, send, &send.update_users_revision(conn).await, &headers.device.uuid, conn).await;
}
Ok(())
}
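Extracting update_send_from_data out of put_send lets the key-rotation path reuse the same validation while passing UpdateType::None to suppress the websocket notification; the new SendData::Id field identifies which send is being rotated. A hedged sketch of such a second call site (the rotation handler and Send::find_by_uuid are assumptions, not part of this diff):

// Assumption: during key rotation each re-encrypted send arrives with `Id` set.
async fn rotate_send(data: SendData, headers: &Headers, conn: &mut DbConn, nt: &Notify<'_>) -> EmptyResult {
    let Some(id) = data.Id.as_deref() else {
        err!("Send id missing during key rotation");
    };
    let Some(mut send) = Send::find_by_uuid(id, conn).await else {
        err!("Send not found");
    };
    // UpdateType::None: save the rotated key material without emitting a per-send update.
    update_send_from_data(&mut send, data, headers, conn, nt, UpdateType::None).await
}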
#[delete("/sends/<id>")]

View file

@@ -5,7 +5,7 @@ use rocket::Route;
use crate::{
api::{
core::log_user_event, core::two_factor::_generate_recover_code, EmptyResult, JsonResult, JsonUpcase,
NumberOrString, PasswordData,
PasswordOrOtpData,
},
auth::{ClientIp, Headers},
crypto,
@@ -13,6 +13,7 @@ use crate::{
models::{EventType, TwoFactor, TwoFactorType},
DbConn,
},
util::NumberOrString,
};
pub use crate::config::CONFIG;
@@ -22,13 +23,11 @@
}
#[post("/two-factor/get-authenticator", data = "<data>")]
async fn generate_authenticator(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: PasswordData = data.into_inner().data;
async fn generate_authenticator(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: PasswordOrOtpData = data.into_inner().data;
let user = headers.user;
if !user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password");
}
data.validate(&user, false, &mut conn).await?;
let type_ = TwoFactorType::Authenticator as i32;
let twofactor = TwoFactor::find_by_user_and_type(&user.uuid, type_, &mut conn).await;
@@ -48,9 +47,10 @@ async fn generate_authenticator(data: JsonUpcase<PasswordData>, headers: Headers
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct EnableAuthenticatorData {
MasterPasswordHash: String,
Key: String,
Token: NumberOrString,
MasterPasswordHash: Option<String>,
Otp: Option<String>,
}
#[post("/two-factor/authenticator", data = "<data>")]
@@ -60,15 +60,17 @@ async fn activate_authenticator(
mut conn: DbConn,
) -> JsonResult {
let data: EnableAuthenticatorData = data.into_inner().data;
let password_hash = data.MasterPasswordHash;
let key = data.Key;
let token = data.Token.into_string();
let mut user = headers.user;
if !user.check_valid_password(&password_hash) {
err!("Invalid password");
PasswordOrOtpData {
MasterPasswordHash: data.MasterPasswordHash,
Otp: data.Otp,
}
.validate(&user, true, &mut conn)
.await?;
// Validate key as base32 and 20 bytes length
let decoded_key: Vec<u8> = match BASE32.decode(key.as_bytes()) {
@@ -154,8 +156,8 @@ pub async fn validate_totp_code(
let time = (current_timestamp + step * 30i64) as u64;
let generated = totp_custom::<Sha1>(30, 6, &decoded_secret, time);
// Check the the given code equals the generated and if the time_step is larger then the one last used.
if generated == totp_code && time_step > i64::from(twofactor.last_used) {
// Check the given code equals the generated and if the time_step is larger than the one last used.
if generated == totp_code && time_step > twofactor.last_used {
// If the step does not equal 0 the time is drifted either server or client side.
if step != 0 {
warn!("TOTP Time drift detected. The step offset is {}", step);
@@ -163,10 +165,10 @@ pub async fn validate_totp_code(
// Save the last used time step so only totp time steps higher than this one are allowed.
// This will also save a newly created twofactor if the code is correct.
twofactor.last_used = time_step as i32;
twofactor.last_used = time_step;
twofactor.save(conn).await?;
return Ok(());
} else if generated == totp_code && time_step <= i64::from(twofactor.last_used) {
} else if generated == totp_code && time_step <= twofactor.last_used {
warn!("This TOTP or a TOTP code within {} steps back or forward has already been used!", steps);
err!(
format!("Invalid TOTP code! Server time: {} IP: {}", current_time.format("%F %T UTC"), ip.ip),
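For context around these hunks: validate_totp_code walks a small window of 30-second steps and accepts a code only when its step is newer than last_used, which is now compared as i64 directly instead of widening an i32 column. The windowing pattern as a standalone sketch, using the same totp-lite call that appears above:

use totp_lite::{totp_custom, Sha1};

/// Returns the matching time step if `code` is valid within ±1 step of `now` (unix seconds).
fn match_totp_step(secret: &[u8], code: &str, now: i64) -> Option<i64> {
    let current_step = now / 30;
    for offset in -1i64..=1 {
        let step = current_step + offset;
        if totp_custom::<Sha1>(30, 6, secret, (step * 30) as u64) == code {
            return Some(step); // the caller must still require step > last_used
        }
    }
    None
}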

View file

@@ -6,7 +6,7 @@ use rocket::Route;
use crate::{
api::{
core::log_user_event, core::two_factor::_generate_recover_code, ApiResult, EmptyResult, JsonResult, JsonUpcase,
PasswordData,
PasswordOrOtpData,
},
auth::Headers,
crypto,
@@ -92,14 +92,13 @@ impl DuoStatus {
const DISABLED_MESSAGE_DEFAULT: &str = "<To use the global Duo keys, please leave these fields untouched>";
#[post("/two-factor/get-duo", data = "<data>")]
async fn get_duo(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: PasswordData = data.into_inner().data;
async fn get_duo(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: PasswordOrOtpData = data.into_inner().data;
let user = headers.user;
if !headers.user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password");
}
data.validate(&user, false, &mut conn).await?;
let data = get_user_duo_data(&headers.user.uuid, &mut conn).await;
let data = get_user_duo_data(&user.uuid, &mut conn).await;
let (enabled, data) = match data {
DuoStatus::Global(_) => (true, Some(DuoData::secret())),
@@ -129,10 +128,11 @@ async fn get_duo(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbC
#[derive(Deserialize)]
#[allow(non_snake_case, dead_code)]
struct EnableDuoData {
MasterPasswordHash: String,
Host: String,
SecretKey: String,
IntegrationKey: String,
MasterPasswordHash: Option<String>,
Otp: Option<String>,
}
impl From<EnableDuoData> for DuoData {
@@ -159,9 +159,12 @@ async fn activate_duo(data: JsonUpcase<EnableDuoData>, headers: Headers, mut con
let data: EnableDuoData = data.into_inner().data;
let mut user = headers.user;
if !user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password");
PasswordOrOtpData {
MasterPasswordHash: data.MasterPasswordHash.clone(),
Otp: data.Otp.clone(),
}
.validate(&user, true, &mut conn)
.await?;
let (data, data_str) = if check_duo_fields_custom(&data) {
let data_req: DuoData = data.into();

View file

@@ -1,16 +1,16 @@
use chrono::{Duration, NaiveDateTime, Utc};
use chrono::{DateTime, TimeDelta, Utc};
use rocket::serde::json::Json;
use rocket::Route;
use crate::{
api::{
core::{log_user_event, two_factor::_generate_recover_code},
EmptyResult, JsonResult, JsonUpcase, PasswordData,
EmptyResult, JsonResult, JsonUpcase, PasswordOrOtpData,
},
auth::Headers,
crypto,
db::{
models::{EventType, TwoFactor, TwoFactorType},
models::{EventType, TwoFactor, TwoFactorType, User},
DbConn,
},
error::{Error, MapResult},
@@ -76,13 +76,11 @@ pub async fn send_token(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
/// When the user clicks on Manage email 2FA, show the user the related information
#[post("/two-factor/get-email", data = "<data>")]
async fn get_email(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: PasswordData = data.into_inner().data;
async fn get_email(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: PasswordOrOtpData = data.into_inner().data;
let user = headers.user;
if !user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password");
}
data.validate(&user, false, &mut conn).await?;
let (enabled, mfa_email) =
match TwoFactor::find_by_user_and_type(&user.uuid, TwoFactorType::Email as i32, &mut conn).await {
@@ -105,7 +103,8 @@ async fn get_email(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: D
struct SendEmailData {
/// Email where 2FA codes will be sent to, can be different than user email account.
Email: String,
MasterPasswordHash: String,
MasterPasswordHash: Option<String>,
Otp: Option<String>,
}
/// Send a verification email to the specified email address to check whether it exists/belongs to user.
@@ -114,9 +113,12 @@ async fn send_email(data: JsonUpcase<SendEmailData>, headers: Headers, mut conn:
let data: SendEmailData = data.into_inner().data;
let user = headers.user;
if !user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password");
PasswordOrOtpData {
MasterPasswordHash: data.MasterPasswordHash,
Otp: data.Otp,
}
.validate(&user, false, &mut conn)
.await?;
if !CONFIG._enable_email_2fa() {
err!("Email 2FA is disabled")
@@ -144,8 +146,9 @@ async fn send_email(data: JsonUpcase<SendEmailData>, headers: Headers, mut conn:
#[allow(non_snake_case)]
struct EmailData {
Email: String,
MasterPasswordHash: String,
Token: String,
MasterPasswordHash: Option<String>,
Otp: Option<String>,
}
/// Verify email belongs to user and can be used for 2FA email codes.
@@ -154,9 +157,13 @@ async fn email(data: JsonUpcase<EmailData>, headers: Headers, mut conn: DbConn)
let data: EmailData = data.into_inner().data;
let mut user = headers.user;
if !user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password");
// This is the last step in the verification process, delete the otp directly afterwards
PasswordOrOtpData {
MasterPasswordHash: data.MasterPasswordHash,
Otp: data.Otp,
}
.validate(&user, true, &mut conn)
.await?;
let type_ = TwoFactorType::EmailVerificationChallenge as i32;
let mut twofactor =
@@ -225,9 +232,9 @@ pub async fn validate_email_code_str(user_uuid: &str, token: &str, data: &str, c
twofactor.data = email_data.to_json();
twofactor.save(conn).await?;
let date = NaiveDateTime::from_timestamp_opt(email_data.token_sent, 0).expect("Email token timestamp invalid.");
let date = DateTime::from_timestamp(email_data.token_sent, 0).expect("Email token timestamp invalid.").naive_utc();
let max_time = CONFIG.email_expiration_time() as i64;
if date + Duration::seconds(max_time) < Utc::now().naive_utc() {
if date + TimeDelta::try_seconds(max_time).unwrap() < Utc::now().naive_utc() {
err!(
"Token has expired",
ErrorEvent {
@@ -258,14 +265,14 @@ impl EmailTokenData {
EmailTokenData {
email,
last_token: Some(token),
token_sent: Utc::now().naive_utc().timestamp(),
token_sent: Utc::now().timestamp(),
attempts: 0,
}
}
pub fn set_token(&mut self, token: String) {
self.last_token = Some(token);
self.token_sent = Utc::now().naive_utc().timestamp();
self.token_sent = Utc::now().timestamp();
}
pub fn reset_token(&mut self) {
@@ -290,6 +297,15 @@ impl EmailTokenData {
}
}
pub async fn activate_email_2fa(user: &User, conn: &mut DbConn) -> EmptyResult {
if user.verified_at.is_none() {
err!("Auto-enabling of email 2FA failed because the users email address has not been verified!");
}
let twofactor_data = EmailTokenData::new(user.email.clone(), String::new());
let twofactor = TwoFactor::new(user.uuid.clone(), TwoFactorType::Email, twofactor_data.to_json());
twofactor.save(conn).await
}
/// Takes an email address and obscures it by replacing all but two characters with asterisks.
pub fn obscure_email(email: &str) -> String {
let split: Vec<&str> = email.rsplitn(2, '@').collect();
@@ -311,6 +327,14 @@ pub fn obscure_email(email: &str) -> String {
format!("{}@{}", new_name, &domain)
}
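Going by the doc comment above (all but two characters replaced with asterisks, domain left intact), a test along these lines would fit the tests module below; the exact expected output is an inference, not taken from this diff:

#[test]
fn obscure_email_masks_local_part() {
    assert_eq!(obscure_email("john.doe@example.com"), "jo******@example.com");
}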
pub async fn find_and_activate_email_2fa(user_uuid: &str, conn: &mut DbConn) -> EmptyResult {
if let Some(user) = User::find_by_uuid(user_uuid, conn).await {
activate_email_2fa(&user, conn).await
} else {
err!("User not found!");
}
}
#[cfg(test)]
mod tests {
use super::*;

View file

@@ -1,20 +1,26 @@
use chrono::{Duration, Utc};
use chrono::{TimeDelta, Utc};
use data_encoding::BASE32;
use rocket::serde::json::Json;
use rocket::Route;
use serde_json::Value;
use crate::{
api::{core::log_user_event, JsonResult, JsonUpcase, NumberOrString, PasswordData},
api::{
core::{log_event, log_user_event},
EmptyResult, JsonResult, JsonUpcase, PasswordOrOtpData,
},
auth::{ClientHeaders, Headers},
crypto,
db::{models::*, DbConn, DbPool},
mail, CONFIG,
mail,
util::NumberOrString,
CONFIG,
};
pub mod authenticator;
pub mod duo;
pub mod email;
pub mod protected_actions;
pub mod webauthn;
pub mod yubikey;
@@ -33,6 +39,7 @@ pub fn routes() -> Vec<Route> {
routes.append(&mut email::routes());
routes.append(&mut webauthn::routes());
routes.append(&mut yubikey::routes());
routes.append(&mut protected_actions::routes());
routes
}
@@ -50,13 +57,11 @@ async fn get_twofactor(headers: Headers, mut conn: DbConn) -> Json<Value> {
}
#[post("/two-factor/get-recover", data = "<data>")]
fn get_recover(data: JsonUpcase<PasswordData>, headers: Headers) -> JsonResult {
let data: PasswordData = data.into_inner().data;
async fn get_recover(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: PasswordOrOtpData = data.into_inner().data;
let user = headers.user;
if !user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password");
}
data.validate(&user, true, &mut conn).await?;
Ok(Json(json!({
"Code": user.totp_recover,
@@ -96,6 +101,7 @@ async fn recover(data: JsonUpcase<RecoverTwoFactor>, client_headers: ClientHeade
// Remove all twofactors from the user
TwoFactor::delete_all_by_user(&user.uuid, &mut conn).await?;
enforce_2fa_policy(&user, &user.uuid, client_headers.device_type, &client_headers.ip.ip, &mut conn).await?;
log_user_event(
EventType::UserRecovered2fa as i32,
@@ -123,19 +129,23 @@ async fn _generate_recover_code(user: &mut User, conn: &mut DbConn) {
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct DisableTwoFactorData {
MasterPasswordHash: String,
MasterPasswordHash: Option<String>,
Otp: Option<String>,
Type: NumberOrString,
}
#[post("/two-factor/disable", data = "<data>")]
async fn disable_twofactor(data: JsonUpcase<DisableTwoFactorData>, headers: Headers, mut conn: DbConn) -> JsonResult {
let data: DisableTwoFactorData = data.into_inner().data;
let password_hash = data.MasterPasswordHash;
let user = headers.user;
if !user.check_valid_password(&password_hash) {
err!("Invalid password");
// Delete directly after a valid token has been provided
PasswordOrOtpData {
MasterPasswordHash: data.MasterPasswordHash,
Otp: data.Otp,
}
.validate(&user, true, &mut conn)
.await?;
let type_ = data.Type.into_i32()?;
@@ -145,22 +155,8 @@ async fn disable_twofactor(data: JsonUpcase<DisableTwoFactorData>, headers: Head
.await;
}
let twofactor_disabled = TwoFactor::find_by_user(&user.uuid, &mut conn).await.is_empty();
if twofactor_disabled {
for user_org in
UserOrganization::find_by_user_and_policy(&user.uuid, OrgPolicyType::TwoFactorAuthentication, &mut conn)
.await
.into_iter()
{
if user_org.atype < UserOrgType::Admin {
if CONFIG.mail_enabled() {
let org = Organization::find_by_uuid(&user_org.org_uuid, &mut conn).await.unwrap();
mail::send_2fa_removed_from_org(&user.email, &org.name).await?;
}
user_org.delete(&mut conn).await?;
}
}
if TwoFactor::find_by_user(&user.uuid, &mut conn).await.is_empty() {
enforce_2fa_policy(&user, &user.uuid, headers.device.atype, &headers.ip.ip, &mut conn).await?;
}
Ok(Json(json!({
@@ -175,6 +171,78 @@ async fn disable_twofactor_put(data: JsonUpcase<DisableTwoFactorData>, headers:
disable_twofactor(data, headers, conn).await
}
pub async fn enforce_2fa_policy(
user: &User,
act_uuid: &str,
device_type: i32,
ip: &std::net::IpAddr,
conn: &mut DbConn,
) -> EmptyResult {
for member in UserOrganization::find_by_user_and_policy(&user.uuid, OrgPolicyType::TwoFactorAuthentication, conn)
.await
.into_iter()
{
// Policy only applies to non-Owner/non-Admin members who have accepted joining the org
if member.atype < UserOrgType::Admin {
if CONFIG.mail_enabled() {
let org = Organization::find_by_uuid(&member.org_uuid, conn).await.unwrap();
mail::send_2fa_removed_from_org(&user.email, &org.name).await?;
}
let mut member = member;
member.revoke();
member.save(conn).await?;
log_event(
EventType::OrganizationUserRevoked as i32,
&member.uuid,
&member.org_uuid,
act_uuid,
device_type,
ip,
conn,
)
.await;
}
}
Ok(())
}
pub async fn enforce_2fa_policy_for_org(
org_uuid: &str,
act_uuid: &str,
device_type: i32,
ip: &std::net::IpAddr,
conn: &mut DbConn,
) -> EmptyResult {
let org = Organization::find_by_uuid(org_uuid, conn).await.unwrap();
for member in UserOrganization::find_confirmed_by_org(org_uuid, conn).await.into_iter() {
// Don't enforce the policy for Admins and Owners.
if member.atype < UserOrgType::Admin && TwoFactor::find_by_user(&member.user_uuid, conn).await.is_empty() {
if CONFIG.mail_enabled() {
let user = User::find_by_uuid(&member.user_uuid, conn).await.unwrap();
mail::send_2fa_removed_from_org(&user.email, &org.name).await?;
}
let mut member = member;
member.revoke();
member.save(conn).await?;
log_event(
EventType::OrganizationUserRevoked as i32,
&member.uuid,
org_uuid,
act_uuid,
device_type,
ip,
conn,
)
.await;
}
}
Ok(())
}
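enforce_2fa_policy and enforce_2fa_policy_for_org share the same revoke-notify-log tail. If that duplication grows, it could be hoisted into a helper along these lines, sketched under the assumption that the types match the two functions above:

async fn revoke_member_for_2fa_policy(
    mut member: UserOrganization,
    user: &User,
    org_name: &str,
    act_uuid: &str,
    device_type: i32,
    ip: &std::net::IpAddr,
    conn: &mut DbConn,
) -> EmptyResult {
    if CONFIG.mail_enabled() {
        mail::send_2fa_removed_from_org(&user.email, org_name).await?;
    }
    member.revoke();
    member.save(conn).await?;
    log_event(
        EventType::OrganizationUserRevoked as i32,
        &member.uuid,
        &member.org_uuid,
        act_uuid,
        device_type,
        ip,
        conn,
    )
    .await;
    Ok(())
}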
pub async fn send_incomplete_2fa_notifications(pool: DbPool) {
debug!("Sending notifications for incomplete 2FA logins");
@@ -191,7 +259,7 @@ pub async fn send_incomplete_2fa_notifications(pool: DbPool) {
};
let now = Utc::now().naive_utc();
let time_limit = Duration::minutes(CONFIG.incomplete_2fa_time_limit());
let time_limit = TimeDelta::try_minutes(CONFIG.incomplete_2fa_time_limit()).unwrap();
let time_before = now - time_limit;
let incomplete_logins = TwoFactorIncomplete::find_logins_before(&time_before, &mut conn).await;
for login in incomplete_logins {

View file

@@ -0,0 +1,142 @@
use chrono::{DateTime, TimeDelta, Utc};
use rocket::Route;
use crate::{
api::{EmptyResult, JsonUpcase},
auth::Headers,
crypto,
db::{
models::{TwoFactor, TwoFactorType},
DbConn,
},
error::{Error, MapResult},
mail, CONFIG,
};
pub fn routes() -> Vec<Route> {
routes![request_otp, verify_otp]
}
/// Data stored in the TwoFactor table in the db
#[derive(Serialize, Deserialize, Debug)]
pub struct ProtectedActionData {
/// Token issued to validate the protected action
pub token: String,
/// UNIX timestamp of token issue.
pub token_sent: i64,
/// The total number of attempts
pub attempts: u8,
}
impl ProtectedActionData {
pub fn new(token: String) -> Self {
Self {
token,
token_sent: Utc::now().timestamp(),
attempts: 0,
}
}
pub fn to_json(&self) -> String {
serde_json::to_string(&self).unwrap()
}
pub fn from_json(string: &str) -> Result<Self, Error> {
let res: Result<Self, crate::serde_json::Error> = serde_json::from_str(string);
match res {
Ok(x) => Ok(x),
Err(_) => err!("Could not decode ProtectedActionData from string"),
}
}
pub fn add_attempt(&mut self) {
self.attempts += 1;
}
}
#[post("/accounts/request-otp")]
async fn request_otp(headers: Headers, mut conn: DbConn) -> EmptyResult {
if !CONFIG.mail_enabled() {
err!("Email is disabled for this server. Either enable email or login using your master password instead of login via device.");
}
let user = headers.user;
// Only one Protected Action per user is allowed to take place, delete the previous one
if let Some(pa) =
TwoFactor::find_by_user_and_type(&user.uuid, TwoFactorType::ProtectedActions as i32, &mut conn).await
{
pa.delete(&mut conn).await?;
}
let generated_token = crypto::generate_email_token(CONFIG.email_token_size());
let pa_data = ProtectedActionData::new(generated_token);
// Uses EmailVerificationChallenge as type to show that it's not verified yet.
let twofactor = TwoFactor::new(user.uuid, TwoFactorType::ProtectedActions, pa_data.to_json());
twofactor.save(&mut conn).await?;
mail::send_protected_action_token(&user.email, &pa_data.token).await?;
Ok(())
}
#[derive(Deserialize, Serialize, Debug)]
#[allow(non_snake_case)]
struct ProtectedActionVerify {
OTP: String,
}
#[post("/accounts/verify-otp", data = "<data>")]
async fn verify_otp(data: JsonUpcase<ProtectedActionVerify>, headers: Headers, mut conn: DbConn) -> EmptyResult {
if !CONFIG.mail_enabled() {
err!("Email is disabled for this server. Either enable email or login using your master password instead of login via device.");
}
let user = headers.user;
let data: ProtectedActionVerify = data.into_inner().data;
// Delete the token after one validation attempt
// This endpoint only gets called for the vault export, and doesn't need a second attempt
validate_protected_action_otp(&data.OTP, &user.uuid, true, &mut conn).await
}
pub async fn validate_protected_action_otp(
otp: &str,
user_uuid: &str,
delete_if_valid: bool,
conn: &mut DbConn,
) -> EmptyResult {
let pa = TwoFactor::find_by_user_and_type(user_uuid, TwoFactorType::ProtectedActions as i32, conn)
.await
.map_res("Protected action token not found, try sending the code again or restart the process")?;
let mut pa_data = ProtectedActionData::from_json(&pa.data)?;
pa_data.add_attempt();
// Delete the token after x attempts if it has been used too many times
// We use 6, which should be more than enough for invalid attempts and multiple valid checks
if pa_data.attempts > 6 {
pa.delete(conn).await?;
err!("Token has expired")
}
// Check if the token has expired (Using the email 2fa expiration time)
let date =
DateTime::from_timestamp(pa_data.token_sent, 0).expect("Protected Action token timestamp invalid.").naive_utc();
let max_time = CONFIG.email_expiration_time() as i64;
if date + TimeDelta::try_seconds(max_time).unwrap() < Utc::now().naive_utc() {
pa.delete(conn).await?;
err!("Token has expired")
}
if !crypto::ct_eq(&pa_data.token, otp) {
pa.save(conn).await?;
err!("Token is invalid")
}
if delete_if_valid {
pa.delete(conn).await?;
}
Ok(())
}
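The protected-action token is persisted as a JSON blob in the twofactor table via to_json/from_json above. A self-contained round-trip of the same payload shape (serde derives assumed, as in the source):

use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug)]
struct ProtectedActionData {
    token: String,
    token_sent: i64,
    attempts: u8,
}

fn main() {
    let pa = ProtectedActionData { token: "abc123".into(), token_sent: 1_700_000_000, attempts: 0 };
    let json = serde_json::to_string(&pa).unwrap();
    // -> {"token":"abc123","token_sent":1700000000,"attempts":0}
    let back: ProtectedActionData = serde_json::from_str(&json).unwrap();
    assert_eq!(back.attempts, 0);
}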

View file

@@ -7,7 +7,7 @@ use webauthn_rs::{base64_data::Base64UrlSafeData, proto::*, AuthenticationState,
use crate::{
api::{
core::{log_user_event, two_factor::_generate_recover_code},
EmptyResult, JsonResult, JsonUpcase, NumberOrString, PasswordData,
EmptyResult, JsonResult, JsonUpcase, PasswordOrOtpData,
},
auth::Headers,
db::{
@@ -15,6 +15,7 @@ use crate::{
DbConn,
},
error::Error,
util::NumberOrString,
CONFIG,
};
@@ -103,16 +104,17 @@ impl WebauthnRegistration {
}
#[post("/two-factor/get-webauthn", data = "<data>")]
async fn get_webauthn(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbConn) -> JsonResult {
async fn get_webauthn(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, mut conn: DbConn) -> JsonResult {
if !CONFIG.domain_set() {
err!("`DOMAIN` environment variable is not set. Webauthn disabled")
}
if !headers.user.check_valid_password(&data.data.MasterPasswordHash) {
err!("Invalid password");
}
let data: PasswordOrOtpData = data.into_inner().data;
let user = headers.user;
let (enabled, registrations) = get_webauthn_registrations(&headers.user.uuid, &mut conn).await?;
data.validate(&user, false, &mut conn).await?;
let (enabled, registrations) = get_webauthn_registrations(&user.uuid, &mut conn).await?;
let registrations_json: Vec<Value> = registrations.iter().map(WebauthnRegistration::to_json).collect();
Ok(Json(json!({
@@ -123,12 +125,17 @@ async fn get_webauthn(data: JsonUpcase<PasswordData>, headers: Headers, mut conn
}
#[post("/two-factor/get-webauthn-challenge", data = "<data>")]
async fn generate_webauthn_challenge(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbConn) -> JsonResult {
if !headers.user.check_valid_password(&data.data.MasterPasswordHash) {
err!("Invalid password");
}
async fn generate_webauthn_challenge(
data: JsonUpcase<PasswordOrOtpData>,
headers: Headers,
mut conn: DbConn,
) -> JsonResult {
let data: PasswordOrOtpData = data.into_inner().data;
let user = headers.user;
let registrations = get_webauthn_registrations(&headers.user.uuid, &mut conn)
data.validate(&user, false, &mut conn).await?;
let registrations = get_webauthn_registrations(&user.uuid, &mut conn)
.await?
.1
.into_iter()
@@ -136,16 +143,16 @@ async fn generate_webauthn_challenge(data: JsonUpcase<PasswordData>, headers: He
.collect();
let (challenge, state) = WebauthnConfig::load().generate_challenge_register_options(
headers.user.uuid.as_bytes().to_vec(),
headers.user.email,
headers.user.name,
user.uuid.as_bytes().to_vec(),
user.email,
user.name,
Some(registrations),
None,
None,
)?;
let type_ = TwoFactorType::WebauthnRegisterChallenge;
TwoFactor::new(headers.user.uuid, type_, serde_json::to_string(&state)?).save(&mut conn).await?;
TwoFactor::new(user.uuid, type_, serde_json::to_string(&state)?).save(&mut conn).await?;
let mut challenge_value = serde_json::to_value(challenge.public_key)?;
challenge_value["status"] = "ok".into();
@@ -158,8 +165,9 @@ async fn generate_webauthn_challenge(data: JsonUpcase<PasswordData>, headers: He
struct EnableWebauthnData {
Id: NumberOrString, // 1..5
Name: String,
MasterPasswordHash: String,
DeviceResponse: RegisterPublicKeyCredentialCopy,
MasterPasswordHash: Option<String>,
Otp: Option<String>,
}
// This is copied from RegisterPublicKeyCredential to change the Response objects casing
@@ -246,9 +254,12 @@ async fn activate_webauthn(data: JsonUpcase<EnableWebauthnData>, headers: Header
let data: EnableWebauthnData = data.into_inner().data;
let mut user = headers.user;
if !user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password");
PasswordOrOtpData {
MasterPasswordHash: data.MasterPasswordHash,
Otp: data.Otp,
}
.validate(&user, true, &mut conn)
.await?;
// Retrieve and delete the saved challenge state
let type_ = TwoFactorType::WebauthnRegisterChallenge as i32;

View file

@@ -1,12 +1,12 @@
use rocket::serde::json::Json;
use rocket::Route;
use serde_json::Value;
use yubico::{config::Config, verify};
use yubico::{config::Config, verify_async};
use crate::{
api::{
core::{log_user_event, two_factor::_generate_recover_code},
EmptyResult, JsonResult, JsonUpcase, PasswordData,
EmptyResult, JsonResult, JsonUpcase, PasswordOrOtpData,
},
auth::Headers,
db::{
@@ -24,13 +24,14 @@ pub fn routes() -> Vec<Route> {
#[derive(Deserialize, Debug)]
#[allow(non_snake_case)]
struct EnableYubikeyData {
MasterPasswordHash: String,
Key1: Option<String>,
Key2: Option<String>,
Key3: Option<String>,
Key4: Option<String>,
Key5: Option<String>,
Nfc: bool,
MasterPasswordHash: Option<String>,
Otp: Option<String>,
}
#[derive(Deserialize, Serialize, Debug)]
@@ -73,26 +74,21 @@ async fn verify_yubikey_otp(otp: String) -> EmptyResult {
let config = Config::default().set_client_id(yubico_id).set_key(yubico_secret);
match CONFIG.yubico_server() {
Some(server) => {
tokio::task::spawn_blocking(move || verify(otp, config.set_api_hosts(vec![server]))).await.unwrap()
}
None => tokio::task::spawn_blocking(move || verify(otp, config)).await.unwrap(),
Some(server) => verify_async(otp, config.set_api_hosts(vec![server])).await,
None => verify_async(otp, config).await,
}
.map_res("Failed to verify OTP")
.and(Ok(()))
}
#[post("/two-factor/get-yubikey", data = "<data>")]
async fn generate_yubikey(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbConn) -> JsonResult {
async fn generate_yubikey(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, mut conn: DbConn) -> JsonResult {
// Make sure the credentials are set
get_yubico_credentials()?;
let data: PasswordData = data.into_inner().data;
let data: PasswordOrOtpData = data.into_inner().data;
let user = headers.user;
if !user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password");
}
data.validate(&user, false, &mut conn).await?;
let user_uuid = &user.uuid;
let yubikey_type = TwoFactorType::YubiKey as i32;
@@ -122,9 +118,12 @@ async fn activate_yubikey(data: JsonUpcase<EnableYubikeyData>, headers: Headers,
let data: EnableYubikeyData = data.into_inner().data;
let mut user = headers.user;
if !user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password");
PasswordOrOtpData {
MasterPasswordHash: data.MasterPasswordHash.clone(),
Otp: data.Otp.clone(),
}
.validate(&user, true, &mut conn)
.await?;
// Check if we already have some data
let mut yubikey_data =
@@ -192,10 +191,6 @@ pub async fn validate_yubikey_login(response: &str, twofactor_data: &str) -> Emp
err!("Given Yubikey is not registered");
}
let result = verify_yubikey_otp(response.to_owned()).await;
match result {
Ok(_answer) => Ok(()),
Err(_e) => err!("Failed to verify Yubikey against OTP server"),
}
verify_yubikey_otp(response.to_owned()).await.map_res("Failed to verify Yubikey against OTP server")?;
Ok(())
}

View file

@@ -1,6 +1,6 @@
use std::{
net::IpAddr,
sync::Arc,
sync::{Arc, Mutex},
time::{Duration, SystemTime},
};
@@ -16,14 +16,13 @@ use rocket::{http::ContentType, response::Redirect, Route};
use tokio::{
fs::{create_dir_all, remove_file, symlink_metadata, File},
io::{AsyncReadExt, AsyncWriteExt},
net::lookup_host,
};
use html5gum::{Emitter, HtmlString, InfallibleTokenizer, Readable, StringReader, Tokenizer};
use crate::{
error::Error,
util::{get_reqwest_client_builder, Cached},
util::{get_reqwest_client_builder, Cached, CustomDnsResolver, CustomResolverError},
CONFIG,
};
@@ -49,48 +48,32 @@ static CLIENT: Lazy<Client> = Lazy::new(|| {
let icon_download_timeout = Duration::from_secs(CONFIG.icon_download_timeout());
let pool_idle_timeout = Duration::from_secs(10);
// Reuse the client between requests
let client = get_reqwest_client_builder()
get_reqwest_client_builder()
.cookie_provider(Arc::clone(&cookie_store))
.timeout(icon_download_timeout)
.pool_max_idle_per_host(5) // Configure the Hyper Pool to only have max 5 idle connections
.pool_idle_timeout(pool_idle_timeout) // Configure the Hyper Pool to timeout after 10 seconds
.trust_dns(true)
.default_headers(default_headers.clone());
match client.build() {
Ok(client) => client,
Err(e) => {
error!("Possible trust-dns error, trying with trust-dns disabled: '{e}'");
get_reqwest_client_builder()
.cookie_provider(cookie_store)
.timeout(icon_download_timeout)
.pool_max_idle_per_host(5) // Configure the Hyper Pool to only have max 5 idle connections
.pool_idle_timeout(pool_idle_timeout) // Configure the Hyper Pool to timeout after 10 seconds
.trust_dns(false)
.default_headers(default_headers)
.build()
.expect("Failed to build client")
}
}
.dns_resolver(CustomDnsResolver::instance())
.default_headers(default_headers.clone())
.build()
.expect("Failed to build client")
});
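CustomDnsResolver plugs into reqwest's public dns_resolver hook, replacing the old trust-dns on/off fallback. A minimal sketch of a resolver implementing reqwest::dns::Resolve; the filtering rule here is a stand-in, since the real resolver lives in util.rs and surfaces blocked lookups as CustomResolverError:

use std::net::SocketAddr;
use reqwest::dns::{Addrs, Name, Resolve, Resolving};

pub struct FilteringResolver;

impl Resolve for FilteringResolver {
    fn resolve(&self, name: Name) -> Resolving {
        Box::pin(async move {
            // Resolve with the system resolver, then drop loopback addresses
            // (a stand-in for the non-global-IP check used for icons).
            let addrs = tokio::net::lookup_host((name.as_str(), 0)).await?;
            let filtered: Vec<SocketAddr> = addrs.filter(|a| !a.ip().is_loopback()).collect();
            Ok(Box::new(filtered.into_iter()) as Addrs)
        })
    }
}
// Wired up roughly as above: get_reqwest_client_builder().dns_resolver(Arc::new(FilteringResolver))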
// Build Regex only once since this takes a lot of time.
static ICON_SIZE_REGEX: Lazy<Regex> = Lazy::new(|| Regex::new(r"(?x)(\d+)\D*(\d+)").unwrap());
// Special HashMap which holds the user defined Regex to speedup matching the regex.
static ICON_BLACKLIST_REGEX: Lazy<dashmap::DashMap<String, Regex>> = Lazy::new(dashmap::DashMap::new);
async fn icon_redirect(domain: &str, template: &str) -> Option<Redirect> {
#[get("/<domain>/icon.png")]
fn icon_external(domain: &str) -> Option<Redirect> {
if !is_valid_domain(domain) {
warn!("Invalid domain: {}", domain);
return None;
}
if check_domain_blacklist_reason(domain).await.is_some() {
if is_domain_blacklisted(domain) {
return None;
}
let url = template.replace("{}", domain);
let url = CONFIG._icon_service_url().replace("{}", domain);
match CONFIG.icon_redirect_code() {
301 => Some(Redirect::moved(url)), // legacy permanent redirect
302 => Some(Redirect::found(url)), // legacy temporary redirect
@@ -103,11 +86,6 @@ async fn icon_redirect(domain: &str, template: &str) -> Option<Redirect> {
}
}
#[get("/<domain>/icon.png")]
async fn icon_external(domain: &str) -> Option<Redirect> {
icon_redirect(domain, &CONFIG._icon_service_url()).await
}
#[get("/<domain>/icon.png")]
async fn icon_internal(domain: &str) -> Cached<(ContentType, Vec<u8>)> {
const FALLBACK_ICON: &[u8] = include_bytes!("../static/images/fallback-icon.png");
@@ -166,153 +144,28 @@ fn is_valid_domain(domain: &str) -> bool {
true
}
/// TODO: This is extracted from IpAddr::is_global, which is unstable:
/// https://doc.rust-lang.org/nightly/std/net/enum.IpAddr.html#method.is_global
/// Remove once https://github.com/rust-lang/rust/issues/27709 is merged
#[allow(clippy::nonminimal_bool)]
#[cfg(not(feature = "unstable"))]
fn is_global(ip: IpAddr) -> bool {
match ip {
IpAddr::V4(ip) => {
// check if this address is 192.0.0.9 or 192.0.0.10. These addresses are the only two
// globally routable addresses in the 192.0.0.0/24 range.
if u32::from(ip) == 0xc0000009 || u32::from(ip) == 0xc000000a {
return true;
}
!ip.is_private()
&& !ip.is_loopback()
&& !ip.is_link_local()
&& !ip.is_broadcast()
&& !ip.is_documentation()
&& !(ip.octets()[0] == 100 && (ip.octets()[1] & 0b1100_0000 == 0b0100_0000))
&& !(ip.octets()[0] == 192 && ip.octets()[1] == 0 && ip.octets()[2] == 0)
&& !(ip.octets()[0] & 240 == 240 && !ip.is_broadcast())
&& !(ip.octets()[0] == 198 && (ip.octets()[1] & 0xfe) == 18)
// Make sure the address is not in 0.0.0.0/8
&& ip.octets()[0] != 0
}
IpAddr::V6(ip) => {
if ip.is_multicast() && ip.segments()[0] & 0x000f == 14 {
true
} else {
!ip.is_multicast()
&& !ip.is_loopback()
&& !((ip.segments()[0] & 0xffc0) == 0xfe80)
&& !((ip.segments()[0] & 0xfe00) == 0xfc00)
&& !ip.is_unspecified()
&& !((ip.segments()[0] == 0x2001) && (ip.segments()[1] == 0xdb8))
}
}
}
}
pub fn is_domain_blacklisted(domain: &str) -> bool {
let Some(config_blacklist) = CONFIG.icon_blacklist_regex() else {
return false;
};
#[cfg(feature = "unstable")]
fn is_global(ip: IpAddr) -> bool {
ip.is_global()
}
// Compiled domain blacklist
static COMPILED_BLACKLIST: Mutex<Option<(String, Regex)>> = Mutex::new(None);
let mut guard = COMPILED_BLACKLIST.lock().unwrap();
/// These are some tests to check that the implementations match
/// The IPv4 can be all checked in 5 mins or so and they are correct as of nightly 2020-07-11
/// The IPV6 can't be checked in a reasonable time, so we check about ten billion random ones, so far correct
/// Note that the is_global implementation is subject to change as new IP RFCs are created
///
/// To run while showing progress output:
/// cargo test --features sqlite,unstable -- --nocapture --ignored
#[cfg(test)]
#[cfg(feature = "unstable")]
mod tests {
use super::*;
#[test]
#[ignore]
fn test_ipv4_global() {
for a in 0..u8::MAX {
println!("Iter: {}/255", a);
for b in 0..u8::MAX {
for c in 0..u8::MAX {
for d in 0..u8::MAX {
let ip = IpAddr::V4(std::net::Ipv4Addr::new(a, b, c, d));
assert_eq!(ip.is_global(), is_global(ip))
}
}
}
// If the stored regex is up to date, use it
if let Some((value, regex)) = &*guard {
if value == &config_blacklist {
return regex.is_match(domain);
}
}
#[test]
#[ignore]
fn test_ipv6_global() {
use ring::rand::{SecureRandom, SystemRandom};
let mut v = [0u8; 16];
let rand = SystemRandom::new();
for i in 0..1_000 {
println!("Iter: {}/1_000", i);
for _ in 0..10_000_000 {
rand.fill(&mut v).expect("Error generating random values");
let ip = IpAddr::V6(std::net::Ipv6Addr::new(
(v[14] as u16) << 8 | v[15] as u16,
(v[12] as u16) << 8 | v[13] as u16,
(v[10] as u16) << 8 | v[11] as u16,
(v[8] as u16) << 8 | v[9] as u16,
(v[6] as u16) << 8 | v[7] as u16,
(v[4] as u16) << 8 | v[5] as u16,
(v[2] as u16) << 8 | v[3] as u16,
(v[0] as u16) << 8 | v[1] as u16,
));
assert_eq!(ip.is_global(), is_global(ip))
}
}
}
}
// If we don't have a regex stored, or it's not up to date, recreate it
let regex = Regex::new(&config_blacklist).unwrap();
let is_match = regex.is_match(domain);
*guard = Some((config_blacklist, regex));
#[derive(Clone)]
enum DomainBlacklistReason {
Regex,
IP,
}
use cached::proc_macro::cached;
#[cached(key = "String", convert = r#"{ domain.to_string() }"#, size = 16, time = 60)]
async fn check_domain_blacklist_reason(domain: &str) -> Option<DomainBlacklistReason> {
// First check the blacklist regex if there is a match.
// This prevents the blocked domain(s) from being leaked via a DNS lookup.
if let Some(blacklist) = CONFIG.icon_blacklist_regex() {
// Use the pre-generate Regex stored in a Lazy HashMap if there's one, else generate it.
let is_match = if let Some(regex) = ICON_BLACKLIST_REGEX.get(&blacklist) {
regex.is_match(domain)
} else {
// Clear the current list if the previous key doesn't exists.
// To prevent growing of the HashMap after someone has changed it via the admin interface.
if ICON_BLACKLIST_REGEX.len() >= 1 {
ICON_BLACKLIST_REGEX.clear();
}
// Generate the regex to store in too the Lazy Static HashMap.
let blacklist_regex = Regex::new(&blacklist).unwrap();
let is_match = blacklist_regex.is_match(domain);
ICON_BLACKLIST_REGEX.insert(blacklist.clone(), blacklist_regex);
is_match
};
if is_match {
debug!("Blacklisted domain: {} matched ICON_BLACKLIST_REGEX", domain);
return Some(DomainBlacklistReason::Regex);
}
}
if CONFIG.icon_blacklist_non_global_ips() {
if let Ok(s) = lookup_host((domain, 0)).await {
for addr in s {
if !is_global(addr.ip()) {
debug!("IP {} for domain '{}' is not a global IP!", addr.ip(), domain);
return Some(DomainBlacklistReason::IP);
}
}
}
}
None
is_match
}
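Compared with the old DashMap keyed by pattern, the Mutex holds at most one compiled regex and recompiles only when the admin changes ICON_BLACKLIST_REGEX. The same idea in isolation:

use std::sync::Mutex;
use regex::Regex;

static COMPILED: Mutex<Option<(String, Regex)>> = Mutex::new(None);

fn is_match_cached(pattern: &str, haystack: &str) -> bool {
    let mut guard = COMPILED.lock().unwrap();
    if let Some((stored, regex)) = &*guard {
        if stored == pattern {
            return regex.is_match(haystack); // cache hit: no recompilation
        }
    }
    let regex = Regex::new(pattern).unwrap(); // cache miss: compile once, store
    let hit = regex.is_match(haystack);
    *guard = Some((pattern.to_string(), regex));
    hit
}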
async fn get_icon(domain: &str) -> Option<(Vec<u8>, String)> {
@@ -342,6 +195,13 @@ async fn get_icon(domain: &str) -> Option<(Vec<u8>, String)> {
Some((icon.to_vec(), icon_type.unwrap_or("x-icon").to_string()))
}
Err(e) => {
// If this error comes from the custom resolver, this means this is a blacklisted domain
// or non global IP, don't save the miss file in this case to avoid leaking it
if let Some(error) = CustomResolverError::downcast_ref(&e) {
warn!("{error}");
return None;
}
warn!("Unable to download icon: {:?}", e);
let miss_indicator = path + ".miss";
save_icon(&miss_indicator, &[]).await;
@@ -491,42 +351,48 @@ async fn get_icon_url(domain: &str) -> Result<IconUrlResult, Error> {
let ssldomain = format!("https://{domain}");
let httpdomain = format!("http://{domain}");
// First check the domain as given during the request for both HTTPS and HTTP.
let resp = match get_page(&ssldomain).or_else(|_| get_page(&httpdomain)).await {
Ok(c) => Ok(c),
Err(e) => {
let mut sub_resp = Err(e);
// First check the domain as given during the request for HTTPS.
let resp = match get_page(&ssldomain).await {
Err(e) if CustomResolverError::downcast_ref(&e).is_none() => {
// If we get an error that is not caused by the blacklist, we retry with HTTP
match get_page(&httpdomain).await {
mut sub_resp @ Err(_) => {
// When the domain is not an IP, and has more than one dot, remove all subdomains.
let is_ip = domain.parse::<IpAddr>();
if is_ip.is_err() && domain.matches('.').count() > 1 {
let mut domain_parts = domain.split('.');
let base_domain = format!(
"{base}.{tld}",
tld = domain_parts.next_back().unwrap(),
base = domain_parts.next_back().unwrap()
);
if is_valid_domain(&base_domain) {
let sslbase = format!("https://{base_domain}");
let httpbase = format!("http://{base_domain}");
debug!("[get_icon_url]: Trying without subdomains '{base_domain}'");
// When the domain is not an IP, and has more than one dot, remove all subdomains.
let is_ip = domain.parse::<IpAddr>();
if is_ip.is_err() && domain.matches('.').count() > 1 {
let mut domain_parts = domain.split('.');
let base_domain = format!(
"{base}.{tld}",
tld = domain_parts.next_back().unwrap(),
base = domain_parts.next_back().unwrap()
);
if is_valid_domain(&base_domain) {
let sslbase = format!("https://{base_domain}");
let httpbase = format!("http://{base_domain}");
debug!("[get_icon_url]: Trying without subdomains '{base_domain}'");
sub_resp = get_page(&sslbase).or_else(|_| get_page(&httpbase)).await;
}
sub_resp = get_page(&sslbase).or_else(|_| get_page(&httpbase)).await;
}
// When the domain is not an IP, and has less than 2 dots, try to add www. in front of it.
} else if is_ip.is_err() && domain.matches('.').count() < 2 {
let www_domain = format!("www.{domain}");
if is_valid_domain(&www_domain) {
let sslwww = format!("https://{www_domain}");
let httpwww = format!("http://{www_domain}");
debug!("[get_icon_url]: Trying with www. prefix '{www_domain}'");
sub_resp = get_page(&sslwww).or_else(|_| get_page(&httpwww)).await;
// When the domain is not an IP, and has less than 2 dots, try to add www. in front of it.
} else if is_ip.is_err() && domain.matches('.').count() < 2 {
let www_domain = format!("www.{domain}");
if is_valid_domain(&www_domain) {
let sslwww = format!("https://{www_domain}");
let httpwww = format!("http://{www_domain}");
debug!("[get_icon_url]: Trying with www. prefix '{www_domain}'");
sub_resp = get_page(&sslwww).or_else(|_| get_page(&httpwww)).await;
}
}
sub_resp
}
res => res,
}
sub_resp
}
// If we get a result or a blacklist error, just continue
res => res,
};
// Create the iconlist
@@ -573,21 +439,12 @@ async fn get_page(url: &str) -> Result<Response, Error> {
}
async fn get_page_with_referer(url: &str, referer: &str) -> Result<Response, Error> {
match check_domain_blacklist_reason(url::Url::parse(url).unwrap().host_str().unwrap_or_default()).await {
Some(DomainBlacklistReason::Regex) => warn!("Favicon '{}' is from a blacklisted domain!", url),
Some(DomainBlacklistReason::IP) => warn!("Favicon '{}' is hosted on a non-global IP!", url),
None => (),
}
let mut client = CLIENT.get(url);
if !referer.is_empty() {
client = client.header("Referer", referer)
}
match client.send().await {
Ok(c) => c.error_for_status().map_err(Into::into),
Err(e) => err_silent!(format!("{e}")),
}
Ok(client.send().await?.error_for_status()?)
}
/// Returns a Integer with the priority of the type of the icon which to prefer.
@@ -670,12 +527,6 @@ fn parse_sizes(sizes: &str) -> (u16, u16) {
}
async fn download_icon(domain: &str) -> Result<(Bytes, Option<&str>), Error> {
match check_domain_blacklist_reason(domain).await {
Some(DomainBlacklistReason::Regex) => err_silent!("Domain is blacklisted", domain),
Some(DomainBlacklistReason::IP) => err_silent!("Host resolves to a non-global IP", domain),
None => (),
}
let icon_result = get_icon_url(domain).await?;
let mut buffer = Bytes::new();
@@ -711,22 +562,19 @@ async fn download_icon(domain: &str) -> Result<(Bytes, Option<&str>), Error> {
_ => debug!("Extracted icon from data:image uri is invalid"),
};
} else {
match get_page_with_referer(&icon.href, &icon_result.referer).await {
Ok(res) => {
buffer = stream_to_bytes_limit(res, 5120 * 1024).await?; // 5120KB/5MB for each icon max (Same as icons.bitwarden.net)
let res = get_page_with_referer(&icon.href, &icon_result.referer).await?;
// Check if the icon type is allowed, else try an icon from the list.
icon_type = get_icon_type(&buffer);
if icon_type.is_none() {
buffer.clear();
debug!("Icon from {}, is not a valid image type", icon.href);
continue;
}
info!("Downloaded icon from {}", icon.href);
break;
}
Err(e) => debug!("{:?}", e),
};
buffer = stream_to_bytes_limit(res, 5120 * 1024).await?; // 5120KB/5MB for each icon max (Same as icons.bitwarden.net)
// Check if the icon type is allowed, else try an icon from the list.
icon_type = get_icon_type(&buffer);
if icon_type.is_none() {
buffer.clear();
debug!("Icon from {}, is not a valid image type", icon.href);
continue;
}
info!("Downloaded icon from {}", icon.href);
break;
}
}
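stream_to_bytes_limit, called twice above, caps each icon download at 5 MB while streaming. A hedged sketch of such a helper, assuming reqwest's "stream" feature for bytes_stream(); the real signature may differ:

use bytes::{Bytes, BytesMut};
use futures_util::StreamExt;

async fn stream_to_bytes_limit(res: reqwest::Response, max_size: usize) -> Result<Bytes, reqwest::Error> {
    let mut stream = res.bytes_stream();
    let mut buf = BytesMut::with_capacity(max_size.min(64 * 1024));
    while let Some(chunk) = stream.next().await {
        let chunk = chunk?;
        let room = max_size - buf.len();
        if chunk.len() >= room {
            buf.extend_from_slice(&chunk[..room]); // take only what fits, then stop
            break;
        }
        buf.extend_from_slice(&chunk);
    }
    Ok(buf.freeze())
}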

View file

@@ -9,9 +9,12 @@ use serde_json::Value;
use crate::{
api::{
core::accounts::{PreloginData, RegisterData, _prelogin, _register},
core::log_user_event,
core::two_factor::{duo, email, email::EmailTokenData, yubikey},
core::{
accounts::{PreloginData, RegisterData, _prelogin, _register},
log_user_event,
two_factor::{authenticator, duo, email, enforce_2fa_policy, webauthn, yubikey},
},
push::register_push_device,
ApiResult, EmptyResult, JsonResult, JsonUpcase,
},
auth::{generate_organization_api_key_login_claims, ClientHeaders, ClientIp},
@@ -103,8 +106,13 @@ async fn _refresh_login(data: ConnectData, conn: &mut DbConn) -> JsonResult {
// Common
let user = User::find_by_uuid(&device.user_uuid, conn).await.unwrap();
let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, conn).await;
let (access_token, expires_in) = device.refresh_tokens(&user, orgs, scope_vec);
// ---
// Disabled this variable, it was used to generate the JWT
// Because this might get used in the future, and is added by the Bitwarden Server, let's keep it, but commented out
// See: https://github.com/dani-garcia/vaultwarden/issues/4156
// ---
// let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, conn).await;
let (access_token, expires_in) = device.refresh_tokens(&user, scope_vec);
device.save(conn).await?;
let result = json!({
@@ -242,7 +250,7 @@ async fn _password_login(
let (mut device, new_device) = get_device(&data, conn, &user).await;
let twofactor_token = twofactor_auth(&user.uuid, &data, &mut device, ip, conn).await?;
let twofactor_token = twofactor_auth(&user, &data, &mut device, ip, conn).await?;
if CONFIG.mail_enabled() && new_device {
if let Err(e) = mail::send_new_device_logged_in(&user.email, &ip.ip.to_string(), &now, &device.name).await {
@@ -259,9 +267,19 @@
}
}
// register push device
if !new_device {
register_push_device(&mut device, conn).await?;
}
// Common
let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, conn).await;
let (access_token, expires_in) = device.refresh_tokens(&user, orgs, scope_vec);
// ---
// Disabled this variable, it was used to generate the JWT
// Because this might get used in the future, and is added by the Bitwarden Server, let's keep it, but commented out
// See: https://github.com/dani-garcia/vaultwarden/issues/4156
// ---
// let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, conn).await;
let (access_token, expires_in) = device.refresh_tokens(&user, scope_vec);
device.save(conn).await?;
let mut result = json!({
@@ -277,7 +295,12 @@ async fn _password_login(
"KdfIterations": user.client_kdf_iter,
"KdfMemory": user.client_kdf_memory,
"KdfParallelism": user.client_kdf_parallelism,
"ResetMasterPassword": false,// TODO: Same as above
"ResetMasterPassword": false, // TODO: Same as above
"ForcePasswordReset": false,
"MasterPasswordPolicy": {
"object": "masterPasswordPolicy",
},
"scope": scope,
"unofficialServer": true,
"UserDecryptionOptions": {
@@ -374,8 +397,13 @@ async fn _user_api_key_login(
// Common
let scope_vec = vec!["api".into()];
let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, conn).await;
let (access_token, expires_in) = device.refresh_tokens(&user, orgs, scope_vec);
// ---
// Disabled this variable, it was used to generate the JWT
// Because this might get used in the future, and is added by the Bitwarden Server, let's keep it, but commented out
// See: https://github.com/dani-garcia/vaultwarden/issues/4156
// ---
// let orgs = UserOrganization::find_confirmed_by_user(&user.uuid, conn).await;
let (access_token, expires_in) = device.refresh_tokens(&user, scope_vec);
device.save(conn).await?;
info!("User {} logged in successfully via API key. IP: {}", user.email, ip.ip);
@@ -453,32 +481,32 @@ async fn get_device(data: &ConnectData, conn: &mut DbConn, user: &User) -> (Devi
}
async fn twofactor_auth(
user_uuid: &str,
user: &User,
data: &ConnectData,
device: &mut Device,
ip: &ClientIp,
conn: &mut DbConn,
) -> ApiResult<Option<String>> {
let twofactors = TwoFactor::find_by_user(user_uuid, conn).await;
let twofactors = TwoFactor::find_by_user(&user.uuid, conn).await;
// No twofactor token if twofactor is disabled
if twofactors.is_empty() {
enforce_2fa_policy(user, &user.uuid, device.atype, &ip.ip, conn).await?;
return Ok(None);
}
TwoFactorIncomplete::mark_incomplete(user_uuid, &device.uuid, &device.name, ip, conn).await?;
TwoFactorIncomplete::mark_incomplete(&user.uuid, &device.uuid, &device.name, ip, conn).await?;
let twofactor_ids: Vec<_> = twofactors.iter().map(|tf| tf.atype).collect();
let selected_id = data.two_factor_provider.unwrap_or(twofactor_ids[0]); // If we aren't given a two factor provider, assume the first one
let twofactor_code = match data.two_factor_token {
Some(ref code) => code,
None => err_json!(_json_err_twofactor(&twofactor_ids, user_uuid, conn).await?, "2FA token not provided"),
None => err_json!(_json_err_twofactor(&twofactor_ids, &user.uuid, conn).await?, "2FA token not provided"),
};
let selected_twofactor = twofactors.into_iter().find(|tf| tf.atype == selected_id && tf.enabled);
use crate::api::core::two_factor as _tf;
use crate::crypto::ct_eq;
let selected_data = _selected_data(selected_twofactor);
@@ -486,17 +514,15 @@ async fn twofactor_auth(
match TwoFactorType::from_i32(selected_id) {
Some(TwoFactorType::Authenticator) => {
_tf::authenticator::validate_totp_code_str(user_uuid, twofactor_code, &selected_data?, ip, conn).await?
authenticator::validate_totp_code_str(&user.uuid, twofactor_code, &selected_data?, ip, conn).await?
}
Some(TwoFactorType::Webauthn) => {
_tf::webauthn::validate_webauthn_login(user_uuid, twofactor_code, conn).await?
}
Some(TwoFactorType::YubiKey) => _tf::yubikey::validate_yubikey_login(twofactor_code, &selected_data?).await?,
Some(TwoFactorType::Webauthn) => webauthn::validate_webauthn_login(&user.uuid, twofactor_code, conn).await?,
Some(TwoFactorType::YubiKey) => yubikey::validate_yubikey_login(twofactor_code, &selected_data?).await?,
Some(TwoFactorType::Duo) => {
_tf::duo::validate_duo_login(data.username.as_ref().unwrap().trim(), twofactor_code, conn).await?
duo::validate_duo_login(data.username.as_ref().unwrap().trim(), twofactor_code, conn).await?
}
Some(TwoFactorType::Email) => {
_tf::email::validate_email_code_str(user_uuid, twofactor_code, &selected_data?, conn).await?
email::validate_email_code_str(&user.uuid, twofactor_code, &selected_data?, conn).await?
}
Some(TwoFactorType::Remember) => {
@ -506,7 +532,7 @@ async fn twofactor_auth(
}
_ => {
err_json!(
_json_err_twofactor(&twofactor_ids, user_uuid, conn).await?,
_json_err_twofactor(&twofactor_ids, &user.uuid, conn).await?,
"2FA Remember token not provided"
)
}
@ -520,7 +546,7 @@ async fn twofactor_auth(
),
}
TwoFactorIncomplete::mark_complete(user_uuid, &device.uuid, conn).await?;
TwoFactorIncomplete::mark_complete(&user.uuid, &device.uuid, conn).await?;
if !CONFIG.disable_2fa_remember() && remember == 1 {
Ok(Some(device.refresh_twofactor_remember()))
@ -535,8 +561,6 @@ fn _selected_data(tf: Option<TwoFactor>) -> ApiResult<String> {
}
async fn _json_err_twofactor(providers: &[i32], user_uuid: &str, conn: &mut DbConn) -> ApiResult<Value> {
use crate::api::core::two_factor;
let mut result = json!({
"error" : "invalid_grant",
"error_description" : "Two factor required.",
@ -551,7 +575,7 @@ async fn _json_err_twofactor(providers: &[i32], user_uuid: &str, conn: &mut DbCo
Some(TwoFactorType::Authenticator) => { /* Nothing to do for TOTP */ }
Some(TwoFactorType::Webauthn) if CONFIG.domain_set() => {
let request = two_factor::webauthn::generate_webauthn_login(user_uuid, conn).await?;
let request = webauthn::generate_webauthn_login(user_uuid, conn).await?;
result["TwoFactorProviders2"][provider.to_string()] = request.0;
}
@ -583,8 +607,6 @@ async fn _json_err_twofactor(providers: &[i32], user_uuid: &str, conn: &mut DbCo
}
Some(tf_type @ TwoFactorType::Email) => {
use crate::api::core::two_factor as _tf;
let twofactor = match TwoFactor::find_by_user_and_type(user_uuid, tf_type as i32, conn).await {
Some(tf) => tf,
None => err!("No twofactor email registered"),
@ -592,10 +614,10 @@ async fn _json_err_twofactor(providers: &[i32], user_uuid: &str, conn: &mut DbCo
// Send email immediately if email is the only 2FA option
if providers.len() == 1 {
_tf::email::send_token(user_uuid, conn).await?
email::send_token(user_uuid, conn).await?
}
let email_data = EmailTokenData::from_json(&twofactor.data)?;
let email_data = email::EmailTokenData::from_json(&twofactor.data)?;
result["TwoFactorProviders2"][provider.to_string()] = json!({
"Email": email::obscure_email(&email_data.email),
})
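
For reference, a minimal sketch of the error body that _json_err_twofactor assembles when a second factor is still required. The provider id and obscured address are placeholders; the real values depend on the user's 2FA configuration:

use serde_json::json;

fn main() {
    // Provider id "1" (Email) shown as an example; the email branch above adds
    // the obscured address under the provider's key in TwoFactorProviders2.
    let body = json!({
        "error": "invalid_grant",
        "error_description": "Two factor required.",
        "TwoFactorProviders2": {
            "1": { "Email": "ex***@example.com" }
        }
    });
    println!("{body}");
}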

View file

@ -20,10 +20,10 @@ pub use crate::api::{
core::two_factor::send_incomplete_2fa_notifications,
core::{emergency_notification_reminder_job, emergency_request_timeout_job},
core::{event_cleanup_job, events_routes as core_events_routes},
icons::routes as icons_routes,
icons::{is_domain_blacklisted, routes as icons_routes},
identity::routes as identity_routes,
notifications::routes as notifications_routes,
notifications::{start_notification_server, AnonymousNotify, Notify, UpdateType, WS_ANONYMOUS_SUBSCRIPTIONS},
notifications::{AnonymousNotify, Notify, UpdateType, WS_ANONYMOUS_SUBSCRIPTIONS, WS_USERS},
push::{
push_cipher_update, push_folder_update, push_logout, push_send_update, push_user_update, register_push_device,
unregister_push_device,
@ -32,6 +32,7 @@ pub use crate::api::{
web::routes as web_routes,
web::static_files,
};
use crate::db::{models::User, DbConn};
use crate::util;
// Type aliases for API methods results
@ -46,33 +47,29 @@ type JsonVec<T> = Json<Vec<T>>;
// Common structs representing JSON data received
#[derive(Deserialize)]
#[allow(non_snake_case)]
struct PasswordData {
MasterPasswordHash: String,
struct PasswordOrOtpData {
MasterPasswordHash: Option<String>,
Otp: Option<String>,
}
#[derive(Deserialize, Debug, Clone)]
#[serde(untagged)]
enum NumberOrString {
Number(i32),
String(String),
}
impl PasswordOrOtpData {
/// Tokens used via this struct can be used multiple times during the process:
/// first to let the validation continue, after that to enable or validate the follow-up actions.
/// This differs per caller, so it can be adjusted to delete the token or not.
pub async fn validate(&self, user: &User, delete_if_valid: bool, conn: &mut DbConn) -> EmptyResult {
use crate::api::core::two_factor::protected_actions::validate_protected_action_otp;
impl NumberOrString {
fn into_string(self) -> String {
match self {
NumberOrString::Number(n) => n.to_string(),
NumberOrString::String(s) => s,
}
}
#[allow(clippy::wrong_self_convention)]
fn into_i32(&self) -> ApiResult<i32> {
use std::num::ParseIntError as PIE;
match self {
NumberOrString::Number(n) => Ok(*n),
NumberOrString::String(s) => {
s.parse().map_err(|e: PIE| crate::Error::new("Can't convert to number", e.to_string()))
match (self.MasterPasswordHash.as_deref(), self.Otp.as_deref()) {
(Some(pw_hash), None) => {
if !user.check_valid_password(pw_hash) {
err!("Invalid password");
}
}
(None, Some(otp)) => {
validate_protected_action_otp(otp, &user.uuid, delete_if_valid, conn).await?;
}
_ => err!("No validation provided"),
}
Ok(())
}
}
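
A hedged usage sketch for the new PasswordOrOtpData struct within this codebase: a protected endpoint deserializes it from the request body and calls validate, passing true when the OTP should be consumed on success. The handler name and shape below are illustrative, not part of this diff:

// Illustrative handler: either MasterPasswordHash or Otp must be present,
// otherwise validate() returns the "No validation provided" error above.
async fn some_protected_action(data: PasswordOrOtpData, headers: Headers, mut conn: DbConn) -> EmptyResult {
    data.validate(&headers.user, true, &mut conn).await?;
    // ... perform the protected action ...
    Ok(())
}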

View file

@ -1,23 +1,11 @@
use std::{
net::{IpAddr, SocketAddr},
sync::Arc,
time::Duration,
};
use std::{net::IpAddr, sync::Arc, time::Duration};
use chrono::{NaiveDateTime, Utc};
use rmpv::Value;
use rocket::{
futures::{SinkExt, StreamExt},
Route,
};
use tokio::{
net::{TcpListener, TcpStream},
sync::mpsc::Sender,
};
use tokio_tungstenite::{
accept_hdr_async,
tungstenite::{handshake, Message},
};
use rocket::{futures::StreamExt, Route};
use tokio::sync::mpsc::Sender;
use rocket_ws::{Message, WebSocket};
use crate::{
auth::{ClientIp, WsAccessTokenHeader},
@ -30,7 +18,7 @@ use crate::{
use once_cell::sync::Lazy;
static WS_USERS: Lazy<Arc<WebSocketUsers>> = Lazy::new(|| {
pub static WS_USERS: Lazy<Arc<WebSocketUsers>> = Lazy::new(|| {
Arc::new(WebSocketUsers {
map: Arc::new(dashmap::DashMap::new()),
})
@ -47,8 +35,15 @@ use super::{
push_send_update, push_user_update,
};
static NOTIFICATIONS_DISABLED: Lazy<bool> = Lazy::new(|| !CONFIG.enable_websocket() && !CONFIG.push_enabled());
pub fn routes() -> Vec<Route> {
routes![websockets_hub, anonymous_websockets_hub]
if CONFIG.enable_websocket() {
routes![websockets_hub, anonymous_websockets_hub]
} else {
info!("WebSocket are disabled, realtime sync functionality will not work!");
routes![]
}
}
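
With the move to rocket_ws, the notification hub now runs inside the main Rocket server and is toggled by a single flag. A hedged configuration sketch (the variable name follows from the enable_websocket option in the config diff further below; the old WEBSOCKET_ADDRESS/WEBSOCKET_PORT settings no longer exist):

# Defaults to true; set to false to skip mounting the websocket routes entirely.
ENABLE_WEBSOCKET=true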
#[derive(FromForm, Debug)]
@ -108,7 +103,7 @@ impl Drop for WSAnonymousEntryMapGuard {
#[get("/hub?<data..>")]
fn websockets_hub<'r>(
ws: rocket_ws::WebSocket,
ws: WebSocket,
data: WsAccessToken,
ip: ClientIp,
header_token: WsAccessTokenHeader,
@ -164,6 +159,11 @@ fn websockets_hub<'r>(
continue;
}
}
// Prevent sending anything back when a `Close` Message is received.
// Just break the loop
Message::Close(_) => break,
// Just echo anything else the client sends
_ => yield message,
}
@ -187,11 +187,7 @@ fn websockets_hub<'r>(
}
#[get("/anonymous-hub?<token..>")]
fn anonymous_websockets_hub<'r>(
ws: rocket_ws::WebSocket,
token: String,
ip: ClientIp,
) -> Result<rocket_ws::Stream!['r], Error> {
fn anonymous_websockets_hub<'r>(ws: WebSocket, token: String, ip: ClientIp) -> Result<rocket_ws::Stream!['r], Error> {
let addr = ip.ip;
info!("Accepting Anonymous Rocket WS connection from {addr}");
@ -230,6 +226,11 @@ fn anonymous_websockets_hub<'r>(
continue;
}
}
// Prevent sending anything back when a `Close` Message is received.
// Just break the loop
Message::Close(_) => break,
// Just echo anything else the client sends
_ => yield message,
}
@ -287,8 +288,8 @@ fn serialize(val: Value) -> Vec<u8> {
}
fn serialize_date(date: NaiveDateTime) -> Value {
let seconds: i64 = date.timestamp();
let nanos: i64 = date.timestamp_subsec_nanos().into();
let seconds: i64 = date.and_utc().timestamp();
let nanos: i64 = date.and_utc().timestamp_subsec_nanos().into();
let timestamp = nanos << 34 | seconds;
let bs = timestamp.to_be_bytes();
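
The packing above follows the MessagePack timestamp layout: the upper 30 bits of the 64-bit value carry nanoseconds, the lower 34 bits carry seconds. A standalone sketch to illustrate:

// Minimal check of the bit layout used by serialize_date.
fn pack(seconds: i64, nanos: i64) -> [u8; 8] {
    ((nanos << 34) | seconds).to_be_bytes()
}

fn main() {
    // 2024-01-01T00:00:00Z is 1_704_067_200 seconds since the Unix epoch.
    let bs = pack(1_704_067_200, 0);
    // With zero nanoseconds only the seconds survive the round-trip.
    assert_eq!(i64::from_be_bytes(bs), 1_704_067_200);
}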
@ -339,13 +340,19 @@ impl WebSocketUsers {
// NOTE: The last modified date needs to be updated before calling these methods
pub async fn send_user_update(&self, ut: UpdateType, user: &User) {
// Skip any processing if both WebSockets and Push are not active
if *NOTIFICATIONS_DISABLED {
return;
}
let data = create_update(
vec![("UserId".into(), user.uuid.clone().into()), ("Date".into(), serialize_date(user.updated_at))],
ut,
None,
);
self.send_update(&user.uuid, &data).await;
if CONFIG.enable_websocket() {
self.send_update(&user.uuid, &data).await;
}
if CONFIG.push_enabled() {
push_user_update(ut, user);
@ -353,13 +360,19 @@ impl WebSocketUsers {
}
pub async fn send_logout(&self, user: &User, acting_device_uuid: Option<String>) {
// Skip any processing if both WebSockets and Push are not active
if *NOTIFICATIONS_DISABLED {
return;
}
let data = create_update(
vec![("UserId".into(), user.uuid.clone().into()), ("Date".into(), serialize_date(user.updated_at))],
UpdateType::LogOut,
acting_device_uuid.clone(),
);
self.send_update(&user.uuid, &data).await;
if CONFIG.enable_websocket() {
self.send_update(&user.uuid, &data).await;
}
if CONFIG.push_enabled() {
push_logout(user, acting_device_uuid);
@ -373,6 +386,10 @@ impl WebSocketUsers {
acting_device_uuid: &String,
conn: &mut DbConn,
) {
// Skip any processing if both WebSockets and Push are not active
if *NOTIFICATIONS_DISABLED {
return;
}
let data = create_update(
vec![
("Id".into(), folder.uuid.clone().into()),
@ -383,7 +400,9 @@ impl WebSocketUsers {
Some(acting_device_uuid.into()),
);
self.send_update(&folder.user_uuid, &data).await;
if CONFIG.enable_websocket() {
self.send_update(&folder.user_uuid, &data).await;
}
if CONFIG.push_enabled() {
push_folder_update(ut, folder, acting_device_uuid, conn).await;
@ -399,6 +418,10 @@ impl WebSocketUsers {
collection_uuids: Option<Vec<String>>,
conn: &mut DbConn,
) {
// Skip any processing if both WebSockets and Push are not active
if *NOTIFICATIONS_DISABLED {
return;
}
let org_uuid = convert_option(cipher.organization_uuid.clone());
// Depending on whether collections are provided or not, we need different values for the following variables.
// The user_uuid should be `null`, and the revision date should be set to now, else the clients won't sync the collection change.
@ -424,8 +447,10 @@ impl WebSocketUsers {
Some(acting_device_uuid.into()),
);
for uuid in user_uuids {
self.send_update(uuid, &data).await;
if CONFIG.enable_websocket() {
for uuid in user_uuids {
self.send_update(uuid, &data).await;
}
}
if CONFIG.push_enabled() && user_uuids.len() == 1 {
@ -441,6 +466,10 @@ impl WebSocketUsers {
acting_device_uuid: &String,
conn: &mut DbConn,
) {
// Skip any processing if both WebSockets and Push are not active
if *NOTIFICATIONS_DISABLED {
return;
}
let user_uuid = convert_option(send.user_uuid.clone());
let data = create_update(
@ -453,8 +482,10 @@ impl WebSocketUsers {
None,
);
for uuid in user_uuids {
self.send_update(uuid, &data).await;
if CONFIG.enable_websocket() {
for uuid in user_uuids {
self.send_update(uuid, &data).await;
}
}
if CONFIG.push_enabled() && user_uuids.len() == 1 {
push_send_update(ut, send, acting_device_uuid, conn).await;
@ -468,12 +499,18 @@ impl WebSocketUsers {
acting_device_uuid: &String,
conn: &mut DbConn,
) {
// Skip any processing if both WebSockets and Push are not active
if *NOTIFICATIONS_DISABLED {
return;
}
let data = create_update(
vec![("Id".into(), auth_request_uuid.clone().into()), ("UserId".into(), user_uuid.clone().into())],
UpdateType::AuthRequest,
Some(acting_device_uuid.to_string()),
);
self.send_update(user_uuid, &data).await;
if CONFIG.enable_websocket() {
self.send_update(user_uuid, &data).await;
}
if CONFIG.push_enabled() {
push_auth_request(user_uuid.to_string(), auth_request_uuid.to_string(), conn).await;
@ -487,12 +524,18 @@ impl WebSocketUsers {
approving_device_uuid: String,
conn: &mut DbConn,
) {
// Skip any processing if both WebSockets and Push are not active
if *NOTIFICATIONS_DISABLED {
return;
}
let data = create_update(
vec![("Id".into(), auth_response_uuid.to_owned().into()), ("UserId".into(), user_uuid.clone().into())],
UpdateType::AuthRequestResponse,
approving_device_uuid.clone().into(),
);
self.send_update(auth_response_uuid, &data).await;
if CONFIG.enable_websocket() {
self.send_update(auth_response_uuid, &data).await;
}
if CONFIG.push_enabled() {
push_auth_response(user_uuid.to_string(), auth_response_uuid.to_string(), approving_device_uuid, conn)
@ -516,6 +559,9 @@ impl AnonymousWebSocketSubscriptions {
}
pub async fn send_auth_response(&self, user_uuid: &String, auth_response_uuid: &str) {
if !CONFIG.enable_websocket() {
return;
}
let data = create_anonymous_update(
vec![("Id".into(), auth_response_uuid.to_owned().into()), ("UserId".into(), user_uuid.clone().into())],
UpdateType::AuthRequestResponse,
@ -610,127 +656,3 @@ pub enum UpdateType {
pub type Notify<'a> = &'a rocket::State<Arc<WebSocketUsers>>;
pub type AnonymousNotify<'a> = &'a rocket::State<Arc<AnonymousWebSocketSubscriptions>>;
pub fn start_notification_server() -> Arc<WebSocketUsers> {
let users = Arc::clone(&WS_USERS);
if CONFIG.websocket_enabled() {
let users2 = Arc::<WebSocketUsers>::clone(&users);
tokio::spawn(async move {
let addr = (CONFIG.websocket_address(), CONFIG.websocket_port());
info!("Starting WebSockets server on {}:{}", addr.0, addr.1);
let listener = TcpListener::bind(addr).await.expect("Can't listen on websocket port");
let (shutdown_tx, mut shutdown_rx) = tokio::sync::oneshot::channel::<()>();
CONFIG.set_ws_shutdown_handle(shutdown_tx);
loop {
tokio::select! {
Ok((stream, addr)) = listener.accept() => {
tokio::spawn(handle_connection(stream, Arc::<WebSocketUsers>::clone(&users2), addr));
}
_ = &mut shutdown_rx => {
break;
}
}
}
info!("Shutting down WebSockets server!")
});
}
users
}
async fn handle_connection(stream: TcpStream, users: Arc<WebSocketUsers>, addr: SocketAddr) -> Result<(), Error> {
let mut user_uuid: Option<String> = None;
info!("Accepting WS connection from {addr}");
// Accept connection, do initial handshake, validate auth token and get the user ID
use handshake::server::{Request, Response};
let mut stream = accept_hdr_async(stream, |req: &Request, res: Response| {
if let Some(token) = get_request_token(req) {
if let Ok(claims) = crate::auth::decode_login(&token) {
user_uuid = Some(claims.sub);
return Ok(res);
}
}
Err(Response::builder().status(401).body(None).unwrap())
})
.await?;
let user_uuid = user_uuid.expect("User UUID should be set after the handshake");
let (mut rx, guard) = {
// Add a channel to send messages to this client to the map
let entry_uuid = uuid::Uuid::new_v4();
let (tx, rx) = tokio::sync::mpsc::channel::<Message>(100);
users.map.entry(user_uuid.clone()).or_default().push((entry_uuid, tx));
// Once the guard goes out of scope, the connection will have been closed and the entry will be deleted from the map
(rx, WSEntryMapGuard::new(users, user_uuid, entry_uuid, addr.ip()))
};
let _guard = guard;
let mut interval = tokio::time::interval(Duration::from_secs(15));
loop {
tokio::select! {
res = stream.next() => {
match res {
Some(Ok(message)) => {
match message {
// Respond to any pings
Message::Ping(ping) => stream.send(Message::Pong(ping)).await?,
Message::Pong(_) => {/* Ignored */},
// We should receive an initial message with the protocol and version, and we will reply to it
Message::Text(ref message) => {
let msg = message.strip_suffix(RECORD_SEPARATOR as char).unwrap_or(message);
if serde_json::from_str(msg).ok() == Some(INITIAL_MESSAGE) {
stream.send(Message::binary(INITIAL_RESPONSE)).await?;
continue;
}
}
// Just echo anything else the client sends
_ => stream.send(message).await?,
}
}
_ => break,
}
}
res = rx.recv() => {
match res {
Some(res) => stream.send(res).await?,
None => break,
}
}
_ = interval.tick() => stream.send(Message::Ping(create_ping())).await?
}
}
Ok(())
}
fn get_request_token(req: &handshake::server::Request) -> Option<String> {
const ACCESS_TOKEN_KEY: &str = "access_token=";
if let Some(Ok(auth)) = req.headers().get("Authorization").map(|a| a.to_str()) {
if let Some(token_part) = auth.strip_prefix("Bearer ") {
return Some(token_part.to_owned());
}
}
if let Some(params) = req.uri().query() {
let params_iter = params.split('&').take(1);
for val in params_iter {
if let Some(stripped) = val.strip_prefix(ACCESS_TOKEN_KEY) {
return Some(stripped.to_owned());
}
}
}
None
}

View file

@ -50,7 +50,11 @@ async fn get_auth_push_token() -> ApiResult<String> {
("client_secret", &client_secret),
];
let res = match get_reqwest_client().post("https://identity.bitwarden.com/connect/token").form(&params).send().await
let res = match get_reqwest_client()
.post(&format!("{}/connect/token", CONFIG.push_identity_uri()))
.form(&params)
.send()
.await
{
Ok(r) => r,
Err(e) => err!(format!("Error getting push token from bitwarden server: {e}")),
@ -72,24 +76,35 @@ async fn get_auth_push_token() -> ApiResult<String> {
Ok(push_token.access_token.clone())
}
pub async fn register_push_device(user_uuid: String, device: Device) -> EmptyResult {
if !CONFIG.push_enabled() {
pub async fn register_push_device(device: &mut Device, conn: &mut crate::db::DbConn) -> EmptyResult {
if !CONFIG.push_enabled() || !device.is_push_device() || device.is_registered() {
return Ok(());
}
let auth_push_token = get_auth_push_token().await?;
if device.push_token.is_none() {
warn!("Skipping the registration of the device {} because the push_token field is empty.", device.uuid);
warn!("To get rid of this message you need to clear the app data and reconnect the device.");
return Ok(());
}
debug!("Registering Device {}", device.uuid);
// generate a random push_uuid so we know the device is registered
device.push_uuid = Some(uuid::Uuid::new_v4().to_string());
// Needed to register a device for push to Bitwarden:
let data = json!({
"userId": user_uuid,
"userId": device.user_uuid,
"deviceId": device.push_uuid,
"identifier": device.uuid,
"type": device.atype,
"pushToken": device.push_token
});
let auth_push_token = get_auth_push_token().await?;
let auth_header = format!("Bearer {}", &auth_push_token);
get_reqwest_client()
if let Err(e) = get_reqwest_client()
.post(CONFIG.push_relay_uri() + "/push/register")
.header(CONTENT_TYPE, "application/json")
.header(ACCEPT, "application/json")
@ -97,12 +112,20 @@ pub async fn register_push_device(user_uuid: String, device: Device) -> EmptyRes
.json(&data)
.send()
.await?
.error_for_status()?;
.error_for_status()
{
err!(format!("An error occurred while proceeding registration of a device: {e}"));
}
if let Err(e) = device.save(conn).await {
err!(format!("An error occurred while trying to save the (registered) device push uuid: {e}"));
}
Ok(())
}
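
For clarity, the JSON document the function above posts to {push_relay_uri}/push/register, with placeholder values (the real ones come from the Device record, and push_uuid is the UUID generated just before the payload is built):

use serde_json::json;

fn main() {
    // Placeholder identifiers; mirrors the fields assembled in register_push_device.
    let data = json!({
        "userId": "user-uuid",
        "deviceId": "freshly-generated-push-uuid",
        "identifier": "device-uuid",
        "type": 0,                      // device.atype
        "pushToken": "fcm-or-apns-token"
    });
    println!("{data}");
}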
pub async fn unregister_push_device(uuid: String) -> EmptyResult {
if !CONFIG.push_enabled() {
pub async fn unregister_push_device(push_uuid: Option<String>) -> EmptyResult {
if !CONFIG.push_enabled() || push_uuid.is_none() {
return Ok(());
}
let auth_push_token = get_auth_push_token().await?;
@ -110,7 +133,7 @@ pub async fn unregister_push_device(uuid: String) -> EmptyResult {
let auth_header = format!("Bearer {}", &auth_push_token);
match get_reqwest_client()
.delete(CONFIG.push_relay_uri() + "/push/" + &uuid)
.delete(CONFIG.push_relay_uri() + "/push/" + &push_uuid.unwrap())
.header(AUTHORIZATION, auth_header)
.send()
.await

View file

@ -173,8 +173,8 @@ pub fn static_files(filename: &str) -> Result<(ContentType, &'static [u8]), Erro
"jdenticon.js" => Ok((ContentType::JavaScript, include_bytes!("../static/scripts/jdenticon.js"))),
"datatables.js" => Ok((ContentType::JavaScript, include_bytes!("../static/scripts/datatables.js"))),
"datatables.css" => Ok((ContentType::CSS, include_bytes!("../static/scripts/datatables.css"))),
"jquery-3.7.0.slim.js" => {
Ok((ContentType::JavaScript, include_bytes!("../static/scripts/jquery-3.7.0.slim.js")))
"jquery-3.7.1.slim.js" => {
Ok((ContentType::JavaScript, include_bytes!("../static/scripts/jquery-3.7.1.slim.js")))
}
_ => err!(format!("Static file not found: {filename}")),
}

View file

@ -1,10 +1,11 @@
// JWT Handling
//
use chrono::{Duration, Utc};
use chrono::{TimeDelta, Utc};
use num_traits::FromPrimitive;
use once_cell::sync::Lazy;
use once_cell::sync::{Lazy, OnceCell};
use jsonwebtoken::{self, errors::ErrorKind, Algorithm, DecodingKey, EncodingKey, Header};
use jsonwebtoken::{errors::ErrorKind, Algorithm, DecodingKey, EncodingKey, Header};
use openssl::rsa::Rsa;
use serde::de::DeserializeOwned;
use serde::ser::Serialize;
@ -12,7 +13,7 @@ use crate::{error::Error, CONFIG};
const JWT_ALGORITHM: Algorithm = Algorithm::RS256;
pub static DEFAULT_VALIDITY: Lazy<Duration> = Lazy::new(|| Duration::hours(2));
pub static DEFAULT_VALIDITY: Lazy<TimeDelta> = Lazy::new(|| TimeDelta::try_hours(2).unwrap());
static JWT_HEADER: Lazy<Header> = Lazy::new(|| Header::new(JWT_ALGORITHM));
pub static JWT_LOGIN_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|login", CONFIG.domain_origin()));
@ -26,23 +27,46 @@ static JWT_SEND_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|send", CONFIG.do
static JWT_ORG_API_KEY_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|api.organization", CONFIG.domain_origin()));
static JWT_FILE_DOWNLOAD_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|file_download", CONFIG.domain_origin()));
static PRIVATE_RSA_KEY: Lazy<EncodingKey> = Lazy::new(|| {
let key =
std::fs::read(CONFIG.private_rsa_key()).unwrap_or_else(|e| panic!("Error loading private RSA Key. \n{e}"));
EncodingKey::from_rsa_pem(&key).unwrap_or_else(|e| panic!("Error decoding private RSA Key.\n{e}"))
});
static PUBLIC_RSA_KEY: Lazy<DecodingKey> = Lazy::new(|| {
let key = std::fs::read(CONFIG.public_rsa_key()).unwrap_or_else(|e| panic!("Error loading public RSA Key. \n{e}"));
DecodingKey::from_rsa_pem(&key).unwrap_or_else(|e| panic!("Error decoding public RSA Key.\n{e}"))
});
static PRIVATE_RSA_KEY: OnceCell<EncodingKey> = OnceCell::new();
static PUBLIC_RSA_KEY: OnceCell<DecodingKey> = OnceCell::new();
pub fn load_keys() {
Lazy::force(&PRIVATE_RSA_KEY);
Lazy::force(&PUBLIC_RSA_KEY);
pub fn initialize_keys() -> Result<(), crate::error::Error> {
let mut priv_key_buffer = Vec::with_capacity(2048);
let priv_key = {
let mut priv_key_file =
File::options().create(true).truncate(false).read(true).write(true).open(CONFIG.private_rsa_key())?;
#[allow(clippy::verbose_file_reads)]
let bytes_read = priv_key_file.read_to_end(&mut priv_key_buffer)?;
if bytes_read > 0 {
Rsa::private_key_from_pem(&priv_key_buffer[..bytes_read])?
} else {
// Only create the key if the file doesn't exist or is empty
let rsa_key = openssl::rsa::Rsa::generate(2048)?;
priv_key_buffer = rsa_key.private_key_to_pem()?;
priv_key_file.write_all(&priv_key_buffer)?;
info!("Private key created correctly.");
rsa_key
}
};
let pub_key_buffer = priv_key.public_key_to_pem()?;
let enc = EncodingKey::from_rsa_pem(&priv_key_buffer)?;
let dec: DecodingKey = DecodingKey::from_rsa_pem(&pub_key_buffer)?;
if PRIVATE_RSA_KEY.set(enc).is_err() {
err!("PRIVATE_RSA_KEY must only be initialized once")
}
if PUBLIC_RSA_KEY.set(dec).is_err() {
err!("PUBLIC_RSA_KEY must only be initialized once")
}
Ok(())
}
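
Because the keys now live in OnceCell slots, initialize_keys must run once at startup before the first encode_jwt or decode_jwt call; otherwise those calls block on OnceCell::wait. A hedged sketch of the intended call order (the actual call site is outside this diff):

// Hypothetical startup snippet; error handling trimmed to the essentials.
fn main() {
    if let Err(e) = initialize_keys() {
        eprintln!("Error loading or creating the RSA key pair: {e}");
        std::process::exit(1);
    }
    // From here on encode_jwt()/decode_jwt() can be used safely.
}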
pub fn encode_jwt<T: Serialize>(claims: &T) -> String {
match jsonwebtoken::encode(&JWT_HEADER, claims, &PRIVATE_RSA_KEY) {
match jsonwebtoken::encode(&JWT_HEADER, claims, PRIVATE_RSA_KEY.wait()) {
Ok(token) => token,
Err(e) => panic!("Error encoding jwt {e}"),
}
@ -56,7 +80,7 @@ fn decode_jwt<T: DeserializeOwned>(token: &str, issuer: String) -> Result<T, Err
validation.set_issuer(&[issuer]);
let token = token.replace(char::is_whitespace, "");
match jsonwebtoken::decode(&token, &PUBLIC_RSA_KEY, &validation) {
match jsonwebtoken::decode(&token, PUBLIC_RSA_KEY.wait(), &validation) {
Ok(d) => Ok(d.claims),
Err(err) => match *err.kind() {
ErrorKind::InvalidToken => err!("Token is invalid"),
@ -119,10 +143,16 @@ pub struct LoginJwtClaims {
pub email: String,
pub email_verified: bool,
pub orgowner: Vec<String>,
pub orgadmin: Vec<String>,
pub orguser: Vec<String>,
pub orgmanager: Vec<String>,
// ---
// Disabled adding these keys to the JWT since they could cause the JWT to get too large.
// Also, these key/value pairs are not used anywhere by either Vaultwarden or Bitwarden clients.
// Because these might get used in the future, and they are added by the Bitwarden Server, let's keep them, but commented out.
// See: https://github.com/dani-garcia/vaultwarden/issues/4156
// ---
// pub orgowner: Vec<String>,
// pub orgadmin: Vec<String>,
// pub orguser: Vec<String>,
// pub orgmanager: Vec<String>,
// user security_stamp
pub sstamp: String,
@ -158,11 +188,11 @@ pub fn generate_invite_claims(
user_org_id: Option<String>,
invited_by_email: Option<String>,
) -> InviteJwtClaims {
let time_now = Utc::now().naive_utc();
let time_now = Utc::now();
let expire_hours = i64::from(CONFIG.invitation_expiration_hours());
InviteJwtClaims {
nbf: time_now.timestamp(),
exp: (time_now + Duration::hours(expire_hours)).timestamp(),
exp: (time_now + TimeDelta::try_hours(expire_hours).unwrap()).timestamp(),
iss: JWT_INVITE_ISSUER.to_string(),
sub: uuid,
email,
@ -196,11 +226,11 @@ pub fn generate_emergency_access_invite_claims(
grantor_name: String,
grantor_email: String,
) -> EmergencyAccessInviteJwtClaims {
let time_now = Utc::now().naive_utc();
let time_now = Utc::now();
let expire_hours = i64::from(CONFIG.invitation_expiration_hours());
EmergencyAccessInviteJwtClaims {
nbf: time_now.timestamp(),
exp: (time_now + Duration::hours(expire_hours)).timestamp(),
exp: (time_now + TimeDelta::try_hours(expire_hours).unwrap()).timestamp(),
iss: JWT_EMERGENCY_ACCESS_INVITE_ISSUER.to_string(),
sub: uuid,
email,
@ -227,10 +257,10 @@ pub struct OrgApiKeyLoginJwtClaims {
}
pub fn generate_organization_api_key_login_claims(uuid: String, org_id: String) -> OrgApiKeyLoginJwtClaims {
let time_now = Utc::now().naive_utc();
let time_now = Utc::now();
OrgApiKeyLoginJwtClaims {
nbf: time_now.timestamp(),
exp: (time_now + Duration::hours(1)).timestamp(),
exp: (time_now + TimeDelta::try_hours(1).unwrap()).timestamp(),
iss: JWT_ORG_API_KEY_ISSUER.to_string(),
sub: uuid,
client_id: format!("organization.{org_id}"),
@ -254,10 +284,10 @@ pub struct FileDownloadClaims {
}
pub fn generate_file_download_claims(uuid: String, file_id: String) -> FileDownloadClaims {
let time_now = Utc::now().naive_utc();
let time_now = Utc::now();
FileDownloadClaims {
nbf: time_now.timestamp(),
exp: (time_now + Duration::minutes(5)).timestamp(),
exp: (time_now + TimeDelta::try_minutes(5).unwrap()).timestamp(),
iss: JWT_FILE_DOWNLOAD_ISSUER.to_string(),
sub: uuid,
file_id,
@ -277,42 +307,42 @@ pub struct BasicJwtClaims {
}
pub fn generate_delete_claims(uuid: String) -> BasicJwtClaims {
let time_now = Utc::now().naive_utc();
let time_now = Utc::now();
let expire_hours = i64::from(CONFIG.invitation_expiration_hours());
BasicJwtClaims {
nbf: time_now.timestamp(),
exp: (time_now + Duration::hours(expire_hours)).timestamp(),
exp: (time_now + TimeDelta::try_hours(expire_hours).unwrap()).timestamp(),
iss: JWT_DELETE_ISSUER.to_string(),
sub: uuid,
}
}
pub fn generate_verify_email_claims(uuid: String) -> BasicJwtClaims {
let time_now = Utc::now().naive_utc();
let time_now = Utc::now();
let expire_hours = i64::from(CONFIG.invitation_expiration_hours());
BasicJwtClaims {
nbf: time_now.timestamp(),
exp: (time_now + Duration::hours(expire_hours)).timestamp(),
exp: (time_now + TimeDelta::try_hours(expire_hours).unwrap()).timestamp(),
iss: JWT_VERIFYEMAIL_ISSUER.to_string(),
sub: uuid,
}
}
pub fn generate_admin_claims() -> BasicJwtClaims {
let time_now = Utc::now().naive_utc();
let time_now = Utc::now();
BasicJwtClaims {
nbf: time_now.timestamp(),
exp: (time_now + Duration::minutes(CONFIG.admin_session_lifetime())).timestamp(),
exp: (time_now + TimeDelta::try_minutes(CONFIG.admin_session_lifetime()).unwrap()).timestamp(),
iss: JWT_ADMIN_ISSUER.to_string(),
sub: "admin_panel".to_string(),
}
}
pub fn generate_send_claims(send_id: &str, file_id: &str) -> BasicJwtClaims {
let time_now = Utc::now().naive_utc();
let time_now = Utc::now();
BasicJwtClaims {
nbf: time_now.timestamp(),
exp: (time_now + Duration::minutes(2)).timestamp(),
exp: (time_now + TimeDelta::try_minutes(2).unwrap()).timestamp(),
iss: JWT_SEND_ISSUER.to_string(),
sub: format!("{send_id}/{file_id}"),
}
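
The chrono migration running through this file swaps Duration::hours(n), which panics on overflow, for TimeDelta::try_hours(n), which returns an Option; the unwrap calls are safe here because the inputs are small constants or validated config values. A standalone illustration:

use chrono::TimeDelta;

fn main() {
    // Normal inputs succeed ...
    assert!(TimeDelta::try_hours(2).is_some());
    // ... while out-of-range inputs yield None instead of panicking.
    assert!(TimeDelta::try_hours(i64::MAX).is_none());
}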
@ -361,10 +391,8 @@ impl<'r> FromRequest<'r> for Host {
let host = if let Some(host) = headers.get_one("X-Forwarded-Host") {
host
} else if let Some(host) = headers.get_one("Host") {
host
} else {
""
headers.get_one("Host").unwrap_or_default()
};
format!("{protocol}://{host}")
@ -469,7 +497,7 @@ impl<'r> FromRequest<'r> for Headers {
// Check if the stamp exception has expired first.
// Then, check if the current route matches any of the allowed routes.
// After that check the stamp in exception matches the one in the claims.
if Utc::now().naive_utc().timestamp() > stamp_exception.expire {
if Utc::now().timestamp() > stamp_exception.expire {
// If the stamp exception has been expired remove it from the database.
// This prevents checking this stamp exception for new requests.
let mut user = user;
@ -661,7 +689,7 @@ impl<'r> FromRequest<'r> for ManagerHeaders {
_ => err_handler!("Error getting DB"),
};
if !can_access_collection(&headers.org_user, &col_id, &mut conn).await {
if !Collection::can_access_collection(&headers.org_user, &col_id, &mut conn).await {
err_handler!("The current user isn't a manager for this collection")
}
}
@ -734,10 +762,6 @@ impl From<ManagerHeadersLoose> for Headers {
}
}
}
async fn can_access_collection(org_user: &UserOrganization, col_id: &str, conn: &mut DbConn) -> bool {
org_user.has_full_access()
|| Collection::has_access_by_collection_and_user_uuid(col_id, &org_user.user_uuid, conn).await
}
impl ManagerHeaders {
pub async fn from_loose(
@ -749,7 +773,7 @@ impl ManagerHeaders {
if uuid::Uuid::parse_str(col_id).is_err() {
err!("Collection Id is malformed!");
}
if !can_access_collection(&h.org_user, col_id, conn).await {
if !Collection::can_access_collection(&h.org_user, col_id, conn).await {
err!("You don't have access to all collections!");
}
}
@ -793,7 +817,11 @@ impl<'r> FromRequest<'r> for OwnerHeaders {
//
// Client IP address detection
//
use std::net::IpAddr;
use std::{
fs::File,
io::{Read, Write},
net::IpAddr,
};
pub struct ClientIp {
pub ip: IpAddr,

View file

@ -9,7 +9,7 @@ use reqwest::Url;
use crate::{
db::DbConnType,
error::Error,
util::{get_env, get_env_bool},
util::{get_env, get_env_bool, parse_experimental_client_feature_flags},
};
static CONFIG_FILE: Lazy<String> = Lazy::new(|| {
@ -39,7 +39,6 @@ macro_rules! make_config {
struct Inner {
rocket_shutdown_handle: Option<rocket::Shutdown>,
ws_shutdown_handle: Option<tokio::sync::oneshot::Sender<()>>,
templates: Handlebars<'static>,
config: ConfigItems,
@ -361,7 +360,7 @@ make_config! {
/// Sends folder
sends_folder: String, false, auto, |c| format!("{}/{}", c.data_folder, "sends");
/// Temp folder |> Used for storing temporary file uploads
tmp_folder: String, false, auto, |c| format!("{}/{}", c.data_folder, "tmp");
tmp_folder: String, false, auto, |c| format!("{}/{}", c.data_folder, "tmp");
/// Templates folder
templates_folder: String, false, auto, |c| format!("{}/{}", c.data_folder, "templates");
/// Session JWT key
@ -371,17 +370,15 @@ make_config! {
},
ws {
/// Enable websocket notifications
websocket_enabled: bool, false, def, false;
/// Websocket address
websocket_address: String, false, def, "0.0.0.0".to_string();
/// Websocket port
websocket_port: u16, false, def, 3012;
enable_websocket: bool, false, def, true;
},
push {
/// Enable push notifications
push_enabled: bool, false, def, false;
/// Push relay base uri
/// Push relay uri
push_relay_uri: String, false, def, "https://push.bitwarden.com".to_string();
/// Push identity uri
push_identity_uri: String, false, def, "https://identity.bitwarden.com".to_string();
/// Installation id |> The installation id from https://bitwarden.com/host
push_installation_id: Pass, false, def, String::new();
/// Installation key |> The installation key from https://bitwarden.com/host
@ -440,6 +437,8 @@ make_config! {
user_attachment_limit: i64, true, option;
/// Per-organization attachment storage limit (KB) |> Max kilobytes of attachment storage allowed per org. When this limit is reached, org members will not be allowed to upload further attachments for ciphers owned by that org.
org_attachment_limit: i64, true, option;
/// Per-user send storage limit (KB) |> Max kilobytes of sends storage allowed per user. When this limit is reached, the user will not be allowed to upload further sends.
user_send_limit: i64, true, option;
/// Trash auto-delete days |> Number of days to wait before auto-deleting a trashed item.
/// If unset, trashed items are not auto-deleted. This setting applies globally, so make
@ -478,7 +477,7 @@ make_config! {
/// Invitation token expiration time (in hours) |> The number of hours after which an organization invite token, emergency access invite token,
/// email verification token and deletion request token will expire (must be at least 1)
invitation_expiration_hours: u32, false, def, 120;
/// Allow emergency access |> Controls whether users can enable emergency access to their accounts. This setting applies globally to all users.
/// Enable emergency access |> Controls whether users can enable emergency access to their accounts. This setting applies globally to all users.
emergency_access_allowed: bool, true, def, true;
/// Allow email change |> Controls whether users can change their email. This setting applies globally to all users.
email_change_allowed: bool, true, def, true;
@ -547,6 +546,9 @@ make_config! {
/// TOTP codes of the previous and next 30 seconds will be invalid.
authenticator_disable_time_drift: bool, true, def, false;
/// Customize the enabled feature flags on the clients |> This is a comma separated list of feature flags to enable.
experimental_client_feature_flags: String, false, def, "fido2-vault-credentials".to_string();
/// Require new device emails |> When a user logs in, an email is required to be sent.
/// If sending the email fails, the login attempt will fail.
require_device_email: bool, true, def, false;
@ -684,6 +686,10 @@ make_config! {
email_expiration_time: u64, true, def, 600;
/// Maximum attempts |> Maximum attempts before an email token is reset and a new email will need to be sent
email_attempts_limit: u64, true, def, 3;
/// Automatically enforce at login |> Setup email 2FA provider regardless of any organization policy
email_2fa_enforce_on_verified_invite: bool, true, def, false;
/// Auto-enable 2FA (Know the risks!) |> Automatically setup email 2FA as fallback provider when needed
email_2fa_auto_fallback: bool, true, def, false;
},
}
@ -751,6 +757,57 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
)
}
if cfg.push_enabled {
let push_relay_uri = cfg.push_relay_uri.to_lowercase();
if !push_relay_uri.starts_with("https://") {
err!("`PUSH_RELAY_URI` must start with 'https://'.")
}
if Url::parse(&push_relay_uri).is_err() {
err!("Invalid URL format for `PUSH_RELAY_URI`.");
}
let push_identity_uri = cfg.push_identity_uri.to_lowercase();
if !push_identity_uri.starts_with("https://") {
err!("`PUSH_IDENTITY_URI` must start with 'https://'.")
}
if Url::parse(&push_identity_uri).is_err() {
err!("Invalid URL format for `PUSH_IDENTITY_URI`.");
}
}
// TODO: deal with deprecated flags so they can be removed from this list, cf. #4263
const KNOWN_FLAGS: &[&str] =
&["autofill-overlay", "autofill-v2", "browser-fileless-import", "fido2-vault-credentials"];
let configured_flags = parse_experimental_client_feature_flags(&cfg.experimental_client_feature_flags);
let invalid_flags: Vec<_> = configured_flags.keys().filter(|flag| !KNOWN_FLAGS.contains(&flag.as_str())).collect();
if !invalid_flags.is_empty() {
err!(format!("Unrecognized experimental client feature flags: {invalid_flags:?}.\n\n\
Please ensure all feature flags are spelled correctly and that they are supported in this version.\n\
Supported flags: {KNOWN_FLAGS:?}"));
}
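
The flag list arrives as a comma-separated string, e.g. EXPERIMENTAL_CLIENT_FEATURE_FLAGS=fido2-vault-credentials,autofill-v2. The real parser lives in util.rs and is not part of this diff; a hedged sketch of the shape the validation above relies on (a map whose keys are the flag names):

use std::collections::HashMap;

// Hypothetical stand-in for parse_experimental_client_feature_flags.
fn parse_flags(input: &str) -> HashMap<String, bool> {
    input
        .split(',')
        .map(|flag| (flag.trim().to_string(), true))
        .filter(|(flag, _)| !flag.is_empty())
        .collect()
}

fn main() {
    let flags = parse_flags("fido2-vault-credentials, autofill-v2");
    assert!(flags.contains_key("autofill-v2"));
}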
const MAX_FILESIZE_KB: i64 = i64::MAX >> 10;
if let Some(limit) = cfg.user_attachment_limit {
if !(0i64..=MAX_FILESIZE_KB).contains(&limit) {
err!("`USER_ATTACHMENT_LIMIT` is out of bounds");
}
}
if let Some(limit) = cfg.org_attachment_limit {
if !(0i64..=MAX_FILESIZE_KB).contains(&limit) {
err!("`ORG_ATTACHMENT_LIMIT` is out of bounds");
}
}
if let Some(limit) = cfg.user_send_limit {
if !(0i64..=MAX_FILESIZE_KB).contains(&limit) {
err!("`USER_SEND_LIMIT` is out of bounds");
}
}
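
Why i64::MAX >> 10: the limits are configured in kilobytes but compared against byte counts, so the largest safe value is the one that still fits in an i64 after multiplying by 1024 (a left shift by 10). A quick check:

const MAX_FILESIZE_KB: i64 = i64::MAX >> 10;

fn main() {
    // The bound itself survives the KB-to-bytes conversion ...
    assert!(MAX_FILESIZE_KB.checked_mul(1024).is_some());
    // ... while one more kilobyte would overflow.
    assert!((MAX_FILESIZE_KB + 1).checked_mul(1024).is_none());
}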
if cfg._enable_duo
&& (cfg.duo_host.is_some() || cfg.duo_ikey.is_some() || cfg.duo_skey.is_some())
&& !(cfg.duo_host.is_some() && cfg.duo_ikey.is_some() && cfg.duo_skey.is_some())
@ -835,6 +892,13 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
err!("To enable email 2FA, a mail transport must be configured")
}
if !cfg._enable_email_2fa && cfg.email_2fa_enforce_on_verified_invite {
err!("To enforce email 2FA on verified invitations, email 2fa has to be enabled!");
}
if !cfg._enable_email_2fa && cfg.email_2fa_auto_fallback {
err!("To use email 2FA as automatic fallback, email 2fa has to be enabled!");
}
// Check if the icon blacklist regex is valid
if let Some(ref r) = cfg.icon_blacklist_regex {
let validate_regex = regex::Regex::new(r);
@ -1013,7 +1077,6 @@ impl Config {
Ok(Config {
inner: RwLock::new(Inner {
rocket_shutdown_handle: None,
ws_shutdown_handle: None,
templates: load_templates(&config.templates_folder),
config,
_env,
@ -1106,7 +1169,7 @@ impl Config {
}
pub fn delete_user_config(&self) -> Result<(), Error> {
crate::util::delete_file(&CONFIG_FILE)?;
std::fs::remove_file(&*CONFIG_FILE)?;
// Empty user config
let usr = ConfigBuilder::default();
@ -1131,9 +1194,6 @@ impl Config {
pub fn private_rsa_key(&self) -> String {
format!("{}.pem", CONFIG.rsa_key_filename())
}
pub fn public_rsa_key(&self) -> String {
format!("{}.pub.pem", CONFIG.rsa_key_filename())
}
pub fn mail_enabled(&self) -> bool {
let inner = &self.inner.read().unwrap().config;
inner._enable_smtp && (inner.smtp_host.is_some() || inner.use_sendmail)
@ -1182,16 +1242,8 @@ impl Config {
self.inner.write().unwrap().rocket_shutdown_handle = Some(handle);
}
pub fn set_ws_shutdown_handle(&self, handle: tokio::sync::oneshot::Sender<()>) {
self.inner.write().unwrap().ws_shutdown_handle = Some(handle);
}
pub fn shutdown(&self) {
if let Ok(mut c) = self.inner.write() {
if let Some(handle) = c.ws_shutdown_handle.take() {
handle.send(()).ok();
}
if let Some(handle) = c.rocket_shutdown_handle.take() {
handle.notify();
}
@ -1199,7 +1251,10 @@ impl Config {
}
}
use handlebars::{Context, Handlebars, Helper, HelperResult, Output, RenderContext, RenderError, Renderable};
use handlebars::{
Context, DirectorySourceOptions, Handlebars, Helper, HelperResult, Output, RenderContext, RenderErrorReason,
Renderable,
};
fn load_templates<P>(path: P) -> Handlebars<'static>
where
@ -1243,17 +1298,18 @@ where
reg!("email/invite_accepted", ".html");
reg!("email/invite_confirmed", ".html");
reg!("email/new_device_logged_in", ".html");
reg!("email/protected_action", ".html");
reg!("email/pw_hint_none", ".html");
reg!("email/pw_hint_some", ".html");
reg!("email/send_2fa_removed_from_org", ".html");
reg!("email/send_single_org_removed_from_org", ".html");
reg!("email/send_org_invite", ".html");
reg!("email/send_emergency_access_invite", ".html");
reg!("email/send_org_invite", ".html");
reg!("email/send_single_org_removed_from_org", ".html");
reg!("email/smtp_test", ".html");
reg!("email/twofactor_email", ".html");
reg!("email/verify_email", ".html");
reg!("email/welcome", ".html");
reg!("email/welcome_must_verify", ".html");
reg!("email/smtp_test", ".html");
reg!("email/welcome", ".html");
reg!("admin/base");
reg!("admin/login");
@ -1267,19 +1323,27 @@ where
// And then load user templates to overwrite the defaults
// Use .hbs extension for the files
// Templates get registered with their relative name
hb.register_templates_directory(".hbs", path).unwrap();
hb.register_templates_directory(
path,
DirectorySourceOptions {
tpl_extension: ".hbs".to_owned(),
..Default::default()
},
)
.unwrap();
hb
}
fn case_helper<'reg, 'rc>(
h: &Helper<'reg, 'rc>,
h: &Helper<'rc>,
r: &'reg Handlebars<'_>,
ctx: &'rc Context,
rc: &mut RenderContext<'reg, 'rc>,
out: &mut dyn Output,
) -> HelperResult {
let param = h.param(0).ok_or_else(|| RenderError::new("Param not found for helper \"case\""))?;
let param =
h.param(0).ok_or_else(|| RenderErrorReason::Other(String::from("Param not found for helper \"case\"")))?;
let value = param.value().clone();
if h.params().iter().skip(1).any(|x| x.value() == &value) {
@ -1290,17 +1354,21 @@ fn case_helper<'reg, 'rc>(
}
fn js_escape_helper<'reg, 'rc>(
h: &Helper<'reg, 'rc>,
h: &Helper<'rc>,
_r: &'reg Handlebars<'_>,
_ctx: &'rc Context,
_rc: &mut RenderContext<'reg, 'rc>,
out: &mut dyn Output,
) -> HelperResult {
let param = h.param(0).ok_or_else(|| RenderError::new("Param not found for helper \"jsesc\""))?;
let param =
h.param(0).ok_or_else(|| RenderErrorReason::Other(String::from("Param not found for helper \"jsesc\"")))?;
let no_quote = h.param(1).is_some();
let value = param.value().as_str().ok_or_else(|| RenderError::new("Param for helper \"jsesc\" is not a String"))?;
let value = param
.value()
.as_str()
.ok_or_else(|| RenderErrorReason::Other(String::from("Param for helper \"jsesc\" is not a String")))?;
let mut escaped_value = value.replace('\\', "").replace('\'', "\\x22").replace('\"', "\\x27");
if !no_quote {
@ -1312,15 +1380,18 @@ fn js_escape_helper<'reg, 'rc>(
}
fn to_json<'reg, 'rc>(
h: &Helper<'reg, 'rc>,
h: &Helper<'rc>,
_r: &'reg Handlebars<'_>,
_ctx: &'rc Context,
_rc: &mut RenderContext<'reg, 'rc>,
out: &mut dyn Output,
) -> HelperResult {
let param = h.param(0).ok_or_else(|| RenderError::new("Expected 1 parameter for \"to_json\""))?.value();
let param = h
.param(0)
.ok_or_else(|| RenderErrorReason::Other(String::from("Expected 1 parameter for \"to_json\"")))?
.value();
let json = serde_json::to_string(param)
.map_err(|e| RenderError::new(format!("Can't serialize parameter to JSON: {e}")))?;
.map_err(|e| RenderErrorReason::Other(format!("Can't serialize parameter to JSON: {e}")))?;
out.write(&json)?;
Ok(())
}

View file

@ -7,7 +7,6 @@ use diesel::{
use rocket::{
http::Status,
outcome::IntoOutcome,
request::{FromRequest, Outcome},
Request,
};
@ -413,8 +412,11 @@ impl<'r> FromRequest<'r> for DbConn {
async fn from_request(request: &'r Request<'_>) -> Outcome<Self, Self::Error> {
match request.rocket().state::<DbPool>() {
Some(p) => p.get().await.map_err(|_| ()).into_outcome(Status::ServiceUnavailable),
None => Outcome::Failure((Status::InternalServerError, ())),
Some(p) => match p.get().await {
Ok(dbconn) => Outcome::Success(dbconn),
_ => Outcome::Error((Status::ServiceUnavailable, ())),
},
None => Outcome::Error((Status::InternalServerError, ())),
}
}
}

View file

@ -1,5 +1,6 @@
use std::io::ErrorKind;
use bigdecimal::{BigDecimal, ToPrimitive};
use serde_json::Value;
use crate::CONFIG;
@ -13,14 +14,14 @@ db_object! {
pub id: String,
pub cipher_uuid: String,
pub file_name: String, // encrypted
pub file_size: i32,
pub file_size: i64,
pub akey: Option<String>,
}
}
/// Local methods
impl Attachment {
pub const fn new(id: String, cipher_uuid: String, file_name: String, file_size: i32, akey: Option<String>) -> Self {
pub const fn new(id: String, cipher_uuid: String, file_name: String, file_size: i64, akey: Option<String>) -> Self {
Self {
id,
cipher_uuid,
@ -102,7 +103,7 @@ impl Attachment {
let file_path = &self.get_file_path();
match crate::util::delete_file(file_path) {
match std::fs::remove_file(file_path) {
// Ignore "file not found" errors. This can happen when the
// upstream caller has already cleaned up the file as part of
// its own error handling.
@ -145,13 +146,18 @@ impl Attachment {
pub async fn size_by_user(user_uuid: &str, conn: &mut DbConn) -> i64 {
db_run! { conn: {
let result: Option<i64> = attachments::table
let result: Option<BigDecimal> = attachments::table
.left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
.filter(ciphers::user_uuid.eq(user_uuid))
.select(diesel::dsl::sum(attachments::file_size))
.first(conn)
.expect("Error loading user attachment total size");
result.unwrap_or(0)
match result.map(|r| r.to_i64()) {
Some(Some(r)) => r,
Some(None) => i64::MAX,
None => 0
}
}}
}
@ -168,13 +174,18 @@ impl Attachment {
pub async fn size_by_org(org_uuid: &str, conn: &mut DbConn) -> i64 {
db_run! { conn: {
let result: Option<i64> = attachments::table
let result: Option<BigDecimal> = attachments::table
.left_join(ciphers::table.on(ciphers::uuid.eq(attachments::cipher_uuid)))
.filter(ciphers::organization_uuid.eq(org_uuid))
.select(diesel::dsl::sum(attachments::file_size))
.first(conn)
.expect("Error loading user attachment total size");
result.unwrap_or(0)
match result.map(|r| r.to_i64()) {
Some(Some(r)) => r,
Some(None) => i64::MAX,
None => 0
}
}}
}
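
Background for the BigDecimal change above: SUM over the now-64-bit file_size column is returned as a numeric type that can exceed i64, so Diesel maps it to BigDecimal and the code saturates oversized totals to i64::MAX. A quick illustration of the conversion behavior, assuming the bigdecimal crate:

use bigdecimal::{BigDecimal, ToPrimitive};
use std::str::FromStr;

fn main() {
    // Values in range convert cleanly ...
    assert_eq!(BigDecimal::from_str("42").unwrap().to_i64(), Some(42));
    // ... while i64::MAX + 1 yields None, which the queries map to i64::MAX.
    assert_eq!(BigDecimal::from_str("9223372036854775808").unwrap().to_i64(), None);
}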

View file

@ -140,7 +140,7 @@ impl AuthRequest {
}
pub async fn purge_expired_auth_requests(conn: &mut DbConn) {
let expiry_time = Utc::now().naive_utc() - chrono::Duration::minutes(5); //after 5 minutes, clients reject the request
let expiry_time = Utc::now().naive_utc() - chrono::TimeDelta::try_minutes(5).unwrap(); //after 5 minutes, clients reject the request
for auth_request in Self::find_created_before(&expiry_time, conn).await {
auth_request.delete(conn).await.ok();
}

View file

@ -1,5 +1,5 @@
use crate::CONFIG;
use chrono::{Duration, NaiveDateTime, Utc};
use chrono::{NaiveDateTime, TimeDelta, Utc};
use serde_json::Value;
use super::{
@ -273,7 +273,16 @@ impl Cipher {
None => {
// Belongs to Organization, need to update affected users
if let Some(ref org_uuid) = self.organization_uuid {
for user_org in UserOrganization::find_by_cipher_and_org(&self.uuid, org_uuid, conn).await.iter() {
// users having access to the collection
let mut collection_users =
UserOrganization::find_by_cipher_and_org(&self.uuid, org_uuid, conn).await;
if CONFIG.org_groups_enabled() {
// members of a group having access to the collection
let group_users =
UserOrganization::find_by_cipher_and_org_with_group(&self.uuid, org_uuid, conn).await;
collection_users.extend(group_users);
}
for user_org in collection_users {
User::update_uuid_revision(&user_org.user_uuid, conn).await;
user_uuids.push(user_org.user_uuid.clone())
}
@ -352,7 +361,7 @@ impl Cipher {
pub async fn purge_trash(conn: &mut DbConn) {
if let Some(auto_delete_days) = CONFIG.trash_auto_delete_days() {
let now = Utc::now().naive_utc();
let dt = now - Duration::days(auto_delete_days);
let dt = now - TimeDelta::try_days(auto_delete_days).unwrap();
for cipher in Self::find_deleted_before(&dt, conn).await {
cipher.delete(conn).await.ok();
}
@ -417,9 +426,12 @@ impl Cipher {
cipher_sync_data: Option<&CipherSyncData>,
conn: &mut DbConn,
) -> bool {
if !CONFIG.org_groups_enabled() {
return false;
}
if let Some(ref org_uuid) = self.organization_uuid {
if let Some(cipher_sync_data) = cipher_sync_data {
return cipher_sync_data.user_group_full_access_for_organizations.get(org_uuid).is_some();
return cipher_sync_data.user_group_full_access_for_organizations.contains(org_uuid);
} else {
return Group::is_in_full_access_group(user_uuid, org_uuid, conn).await;
}
@ -512,6 +524,9 @@ impl Cipher {
}
async fn get_group_collections_access_flags(&self, user_uuid: &str, conn: &mut DbConn) -> Vec<(bool, bool)> {
if !CONFIG.org_groups_enabled() {
return Vec::new();
}
db_run! {conn: {
ciphers::table
.filter(ciphers::uuid.eq(&self.uuid))
@ -593,50 +608,84 @@ impl Cipher {
// result, those ciphers will not appear in "My Vault" for the org
// owner/admin, but they can still be accessed via the org vault view.
pub async fn find_by_user(user_uuid: &str, visible_only: bool, conn: &mut DbConn) -> Vec<Self> {
db_run! {conn: {
let mut query = ciphers::table
.left_join(ciphers_collections::table.on(
ciphers::uuid.eq(ciphers_collections::cipher_uuid)
))
.left_join(users_organizations::table.on(
ciphers::organization_uuid.eq(users_organizations::org_uuid.nullable())
.and(users_organizations::user_uuid.eq(user_uuid))
.and(users_organizations::status.eq(UserOrgStatus::Confirmed as i32))
))
.left_join(users_collections::table.on(
ciphers_collections::collection_uuid.eq(users_collections::collection_uuid)
// Ensure that users_collections::user_uuid is NULL for unconfirmed users.
.and(users_organizations::user_uuid.eq(users_collections::user_uuid))
))
.left_join(groups_users::table.on(
groups_users::users_organizations_uuid.eq(users_organizations::uuid)
))
.left_join(groups::table.on(
groups::uuid.eq(groups_users::groups_uuid)
))
.left_join(collections_groups::table.on(
collections_groups::collections_uuid.eq(ciphers_collections::collection_uuid).and(
collections_groups::groups_uuid.eq(groups::uuid)
)
))
.filter(ciphers::user_uuid.eq(user_uuid)) // Cipher owner
.or_filter(users_organizations::access_all.eq(true)) // access_all in org
.or_filter(users_collections::user_uuid.eq(user_uuid)) // Access to collection
.or_filter(groups::access_all.eq(true)) // Access via groups
.or_filter(collections_groups::collections_uuid.is_not_null()) // Access via groups
.into_boxed();
if CONFIG.org_groups_enabled() {
db_run! {conn: {
let mut query = ciphers::table
.left_join(ciphers_collections::table.on(
ciphers::uuid.eq(ciphers_collections::cipher_uuid)
))
.left_join(users_organizations::table.on(
ciphers::organization_uuid.eq(users_organizations::org_uuid.nullable())
.and(users_organizations::user_uuid.eq(user_uuid))
.and(users_organizations::status.eq(UserOrgStatus::Confirmed as i32))
))
.left_join(users_collections::table.on(
ciphers_collections::collection_uuid.eq(users_collections::collection_uuid)
// Ensure that users_collections::user_uuid is NULL for unconfirmed users.
.and(users_organizations::user_uuid.eq(users_collections::user_uuid))
))
.left_join(groups_users::table.on(
groups_users::users_organizations_uuid.eq(users_organizations::uuid)
))
.left_join(groups::table.on(
groups::uuid.eq(groups_users::groups_uuid)
))
.left_join(collections_groups::table.on(
collections_groups::collections_uuid.eq(ciphers_collections::collection_uuid).and(
collections_groups::groups_uuid.eq(groups::uuid)
)
))
.filter(ciphers::user_uuid.eq(user_uuid)) // Cipher owner
.or_filter(users_organizations::access_all.eq(true)) // access_all in org
.or_filter(users_collections::user_uuid.eq(user_uuid)) // Access to collection
.or_filter(groups::access_all.eq(true)) // Access via groups
.or_filter(collections_groups::collections_uuid.is_not_null()) // Access via groups
.into_boxed();
if !visible_only {
query = query.or_filter(
users_organizations::atype.le(UserOrgType::Admin as i32) // Org admin/owner
);
}
if !visible_only {
query = query.or_filter(
users_organizations::atype.le(UserOrgType::Admin as i32) // Org admin/owner
);
}
query
.select(ciphers::all_columns)
.distinct()
.load::<CipherDb>(conn).expect("Error loading ciphers").from_db()
}}
query
.select(ciphers::all_columns)
.distinct()
.load::<CipherDb>(conn).expect("Error loading ciphers").from_db()
}}
} else {
db_run! {conn: {
let mut query = ciphers::table
.left_join(ciphers_collections::table.on(
ciphers::uuid.eq(ciphers_collections::cipher_uuid)
))
.left_join(users_organizations::table.on(
ciphers::organization_uuid.eq(users_organizations::org_uuid.nullable())
.and(users_organizations::user_uuid.eq(user_uuid))
.and(users_organizations::status.eq(UserOrgStatus::Confirmed as i32))
))
.left_join(users_collections::table.on(
ciphers_collections::collection_uuid.eq(users_collections::collection_uuid)
// Ensure that users_collections::user_uuid is NULL for unconfirmed users.
.and(users_organizations::user_uuid.eq(users_collections::user_uuid))
))
.filter(ciphers::user_uuid.eq(user_uuid)) // Cipher owner
.or_filter(users_organizations::access_all.eq(true)) // access_all in org
.or_filter(users_collections::user_uuid.eq(user_uuid)) // Access to collection
.into_boxed();
if !visible_only {
query = query.or_filter(
users_organizations::atype.le(UserOrgType::Admin as i32) // Org admin/owner
);
}
query
.select(ciphers::all_columns)
.distinct()
.load::<CipherDb>(conn).expect("Error loading ciphers").from_db()
}}
}
}
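
Each branch above boxes its query with into_boxed() so the visible_only filter can be appended conditionally, while the org_groups_enabled split avoids the group joins entirely when groups are disabled. A self-contained sketch of the boxing pattern, assuming Diesel with the sqlite feature:

use diesel::prelude::*;

diesel::table! {
    users (id) {
        id -> Integer,
        is_admin -> Bool,
    }
}

// Both branches extend the same boxed query, mirroring find_by_user above.
fn build_query(admins_only: bool) -> users::BoxedQuery<'static, diesel::sqlite::Sqlite> {
    let mut query = users::table.into_boxed();
    if admins_only {
        query = query.filter(users::is_admin.eq(true));
    }
    query
}

fn main() {
    let _ = build_query(true);
}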
// Find all ciphers visible to the specified user.

View file

@ -1,6 +1,7 @@
use serde_json::Value;
use super::{CollectionGroup, User, UserOrgStatus, UserOrgType, UserOrganization};
use super::{CollectionGroup, GroupUser, User, UserOrgStatus, UserOrgType, UserOrganization};
use crate::CONFIG;
db_object! {
#[derive(Identifiable, Queryable, Insertable, AsChangeset)]
@ -101,6 +102,15 @@ impl Collection {
json_object["HidePasswords"] = json!(hide_passwords);
json_object
}
pub async fn can_access_collection(org_user: &UserOrganization, col_id: &str, conn: &mut DbConn) -> bool {
org_user.has_status(UserOrgStatus::Confirmed)
&& (org_user.has_full_access()
|| CollectionUser::has_access_to_collection_by_user(col_id, &org_user.user_uuid, conn).await
|| (CONFIG.org_groups_enabled()
&& (GroupUser::has_full_access_by_member(&org_user.org_uuid, &org_user.uuid, conn).await
|| GroupUser::has_access_to_collection_by_member(col_id, &org_user.uuid, conn).await)))
}
}
use crate::db::DbConn;
@ -181,58 +191,74 @@ impl Collection {
}
pub async fn find_by_user_uuid(user_uuid: String, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
collections::table
.left_join(users_collections::table.on(
users_collections::collection_uuid.eq(collections::uuid).and(
users_collections::user_uuid.eq(user_uuid.clone())
if CONFIG.org_groups_enabled() {
db_run! { conn: {
collections::table
.left_join(users_collections::table.on(
users_collections::collection_uuid.eq(collections::uuid).and(
users_collections::user_uuid.eq(user_uuid.clone())
)
))
.left_join(users_organizations::table.on(
collections::org_uuid.eq(users_organizations::org_uuid).and(
users_organizations::user_uuid.eq(user_uuid.clone())
)
))
.left_join(groups_users::table.on(
groups_users::users_organizations_uuid.eq(users_organizations::uuid)
))
.left_join(groups::table.on(
groups::uuid.eq(groups_users::groups_uuid)
))
.left_join(collections_groups::table.on(
collections_groups::groups_uuid.eq(groups_users::groups_uuid).and(
collections_groups::collections_uuid.eq(collections::uuid)
)
))
.filter(
users_organizations::status.eq(UserOrgStatus::Confirmed as i32)
)
))
.left_join(users_organizations::table.on(
collections::org_uuid.eq(users_organizations::org_uuid).and(
users_organizations::user_uuid.eq(user_uuid.clone())
)
))
.left_join(groups_users::table.on(
groups_users::users_organizations_uuid.eq(users_organizations::uuid)
))
.left_join(groups::table.on(
groups::uuid.eq(groups_users::groups_uuid)
))
.left_join(collections_groups::table.on(
collections_groups::groups_uuid.eq(groups_users::groups_uuid).and(
collections_groups::collections_uuid.eq(collections::uuid)
)
))
.filter(
users_organizations::status.eq(UserOrgStatus::Confirmed as i32)
)
.filter(
users_collections::user_uuid.eq(user_uuid).or( // Directly accessed collection
users_organizations::access_all.eq(true) // access_all in Organization
).or(
groups::access_all.eq(true) // access_all in groups
).or( // access via groups
groups_users::users_organizations_uuid.eq(users_organizations::uuid).and(
collections_groups::collections_uuid.is_not_null()
.filter(
users_collections::user_uuid.eq(user_uuid).or( // Directly accessed collection
users_organizations::access_all.eq(true) // access_all in Organization
).or(
groups::access_all.eq(true) // access_all in groups
).or( // access via groups
groups_users::users_organizations_uuid.eq(users_organizations::uuid).and(
collections_groups::collections_uuid.is_not_null()
)
)
)
)
.select(collections::all_columns)
.distinct()
.load::<CollectionDb>(conn).expect("Error loading collections").from_db()
}}
}
// Check if a user has access to a specific collection
// FIXME: This needs to be reviewed. The query used by `find_by_user_uuid` could be adjusted to filter when needed.
// For now this is a good solution without making too many changes.
pub async fn has_access_by_collection_and_user_uuid(
collection_uuid: &str,
user_uuid: &str,
conn: &mut DbConn,
) -> bool {
Self::find_by_user_uuid(user_uuid.to_owned(), conn).await.into_iter().any(|c| c.uuid == collection_uuid)
.select(collections::all_columns)
.distinct()
.load::<CollectionDb>(conn).expect("Error loading collections").from_db()
}}
} else {
db_run! { conn: {
collections::table
.left_join(users_collections::table.on(
users_collections::collection_uuid.eq(collections::uuid).and(
users_collections::user_uuid.eq(user_uuid.clone())
)
))
.left_join(users_organizations::table.on(
collections::org_uuid.eq(users_organizations::org_uuid).and(
users_organizations::user_uuid.eq(user_uuid.clone())
)
))
.filter(
users_organizations::status.eq(UserOrgStatus::Confirmed as i32)
)
.filter(
users_collections::user_uuid.eq(user_uuid).or( // Directly accessed collection
users_organizations::access_all.eq(true) // access_all in Organization
)
)
.select(collections::all_columns)
.distinct()
.load::<CollectionDb>(conn).expect("Error loading collections").from_db()
}}
}
}
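
Editor's note: the new branch on CONFIG.org_groups_enabled() means the group-related LEFT JOINs never enter the query when groups are disabled, instead of being filtered out afterwards. A rough stand-in (plain Rust, not Diesel) for that shape:

fn joined_tables(groups_enabled: bool) -> Vec<&'static str> {
    // Base joins are always present; group tables are added only on demand.
    let mut joins = vec!["users_collections", "users_organizations"];
    if groups_enabled {
        joins.extend(["groups_users", "groups", "collections_groups"]);
    }
    joins
}

fn main() {
    assert_eq!(joined_tables(false).len(), 2);
    assert_eq!(joined_tables(true).len(), 5);
}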
pub async fn find_by_organization_and_user_uuid(org_uuid: &str, user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
@ -277,45 +303,70 @@ impl Collection {
}
pub async fn find_by_uuid_and_user(uuid: &str, user_uuid: String, conn: &mut DbConn) -> Option<Self> {
db_run! { conn: {
collections::table
.left_join(users_collections::table.on(
users_collections::collection_uuid.eq(collections::uuid).and(
users_collections::user_uuid.eq(user_uuid.clone())
)
))
.left_join(users_organizations::table.on(
collections::org_uuid.eq(users_organizations::org_uuid).and(
users_organizations::user_uuid.eq(user_uuid)
)
))
.left_join(groups_users::table.on(
groups_users::users_organizations_uuid.eq(users_organizations::uuid)
))
.left_join(groups::table.on(
groups::uuid.eq(groups_users::groups_uuid)
))
.left_join(collections_groups::table.on(
collections_groups::groups_uuid.eq(groups_users::groups_uuid).and(
collections_groups::collections_uuid.eq(collections::uuid)
)
))
.filter(collections::uuid.eq(uuid))
.filter(
users_collections::collection_uuid.eq(uuid).or( // Directly accessed collection
users_organizations::access_all.eq(true).or( // access_all in Organization
users_organizations::atype.le(UserOrgType::Admin as i32) // Org admin or owner
)).or(
groups::access_all.eq(true) // access_all in groups
).or( // access via groups
groups_users::users_organizations_uuid.eq(users_organizations::uuid).and(
collections_groups::collections_uuid.is_not_null()
if CONFIG.org_groups_enabled() {
db_run! { conn: {
collections::table
.left_join(users_collections::table.on(
users_collections::collection_uuid.eq(collections::uuid).and(
users_collections::user_uuid.eq(user_uuid.clone())
)
)
).select(collections::all_columns)
.first::<CollectionDb>(conn).ok()
.from_db()
}}
))
.left_join(users_organizations::table.on(
collections::org_uuid.eq(users_organizations::org_uuid).and(
users_organizations::user_uuid.eq(user_uuid)
)
))
.left_join(groups_users::table.on(
groups_users::users_organizations_uuid.eq(users_organizations::uuid)
))
.left_join(groups::table.on(
groups::uuid.eq(groups_users::groups_uuid)
))
.left_join(collections_groups::table.on(
collections_groups::groups_uuid.eq(groups_users::groups_uuid).and(
collections_groups::collections_uuid.eq(collections::uuid)
)
))
.filter(collections::uuid.eq(uuid))
.filter(
users_collections::collection_uuid.eq(uuid).or( // Directly accessed collection
users_organizations::access_all.eq(true).or( // access_all in Organization
users_organizations::atype.le(UserOrgType::Admin as i32) // Org admin or owner
)).or(
groups::access_all.eq(true) // access_all in groups
).or( // access via groups
groups_users::users_organizations_uuid.eq(users_organizations::uuid).and(
collections_groups::collections_uuid.is_not_null()
)
)
).select(collections::all_columns)
.first::<CollectionDb>(conn).ok()
.from_db()
}}
} else {
db_run! { conn: {
collections::table
.left_join(users_collections::table.on(
users_collections::collection_uuid.eq(collections::uuid).and(
users_collections::user_uuid.eq(user_uuid.clone())
)
))
.left_join(users_organizations::table.on(
collections::org_uuid.eq(users_organizations::org_uuid).and(
users_organizations::user_uuid.eq(user_uuid)
)
))
.filter(collections::uuid.eq(uuid))
.filter(
users_collections::collection_uuid.eq(uuid).or( // Directly accessed collection
users_organizations::access_all.eq(true).or( // access_all in Organization
users_organizations::atype.le(UserOrgType::Admin as i32) // Org admin or owner
))
).select(collections::all_columns)
.first::<CollectionDb>(conn).ok()
.from_db()
}}
}
}
pub async fn is_writable_by_user(&self, user_uuid: &str, conn: &mut DbConn) -> bool {
@ -591,6 +642,10 @@ impl CollectionUser {
Ok(())
}}
}
pub async fn has_access_to_collection_by_user(col_id: &str, user_uuid: &str, conn: &mut DbConn) -> bool {
Self::find_by_collection_and_user(col_id, user_uuid, conn).await.is_some()
}
}
/// Database methods

View file

@ -59,12 +59,7 @@ impl Device {
self.twofactor_remember = None;
}
pub fn refresh_tokens(
&mut self,
user: &super::User,
orgs: Vec<super::UserOrganization>,
scope: Vec<String>,
) -> (String, i64) {
pub fn refresh_tokens(&mut self, user: &super::User, scope: Vec<String>) -> (String, i64) {
// If there is no refresh token, we create one
if self.refresh_token.is_empty() {
use data_encoding::BASE64URL;
@ -72,13 +67,20 @@ impl Device {
}
// Update the expiration of the device and the last update date
let time_now = Utc::now().naive_utc();
self.updated_at = time_now;
let time_now = Utc::now();
self.updated_at = time_now.naive_utc();
let orgowner: Vec<_> = orgs.iter().filter(|o| o.atype == 0).map(|o| o.org_uuid.clone()).collect();
let orgadmin: Vec<_> = orgs.iter().filter(|o| o.atype == 1).map(|o| o.org_uuid.clone()).collect();
let orguser: Vec<_> = orgs.iter().filter(|o| o.atype == 2).map(|o| o.org_uuid.clone()).collect();
let orgmanager: Vec<_> = orgs.iter().filter(|o| o.atype == 3).map(|o| o.org_uuid.clone()).collect();
// ---
// Disabled adding these keys to the JWT since they could cause it to get too large.
// Also, these key/value pairs are not used anywhere by either Vaultwarden or the Bitwarden clients.
// Because they might get used in the future, and they are added by the Bitwarden server, keep them, but commented out.
// ---
// fn arg: orgs: Vec<super::UserOrganization>,
// ---
// let orgowner: Vec<_> = orgs.iter().filter(|o| o.atype == 0).map(|o| o.org_uuid.clone()).collect();
// let orgadmin: Vec<_> = orgs.iter().filter(|o| o.atype == 1).map(|o| o.org_uuid.clone()).collect();
// let orguser: Vec<_> = orgs.iter().filter(|o| o.atype == 2).map(|o| o.org_uuid.clone()).collect();
// let orgmanager: Vec<_> = orgs.iter().filter(|o| o.atype == 3).map(|o| o.org_uuid.clone()).collect();
// Create the JWT claims struct, to send to the client
use crate::auth::{encode_jwt, LoginJwtClaims, DEFAULT_VALIDITY, JWT_LOGIN_ISSUER};
@ -93,11 +95,16 @@ impl Device {
email: user.email.clone(),
email_verified: !CONFIG.mail_enabled() || user.verified_at.is_some(),
orgowner,
orgadmin,
orguser,
orgmanager,
// ---
// Disabled adding these keys to the JWT since they could cause it to get too large.
// Also, these key/value pairs are not used anywhere by either Vaultwarden or the Bitwarden clients.
// Because they might get used in the future, and they are added by the Bitwarden server, keep them, but commented out.
// See: https://github.com/dani-garcia/vaultwarden/issues/4156
// ---
// orgowner,
// orgadmin,
// orguser,
// orgmanager,
sstamp: user.security_stamp.clone(),
device: self.uuid.clone(),
scope,
@ -106,6 +113,14 @@ impl Device {
(encode_jwt(&claims), DEFAULT_VALIDITY.num_seconds())
}
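
Editor's note: a back-of-the-envelope sketch of the size concern named in the comments above. The figures are illustrative assumptions (canonical 36-character UUID strings, roughly 4/3 base64url expansion), not measurements from Vaultwarden:

fn main() {
    let uuid_len = 36; // canonical UUID string length
    for orgs in [1usize, 50, 500] {
        // Each org UUID would land in one of the four role arrays, plus
        // quotes and a comma in the serialized JSON claims.
        let json_bytes = orgs * (uuid_len + 3);
        // base64url encoding expands the payload by roughly 4/3.
        let jwt_bytes = json_bytes * 4 / 3;
        println!("{orgs} orgs -> roughly {jwt_bytes} extra JWT bytes");
    }
}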
pub fn is_push_device(&self) -> bool {
matches!(DeviceType::from_i32(self.atype), DeviceType::Android | DeviceType::Ios)
}
pub fn is_registered(&self) -> bool {
self.push_uuid.is_some()
}
}
use crate::db::DbConn;
@ -203,6 +218,7 @@ impl Device {
.from_db()
}}
}
pub async fn find_push_devices_by_user(user_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
devices::table

View file

@ -81,25 +81,32 @@ impl EmergencyAccess {
})
}
pub async fn to_json_grantee_details(&self, conn: &mut DbConn) -> Value {
pub async fn to_json_grantee_details(&self, conn: &mut DbConn) -> Option<Value> {
let grantee_user = if let Some(grantee_uuid) = self.grantee_uuid.as_deref() {
Some(User::find_by_uuid(grantee_uuid, conn).await.expect("Grantee user not found."))
User::find_by_uuid(grantee_uuid, conn).await.expect("Grantee user not found.")
} else if let Some(email) = self.email.as_deref() {
Some(User::find_by_mail(email, conn).await.expect("Grantee user not found."))
match User::find_by_mail(email, conn).await {
Some(user) => user,
None => {
// remove outstanding invitations which should not exist
let _ = Self::delete_all_by_grantee_email(email, conn).await;
return None;
}
}
} else {
None
return None;
};
json!({
Some(json!({
"Id": self.uuid,
"Status": self.status,
"Type": self.atype,
"WaitTimeDays": self.wait_time_days,
"GranteeId": grantee_user.as_ref().map_or("", |u| &u.uuid),
"Email": grantee_user.as_ref().map_or("", |u| &u.email),
"Name": grantee_user.as_ref().map_or("", |u| &u.name),
"GranteeId": grantee_user.uuid,
"Email": grantee_user.email,
"Name": grantee_user.name,
"Object": "emergencyAccessGranteeDetails",
})
}))
}
}
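
Editor's note: returning Option here lets callers skip orphaned grantees instead of rendering empty placeholder objects. A minimal sketch of that calling pattern, with hypothetical types:

struct Entry {
    grantee_email: Option<String>,
}

impl Entry {
    // None means "nothing to render", like to_json_grantee_details above.
    fn to_json(&self) -> Option<String> {
        self.grantee_email.as_ref().map(|e| format!("{{\"Email\":\"{e}\"}}"))
    }
}

fn main() {
    let list = vec![
        Entry { grantee_email: Some("grantee@example.com".into()) },
        Entry { grantee_email: None }, // orphaned invitation
    ];
    // filter_map drops the None entries silently.
    let rendered: Vec<String> = list.iter().filter_map(Entry::to_json).collect();
    assert_eq!(rendered.len(), 1);
}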
@ -174,7 +181,7 @@ impl EmergencyAccess {
// Update the grantee so that it will refresh its status.
User::update_uuid_revision(self.grantee_uuid.as_ref().expect("Error getting grantee"), conn).await;
self.status = status;
self.updated_at = date.to_owned();
date.clone_into(&mut self.updated_at);
db_run! {conn: {
crate::util::retry(|| {
@ -192,7 +199,7 @@ impl EmergencyAccess {
conn: &mut DbConn,
) -> EmptyResult {
self.last_notification_at = Some(date.to_owned());
self.updated_at = date.to_owned();
date.clone_into(&mut self.updated_at);
db_run! {conn: {
crate::util::retry(|| {
@ -214,6 +221,13 @@ impl EmergencyAccess {
Ok(())
}
pub async fn delete_all_by_grantee_email(grantee_email: &str, conn: &mut DbConn) -> EmptyResult {
for ea in Self::find_all_invited_by_grantee_email(grantee_email, conn).await {
ea.delete(conn).await?;
}
Ok(())
}
pub async fn delete(self, conn: &mut DbConn) -> EmptyResult {
User::update_uuid_revision(&self.grantor_uuid, conn).await;
@ -285,6 +299,15 @@ impl EmergencyAccess {
}}
}
pub async fn find_all_invited_by_grantee_email(grantee_email: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
emergency_access::table
.filter(emergency_access::email.eq(grantee_email))
.filter(emergency_access::status.eq(EmergencyAccessStatus::Invited as i32))
.load::<EmergencyAccessDb>(conn).expect("Error loading emergency_access").from_db()
}}
}
pub async fn find_all_by_grantor_uuid(grantor_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
emergency_access::table
@ -292,6 +315,21 @@ impl EmergencyAccess {
.load::<EmergencyAccessDb>(conn).expect("Error loading emergency_access").from_db()
}}
}
pub async fn accept_invite(&mut self, grantee_uuid: &str, grantee_email: &str, conn: &mut DbConn) -> EmptyResult {
if self.email.is_none() || self.email.as_ref().unwrap() != grantee_email {
err!("User email does not match invite.");
}
if self.status == EmergencyAccessStatus::Accepted as i32 {
err!("Emergency contact already accepted.");
}
self.status = EmergencyAccessStatus::Accepted as i32;
self.grantee_uuid = Some(String::from(grantee_uuid));
self.email = None;
self.save(conn).await
}
}
// endregion

View file

@ -3,7 +3,7 @@ use serde_json::Value;
use crate::{api::EmptyResult, error::MapResult, CONFIG};
use chrono::{Duration, NaiveDateTime, Utc};
use chrono::{NaiveDateTime, TimeDelta, Utc};
// https://bitwarden.com/help/event-logs/
@ -316,7 +316,7 @@ impl Event {
pub async fn clean_events(conn: &mut DbConn) -> EmptyResult {
if let Some(days_to_retain) = CONFIG.events_days_retain() {
let dt = Utc::now().naive_utc() - Duration::days(days_to_retain);
let dt = Utc::now().naive_utc() - TimeDelta::try_days(days_to_retain).unwrap();
db_run! { conn: {
diesel::delete(event::table.filter(event::event_date.lt(dt)))
.execute(conn)

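Editor's note: a minimal sketch of the chrono migration in this hunk. In current chrono 0.4 releases, Duration is an alias for TimeDelta, and the fallible try_days constructor replaces the panicking Duration::days:

use chrono::{TimeDelta, Utc};

fn main() {
    let days_to_retain: i64 = 30;
    // try_days returns None for out-of-range values instead of panicking.
    let delta = TimeDelta::try_days(days_to_retain).expect("retention period in range");
    let cutoff = Utc::now().naive_utc() - delta;
    println!("deleting events older than {cutoff}");
}
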
View file

@ -486,6 +486,39 @@ impl GroupUser {
}}
}
pub async fn has_access_to_collection_by_member(
collection_uuid: &str,
member_uuid: &str,
conn: &mut DbConn,
) -> bool {
db_run! { conn: {
groups_users::table
.inner_join(collections_groups::table.on(
collections_groups::groups_uuid.eq(groups_users::groups_uuid)
))
.filter(collections_groups::collections_uuid.eq(collection_uuid))
.filter(groups_users::users_organizations_uuid.eq(member_uuid))
.count()
.first::<i64>(conn)
.unwrap_or(0) != 0
}}
}
pub async fn has_full_access_by_member(org_uuid: &str, member_uuid: &str, conn: &mut DbConn) -> bool {
db_run! { conn: {
groups_users::table
.inner_join(groups::table.on(
groups::uuid.eq(groups_users::groups_uuid)
))
.filter(groups::organizations_uuid.eq(org_uuid))
.filter(groups::access_all.eq(true))
.filter(groups_users::users_organizations_uuid.eq(member_uuid))
.count()
.first::<i64>(conn)
.unwrap_or(0) != 0
}}
}
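
Editor's note: a std-only model of what these two queries compute (the real code does it in SQL through Diesel; the in-memory layout below is illustrative only):

use std::collections::{HashMap, HashSet};

struct Groups {
    member_groups: HashMap<String, HashSet<String>>,     // member uuid -> group uuids
    collection_groups: HashMap<String, HashSet<String>>, // collection uuid -> group uuids
    access_all_groups: HashSet<String>,                  // groups with access_all set
}

impl Groups {
    // has_access_to_collection_by_member: any group shared between the
    // member and the collection grants access.
    fn member_can_access(&self, member: &str, collection: &str) -> bool {
        match (self.member_groups.get(member), self.collection_groups.get(collection)) {
            (Some(mine), Some(linked)) => !mine.is_disjoint(linked),
            _ => false,
        }
    }

    // has_full_access_by_member: any of the member's groups having
    // access_all grants org-wide access.
    fn member_has_full_access(&self, member: &str) -> bool {
        self.member_groups.get(member).map_or(false, |mine| !mine.is_disjoint(&self.access_all_groups))
    }
}

fn main() {
    let groups = Groups {
        member_groups: HashMap::from([("m1".to_string(), HashSet::from(["g1".to_string()]))]),
        collection_groups: HashMap::from([("c1".to_string(), HashSet::from(["g1".to_string()]))]),
        access_all_groups: HashSet::new(),
    };
    assert!(groups.member_can_access("m1", "c1"));
    assert!(!groups.member_has_full_access("m1"));
}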
pub async fn update_user_revision(&self, conn: &mut DbConn) {
match UserOrganization::find_by_uuid(&self.users_organizations_uuid, conn).await {
Some(user) => User::update_uuid_revision(&user.user_uuid, conn).await,

View file

@ -340,4 +340,11 @@ impl OrgPolicy {
}
false
}
pub async fn is_enabled_by_org(org_uuid: &str, policy_type: OrgPolicyType, conn: &mut DbConn) -> bool {
if let Some(policy) = OrgPolicy::find_by_org_and_type(org_uuid, policy_type, conn).await {
return policy.enabled;
}
false
}
}

View file

@ -214,7 +214,7 @@ impl UserOrganization {
}
pub fn restore(&mut self) -> bool {
if self.status < UserOrgStatus::Accepted as i32 {
if self.status < UserOrgStatus::Invited as i32 {
self.status += ACTIVATE_REVOKE_DIFF;
return true;
}
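
Editor's note: a worked sketch of why the bound moves from Accepted to Invited. Assumption for illustration: revoked members store their active status minus a fixed offset (ACTIVATE_REVOKE_DIFF; 128 below is an illustrative value), so only negative statuses are actually revoked:

const ACTIVATE_REVOKE_DIFF: i32 = 128; // illustrative value
const INVITED: i32 = 0;
const ACCEPTED: i32 = 1;

fn main() {
    let revoked_invited = INVITED - ACTIVATE_REVOKE_DIFF; // negative => revoked
    // The old bound (< ACCEPTED) also matched a plain Invited member (0) and
    // would have shifted it out of range; the new bound (< INVITED) matches
    // only genuinely revoked, i.e. negative, statuses.
    assert!(revoked_invited < INVITED);
    assert!(!(INVITED < INVITED) && INVITED < ACCEPTED);
}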
@ -316,6 +316,7 @@ impl Organization {
UserOrganization::delete_all_by_organization(&self.uuid, conn).await?;
OrgPolicy::delete_all_by_organization(&self.uuid, conn).await?;
Group::delete_all_by_organization(&self.uuid, conn).await?;
OrganizationApiKey::delete_all_by_organization(&self.uuid, conn).await?;
db_run! { conn: {
diesel::delete(organizations::table.filter(organizations::uuid.eq(self.uuid)))
@ -344,6 +345,25 @@ impl UserOrganization {
pub async fn to_json(&self, conn: &mut DbConn) -> Value {
let org = Organization::find_by_uuid(&self.org_uuid, conn).await.unwrap();
let permissions = json!({
// TODO: Add support for Custom User Roles
// See: https://bitwarden.com/help/article/user-types-access-control/#custom-role
"accessEventLogs": false,
"accessImportExport": false,
"accessReports": false,
"createNewCollections": false,
"editAnyCollection": false,
"deleteAnyCollection": false,
"editAssignedCollections": false,
"deleteAssignedCollections": false,
"manageGroups": false,
"managePolicies": false,
"manageSso": false, // Not supported
"manageUsers": false,
"manageResetPassword": false,
"manageScim": false // Not supported (Not AGPLv3 Licensed)
});
// https://github.com/bitwarden/server/blob/13d1e74d6960cf0d042620b72d85bf583a4236f7/src/Api/Models/Response/ProfileOrganizationResponseModel.cs
json!({
"Id": self.org_uuid,
@ -371,27 +391,7 @@ impl UserOrganization {
// "KeyConnectorEnabled": false,
// "KeyConnectorUrl": null,
// TODO: Add support for Custom User Roles
// See: https://bitwarden.com/help/article/user-types-access-control/#custom-role
// "Permissions": {
// "AccessEventLogs": false,
// "AccessImportExport": false,
// "AccessReports": false,
// "ManageAllCollections": false,
// "CreateNewCollections": false,
// "EditAnyCollection": false,
// "DeleteAnyCollection": false,
// "ManageAssignedCollections": false,
// "editAssignedCollections": false,
// "deleteAssignedCollections": false,
// "ManageCiphers": false,
// "ManageGroups": false,
// "ManagePolicies": false,
// "ManageResetPassword": false,
// "ManageSso": false, // Not supported
// "ManageUsers": false,
// "ManageScim": false, // Not supported (Not AGPLv3 Licensed)
// },
"permissions": permissions,
"MaxStorageGb": 10, // The value doesn't matter, we don't check server-side
@ -648,8 +648,7 @@ impl UserOrganization {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::user_uuid.eq(user_uuid))
.filter(users_organizations::status.eq(UserOrgStatus::Accepted as i32))
.or_filter(users_organizations::status.eq(UserOrgStatus::Confirmed as i32))
.filter(users_organizations::status.eq(UserOrgStatus::Accepted as i32).or(users_organizations::status.eq(UserOrgStatus::Confirmed as i32)))
.count()
.first::<i64>(conn)
.unwrap_or(0)
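
Editor's note: the likely motivation for collapsing the two filters into one. Diesel's .or_filter() ORs against the entire existing WHERE clause, so the old form grouped as (user AND accepted) OR confirmed. A boolean sketch of the difference:

fn matches(user_ok: bool, accepted: bool, confirmed: bool) -> (bool, bool) {
    let old = user_ok && accepted || confirmed; // (user AND accepted) OR confirmed
    let new = user_ok && (accepted || confirmed);
    (old, new)
}

fn main() {
    // A confirmed membership belonging to a different user: the old predicate
    // counted it, the grouped one does not.
    assert_eq!(matches(false, false, true), (true, false));
}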
@ -665,6 +664,16 @@ impl UserOrganization {
}}
}
pub async fn find_confirmed_by_org(org_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::org_uuid.eq(org_uuid))
.filter(users_organizations::status.eq(UserOrgStatus::Confirmed as i32))
.load::<UserOrganizationDb>(conn)
.unwrap_or_default().from_db()
}}
}
pub async fn count_by_org(org_uuid: &str, conn: &mut DbConn) -> i64 {
db_run! { conn: {
users_organizations::table
@ -769,6 +778,32 @@ impl UserOrganization {
}}
}
pub async fn find_by_cipher_and_org_with_group(cipher_uuid: &str, org_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! { conn: {
users_organizations::table
.filter(users_organizations::org_uuid.eq(org_uuid))
.inner_join(groups_users::table.on(
groups_users::users_organizations_uuid.eq(users_organizations::uuid)
))
.left_join(collections_groups::table.on(
collections_groups::groups_uuid.eq(groups_users::groups_uuid)
))
.left_join(groups::table.on(groups::uuid.eq(groups_users::groups_uuid)))
.left_join(ciphers_collections::table.on(
ciphers_collections::collection_uuid.eq(collections_groups::collections_uuid).and(ciphers_collections::cipher_uuid.eq(&cipher_uuid))
))
.filter(
groups::access_all.eq(true).or( // AccessAll via groups
ciphers_collections::cipher_uuid.eq(&cipher_uuid) // ..or access to collection via group
)
)
.select(users_organizations::all_columns)
.distinct()
.load::<UserOrganizationDb>(conn).expect("Error loading user organizations with groups").from_db()
}}
}
pub async fn user_has_ge_admin_access_to_cipher(user_uuid: &str, cipher_uuid: &str, conn: &mut DbConn) -> bool {
db_run! { conn: {
users_organizations::table
@ -852,6 +887,14 @@ impl OrganizationApiKey {
.ok().from_db()
}}
}
pub async fn delete_all_by_organization(org_uuid: &str, conn: &mut DbConn) -> EmptyResult {
db_run! { conn: {
diesel::delete(organization_api_key::table.filter(organization_api_key::org_uuid.eq(org_uuid)))
.execute(conn)
.map_res("Error removing organization api key from organization")
}}
}
}
#[cfg(test)]

View file

@ -172,6 +172,7 @@ use crate::db::DbConn;
use crate::api::EmptyResult;
use crate::error::MapResult;
use crate::util::NumberOrString;
impl Send {
pub async fn save(&mut self, conn: &mut DbConn) -> EmptyResult {
@ -286,6 +287,36 @@ impl Send {
}}
}
pub async fn size_by_user(user_uuid: &str, conn: &mut DbConn) -> Option<i64> {
let sends = Self::find_by_user(user_uuid, conn).await;
#[allow(non_snake_case)]
#[derive(serde::Deserialize, Default)]
struct FileData {
Size: Option<NumberOrString>,
size: Option<NumberOrString>,
}
let mut total: i64 = 0;
for send in sends {
if send.atype == SendType::File as i32 {
let data: FileData = serde_json::from_str(&send.data).unwrap_or_default();
let size = match (data.size, data.Size) {
(Some(s), _) => s.into_i64(),
(_, Some(s)) => s.into_i64(),
(None, None) => continue,
};
if let Ok(size) = size {
total = total.checked_add(size)?;
};
}
}
Some(total)
}
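
Editor's note: a trimmed sketch of the parsing strategy in size_by_user above. The real code uses Vaultwarden's NumberOrString helper to accept both numeric and string sizes; plain strings stand in for it here, while checked_add still guards the running total against overflow:

use serde::Deserialize;

#[allow(non_snake_case)]
#[derive(Deserialize, Default)]
struct FileData {
    Size: Option<String>, // legacy upper-case key
    size: Option<String>,
}

fn send_size(raw: &str) -> Option<i64> {
    let data: FileData = serde_json::from_str(raw).unwrap_or_default();
    data.size.or(data.Size)?.parse().ok()
}

fn main() {
    let mut total: i64 = 0;
    for raw in [r#"{"Size":"1024"}"#, r#"{"size":"2048"}"#] {
        if let Some(size) = send_size(raw) {
            // Stop on overflow instead of silently wrapping.
            total = total.checked_add(size).expect("size sum overflowed");
        }
    }
    assert_eq!(total, 3072);
}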
pub async fn find_by_org(org_uuid: &str, conn: &mut DbConn) -> Vec<Self> {
db_run! {conn: {
sends::table

Datei anzeigen

@ -12,7 +12,7 @@ db_object! {
pub atype: i32,
pub enabled: bool,
pub data: String,
pub last_used: i32,
pub last_used: i64,
}
}
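
Editor's note: the likely reason for widening last_used from i32 to i64 (and the matching Integer -> BigInt schema changes further down): a 32-bit Unix timestamp overflows in January 2038:

fn main() {
    let max_i32_ts = i32::MAX as i64; // 2_147_483_647 s => 2038-01-19T03:14:07Z
    let year_2100: i64 = 4_102_444_800; // 2100-01-01T00:00:00Z in Unix seconds
    assert!(year_2100 > max_i32_ts); // fits comfortably in i64, not in i32
    println!("i32 timestamps run out at {max_i32_ts}; i64 is fine past {year_2100}");
}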
@ -34,6 +34,9 @@ pub enum TwoFactorType {
EmailVerificationChallenge = 1002,
WebauthnRegisterChallenge = 1003,
WebauthnLoginChallenge = 1004,
// Special type for Protected Actions verification via email
ProtectedActions = 2000,
}
/// Local methods

View file

@ -1,4 +1,4 @@
use chrono::{Duration, NaiveDateTime, Utc};
use chrono::{NaiveDateTime, TimeDelta, Utc};
use serde_json::Value;
use crate::crypto;
@ -202,7 +202,7 @@ impl User {
let stamp_exception = UserStampException {
routes: route_exception,
security_stamp: self.security_stamp.clone(),
expire: (Utc::now().naive_utc() + Duration::minutes(2)).timestamp(),
expire: (Utc::now() + TimeDelta::try_minutes(2).unwrap()).timestamp(),
};
self.stamp_exception = Some(serde_json::to_string(&stamp_exception).unwrap_or_default());
}
@ -246,6 +246,7 @@ impl User {
"Email": self.email,
"EmailVerified": !CONFIG.mail_enabled() || self.verified_at.is_some(),
"Premium": true,
"PremiumFromOrganization": false,
"MasterPasswordHint": self.password_hint,
"Culture": "en-US",
"TwoFactorEnabled": twofactor_enabled,
@ -257,6 +258,7 @@ impl User {
"ProviderOrganizations": [],
"ForcePasswordReset": false,
"AvatarColor": self.avatar_color,
"UsesKeyConnector": false,
"Object": "profile",
})
}
@ -311,6 +313,7 @@ impl User {
Send::delete_all_by_user(&self.uuid, conn).await?;
EmergencyAccess::delete_all_by_user(&self.uuid, conn).await?;
EmergencyAccess::delete_all_by_grantee_email(&self.email, conn).await?;
UserOrganization::delete_all_by_user(&self.uuid, conn).await?;
Cipher::delete_all_by_user(&self.uuid, conn).await?;
Favorite::delete_all_by_user(&self.uuid, conn).await?;

View file

@ -3,7 +3,7 @@ table! {
id -> Text,
cipher_uuid -> Text,
file_name -> Text,
file_size -> Integer,
file_size -> BigInt,
akey -> Nullable<Text>,
}
}
@ -160,7 +160,7 @@ table! {
atype -> Integer,
enabled -> Bool,
data -> Text,
last_used -> Integer,
last_used -> BigInt,
}
}

View file

@ -3,7 +3,7 @@ table! {
id -> Text,
cipher_uuid -> Text,
file_name -> Text,
file_size -> Integer,
file_size -> BigInt,
akey -> Nullable<Text>,
}
}
@ -160,7 +160,7 @@ table! {
atype -> Integer,
enabled -> Bool,
data -> Text,
last_used -> Integer,
last_used -> BigInt,
}
}

View file

@ -3,7 +3,7 @@ table! {
id -> Text,
cipher_uuid -> Text,
file_name -> Text,
file_size -> Integer,
file_size -> BigInt,
akey -> Nullable<Text>,
}
}
@ -160,7 +160,7 @@ table! {
atype -> Integer,
enabled -> Bool,
data -> Text,
last_used -> Integer,
last_used -> BigInt,
}
}

View file

@ -52,7 +52,6 @@ use rocket::error::Error as RocketErr;
use serde_json::{Error as SerdeErr, Value};
use std::io::Error as IoErr;
use std::time::SystemTimeError as TimeErr;
use tokio_tungstenite::tungstenite::Error as TungstError;
use webauthn_rs::error::WebauthnError as WebauthnErr;
use yubico::yubicoerror::YubicoError as YubiErr;
@ -91,7 +90,6 @@ make_error! {
DieselCon(DieselConErr): _has_source, _api_error,
Webauthn(WebauthnErr): _has_source, _api_error,
WebSocket(TungstError): _has_source, _api_error,
}
impl std::fmt::Debug for Error {
@ -291,10 +289,10 @@ macro_rules! err_json {
macro_rules! err_handler {
($expr:expr) => {{
error!(target: "auth", "Unauthorized Error: {}", $expr);
return ::rocket::request::Outcome::Failure((rocket::http::Status::Unauthorized, $expr));
return ::rocket::request::Outcome::Error((rocket::http::Status::Unauthorized, $expr));
}};
($usr_msg:expr, $log_value:expr) => {{
error!(target: "auth", "Unauthorized Error: {}. {}", $usr_msg, $log_value);
return ::rocket::request::Outcome::Failure((rocket::http::Status::Unauthorized, $usr_msg));
return ::rocket::request::Outcome::Error((rocket::http::Status::Unauthorized, $usr_msg));
}};
}

View file

@ -517,6 +517,19 @@ pub async fn send_admin_reset_password(address: &str, user_name: &str, org_name:
send_email(address, &subject, body_html, body_text).await
}
pub async fn send_protected_action_token(address: &str, token: &str) -> EmptyResult {
let (subject, body_html, body_text) = get_text(
"email/protected_action",
json!({
"url": CONFIG.domain(),
"img_src": CONFIG._smtp_img_src(),
"token": token,
}),
)?;
send_email(address, &subject, body_html, body_text).await
}
async fn send_with_selected_transport(email: Message) -> EmptyResult {
if CONFIG.use_sendmail() {
match sendmail_transport().send(email).await {

View file

@ -1,40 +1,9 @@
#![forbid(unsafe_code, non_ascii_idents)]
#![deny(
rust_2018_idioms,
rust_2021_compatibility,
noop_method_call,
pointer_structural_match,
trivial_casts,
trivial_numeric_casts,
unused_import_braces,
clippy::cast_lossless,
clippy::clone_on_ref_ptr,
clippy::equatable_if_let,
clippy::float_cmp_const,
clippy::inefficient_to_string,
clippy::iter_on_empty_collections,
clippy::iter_on_single_items,
clippy::linkedlist,
clippy::macro_use_imports,
clippy::manual_assert,
clippy::manual_instant_elapsed,
clippy::manual_string_new,
clippy::match_wildcard_for_single_variants,
clippy::mem_forget,
clippy::string_add_assign,
clippy::string_to_string,
clippy::unnecessary_join,
clippy::unnecessary_self_imports,
clippy::unused_async,
clippy::verbose_file_reads,
clippy::zero_sized_map_values
)]
#![cfg_attr(feature = "unstable", feature(ip))]
// The recursion_limit is mainly triggered by the json!() macro.
// The more key/value pairs there are, the more recursion occurs.
// We want to keep this as low as possible, but not higher than 128.
// Going above 128 will cause rust-analyzer to fail.
#![recursion_limit = "103"]
#![recursion_limit = "90"]
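
Editor's note: a minimal illustration of the comment above; each nesting level inside json!() adds macro expansion depth, which the crate-level recursion_limit bounds:

#![recursion_limit = "90"] // crate attribute, as in main.rs above

use serde_json::json;

fn main() {
    // Every nested object or array deepens the json! macro recursion.
    let v = json!({ "a": { "b": { "c": [1, 2, 3] } } });
    println!("{v}");
}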
// When enabled use MiMalloc as malloc instead of the default malloc
#[cfg(feature = "enable_mimalloc")]
@ -83,12 +52,12 @@ mod ratelimit;
mod util;
use crate::api::purge_auth_requests;
use crate::api::WS_ANONYMOUS_SUBSCRIPTIONS;
use crate::api::{WS_ANONYMOUS_SUBSCRIPTIONS, WS_USERS};
pub use config::CONFIG;
pub use error::{Error, MapResult};
use rocket::data::{Limits, ToByteUnit};
use std::sync::Arc;
pub use util::is_running_in_docker;
pub use util::is_running_in_container;
#[rocket::main]
async fn main() -> Result<(), Error> {
@ -96,13 +65,17 @@ async fn main() -> Result<(), Error> {
launch_info();
use log::LevelFilter as LF;
let level = LF::from_str(&CONFIG.log_level()).expect("Valid log level");
let level = LF::from_str(&CONFIG.log_level()).unwrap_or_else(|_| {
let valid_log_levels = LF::iter().map(|lvl| lvl.as_str().to_lowercase()).collect::<Vec<String>>().join(", ");
println!("Log level must be one of the following: {valid_log_levels}");
exit(1);
});
init_logging(level).ok();
let extra_debug = matches!(level, LF::Trace | LF::Debug);
check_data_folder().await;
check_rsa_keys().unwrap_or_else(|_| {
auth::initialize_keys().unwrap_or_else(|_| {
error!("Error creating keys, exiting...");
exit(1);
});
@ -238,9 +211,9 @@ fn launch_info() {
}
fn init_logging(level: log::LevelFilter) -> Result<(), fern::InitError> {
// Depending on the main log level we either want to disable or enable logging for trust-dns.
// Else if there are timeouts it will clutter the logs since trust-dns uses warn for this.
let trust_dns_level = if level >= log::LevelFilter::Debug {
// Depending on the main log level, we either want to enable or disable logging for hickory.
// Otherwise, timeouts will clutter the logs, since hickory logs them at warn.
let hickory_level = if level >= log::LevelFilter::Debug {
level
} else {
log::LevelFilter::Off
@ -293,9 +266,9 @@ fn init_logging(level: log::LevelFilter) -> Result<(), fern::InitError> {
.level_for("handlebars::render", handlebars_level)
// Prevent cookie_store logs
.level_for("cookie_store", log::LevelFilter::Off)
// Variable level for trust-dns used by reqwest
.level_for("trust_dns_resolver::name_server::name_server", trust_dns_level)
.level_for("trust_dns_proto::xfer", trust_dns_level)
// Variable level for hickory used by reqwest
.level_for("hickory_resolver::name_server::name_server", hickory_level)
.level_for("hickory_proto::xfer", hickory_level)
.level_for("diesel_logger", diesel_logger_level)
.chain(std::io::stdout());
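
Editor's note: the conditional above in isolation; hickory's resolver logs are kept only when the main level is Debug or Trace, since it logs timeouts at warn. A sketch using the same log crate types:

use log::LevelFilter;

fn dependency_level(main_level: LevelFilter) -> LevelFilter {
    // LevelFilter orders Off < Error < Warn < Info < Debug < Trace.
    if main_level >= LevelFilter::Debug {
        main_level
    } else {
        LevelFilter::Off
    }
}

fn main() {
    assert_eq!(dependency_level(LevelFilter::Info), LevelFilter::Off);
    assert_eq!(dependency_level(LevelFilter::Trace), LevelFilter::Trace);
}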
@ -415,7 +388,7 @@ async fn check_data_folder() {
let path = Path::new(data_folder);
if !path.exists() {
error!("Data folder '{}' doesn't exist.", data_folder);
if is_running_in_docker() {
if is_running_in_container() {
error!("Verify that your data volume is mounted at the correct location.");
} else {
error!("Create the data folder and try again.");
@ -427,9 +400,9 @@ async fn check_data_folder() {
exit(1);
}
if is_running_in_docker()
if is_running_in_container()
&& std::env::var("I_REALLY_WANT_VOLATILE_STORAGE").is_err()
&& !docker_data_folder_is_persistent(data_folder).await
&& !container_data_folder_is_persistent(data_folder).await
{
error!(
"No persistent volume!\n\
@ -448,7 +421,7 @@ async fn check_data_folder() {
/// A non-persistent volume in either Docker or Podman is represented by a 64-character alphanumeric string.
/// If we detect this string, we alert about not having a persistent, self-defined volume.
/// This probably means that someone forgot to add `-v /path/to/vaultwarden_data/:/data`
async fn docker_data_folder_is_persistent(data_folder: &str) -> bool {
async fn container_data_folder_is_persistent(data_folder: &str) -> bool {
if let Ok(mountinfo) = File::open("/proc/self/mountinfo").await {
// Since there can only be one mountpoint for the DATA_FOLDER,
// we do a basic check for this mountpoint surrounded by spaces.
@ -475,31 +448,6 @@ async fn docker_data_folder_is_persistent(data_folder: &str) -> bool {
true
}
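
Editor's note: a hedged sketch of the heuristic the doc comment describes. The Docker-style path layout (/var/lib/docker/volumes/<id>/_data) is an assumption for illustration; the real check scans /proc/self/mountinfo:

fn looks_like_anonymous_volume(mount_source: &str) -> bool {
    // Unnamed volumes get a 64-character alphanumeric id as their directory name.
    mount_source
        .rsplit('/')
        .nth(1)
        .map_or(false, |name| name.len() == 64 && name.chars().all(|c| c.is_ascii_alphanumeric()))
}

fn main() {
    let id = "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef";
    let anon = format!("/var/lib/docker/volumes/{id}/_data");
    assert!(looks_like_anonymous_volume(&anon));
    assert!(!looks_like_anonymous_volume("/var/lib/docker/volumes/vaultwarden_data/_data"));
}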
fn check_rsa_keys() -> Result<(), crate::error::Error> {
// If the RSA keys don't exist, try to create them
let priv_path = CONFIG.private_rsa_key();
let pub_path = CONFIG.public_rsa_key();
if !util::file_exists(&priv_path) {
let rsa_key = openssl::rsa::Rsa::generate(2048)?;
let priv_key = rsa_key.private_key_to_pem()?;
crate::util::write_file(&priv_path, &priv_key)?;
info!("Private key created correctly.");
}
if !util::file_exists(&pub_path) {
let rsa_key = openssl::rsa::Rsa::private_key_from_pem(&std::fs::read(&priv_path)?)?;
let pub_key = rsa_key.public_key_to_pem()?;
crate::util::write_file(&pub_path, &pub_key)?;
info!("Public key created correctly.");
}
auth::load_keys();
Ok(())
}
fn check_web_vault() {
if !CONFIG.web_vault_enabled() {
return;
@ -553,7 +501,7 @@ async fn launch_rocket(pool: db::DbPool, extra_debug: bool) -> Result<(), Error>
.register([basepath, "/api"].concat(), api::core_catchers())
.register([basepath, "/admin"].concat(), api::admin_catchers())
.manage(pool)
.manage(api::start_notification_server())
.manage(Arc::clone(&WS_USERS))
.manage(Arc::clone(&WS_ANONYMOUS_SUBSCRIPTIONS))
.attach(util::AppHeaders())
.attach(util::Cors())

View file

@ -77,7 +77,7 @@ async function generateSupportString(event, dj) {
supportString += `* Vaultwarden version: v${dj.current_release}\n`;
supportString += `* Web-vault version: v${dj.web_vault_version}\n`;
supportString += `* OS/Arch: ${dj.host_os}/${dj.host_arch}\n`;
supportString += `* Running within Docker: ${dj.running_within_docker} (Base: ${dj.docker_base_image})\n`;
supportString += `* Running within a container: ${dj.running_within_container} (Base: ${dj.container_base_image})\n`;
supportString += "* Environment settings overridden: ";
if (dj.overrides != "") {
supportString += "true\n";
@ -179,7 +179,7 @@ function initVersionCheck(dj) {
}
checkVersions("server", serverInstalled, serverLatest, serverLatestCommit);
if (!dj.running_within_docker) {
if (!dj.running_within_container) {
const webInstalled = dj.web_vault_version;
const webLatest = dj.latest_web_build;
checkVersions("web", webInstalled, webLatest);

View file

@ -1,5 +1,5 @@
/*!
* Bootstrap v5.3.1 (https://getbootstrap.com/)
* Bootstrap v5.3.2 (https://getbootstrap.com/)
* Copyright 2011-2023 The Bootstrap Authors (https://github.com/twbs/bootstrap/graphs/contributors)
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/main/LICENSE)
*/
@ -648,7 +648,7 @@
* Constants
*/
const VERSION = '5.3.1';
const VERSION = '5.3.2';
/**
* Class definition
@ -729,9 +729,9 @@
if (hrefAttribute.includes('#') && !hrefAttribute.startsWith('#')) {
hrefAttribute = `#${hrefAttribute.split('#')[1]}`;
}
selector = hrefAttribute && hrefAttribute !== '#' ? hrefAttribute.trim() : null;
selector = hrefAttribute && hrefAttribute !== '#' ? parseSelector(hrefAttribute.trim()) : null;
}
return parseSelector(selector);
return selector;
};
const SelectorEngine = {
find(selector, element = document.documentElement) {
@ -5866,7 +5866,7 @@
const CLASS_DROPDOWN = 'dropdown';
const SELECTOR_DROPDOWN_TOGGLE = '.dropdown-toggle';
const SELECTOR_DROPDOWN_MENU = '.dropdown-menu';
const NOT_SELECTOR_DROPDOWN_TOGGLE = ':not(.dropdown-toggle)';
const NOT_SELECTOR_DROPDOWN_TOGGLE = `:not(${SELECTOR_DROPDOWN_TOGGLE})`;
const SELECTOR_TAB_PANEL = '.list-group, .nav, [role="tablist"]';
const SELECTOR_OUTER = '.nav-item, .list-group-item';
const SELECTOR_INNER = `.nav-link${NOT_SELECTOR_DROPDOWN_TOGGLE}, .list-group-item${NOT_SELECTOR_DROPDOWN_TOGGLE}, [role="tab"]${NOT_SELECTOR_DROPDOWN_TOGGLE}`;

View file

@ -1,6 +1,6 @@
@charset "UTF-8";
/*!
* Bootstrap v5.3.1 (https://getbootstrap.com/)
* Bootstrap v5.3.2 (https://getbootstrap.com/)
* Copyright 2011-2023 The Bootstrap Authors
* Licensed under MIT (https://github.com/twbs/bootstrap/blob/main/LICENSE)
*/
@ -99,6 +99,7 @@
--bs-link-hover-color: #0a58ca;
--bs-link-hover-color-rgb: 10, 88, 202;
--bs-code-color: #d63384;
--bs-highlight-color: #212529;
--bs-highlight-bg: #fff3cd;
--bs-border-width: 1px;
--bs-border-style: solid;
@ -170,6 +171,8 @@
--bs-link-color-rgb: 110, 168, 254;
--bs-link-hover-color-rgb: 139, 185, 254;
--bs-code-color: #e685b5;
--bs-highlight-color: #dee2e6;
--bs-highlight-bg: #664d03;
--bs-border-color: #495057;
--bs-border-color-translucent: rgba(255, 255, 255, 0.15);
--bs-form-valid-color: #75b798;
@ -325,6 +328,7 @@ small, .small {
mark, .mark {
padding: 0.1875em;
color: var(--bs-highlight-color);
background-color: var(--bs-highlight-bg);
}
@ -819,7 +823,7 @@ progress {
.row-cols-3 > * {
flex: 0 0 auto;
width: 33.3333333333%;
width: 33.33333333%;
}
.row-cols-4 > * {
@ -834,7 +838,7 @@ progress {
.row-cols-6 > * {
flex: 0 0 auto;
width: 16.6666666667%;
width: 16.66666667%;
}
.col-auto {
@ -1024,7 +1028,7 @@ progress {
}
.row-cols-sm-3 > * {
flex: 0 0 auto;
width: 33.3333333333%;
width: 33.33333333%;
}
.row-cols-sm-4 > * {
flex: 0 0 auto;
@ -1036,7 +1040,7 @@ progress {
}
.row-cols-sm-6 > * {
flex: 0 0 auto;
width: 16.6666666667%;
width: 16.66666667%;
}
.col-sm-auto {
flex: 0 0 auto;
@ -1193,7 +1197,7 @@ progress {
}
.row-cols-md-3 > * {
flex: 0 0 auto;
width: 33.3333333333%;
width: 33.33333333%;
}
.row-cols-md-4 > * {
flex: 0 0 auto;
@ -1205,7 +1209,7 @@ progress {
}
.row-cols-md-6 > * {
flex: 0 0 auto;
width: 16.6666666667%;
width: 16.66666667%;
}
.col-md-auto {
flex: 0 0 auto;
@ -1362,7 +1366,7 @@ progress {
}
.row-cols-lg-3 > * {
flex: 0 0 auto;
width: 33.3333333333%;
width: 33.33333333%;
}
.row-cols-lg-4 > * {
flex: 0 0 auto;
@ -1374,7 +1378,7 @@ progress {
}
.row-cols-lg-6 > * {
flex: 0 0 auto;
width: 16.6666666667%;
width: 16.66666667%;
}
.col-lg-auto {
flex: 0 0 auto;
@ -1531,7 +1535,7 @@ progress {
}
.row-cols-xl-3 > * {
flex: 0 0 auto;
width: 33.3333333333%;
width: 33.33333333%;
}
.row-cols-xl-4 > * {
flex: 0 0 auto;
@ -1543,7 +1547,7 @@ progress {
}
.row-cols-xl-6 > * {
flex: 0 0 auto;
width: 16.6666666667%;
width: 16.66666667%;
}
.col-xl-auto {
flex: 0 0 auto;
@ -1700,7 +1704,7 @@ progress {
}
.row-cols-xxl-3 > * {
flex: 0 0 auto;
width: 33.3333333333%;
width: 33.33333333%;
}
.row-cols-xxl-4 > * {
flex: 0 0 auto;
@ -1712,7 +1716,7 @@ progress {
}
.row-cols-xxl-6 > * {
flex: 0 0 auto;
width: 16.6666666667%;
width: 16.66666667%;
}
.col-xxl-auto {
flex: 0 0 auto;
@ -1856,16 +1860,16 @@ progress {
--bs-table-bg-type: initial;
--bs-table-color-state: initial;
--bs-table-bg-state: initial;
--bs-table-color: var(--bs-body-color);
--bs-table-color: var(--bs-emphasis-color);
--bs-table-bg: var(--bs-body-bg);
--bs-table-border-color: var(--bs-border-color);
--bs-table-accent-bg: transparent;
--bs-table-striped-color: var(--bs-body-color);
--bs-table-striped-bg: rgba(0, 0, 0, 0.05);
--bs-table-active-color: var(--bs-body-color);
--bs-table-active-bg: rgba(0, 0, 0, 0.1);
--bs-table-hover-color: var(--bs-body-color);
--bs-table-hover-bg: rgba(0, 0, 0, 0.075);
--bs-table-striped-color: var(--bs-emphasis-color);
--bs-table-striped-bg: rgba(var(--bs-emphasis-color-rgb), 0.05);
--bs-table-active-color: var(--bs-emphasis-color);
--bs-table-active-bg: rgba(var(--bs-emphasis-color-rgb), 0.1);
--bs-table-hover-color: var(--bs-emphasis-color);
--bs-table-hover-bg: rgba(var(--bs-emphasis-color-rgb), 0.075);
width: 100%;
margin-bottom: 1rem;
vertical-align: top;
@ -1934,7 +1938,7 @@ progress {
.table-primary {
--bs-table-color: #000;
--bs-table-bg: #cfe2ff;
--bs-table-border-color: #bacbe6;
--bs-table-border-color: #a6b5cc;
--bs-table-striped-bg: #c5d7f2;
--bs-table-striped-color: #000;
--bs-table-active-bg: #bacbe6;
@ -1948,7 +1952,7 @@ progress {
.table-secondary {
--bs-table-color: #000;
--bs-table-bg: #e2e3e5;
--bs-table-border-color: #cbccce;
--bs-table-border-color: #b5b6b7;
--bs-table-striped-bg: #d7d8da;
--bs-table-striped-color: #000;
--bs-table-active-bg: #cbccce;
@ -1962,7 +1966,7 @@ progress {
.table-success {
--bs-table-color: #000;
--bs-table-bg: #d1e7dd;
--bs-table-border-color: #bcd0c7;
--bs-table-border-color: #a7b9b1;
--bs-table-striped-bg: #c7dbd2;
--bs-table-striped-color: #000;
--bs-table-active-bg: #bcd0c7;
@ -1976,7 +1980,7 @@ progress {
.table-info {
--bs-table-color: #000;
--bs-table-bg: #cff4fc;
--bs-table-border-color: #badce3;
--bs-table-border-color: #a6c3ca;
--bs-table-striped-bg: #c5e8ef;
--bs-table-striped-color: #000;
--bs-table-active-bg: #badce3;
@ -1990,7 +1994,7 @@ progress {
.table-warning {
--bs-table-color: #000;
--bs-table-bg: #fff3cd;
--bs-table-border-color: #e6dbb9;
--bs-table-border-color: #ccc2a4;
--bs-table-striped-bg: #f2e7c3;
--bs-table-striped-color: #000;
--bs-table-active-bg: #e6dbb9;
@ -2004,7 +2008,7 @@ progress {
.table-danger {
--bs-table-color: #000;
--bs-table-bg: #f8d7da;
--bs-table-border-color: #dfc2c4;
--bs-table-border-color: #c6acae;
--bs-table-striped-bg: #eccccf;
--bs-table-striped-color: #000;
--bs-table-active-bg: #dfc2c4;
@ -2018,7 +2022,7 @@ progress {
.table-light {
--bs-table-color: #000;
--bs-table-bg: #f8f9fa;
--bs-table-border-color: #dfe0e1;
--bs-table-border-color: #c6c7c8;
--bs-table-striped-bg: #ecedee;
--bs-table-striped-color: #000;
--bs-table-active-bg: #dfe0e1;
@ -2032,7 +2036,7 @@ progress {
.table-dark {
--bs-table-color: #fff;
--bs-table-bg: #212529;
--bs-table-border-color: #373b3e;
--bs-table-border-color: #4d5154;
--bs-table-striped-bg: #2c3034;
--bs-table-striped-color: #fff;
--bs-table-active-bg: #373b3e;
@ -2388,6 +2392,7 @@ textarea.form-control-lg {
.form-check-input {
--bs-form-check-bg: var(--bs-body-bg);
flex-shrink: 0;
width: 1em;
height: 1em;
margin-top: 0.25em;
@ -2544,7 +2549,7 @@ textarea.form-control-lg {
height: 0.5rem;
color: transparent;
cursor: pointer;
background-color: var(--bs-tertiary-bg);
background-color: var(--bs-secondary-bg);
border-color: transparent;
border-radius: 1rem;
}
@ -2573,7 +2578,7 @@ textarea.form-control-lg {
height: 0.5rem;
color: transparent;
cursor: pointer;
background-color: var(--bs-tertiary-bg);
background-color: var(--bs-secondary-bg);
border-color: transparent;
border-radius: 1rem;
}
@ -3431,7 +3436,7 @@ textarea.form-control-lg {
--bs-dropdown-inner-border-radius: calc(var(--bs-border-radius) - var(--bs-border-width));
--bs-dropdown-divider-bg: var(--bs-border-color-translucent);
--bs-dropdown-divider-margin-y: 0.5rem;
--bs-dropdown-box-shadow: 0 0.5rem 1rem rgba(0, 0, 0, 0.15);
--bs-dropdown-box-shadow: var(--bs-box-shadow);
--bs-dropdown-link-color: var(--bs-body-color);
--bs-dropdown-link-hover-color: var(--bs-body-color);
--bs-dropdown-link-hover-bg: var(--bs-tertiary-bg);
@ -5473,7 +5478,7 @@ textarea.form-control-lg {
--bs-modal-border-color: var(--bs-border-color-translucent);
--bs-modal-border-width: var(--bs-border-width);
--bs-modal-border-radius: var(--bs-border-radius-lg);
--bs-modal-box-shadow: 0 0.125rem 0.25rem rgba(0, 0, 0, 0.075);
--bs-modal-box-shadow: var(--bs-box-shadow-sm);
--bs-modal-inner-border-radius: calc(var(--bs-border-radius-lg) - (var(--bs-border-width)));
--bs-modal-header-padding-x: 1rem;
--bs-modal-header-padding-y: 1rem;
@ -5614,7 +5619,7 @@ textarea.form-control-lg {
@media (min-width: 576px) {
.modal {
--bs-modal-margin: 1.75rem;
--bs-modal-box-shadow: 0 0.5rem 1rem rgba(0, 0, 0, 0.15);
--bs-modal-box-shadow: var(--bs-box-shadow);
}
.modal-dialog {
max-width: var(--bs-modal-width);
@ -5866,7 +5871,7 @@ textarea.form-control-lg {
--bs-popover-border-color: var(--bs-border-color-translucent);
--bs-popover-border-radius: var(--bs-border-radius-lg);
--bs-popover-inner-border-radius: calc(var(--bs-border-radius-lg) - var(--bs-border-width));
--bs-popover-box-shadow: 0 0.5rem 1rem rgba(0, 0, 0, 0.15);
--bs-popover-box-shadow: var(--bs-box-shadow);
--bs-popover-header-padding-x: 1rem;
--bs-popover-header-padding-y: 0.5rem;
--bs-popover-header-font-size: 1rem;
@ -6301,7 +6306,7 @@ textarea.form-control-lg {
--bs-offcanvas-bg: var(--bs-body-bg);
--bs-offcanvas-border-width: var(--bs-border-width);
--bs-offcanvas-border-color: var(--bs-border-color-translucent);
--bs-offcanvas-box-shadow: 0 0.125rem 0.25rem rgba(0, 0, 0, 0.075);
--bs-offcanvas-box-shadow: var(--bs-box-shadow-sm);
--bs-offcanvas-transition: transform 0.3s ease-in-out;
--bs-offcanvas-title-line-height: 1.5;
}
@ -7380,15 +7385,15 @@ textarea.form-control-lg {
}
.shadow {
box-shadow: 0 0.5rem 1rem rgba(0, 0, 0, 0.15) !important;
box-shadow: var(--bs-box-shadow) !important;
}
.shadow-sm {
box-shadow: 0 0.125rem 0.25rem rgba(0, 0, 0, 0.075) !important;
box-shadow: var(--bs-box-shadow-sm) !important;
}
.shadow-lg {
box-shadow: 0 1rem 3rem rgba(0, 0, 0, 0.175) !important;
box-shadow: var(--bs-box-shadow-lg) !important;
}
.shadow-none {

View file

@ -4,10 +4,10 @@
*
* To rebuild or modify this file with the latest versions of the included
* software please visit:
* https://datatables.net/download/#bs5/dt-1.13.6
* https://datatables.net/download/#bs5/dt-2.0.0
*
* Included libraries:
* DataTables 1.13.6
* DataTables 2.0.0
*/
@charset "UTF-8";
@ -30,76 +30,124 @@ table.dataTable td.dt-control {
}
table.dataTable td.dt-control:before {
display: inline-block;
color: rgba(0, 0, 0, 0.5);
content: "►";
box-sizing: border-box;
content: "";
border-top: 5px solid transparent;
border-left: 10px solid rgba(0, 0, 0, 0.5);
border-bottom: 5px solid transparent;
border-right: 0px solid transparent;
}
table.dataTable tr.dt-hasChild td.dt-control:before {
content: "▼";
border-top: 10px solid rgba(0, 0, 0, 0.5);
border-left: 5px solid transparent;
border-bottom: 0px solid transparent;
border-right: 5px solid transparent;
}
html.dark table.dataTable td.dt-control:before {
color: rgba(255, 255, 255, 0.5);
html.dark table.dataTable td.dt-control:before,
:root[data-bs-theme=dark] table.dataTable td.dt-control:before {
border-left-color: rgba(255, 255, 255, 0.5);
}
html.dark table.dataTable tr.dt-hasChild td.dt-control:before {
color: rgba(255, 255, 255, 0.5);
html.dark table.dataTable tr.dt-hasChild td.dt-control:before,
:root[data-bs-theme=dark] table.dataTable tr.dt-hasChild td.dt-control:before {
border-top-color: rgba(255, 255, 255, 0.5);
border-left-color: transparent;
}
table.dataTable thead > tr > th.sorting, table.dataTable thead > tr > th.sorting_asc, table.dataTable thead > tr > th.sorting_desc, table.dataTable thead > tr > th.sorting_asc_disabled, table.dataTable thead > tr > th.sorting_desc_disabled,
table.dataTable thead > tr > td.sorting,
table.dataTable thead > tr > td.sorting_asc,
table.dataTable thead > tr > td.sorting_desc,
table.dataTable thead > tr > td.sorting_asc_disabled,
table.dataTable thead > tr > td.sorting_desc_disabled {
cursor: pointer;
position: relative;
padding-right: 26px;
div.dt-scroll-body thead tr,
div.dt-scroll-body tfoot tr {
height: 0;
}
table.dataTable thead > tr > th.sorting:before, table.dataTable thead > tr > th.sorting:after, table.dataTable thead > tr > th.sorting_asc:before, table.dataTable thead > tr > th.sorting_asc:after, table.dataTable thead > tr > th.sorting_desc:before, table.dataTable thead > tr > th.sorting_desc:after, table.dataTable thead > tr > th.sorting_asc_disabled:before, table.dataTable thead > tr > th.sorting_asc_disabled:after, table.dataTable thead > tr > th.sorting_desc_disabled:before, table.dataTable thead > tr > th.sorting_desc_disabled:after,
table.dataTable thead > tr > td.sorting:before,
table.dataTable thead > tr > td.sorting:after,
table.dataTable thead > tr > td.sorting_asc:before,
table.dataTable thead > tr > td.sorting_asc:after,
table.dataTable thead > tr > td.sorting_desc:before,
table.dataTable thead > tr > td.sorting_desc:after,
table.dataTable thead > tr > td.sorting_asc_disabled:before,
table.dataTable thead > tr > td.sorting_asc_disabled:after,
table.dataTable thead > tr > td.sorting_desc_disabled:before,
table.dataTable thead > tr > td.sorting_desc_disabled:after {
div.dt-scroll-body thead tr th, div.dt-scroll-body thead tr td,
div.dt-scroll-body tfoot tr th,
div.dt-scroll-body tfoot tr td {
height: 0 !important;
padding-top: 0px !important;
padding-bottom: 0px !important;
border-top-width: 0px !important;
border-bottom-width: 0px !important;
}
div.dt-scroll-body thead tr th div.dt-scroll-sizing, div.dt-scroll-body thead tr td div.dt-scroll-sizing,
div.dt-scroll-body tfoot tr th div.dt-scroll-sizing,
div.dt-scroll-body tfoot tr td div.dt-scroll-sizing {
height: 0 !important;
overflow: hidden !important;
}
table.dataTable thead > tr > th:active,
table.dataTable thead > tr > td:active {
outline: none;
}
table.dataTable thead > tr > th.dt-orderable-asc span.dt-column-order:before, table.dataTable thead > tr > th.dt-ordering-asc span.dt-column-order:before,
table.dataTable thead > tr > td.dt-orderable-asc span.dt-column-order:before,
table.dataTable thead > tr > td.dt-ordering-asc span.dt-column-order:before {
position: absolute;
display: block;
opacity: 0.125;
right: 10px;
line-height: 9px;
font-size: 0.8em;
}
table.dataTable thead > tr > th.sorting:before, table.dataTable thead > tr > th.sorting_asc:before, table.dataTable thead > tr > th.sorting_desc:before, table.dataTable thead > tr > th.sorting_asc_disabled:before, table.dataTable thead > tr > th.sorting_desc_disabled:before,
table.dataTable thead > tr > td.sorting:before,
table.dataTable thead > tr > td.sorting_asc:before,
table.dataTable thead > tr > td.sorting_desc:before,
table.dataTable thead > tr > td.sorting_asc_disabled:before,
table.dataTable thead > tr > td.sorting_desc_disabled:before {
bottom: 50%;
content: "▲";
content: "▲"/"";
}
table.dataTable thead > tr > th.sorting:after, table.dataTable thead > tr > th.sorting_asc:after, table.dataTable thead > tr > th.sorting_desc:after, table.dataTable thead > tr > th.sorting_asc_disabled:after, table.dataTable thead > tr > th.sorting_desc_disabled:after,
table.dataTable thead > tr > td.sorting:after,
table.dataTable thead > tr > td.sorting_asc:after,
table.dataTable thead > tr > td.sorting_desc:after,
table.dataTable thead > tr > td.sorting_asc_disabled:after,
table.dataTable thead > tr > td.sorting_desc_disabled:after {
table.dataTable thead > tr > th.dt-orderable-desc span.dt-column-order:after, table.dataTable thead > tr > th.dt-ordering-desc span.dt-column-order:after,
table.dataTable thead > tr > td.dt-orderable-desc span.dt-column-order:after,
table.dataTable thead > tr > td.dt-ordering-desc span.dt-column-order:after {
position: absolute;
display: block;
top: 50%;
content: "▼";
content: "▼"/"";
}
table.dataTable thead > tr > th.sorting_asc:before, table.dataTable thead > tr > th.sorting_desc:after,
table.dataTable thead > tr > td.sorting_asc:before,
table.dataTable thead > tr > td.sorting_desc:after {
table.dataTable thead > tr > th.dt-orderable-asc, table.dataTable thead > tr > th.dt-orderable-desc, table.dataTable thead > tr > th.dt-ordering-asc, table.dataTable thead > tr > th.dt-ordering-desc,
table.dataTable thead > tr > td.dt-orderable-asc,
table.dataTable thead > tr > td.dt-orderable-desc,
table.dataTable thead > tr > td.dt-ordering-asc,
table.dataTable thead > tr > td.dt-ordering-desc {
position: relative;
padding-right: 30px;
}
table.dataTable thead > tr > th.dt-orderable-asc span.dt-column-order, table.dataTable thead > tr > th.dt-orderable-desc span.dt-column-order, table.dataTable thead > tr > th.dt-ordering-asc span.dt-column-order, table.dataTable thead > tr > th.dt-ordering-desc span.dt-column-order,
table.dataTable thead > tr > td.dt-orderable-asc span.dt-column-order,
table.dataTable thead > tr > td.dt-orderable-desc span.dt-column-order,
table.dataTable thead > tr > td.dt-ordering-asc span.dt-column-order,
table.dataTable thead > tr > td.dt-ordering-desc span.dt-column-order {
position: absolute;
right: 12px;
top: 0;
bottom: 0;
width: 12px;
}
table.dataTable thead > tr > th.dt-orderable-asc span.dt-column-order:before, table.dataTable thead > tr > th.dt-orderable-asc span.dt-column-order:after, table.dataTable thead > tr > th.dt-orderable-desc span.dt-column-order:before, table.dataTable thead > tr > th.dt-orderable-desc span.dt-column-order:after, table.dataTable thead > tr > th.dt-ordering-asc span.dt-column-order:before, table.dataTable thead > tr > th.dt-ordering-asc span.dt-column-order:after, table.dataTable thead > tr > th.dt-ordering-desc span.dt-column-order:before, table.dataTable thead > tr > th.dt-ordering-desc span.dt-column-order:after,
table.dataTable thead > tr > td.dt-orderable-asc span.dt-column-order:before,
table.dataTable thead > tr > td.dt-orderable-asc span.dt-column-order:after,
table.dataTable thead > tr > td.dt-orderable-desc span.dt-column-order:before,
table.dataTable thead > tr > td.dt-orderable-desc span.dt-column-order:after,
table.dataTable thead > tr > td.dt-ordering-asc span.dt-column-order:before,
table.dataTable thead > tr > td.dt-ordering-asc span.dt-column-order:after,
table.dataTable thead > tr > td.dt-ordering-desc span.dt-column-order:before,
table.dataTable thead > tr > td.dt-ordering-desc span.dt-column-order:after {
left: 0;
opacity: 0.125;
line-height: 9px;
font-size: 0.8em;
}
table.dataTable thead > tr > th.dt-orderable-asc, table.dataTable thead > tr > th.dt-orderable-desc,
table.dataTable thead > tr > td.dt-orderable-asc,
table.dataTable thead > tr > td.dt-orderable-desc {
cursor: pointer;
}
table.dataTable thead > tr > th.dt-orderable-asc:hover, table.dataTable thead > tr > th.dt-orderable-desc:hover,
table.dataTable thead > tr > td.dt-orderable-asc:hover,
table.dataTable thead > tr > td.dt-orderable-desc:hover {
outline: 2px solid rgba(0, 0, 0, 0.05);
outline-offset: -2px;
}
table.dataTable thead > tr > th.dt-ordering-asc span.dt-column-order:before, table.dataTable thead > tr > th.dt-ordering-desc span.dt-column-order:after,
table.dataTable thead > tr > td.dt-ordering-asc span.dt-column-order:before,
table.dataTable thead > tr > td.dt-ordering-desc span.dt-column-order:after {
opacity: 0.6;
}
table.dataTable thead > tr > th.sorting_desc_disabled:after, table.dataTable thead > tr > th.sorting_asc_disabled:before,
table.dataTable thead > tr > td.sorting_desc_disabled:after,
table.dataTable thead > tr > td.sorting_asc_disabled:before {
table.dataTable thead > tr > th.sorting_desc_disabled span.dt-column-order:after, table.dataTable thead > tr > th.sorting_asc_disabled span.dt-column-order:before,
table.dataTable thead > tr > td.sorting_desc_disabled span.dt-column-order:after,
table.dataTable thead > tr > td.sorting_asc_disabled span.dt-column-order:before {
display: none;
}
table.dataTable thead > tr > th:active,
@ -107,29 +155,39 @@ table.dataTable thead > tr > td:active {
outline: none;
}
div.dataTables_scrollBody > table.dataTable > thead > tr > th:before, div.dataTables_scrollBody > table.dataTable > thead > tr > th:after,
div.dataTables_scrollBody > table.dataTable > thead > tr > td:before,
div.dataTables_scrollBody > table.dataTable > thead > tr > td:after {
display: none;
div.dt-scroll-body > table.dataTable > thead > tr > th,
div.dt-scroll-body > table.dataTable > thead > tr > td {
overflow: hidden;
}
div.dataTables_processing {
:root.dark table.dataTable thead > tr > th.dt-orderable-asc:hover, :root.dark table.dataTable thead > tr > th.dt-orderable-desc:hover,
:root.dark table.dataTable thead > tr > td.dt-orderable-asc:hover,
:root.dark table.dataTable thead > tr > td.dt-orderable-desc:hover,
:root[data-bs-theme=dark] table.dataTable thead > tr > th.dt-orderable-asc:hover,
:root[data-bs-theme=dark] table.dataTable thead > tr > th.dt-orderable-desc:hover,
:root[data-bs-theme=dark] table.dataTable thead > tr > td.dt-orderable-asc:hover,
:root[data-bs-theme=dark] table.dataTable thead > tr > td.dt-orderable-desc:hover {
outline: 2px solid rgba(255, 255, 255, 0.05);
}
div.dt-processing {
position: absolute;
top: 50%;
left: 50%;
width: 200px;
margin-left: -100px;
margin-top: -26px;
margin-top: -22px;
text-align: center;
padding: 2px;
z-index: 10;
}
div.dataTables_processing > div:last-child {
div.dt-processing > div:last-child {
position: relative;
width: 80px;
height: 15px;
margin: 1em auto;
}
div.dataTables_processing > div:last-child > div {
div.dt-processing > div:last-child > div {
position: absolute;
top: 0;
width: 13px;
@ -139,19 +197,19 @@ div.dataTables_processing > div:last-child > div {
background: rgb(var(--dt-row-selected));
animation-timing-function: cubic-bezier(0, 1, 1, 0);
}
div.dataTables_processing > div:last-child > div:nth-child(1) {
div.dt-processing > div:last-child > div:nth-child(1) {
left: 8px;
animation: datatables-loader-1 0.6s infinite;
}
div.dataTables_processing > div:last-child > div:nth-child(2) {
div.dt-processing > div:last-child > div:nth-child(2) {
left: 8px;
animation: datatables-loader-2 0.6s infinite;
}
div.dataTables_processing > div:last-child > div:nth-child(3) {
div.dt-processing > div:last-child > div:nth-child(3) {
left: 32px;
animation: datatables-loader-2 0.6s infinite;
}
div.dataTables_processing > div:last-child > div:nth-child(4) {
div.dt-processing > div:last-child > div:nth-child(4) {
left: 56px;
animation: datatables-loader-3 0.6s infinite;
}
@ -183,13 +241,16 @@ div.dataTables_processing > div:last-child > div:nth-child(4) {
table.dataTable.nowrap th, table.dataTable.nowrap td {
white-space: nowrap;
}
table.dataTable th,
table.dataTable td {
box-sizing: border-box;
}
table.dataTable th.dt-left,
table.dataTable td.dt-left {
text-align: left;
}
table.dataTable th.dt-center,
table.dataTable td.dt-center,
table.dataTable td.dataTables_empty {
table.dataTable td.dt-center {
text-align: center;
}
table.dataTable th.dt-right,
@ -204,6 +265,16 @@ table.dataTable th.dt-nowrap,
table.dataTable td.dt-nowrap {
white-space: nowrap;
}
table.dataTable th.dt-empty,
table.dataTable td.dt-empty {
text-align: center;
vertical-align: top;
}
table.dataTable th.dt-type-numeric, table.dataTable th.dt-type-date,
table.dataTable td.dt-type-numeric,
table.dataTable td.dt-type-date {
text-align: right;
}
table.dataTable thead th,
table.dataTable thead td,
table.dataTable tfoot th,
@ -266,179 +337,150 @@ table.dataTable tbody td.dt-body-nowrap {
* ©2020 SpryMedia Ltd, all rights reserved.
* License: MIT datatables.net/license/mit
*/
table.dataTable {
table.table.dataTable {
clear: both;
margin-top: 6px !important;
margin-bottom: 6px !important;
max-width: none !important;
border-collapse: separate !important;
margin-bottom: 0;
max-width: none;
border-spacing: 0;
}
table.dataTable td,
table.dataTable th {
-webkit-box-sizing: content-box;
box-sizing: content-box;
}
table.dataTable td.dataTables_empty,
table.dataTable th.dataTables_empty {
text-align: center;
}
table.dataTable.nowrap th,
table.dataTable.nowrap td {
white-space: nowrap;
}
table.dataTable.table-striped > tbody > tr:nth-of-type(2n+1) > * {
table.table.dataTable.table-striped > tbody > tr:nth-of-type(2n+1) > * {
box-shadow: none;
}
table.dataTable > tbody > tr {
table.table.dataTable > :not(caption) > * > * {
background-color: transparent;
}
table.dataTable > tbody > tr.selected > * {
table.table.dataTable > tbody > tr {
background-color: transparent;
}
table.table.dataTable > tbody > tr.selected > * {
box-shadow: inset 0 0 0 9999px rgb(13, 110, 253);
box-shadow: inset 0 0 0 9999px rgb(var(--dt-row-selected));
color: rgb(255, 255, 255);
color: rgb(var(--dt-row-selected-text));
}
table.dataTable > tbody > tr.selected a {
table.table.dataTable > tbody > tr.selected a {
color: rgb(9, 10, 11);
color: rgb(var(--dt-row-selected-link));
}
table.dataTable.table-striped > tbody > tr.odd > * {
table.table.dataTable.table-striped > tbody > tr:nth-of-type(2n+1) > * {
box-shadow: inset 0 0 0 9999px rgba(var(--dt-row-stripe), 0.05);
}
table.dataTable.table-striped > tbody > tr.odd.selected > * {
table.table.dataTable.table-striped > tbody > tr:nth-of-type(2n+1).selected > * {
box-shadow: inset 0 0 0 9999px rgba(13, 110, 253, 0.95);
box-shadow: inset 0 0 0 9999px rgba(var(--dt-row-selected), 0.95);
}
table.dataTable.table-hover > tbody > tr:hover > * {
table.table.dataTable.table-hover > tbody > tr:hover > * {
box-shadow: inset 0 0 0 9999px rgba(var(--dt-row-hover), 0.075);
}
table.dataTable.table-hover > tbody > tr.selected:hover > * {
table.table.dataTable.table-hover > tbody > tr.selected:hover > * {
box-shadow: inset 0 0 0 9999px rgba(13, 110, 253, 0.975);
box-shadow: inset 0 0 0 9999px rgba(var(--dt-row-selected), 0.975);
}
div.dataTables_wrapper div.dataTables_length label {
div.dt-container div.dt-length label {
font-weight: normal;
text-align: left;
white-space: nowrap;
}
div.dataTables_wrapper div.dataTables_length select {
div.dt-container div.dt-length select {
width: auto;
display: inline-block;
margin-right: 0.5em;
}
div.dataTables_wrapper div.dataTables_filter {
div.dt-container div.dt-search {
text-align: right;
}
div.dataTables_wrapper div.dataTables_filter label {
div.dt-container div.dt-search label {
font-weight: normal;
white-space: nowrap;
text-align: left;
}
div.dataTables_wrapper div.dataTables_filter input {
div.dt-container div.dt-search input {
margin-left: 0.5em;
display: inline-block;
width: auto;
}
div.dataTables_wrapper div.dataTables_info {
div.dt-container div.dt-info {
padding-top: 0.85em;
}
div.dataTables_wrapper div.dataTables_paginate {
div.dt-container div.dt-paging {
margin: 0;
white-space: nowrap;
text-align: right;
}
div.dataTables_wrapper div.dataTables_paginate ul.pagination {
div.dt-container div.dt-paging ul.pagination {
margin: 2px 0;
white-space: nowrap;
justify-content: flex-end;
flex-wrap: wrap;
}
div.dataTables_wrapper div.dt-row {
div.dt-container div.dt-row {
position: relative;
}
div.dataTables_scrollHead table.dataTable {
div.dt-scroll-head table.dataTable {
margin-bottom: 0 !important;
}
div.dataTables_scrollBody > table {
div.dt-scroll-body {
border-bottom-color: var(--bs-border-color);
border-bottom-width: var(--bs-border-width);
border-bottom-style: solid;
}
div.dt-scroll-body > table {
border-top: none;
margin-top: 0 !important;
margin-bottom: 0 !important;
}
div.dataTables_scrollBody > table > thead .sorting:before,
div.dataTables_scrollBody > table > thead .sorting_asc:before,
div.dataTables_scrollBody > table > thead .sorting_desc:before,
div.dataTables_scrollBody > table > thead .sorting:after,
div.dataTables_scrollBody > table > thead .sorting_asc:after,
div.dataTables_scrollBody > table > thead .sorting_desc:after {
display: none;
div.dt-scroll-body > table > tbody > tr:first-child {
border-top-width: 0;
}
div.dataTables_scrollBody > table > tbody tr:first-child th,
div.dataTables_scrollBody > table > tbody tr:first-child td {
border-top: none;
div.dt-scroll-body > table > thead > tr {
border-width: 0 !important;
}
div.dt-scroll-body > table > tbody > tr:last-child > * {
border-bottom: none;
}
div.dataTables_scrollFoot > .dataTables_scrollFootInner {
div.dt-scroll-foot > .dt-scroll-footInner {
box-sizing: content-box;
}
div.dataTables_scrollFoot > .dataTables_scrollFootInner > table {
div.dt-scroll-foot > .dt-scroll-footInner > table {
margin-top: 0 !important;
border-top: none;
}
div.dt-scroll-foot > .dt-scroll-footInner > table > tfoot > tr:first-child {
border-top-width: 0 !important;
}
@media screen and (max-width: 767px) {
div.dataTables_wrapper div.dataTables_length,
div.dataTables_wrapper div.dataTables_filter,
div.dataTables_wrapper div.dataTables_info,
div.dataTables_wrapper div.dataTables_paginate {
div.dt-container div.dt-length,
div.dt-container div.dt-search,
div.dt-container div.dt-info,
div.dt-container div.dt-paging {
text-align: center;
}
div.dataTables_wrapper div.dataTables_paginate ul.pagination {
div.dt-container .row {
--bs-gutter-y: 0.5rem;
}
div.dt-container div.dt-paging ul.pagination {
justify-content: center !important;
}
}
table.dataTable.table-sm > thead > tr > th:not(.sorting_disabled) {
padding-right: 20px;
}
table.table-bordered.dataTable {
border-right-width: 0;
}
table.table-bordered.dataTable thead tr:first-child th,
table.table-bordered.dataTable thead tr:first-child td {
border-top-width: 1px;
}
table.table-bordered.dataTable th,
table.table-bordered.dataTable td {
border-left-width: 0;
}
table.table-bordered.dataTable th:first-child, table.table-bordered.dataTable th:first-child,
table.table-bordered.dataTable td:first-child,
table.table-bordered.dataTable td:first-child {
border-left-width: 1px;
}
table.table-bordered.dataTable th:last-child, table.table-bordered.dataTable th:last-child,
table.table-bordered.dataTable td:last-child,
table.table-bordered.dataTable td:last-child {
border-right-width: 1px;
}
table.table-bordered.dataTable th,
table.table-bordered.dataTable td {
border-bottom-width: 1px;
table.dataTable.table-sm > thead > tr > th:not(.sorting_disabled):before, table.dataTable.table-sm > thead > tr > th:not(.sorting_disabled):after {
right: 5px;
}
div.dataTables_scrollHead table.table-bordered {
div.dt-scroll-head table.table-bordered {
border-bottom-width: 0;
}
div.table-responsive > div.dataTables_wrapper > div.row {
div.table-responsive > div.dt-container > div.row {
margin: 0;
}
div.table-responsive > div.dataTables_wrapper > div.row > div[class^=col-]:first-child {
div.table-responsive > div.dt-container > div.row > div[class^=col-]:first-child {
padding-left: 0;
}
div.table-responsive > div.dataTables_wrapper > div.row > div[class^=col-]:last-child {
div.table-responsive > div.dt-container > div.row > div[class^=col-]:last-child {
padding-right: 0;
}

File diff suppressed because it is too large

View file

@@ -1,12 +1,12 @@
/*!
* jQuery JavaScript Library v3.7.0 -ajax,-ajax/jsonp,-ajax/load,-ajax/script,-ajax/var/location,-ajax/var/nonce,-ajax/var/rquery,-ajax/xhr,-manipulation/_evalUrl,-deprecated/ajax-event-alias,-effects,-effects/animatedSelector,-effects/Tween
* jQuery JavaScript Library v3.7.1 -ajax,-ajax/jsonp,-ajax/load,-ajax/script,-ajax/var/location,-ajax/var/nonce,-ajax/var/rquery,-ajax/xhr,-manipulation/_evalUrl,-deprecated/ajax-event-alias,-effects,-effects/animatedSelector,-effects/Tween
* https://jquery.com/
*
* Copyright OpenJS Foundation and other contributors
* Released under the MIT license
* https://jquery.org/license
*
* Date: 2023-05-11T18:29Z
* Date: 2023-08-28T13:37Z
*/
( function( global, factory ) {
@@ -147,7 +147,7 @@ function toType( obj ) {
var version = "3.7.0 -ajax,-ajax/jsonp,-ajax/load,-ajax/script,-ajax/var/location,-ajax/var/nonce,-ajax/var/rquery,-ajax/xhr,-manipulation/_evalUrl,-deprecated/ajax-event-alias,-effects,-effects/animatedSelector,-effects/Tween",
var version = "3.7.1 -ajax,-ajax/jsonp,-ajax/load,-ajax/script,-ajax/var/location,-ajax/var/nonce,-ajax/var/rquery,-ajax/xhr,-manipulation/_evalUrl,-deprecated/ajax-event-alias,-effects,-effects/animatedSelector,-effects/Tween",
rhtmlSuffix = /HTML$/i,
@@ -411,9 +411,14 @@ jQuery.extend( {
// Do not traverse comment nodes
ret += jQuery.text( node );
}
} else if ( nodeType === 1 || nodeType === 9 || nodeType === 11 ) {
}
if ( nodeType === 1 || nodeType === 11 ) {
return elem.textContent;
} else if ( nodeType === 3 || nodeType === 4 ) {
}
if ( nodeType === 9 ) {
return elem.documentElement.textContent;
}
if ( nodeType === 3 || nodeType === 4 ) {
return elem.nodeValue;
}
@@ -1126,12 +1131,17 @@ function setDocument( node ) {
documentElement.msMatchesSelector;
// Support: IE 9 - 11+, Edge 12 - 18+
// Accessing iframe documents after unload throws "permission denied" errors (see trac-13936)
// Support: IE 11+, Edge 17 - 18+
// IE/Edge sometimes throw a "Permission denied" error when strict-comparing
// two documents; shallow comparisons work.
// eslint-disable-next-line eqeqeq
if ( preferredDoc != document &&
// Accessing iframe documents after unload throws "permission denied" errors
// (see trac-13936).
// Limit the fix to IE & Edge Legacy; despite Edge 15+ implementing `matches`,
// all IE 9+ and Edge Legacy versions implement `msMatchesSelector` as well.
if ( documentElement.msMatchesSelector &&
// Support: IE 11+, Edge 17 - 18+
// IE/Edge sometimes throw a "Permission denied" error when strict-comparing
// two documents; shallow comparisons work.
// eslint-disable-next-line eqeqeq
preferredDoc != document &&
( subWindow = document.defaultView ) && subWindow.top !== subWindow ) {
// Support: IE 9 - 11+, Edge 12 - 18+
@@ -2694,12 +2704,12 @@ jQuery.find = find;
jQuery.expr[ ":" ] = jQuery.expr.pseudos;
jQuery.unique = jQuery.uniqueSort;
// These have always been private, but they used to be documented
// as part of Sizzle so let's maintain them in the 3.x line
// for backwards compatibility purposes.
// These have always been private, but they used to be documented as part of
// Sizzle so let's maintain them for now for backwards compatibility purposes.
find.compile = compile;
find.select = select;
find.setDocument = setDocument;
find.tokenize = tokenize;
find.escape = jQuery.escapeSelector;
find.getText = jQuery.text;
@@ -5913,7 +5923,7 @@ function domManip( collection, args, callback, ignored ) {
if ( hasScripts ) {
doc = scripts[ scripts.length - 1 ].ownerDocument;
// Reenable scripts
// Re-enable scripts
jQuery.map( scripts, restoreScript );
// Evaluate executable scripts on first document insertion
@@ -6370,7 +6380,7 @@ var rboxStyle = new RegExp( cssExpand.join( "|" ), "i" );
trChild = document.createElement( "div" );
table.style.cssText = "position:absolute;left:-11111px;border-collapse:separate";
tr.style.cssText = "border:1px solid";
tr.style.cssText = "box-sizing:content-box;border:1px solid";
// Support: Chrome 86+
// Height set through cssText does not get applied.
@@ -6382,7 +6392,7 @@ var rboxStyle = new RegExp( cssExpand.join( "|" ), "i" );
// In our bodyBackground.html iframe,
// display for all div elements is set to "inline",
// which causes a problem only in Android 8 Chrome 86.
// Ensuring the div is display: block
// Ensuring the div is `display: block`
// gets around this issue.
trChild.style.display = "block";
@@ -8451,7 +8461,9 @@ jQuery.fn.extend( {
},
hover: function( fnOver, fnOut ) {
return this.mouseenter( fnOver ).mouseleave( fnOut || fnOver );
return this
.on( "mouseenter", fnOver )
.on( "mouseleave", fnOut || fnOver );
}
} );

View file

@@ -23,12 +23,12 @@
{{#if page_data.web_vault_enabled}}
<dt class="col-sm-5">Web Installed
<span class="badge bg-success d-none" id="web-success" title="Latest version is installed.">Ok</span>
<span class="badge bg-warning d-none" id="web-warning" title="There seems to be an update available.">Update</span>
<span class="badge bg-warning text-dark d-none" id="web-warning" title="There seems to be an update available.">Update</span>
</dt>
<dd class="col-sm-7">
<span id="web-installed">{{page_data.web_vault_version}}</span>
</dd>
{{#unless page_data.running_within_docker}}
{{#unless page_data.running_within_container}}
<dt class="col-sm-5">Web Latest
<span class="badge bg-secondary d-none" id="web-failed" title="Unable to determine latest version.">Unknown</span>
</dt>
@@ -59,12 +59,12 @@
<dd class="col-sm-7">
<span class="d-block"><b>{{ page_data.host_os }} / {{ page_data.host_arch }}</b></span>
</dd>
<dt class="col-sm-5">Running within Docker</dt>
<dt class="col-sm-5">Running within a container</dt>
<dd class="col-sm-7">
{{#if page_data.running_within_docker}}
<span class="d-block"><b>Yes (Base: {{ page_data.docker_base_image }})</b></span>
{{#if page_data.running_within_container}}
<span class="d-block"><b>Yes (Base: {{ page_data.container_base_image }})</b></span>
{{/if}}
{{#unless page_data.running_within_docker}}
{{#unless page_data.running_within_container}}
<span class="d-block"><b>No</b></span>
{{/unless}}
</dd>

View file

@@ -59,7 +59,7 @@
</main>
<link rel="stylesheet" href="{{urlpath}}/vw_static/datatables.css" />
<script src="{{urlpath}}/vw_static/jquery-3.7.0.slim.js"></script>
<script src="{{urlpath}}/vw_static/jquery-3.7.1.slim.js"></script>
<script src="{{urlpath}}/vw_static/datatables.js"></script>
<script src="{{urlpath}}/vw_static/admin_organizations.js"></script>
<script src="{{urlpath}}/vw_static/jdenticon.js"></script>

View file

@@ -140,7 +140,7 @@
</main>
<link rel="stylesheet" href="{{urlpath}}/vw_static/datatables.css" />
<script src="{{urlpath}}/vw_static/jquery-3.7.0.slim.js"></script>
<script src="{{urlpath}}/vw_static/jquery-3.7.1.slim.js"></script>
<script src="{{urlpath}}/vw_static/datatables.js"></script>
<script src="{{urlpath}}/vw_static/admin_users.js"></script>
<script src="{{urlpath}}/vw_static/jdenticon.js"></script>

View file

@@ -2,5 +2,5 @@ Your Email Change
<!---------------->
To finalize changing your email address enter the following code in web vault: {{token}}
If you did not try to change an email address, you can safely ignore this email.
If you did not try to change your email address, contact your administrator.
{{> email/email_footer_text }}

View file

@@ -9,7 +9,7 @@ Your Email Change
</tr>
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block last" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0; -webkit-text-size-adjust: none; text-align: center;" valign="top" align="center">
If you did not try to change an email address, you can safely ignore this email.
If you did not try to change your email address, contact your administrator.
</td>
</tr>
</table>

View file

@@ -0,0 +1,6 @@
Your Vaultwarden Verification Code
<!---------------->
Your email verification code is: {{token}}
Use this code to complete the protected action in Vaultwarden.
{{> email/email_footer_text }}

View file

@@ -0,0 +1,16 @@
Your Vaultwarden Verification Code
<!---------------->
{{> email/email_header }}
<table width="100%" cellpadding="0" cellspacing="0" style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none;" valign="top">
Your email verification code is: <b>{{token}}</b>
</td>
</tr>
<tr style="margin: 0; font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; -webkit-font-smoothing: antialiased; -webkit-text-size-adjust: none;">
<td class="content-block" style="font-family: 'Helvetica Neue', Helvetica, Arial, sans-serif; box-sizing: border-box; font-size: 16px; color: #333; line-height: 25px; margin: 0; -webkit-font-smoothing: antialiased; padding: 0 0 10px; -webkit-text-size-adjust: none;" valign="top">
Use this code to complete the protected action in Vaultwarden.
</td>
</tr>
</table>
{{> email/email_footer }}

View file

@@ -1,11 +1,10 @@
//
// Web Headers and caching
//
use std::{
io::{Cursor, ErrorKind},
ops::Deref,
};
use std::{collections::HashMap, io::Cursor, ops::Deref, path::Path};
use num_traits::ToPrimitive;
use once_cell::sync::Lazy;
use rocket::{
fairing::{Fairing, Info, Kind},
http::{ContentType, Header, HeaderMap, Method, Status},
@@ -46,6 +45,7 @@ impl Fairing for AppHeaders {
// Remove headers which could cause websocket connection issues
res.remove_header("X-Frame-Options");
res.remove_header("X-Content-Type-Options");
res.remove_header("Permissions-Policy");
return;
}
(_, _) => (),
@@ -215,7 +215,7 @@ impl<'r, R: 'r + Responder<'r, 'static> + Send> Responder<'r, 'static> for Cache
res.set_raw_header("Cache-Control", cache_control_header);
let time_now = chrono::Local::now();
let expiry_time = time_now + chrono::Duration::seconds(self.ttl.try_into().unwrap());
let expiry_time = time_now + chrono::TimeDelta::try_seconds(self.ttl.try_into().unwrap()).unwrap();
res.set_raw_header("Expires", format_datetime_http(&expiry_time));
Ok(res)
}
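The replacement constructor makes overflow explicit: chrono's TimeDelta::try_seconds returns an Option instead of panicking inside the library. A minimal sketch of the pattern, assuming chrono 0.4.34 or later (the 3600-second TTL is illustrative):

use chrono::{Local, TimeDelta};

// try_seconds yields None on overflow, so the failure mode is visible at the call site.
let expiry_time = Local::now() + TimeDelta::try_seconds(3600).expect("TTL within range");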
@@ -331,52 +331,14 @@ impl Fairing for BetterLogging {
}
}
//
// File handling
//
use std::{
fs::{self, File},
io::Result as IOResult,
path::Path,
};
pub fn file_exists(path: &str) -> bool {
Path::new(path).exists()
}
pub fn write_file(path: &str, content: &[u8]) -> Result<(), crate::error::Error> {
use std::io::Write;
let mut f = match File::create(path) {
Ok(file) => file,
Err(e) => {
if e.kind() == ErrorKind::PermissionDenied {
error!("Can't create '{}': Permission denied", path);
}
return Err(From::from(e));
}
};
f.write_all(content)?;
f.flush()?;
Ok(())
}
pub fn delete_file(path: &str) -> IOResult<()> {
let res = fs::remove_file(path);
if let Some(parent) = Path::new(path).parent() {
// If the directory isn't empty, this returns an error, which we ignore
// We only want to delete the folder if it's empty
fs::remove_dir(parent).ok();
}
res
}
pub fn get_display_size(size: i32) -> String {
pub fn get_display_size(size: i64) -> String {
const UNITS: [&str; 6] = ["bytes", "KB", "MB", "GB", "TB", "PB"];
let mut size: f64 = size.into();
// If we're somehow too big for a f64, just return the size in bytes
let Some(mut size) = size.to_f64() else {
return format!("{size} bytes");
};
let mut unit_counter = 0;
loop {
@@ -445,7 +407,7 @@ pub fn get_env_str_value(key: &str) -> Option<String> {
match (value_from_env, value_file) {
(Ok(_), Ok(_)) => panic!("You should not define both {key} and {key_file}!"),
(Ok(v_env), Err(_)) => Some(v_env),
(Err(_), Ok(v_file)) => match fs::read_to_string(v_file) {
(Err(_), Ok(v_file)) => match std::fs::read_to_string(v_file) {
Ok(content) => Some(content.trim().to_string()),
Err(e) => panic!("Failed to load {key}: {e:?}"),
},
@@ -532,14 +494,17 @@ pub fn parse_date(date: &str) -> NaiveDateTime {
// Deployment environment methods
//
/// Returns true if the program is running in Docker or Podman.
pub fn is_running_in_docker() -> bool {
Path::new("/.dockerenv").exists() || Path::new("/run/.containerenv").exists()
/// Returns true if the program is running in Docker, Podman or Kubernetes.
pub fn is_running_in_container() -> bool {
Path::new("/.dockerenv").exists()
|| Path::new("/run/.containerenv").exists()
|| Path::new("/run/secrets/kubernetes.io").exists()
|| Path::new("/var/run/secrets/kubernetes.io").exists()
}
/// Simple check to determine on which docker base image vaultwarden is running.
/// Simple check to determine on which container base image vaultwarden is running.
/// We build images based upon Debian or Alpine, so these we check here.
pub fn docker_base_image() -> &'static str {
pub fn container_base_image() -> &'static str {
if Path::new("/etc/debian_version").exists() {
"Debian"
} else if Path::new("/etc/alpine-release").exists() {
@@ -556,7 +521,7 @@ pub fn docker_base_image() -> &'static str {
use std::fmt;
use serde::de::{self, DeserializeOwned, Deserializer, MapAccess, SeqAccess, Visitor};
use serde_json::{self, Value};
use serde_json::Value;
pub type JsonMap = serde_json::Map<String, Value>;
@@ -644,6 +609,47 @@ fn _process_key(key: &str) -> String {
}
}
#[derive(Deserialize, Debug, Clone)]
#[serde(untagged)]
pub enum NumberOrString {
Number(i64),
String(String),
}
impl NumberOrString {
pub fn into_string(self) -> String {
match self {
NumberOrString::Number(n) => n.to_string(),
NumberOrString::String(s) => s,
}
}
#[allow(clippy::wrong_self_convention)]
pub fn into_i32(&self) -> Result<i32, crate::Error> {
use std::num::ParseIntError as PIE;
match self {
NumberOrString::Number(n) => match n.to_i32() {
Some(n) => Ok(n),
None => err!("Number does not fit in i32"),
},
NumberOrString::String(s) => {
s.parse().map_err(|e: PIE| crate::Error::new("Can't convert to number", e.to_string()))
}
}
}
#[allow(clippy::wrong_self_convention)]
pub fn into_i64(&self) -> Result<i64, crate::Error> {
use std::num::ParseIntError as PIE;
match self {
NumberOrString::Number(n) => Ok(*n),
NumberOrString::String(s) => {
s.parse().map_err(|e: PIE| crate::Error::new("Can't convert to number", e.to_string()))
}
}
}
}
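Because the enum is #[serde(untagged)], a JSON number and a JSON string both deserialize into NumberOrString, and the into_* helpers normalize either shape afterwards. A small sketch, assuming serde_json is in scope (the values are arbitrary):

let n: NumberOrString = serde_json::from_str("42").unwrap();
let s: NumberOrString = serde_json::from_str("\"42\"").unwrap();
assert_eq!(n.into_i64().unwrap(), 42);
assert_eq!(s.into_i64().unwrap(), 42);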
//
// Retry methods
//
@@ -696,14 +702,9 @@ where
use reqwest::{header, Client, ClientBuilder};
pub fn get_reqwest_client() -> Client {
match get_reqwest_client_builder().build() {
Ok(client) => client,
Err(e) => {
error!("Possible trust-dns error, trying with trust-dns disabled: '{e}'");
get_reqwest_client_builder().trust_dns(false).build().expect("Failed to build client")
}
}
pub fn get_reqwest_client() -> &'static Client {
static INSTANCE: Lazy<Client> = Lazy::new(|| get_reqwest_client_builder().build().expect("Failed to build client"));
&INSTANCE
}
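With the client stored in a once_cell Lazy static, the first caller builds it and every later call borrows the same &'static Client, sharing one connection pool process-wide. An illustrative call site (the URL is a placeholder; assumes an async context):

// Both calls hit the same pooled client; no per-request construction.
let client = get_reqwest_client();
let response = client.get("https://example.com/status").send().await;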
pub fn get_reqwest_client_builder() -> ClientBuilder {
@@ -754,3 +755,256 @@ pub fn convert_json_key_lcase_first(src_json: Value) -> Value {
value => value,
}
}
/// Parses the experimental client feature flags string into a HashMap.
pub fn parse_experimental_client_feature_flags(experimental_client_feature_flags: &str) -> HashMap<String, bool> {
let feature_states =
experimental_client_feature_flags.to_lowercase().split(',').map(|f| (f.trim().to_owned(), true)).collect();
feature_states
}
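The parser lowercases the whole string, splits on commas, trims each entry, and maps every named flag to true. A brief sketch (the flag names are examples only):

let flags = parse_experimental_client_feature_flags("Fido2-Vault-Credentials, autofill-v2");
assert_eq!(flags.get("fido2-vault-credentials"), Some(&true));
assert_eq!(flags.get("autofill-v2"), Some(&true));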
mod dns_resolver {
use std::{
fmt,
net::{IpAddr, SocketAddr},
sync::Arc,
};
use hickory_resolver::{system_conf::read_system_conf, TokioAsyncResolver};
use once_cell::sync::Lazy;
use reqwest::dns::{Name, Resolve, Resolving};
use crate::{util::is_global, CONFIG};
#[derive(Debug, Clone)]
pub enum CustomResolverError {
Blacklist {
domain: String,
},
NonGlobalIp {
domain: String,
ip: IpAddr,
},
}
impl CustomResolverError {
pub fn downcast_ref(e: &dyn std::error::Error) -> Option<&Self> {
let mut source = e.source();
while let Some(err) = source {
source = err.source();
if let Some(err) = err.downcast_ref::<CustomResolverError>() {
return Some(err);
}
}
None
}
}
impl fmt::Display for CustomResolverError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self {
Self::Blacklist {
domain,
} => write!(f, "Blacklisted domain: {domain} matched ICON_BLACKLIST_REGEX"),
Self::NonGlobalIp {
domain,
ip,
} => write!(f, "IP {ip} for domain '{domain}' is not a global IP!"),
}
}
}
impl std::error::Error for CustomResolverError {}
#[derive(Debug, Clone)]
pub enum CustomDnsResolver {
Default(),
Hickory(Arc<TokioAsyncResolver>),
}
type BoxError = Box<dyn std::error::Error + Send + Sync>;
impl CustomDnsResolver {
pub fn instance() -> Arc<Self> {
static INSTANCE: Lazy<Arc<CustomDnsResolver>> = Lazy::new(CustomDnsResolver::new);
Arc::clone(&*INSTANCE)
}
fn new() -> Arc<Self> {
match read_system_conf() {
Ok((config, opts)) => {
let resolver = TokioAsyncResolver::tokio(config.clone(), opts.clone());
Arc::new(Self::Hickory(Arc::new(resolver)))
}
Err(e) => {
warn!("Error creating Hickory resolver, falling back to default: {e:?}");
Arc::new(Self::Default())
}
}
}
// Note that we get an iterator of addresses, but we only grab the first one for convenience
async fn resolve_domain(&self, name: &str) -> Result<Option<SocketAddr>, BoxError> {
pre_resolve(name)?;
let result = match self {
Self::Default() => tokio::net::lookup_host(name).await?.next(),
Self::Hickory(r) => r.lookup_ip(name).await?.iter().next().map(|a| SocketAddr::new(a, 0)),
};
if let Some(addr) = &result {
post_resolve(name, addr.ip())?;
}
Ok(result)
}
}
fn pre_resolve(name: &str) -> Result<(), CustomResolverError> {
if crate::api::is_domain_blacklisted(name) {
return Err(CustomResolverError::Blacklist {
domain: name.to_string(),
});
}
Ok(())
}
fn post_resolve(name: &str, ip: IpAddr) -> Result<(), CustomResolverError> {
if CONFIG.icon_blacklist_non_global_ips() && !is_global(ip) {
Err(CustomResolverError::NonGlobalIp {
domain: name.to_string(),
ip,
})
} else {
Ok(())
}
}
impl Resolve for CustomDnsResolver {
fn resolve(&self, name: Name) -> Resolving {
let this = self.clone();
Box::pin(async move {
let name = name.as_str();
let result = this.resolve_domain(name).await?;
Ok::<reqwest::dns::Addrs, _>(Box::new(result.into_iter()))
})
}
}
}
pub use dns_resolver::{CustomDnsResolver, CustomResolverError};
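reqwest wraps resolver failures in its own error type, which is why CustomResolverError::downcast_ref walks the source() chain rather than downcasting the top-level error. A hedged sketch of a call site (URL and log message are illustrative):

// Illustrative only: detect a blacklisted domain after a failed request.
if let Err(e) = get_reqwest_client().get("https://example.com/icon.png").send().await {
    if let Some(CustomResolverError::Blacklist { domain }) = CustomResolverError::downcast_ref(&e) {
        warn!("Not retrying download: {domain} is blacklisted");
    }
}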
/// TODO: This is extracted from IpAddr::is_global, which is unstable:
/// https://doc.rust-lang.org/nightly/std/net/enum.IpAddr.html#method.is_global
/// Remove once https://github.com/rust-lang/rust/issues/27709 is resolved
#[allow(clippy::nonminimal_bool)]
#[cfg(any(not(feature = "unstable"), test))]
pub fn is_global_hardcoded(ip: std::net::IpAddr) -> bool {
match ip {
std::net::IpAddr::V4(ip) => {
!(ip.octets()[0] == 0 // "This network"
|| ip.is_private()
|| (ip.octets()[0] == 100 && (ip.octets()[1] & 0b1100_0000 == 0b0100_0000)) //ip.is_shared()
|| ip.is_loopback()
|| ip.is_link_local()
// addresses reserved for future protocols (`192.0.0.0/24`)
||(ip.octets()[0] == 192 && ip.octets()[1] == 0 && ip.octets()[2] == 0)
|| ip.is_documentation()
|| (ip.octets()[0] == 198 && (ip.octets()[1] & 0xfe) == 18) // ip.is_benchmarking()
|| (ip.octets()[0] & 240 == 240 && !ip.is_broadcast()) //ip.is_reserved()
|| ip.is_broadcast())
}
std::net::IpAddr::V6(ip) => {
!(ip.is_unspecified()
|| ip.is_loopback()
// IPv4-mapped Address (`::ffff:0:0/96`)
|| matches!(ip.segments(), [0, 0, 0, 0, 0, 0xffff, _, _])
// IPv4-IPv6 Translat. (`64:ff9b:1::/48`)
|| matches!(ip.segments(), [0x64, 0xff9b, 1, _, _, _, _, _])
// Discard-Only Address Block (`100::/64`)
|| matches!(ip.segments(), [0x100, 0, 0, 0, _, _, _, _])
// IETF Protocol Assignments (`2001::/23`)
|| (matches!(ip.segments(), [0x2001, b, _, _, _, _, _, _] if b < 0x200)
&& !(
// Port Control Protocol Anycast (`2001:1::1`)
u128::from_be_bytes(ip.octets()) == 0x2001_0001_0000_0000_0000_0000_0000_0001
// Traversal Using Relays around NAT Anycast (`2001:1::2`)
|| u128::from_be_bytes(ip.octets()) == 0x2001_0001_0000_0000_0000_0000_0000_0002
// AMT (`2001:3::/32`)
|| matches!(ip.segments(), [0x2001, 3, _, _, _, _, _, _])
// AS112-v6 (`2001:4:112::/48`)
|| matches!(ip.segments(), [0x2001, 4, 0x112, _, _, _, _, _])
// ORCHIDv2 (`2001:20::/28`)
|| matches!(ip.segments(), [0x2001, b, _, _, _, _, _, _] if (0x20..=0x2F).contains(&b))
))
|| ((ip.segments()[0] == 0x2001) && (ip.segments()[1] == 0xdb8)) // ip.is_documentation()
|| ((ip.segments()[0] & 0xfe00) == 0xfc00) //ip.is_unique_local()
|| ((ip.segments()[0] & 0xffc0) == 0xfe80)) //ip.is_unicast_link_local()
}
}
}
#[cfg(not(feature = "unstable"))]
pub use is_global_hardcoded as is_global;
#[cfg(feature = "unstable")]
#[inline(always)]
pub fn is_global(ip: std::net::IpAddr) -> bool {
ip.is_global()
}
/// These are some tests to check that the implementations match
/// All IPv4 addresses can be checked in 30 seconds or so, and they are correct as of nightly 2023-07-17
/// The IPv6 space can't be checked exhaustively in a reasonable time, so we check over a hundred billion random addresses; so far they are correct
/// Note that the is_global implementation is subject to change as new IP RFCs are created
///
/// To run while showing progress output:
/// cargo +nightly test --release --features sqlite,unstable -- --nocapture --ignored
#[cfg(test)]
#[cfg(feature = "unstable")]
mod tests {
use super::*;
use std::net::IpAddr;
#[test]
#[ignore]
fn test_ipv4_global() {
for a in 0..=u8::MAX {
println!("Iter: {}/255", a);
for b in 0..=u8::MAX {
for c in 0..=u8::MAX {
for d in 0..=u8::MAX {
let ip = IpAddr::V4(std::net::Ipv4Addr::new(a, b, c, d));
assert_eq!(ip.is_global(), is_global_hardcoded(ip), "IP mismatch: {}", ip)
}
}
}
}
}
#[test]
#[ignore]
fn test_ipv6_global() {
use rand::Rng;
std::thread::scope(|s| {
for t in 0..16 {
let handle = s.spawn(move || {
let mut v = [0u8; 16];
let mut rng = rand::thread_rng();
for i in 0..20 {
println!("Thread {t} Iter: {i}/50");
for _ in 0..500_000_000 {
rng.fill(&mut v);
let ip = IpAddr::V6(std::net::Ipv6Addr::from(v));
assert_eq!(ip.is_global(), is_global_hardcoded(ip), "IP mismatch: {ip}");
}
}
});
}
});
}
}

View file

@@ -10,19 +10,19 @@ import urllib.request
from collections import OrderedDict
if not (2 <= len(sys.argv) <= 3):
print("usage: %s <OUTPUT-FILE> [GIT-REF]" % sys.argv[0])
if not 2 <= len(sys.argv) <= 3:
print(f"usage: {sys.argv[0]} <OUTPUT-FILE> [GIT-REF]")
print()
print("This script generates a global equivalent domains JSON file from")
print("the upstream Bitwarden source repo.")
sys.exit(1)
OUTPUT_FILE = sys.argv[1]
GIT_REF = 'master' if len(sys.argv) == 2 else sys.argv[2]
GIT_REF = 'main' if len(sys.argv) == 2 else sys.argv[2]
BASE_URL = 'https://github.com/bitwarden/server/raw/%s' % GIT_REF
ENUMS_URL = '%s/src/Core/Enums/GlobalEquivalentDomainsType.cs' % BASE_URL
DOMAIN_LISTS_URL = '%s/src/Core/Utilities/StaticStore.cs' % BASE_URL
BASE_URL = f'https://github.com/bitwarden/server/raw/{GIT_REF}'
ENUMS_URL = f'{BASE_URL}/src/Core/Enums/GlobalEquivalentDomainsType.cs'
DOMAIN_LISTS_URL = f'{BASE_URL}/src/Core/Utilities/StaticStore.cs'
# Enum lines look like:
#
@@ -77,5 +77,5 @@ for name, domain_list in domain_lists.items():
global_domains.append(entry)
# Write out the global domains JSON file.
with open(OUTPUT_FILE, 'w') as f:
with open(file=OUTPUT_FILE, mode='w', encoding='utf-8') as f:
json.dump(global_domains, f, indent=2)