From 9342fa5744a02884d7573c4c58d371f469c76b34 Mon Sep 17 00:00:00 2001 From: BlackDex Date: Thu, 16 Jun 2022 15:13:10 +0200 Subject: [PATCH 01/24] Re-License Vaultwarden to AGPLv3 This commit prepares Vaultwarden for the Re-Licensing to AGPLv3 Solves #2450 --- Cargo.toml | 2 +- LICENSE.txt | 143 ++++++++++++++++++++++++---------------------------- README.md | 4 +- hooks/build | 2 +- 4 files changed, 69 insertions(+), 82 deletions(-) diff --git a/Cargo.toml b/Cargo.toml index 41965557..c83f8fed 100644 --- a/Cargo.toml +++ b/Cargo.toml @@ -8,7 +8,7 @@ resolver = "2" repository = "https://github.com/dani-garcia/vaultwarden" readme = "README.md" -license = "GPL-3.0-only" +license = "AGPL-3.0-only" publish = false build = "build.rs" diff --git a/LICENSE.txt b/LICENSE.txt index f288702d..0ad25db4 100644 --- a/LICENSE.txt +++ b/LICENSE.txt @@ -1,5 +1,5 @@ - GNU GENERAL PUBLIC LICENSE - Version 3, 29 June 2007 + GNU AFFERO GENERAL PUBLIC LICENSE + Version 3, 19 November 2007 Copyright (C) 2007 Free Software Foundation, Inc. Everyone is permitted to copy and distribute verbatim copies @@ -7,17 +7,15 @@ Preamble - The GNU General Public License is a free, copyleft license for -software and other kinds of works. + The GNU Affero General Public License is a free, copyleft license for +software and other kinds of works, specifically designed to ensure +cooperation with the community in the case of network server software. The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, -the GNU General Public License is intended to guarantee your freedom to +our General Public Licenses are intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free -software for all its users. We, the Free Software Foundation, use the -GNU General Public License for most of our software; it applies also to -any other work released this way by its authors. You can apply it to -your programs, too. +software for all its users. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you @@ -26,44 +24,34 @@ them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things. - To protect your rights, we need to prevent others from denying you -these rights or asking you to surrender the rights. Therefore, you have -certain responsibilities if you distribute copies of the software, or if -you modify it: responsibilities to respect the freedom of others. + Developers that use our General Public Licenses protect your rights +with two steps: (1) assert copyright on the software, and (2) offer +you this License which gives you legal permission to copy, distribute +and/or modify the software. - For example, if you distribute copies of such a program, whether -gratis or for a fee, you must pass on to the recipients the same -freedoms that you received. You must make sure that they, too, receive -or can get the source code. And you must show them these terms so they -know their rights. + A secondary benefit of defending all users' freedom is that +improvements made in alternate versions of the program, if they +receive widespread use, become available for other developers to +incorporate. Many developers of free software are heartened and +encouraged by the resulting cooperation. 
However, in the case of +software used on network servers, this result may fail to come about. +The GNU General Public License permits making a modified version and +letting the public access it on a server without ever releasing its +source code to the public. - Developers that use the GNU GPL protect your rights with two steps: -(1) assert copyright on the software, and (2) offer you this License -giving you legal permission to copy, distribute and/or modify it. + The GNU Affero General Public License is designed specifically to +ensure that, in such cases, the modified source code becomes available +to the community. It requires the operator of a network server to +provide the source code of the modified version running there to the +users of that server. Therefore, public use of a modified version, on +a publicly accessible server, gives the public access to the source +code of the modified version. - For the developers' and authors' protection, the GPL clearly explains -that there is no warranty for this free software. For both users' and -authors' sake, the GPL requires that modified versions be marked as -changed, so that their problems will not be attributed erroneously to -authors of previous versions. - - Some devices are designed to deny users access to install or run -modified versions of the software inside them, although the manufacturer -can do so. This is fundamentally incompatible with the aim of -protecting users' freedom to change the software. The systematic -pattern of such abuse occurs in the area of products for individuals to -use, which is precisely where it is most unacceptable. Therefore, we -have designed this version of the GPL to prohibit the practice for those -products. If such problems arise substantially in other domains, we -stand ready to extend this provision to those domains in future versions -of the GPL, as needed to protect the freedom of users. - - Finally, every program is threatened constantly by software patents. -States should not allow patents to restrict development and use of -software on general-purpose computers, but in those that do, we wish to -avoid the special danger that patents applied to a free program could -make it effectively proprietary. To prevent this, the GPL assures that -patents cannot be used to render the program non-free. + An older license, called the Affero General Public License and +published by Affero, was designed to accomplish similar goals. This is +a different license, not a version of the Affero GPL, but Affero has +released a new version of the Affero GPL which permits relicensing under +this license. The precise terms and conditions for copying, distribution and modification follow. @@ -72,7 +60,7 @@ modification follow. 0. Definitions. - "This License" refers to version 3 of the GNU General Public License. + "This License" refers to version 3 of the GNU Affero General Public License. "Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. @@ -549,35 +537,45 @@ to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. - 13. Use with the GNU Affero General Public License. + 13. Remote Network Interaction; Use with the GNU General Public License. 
+ + Notwithstanding any other provision of this License, if you modify the +Program, your modified version must prominently offer all users +interacting with it remotely through a computer network (if your version +supports such interaction) an opportunity to receive the Corresponding +Source of your version by providing access to the Corresponding Source +from a network server at no charge, through some standard or customary +means of facilitating copying of software. This Corresponding Source +shall include the Corresponding Source for any work covered by version 3 +of the GNU General Public License that is incorporated pursuant to the +following paragraph. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed -under version 3 of the GNU Affero General Public License into a single +under version 3 of the GNU General Public License into a single combined work, and to convey the resulting work. The terms of this License will continue to apply to the part which is the covered work, -but the special requirements of the GNU Affero General Public License, -section 13, concerning interaction through a network will apply to the -combination as such. +but the work with which it is combined will remain governed by version +3 of the GNU General Public License. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of -the GNU General Public License from time to time. Such new versions will -be similar in spirit to the present version, but may differ in detail to +the GNU Affero General Public License from time to time. Such new versions +will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the -Program specifies that a certain numbered version of the GNU General +Program specifies that a certain numbered version of the GNU Affero General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the -GNU General Public License, you may choose any version ever published +GNU Affero General Public License, you may choose any version ever published by the Free Software Foundation. If the Program specifies that a proxy can decide which future -versions of the GNU General Public License can be used, that proxy's +versions of the GNU Affero General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. @@ -635,40 +633,29 @@ the "copyright" line and a pointer to where the full notice is found. Copyright (C) This program is free software: you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation, either version 3 of the License, or + it under the terms of the GNU Affero General Public License as published + by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - GNU General Public License for more details. 
+ GNU Affero General Public License for more details. - You should have received a copy of the GNU General Public License + You should have received a copy of the GNU Affero General Public License along with this program. If not, see . Also add information on how to contact you by electronic and paper mail. - If the program does terminal interaction, make it output a short -notice like this when it starts in an interactive mode: - - Copyright (C) - This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'. - This is free software, and you are welcome to redistribute it - under certain conditions; type `show c' for details. - -The hypothetical commands `show w' and `show c' should show the appropriate -parts of the General Public License. Of course, your program's commands -might be different; for a GUI interface, you would use an "about box". + If your software can interact with users remotely through a computer +network, you should also make sure that it provides a way for users to +get its source. For example, if your program is a web application, its +interface could display a "Source" link that leads users to an archive +of the code. There are many ways you could offer source, and different +solutions will be better for different programs; see section 13 for the +specific requirements. You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. -For more information on this, and how to apply and follow the GNU GPL, see +For more information on this, and how to apply and follow the GNU AGPL, see . - - The GNU General Public License does not permit incorporating your program -into proprietary programs. If your program is a subroutine library, you -may consider it more useful to permit linking proprietary applications with -the library. If this is what you want to do, use the GNU Lesser General -Public License instead of this License. But first, please read -. diff --git a/README.md b/README.md index 4a233496..1201ab2b 100644 --- a/README.md +++ b/README.md @@ -7,7 +7,7 @@ [![Docker Pulls](https://img.shields.io/docker/pulls/vaultwarden/server.svg)](https://hub.docker.com/r/vaultwarden/server) [![Dependency Status](https://deps.rs/repo/github/dani-garcia/vaultwarden/status.svg)](https://deps.rs/repo/github/dani-garcia/vaultwarden) [![GitHub Release](https://img.shields.io/github/release/dani-garcia/vaultwarden.svg)](https://github.com/dani-garcia/vaultwarden/releases/latest) -[![GPL-3.0 Licensed](https://img.shields.io/github/license/dani-garcia/vaultwarden.svg)](https://github.com/dani-garcia/vaultwarden/blob/main/LICENSE.txt) +[![AGPL-3.0 Licensed](https://img.shields.io/github/license/dani-garcia/vaultwarden.svg)](https://github.com/dani-garcia/vaultwarden/blob/main/LICENSE.txt) [![Matrix Chat](https://img.shields.io/matrix/vaultwarden:matrix.org.svg?logo=matrix)](https://matrix.to/#/#vaultwarden:matrix.org) Image is based on [Rust implementation of Bitwarden API](https://github.com/dani-garcia/vaultwarden). @@ -39,7 +39,7 @@ docker run -d --name vaultwarden -v /vw-data/:/data/ -p 80:80 vaultwarden/server ``` This will preserve any persistent data under /vw-data/, you can adapt the path to whatever suits you. -**IMPORTANT**: Some web browsers, like Chrome, disallow the use of Web Crypto APIs in insecure contexts. In this case, you might get an error like `Cannot read property 'importKey'`. To solve this problem, you need to access the web vault from HTTPS. 
+**IMPORTANT**: Some web browsers, like Chrome, disallow the use of Web Crypto APIs in insecure contexts. In this case, you might get an error like `Cannot read property 'importKey'`. To solve this problem, you need to access the web vault from HTTPS. This can be configured in [vaultwarden directly](https://github.com/dani-garcia/vaultwarden/wiki/Enabling-HTTPS) or using a third-party reverse proxy ([some examples](https://github.com/dani-garcia/vaultwarden/wiki/Proxy-examples)). diff --git a/hooks/build b/hooks/build index 96f04d15..79e57c53 100755 --- a/hooks/build +++ b/hooks/build @@ -23,7 +23,7 @@ LABELS=( # https://github.com/opencontainers/image-spec/blob/master/annotations.md org.opencontainers.image.created="$(date --utc --iso-8601=seconds)" org.opencontainers.image.documentation="https://github.com/dani-garcia/vaultwarden/wiki" - org.opencontainers.image.licenses="GPL-3.0-only" + org.opencontainers.image.licenses="AGPL-3.0-only" org.opencontainers.image.revision="${SOURCE_COMMIT}" org.opencontainers.image.source="${SOURCE_REPOSITORY_URL}" org.opencontainers.image.url="https://hub.docker.com/r/${DOCKER_REPO#*/}" From 2c6bd8c9dc67d3e0208e1873d8bf3fef6d8f9aa3 Mon Sep 17 00:00:00 2001 From: Jeremy Lin Date: Sun, 22 Jan 2023 01:01:02 -0800 Subject: [PATCH 02/24] Rename `.buildx` Dockerfiles to `.buildkit` This is a more accurate name, since these Dockerfiles require BuildKit, not Buildx. --- .github/workflows/release.yml | 5 ++++- docker/Dockerfile.j2 | 2 +- docker/Makefile | 4 ++-- docker/amd64/{Dockerfile.buildx => Dockerfile.buildkit} | 0 ...{Dockerfile.buildx.alpine => Dockerfile.buildkit.alpine} | 0 docker/arm64/{Dockerfile.buildx => Dockerfile.buildkit} | 0 ...{Dockerfile.buildx.alpine => Dockerfile.buildkit.alpine} | 0 docker/armv6/{Dockerfile.buildx => Dockerfile.buildkit} | 0 ...{Dockerfile.buildx.alpine => Dockerfile.buildkit.alpine} | 0 docker/armv7/{Dockerfile.buildx => Dockerfile.buildkit} | 0 ...{Dockerfile.buildx.alpine => Dockerfile.buildkit.alpine} | 0 hooks/build | 6 +++--- 12 files changed, 10 insertions(+), 7 deletions(-) rename docker/amd64/{Dockerfile.buildx => Dockerfile.buildkit} (100%) rename docker/amd64/{Dockerfile.buildx.alpine => Dockerfile.buildkit.alpine} (100%) rename docker/arm64/{Dockerfile.buildx => Dockerfile.buildkit} (100%) rename docker/arm64/{Dockerfile.buildx.alpine => Dockerfile.buildkit.alpine} (100%) rename docker/armv6/{Dockerfile.buildx => Dockerfile.buildkit} (100%) rename docker/armv6/{Dockerfile.buildx.alpine => Dockerfile.buildkit.alpine} (100%) rename docker/armv7/{Dockerfile.buildx => Dockerfile.buildkit} (100%) rename docker/armv7/{Dockerfile.buildx.alpine => Dockerfile.buildkit.alpine} (100%) diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml index b3690ceb..32f6abc0 100644 --- a/.github/workflows/release.yml +++ b/.github/workflows/release.yml @@ -48,7 +48,10 @@ jobs: ports: - 5000:5000 env: - DOCKER_BUILDKIT: 1 # Disabled for now, but we should look at this because it will speedup building! + # Use BuildKit (https://docs.docker.com/build/buildkit/) for better + # build performance and the ability to copy extended file attributes + # (e.g., for executable capabilities) across build phases. 
+ DOCKER_BUILDKIT: 1 # DOCKER_REPO/secrets.DOCKERHUB_REPO needs to be 'index.docker.io//' DOCKER_REPO: ${{ secrets.DOCKERHUB_REPO }} SOURCE_COMMIT: ${{ github.sha }} diff --git a/docker/Dockerfile.j2 b/docker/Dockerfile.j2 index 82e8527f..095c295a 100644 --- a/docker/Dockerfile.j2 +++ b/docker/Dockerfile.j2 @@ -50,7 +50,7 @@ {% else %} {% set package_arch_target_param = "" %} {% endif %} -{% if "buildx" in target_file %} +{% if "buildkit" in target_file %} {% set mount_rust_cache = "--mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry " %} {% else %} {% set mount_rust_cache = "" %} diff --git a/docker/Makefile b/docker/Makefile index 8c939cba..d7c0ab80 100644 --- a/docker/Makefile +++ b/docker/Makefile @@ -8,8 +8,8 @@ all: $(OBJECTS) %/Dockerfile.alpine: Dockerfile.j2 render_template ./render_template "$<" "{\"target_file\":\"$@\"}" > "$@" -%/Dockerfile.buildx: Dockerfile.j2 render_template +%/Dockerfile.buildkit: Dockerfile.j2 render_template ./render_template "$<" "{\"target_file\":\"$@\"}" > "$@" -%/Dockerfile.buildx.alpine: Dockerfile.j2 render_template +%/Dockerfile.buildkit.alpine: Dockerfile.j2 render_template ./render_template "$<" "{\"target_file\":\"$@\"}" > "$@" diff --git a/docker/amd64/Dockerfile.buildx b/docker/amd64/Dockerfile.buildkit similarity index 100% rename from docker/amd64/Dockerfile.buildx rename to docker/amd64/Dockerfile.buildkit diff --git a/docker/amd64/Dockerfile.buildx.alpine b/docker/amd64/Dockerfile.buildkit.alpine similarity index 100% rename from docker/amd64/Dockerfile.buildx.alpine rename to docker/amd64/Dockerfile.buildkit.alpine diff --git a/docker/arm64/Dockerfile.buildx b/docker/arm64/Dockerfile.buildkit similarity index 100% rename from docker/arm64/Dockerfile.buildx rename to docker/arm64/Dockerfile.buildkit diff --git a/docker/arm64/Dockerfile.buildx.alpine b/docker/arm64/Dockerfile.buildkit.alpine similarity index 100% rename from docker/arm64/Dockerfile.buildx.alpine rename to docker/arm64/Dockerfile.buildkit.alpine diff --git a/docker/armv6/Dockerfile.buildx b/docker/armv6/Dockerfile.buildkit similarity index 100% rename from docker/armv6/Dockerfile.buildx rename to docker/armv6/Dockerfile.buildkit diff --git a/docker/armv6/Dockerfile.buildx.alpine b/docker/armv6/Dockerfile.buildkit.alpine similarity index 100% rename from docker/armv6/Dockerfile.buildx.alpine rename to docker/armv6/Dockerfile.buildkit.alpine diff --git a/docker/armv7/Dockerfile.buildx b/docker/armv7/Dockerfile.buildkit similarity index 100% rename from docker/armv7/Dockerfile.buildx rename to docker/armv7/Dockerfile.buildkit diff --git a/docker/armv7/Dockerfile.buildx.alpine b/docker/armv7/Dockerfile.buildkit.alpine similarity index 100% rename from docker/armv7/Dockerfile.buildx.alpine rename to docker/armv7/Dockerfile.buildkit.alpine diff --git a/hooks/build b/hooks/build index 96f04d15..223b4153 100755 --- a/hooks/build +++ b/hooks/build @@ -34,9 +34,9 @@ for label in "${LABELS[@]}"; do LABEL_ARGS+=(--label "${label}") done -# Check if DOCKER_BUILDKIT is set, if so, use the Dockerfile.buildx as template +# Check if DOCKER_BUILDKIT is set, if so, use the Dockerfile.buildkit as template if [[ -n "${DOCKER_BUILDKIT}" ]]; then - buildx_suffix=.buildx + buildkit_suffix=.buildkit fi set -ex @@ -45,6 +45,6 @@ for arch in "${arches[@]}"; do docker build \ "${LABEL_ARGS[@]}" \ -t "${DOCKER_REPO}:${DOCKER_TAG}-${arch}" \ - -f docker/${arch}/Dockerfile${buildx_suffix}${distro_suffix} \ + -f 
docker/${arch}/Dockerfile${buildkit_suffix}${distro_suffix} \ . done From 686474f81505b0b7aae323669809dd86f6186427 Mon Sep 17 00:00:00 2001 From: Jeremy Lin Date: Sun, 22 Jan 2023 01:21:52 -0800 Subject: [PATCH 03/24] Disable Hadolint check for consecutive `RUN` instructions (DL3059) This check doesn't seem to add enough value to justify the difficulties it tends to create when generating `RUN` instructions from a template. --- .hadolint.yaml | 2 ++ docker/Dockerfile.j2 | 5 ----- docker/amd64/Dockerfile | 1 - docker/amd64/Dockerfile.alpine | 1 - docker/amd64/Dockerfile.buildkit | 1 - docker/amd64/Dockerfile.buildkit.alpine | 1 - docker/arm64/Dockerfile | 4 ---- docker/arm64/Dockerfile.alpine | 3 --- docker/arm64/Dockerfile.buildkit | 4 ---- docker/arm64/Dockerfile.buildkit.alpine | 3 --- docker/armv6/Dockerfile | 5 ----- docker/armv6/Dockerfile.alpine | 3 --- docker/armv6/Dockerfile.buildkit | 5 ----- docker/armv6/Dockerfile.buildkit.alpine | 3 --- docker/armv7/Dockerfile | 4 ---- docker/armv7/Dockerfile.alpine | 3 --- docker/armv7/Dockerfile.buildkit | 4 ---- docker/armv7/Dockerfile.buildkit.alpine | 3 --- 18 files changed, 2 insertions(+), 53 deletions(-) diff --git a/.hadolint.yaml b/.hadolint.yaml index f1c324b8..1c305f9d 100644 --- a/.hadolint.yaml +++ b/.hadolint.yaml @@ -3,5 +3,7 @@ ignored: - DL3008 # disable explicit version for apk install - DL3018 + # disable check for consecutive `RUN` instructions + - DL3059 trustedRegistries: - docker.io diff --git a/docker/Dockerfile.j2 b/docker/Dockerfile.j2 index 095c295a..8c5157f4 100644 --- a/docker/Dockerfile.j2 +++ b/docker/Dockerfile.j2 @@ -106,7 +106,6 @@ ENV RUSTFLAGS='-Clink-arg=/usr/local/musl/{{ package_arch_target }}/lib/libatomi {% elif "arm" in target_file %} # # Install required build libs for {{ package_arch_name }} architecture. -# hadolint ignore=DL3059 RUN dpkg --add-architecture {{ package_arch_name }} \ && apt-get update \ && apt-get install -y \ @@ -178,7 +177,6 @@ RUN touch src/main.rs # Builds again, this time it'll just be # your actual source files being built -# hadolint ignore=DL3059 RUN {{ mount_rust_cache -}} cargo build --features ${DB} --release{{ package_arch_target_param }} ######################## RUNTIME IMAGE ######################## @@ -195,7 +193,6 @@ ENV ROCKET_PROFILE="release" \ {% if "amd64" not in target_file %} -# hadolint ignore=DL3059 RUN [ "cross-build-start" ] {% endif %} @@ -222,13 +219,11 @@ RUN mkdir /data \ {% if "armv6" in target_file and "alpine" not in target_file %} # In the Balena Bullseye images for armv6/rpi-debian there is a missing symlink. # This symlink was there in the buster images, and for some reason this is needed. 
-# hadolint ignore=DL3059 RUN ln -v -s /lib/ld-linux-armhf.so.3 /lib/ld-linux.so.3 {% endif -%} {% if "amd64" not in target_file %} -# hadolint ignore=DL3059 RUN [ "cross-build-end" ] {% endif %} diff --git a/docker/amd64/Dockerfile b/docker/amd64/Dockerfile index 09b959dd..281146f7 100644 --- a/docker/amd64/Dockerfile +++ b/docker/amd64/Dockerfile @@ -81,7 +81,6 @@ RUN touch src/main.rs # Builds again, this time it'll just be # your actual source files being built -# hadolint ignore=DL3059 RUN cargo build --features ${DB} --release ######################## RUNTIME IMAGE ######################## diff --git a/docker/amd64/Dockerfile.alpine b/docker/amd64/Dockerfile.alpine index eba7a10f..6dd624b6 100644 --- a/docker/amd64/Dockerfile.alpine +++ b/docker/amd64/Dockerfile.alpine @@ -75,7 +75,6 @@ RUN touch src/main.rs # Builds again, this time it'll just be # your actual source files being built -# hadolint ignore=DL3059 RUN cargo build --features ${DB} --release --target=x86_64-unknown-linux-musl ######################## RUNTIME IMAGE ######################## diff --git a/docker/amd64/Dockerfile.buildkit b/docker/amd64/Dockerfile.buildkit index ae841026..12e85211 100644 --- a/docker/amd64/Dockerfile.buildkit +++ b/docker/amd64/Dockerfile.buildkit @@ -81,7 +81,6 @@ RUN touch src/main.rs # Builds again, this time it'll just be # your actual source files being built -# hadolint ignore=DL3059 RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release ######################## RUNTIME IMAGE ######################## diff --git a/docker/amd64/Dockerfile.buildkit.alpine b/docker/amd64/Dockerfile.buildkit.alpine index e1a1de9b..ba45c39b 100644 --- a/docker/amd64/Dockerfile.buildkit.alpine +++ b/docker/amd64/Dockerfile.buildkit.alpine @@ -75,7 +75,6 @@ RUN touch src/main.rs # Builds again, this time it'll just be # your actual source files being built -# hadolint ignore=DL3059 RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=x86_64-unknown-linux-musl ######################## RUNTIME IMAGE ######################## diff --git a/docker/arm64/Dockerfile b/docker/arm64/Dockerfile index eabadb47..093afadd 100644 --- a/docker/arm64/Dockerfile +++ b/docker/arm64/Dockerfile @@ -46,7 +46,6 @@ RUN mkdir -pv "${CARGO_HOME}" \ # # Install required build libs for arm64 architecture. 
-# hadolint ignore=DL3059 RUN dpkg --add-architecture arm64 \ && apt-get update \ && apt-get install -y \ @@ -101,7 +100,6 @@ RUN touch src/main.rs # Builds again, this time it'll just be # your actual source files being built -# hadolint ignore=DL3059 RUN cargo build --features ${DB} --release --target=aarch64-unknown-linux-gnu ######################## RUNTIME IMAGE ######################## @@ -113,7 +111,6 @@ ENV ROCKET_PROFILE="release" \ ROCKET_ADDRESS=0.0.0.0 \ ROCKET_PORT=80 -# hadolint ignore=DL3059 RUN [ "cross-build-start" ] # Create data folder and Install needed libraries @@ -128,7 +125,6 @@ RUN mkdir /data \ && apt-get clean \ && rm -rf /var/lib/apt/lists/* -# hadolint ignore=DL3059 RUN [ "cross-build-end" ] VOLUME /data diff --git a/docker/arm64/Dockerfile.alpine b/docker/arm64/Dockerfile.alpine index f880d8ec..83bf0745 100644 --- a/docker/arm64/Dockerfile.alpine +++ b/docker/arm64/Dockerfile.alpine @@ -75,7 +75,6 @@ RUN touch src/main.rs # Builds again, this time it'll just be # your actual source files being built -# hadolint ignore=DL3059 RUN cargo build --features ${DB} --release --target=aarch64-unknown-linux-musl ######################## RUNTIME IMAGE ######################## @@ -89,7 +88,6 @@ ENV ROCKET_PROFILE="release" \ SSL_CERT_DIR=/etc/ssl/certs -# hadolint ignore=DL3059 RUN [ "cross-build-start" ] # Create data folder and Install needed libraries @@ -100,7 +98,6 @@ RUN mkdir /data \ curl \ ca-certificates -# hadolint ignore=DL3059 RUN [ "cross-build-end" ] VOLUME /data diff --git a/docker/arm64/Dockerfile.buildkit b/docker/arm64/Dockerfile.buildkit index dc5620e4..cdabd35c 100644 --- a/docker/arm64/Dockerfile.buildkit +++ b/docker/arm64/Dockerfile.buildkit @@ -46,7 +46,6 @@ RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/. # # Install required build libs for arm64 architecture. 
-# hadolint ignore=DL3059 RUN dpkg --add-architecture arm64 \ && apt-get update \ && apt-get install -y \ @@ -101,7 +100,6 @@ RUN touch src/main.rs # Builds again, this time it'll just be # your actual source files being built -# hadolint ignore=DL3059 RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=aarch64-unknown-linux-gnu ######################## RUNTIME IMAGE ######################## @@ -113,7 +111,6 @@ ENV ROCKET_PROFILE="release" \ ROCKET_ADDRESS=0.0.0.0 \ ROCKET_PORT=80 -# hadolint ignore=DL3059 RUN [ "cross-build-start" ] # Create data folder and Install needed libraries @@ -128,7 +125,6 @@ RUN mkdir /data \ && apt-get clean \ && rm -rf /var/lib/apt/lists/* -# hadolint ignore=DL3059 RUN [ "cross-build-end" ] VOLUME /data diff --git a/docker/arm64/Dockerfile.buildkit.alpine b/docker/arm64/Dockerfile.buildkit.alpine index b8fc36c1..837a7a39 100644 --- a/docker/arm64/Dockerfile.buildkit.alpine +++ b/docker/arm64/Dockerfile.buildkit.alpine @@ -75,7 +75,6 @@ RUN touch src/main.rs # Builds again, this time it'll just be # your actual source files being built -# hadolint ignore=DL3059 RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=aarch64-unknown-linux-musl ######################## RUNTIME IMAGE ######################## @@ -89,7 +88,6 @@ ENV ROCKET_PROFILE="release" \ SSL_CERT_DIR=/etc/ssl/certs -# hadolint ignore=DL3059 RUN [ "cross-build-start" ] # Create data folder and Install needed libraries @@ -100,7 +98,6 @@ RUN mkdir /data \ curl \ ca-certificates -# hadolint ignore=DL3059 RUN [ "cross-build-end" ] VOLUME /data diff --git a/docker/armv6/Dockerfile b/docker/armv6/Dockerfile index 7ddbdee8..84baa7b6 100644 --- a/docker/armv6/Dockerfile +++ b/docker/armv6/Dockerfile @@ -46,7 +46,6 @@ RUN mkdir -pv "${CARGO_HOME}" \ # # Install required build libs for armel architecture. -# hadolint ignore=DL3059 RUN dpkg --add-architecture armel \ && apt-get update \ && apt-get install -y \ @@ -101,7 +100,6 @@ RUN touch src/main.rs # Builds again, this time it'll just be # your actual source files being built -# hadolint ignore=DL3059 RUN cargo build --features ${DB} --release --target=arm-unknown-linux-gnueabi ######################## RUNTIME IMAGE ######################## @@ -113,7 +111,6 @@ ENV ROCKET_PROFILE="release" \ ROCKET_ADDRESS=0.0.0.0 \ ROCKET_PORT=80 -# hadolint ignore=DL3059 RUN [ "cross-build-start" ] # Create data folder and Install needed libraries @@ -130,10 +127,8 @@ RUN mkdir /data \ # In the Balena Bullseye images for armv6/rpi-debian there is a missing symlink. # This symlink was there in the buster images, and for some reason this is needed. 
-# hadolint ignore=DL3059 RUN ln -v -s /lib/ld-linux-armhf.so.3 /lib/ld-linux.so.3 -# hadolint ignore=DL3059 RUN [ "cross-build-end" ] VOLUME /data diff --git a/docker/armv6/Dockerfile.alpine b/docker/armv6/Dockerfile.alpine index 65bb552b..1f969d7c 100644 --- a/docker/armv6/Dockerfile.alpine +++ b/docker/armv6/Dockerfile.alpine @@ -77,7 +77,6 @@ RUN touch src/main.rs # Builds again, this time it'll just be # your actual source files being built -# hadolint ignore=DL3059 RUN cargo build --features ${DB} --release --target=arm-unknown-linux-musleabi ######################## RUNTIME IMAGE ######################## @@ -91,7 +90,6 @@ ENV ROCKET_PROFILE="release" \ SSL_CERT_DIR=/etc/ssl/certs -# hadolint ignore=DL3059 RUN [ "cross-build-start" ] # Create data folder and Install needed libraries @@ -102,7 +100,6 @@ RUN mkdir /data \ curl \ ca-certificates -# hadolint ignore=DL3059 RUN [ "cross-build-end" ] VOLUME /data diff --git a/docker/armv6/Dockerfile.buildkit b/docker/armv6/Dockerfile.buildkit index 7b9aab8a..1e33a25f 100644 --- a/docker/armv6/Dockerfile.buildkit +++ b/docker/armv6/Dockerfile.buildkit @@ -46,7 +46,6 @@ RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/. # # Install required build libs for armel architecture. -# hadolint ignore=DL3059 RUN dpkg --add-architecture armel \ && apt-get update \ && apt-get install -y \ @@ -101,7 +100,6 @@ RUN touch src/main.rs # Builds again, this time it'll just be # your actual source files being built -# hadolint ignore=DL3059 RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=arm-unknown-linux-gnueabi ######################## RUNTIME IMAGE ######################## @@ -113,7 +111,6 @@ ENV ROCKET_PROFILE="release" \ ROCKET_ADDRESS=0.0.0.0 \ ROCKET_PORT=80 -# hadolint ignore=DL3059 RUN [ "cross-build-start" ] # Create data folder and Install needed libraries @@ -130,10 +127,8 @@ RUN mkdir /data \ # In the Balena Bullseye images for armv6/rpi-debian there is a missing symlink. # This symlink was there in the buster images, and for some reason this is needed. -# hadolint ignore=DL3059 RUN ln -v -s /lib/ld-linux-armhf.so.3 /lib/ld-linux.so.3 -# hadolint ignore=DL3059 RUN [ "cross-build-end" ] VOLUME /data diff --git a/docker/armv6/Dockerfile.buildkit.alpine b/docker/armv6/Dockerfile.buildkit.alpine index 4bced53d..d0f5cfbe 100644 --- a/docker/armv6/Dockerfile.buildkit.alpine +++ b/docker/armv6/Dockerfile.buildkit.alpine @@ -77,7 +77,6 @@ RUN touch src/main.rs # Builds again, this time it'll just be # your actual source files being built -# hadolint ignore=DL3059 RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=arm-unknown-linux-musleabi ######################## RUNTIME IMAGE ######################## @@ -91,7 +90,6 @@ ENV ROCKET_PROFILE="release" \ SSL_CERT_DIR=/etc/ssl/certs -# hadolint ignore=DL3059 RUN [ "cross-build-start" ] # Create data folder and Install needed libraries @@ -102,7 +100,6 @@ RUN mkdir /data \ curl \ ca-certificates -# hadolint ignore=DL3059 RUN [ "cross-build-end" ] VOLUME /data diff --git a/docker/armv7/Dockerfile b/docker/armv7/Dockerfile index bcbf946c..8df12612 100644 --- a/docker/armv7/Dockerfile +++ b/docker/armv7/Dockerfile @@ -46,7 +46,6 @@ RUN mkdir -pv "${CARGO_HOME}" \ # # Install required build libs for armhf architecture. 
-# hadolint ignore=DL3059 RUN dpkg --add-architecture armhf \ && apt-get update \ && apt-get install -y \ @@ -101,7 +100,6 @@ RUN touch src/main.rs # Builds again, this time it'll just be # your actual source files being built -# hadolint ignore=DL3059 RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-gnueabihf ######################## RUNTIME IMAGE ######################## @@ -113,7 +111,6 @@ ENV ROCKET_PROFILE="release" \ ROCKET_ADDRESS=0.0.0.0 \ ROCKET_PORT=80 -# hadolint ignore=DL3059 RUN [ "cross-build-start" ] # Create data folder and Install needed libraries @@ -128,7 +125,6 @@ RUN mkdir /data \ && apt-get clean \ && rm -rf /var/lib/apt/lists/* -# hadolint ignore=DL3059 RUN [ "cross-build-end" ] VOLUME /data diff --git a/docker/armv7/Dockerfile.alpine b/docker/armv7/Dockerfile.alpine index 6d14ae34..1872e54e 100644 --- a/docker/armv7/Dockerfile.alpine +++ b/docker/armv7/Dockerfile.alpine @@ -75,7 +75,6 @@ RUN touch src/main.rs # Builds again, this time it'll just be # your actual source files being built -# hadolint ignore=DL3059 RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-musleabihf ######################## RUNTIME IMAGE ######################## @@ -89,7 +88,6 @@ ENV ROCKET_PROFILE="release" \ SSL_CERT_DIR=/etc/ssl/certs -# hadolint ignore=DL3059 RUN [ "cross-build-start" ] # Create data folder and Install needed libraries @@ -100,7 +98,6 @@ RUN mkdir /data \ curl \ ca-certificates -# hadolint ignore=DL3059 RUN [ "cross-build-end" ] VOLUME /data diff --git a/docker/armv7/Dockerfile.buildkit b/docker/armv7/Dockerfile.buildkit index 0084526b..4ff8364a 100644 --- a/docker/armv7/Dockerfile.buildkit +++ b/docker/armv7/Dockerfile.buildkit @@ -46,7 +46,6 @@ RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/. # # Install required build libs for armhf architecture. 
-# hadolint ignore=DL3059 RUN dpkg --add-architecture armhf \ && apt-get update \ && apt-get install -y \ @@ -101,7 +100,6 @@ RUN touch src/main.rs # Builds again, this time it'll just be # your actual source files being built -# hadolint ignore=DL3059 RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=armv7-unknown-linux-gnueabihf ######################## RUNTIME IMAGE ######################## @@ -113,7 +111,6 @@ ENV ROCKET_PROFILE="release" \ ROCKET_ADDRESS=0.0.0.0 \ ROCKET_PORT=80 -# hadolint ignore=DL3059 RUN [ "cross-build-start" ] # Create data folder and Install needed libraries @@ -128,7 +125,6 @@ RUN mkdir /data \ && apt-get clean \ && rm -rf /var/lib/apt/lists/* -# hadolint ignore=DL3059 RUN [ "cross-build-end" ] VOLUME /data diff --git a/docker/armv7/Dockerfile.buildkit.alpine b/docker/armv7/Dockerfile.buildkit.alpine index d29465bb..2fc23849 100644 --- a/docker/armv7/Dockerfile.buildkit.alpine +++ b/docker/armv7/Dockerfile.buildkit.alpine @@ -75,7 +75,6 @@ RUN touch src/main.rs # Builds again, this time it'll just be # your actual source files being built -# hadolint ignore=DL3059 RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=armv7-unknown-linux-musleabihf ######################## RUNTIME IMAGE ######################## @@ -89,7 +88,6 @@ ENV ROCKET_PROFILE="release" \ SSL_CERT_DIR=/etc/ssl/certs -# hadolint ignore=DL3059 RUN [ "cross-build-start" ] # Create data folder and Install needed libraries @@ -100,7 +98,6 @@ RUN mkdir /data \ curl \ ca-certificates -# hadolint ignore=DL3059 RUN [ "cross-build-end" ] VOLUME /data From 95494083f2b09417fef916f4d315cb5a38a78128 Mon Sep 17 00:00:00 2001 From: sirux88 Date: Wed, 25 Jan 2023 08:06:21 +0100 Subject: [PATCH 04/24] added database migration --- .../down.sql | 0 .../up.sql | 2 ++ .../down.sql | 0 .../up.sql | 2 ++ .../down.sql | 0 .../up.sql | 2 ++ src/db/models/event.rs | 6 ++--- src/db/models/org_policy.rs | 23 ++++++++++++++++++- src/db/models/organization.rs | 8 +++++-- src/db/models/user.rs | 21 +++++++++++++++++ src/db/schemas/mysql/schema.rs | 1 + src/db/schemas/postgresql/schema.rs | 1 + src/db/schemas/sqlite/schema.rs | 1 + 13 files changed, 61 insertions(+), 6 deletions(-) create mode 100644 migrations/mysql/2023-01-06-151600_add_reset_password_support/down.sql create mode 100644 migrations/mysql/2023-01-06-151600_add_reset_password_support/up.sql create mode 100644 migrations/postgresql/2023-01-06-151600_add_reset_password_support/down.sql create mode 100644 migrations/postgresql/2023-01-06-151600_add_reset_password_support/up.sql create mode 100644 migrations/sqlite/2023-01-06-151600_add_reset_password_support/down.sql create mode 100644 migrations/sqlite/2023-01-06-151600_add_reset_password_support/up.sql diff --git a/migrations/mysql/2023-01-06-151600_add_reset_password_support/down.sql b/migrations/mysql/2023-01-06-151600_add_reset_password_support/down.sql new file mode 100644 index 00000000..e69de29b diff --git a/migrations/mysql/2023-01-06-151600_add_reset_password_support/up.sql b/migrations/mysql/2023-01-06-151600_add_reset_password_support/up.sql new file mode 100644 index 00000000..d8173af4 --- /dev/null +++ b/migrations/mysql/2023-01-06-151600_add_reset_password_support/up.sql @@ -0,0 +1,2 @@ +ALTER TABLE users_organizations +ADD COLUMN reset_password_key VARCHAR(255); diff --git 
a/migrations/postgresql/2023-01-06-151600_add_reset_password_support/down.sql b/migrations/postgresql/2023-01-06-151600_add_reset_password_support/down.sql new file mode 100644 index 00000000..e69de29b diff --git a/migrations/postgresql/2023-01-06-151600_add_reset_password_support/up.sql b/migrations/postgresql/2023-01-06-151600_add_reset_password_support/up.sql new file mode 100644 index 00000000..326b3106 --- /dev/null +++ b/migrations/postgresql/2023-01-06-151600_add_reset_password_support/up.sql @@ -0,0 +1,2 @@ +ALTER TABLE users_organizations +ADD COLUMN reset_password_key TEXT; diff --git a/migrations/sqlite/2023-01-06-151600_add_reset_password_support/down.sql b/migrations/sqlite/2023-01-06-151600_add_reset_password_support/down.sql new file mode 100644 index 00000000..e69de29b diff --git a/migrations/sqlite/2023-01-06-151600_add_reset_password_support/up.sql b/migrations/sqlite/2023-01-06-151600_add_reset_password_support/up.sql new file mode 100644 index 00000000..326b3106 --- /dev/null +++ b/migrations/sqlite/2023-01-06-151600_add_reset_password_support/up.sql @@ -0,0 +1,2 @@ +ALTER TABLE users_organizations +ADD COLUMN reset_password_key TEXT; diff --git a/src/db/models/event.rs b/src/db/models/event.rs index 9196b8a8..64312273 100644 --- a/src/db/models/event.rs +++ b/src/db/models/event.rs @@ -87,9 +87,9 @@ pub enum EventType { OrganizationUserRemoved = 1503, OrganizationUserUpdatedGroups = 1504, // OrganizationUserUnlinkedSso = 1505, // Not supported - // OrganizationUserResetPasswordEnroll = 1506, // Not supported - // OrganizationUserResetPasswordWithdraw = 1507, // Not supported - // OrganizationUserAdminResetPassword = 1508, // Not supported + OrganizationUserResetPasswordEnroll = 1506, + OrganizationUserResetPasswordWithdraw = 1507, + OrganizationUserAdminResetPassword = 1508, // OrganizationUserResetSsoLink = 1509, // Not supported // OrganizationUserFirstSsoLogin = 1510, // Not supported OrganizationUserRevoked = 1511, diff --git a/src/db/models/org_policy.rs b/src/db/models/org_policy.rs index caa3335f..c9fd5c34 100644 --- a/src/db/models/org_policy.rs +++ b/src/db/models/org_policy.rs @@ -32,7 +32,7 @@ pub enum OrgPolicyType { PersonalOwnership = 5, DisableSend = 6, SendOptions = 7, - // ResetPassword = 8, // Not supported + ResetPassword = 8, // MaximumVaultTimeout = 9, // Not supported (Not AGPLv3 Licensed) // DisablePersonalVaultExport = 10, // Not supported (Not AGPLv3 Licensed) } @@ -44,6 +44,13 @@ pub struct SendOptionsPolicyData { pub DisableHideEmail: bool, } +// https://github.com/bitwarden/server/blob/5cbdee137921a19b1f722920f0fa3cd45af2ef0f/src/Core/Models/Data/Organizations/Policies/ResetPasswordDataModel.cs +#[derive(Deserialize)] +#[allow(non_snake_case)] +pub struct ResetPasswordDataModel { + pub AutoEnrollEnabled: bool, +} + pub type OrgPolicyResult = Result<(), OrgPolicyErr>; #[derive(Debug)] @@ -298,6 +305,20 @@ impl OrgPolicy { Ok(()) } + pub async fn org_is_reset_password_auto_enroll(org_uuid: &str, conn: &mut DbConn) -> bool { + match OrgPolicy::find_by_org_and_type(org_uuid, OrgPolicyType::ResetPassword, conn).await { + Some(policy) => match serde_json::from_str::>(&policy.data) { + Ok(opts) => { + return opts.data.AutoEnrollEnabled; + } + _ => error!("Failed to deserialize ResetPasswordDataModel: {}", policy.data), + }, + None => return false, + } + + false + } + /// Returns true if the user belongs to an org that has enabled the `DisableHideEmail` /// option of the `Send Options` policy, and the user is not an owner or admin of that org. 
pub async fn is_hide_email_disabled(user_uuid: &str, conn: &mut DbConn) -> bool { diff --git a/src/db/models/organization.rs b/src/db/models/organization.rs index 331e1007..1de321bd 100644 --- a/src/db/models/organization.rs +++ b/src/db/models/organization.rs @@ -29,6 +29,7 @@ db_object! { pub akey: String, pub status: i32, pub atype: i32, + pub reset_password_key: Option, } } @@ -158,7 +159,7 @@ impl Organization { "SelfHost": true, "UseApi": false, // Not supported "HasPublicAndPrivateKeys": self.private_key.is_some() && self.public_key.is_some(), - "UseResetPassword": false, // Not supported + "UseResetPassword": true, "BusinessName": null, "BusinessAddress1": null, @@ -194,6 +195,7 @@ impl UserOrganization { akey: String::new(), status: UserOrgStatus::Accepted as i32, atype: UserOrgType::User as i32, + reset_password_key: None, } } @@ -311,7 +313,8 @@ impl UserOrganization { "UseApi": false, // Not supported "SelfHost": true, "HasPublicAndPrivateKeys": org.private_key.is_some() && org.public_key.is_some(), - "ResetPasswordEnrolled": false, // Not supported + "ResetPasswordEnrolled": self.reset_password_key.is_some(), + "UseResetPassword": true, "SsoBound": false, // Not supported "UseSso": false, // Not supported "ProviderId": null, @@ -377,6 +380,7 @@ impl UserOrganization { "Type": self.atype, "AccessAll": self.access_all, "TwoFactorEnabled": twofactor_enabled, + "ResetPasswordEnrolled":self.reset_password_key.is_some(), "Object": "organizationUserUserDetails", }) diff --git a/src/db/models/user.rs b/src/db/models/user.rs index 5ce87e14..2ca770b5 100644 --- a/src/db/models/user.rs +++ b/src/db/models/user.rs @@ -178,6 +178,27 @@ impl User { self.security_stamp = crate::util::get_uuid(); } + /// Set the password hash generated + /// And resets the security_stamp. Based upon the allow_next_route the security_stamp will be different. + /// + /// # Arguments + /// + /// * `new_password_hash` - A str which contains a hashed version of the users master password. + /// * `new_key` - A String which contains the new aKey value of the users master password. + /// * `allow_next_route` - A Option> with the function names of the next allowed (rocket) routes. + /// These routes are able to use the previous stamp id for the next 2 minutes. + /// After these 2 minutes this stamp will expire. + /// + pub fn set_password_and_key( + &mut self, + new_password_hash: &str, + new_key: &str, + allow_next_route: Option>, + ) { + self.set_password(new_password_hash, allow_next_route); + self.akey = String::from(new_key); + } + /// Set the stamp_exception to only allow a subsequent request matching a specific route using the current security-stamp. /// /// # Arguments diff --git a/src/db/schemas/mysql/schema.rs b/src/db/schemas/mysql/schema.rs index 27cd24c3..cdb3e059 100644 --- a/src/db/schemas/mysql/schema.rs +++ b/src/db/schemas/mysql/schema.rs @@ -222,6 +222,7 @@ table! { akey -> Text, status -> Integer, atype -> Integer, + reset_password_key -> Nullable, } } diff --git a/src/db/schemas/postgresql/schema.rs b/src/db/schemas/postgresql/schema.rs index 0233e0c9..6ec8a979 100644 --- a/src/db/schemas/postgresql/schema.rs +++ b/src/db/schemas/postgresql/schema.rs @@ -222,6 +222,7 @@ table! { akey -> Text, status -> Integer, atype -> Integer, + reset_password_key -> Nullable, } } diff --git a/src/db/schemas/sqlite/schema.rs b/src/db/schemas/sqlite/schema.rs index 391e6700..faaf6fae 100644 --- a/src/db/schemas/sqlite/schema.rs +++ b/src/db/schemas/sqlite/schema.rs @@ -222,6 +222,7 @@ table! 
{ akey -> Text, status -> Integer, atype -> Integer, + reset_password_key -> Nullable, } } From c6c45c4c49be81a86240b7a4462b7d502be4257d Mon Sep 17 00:00:00 2001 From: sirux88 Date: Wed, 25 Jan 2023 08:06:21 +0100 Subject: [PATCH 05/24] working implementation --- src/api/core/organizations.rs | 232 ++++++++++++++++++ src/config.rs | 1 + src/mail.rs | 13 + .../templates/email/admin_reset_password.hbs | 6 + .../email/admin_reset_password.html.hbs | 11 + 5 files changed, 263 insertions(+) create mode 100644 src/static/templates/email/admin_reset_password.hbs create mode 100644 src/static/templates/email/admin_reset_password.html.hbs diff --git a/src/api/core/organizations.rs b/src/api/core/organizations.rs index c0af8f6e..4b0605f8 100644 --- a/src/api/core/organizations.rs +++ b/src/api/core/organizations.rs @@ -62,6 +62,7 @@ pub fn routes() -> Vec { get_plans_tax_rates, import, post_org_keys, + get_organization_keys, bulk_public_keys, deactivate_organization_user, bulk_deactivate_organization_user, @@ -86,6 +87,9 @@ pub fn routes() -> Vec { put_user_groups, delete_group_user, post_delete_group_user, + put_reset_password_enrollment, + get_reset_password_details, + put_reset_password, get_org_export ] } @@ -707,6 +711,10 @@ async fn send_invite( err!("Only Owners can invite Managers, Admins or Owners") } + if !CONFIG.mail_enabled() && OrgPolicy::org_is_reset_password_auto_enroll(&org_id, &mut conn).await { + err!("With mailing disabled and auto-enrollment-feature of reset-password-policy enabled it's not possible to invite users"); + } + for email in data.Emails.iter() { let email = email.to_lowercase(); let mut user_org_status = UserOrgStatus::Invited as i32; @@ -721,6 +729,10 @@ async fn send_invite( } if !CONFIG.mail_enabled() { + if OrgPolicy::org_is_reset_password_auto_enroll(&org_id, &mut conn).await { + err!("With disabled mailing and enabled auto-enrollment-feature of reset-password-policy it's not possible to invite existing users"); + } + let invitation = Invitation::new(&email); invitation.save(&mut conn).await?; } @@ -736,6 +748,10 @@ async fn send_invite( // automatically accept existing users if mail is disabled if !CONFIG.mail_enabled() && !user.password_hash.is_empty() { user_org_status = UserOrgStatus::Accepted as i32; + + if OrgPolicy::org_is_reset_password_auto_enroll(&org_id, &mut conn).await { + err!("With disabled mailing and enabled auto-enrollment-feature of reset-password-policy it's not possible to invite existing users"); + } } user } @@ -882,6 +898,7 @@ async fn _reinvite_user(org_id: &str, user_org: &str, invited_by_email: &str, co #[allow(non_snake_case)] struct AcceptData { Token: String, + ResetPasswordKey: Option, } #[post("/organizations//users/<_org_user_id>/accept", data = "")] @@ -909,6 +926,11 @@ async fn accept_invite( err!("User already accepted the invitation") } + let master_password_required = OrgPolicy::org_is_reset_password_auto_enroll(org, &mut conn).await; + if data.ResetPasswordKey.is_none() && master_password_required { + err!("Reset password key is required, but not provided."); + } + // This check is also done at accept_invite(), _confirm_invite, _activate_user(), edit_user(), admin::update_user_org_type // It returns different error messages per function. 
if user_org.atype < UserOrgType::Admin { @@ -924,6 +946,11 @@ async fn accept_invite( } user_org.status = UserOrgStatus::Accepted as i32; + + if master_password_required { + user_org.reset_password_key = data.ResetPasswordKey; + } + user_org.save(&mut conn).await?; } } @@ -1570,6 +1597,19 @@ async fn put_policy( } } + // This check is required since invited users automatically get accepted if mailing is not enabled (this seems like a vaultwarden specific feature) + // As a result of this the necessary "/accepted"-endpoint doesn't get hit. + // But this endpoint is required for autoenrollment while invitation. + // Nevertheless reset password is fully fuctiontional in settings without mailing by manual enrollment + + if pol_type_enum == OrgPolicyType::ResetPassword && data.enabled && !CONFIG.mail_enabled() { + if let Some(policy_data) = &data.data { + if policy_data["autoEnrollEnabled"].as_bool().unwrap_or(false) { + err!("Autoenroll can't be used since it requires enabled emailing") + } + } + } + let mut policy = match OrgPolicy::find_by_org_and_type(&org_id, pol_type_enum, &mut conn).await { Some(p) => p, None => OrgPolicy::new(org_id.clone(), pol_type_enum, "{}".to_string()), @@ -2460,6 +2500,198 @@ async fn delete_group_user( GroupUser::delete_by_group_id_and_user_id(&group_id, &org_user_id, &mut conn).await } +#[derive(Deserialize)] +#[allow(non_snake_case)] +struct OrganizationUserResetPasswordEnrollmentRequest { + ResetPasswordKey: Option, +} + +#[derive(Deserialize)] +#[allow(non_snake_case)] +struct OrganizationUserResetPasswordRequest { + NewMasterPasswordHash: String, + Key: String, +} + +#[get("/organizations//keys")] +async fn get_organization_keys(org_id: String, mut conn: DbConn) -> JsonResult { + let org = match Organization::find_by_uuid(&org_id, &mut conn).await { + Some(organization) => organization, + None => err!("Organization not found"), + }; + + Ok(Json(json!({ + "Object": "organizationKeys", + "PublicKey": org.public_key, + "PrivateKey": org.private_key, + }))) +} + +#[put("/organizations//users//reset-password", data = "")] +async fn put_reset_password( + org_id: String, + org_user_id: String, + headers: AdminHeaders, + data: JsonUpcase, + mut conn: DbConn, + ip: ClientIp, + nt: Notify<'_>, +) -> EmptyResult { + let org = match Organization::find_by_uuid(&org_id, &mut conn).await { + Some(org) => org, + None => err!("Required organization not found"), + }; + + let policy = match OrgPolicy::find_by_org_and_type(&org.uuid, OrgPolicyType::ResetPassword, &mut conn).await { + Some(p) => p, + None => err!("Policy not found"), + }; + + if !policy.enabled { + err!("Reset password policy not enabled"); + } + + let org_user = match UserOrganization::find_by_uuid_and_org(&org_user_id, &org.uuid, &mut conn).await { + Some(user) => user, + None => err!("User to reset isn't member of required organization"), + }; + + if org_user.reset_password_key.is_none() { + err!("Password reset not or not corretly enrolled"); + } + if org_user.status != (UserOrgStatus::Confirmed as i32) { + err!("Organization user must be confirmed for password reset functionality"); + } + + //Resetting user must be higher/equal to user to reset + let mut reset_allowed = false; + if headers.org_user_type == UserOrgType::Owner { + reset_allowed = true; + } + if headers.org_user_type == UserOrgType::Admin { + reset_allowed = org_user.atype != (UserOrgType::Owner as i32); + } + + if !reset_allowed { + err!("No permission to reset this user's password"); + } + + let mut user = match 
User::find_by_uuid(&org_user.user_uuid, &mut conn).await { + Some(user) => user, + None => err!("User not found"), + }; + + let reset_request = data.into_inner().data; + + user.set_password_and_key(reset_request.NewMasterPasswordHash.as_str(), reset_request.Key.as_str(), None); + user.save(&mut conn).await?; + + nt.send_user_update(UpdateType::LogOut, &user).await; + + if CONFIG.mail_enabled() { + mail::send_admin_reset_password(&user.email.to_lowercase(), &user.name, &org.name).await?; + } + + log_event( + EventType::OrganizationUserAdminResetPassword as i32, + &org_user_id, + org.uuid.clone(), + headers.user.uuid.clone(), + headers.device.atype, + &ip.ip, + &mut conn, + ) + .await; + + Ok(()) +} + +#[get("/organizations//users//reset-password-details")] +async fn get_reset_password_details( + org_id: String, + org_user_id: String, + _headers: AdminHeaders, + mut conn: DbConn, +) -> JsonResult { + let org = match Organization::find_by_uuid(&org_id, &mut conn).await { + Some(org) => org, + None => err!("Required organization not found"), + }; + + let policy = match OrgPolicy::find_by_org_and_type(&org_id, OrgPolicyType::ResetPassword, &mut conn).await { + Some(p) => p, + None => err!("Policy not found"), + }; + + if !policy.enabled { + err!("Reset password policy not enabled"); + } + + let org_user = match UserOrganization::find_by_uuid_and_org(&org_user_id, &org_id, &mut conn).await { + Some(user) => user, + None => err!("User to reset isn't member of required organization"), + }; + + let user = match User::find_by_uuid(&org_user.user_uuid, &mut conn).await { + Some(user) => user, + None => err!("User not found"), + }; + + Ok(Json(json!({ + "Object": "organizationUserResetPasswordDetails", + "Kdf":user.client_kdf_type, + "KdfIterations":user.client_kdf_iter, + "ResetPasswordKey":org_user.reset_password_key, + "EncryptedPrivateKey":org.private_key , + + }))) +} + +#[put("/organizations//users//reset-password-enrollment", data = "")] +async fn put_reset_password_enrollment( + org_id: String, + org_user_id: String, + headers: Headers, + data: JsonUpcase, + mut conn: DbConn, + ip: ClientIp, +) -> EmptyResult { + let policy = match OrgPolicy::find_by_org_and_type(&org_id, OrgPolicyType::ResetPassword, &mut conn).await { + Some(p) => p, + None => err!("Policy not found"), + }; + + if !policy.enabled { + err!("Reset password policy not enabled"); + } + + let mut org_user = match UserOrganization::find_by_user_and_org(&headers.user.uuid, &org_id, &mut conn).await { + Some(u) => u, + None => err!("User to enroll isn't member of required organization"), + }; + + let reset_request = data.into_inner().data; + + if reset_request.ResetPasswordKey.is_none() + && OrgPolicy::org_is_reset_password_auto_enroll(&org_id, &mut conn).await + { + err!("Reset password can't be withdrawed due to an enterprise policy"); + } + + org_user.reset_password_key = reset_request.ResetPasswordKey; + org_user.save(&mut conn).await?; + + let log_id = if org_user.reset_password_key.is_some() { + EventType::OrganizationUserResetPasswordEnroll as i32 + } else { + EventType::OrganizationUserResetPasswordWithdraw as i32 + }; + + log_event(log_id, &org_user_id, org_id, headers.user.uuid.clone(), headers.device.atype, &ip.ip, &mut conn).await; + + Ok(()) +} + // This is a new function active since the v2022.9.x clients. // It combines the previous two calls done before. // We call those two functions here and combine them our selfs. 
diff --git a/src/config.rs b/src/config.rs index 46deed54..68a6811c 100644 --- a/src/config.rs +++ b/src/config.rs @@ -1136,6 +1136,7 @@ where reg!("email/email_footer"); reg!("email/email_footer_text"); + reg!("email/admin_reset_password", ".html"); reg!("email/change_email", ".html"); reg!("email/delete_account", ".html"); reg!("email/emergency_access_invite_accepted", ".html"); diff --git a/src/mail.rs b/src/mail.rs index 8ecb11c6..cffa65fb 100644 --- a/src/mail.rs +++ b/src/mail.rs @@ -496,6 +496,19 @@ pub async fn send_test(address: &str) -> EmptyResult { send_email(address, &subject, body_html, body_text).await } +pub async fn send_admin_reset_password(address: &str, user_name: &str, org_name: &str) -> EmptyResult { + let (subject, body_html, body_text) = get_text( + "email/admin_reset_password", + json!({ + "url": CONFIG.domain(), + "img_src": CONFIG._smtp_img_src(), + "user_name": user_name, + "org_name": org_name, + }), + )?; + send_email(address, &subject, body_html, body_text).await +} + async fn send_email(address: &str, subject: &str, body_html: String, body_text: String) -> EmptyResult { let smtp_from = &CONFIG.smtp_from(); diff --git a/src/static/templates/email/admin_reset_password.hbs b/src/static/templates/email/admin_reset_password.hbs new file mode 100644 index 00000000..8d381772 --- /dev/null +++ b/src/static/templates/email/admin_reset_password.hbs @@ -0,0 +1,6 @@ +Master Password Has Been Changed + +The master password for {{user_name}} has been changed by an administrator in your {{org_name}} organization. If you did not initiate this request, please reach out to your administrator immediately. + +=== +Github: https://github.com/dani-garcia/vaultwarden diff --git a/src/static/templates/email/admin_reset_password.html.hbs b/src/static/templates/email/admin_reset_password.html.hbs new file mode 100644 index 00000000..d9749d22 --- /dev/null +++ b/src/static/templates/email/admin_reset_password.html.hbs @@ -0,0 +1,11 @@ +Master Password Has Been Changed + +{{> email/email_header }} + + + + +
+ The master password for {{user_name}} has been changed by an administrator in your {{org_name}} organization. If you did not initiate this request, please reach out to your administrator immediately. +
+{{> email/email_footer }} From adaefc8628423dd7aebb39e76ce35ee00ce618c5 Mon Sep 17 00:00:00 2001 From: sirux88 Date: Wed, 25 Jan 2023 08:09:26 +0100 Subject: [PATCH 06/24] fixes for current upstream main --- src/api/core/organizations.rs | 2 +- src/db/models/user.rs | 21 --------------------- 2 files changed, 1 insertion(+), 22 deletions(-) diff --git a/src/api/core/organizations.rs b/src/api/core/organizations.rs index 4b0605f8..964d4c4d 100644 --- a/src/api/core/organizations.rs +++ b/src/api/core/organizations.rs @@ -2583,7 +2583,7 @@ async fn put_reset_password( let reset_request = data.into_inner().data; - user.set_password_and_key(reset_request.NewMasterPasswordHash.as_str(), reset_request.Key.as_str(), None); + user.set_password(reset_request.NewMasterPasswordHash.as_str(), Some(reset_request.Key), true, None); user.save(&mut conn).await?; nt.send_user_update(UpdateType::LogOut, &user).await; diff --git a/src/db/models/user.rs b/src/db/models/user.rs index 2ca770b5..5ce87e14 100644 --- a/src/db/models/user.rs +++ b/src/db/models/user.rs @@ -178,27 +178,6 @@ impl User { self.security_stamp = crate::util::get_uuid(); } - /// Set the password hash generated - /// And resets the security_stamp. Based upon the allow_next_route the security_stamp will be different. - /// - /// # Arguments - /// - /// * `new_password_hash` - A str which contains a hashed version of the users master password. - /// * `new_key` - A String which contains the new aKey value of the users master password. - /// * `allow_next_route` - A Option> with the function names of the next allowed (rocket) routes. - /// These routes are able to use the previous stamp id for the next 2 minutes. - /// After these 2 minutes this stamp will expire. - /// - pub fn set_password_and_key( - &mut self, - new_password_hash: &str, - new_key: &str, - allow_next_route: Option>, - ) { - self.set_password(new_password_hash, allow_next_route); - self.akey = String::from(new_key); - } - /// Set the stamp_exception to only allow a subsequent request matching a specific route using the current security-stamp. /// /// # Arguments From 9b20decdc1c6e400b738e28cf4238a2a73d9a18a Mon Sep 17 00:00:00 2001 From: Daniel Hammer Date: Sun, 15 Jan 2023 15:17:00 +0100 Subject: [PATCH 07/24] "Spell-Jacking" mitigation ~ prevent sensitive data leak from spell checker. @see https://www.otto-js.com/news/article/chrome-and-edge-enhanced-spellcheck-features-expose-pii-even-your-passwords --- src/static/templates/admin/settings.hbs | 4 ++-- src/static/templates/admin/users.hbs | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/src/static/templates/admin/settings.hbs b/src/static/templates/admin/settings.hbs index e3874335..50cd1a75 100644 --- a/src/static/templates/admin/settings.hbs +++ b/src/static/templates/admin/settings.hbs @@ -47,7 +47,7 @@
- +
Please provide a valid email address
@@ -85,7 +85,7 @@ {{else}} - + {{#case type "password"}} {{/case}} diff --git a/src/static/templates/admin/users.hbs b/src/static/templates/admin/users.hbs index 3dbee11c..933c939a 100644 --- a/src/static/templates/admin/users.hbs +++ b/src/static/templates/admin/users.hbs @@ -96,7 +96,7 @@ Email:
- +
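A minimal sketch of the mitigation this patch describes, assuming it adds the standard spellcheck="false" attribute to the sensitive input fields in settings.hbs and users.hbs; the field id and classes below are placeholders for illustration, not the exact markup from those templates:

<!-- before: enhanced spell-check in Chrome/Edge may transmit the field's contents to a remote service -->
<input type="text" class="form-control" id="example-sensitive-field">
<!-- after (hypothetical example): spell-checking disabled on the sensitive field -->
<input type="text" class="form-control" id="example-sensitive-field" spellcheck="false">

Password inputs are normally exempt from spell-checking, but the admin pages can toggle them to type="text" via the show/hide button, which is presumably why plain text inputs are covered by the attribute as well.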
From c9ed9aa73382adcc37e0c7bf59a06f72c8774281 Mon Sep 17 00:00:00 2001 From: BlackDex Date: Tue, 24 Jan 2023 23:31:38 +0100 Subject: [PATCH 08/24] Fix Javascript issue on non sqlite databases When a non sqlite database is used, loading the admin interface fails because the backup button is not generated. This PR is solves it by checking if the elements are valid. Also made some other changes and fixed some eslint errors. Showing `_post` errors is better now. Update jquery to latest version. Fixes #3166 --- src/api/web.rs | 4 +- src/static/scripts/admin.css | 21 +++-- src/static/scripts/admin.js | 26 ++++-- src/static/scripts/admin_diagnostics.js | 22 +++-- src/static/scripts/admin_organizations.js | 26 +++++- src/static/scripts/admin_settings.js | 65 ++++++++----- src/static/scripts/admin_users.js | 91 ++++++++++++------- ...ery-3.6.2.slim.js => jquery-3.6.3.slim.js} | 17 ++-- src/static/templates/admin/organizations.hbs | 12 +-- src/static/templates/admin/users.hbs | 14 +-- 10 files changed, 190 insertions(+), 108 deletions(-) rename src/static/scripts/{jquery-3.6.2.slim.js => jquery-3.6.3.slim.js} (99%) diff --git a/src/api/web.rs b/src/api/web.rs index 6e3921ed..7f9a77da 100644 --- a/src/api/web.rs +++ b/src/api/web.rs @@ -118,8 +118,8 @@ pub fn static_files(filename: String) -> Result<(ContentType, &'static [u8]), Er "jdenticon.js" => Ok((ContentType::JavaScript, include_bytes!("../static/scripts/jdenticon.js"))), "datatables.js" => Ok((ContentType::JavaScript, include_bytes!("../static/scripts/datatables.js"))), "datatables.css" => Ok((ContentType::CSS, include_bytes!("../static/scripts/datatables.css"))), - "jquery-3.6.2.slim.js" => { - Ok((ContentType::JavaScript, include_bytes!("../static/scripts/jquery-3.6.2.slim.js"))) + "jquery-3.6.3.slim.js" => { + Ok((ContentType::JavaScript, include_bytes!("../static/scripts/jquery-3.6.3.slim.js"))) } _ => err!(format!("Static file not found: {filename}")), } diff --git a/src/static/scripts/admin.css b/src/static/scripts/admin.css index d77b5372..67f2c00d 100644 --- a/src/static/scripts/admin.css +++ b/src/static/scripts/admin.css @@ -18,24 +18,31 @@ img { border: var(--bs-alert-border); } +#users-table .vw-account-details { + min-width: 250px; +} #users-table .vw-created-at, #users-table .vw-last-active { - width: 85px; - min-width: 70px; + min-width: 85px; + max-width: 85px; } -#users-table .vw-items { - width: 35px; +#users-table .vw-items, #orgs-table .vw-items, #orgs-table .vw-users { min-width: 35px; + max-width: 40px; } -#users-table .vw-organizations { - min-width: 120px; +#users-table .vw-attachments, #orgs-table .vw-attachments { + min-width: 100px; + max-width: 130px; } #users-table .vw-actions, #orgs-table .vw-actions { - width: 130px; min-width: 130px; + max-width: 130px; } #users-table .vw-org-cell { max-height: 120px; } +#orgs-table .vw-org-details { + min-width: 285px; +} #support-string { height: 16rem; diff --git a/src/static/scripts/admin.js b/src/static/scripts/admin.js index 7849ac19..7408c955 100644 --- a/src/static/scripts/admin.js +++ b/src/static/scripts/admin.js @@ -1,4 +1,6 @@ "use strict"; +/* eslint-env es2017, browser */ +/* exported BASE_URL, _post */ function getBaseUrl() { // If the base URL is `https://vaultwarden.example.com/base/path/`, @@ -26,6 +28,8 @@ function msg(text, reload_page = true) { } function _post(url, successMsg, errMsg, body, reload_page = true) { + let respStatus; + let respStatusText; fetch(url, { method: "POST", body: body, @@ -33,22 +37,30 @@ function _post(url, successMsg, errMsg, body, 
reload_page = true) { credentials: "same-origin", headers: { "Content-Type": "application/json" } }).then( resp => { - if (resp.ok) { msg(successMsg, reload_page); return Promise.reject({error: false}); } - const respStatus = resp.status; - const respStatusText = resp.statusText; + if (resp.ok) { + msg(successMsg, reload_page); + // Abuse the catch handler by setting error to false and continue + return Promise.reject({error: false}); + } + respStatus = resp.status; + respStatusText = resp.statusText; return resp.text(); }).then( respText => { try { const respJson = JSON.parse(respText); - return respJson ? respJson.ErrorModel.Message : "Unknown error"; + if (respJson.ErrorModel && respJson.ErrorModel.Message) { + return respJson.ErrorModel.Message; + } else { + return Promise.reject({body:`${respStatus} - ${respStatusText}\n\nUnknown error`, error: true}); + } } catch (e) { - return Promise.reject({body:respStatus + " - " + respStatusText, error: true}); + return Promise.reject({body:`${respStatus} - ${respStatusText}\n\n[Catch] ${e}`, error: true}); } }).then( apiMsg => { - msg(errMsg + "\n" + apiMsg, reload_page); + msg(`${errMsg}\n${apiMsg}`, reload_page); }).catch( e => { if (e.error === false) { return true; } - else { msg(errMsg + "\n" + e.body, reload_page); } + else { msg(`${errMsg}\n${e.body}`, reload_page); } }); } diff --git a/src/static/scripts/admin_diagnostics.js b/src/static/scripts/admin_diagnostics.js index 84a7ecc5..a7a574fc 100644 --- a/src/static/scripts/admin_diagnostics.js +++ b/src/static/scripts/admin_diagnostics.js @@ -1,4 +1,6 @@ "use strict"; +/* eslint-env es2017, browser */ +/* global BASE_URL:readable, BSN:readable */ var dnsCheck = false; var timeCheck = false; @@ -65,7 +67,7 @@ function checkVersions(platform, installed, latest, commit=null) { // ================================ // Generate support string to be pasted on github or the forum -async function generateSupportString(dj) { +async function generateSupportString(event, dj) { event.preventDefault(); event.stopPropagation(); @@ -114,7 +116,7 @@ async function generateSupportString(dj) { document.getElementById("copy-support").classList.remove("d-none"); } -function copyToClipboard() { +function copyToClipboard(event) { event.preventDefault(); event.stopPropagation(); @@ -208,12 +210,18 @@ function init(dj) { } // onLoad events -document.addEventListener("DOMContentLoaded", (/*event*/) => { +document.addEventListener("DOMContentLoaded", (event) => { const diag_json = JSON.parse(document.getElementById("diagnostics_json").innerText); init(diag_json); - document.getElementById("gen-support").addEventListener("click", () => { - generateSupportString(diag_json); - }); - document.getElementById("copy-support").addEventListener("click", copyToClipboard); + const btnGenSupport = document.getElementById("gen-support"); + if (btnGenSupport) { + btnGenSupport.addEventListener("click", () => { + generateSupportString(event, diag_json); + }); + } + const btnCopySupport = document.getElementById("copy-support"); + if (btnCopySupport) { + btnCopySupport.addEventListener("click", copyToClipboard); + } }); \ No newline at end of file diff --git a/src/static/scripts/admin_organizations.js b/src/static/scripts/admin_organizations.js index ae15e2fd..db4037b4 100644 --- a/src/static/scripts/admin_organizations.js +++ b/src/static/scripts/admin_organizations.js @@ -1,6 +1,8 @@ "use strict"; +/* eslint-env es2017, browser, jquery */ +/* global _post:readable, BASE_URL:readable, reload:readable, jdenticon:readable */ 
-function deleteOrganization() { +function deleteOrganization(event) { event.preventDefault(); event.stopPropagation(); const org_uuid = event.target.dataset.vwOrgUuid; @@ -28,9 +30,22 @@ function deleteOrganization() { } } +function initActions() { + document.querySelectorAll("button[vw-delete-organization]").forEach(btn => { + btn.addEventListener("click", deleteOrganization); + }); + + if (jdenticon) { + jdenticon(); + } +} + // onLoad events document.addEventListener("DOMContentLoaded", (/*event*/) => { jQuery("#orgs-table").DataTable({ + "drawCallback": function() { + initActions(); + }, "stateSave": true, "responsive": true, "lengthMenu": [ @@ -46,9 +61,10 @@ document.addEventListener("DOMContentLoaded", (/*event*/) => { }); // Add click events for organization actions - document.querySelectorAll("button[vw-delete-organization]").forEach(btn => { - btn.addEventListener("click", deleteOrganization); - }); + initActions(); - document.getElementById("reload").addEventListener("click", reload); + const btnReload = document.getElementById("reload"); + if (btnReload) { + btnReload.addEventListener("click", reload); + } }); \ No newline at end of file diff --git a/src/static/scripts/admin_settings.js b/src/static/scripts/admin_settings.js index 4f248cbd..2e36795f 100644 --- a/src/static/scripts/admin_settings.js +++ b/src/static/scripts/admin_settings.js @@ -1,6 +1,8 @@ "use strict"; +/* eslint-env es2017, browser */ +/* global _post:readable, BASE_URL:readable */ -function smtpTest() { +function smtpTest(event) { event.preventDefault(); event.stopPropagation(); if (formHasChanges(config_form)) { @@ -41,7 +43,7 @@ function getFormData() { return data; } -function saveConfig() { +function saveConfig(event) { const data = JSON.stringify(getFormData()); _post(`${BASE_URL}/admin/config/`, "Config saved correctly", @@ -51,7 +53,7 @@ function saveConfig() { event.preventDefault(); } -function deleteConf() { +function deleteConf(event) { event.preventDefault(); event.stopPropagation(); const input = prompt( @@ -68,7 +70,7 @@ function deleteConf() { } } -function backupDatabase() { +function backupDatabase(event) { event.preventDefault(); event.stopPropagation(); _post(`${BASE_URL}/admin/config/backup_db`, @@ -94,24 +96,26 @@ function formHasChanges(form) { // This function will prevent submitting a from when someone presses enter. function preventFormSubmitOnEnter(form) { - form.onkeypress = function(e) { - const key = e.charCode || e.keyCode || 0; - if (key == 13) { - e.preventDefault(); - } - }; + if (form) { + form.addEventListener("keypress", (event) => { + if (event.key == "Enter") { + event.preventDefault(); + } + }); + } } // This function will hook into the smtp-test-email input field and will call the smtpTest() function when enter is pressed. 
function submitTestEmailOnEnter() { const smtp_test_email_input = document.getElementById("smtp-test-email"); - smtp_test_email_input.onkeypress = function(e) { - const key = e.charCode || e.keyCode || 0; - if (key == 13) { - e.preventDefault(); - smtpTest(); - } - }; + if (smtp_test_email_input) { + smtp_test_email_input.addEventListener("keypress", (event) => { + if (event.key == "Enter") { + event.preventDefault(); + smtpTest(event); + } + }); + } } // Colorize some settings which are high risk @@ -124,11 +128,11 @@ function colorRiskSettings() { }); } -function toggleVis(evt) { +function toggleVis(event) { event.preventDefault(); event.stopPropagation(); - const elem = document.getElementById(evt.target.dataset.vwPwToggle); + const elem = document.getElementById(event.target.dataset.vwPwToggle); const type = elem.getAttribute("type"); if (type === "text") { elem.setAttribute("type", "password"); @@ -146,9 +150,11 @@ function masterCheck(check_id, inputs_query) { } const checkbox = document.getElementById(check_id); - const onChange = onChanged(checkbox, inputs_query); - onChange(); // Trigger the event initially - checkbox.addEventListener("change", onChange); + if (checkbox) { + const onChange = onChanged(checkbox, inputs_query); + onChange(); // Trigger the event initially + checkbox.addEventListener("change", onChange); + } } const config_form = document.getElementById("config-form"); @@ -172,9 +178,18 @@ document.addEventListener("DOMContentLoaded", (/*event*/) => { password_toggle_btn.addEventListener("click", toggleVis); }); - document.getElementById("backupDatabase").addEventListener("click", backupDatabase); - document.getElementById("deleteConf").addEventListener("click", deleteConf); - document.getElementById("smtpTest").addEventListener("click", smtpTest); + const btnBackupDatabase = document.getElementById("backupDatabase"); + if (btnBackupDatabase) { + btnBackupDatabase.addEventListener("click", backupDatabase); + } + const btnDeleteConf = document.getElementById("deleteConf"); + if (btnDeleteConf) { + btnDeleteConf.addEventListener("click", deleteConf); + } + const btnSmtpTest = document.getElementById("smtpTest"); + if (btnSmtpTest) { + btnSmtpTest.addEventListener("click", smtpTest); + } config_form.addEventListener("submit", saveConfig); }); \ No newline at end of file diff --git a/src/static/scripts/admin_users.js b/src/static/scripts/admin_users.js index 8f7ddf20..b4da0f97 100644 --- a/src/static/scripts/admin_users.js +++ b/src/static/scripts/admin_users.js @@ -1,6 +1,8 @@ "use strict"; +/* eslint-env es2017, browser, jquery */ +/* global _post:readable, BASE_URL:readable, reload:readable, jdenticon:readable */ -function deleteUser() { +function deleteUser(event) { event.preventDefault(); event.stopPropagation(); const id = event.target.parentNode.dataset.vwUserUuid; @@ -22,7 +24,7 @@ function deleteUser() { } } -function remove2fa() { +function remove2fa(event) { event.preventDefault(); event.stopPropagation(); const id = event.target.parentNode.dataset.vwUserUuid; @@ -36,7 +38,7 @@ function remove2fa() { ); } -function deauthUser() { +function deauthUser(event) { event.preventDefault(); event.stopPropagation(); const id = event.target.parentNode.dataset.vwUserUuid; @@ -50,7 +52,7 @@ function deauthUser() { ); } -function disableUser() { +function disableUser(event) { event.preventDefault(); event.stopPropagation(); const id = event.target.parentNode.dataset.vwUserUuid; @@ -68,7 +70,7 @@ function disableUser() { } } -function enableUser() { +function 
enableUser(event) { event.preventDefault(); event.stopPropagation(); const id = event.target.parentNode.dataset.vwUserUuid; @@ -86,7 +88,7 @@ function enableUser() { } } -function updateRevisions() { +function updateRevisions(event) { event.preventDefault(); event.stopPropagation(); _post(`${BASE_URL}/admin/users/update_revision`, @@ -95,7 +97,7 @@ function updateRevisions() { ); } -function inviteUser() { +function inviteUser(event) { event.preventDefault(); event.stopPropagation(); const email = document.getElementById("inviteEmail"); @@ -182,7 +184,7 @@ userOrgTypeDialog.addEventListener("hide.bs.modal", function() { document.getElementById("userOrgTypeOrgUuid").value = ""; }, false); -function updateUserOrgType() { +function updateUserOrgType(event) { event.preventDefault(); event.stopPropagation(); @@ -195,26 +197,7 @@ function updateUserOrgType() { ); } -// onLoad events -document.addEventListener("DOMContentLoaded", (/*event*/) => { - jQuery("#users-table").DataTable({ - "stateSave": true, - "responsive": true, - "lengthMenu": [ - [-1, 5, 10, 25, 50], - ["All", 5, 10, 25, 50] - ], - "pageLength": -1, // Default show all - "columnDefs": [{ - "targets": [1, 2], - "type": "date-iso" - }, { - "targets": 6, - "searchable": false, - "orderable": false - }] - }); - +function initUserTable() { // Color all the org buttons per type document.querySelectorAll("button[data-vw-org-type]").forEach(function(e) { const orgType = ORG_TYPES[e.dataset.vwOrgType]; @@ -222,7 +205,6 @@ document.addEventListener("DOMContentLoaded", (/*event*/) => { e.title = orgType.name; }); - // Add click events for user actions document.querySelectorAll("button[vw-remove2fa]").forEach(btn => { btn.addEventListener("click", remove2fa); }); @@ -239,8 +221,51 @@ document.addEventListener("DOMContentLoaded", (/*event*/) => { btn.addEventListener("click", enableUser); }); - document.getElementById("updateRevisions").addEventListener("click", updateRevisions); - document.getElementById("reload").addEventListener("click", reload); - document.getElementById("userOrgTypeForm").addEventListener("submit", updateUserOrgType); - document.getElementById("inviteUserForm").addEventListener("submit", inviteUser); + if (jdenticon) { + jdenticon(); + } +} + +// onLoad events +document.addEventListener("DOMContentLoaded", (/*event*/) => { + jQuery("#users-table").DataTable({ + "drawCallback": function() { + initUserTable(); + }, + "stateSave": true, + "responsive": true, + "lengthMenu": [ + [-1, 2, 5, 10, 25, 50], + ["All", 2, 5, 10, 25, 50] + ], + "pageLength": 2, // Default show all + "columnDefs": [{ + "targets": [1, 2], + "type": "date-iso" + }, { + "targets": 6, + "searchable": false, + "orderable": false + }] + }); + + // Add click events for user actions + initUserTable(); + + const btnUpdateRevisions = document.getElementById("updateRevisions"); + if (btnUpdateRevisions) { + btnUpdateRevisions.addEventListener("click", updateRevisions); + } + const btnReload = document.getElementById("reload"); + if (btnReload) { + btnReload.addEventListener("click", reload); + } + const btnUserOrgTypeForm = document.getElementById("userOrgTypeForm"); + if (btnUserOrgTypeForm) { + btnUserOrgTypeForm.addEventListener("submit", updateUserOrgType); + } + const btnInviteUserForm = document.getElementById("inviteUserForm"); + if (btnInviteUserForm) { + btnInviteUserForm.addEventListener("submit", inviteUser); + } }); \ No newline at end of file diff --git a/src/static/scripts/jquery-3.6.2.slim.js b/src/static/scripts/jquery-3.6.3.slim.js similarity 
index 99% rename from src/static/scripts/jquery-3.6.2.slim.js rename to src/static/scripts/jquery-3.6.3.slim.js index 4c41f3eb..d7e1a94c 100644 --- a/src/static/scripts/jquery-3.6.2.slim.js +++ b/src/static/scripts/jquery-3.6.3.slim.js @@ -1,5 +1,5 @@ /*! - * jQuery JavaScript Library v3.6.2 -ajax,-ajax/jsonp,-ajax/load,-ajax/script,-ajax/var/location,-ajax/var/nonce,-ajax/var/rquery,-ajax/xhr,-manipulation/_evalUrl,-deprecated/ajax-event-alias,-effects,-effects/Tween,-effects/animatedSelector + * jQuery JavaScript Library v3.6.3 -ajax,-ajax/jsonp,-ajax/load,-ajax/script,-ajax/var/location,-ajax/var/nonce,-ajax/var/rquery,-ajax/xhr,-manipulation/_evalUrl,-deprecated/ajax-event-alias,-effects,-effects/Tween,-effects/animatedSelector * https://jquery.com/ * * Includes Sizzle.js @@ -9,7 +9,7 @@ * Released under the MIT license * https://jquery.org/license * - * Date: 2022-12-13T14:56Z + * Date: 2022-12-20T21:28Z */ ( function( global, factory ) { @@ -151,7 +151,7 @@ function toType( obj ) { var - version = "3.6.2 -ajax,-ajax/jsonp,-ajax/load,-ajax/script,-ajax/var/location,-ajax/var/nonce,-ajax/var/rquery,-ajax/xhr,-manipulation/_evalUrl,-deprecated/ajax-event-alias,-effects,-effects/Tween,-effects/animatedSelector", + version = "3.6.3 -ajax,-ajax/jsonp,-ajax/load,-ajax/script,-ajax/var/location,-ajax/var/nonce,-ajax/var/rquery,-ajax/xhr,-manipulation/_evalUrl,-deprecated/ajax-event-alias,-effects,-effects/Tween,-effects/animatedSelector", // Define a local copy of jQuery jQuery = function( selector, context ) { @@ -522,14 +522,14 @@ function isArrayLike( obj ) { } var Sizzle = /*! - * Sizzle CSS Selector Engine v2.3.8 + * Sizzle CSS Selector Engine v2.3.9 * https://sizzlejs.com/ * * Copyright JS Foundation and other contributors * Released under the MIT license * https://js.foundation/ * - * Date: 2022-11-16 + * Date: 2022-12-19 */ ( function( window ) { var i, @@ -890,7 +890,7 @@ function Sizzle( selector, context, results, seed ) { if ( support.cssSupportsSelector && // eslint-disable-next-line no-undef - !CSS.supports( "selector(" + newSelector + ")" ) ) { + !CSS.supports( "selector(:is(" + newSelector + "))" ) ) { // Support: IE 11+ // Throw to get to the same code path as an error directly in qSA. @@ -1492,9 +1492,8 @@ setDocument = Sizzle.setDocument = function( node ) { // `:has()` uses a forgiving selector list as an argument so our regular // `try-catch` mechanism fails to catch `:has()` with arguments not supported // natively like `:has(:contains("Foo"))`. Where supported & spec-compliant, - // we now use `CSS.supports("selector(SELECTOR_TO_BE_TESTED)")` but outside - // that, let's mark `:has` as buggy to always use jQuery traversal for - // `:has()`. + // we now use `CSS.supports("selector(:is(SELECTOR_TO_BE_TESTED))")`, but + // outside that we mark `:has` as buggy. rbuggyQSA.push( ":has" ); } diff --git a/src/static/templates/admin/organizations.hbs b/src/static/templates/admin/organizations.hbs index eef6ae1a..d95370c4 100644 --- a/src/static/templates/admin/organizations.hbs +++ b/src/static/templates/admin/organizations.hbs @@ -5,10 +5,10 @@ - - - - + + + + @@ -38,7 +38,7 @@ {{/if}} {{/each}} @@ -53,7 +53,7 @@ - + diff --git a/src/static/templates/admin/users.hbs b/src/static/templates/admin/users.hbs index 3dbee11c..b08df02e 100644 --- a/src/static/templates/admin/users.hbs +++ b/src/static/templates/admin/users.hbs @@ -5,7 +5,7 @@
OrganizationUsersItemsAttachmentsOrganizationUsersItemsAttachments Actions
- +
- + @@ -63,14 +63,14 @@ @@ -137,7 +137,7 @@ - + From a2162f4d69eda9f836497ef137cbc3c2d00cd86b Mon Sep 17 00:00:00 2001 From: Jeremy Lin Date: Wed, 18 Jan 2023 21:50:29 -0800 Subject: [PATCH 09/24] Allow listening on privileged ports (below 1024) as non-root This is done by running `setcap cap_net_bind_service=+ep` on the executable in the build stage (doing it in the runtime stage creates an extra copy of the executable that bloats the image). This only works when using the BuildKit-based builder, since the `COPY` instruction doesn't copy capabilities on the legacy builder. --- docker/Dockerfile.j2 | 47 ++++++++++++++----------- docker/amd64/Dockerfile | 13 +++---- docker/amd64/Dockerfile.alpine | 10 +++--- docker/amd64/Dockerfile.buildkit | 18 +++++----- docker/amd64/Dockerfile.buildkit.alpine | 15 ++++---- docker/arm64/Dockerfile | 21 +++++------ docker/arm64/Dockerfile.alpine | 10 +++--- docker/arm64/Dockerfile.buildkit | 26 +++++++------- docker/arm64/Dockerfile.buildkit.alpine | 15 ++++---- docker/armv6/Dockerfile | 21 +++++------ docker/armv6/Dockerfile.alpine | 10 +++--- docker/armv6/Dockerfile.buildkit | 26 +++++++------- docker/armv6/Dockerfile.buildkit.alpine | 15 ++++---- docker/armv7/Dockerfile | 21 +++++------ docker/armv7/Dockerfile.alpine | 10 +++--- docker/armv7/Dockerfile.buildkit | 26 +++++++------- docker/armv7/Dockerfile.buildkit.alpine | 15 ++++---- 17 files changed, 163 insertions(+), 156 deletions(-) diff --git a/docker/Dockerfile.j2 b/docker/Dockerfile.j2 index 8c5157f4..22acfdf4 100644 --- a/docker/Dockerfile.j2 +++ b/docker/Dockerfile.j2 @@ -83,8 +83,6 @@ FROM vaultwarden/web-vault@{{ vault_image_digest }} as vault ########################## BUILD IMAGE ########################## FROM {{ build_stage_base_image }} as build - - # Build time options to avoid dpkg warnings and help with reproducible builds. ENV DEBIAN_FRONTEND=noninteractive \ LANG=C.UTF-8 \ @@ -93,7 +91,6 @@ ENV DEBIAN_FRONTEND=noninteractive \ CARGO_HOME="/root/.cargo" \ USER="root" - # Create CARGO_HOME folder and don't download rust docs RUN {{ mount_rust_cache -}} mkdir -pv "${CARGO_HOME}" \ && rustup set profile minimal @@ -104,20 +101,20 @@ RUN {{ mount_rust_cache -}} mkdir -pv "${CARGO_HOME}" \ ENV RUSTFLAGS='-Clink-arg=/usr/local/musl/{{ package_arch_target }}/lib/libatomic.a' {% endif %} {% elif "arm" in target_file %} -# -# Install required build libs for {{ package_arch_name }} architecture. 
+# Install build dependencies for the {{ package_arch_name }} architecture RUN dpkg --add-architecture {{ package_arch_name }} \ && apt-get update \ && apt-get install -y \ --no-install-recommends \ - libssl-dev{{ package_arch_prefix }} \ + gcc-{{ package_cross_compiler }} \ libc6-dev{{ package_arch_prefix }} \ - libpq5{{ package_arch_prefix }} \ - libpq-dev{{ package_arch_prefix }} \ - libmariadb3{{ package_arch_prefix }} \ + libcap2-bin \ libmariadb-dev{{ package_arch_prefix }} \ libmariadb-dev-compat{{ package_arch_prefix }} \ - gcc-{{ package_cross_compiler }} \ + libmariadb3{{ package_arch_prefix }} \ + libpq-dev{{ package_arch_prefix }} \ + libpq5{{ package_arch_prefix }} \ + libssl-dev{{ package_arch_prefix }} \ # # Make sure cargo has the right target config && echo '[target.{{ package_arch_target }}]' >> "${CARGO_HOME}/config" \ @@ -129,16 +126,14 @@ ENV CC_{{ package_arch_target | replace("-", "_") }}="/usr/bin/{{ package_cross_ CROSS_COMPILE="1" \ OPENSSL_INCLUDE_DIR="/usr/include/{{ package_cross_compiler }}" \ OPENSSL_LIB_DIR="/usr/lib/{{ package_cross_compiler }}" - {% elif "amd64" in target_file %} -# Install DB packages +# Install build dependencies RUN apt-get update \ && apt-get install -y \ --no-install-recommends \ - libmariadb-dev{{ package_arch_prefix }} \ - libpq-dev{{ package_arch_prefix }} \ - && apt-get clean \ - && rm -rf /var/lib/apt/lists/* + libcap2-bin \ + libmariadb-dev \ + libpq-dev {% endif %} # Creates a dummy project used to grab dependencies @@ -179,6 +174,18 @@ RUN touch src/main.rs # your actual source files being built RUN {{ mount_rust_cache -}} cargo build --features ${DB} --release{{ package_arch_target_param }} +{% if "buildkit" in target_file %} +# Add the `cap_net_bind_service` capability to allow listening on +# privileged (< 1024) ports even when running as a non-root user. +# This is only done if building with BuildKit; with the legacy +# builder, the `COPY` instruction doesn't carry over capabilities. +{% if package_arch_target is defined %} +RUN setcap cap_net_bind_service=+ep target/{{ package_arch_target }}/release/vaultwarden +{% else %} +RUN setcap cap_net_bind_service=+ep target/release/vaultwarden +{% endif %} +{% endif %} + ######################## RUNTIME IMAGE ######################## # Create a new stage with a minimal image # because we already have a binary built @@ -200,18 +207,18 @@ RUN [ "cross-build-start" ] RUN mkdir /data \ {% if "alpine" in runtime_stage_base_image %} && apk add --no-cache \ - openssl \ - tzdata \ + ca-certificates \ curl \ - ca-certificates + openssl \ + tzdata {% else %} && apt-get update && apt-get install -y \ --no-install-recommends \ - openssl \ ca-certificates \ curl \ libmariadb-dev-compat \ libpq5 \ + openssl \ && apt-get clean \ && rm -rf /var/lib/apt/lists/* {% endif %} diff --git a/docker/amd64/Dockerfile b/docker/amd64/Dockerfile index 281146f7..00983f50 100644 --- a/docker/amd64/Dockerfile +++ b/docker/amd64/Dockerfile @@ -29,8 +29,6 @@ FROM vaultwarden/web-vault@sha256:d5f71fb05c4b87935bf51d84140db0f8716cabfe2974fb ########################## BUILD IMAGE ########################## FROM rust:1.66-bullseye as build - - # Build time options to avoid dpkg warnings and help with reproducible builds. 
ENV DEBIAN_FRONTEND=noninteractive \ LANG=C.UTF-8 \ @@ -39,19 +37,17 @@ ENV DEBIAN_FRONTEND=noninteractive \ CARGO_HOME="/root/.cargo" \ USER="root" - # Create CARGO_HOME folder and don't download rust docs RUN mkdir -pv "${CARGO_HOME}" \ && rustup set profile minimal -# Install DB packages +# Install build dependencies RUN apt-get update \ && apt-get install -y \ --no-install-recommends \ + libcap2-bin \ libmariadb-dev \ - libpq-dev \ - && apt-get clean \ - && rm -rf /var/lib/apt/lists/* + libpq-dev # Creates a dummy project used to grab dependencies RUN USER=root cargo new --bin /app @@ -83,6 +79,7 @@ RUN touch src/main.rs # your actual source files being built RUN cargo build --features ${DB} --release + ######################## RUNTIME IMAGE ######################## # Create a new stage with a minimal image # because we already have a binary built @@ -97,11 +94,11 @@ ENV ROCKET_PROFILE="release" \ RUN mkdir /data \ && apt-get update && apt-get install -y \ --no-install-recommends \ - openssl \ ca-certificates \ curl \ libmariadb-dev-compat \ libpq5 \ + openssl \ && apt-get clean \ && rm -rf /var/lib/apt/lists/* diff --git a/docker/amd64/Dockerfile.alpine b/docker/amd64/Dockerfile.alpine index 6dd624b6..cb38bf8b 100644 --- a/docker/amd64/Dockerfile.alpine +++ b/docker/amd64/Dockerfile.alpine @@ -29,8 +29,6 @@ FROM vaultwarden/web-vault@sha256:d5f71fb05c4b87935bf51d84140db0f8716cabfe2974fb ########################## BUILD IMAGE ########################## FROM blackdex/rust-musl:x86_64-musl-stable-1.66.1 as build - - # Build time options to avoid dpkg warnings and help with reproducible builds. ENV DEBIAN_FRONTEND=noninteractive \ LANG=C.UTF-8 \ @@ -39,7 +37,6 @@ ENV DEBIAN_FRONTEND=noninteractive \ CARGO_HOME="/root/.cargo" \ USER="root" - # Create CARGO_HOME folder and don't download rust docs RUN mkdir -pv "${CARGO_HOME}" \ && rustup set profile minimal @@ -77,6 +74,7 @@ RUN touch src/main.rs # your actual source files being built RUN cargo build --features ${DB} --release --target=x86_64-unknown-linux-musl + ######################## RUNTIME IMAGE ######################## # Create a new stage with a minimal image # because we already have a binary built @@ -92,10 +90,10 @@ ENV ROCKET_PROFILE="release" \ # Create data folder and Install needed libraries RUN mkdir /data \ && apk add --no-cache \ - openssl \ - tzdata \ + ca-certificates \ curl \ - ca-certificates + openssl \ + tzdata VOLUME /data diff --git a/docker/amd64/Dockerfile.buildkit b/docker/amd64/Dockerfile.buildkit index 12e85211..8330958e 100644 --- a/docker/amd64/Dockerfile.buildkit +++ b/docker/amd64/Dockerfile.buildkit @@ -29,8 +29,6 @@ FROM vaultwarden/web-vault@sha256:d5f71fb05c4b87935bf51d84140db0f8716cabfe2974fb ########################## BUILD IMAGE ########################## FROM rust:1.66-bullseye as build - - # Build time options to avoid dpkg warnings and help with reproducible builds. 
ENV DEBIAN_FRONTEND=noninteractive \ LANG=C.UTF-8 \ @@ -39,19 +37,17 @@ ENV DEBIAN_FRONTEND=noninteractive \ CARGO_HOME="/root/.cargo" \ USER="root" - # Create CARGO_HOME folder and don't download rust docs RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \ && rustup set profile minimal -# Install DB packages +# Install build dependencies RUN apt-get update \ && apt-get install -y \ --no-install-recommends \ + libcap2-bin \ libmariadb-dev \ - libpq-dev \ - && apt-get clean \ - && rm -rf /var/lib/apt/lists/* + libpq-dev # Creates a dummy project used to grab dependencies RUN USER=root cargo new --bin /app @@ -83,6 +79,12 @@ RUN touch src/main.rs # your actual source files being built RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release +# Add the `cap_net_bind_service` capability to allow listening on +# privileged (< 1024) ports even when running as a non-root user. +# This is only done if building with BuildKit; with the legacy +# builder, the `COPY` instruction doesn't carry over capabilities. +RUN setcap cap_net_bind_service=+ep target/release/vaultwarden + ######################## RUNTIME IMAGE ######################## # Create a new stage with a minimal image # because we already have a binary built @@ -97,11 +99,11 @@ ENV ROCKET_PROFILE="release" \ RUN mkdir /data \ && apt-get update && apt-get install -y \ --no-install-recommends \ - openssl \ ca-certificates \ curl \ libmariadb-dev-compat \ libpq5 \ + openssl \ && apt-get clean \ && rm -rf /var/lib/apt/lists/* diff --git a/docker/amd64/Dockerfile.buildkit.alpine b/docker/amd64/Dockerfile.buildkit.alpine index ba45c39b..eb551e03 100644 --- a/docker/amd64/Dockerfile.buildkit.alpine +++ b/docker/amd64/Dockerfile.buildkit.alpine @@ -29,8 +29,6 @@ FROM vaultwarden/web-vault@sha256:d5f71fb05c4b87935bf51d84140db0f8716cabfe2974fb ########################## BUILD IMAGE ########################## FROM blackdex/rust-musl:x86_64-musl-stable-1.66.1 as build - - # Build time options to avoid dpkg warnings and help with reproducible builds. ENV DEBIAN_FRONTEND=noninteractive \ LANG=C.UTF-8 \ @@ -39,7 +37,6 @@ ENV DEBIAN_FRONTEND=noninteractive \ CARGO_HOME="/root/.cargo" \ USER="root" - # Create CARGO_HOME folder and don't download rust docs RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \ && rustup set profile minimal @@ -77,6 +74,12 @@ RUN touch src/main.rs # your actual source files being built RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=x86_64-unknown-linux-musl +# Add the `cap_net_bind_service` capability to allow listening on +# privileged (< 1024) ports even when running as a non-root user. +# This is only done if building with BuildKit; with the legacy +# builder, the `COPY` instruction doesn't carry over capabilities. 
+RUN setcap cap_net_bind_service=+ep target/x86_64-unknown-linux-musl/release/vaultwarden + ######################## RUNTIME IMAGE ######################## # Create a new stage with a minimal image # because we already have a binary built @@ -92,10 +95,10 @@ ENV ROCKET_PROFILE="release" \ # Create data folder and Install needed libraries RUN mkdir /data \ && apk add --no-cache \ - openssl \ - tzdata \ + ca-certificates \ curl \ - ca-certificates + openssl \ + tzdata VOLUME /data diff --git a/docker/arm64/Dockerfile b/docker/arm64/Dockerfile index 093afadd..0087b8ea 100644 --- a/docker/arm64/Dockerfile +++ b/docker/arm64/Dockerfile @@ -29,8 +29,6 @@ FROM vaultwarden/web-vault@sha256:d5f71fb05c4b87935bf51d84140db0f8716cabfe2974fb ########################## BUILD IMAGE ########################## FROM rust:1.66-bullseye as build - - # Build time options to avoid dpkg warnings and help with reproducible builds. ENV DEBIAN_FRONTEND=noninteractive \ LANG=C.UTF-8 \ @@ -39,25 +37,24 @@ ENV DEBIAN_FRONTEND=noninteractive \ CARGO_HOME="/root/.cargo" \ USER="root" - # Create CARGO_HOME folder and don't download rust docs RUN mkdir -pv "${CARGO_HOME}" \ && rustup set profile minimal -# -# Install required build libs for arm64 architecture. +# Install build dependencies for the arm64 architecture RUN dpkg --add-architecture arm64 \ && apt-get update \ && apt-get install -y \ --no-install-recommends \ - libssl-dev:arm64 \ + gcc-aarch64-linux-gnu \ libc6-dev:arm64 \ - libpq5:arm64 \ - libpq-dev:arm64 \ - libmariadb3:arm64 \ + libcap2-bin \ libmariadb-dev:arm64 \ libmariadb-dev-compat:arm64 \ - gcc-aarch64-linux-gnu \ + libmariadb3:arm64 \ + libpq-dev:arm64 \ + libpq5:arm64 \ + libssl-dev:arm64 \ # # Make sure cargo has the right target config && echo '[target.aarch64-unknown-linux-gnu]' >> "${CARGO_HOME}/config" \ @@ -70,7 +67,6 @@ ENV CC_aarch64_unknown_linux_gnu="/usr/bin/aarch64-linux-gnu-gcc" \ OPENSSL_INCLUDE_DIR="/usr/include/aarch64-linux-gnu" \ OPENSSL_LIB_DIR="/usr/lib/aarch64-linux-gnu" - # Creates a dummy project used to grab dependencies RUN USER=root cargo new --bin /app WORKDIR /app @@ -102,6 +98,7 @@ RUN touch src/main.rs # your actual source files being built RUN cargo build --features ${DB} --release --target=aarch64-unknown-linux-gnu + ######################## RUNTIME IMAGE ######################## # Create a new stage with a minimal image # because we already have a binary built @@ -117,11 +114,11 @@ RUN [ "cross-build-start" ] RUN mkdir /data \ && apt-get update && apt-get install -y \ --no-install-recommends \ - openssl \ ca-certificates \ curl \ libmariadb-dev-compat \ libpq5 \ + openssl \ && apt-get clean \ && rm -rf /var/lib/apt/lists/* diff --git a/docker/arm64/Dockerfile.alpine b/docker/arm64/Dockerfile.alpine index 83bf0745..139d1a31 100644 --- a/docker/arm64/Dockerfile.alpine +++ b/docker/arm64/Dockerfile.alpine @@ -29,8 +29,6 @@ FROM vaultwarden/web-vault@sha256:d5f71fb05c4b87935bf51d84140db0f8716cabfe2974fb ########################## BUILD IMAGE ########################## FROM blackdex/rust-musl:aarch64-musl-stable-1.66.1 as build - - # Build time options to avoid dpkg warnings and help with reproducible builds. 
ENV DEBIAN_FRONTEND=noninteractive \ LANG=C.UTF-8 \ @@ -39,7 +37,6 @@ ENV DEBIAN_FRONTEND=noninteractive \ CARGO_HOME="/root/.cargo" \ USER="root" - # Create CARGO_HOME folder and don't download rust docs RUN mkdir -pv "${CARGO_HOME}" \ && rustup set profile minimal @@ -77,6 +74,7 @@ RUN touch src/main.rs # your actual source files being built RUN cargo build --features ${DB} --release --target=aarch64-unknown-linux-musl + ######################## RUNTIME IMAGE ######################## # Create a new stage with a minimal image # because we already have a binary built @@ -93,10 +91,10 @@ RUN [ "cross-build-start" ] # Create data folder and Install needed libraries RUN mkdir /data \ && apk add --no-cache \ - openssl \ - tzdata \ + ca-certificates \ curl \ - ca-certificates + openssl \ + tzdata RUN [ "cross-build-end" ] diff --git a/docker/arm64/Dockerfile.buildkit b/docker/arm64/Dockerfile.buildkit index cdabd35c..e1f1e0d2 100644 --- a/docker/arm64/Dockerfile.buildkit +++ b/docker/arm64/Dockerfile.buildkit @@ -29,8 +29,6 @@ FROM vaultwarden/web-vault@sha256:d5f71fb05c4b87935bf51d84140db0f8716cabfe2974fb ########################## BUILD IMAGE ########################## FROM rust:1.66-bullseye as build - - # Build time options to avoid dpkg warnings and help with reproducible builds. ENV DEBIAN_FRONTEND=noninteractive \ LANG=C.UTF-8 \ @@ -39,25 +37,24 @@ ENV DEBIAN_FRONTEND=noninteractive \ CARGO_HOME="/root/.cargo" \ USER="root" - # Create CARGO_HOME folder and don't download rust docs RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \ && rustup set profile minimal -# -# Install required build libs for arm64 architecture. +# Install build dependencies for the arm64 architecture RUN dpkg --add-architecture arm64 \ && apt-get update \ && apt-get install -y \ --no-install-recommends \ - libssl-dev:arm64 \ + gcc-aarch64-linux-gnu \ libc6-dev:arm64 \ - libpq5:arm64 \ - libpq-dev:arm64 \ - libmariadb3:arm64 \ + libcap2-bin \ libmariadb-dev:arm64 \ libmariadb-dev-compat:arm64 \ - gcc-aarch64-linux-gnu \ + libmariadb3:arm64 \ + libpq-dev:arm64 \ + libpq5:arm64 \ + libssl-dev:arm64 \ # # Make sure cargo has the right target config && echo '[target.aarch64-unknown-linux-gnu]' >> "${CARGO_HOME}/config" \ @@ -70,7 +67,6 @@ ENV CC_aarch64_unknown_linux_gnu="/usr/bin/aarch64-linux-gnu-gcc" \ OPENSSL_INCLUDE_DIR="/usr/include/aarch64-linux-gnu" \ OPENSSL_LIB_DIR="/usr/lib/aarch64-linux-gnu" - # Creates a dummy project used to grab dependencies RUN USER=root cargo new --bin /app WORKDIR /app @@ -102,6 +98,12 @@ RUN touch src/main.rs # your actual source files being built RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=aarch64-unknown-linux-gnu +# Add the `cap_net_bind_service` capability to allow listening on +# privileged (< 1024) ports even when running as a non-root user. +# This is only done if building with BuildKit; with the legacy +# builder, the `COPY` instruction doesn't carry over capabilities. 
+RUN setcap cap_net_bind_service=+ep target/aarch64-unknown-linux-gnu/release/vaultwarden + ######################## RUNTIME IMAGE ######################## # Create a new stage with a minimal image # because we already have a binary built @@ -117,11 +119,11 @@ RUN [ "cross-build-start" ] RUN mkdir /data \ && apt-get update && apt-get install -y \ --no-install-recommends \ - openssl \ ca-certificates \ curl \ libmariadb-dev-compat \ libpq5 \ + openssl \ && apt-get clean \ && rm -rf /var/lib/apt/lists/* diff --git a/docker/arm64/Dockerfile.buildkit.alpine b/docker/arm64/Dockerfile.buildkit.alpine index 837a7a39..26d75edc 100644 --- a/docker/arm64/Dockerfile.buildkit.alpine +++ b/docker/arm64/Dockerfile.buildkit.alpine @@ -29,8 +29,6 @@ FROM vaultwarden/web-vault@sha256:d5f71fb05c4b87935bf51d84140db0f8716cabfe2974fb ########################## BUILD IMAGE ########################## FROM blackdex/rust-musl:aarch64-musl-stable-1.66.1 as build - - # Build time options to avoid dpkg warnings and help with reproducible builds. ENV DEBIAN_FRONTEND=noninteractive \ LANG=C.UTF-8 \ @@ -39,7 +37,6 @@ ENV DEBIAN_FRONTEND=noninteractive \ CARGO_HOME="/root/.cargo" \ USER="root" - # Create CARGO_HOME folder and don't download rust docs RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \ && rustup set profile minimal @@ -77,6 +74,12 @@ RUN touch src/main.rs # your actual source files being built RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=aarch64-unknown-linux-musl +# Add the `cap_net_bind_service` capability to allow listening on +# privileged (< 1024) ports even when running as a non-root user. +# This is only done if building with BuildKit; with the legacy +# builder, the `COPY` instruction doesn't carry over capabilities. +RUN setcap cap_net_bind_service=+ep target/aarch64-unknown-linux-musl/release/vaultwarden + ######################## RUNTIME IMAGE ######################## # Create a new stage with a minimal image # because we already have a binary built @@ -93,10 +96,10 @@ RUN [ "cross-build-start" ] # Create data folder and Install needed libraries RUN mkdir /data \ && apk add --no-cache \ - openssl \ - tzdata \ + ca-certificates \ curl \ - ca-certificates + openssl \ + tzdata RUN [ "cross-build-end" ] diff --git a/docker/armv6/Dockerfile b/docker/armv6/Dockerfile index 84baa7b6..f90e5c07 100644 --- a/docker/armv6/Dockerfile +++ b/docker/armv6/Dockerfile @@ -29,8 +29,6 @@ FROM vaultwarden/web-vault@sha256:d5f71fb05c4b87935bf51d84140db0f8716cabfe2974fb ########################## BUILD IMAGE ########################## FROM rust:1.66-bullseye as build - - # Build time options to avoid dpkg warnings and help with reproducible builds. ENV DEBIAN_FRONTEND=noninteractive \ LANG=C.UTF-8 \ @@ -39,25 +37,24 @@ ENV DEBIAN_FRONTEND=noninteractive \ CARGO_HOME="/root/.cargo" \ USER="root" - # Create CARGO_HOME folder and don't download rust docs RUN mkdir -pv "${CARGO_HOME}" \ && rustup set profile minimal -# -# Install required build libs for armel architecture. 
+# Install build dependencies for the armel architecture RUN dpkg --add-architecture armel \ && apt-get update \ && apt-get install -y \ --no-install-recommends \ - libssl-dev:armel \ + gcc-arm-linux-gnueabi \ libc6-dev:armel \ - libpq5:armel \ - libpq-dev:armel \ - libmariadb3:armel \ + libcap2-bin \ libmariadb-dev:armel \ libmariadb-dev-compat:armel \ - gcc-arm-linux-gnueabi \ + libmariadb3:armel \ + libpq-dev:armel \ + libpq5:armel \ + libssl-dev:armel \ # # Make sure cargo has the right target config && echo '[target.arm-unknown-linux-gnueabi]' >> "${CARGO_HOME}/config" \ @@ -70,7 +67,6 @@ ENV CC_arm_unknown_linux_gnueabi="/usr/bin/arm-linux-gnueabi-gcc" \ OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabi" \ OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabi" - # Creates a dummy project used to grab dependencies RUN USER=root cargo new --bin /app WORKDIR /app @@ -102,6 +98,7 @@ RUN touch src/main.rs # your actual source files being built RUN cargo build --features ${DB} --release --target=arm-unknown-linux-gnueabi + ######################## RUNTIME IMAGE ######################## # Create a new stage with a minimal image # because we already have a binary built @@ -117,11 +114,11 @@ RUN [ "cross-build-start" ] RUN mkdir /data \ && apt-get update && apt-get install -y \ --no-install-recommends \ - openssl \ ca-certificates \ curl \ libmariadb-dev-compat \ libpq5 \ + openssl \ && apt-get clean \ && rm -rf /var/lib/apt/lists/* diff --git a/docker/armv6/Dockerfile.alpine b/docker/armv6/Dockerfile.alpine index 1f969d7c..129f0216 100644 --- a/docker/armv6/Dockerfile.alpine +++ b/docker/armv6/Dockerfile.alpine @@ -29,8 +29,6 @@ FROM vaultwarden/web-vault@sha256:d5f71fb05c4b87935bf51d84140db0f8716cabfe2974fb ########################## BUILD IMAGE ########################## FROM blackdex/rust-musl:arm-musleabi-stable-1.66.1 as build - - # Build time options to avoid dpkg warnings and help with reproducible builds. ENV DEBIAN_FRONTEND=noninteractive \ LANG=C.UTF-8 \ @@ -39,7 +37,6 @@ ENV DEBIAN_FRONTEND=noninteractive \ CARGO_HOME="/root/.cargo" \ USER="root" - # Create CARGO_HOME folder and don't download rust docs RUN mkdir -pv "${CARGO_HOME}" \ && rustup set profile minimal @@ -79,6 +76,7 @@ RUN touch src/main.rs # your actual source files being built RUN cargo build --features ${DB} --release --target=arm-unknown-linux-musleabi + ######################## RUNTIME IMAGE ######################## # Create a new stage with a minimal image # because we already have a binary built @@ -95,10 +93,10 @@ RUN [ "cross-build-start" ] # Create data folder and Install needed libraries RUN mkdir /data \ && apk add --no-cache \ - openssl \ - tzdata \ + ca-certificates \ curl \ - ca-certificates + openssl \ + tzdata RUN [ "cross-build-end" ] diff --git a/docker/armv6/Dockerfile.buildkit b/docker/armv6/Dockerfile.buildkit index 1e33a25f..4fa86cfa 100644 --- a/docker/armv6/Dockerfile.buildkit +++ b/docker/armv6/Dockerfile.buildkit @@ -29,8 +29,6 @@ FROM vaultwarden/web-vault@sha256:d5f71fb05c4b87935bf51d84140db0f8716cabfe2974fb ########################## BUILD IMAGE ########################## FROM rust:1.66-bullseye as build - - # Build time options to avoid dpkg warnings and help with reproducible builds. 
ENV DEBIAN_FRONTEND=noninteractive \ LANG=C.UTF-8 \ @@ -39,25 +37,24 @@ ENV DEBIAN_FRONTEND=noninteractive \ CARGO_HOME="/root/.cargo" \ USER="root" - # Create CARGO_HOME folder and don't download rust docs RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \ && rustup set profile minimal -# -# Install required build libs for armel architecture. +# Install build dependencies for the armel architecture RUN dpkg --add-architecture armel \ && apt-get update \ && apt-get install -y \ --no-install-recommends \ - libssl-dev:armel \ + gcc-arm-linux-gnueabi \ libc6-dev:armel \ - libpq5:armel \ - libpq-dev:armel \ - libmariadb3:armel \ + libcap2-bin \ libmariadb-dev:armel \ libmariadb-dev-compat:armel \ - gcc-arm-linux-gnueabi \ + libmariadb3:armel \ + libpq-dev:armel \ + libpq5:armel \ + libssl-dev:armel \ # # Make sure cargo has the right target config && echo '[target.arm-unknown-linux-gnueabi]' >> "${CARGO_HOME}/config" \ @@ -70,7 +67,6 @@ ENV CC_arm_unknown_linux_gnueabi="/usr/bin/arm-linux-gnueabi-gcc" \ OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabi" \ OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabi" - # Creates a dummy project used to grab dependencies RUN USER=root cargo new --bin /app WORKDIR /app @@ -102,6 +98,12 @@ RUN touch src/main.rs # your actual source files being built RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=arm-unknown-linux-gnueabi +# Add the `cap_net_bind_service` capability to allow listening on +# privileged (< 1024) ports even when running as a non-root user. +# This is only done if building with BuildKit; with the legacy +# builder, the `COPY` instruction doesn't carry over capabilities. +RUN setcap cap_net_bind_service=+ep target/arm-unknown-linux-gnueabi/release/vaultwarden + ######################## RUNTIME IMAGE ######################## # Create a new stage with a minimal image # because we already have a binary built @@ -117,11 +119,11 @@ RUN [ "cross-build-start" ] RUN mkdir /data \ && apt-get update && apt-get install -y \ --no-install-recommends \ - openssl \ ca-certificates \ curl \ libmariadb-dev-compat \ libpq5 \ + openssl \ && apt-get clean \ && rm -rf /var/lib/apt/lists/* diff --git a/docker/armv6/Dockerfile.buildkit.alpine b/docker/armv6/Dockerfile.buildkit.alpine index d0f5cfbe..10559387 100644 --- a/docker/armv6/Dockerfile.buildkit.alpine +++ b/docker/armv6/Dockerfile.buildkit.alpine @@ -29,8 +29,6 @@ FROM vaultwarden/web-vault@sha256:d5f71fb05c4b87935bf51d84140db0f8716cabfe2974fb ########################## BUILD IMAGE ########################## FROM blackdex/rust-musl:arm-musleabi-stable-1.66.1 as build - - # Build time options to avoid dpkg warnings and help with reproducible builds. 
ENV DEBIAN_FRONTEND=noninteractive \ LANG=C.UTF-8 \ @@ -39,7 +37,6 @@ ENV DEBIAN_FRONTEND=noninteractive \ CARGO_HOME="/root/.cargo" \ USER="root" - # Create CARGO_HOME folder and don't download rust docs RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \ && rustup set profile minimal @@ -79,6 +76,12 @@ RUN touch src/main.rs # your actual source files being built RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=arm-unknown-linux-musleabi +# Add the `cap_net_bind_service` capability to allow listening on +# privileged (< 1024) ports even when running as a non-root user. +# This is only done if building with BuildKit; with the legacy +# builder, the `COPY` instruction doesn't carry over capabilities. +RUN setcap cap_net_bind_service=+ep target/arm-unknown-linux-musleabi/release/vaultwarden + ######################## RUNTIME IMAGE ######################## # Create a new stage with a minimal image # because we already have a binary built @@ -95,10 +98,10 @@ RUN [ "cross-build-start" ] # Create data folder and Install needed libraries RUN mkdir /data \ && apk add --no-cache \ - openssl \ - tzdata \ + ca-certificates \ curl \ - ca-certificates + openssl \ + tzdata RUN [ "cross-build-end" ] diff --git a/docker/armv7/Dockerfile b/docker/armv7/Dockerfile index 8df12612..bf0e4f01 100644 --- a/docker/armv7/Dockerfile +++ b/docker/armv7/Dockerfile @@ -29,8 +29,6 @@ FROM vaultwarden/web-vault@sha256:d5f71fb05c4b87935bf51d84140db0f8716cabfe2974fb ########################## BUILD IMAGE ########################## FROM rust:1.66-bullseye as build - - # Build time options to avoid dpkg warnings and help with reproducible builds. ENV DEBIAN_FRONTEND=noninteractive \ LANG=C.UTF-8 \ @@ -39,25 +37,24 @@ ENV DEBIAN_FRONTEND=noninteractive \ CARGO_HOME="/root/.cargo" \ USER="root" - # Create CARGO_HOME folder and don't download rust docs RUN mkdir -pv "${CARGO_HOME}" \ && rustup set profile minimal -# -# Install required build libs for armhf architecture. 
+# Install build dependencies for the armhf architecture RUN dpkg --add-architecture armhf \ && apt-get update \ && apt-get install -y \ --no-install-recommends \ - libssl-dev:armhf \ + gcc-arm-linux-gnueabihf \ libc6-dev:armhf \ - libpq5:armhf \ - libpq-dev:armhf \ - libmariadb3:armhf \ + libcap2-bin \ libmariadb-dev:armhf \ libmariadb-dev-compat:armhf \ - gcc-arm-linux-gnueabihf \ + libmariadb3:armhf \ + libpq-dev:armhf \ + libpq5:armhf \ + libssl-dev:armhf \ # # Make sure cargo has the right target config && echo '[target.armv7-unknown-linux-gnueabihf]' >> "${CARGO_HOME}/config" \ @@ -70,7 +67,6 @@ ENV CC_armv7_unknown_linux_gnueabihf="/usr/bin/arm-linux-gnueabihf-gcc" \ OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabihf" \ OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabihf" - # Creates a dummy project used to grab dependencies RUN USER=root cargo new --bin /app WORKDIR /app @@ -102,6 +98,7 @@ RUN touch src/main.rs # your actual source files being built RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-gnueabihf + ######################## RUNTIME IMAGE ######################## # Create a new stage with a minimal image # because we already have a binary built @@ -117,11 +114,11 @@ RUN [ "cross-build-start" ] RUN mkdir /data \ && apt-get update && apt-get install -y \ --no-install-recommends \ - openssl \ ca-certificates \ curl \ libmariadb-dev-compat \ libpq5 \ + openssl \ && apt-get clean \ && rm -rf /var/lib/apt/lists/* diff --git a/docker/armv7/Dockerfile.alpine b/docker/armv7/Dockerfile.alpine index 1872e54e..43d2509c 100644 --- a/docker/armv7/Dockerfile.alpine +++ b/docker/armv7/Dockerfile.alpine @@ -29,8 +29,6 @@ FROM vaultwarden/web-vault@sha256:d5f71fb05c4b87935bf51d84140db0f8716cabfe2974fb ########################## BUILD IMAGE ########################## FROM blackdex/rust-musl:armv7-musleabihf-stable-1.66.1 as build - - # Build time options to avoid dpkg warnings and help with reproducible builds. ENV DEBIAN_FRONTEND=noninteractive \ LANG=C.UTF-8 \ @@ -39,7 +37,6 @@ ENV DEBIAN_FRONTEND=noninteractive \ CARGO_HOME="/root/.cargo" \ USER="root" - # Create CARGO_HOME folder and don't download rust docs RUN mkdir -pv "${CARGO_HOME}" \ && rustup set profile minimal @@ -77,6 +74,7 @@ RUN touch src/main.rs # your actual source files being built RUN cargo build --features ${DB} --release --target=armv7-unknown-linux-musleabihf + ######################## RUNTIME IMAGE ######################## # Create a new stage with a minimal image # because we already have a binary built @@ -93,10 +91,10 @@ RUN [ "cross-build-start" ] # Create data folder and Install needed libraries RUN mkdir /data \ && apk add --no-cache \ - openssl \ - tzdata \ + ca-certificates \ curl \ - ca-certificates + openssl \ + tzdata RUN [ "cross-build-end" ] diff --git a/docker/armv7/Dockerfile.buildkit b/docker/armv7/Dockerfile.buildkit index 4ff8364a..07b51478 100644 --- a/docker/armv7/Dockerfile.buildkit +++ b/docker/armv7/Dockerfile.buildkit @@ -29,8 +29,6 @@ FROM vaultwarden/web-vault@sha256:d5f71fb05c4b87935bf51d84140db0f8716cabfe2974fb ########################## BUILD IMAGE ########################## FROM rust:1.66-bullseye as build - - # Build time options to avoid dpkg warnings and help with reproducible builds. 
ENV DEBIAN_FRONTEND=noninteractive \ LANG=C.UTF-8 \ @@ -39,25 +37,24 @@ ENV DEBIAN_FRONTEND=noninteractive \ CARGO_HOME="/root/.cargo" \ USER="root" - # Create CARGO_HOME folder and don't download rust docs RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \ && rustup set profile minimal -# -# Install required build libs for armhf architecture. +# Install build dependencies for the armhf architecture RUN dpkg --add-architecture armhf \ && apt-get update \ && apt-get install -y \ --no-install-recommends \ - libssl-dev:armhf \ + gcc-arm-linux-gnueabihf \ libc6-dev:armhf \ - libpq5:armhf \ - libpq-dev:armhf \ - libmariadb3:armhf \ + libcap2-bin \ libmariadb-dev:armhf \ libmariadb-dev-compat:armhf \ - gcc-arm-linux-gnueabihf \ + libmariadb3:armhf \ + libpq-dev:armhf \ + libpq5:armhf \ + libssl-dev:armhf \ # # Make sure cargo has the right target config && echo '[target.armv7-unknown-linux-gnueabihf]' >> "${CARGO_HOME}/config" \ @@ -70,7 +67,6 @@ ENV CC_armv7_unknown_linux_gnueabihf="/usr/bin/arm-linux-gnueabihf-gcc" \ OPENSSL_INCLUDE_DIR="/usr/include/arm-linux-gnueabihf" \ OPENSSL_LIB_DIR="/usr/lib/arm-linux-gnueabihf" - # Creates a dummy project used to grab dependencies RUN USER=root cargo new --bin /app WORKDIR /app @@ -102,6 +98,12 @@ RUN touch src/main.rs # your actual source files being built RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=armv7-unknown-linux-gnueabihf +# Add the `cap_net_bind_service` capability to allow listening on +# privileged (< 1024) ports even when running as a non-root user. +# This is only done if building with BuildKit; with the legacy +# builder, the `COPY` instruction doesn't carry over capabilities. +RUN setcap cap_net_bind_service=+ep target/armv7-unknown-linux-gnueabihf/release/vaultwarden + ######################## RUNTIME IMAGE ######################## # Create a new stage with a minimal image # because we already have a binary built @@ -117,11 +119,11 @@ RUN [ "cross-build-start" ] RUN mkdir /data \ && apt-get update && apt-get install -y \ --no-install-recommends \ - openssl \ ca-certificates \ curl \ libmariadb-dev-compat \ libpq5 \ + openssl \ && apt-get clean \ && rm -rf /var/lib/apt/lists/* diff --git a/docker/armv7/Dockerfile.buildkit.alpine b/docker/armv7/Dockerfile.buildkit.alpine index 2fc23849..9a9e1a9b 100644 --- a/docker/armv7/Dockerfile.buildkit.alpine +++ b/docker/armv7/Dockerfile.buildkit.alpine @@ -29,8 +29,6 @@ FROM vaultwarden/web-vault@sha256:d5f71fb05c4b87935bf51d84140db0f8716cabfe2974fb ########################## BUILD IMAGE ########################## FROM blackdex/rust-musl:armv7-musleabihf-stable-1.66.1 as build - - # Build time options to avoid dpkg warnings and help with reproducible builds. 
ENV DEBIAN_FRONTEND=noninteractive \ LANG=C.UTF-8 \ @@ -39,7 +37,6 @@ ENV DEBIAN_FRONTEND=noninteractive \ CARGO_HOME="/root/.cargo" \ USER="root" - # Create CARGO_HOME folder and don't download rust docs RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry mkdir -pv "${CARGO_HOME}" \ && rustup set profile minimal @@ -77,6 +74,12 @@ RUN touch src/main.rs # your actual source files being built RUN --mount=type=cache,target=/root/.cargo/git --mount=type=cache,target=/root/.cargo/registry cargo build --features ${DB} --release --target=armv7-unknown-linux-musleabihf +# Add the `cap_net_bind_service` capability to allow listening on +# privileged (< 1024) ports even when running as a non-root user. +# This is only done if building with BuildKit; with the legacy +# builder, the `COPY` instruction doesn't carry over capabilities. +RUN setcap cap_net_bind_service=+ep target/armv7-unknown-linux-musleabihf/release/vaultwarden + ######################## RUNTIME IMAGE ######################## # Create a new stage with a minimal image # because we already have a binary built @@ -93,10 +96,10 @@ RUN [ "cross-build-start" ] # Create data folder and Install needed libraries RUN mkdir /data \ && apk add --no-cache \ - openssl \ - tzdata \ + ca-certificates \ curl \ - ca-certificates + openssl \ + tzdata RUN [ "cross-build-end" ] From e65fbbfc2105566a3c457dd34d3ae790ce7f4fb5 Mon Sep 17 00:00:00 2001 From: Stefan Melmuk Date: Wed, 1 Feb 2023 23:10:09 +0100 Subject: [PATCH 10/24] don't nullify key when editing emergency access the client does not send the key on every update of an emergency access contact so the field would be emptied on a change of the wait days or access level. --- src/api/core/emergency_access.rs | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/src/api/core/emergency_access.rs b/src/api/core/emergency_access.rs index fcabc617..90a5e6b8 100644 --- a/src/api/core/emergency_access.rs +++ b/src/api/core/emergency_access.rs @@ -123,7 +123,9 @@ async fn post_emergency_access( emergency_access.atype = new_type; emergency_access.wait_time_days = data.WaitTimeDays; - emergency_access.key_encrypted = data.KeyEncrypted; + if data.KeyEncrypted.is_some() { + emergency_access.key_encrypted = data.KeyEncrypted; + } emergency_access.save(&mut conn).await?; Ok(Json(emergency_access.to_json())) From 26cd5d96434d497cc0a7ca11cacc5ea9845230c0 Mon Sep 17 00:00:00 2001 From: sirux88 Date: Sat, 4 Feb 2023 09:23:13 +0100 Subject: [PATCH 11/24] Replaced wrong mysql column type --- .../mysql/2023-01-06-151600_add_reset_password_support/up.sql | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/migrations/mysql/2023-01-06-151600_add_reset_password_support/up.sql b/migrations/mysql/2023-01-06-151600_add_reset_password_support/up.sql index d8173af4..326b3106 100644 --- a/migrations/mysql/2023-01-06-151600_add_reset_password_support/up.sql +++ b/migrations/mysql/2023-01-06-151600_add_reset_password_support/up.sql @@ -1,2 +1,2 @@ ALTER TABLE users_organizations -ADD COLUMN reset_password_key VARCHAR(255); +ADD COLUMN reset_password_key TEXT; From 62dfeb80f211f9ec283b46d8158f86d715f8e6b5 Mon Sep 17 00:00:00 2001 From: sirux88 Date: Sat, 4 Feb 2023 13:29:57 +0100 Subject: [PATCH 12/24] improved security, disabling policy usage on email-disabled clients and some refactoring --- src/api/core/organizations.rs | 143 ++++++++++++++++------------------ 1 file changed, 68 insertions(+), 75 deletions(-) diff --git a/src/api/core/organizations.rs 
b/src/api/core/organizations.rs index 964d4c4d..6224c18b 100644 --- a/src/api/core/organizations.rs +++ b/src/api/core/organizations.rs @@ -711,10 +711,6 @@ async fn send_invite( err!("Only Owners can invite Managers, Admins or Owners") } - if !CONFIG.mail_enabled() && OrgPolicy::org_is_reset_password_auto_enroll(&org_id, &mut conn).await { - err!("With mailing disabled and auto-enrollment-feature of reset-password-policy enabled it's not possible to invite users"); - } - for email in data.Emails.iter() { let email = email.to_lowercase(); let mut user_org_status = UserOrgStatus::Invited as i32; @@ -729,10 +725,6 @@ async fn send_invite( } if !CONFIG.mail_enabled() { - if OrgPolicy::org_is_reset_password_auto_enroll(&org_id, &mut conn).await { - err!("With disabled mailing and enabled auto-enrollment-feature of reset-password-policy it's not possible to invite existing users"); - } - let invitation = Invitation::new(&email); invitation.save(&mut conn).await?; } @@ -748,10 +740,6 @@ async fn send_invite( // automatically accept existing users if mail is disabled if !CONFIG.mail_enabled() && !user.password_hash.is_empty() { user_org_status = UserOrgStatus::Accepted as i32; - - if OrgPolicy::org_is_reset_password_auto_enroll(&org_id, &mut conn).await { - err!("With disabled mailing and enabled auto-enrollment-feature of reset-password-policy it's not possible to invite existing users"); - } } user } @@ -1597,17 +1585,8 @@ async fn put_policy( } } - // This check is required since invited users automatically get accepted if mailing is not enabled (this seems like a vaultwarden specific feature) - // As a result of this the necessary "/accepted"-endpoint doesn't get hit. - // But this endpoint is required for autoenrollment while invitation. - // Nevertheless reset password is fully fuctiontional in settings without mailing by manual enrollment - if pol_type_enum == OrgPolicyType::ResetPassword && data.enabled && !CONFIG.mail_enabled() { - if let Some(policy_data) = &data.data { - if policy_data["autoEnrollEnabled"].as_bool().unwrap_or(false) { - err!("Autoenroll can't be used since it requires enabled emailing") - } - } + err!("Due to potential security flaws and/or misuse reset password policy is disabled on mail disabled instances") } let mut policy = match OrgPolicy::find_by_org_and_type(&org_id, pol_type_enum, &mut conn).await { @@ -2542,55 +2521,37 @@ async fn put_reset_password( None => err!("Required organization not found"), }; - let policy = match OrgPolicy::find_by_org_and_type(&org.uuid, OrgPolicyType::ResetPassword, &mut conn).await { - Some(p) => p, - None => err!("Policy not found"), - }; - - if !policy.enabled { - err!("Reset password policy not enabled"); - } - let org_user = match UserOrganization::find_by_uuid_and_org(&org_user_id, &org.uuid, &mut conn).await { Some(user) => user, None => err!("User to reset isn't member of required organization"), }; - if org_user.reset_password_key.is_none() { - err!("Password reset not or not corretly enrolled"); - } - if org_user.status != (UserOrgStatus::Confirmed as i32) { - err!("Organization user must be confirmed for password reset functionality"); - } - - //Resetting user must be higher/equal to user to reset - let mut reset_allowed = false; - if headers.org_user_type == UserOrgType::Owner { - reset_allowed = true; - } - if headers.org_user_type == UserOrgType::Admin { - reset_allowed = org_user.atype != (UserOrgType::Owner as i32); - } - - if !reset_allowed { - err!("No permission to reset this user's password"); - } - let mut user 
= match User::find_by_uuid(&org_user.user_uuid, &mut conn).await { Some(user) => user, None => err!("User not found"), }; + check_reset_password_applicable_and_permissions(&org_id, &org_user_id, &headers, &mut conn).await?; + + if org_user.reset_password_key.is_none() { + err!("Password reset not or not correctly enrolled"); + } + if org_user.status != (UserOrgStatus::Confirmed as i32) { + err!("Organization user must be confirmed for password reset functionality"); + } + + // Sending email before resetting password to ensure working email configuration and the resulting + // user notification. Also this might add some protection against security flaws and misuse + if let Err(e) = mail::send_admin_reset_password(&user.email.to_lowercase(), &user.name, &org.name).await { + error!("Error sending user reset password email: {:#?}", e); + } + let reset_request = data.into_inner().data; user.set_password(reset_request.NewMasterPasswordHash.as_str(), Some(reset_request.Key), true, None); user.save(&mut conn).await?; - nt.send_user_update(UpdateType::LogOut, &user).await; - - if CONFIG.mail_enabled() { - mail::send_admin_reset_password(&user.email.to_lowercase(), &user.name, &org.name).await?; - } + nt.send_logout(&user, None).await; log_event( EventType::OrganizationUserAdminResetPassword as i32, @@ -2610,7 +2571,7 @@ async fn put_reset_password( async fn get_reset_password_details( org_id: String, org_user_id: String, - _headers: AdminHeaders, + headers: AdminHeaders, mut conn: DbConn, ) -> JsonResult { let org = match Organization::find_by_uuid(&org_id, &mut conn).await { @@ -2618,15 +2579,6 @@ async fn get_reset_password_details( None => err!("Required organization not found"), }; - let policy = match OrgPolicy::find_by_org_and_type(&org_id, OrgPolicyType::ResetPassword, &mut conn).await { - Some(p) => p, - None => err!("Policy not found"), - }; - - if !policy.enabled { - err!("Reset password policy not enabled"); - } - let org_user = match UserOrganization::find_by_uuid_and_org(&org_user_id, &org_id, &mut conn).await { Some(user) => user, None => err!("User to reset isn't member of required organization"), @@ -2637,6 +2589,8 @@ async fn get_reset_password_details( None => err!("User not found"), }; + check_reset_password_applicable_and_permissions(&org_id, &org_user_id, &headers, &mut conn).await?; + Ok(Json(json!({ "Object": "organizationUserResetPasswordDetails", "Kdf":user.client_kdf_type, @@ -2647,6 +2601,52 @@ async fn get_reset_password_details( }))) } +async fn check_reset_password_applicable_and_permissions( + org_id: &str, + org_user_id: &str, + headers: &AdminHeaders, + conn: &mut DbConn, +) -> EmptyResult { + check_reset_password_applicable(org_id, conn).await?; + + let target_user = match UserOrganization::find_by_uuid_and_org(org_user_id, org_id, conn).await { + Some(user) => user, + None => err!("Reset target user not found"), + }; + + // Resetting user must be higher/equal to user to reset + let mut reset_allowed = false; + if headers.org_user_type == UserOrgType::Owner { + reset_allowed = true; + } + if headers.org_user_type == UserOrgType::Admin { + reset_allowed = target_user.atype != (UserOrgType::Owner as i32); + } + + if !reset_allowed { + err!("No permission to reset this user's password"); + } + + Ok(()) +} + +async fn check_reset_password_applicable(org_id: &str, conn: &mut DbConn) -> EmptyResult { + if !CONFIG.mail_enabled() { + err!("Password reset is not supported on an email-disabled instance."); + } + + let policy = match OrgPolicy::find_by_org_and_type(org_id, 
OrgPolicyType::ResetPassword, conn).await { + Some(p) => p, + None => err!("Policy not found"), + }; + + if !policy.enabled { + err!("Reset password policy not enabled"); + } + + Ok(()) +} + #[put("/organizations//users//reset-password-enrollment", data = "")] async fn put_reset_password_enrollment( org_id: String, @@ -2656,20 +2656,13 @@ async fn put_reset_password_enrollment( mut conn: DbConn, ip: ClientIp, ) -> EmptyResult { - let policy = match OrgPolicy::find_by_org_and_type(&org_id, OrgPolicyType::ResetPassword, &mut conn).await { - Some(p) => p, - None => err!("Policy not found"), - }; - - if !policy.enabled { - err!("Reset password policy not enabled"); - } - let mut org_user = match UserOrganization::find_by_user_and_org(&headers.user.uuid, &org_id, &mut conn).await { Some(u) => u, None => err!("User to enroll isn't member of required organization"), }; + check_reset_password_applicable(&org_id, &mut conn).await?; + let reset_request = data.into_inner().data; if reset_request.ResetPasswordKey.is_none() From a6558f55488c86c1aa702e9a5e7875afbd2e7490 Mon Sep 17 00:00:00 2001 From: sirux88 Date: Sun, 5 Feb 2023 16:34:48 +0100 Subject: [PATCH 13/24] rust lang specific improvements --- src/api/core/organizations.rs | 18 +++++------------- 1 file changed, 5 insertions(+), 13 deletions(-) diff --git a/src/api/core/organizations.rs b/src/api/core/organizations.rs index 6224c18b..3b0d5fad 100644 --- a/src/api/core/organizations.rs +++ b/src/api/core/organizations.rs @@ -2542,7 +2542,7 @@ async fn put_reset_password( // Sending email before resetting password to ensure working email configuration and the resulting // user notification. Also this might add some protection against security flaws and misuse - if let Err(e) = mail::send_admin_reset_password(&user.email.to_lowercase(), &user.name, &org.name).await { + if let Err(e) = mail::send_admin_reset_password(&user.email, &user.name, &org.name).await { error!("Error sending user reset password email: {:#?}", e); } @@ -2615,19 +2615,11 @@ async fn check_reset_password_applicable_and_permissions( }; // Resetting user must be higher/equal to user to reset - let mut reset_allowed = false; - if headers.org_user_type == UserOrgType::Owner { - reset_allowed = true; + match headers.org_user_type { + UserOrgType::Owner => Ok(()), + UserOrgType::Admin if target_user.atype <= UserOrgType::Admin => Ok(()), + _ => err!("No permission to reset this user's password"), } - if headers.org_user_type == UserOrgType::Admin { - reset_allowed = target_user.atype != (UserOrgType::Owner as i32); - } - - if !reset_allowed { - err!("No permission to reset this user's password"); - } - - Ok(()) } async fn check_reset_password_applicable(org_id: &str, conn: &mut DbConn) -> EmptyResult { From 0d1753ac747c43b9f48310c4f7be284fdc6ee669 Mon Sep 17 00:00:00 2001 From: sirux88 Date: Sun, 5 Feb 2023 16:47:23 +0100 Subject: [PATCH 14/24] completly hide reset password policy on email disabled instances --- src/api/core/organizations.rs | 4 ---- src/db/models/organization.rs | 4 ++-- 2 files changed, 2 insertions(+), 6 deletions(-) diff --git a/src/api/core/organizations.rs b/src/api/core/organizations.rs index 3b0d5fad..9250f929 100644 --- a/src/api/core/organizations.rs +++ b/src/api/core/organizations.rs @@ -1585,10 +1585,6 @@ async fn put_policy( } } - if pol_type_enum == OrgPolicyType::ResetPassword && data.enabled && !CONFIG.mail_enabled() { - err!("Due to potential security flaws and/or misuse reset password policy is disabled on mail disabled instances") - } - let mut 
policy = match OrgPolicy::find_by_org_and_type(&org_id, pol_type_enum, &mut conn).await { Some(p) => p, None => OrgPolicy::new(org_id.clone(), pol_type_enum, "{}".to_string()), diff --git a/src/db/models/organization.rs b/src/db/models/organization.rs index 1de321bd..a6e4be21 100644 --- a/src/db/models/organization.rs +++ b/src/db/models/organization.rs @@ -159,7 +159,7 @@ impl Organization { "SelfHost": true, "UseApi": false, // Not supported "HasPublicAndPrivateKeys": self.private_key.is_some() && self.public_key.is_some(), - "UseResetPassword": true, + "UseResetPassword": CONFIG.mail_enabled(), "BusinessName": null, "BusinessAddress1": null, @@ -314,7 +314,7 @@ impl UserOrganization { "SelfHost": true, "HasPublicAndPrivateKeys": org.private_key.is_some() && org.public_key.is_some(), "ResetPasswordEnrolled": self.reset_password_key.is_some(), - "UseResetPassword": true, + "UseResetPassword": CONFIG.mail_enabled(), "SsoBound": false, // Not supported "UseSso": false, // Not supported "ProviderId": null, From 64edc49392f1786aebf6a5a0c9b60742ca734d6e Mon Sep 17 00:00:00 2001 From: BlockListed <44610569+BlockListed@users.noreply.github.com> Date: Mon, 6 Feb 2023 23:19:08 +0100 Subject: [PATCH 15/24] change description of domain configuration Vaultwarden send won't work if the domain includes a trailing slash. This should be documented, as it may lead to confusion amoung users. --- src/config.rs | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/src/config.rs b/src/config.rs index 46deed54..1ba6e402 100644 --- a/src/config.rs +++ b/src/config.rs @@ -401,7 +401,8 @@ make_config! { /// General settings settings { /// Domain URL |> This needs to be set to the URL used to access the server, including 'http[s]://' - /// and port, if it's different than the default. Some server functions don't work correctly without this value + /// and port, if it's different than the default, but excluding a trailing slash. + /// Some server functions don't work correctly without this value domain: String, true, def, "http://localhost".to_string(); /// Domain Set |> Indicates if the domain is set by the admin. Otherwise the default will be used. domain_set: bool, false, def, false; From eb9b481eba63dbbfe0e83fac238be3482e21ccfa Mon Sep 17 00:00:00 2001 From: BlockListed <44610569+BlockListed@users.noreply.github.com> Date: Tue, 7 Feb 2023 08:48:48 +0100 Subject: [PATCH 16/24] improve wording of domain description --- src/config.rs | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/config.rs b/src/config.rs index 1ba6e402..60b2db27 100644 --- a/src/config.rs +++ b/src/config.rs @@ -401,7 +401,7 @@ make_config! { /// General settings settings { /// Domain URL |> This needs to be set to the URL used to access the server, including 'http[s]://' - /// and port, if it's different than the default, but excluding a trailing slash. + /// and port, if it's different than the default. Don't include a trailing slash. /// Some server functions don't work correctly without this value domain: String, true, def, "http://localhost".to_string(); /// Domain Set |> Indicates if the domain is set by the admin. Otherwise the default will be used. From 24b5784f027b04ccdb7d34b89a87e204b97bc22b Mon Sep 17 00:00:00 2001 From: "Kevin P. Fleming" Date: Tue, 7 Feb 2023 05:24:23 -0500 Subject: [PATCH 17/24] Generate distinct log messages for regex vs. IP blacklisting. 
When an icon will not be downloaded due to matching a configured blacklist, ensure that the log message indicates the type of blacklist that was matched. --- src/api/icons.rs | 30 ++++++++++++++++++++---------- 1 file changed, 20 insertions(+), 10 deletions(-) diff --git a/src/api/icons.rs b/src/api/icons.rs index 23d122f1..f1b1ee70 100644 --- a/src/api/icons.rs +++ b/src/api/icons.rs @@ -79,7 +79,7 @@ async fn icon_redirect(domain: &str, template: &str) -> Option { return None; } - if is_domain_blacklisted(domain).await { + if check_domain_blacklist_reason(domain).await.is_some() { return None; } @@ -258,9 +258,15 @@ mod tests { } } +#[derive(Debug, Clone)] +enum DomainBlacklistReason { + Regex, + IP, +} + use cached::proc_macro::cached; -#[cached(key = "String", convert = r#"{ domain.to_string() }"#, size = 16, time = 60)] -async fn is_domain_blacklisted(domain: &str) -> bool { +#[cached(key = "String", convert = r#"{ domain.to_string() }"#, size = 16, time = 60, option = true)] +async fn check_domain_blacklist_reason(domain: &str) -> Option { // First check the blacklist regex if there is a match. // This prevents the blocked domain(s) from being leaked via a DNS lookup. if let Some(blacklist) = CONFIG.icon_blacklist_regex() { @@ -284,7 +290,7 @@ async fn is_domain_blacklisted(domain: &str) -> bool { if is_match { debug!("Blacklisted domain: {} matched ICON_BLACKLIST_REGEX", domain); - return true; + return Some(DomainBlacklistReason::Regex); } } @@ -293,13 +299,13 @@ async fn is_domain_blacklisted(domain: &str) -> bool { for addr in s { if !is_global(addr.ip()) { debug!("IP {} for domain '{}' is not a global IP!", addr.ip(), domain); - return true; + return Some(DomainBlacklistReason::IP); } } } } - false + None } async fn get_icon(domain: &str) -> Option<(Vec, String)> { @@ -564,8 +570,10 @@ async fn get_page(url: &str) -> Result { } async fn get_page_with_referer(url: &str, referer: &str) -> Result { - if is_domain_blacklisted(url::Url::parse(url).unwrap().host_str().unwrap_or_default()).await { - warn!("Favicon '{}' resolves to a blacklisted domain or IP!", url); + match check_domain_blacklist_reason(url::Url::parse(url).unwrap().host_str().unwrap_or_default()).await { + Some(DomainBlacklistReason::Regex) => warn!("Favicon '{}' is from a blacklisted domain!", url), + Some(DomainBlacklistReason::IP) => warn!("Favicon '{}' is hosted on a non-global IP!", url), + None => (), } let mut client = CLIENT.get(url); @@ -659,8 +667,10 @@ fn parse_sizes(sizes: &str) -> (u16, u16) { } async fn download_icon(domain: &str) -> Result<(Bytes, Option<&str>), Error> { - if is_domain_blacklisted(domain).await { - err_silent!("Domain is blacklisted", domain) + match check_domain_blacklist_reason(domain).await { + Some(DomainBlacklistReason::Regex) => err_silent!("Domain is blacklisted", domain), + Some(DomainBlacklistReason::IP) => err_silent!("Host resolves to a non-global IP", domain), + None => (), } let icon_result = get_icon_url(domain).await?; From 6741b2590709957dffcd16d7a04b4ebdbaa92c1d Mon Sep 17 00:00:00 2001 From: "Kevin P. Fleming" Date: Tue, 7 Feb 2023 05:54:06 -0500 Subject: [PATCH 18/24] Ensure that all results from check_domain_blacklist_reason are cached. 
--- src/api/icons.rs | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/api/icons.rs b/src/api/icons.rs index f1b1ee70..9bff0162 100644 --- a/src/api/icons.rs +++ b/src/api/icons.rs @@ -265,7 +265,7 @@ enum DomainBlacklistReason { } use cached::proc_macro::cached; -#[cached(key = "String", convert = r#"{ domain.to_string() }"#, size = 16, time = 60, option = true)] +#[cached(key = "String", convert = r#"{ domain.to_string() }"#, size = 16, time = 60)] async fn check_domain_blacklist_reason(domain: &str) -> Option { // First check the blacklist regex if there is a match. // This prevents the blocked domain(s) from being leaked via a DNS lookup. From a72d0b518fa96ba63c4029f0c60e6bd53cd92661 Mon Sep 17 00:00:00 2001 From: BlockListed <44610569+BlockListed@users.noreply.github.com> Date: Tue, 7 Feb 2023 12:48:48 +0100 Subject: [PATCH 19/24] remove documentation of bug since I'm fixing it --- src/config.rs | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/src/config.rs b/src/config.rs index 60b2db27..46deed54 100644 --- a/src/config.rs +++ b/src/config.rs @@ -401,8 +401,7 @@ make_config! { /// General settings settings { /// Domain URL |> This needs to be set to the URL used to access the server, including 'http[s]://' - /// and port, if it's different than the default. Don't include a trailing slash. - /// Some server functions don't work correctly without this value + /// and port, if it's different than the default. Some server functions don't work correctly without this value domain: String, true, def, "http://localhost".to_string(); /// Domain Set |> Indicates if the domain is set by the admin. Otherwise the default will be used. domain_set: bool, false, def, false; From 679bc7a59b5daef65ef7916474577106a46fa0e9 Mon Sep 17 00:00:00 2001 From: BlockListed <44610569+BlockListed@users.noreply.github.com> Date: Tue, 7 Feb 2023 13:03:28 +0100 Subject: [PATCH 20/24] fix trailing slash not being removed from domain --- src/auth.rs | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/src/auth.rs b/src/auth.rs index 03f14cb8..5e31524e 100644 --- a/src/auth.rs +++ b/src/auth.rs @@ -283,7 +283,8 @@ impl<'r> FromRequest<'r> for Host { // Get host let host = if CONFIG.domain_set() { - CONFIG.domain() + // Remove trailing slash if it exists since we're getting a host + CONFIG.domain().trim_end_matches('/').to_string() } else if let Some(referer) = headers.get_one("Referer") { referer.to_string() } else { From b3a351ccb2abd0fa1aa267922f798a3d02446199 Mon Sep 17 00:00:00 2001 From: Jan Jansen Date: Thu, 5 Jan 2023 17:04:11 +0100 Subject: [PATCH 21/24] allow editing/unhiding by group Fixes #2989 Signed-off-by: Jan Jansen --- src/db/models/collection.rs | 124 +++++++++++++++++++++++++----------- 1 file changed, 86 insertions(+), 38 deletions(-) diff --git a/src/db/models/collection.rs b/src/db/models/collection.rs index eba0ffee..0b40196d 100644 --- a/src/db/models/collection.rs +++ b/src/db/models/collection.rs @@ -287,47 +287,95 @@ impl Collection { } pub async fn is_writable_by_user(&self, user_uuid: &str, conn: &mut DbConn) -> bool { - match UserOrganization::find_by_user_and_org(user_uuid, &self.org_uuid, conn).await { - None => false, // Not in Org - Some(user_org) => { - if user_org.has_full_access() { - return true; - } - - db_run! 
{ conn: { - users_collections::table - .filter(users_collections::collection_uuid.eq(&self.uuid)) - .filter(users_collections::user_uuid.eq(user_uuid)) - .filter(users_collections::read_only.eq(false)) - .count() - .first::(conn) - .ok() - .unwrap_or(0) != 0 - }} - } - } + let user_uuid = user_uuid.to_string(); + db_run! { conn: { + collections::table + .left_join(users_collections::table.on( + users_collections::collection_uuid.eq(collections::uuid).and( + users_collections::user_uuid.eq(user_uuid.clone()) + ) + )) + .left_join(users_organizations::table.on( + collections::org_uuid.eq(users_organizations::org_uuid).and( + users_organizations::user_uuid.eq(user_uuid) + ) + )) + .left_join(groups_users::table.on( + groups_users::users_organizations_uuid.eq(users_organizations::uuid) + )) + .left_join(groups::table.on( + groups::uuid.eq(groups_users::groups_uuid) + )) + .left_join(collections_groups::table.on( + collections_groups::groups_uuid.eq(groups_users::groups_uuid).and( + collections_groups::collections_uuid.eq(collections::uuid) + ) + )) + .filter(collections::uuid.eq(&self.uuid)) + .filter( + users_collections::collection_uuid.eq(&self.uuid).and(users_collections::read_only.eq(false)).or(// Directly accessed collection + users_organizations::access_all.eq(true).or( // access_all in Organization + users_organizations::atype.le(UserOrgType::Admin as i32) // Org admin or owner + )).or( + groups::access_all.eq(true) // access_all in groups + ).or( // access via groups + groups_users::users_organizations_uuid.eq(users_organizations::uuid).and( + collections_groups::collections_uuid.is_not_null().and( + collections_groups::read_only.eq(false)) + ) + ) + ) + .count() + .first::(conn) + .ok() + .unwrap_or(0) != 0 + }} } pub async fn hide_passwords_for_user(&self, user_uuid: &str, conn: &mut DbConn) -> bool { - match UserOrganization::find_by_user_and_org(user_uuid, &self.org_uuid, conn).await { - None => true, // Not in Org - Some(user_org) => { - if user_org.has_full_access() { - return false; - } - - db_run! { conn: { - users_collections::table - .filter(users_collections::collection_uuid.eq(&self.uuid)) - .filter(users_collections::user_uuid.eq(user_uuid)) - .filter(users_collections::hide_passwords.eq(true)) - .count() - .first::(conn) - .ok() - .unwrap_or(0) != 0 - }} - } - } + let user_uuid = user_uuid.to_string(); + db_run! 
{ conn: { + collections::table + .left_join(users_collections::table.on( + users_collections::collection_uuid.eq(collections::uuid).and( + users_collections::user_uuid.eq(user_uuid.clone()) + ) + )) + .left_join(users_organizations::table.on( + collections::org_uuid.eq(users_organizations::org_uuid).and( + users_organizations::user_uuid.eq(user_uuid) + ) + )) + .left_join(groups_users::table.on( + groups_users::users_organizations_uuid.eq(users_organizations::uuid) + )) + .left_join(groups::table.on( + groups::uuid.eq(groups_users::groups_uuid) + )) + .left_join(collections_groups::table.on( + collections_groups::groups_uuid.eq(groups_users::groups_uuid).and( + collections_groups::collections_uuid.eq(collections::uuid) + ) + )) + .filter(collections::uuid.eq(&self.uuid)) + .filter( + users_collections::collection_uuid.eq(&self.uuid).and(users_collections::hide_passwords.eq(true)).or(// Directly accessed collection + users_organizations::access_all.eq(true).or( // access_all in Organization + users_organizations::atype.le(UserOrgType::Admin as i32) // Org admin or owner + )).or( + groups::access_all.eq(true) // access_all in groups + ).or( // access via groups + groups_users::users_organizations_uuid.eq(users_organizations::uuid).and( + collections_groups::collections_uuid.is_not_null().and( + collections_groups::hide_passwords.eq(true)) + ) + ) + ) + .count() + .first::(conn) + .ok() + .unwrap_or(0) != 0 + }} } } From a2aa7c9bc23145f0f5db72f8aeed826902c86fde Mon Sep 17 00:00:00 2001 From: BlockListed <44610569+BlockListed@users.noreply.github.com> Date: Tue, 7 Feb 2023 18:19:16 +0100 Subject: [PATCH 22/24] Revert "fix trailing slash not being removed from domain" This reverts commit 679bc7a59b5daef65ef7916474577106a46fa0e9. --- src/auth.rs | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) diff --git a/src/auth.rs b/src/auth.rs index 5e31524e..03f14cb8 100644 --- a/src/auth.rs +++ b/src/auth.rs @@ -283,8 +283,7 @@ impl<'r> FromRequest<'r> for Host { // Get host let host = if CONFIG.domain_set() { - // Remove trailing slash if it exists since we're getting a host - CONFIG.domain().trim_end_matches('/').to_string() + CONFIG.domain() } else if let Some(referer) = headers.get_one("Referer") { referer.to_string() } else { From 5d1c11ceba3826b5ae000d9a4d8c0ec7e094428c Mon Sep 17 00:00:00 2001 From: BlockListed <44610569+BlockListed@users.noreply.github.com> Date: Tue, 7 Feb 2023 18:34:47 +0100 Subject: [PATCH 23/24] fix trailing slash in configuration builder --- src/config.rs | 8 ++++++++ 1 file changed, 8 insertions(+) diff --git a/src/config.rs b/src/config.rs index 46deed54..42a75ca2 100644 --- a/src/config.rs +++ b/src/config.rs @@ -141,6 +141,14 @@ macro_rules! 
make_config { )+)+ config.domain_set = _domain_set; + if config.domain_set { + if config.domain.ends_with('/') { + println!("[WARNING] The configured domain ends with a trailing slash."); + println!("[WARNING] The trailing slash is getting removed."); + config.domain = config.domain.trim_end_matches('/').to_string(); + } + } + config.signups_domains_whitelist = config.signups_domains_whitelist.trim().to_lowercase(); config.org_creation_users = config.org_creation_users.trim().to_lowercase(); From c04a1352cbc62cb55e4bb412c245c832780a20df Mon Sep 17 00:00:00 2001 From: BlockListed <44610569+BlockListed@users.noreply.github.com> Date: Tue, 7 Feb 2023 18:49:26 +0100 Subject: [PATCH 24/24] remove warn when sanitizing domain --- src/config.rs | 8 +------- 1 file changed, 1 insertion(+), 7 deletions(-) diff --git a/src/config.rs b/src/config.rs index 42a75ca2..8ae7109c 100644 --- a/src/config.rs +++ b/src/config.rs @@ -141,13 +141,7 @@ macro_rules! make_config { )+)+ config.domain_set = _domain_set; - if config.domain_set { - if config.domain.ends_with('/') { - println!("[WARNING] The configured domain ends with a trailing slash."); - println!("[WARNING] The trailing slash is getting removed."); - config.domain = config.domain.trim_end_matches('/').to_string(); - } - } + config.domain = config.domain.trim_end_matches('/').to_string(); config.signups_domains_whitelist = config.signups_domains_whitelist.trim().to_lowercase(); config.org_creation_users = config.org_creation_users.trim().to_lowercase();
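
Note on the domain handling settled in the last two patches: the trailing slash is now stripped once, unconditionally, inside the configuration builder, so every later consumer of CONFIG.domain() can append paths without producing double slashes. A minimal standalone sketch of that normalization follows; the free function normalize_domain and the example URLs are illustrative only, since the real code runs inside the make_config! macro and mutates the generated config value directly:

    // Illustrative sketch only -- the actual normalization happens inside make_config!.
    fn normalize_domain(raw: &str) -> String {
        // Drop any trailing '/' so later code can append "/admin", "/#/send/...", etc.
        raw.trim_end_matches('/').to_string()
    }

    fn main() {
        assert_eq!(normalize_domain("https://vault.example.com/"), "https://vault.example.com");
        assert_eq!(normalize_domain("https://vault.example.com"), "https://vault.example.com");
        // Sub-path installs keep their path component; only the trailing slash is removed.
        assert_eq!(normalize_domain("https://example.com/vault/"), "https://example.com/vault");
    }

Compared with the reverted per-request fix in src/auth.rs (patches 20 and 22), normalizing once at startup keeps the Host extraction untouched and avoids repeating the trim on every request.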
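
Note on patch 21 ("allow editing/unhiding by group"): the rewritten is_writable_by_user collapses the old two-step lookup into a single joined query, and the boolean rule it encodes is easier to review in plain Rust. The sketch below is a toy restatement under assumed names (Membership, Grant and GroupGrant do not exist in Vaultwarden); it only expresses the predicate the new query's filter evaluates:

    // Toy restatement of the access rule behind the reworked is_writable_by_user query.
    struct Grant { read_only: bool }                  // users_collections / collections_groups row
    struct GroupGrant { access_all: bool, grant: Option<Grant> }
    struct Membership {
        org_access_all: bool,                         // users_organizations.access_all
        is_admin_or_owner: bool,                      // atype <= UserOrgType::Admin
        direct: Option<Grant>,                        // direct users_collections entry, if any
        groups: Vec<GroupGrant>,                      // via groups_users / collections_groups
    }

    fn is_writable(m: &Membership) -> bool {
        let direct = m.direct.as_ref().map_or(false, |g| !g.read_only);
        let via_group = m.groups.iter().any(|g| {
            g.access_all || g.grant.as_ref().map_or(false, |c| !c.read_only)
        });
        direct || m.org_access_all || m.is_admin_or_owner || via_group
    }

    fn main() {
        // A user whose only link to the collection is a writable group grant is now allowed,
        // which is exactly the case the previous per-user-only lookup rejected.
        let m = Membership {
            org_access_all: false,
            is_admin_or_owner: false,
            direct: None,
            groups: vec![GroupGrant { access_all: false, grant: Some(Grant { read_only: false }) }],
        };
        assert!(is_writable(&m));
    }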