Compare commits

...

8 commits

| SHA1 | Message | CI (continuous-integration/drone) | Date |
|------|---------|-----------------------------------|------|
| 88781b61ce | Update compatibility target | push: failing | 2022-01-24 17:50:16 +01:00 |
| c435004d9f | Reword connect | push: passing | 2022-01-24 13:59:25 +01:00 |
| 3e07f10255 | Add WinSCP and update menu | push: error | 2022-01-24 13:52:53 +01:00 |
| 11f54e2b8b | Add support for duck | push: failing | 2022-01-24 13:44:39 +01:00 |
| a8ef761731 | Add docker registry | push: failing | 2022-01-24 12:37:12 +01:00 |
| ba7be3f895 | Add doc for Publii + Peertube | push: failing | 2022-01-24 12:04:58 +01:00 |
| 9374389f87 | Add tests for CORS | push: failing | 2022-01-14 11:47:27 +01:00 |
| bed3106c6a | Implement {Put,Get,Delete}BucketCors and CORS in web server | pr: passing, push: passing | 2022-01-13 17:27:16 +01:00 |
24 changed files with 848 additions and 323 deletions


@@ -14,10 +14,10 @@
 - [Recovering from failures](./cookbook/recovering.md)
 - [Integrations](./connect/index.md)
+- [Browsing tools (awscli, mc...)](./connect/cli.md)
 - [Apps (Nextcloud, Peertube...)](./connect/apps.md)
 - [Websites (Hugo, Jekyll, Publii...)](./connect/websites.md)
 - [Repositories (Docker, Nix, Git...)](./connect/repositories.md)
-- [CLI tools (rclone, awscli, mc...)](./connect/cli.md)
 - [Backups (restic, duplicity...)](./connect/backup.md)
 - [Your code (PHP, JS, Go...)](./connect/code.md)
 - [FUSE (s3fs, goofys, s3backer...)](./connect/fs.md)


@@ -1,6 +1,20 @@
 # Apps (Nextcloud, Peertube...)

-In this section, we cover the following software: [Nextcloud](#nextcloud), [Peertube](#peertube), [Mastodon](#mastodon), [Matrix](#matrix)
+In this section, we cover the following web applications:
+
+| Name | Status | Note |
+|------|--------|------|
+| [Nextcloud](#nextcloud) | ✅ | Both Primary Storage and External Storage are supported |
+| [Peertube](#peertube) | ✅ | Must be configured with the website endpoint |
+| [Mastodon](#mastodon) | ❓ | Not yet tested |
+| [Matrix](#matrix) | ✅ | Tested with `synapse-s3-storage-provider` |
+| [Pixelfed](#pixelfed) | ❓ | Not yet tested |
+| [Pleroma](#pleroma) | ❓ | Not yet tested |
+| [Lemmy](#lemmy) | ❓ | Not yet tested |
+| [Funkwhale](#funkwhale) | ❓ | Not yet tested |
+| [Misskey](#misskey) | ❓ | Not yet tested |
+| [Prismo](#prismo) | ❓ | Not yet tested |
+| [Owncloud OCIS](#owncloud-infinite-scale-ocis) | ❓ | Not yet tested |

 ## Nextcloud
@@ -108,109 +122,8 @@ Do not change the `use_path_style` and `legacy_auth` entries, other configuratio

 Peertube proposes a clever integration of S3 by directly exposing its endpoint instead of proxifying requests through the application.
 In other words, Peertube is only responsible of the "control plane" and offload the "data plane" to Garage.
-In return, this system is a bit harder to configure, especially with Garage that supports less feature than other older S3 backends.
-We show that it is still possible to configure Garage with Peertube, allowing you to spread the load and the bandwidth usage on the Garage cluster.
+In return, this system is a bit harder to configure.
+We show how it is still possible to configure Garage with Peertube, allowing you to spread the load and the bandwidth usage on the Garage cluster.
-
-### Enable path-style access by patching Peertube
-
-First, you will need to apply a small patch on Peertube ([#4510](https://github.com/Chocobozzz/PeerTube/pull/4510)):
-
-```diff
-From e3b4c641bdf67e07d406a1d49d6aa6b1fbce2ab4 Mon Sep 17 00:00:00 2001
-From: Martin Honermeyer <maze@strahlungsfrei.de>
-Date: Sun, 31 Oct 2021 12:34:04 +0100
-Subject: [PATCH] Allow setting path-style access for object storage
-
----
- config/default.yaml | 4 ++++
- config/production.yaml.example | 4 ++++
- server/initializers/config.ts | 1 +
- server/lib/object-storage/shared/client.ts | 3 ++-
- .../production/config/custom-environment-variables.yaml | 2 ++
- 5 files changed, 13 insertions(+), 1 deletion(-)
-
-diff --git a/config/default.yaml b/config/default.yaml
-index cf9d69a6211..4efd56fb804 100644
---- a/config/default.yaml
-+++ b/config/default.yaml
-@@ -123,6 +123,10 @@ object_storage:
-   # You can also use AWS_SECRET_ACCESS_KEY env variable
-   secret_access_key: ''
-+  # Reference buckets via path rather than subdomain
-+  # (i.e. "my-endpoint.com/bucket" instead of "bucket.my-endpoint.com")
-+  force_path_style: false
-+
-  # Maximum amount to upload in one request to object storage
-  max_upload_part: 2GB
-
-diff --git a/config/production.yaml.example b/config/production.yaml.example
-index 70993bf57a3..9ca2de5f4c9 100644
---- a/config/production.yaml.example
-+++ b/config/production.yaml.example
-@@ -121,6 +121,10 @@ object_storage:
-   # You can also use AWS_SECRET_ACCESS_KEY env variable
-   secret_access_key: ''
-+  # Reference buckets via path rather than subdomain
-+  # (i.e. "my-endpoint.com/bucket" instead of "bucket.my-endpoint.com")
-+  force_path_style: false
-+
-  # Maximum amount to upload in one request to object storage
-  max_upload_part: 2GB
-
-diff --git a/server/initializers/config.ts b/server/initializers/config.ts
-index 8375bf4304c..d726c59a4b6 100644
---- a/server/initializers/config.ts
-+++ b/server/initializers/config.ts
-@@ -91,6 +91,7 @@ const CONFIG = {
-   ACCESS_KEY_ID: config.get<string>('object_storage.credentials.access_key_id'),
-   SECRET_ACCESS_KEY: config.get<string>('object_storage.credentials.secret_access_key')
-  },
-+ FORCE_PATH_STYLE: config.get<boolean>('object_storage.force_path_style'),
-  VIDEOS: {
-   BUCKET_NAME: config.get<string>('object_storage.videos.bucket_name'),
-   PREFIX: config.get<string>('object_storage.videos.prefix'),
-
-diff --git a/server/lib/object-storage/shared/client.ts b/server/lib/object-storage/shared/client.ts
-index c9a61459336..eadad02f93f 100644
---- a/server/lib/object-storage/shared/client.ts
-+++ b/server/lib/object-storage/shared/client.ts
-@@ -26,7 +26,8 @@ function getClient () {
-   accessKeyId: OBJECT_STORAGE.CREDENTIALS.ACCESS_KEY_ID,
-   secretAccessKey: OBJECT_STORAGE.CREDENTIALS.SECRET_ACCESS_KEY
-  }
-- : undefined
-+ : undefined,
-+ forcePathStyle: CONFIG.OBJECT_STORAGE.FORCE_PATH_STYLE
- })
- logger.info('Initialized S3 client %s with region %s.', getEndpoint(), OBJECT_STORAGE.REGION, lTags())
-
-diff --git a/support/docker/production/config/custom-environment-variables.yaml b/support/docker/production/config/custom-environment-variables.yaml
-index c7cd28e6521..a960bab0bc9 100644
---- a/support/docker/production/config/custom-environment-variables.yaml
-+++ b/support/docker/production/config/custom-environment-variables.yaml
-@@ -54,6 +54,8 @@ object_storage:
-   region: "PEERTUBE_OBJECT_STORAGE_REGION"
-+  force_path_style: "PEERTUBE_OBJECT_STORAGE_FORCE_PATH_STYLE"
-+
-  max_upload_part:
-   __name: "PEERTUBE_OBJECT_STORAGE_MAX_UPLOAD_PART"
-   __format: "json"
-```
-
-You can then recompile it with:
-
-```
-npm run build
-```
-
-And it can be started with:
-
-```
-NODE_ENV=production NODE_CONFIG_DIR=/srv/peertube/config node dist/server.js
-```

 ### Create resources in Garage
@@ -232,31 +145,32 @@ garage bucket create peertube-playlist

 Now we allow our key to read and write on these buckets:

 ```
-garage bucket allow peertube-playlist --read --write --key peertube-key
-garage bucket allow peertube-video --read --write --key peertube-key
+garage bucket allow peertube-playlists --read --write --owner --key peertube-key
+garage bucket allow peertube-videos --read --write --owner --key peertube-key
 ```

-Finally, we need to expose these buckets publicly to serve their content to users:
+We also need to expose these buckets publicly to serve their content to users:

 ```bash
-garage bucket website --allow peertube-playlist
-garage bucket website --allow peertube-video
+garage bucket website --allow peertube-playlists
+garage bucket website --allow peertube-videos
 ```

+Finally, we must allow Cross-Origin Resource Sharing (CORS).
+CORS is required by your browser to allow requests triggered from the Peertube website (e.g. peertube.tld) to your bucket's domain (e.g. peertube-videos.web.garage.tld).
+
+```bash
+export CORS='{"CORSRules":[{"AllowedHeaders":["*"],"AllowedMethods":["GET"],"AllowedOrigins":["*"]}]}'
+aws --endpoint http://s3.garage.localhost s3api put-bucket-cors --bucket peertube-playlists --cors-configuration $CORS
+aws --endpoint http://s3.garage.localhost s3api put-bucket-cors --bucket peertube-videos --cors-configuration $CORS
+```

 These buckets are now accessible on the web port (by default 3902) with the following URL: `http://<bucket><root_domain>:<web_port>` where the root domain is defined in your configuration file (by default `.web.garage`). So we have currently the following URLs:

-* http://peertube-playlist.web.garage:3902
-* http://peertube-video.web.garage:3902
+* http://peertube-playlists.web.garage:3902
+* http://peertube-videos.web.garage:3902

 Make sure you (will) have a corresponding DNS entry for them.
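
As a quick sanity check — a hypothetical probe, assuming a local node with the default web port 3902 and `root_domain = ".web.garage"` — you can verify that the web endpoint answers for these virtual hosts:

```bash
# Hypothetical check: expect HTTP 200 once the bucket exists, has website
# access allowed, and contains an index.html at its root
curl -s -o /dev/null -w "%{http_code}" \
  -H "Host: peertube-videos.web.garage" \
  http://localhost:3902/
```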
-### Configure a Reverse Proxy to serve CORS
-
-Now we will configure a reverse proxy in front of Garage.
-This is required as we have no other way to serve CORS headers yet.
-Check the [Configuring a reverse proxy](/cookbook/reverse_proxy.html) section to know how.
-
-Now make sure that your 2 dns entries are pointing to your reverse proxy.

 ### Configure Peertube

 You must edit the file named `config/production.yaml`, we are only modifying the root key named `object_storage`:

@@ -268,9 +182,6 @@ object_storage:
   # Put localhost only if you have a garage instance running on that node
   endpoint: 'http://localhost:3900' # or "garage.example.com" if you have TLS on port 443

-  # This entry has been added by our patch, must be set to true
-  force_path_style: true
-
   # Garage supports only one region for now, named garage
   region: 'garage'

@@ -287,28 +198,23 @@ object_storage:
     prefix: ''

     # You must fill this field to make Peertube use our reverse proxy/website logic
-    base_url: 'http://peertube-playlist.web.garage' # Example: 'https://mirror.example.com'
+    base_url: 'http://peertube-playlists.web.garage.localhost' # Example: 'https://mirror.example.com'

   # Same settings but for webtorrent videos
   videos:
     bucket_name: 'peertube-video'
     prefix: ''

     # You must fill this field to make Peertube use our reverse proxy/website logic
-    base_url: 'http://peertube-video.web.garage'
+    base_url: 'http://peertube-videos.web.garage.localhost'
 ```

 ### That's all

 Everything must be configured now, simply restart Peertube and try to upload a video.

-You must see in your browser console that data are fetched directly from our bucket (through the reverse proxy).
-
-### Miscellaneous
-
-*Known bug:* The playback does not start and some 400 Bad Request Errors appear in your browser console and on Garage.
-If the description of the error contains HTTP Invalid Range: InvalidRange, the error is due to a buggy ffmpeg version.
-You must avoid the 4.4.0 and use either a newer or older version.
-
-*Associated issues:* [#137](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/137), [#138](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/138), [#140](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/140). These issues are non blocking.
+Peertube will start by serving the video from its own domain while it is encoding.
+Once the encoding is done, the video is uploaded to Garage.
+You can now reload the page and see in your browser console that data are fetched directly from your bucket.

 *External link:* [Peertube Documentation > Remote Storage](https://docs.joinpeertube.org/admin-remote-storage)
@@ -429,31 +335,34 @@ And add a new line. For example, to run it every 10 minutes:

 ## Pixelfed

-https://docs.pixelfed.org/technical-documentation/env.html#filesystem
+[Pixelfed Technical Documentation > Configuration](https://docs.pixelfed.org/technical-documentation/env.html#filesystem)

 ## Pleroma

-https://docs-develop.pleroma.social/backend/configuration/cheatsheet/#pleromauploaderss3
+[Pleroma Documentation > Pleroma.Uploaders.S3](https://docs-develop.pleroma.social/backend/configuration/cheatsheet/#pleromauploaderss3)

 ## Lemmy

-via pict-rs
-https://git.asonix.dog/asonix/pict-rs/commit/f9f4fc63d670f357c93f24147c2ee3e1278e2d97
+Lemmy uses pict-rs that [supports S3 backends](https://git.asonix.dog/asonix/pict-rs/commit/f9f4fc63d670f357c93f24147c2ee3e1278e2d97)

 ## Funkwhale

-https://docs.funkwhale.audio/admin/configuration.html#s3-storage
+[Funkwhale Documentation > S3 Storage](https://docs.funkwhale.audio/admin/configuration.html#s3-storage)

 ## Misskey

-https://github.com/misskey-dev/misskey/commit/9d944243a3a59e8880a360cbfe30fd5a3ec8d52d
+[Misskey Github > commit 9d94424](https://github.com/misskey-dev/misskey/commit/9d944243a3a59e8880a360cbfe30fd5a3ec8d52d)

 ## Prismo

-https://gitlab.com/prismosuite/prismo/-/blob/dev/.env.production.sample#L26-33
+[Prismo Gitlab > .env.production.sample](https://gitlab.com/prismosuite/prismo/-/blob/dev/.env.production.sample#L26-33)

 ## Owncloud Infinite Scale (ocis)

+OCIS could be compatible with S3:
+- [Deploying OCIS with S3](https://owncloud.dev/ocis/deployment/ocis_s3/)
+- [OCIS 1.7 release note](https://central.owncloud.org/t/owncloud-infinite-scale-tech-preview-1-7-enables-s3-storage/32514/3)

 ## Unsupported

 - Mobilizon: No S3 integration


@@ -1,9 +1,19 @@
-# CLI tools
+# Browsing tools

-CLI tools allow you to query the S3 API without too many abstractions.
+Browsing tools allow you to query the S3 API without too many abstractions.
 These tools are particularly suitable for debug, backups, website deployments or any scripted task that needs to handle data.

-## Minio client (recommended)
+| Name | Status | Note |
+|------|--------|------|
+| [Minio client](#minio-client-recommended) | ✅ | Recommended |
+| [AWS CLI](#aws-cli) | ✅ | Recommended |
+| [rclone](#rclone) | ✅ | |
+| [s3cmd](#s3cmd) | ✅ | |
+| [(Cyber)duck](#cyberduck--duck) | ✅ | |
+| [WinSCP (libs3)](#winscp) | ✅ | No instructions yet |
+
+## Minio client

 Use the following command to set an "alias", i.e. define a new S3 server to be
 used by the Minio client:
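
The alias command itself is elided from this excerpt; as an illustrative sketch — the endpoint URL and both keys below are placeholders — it typically looks like:

```bash
# Illustrative sketch: point an "mc" alias named "garage" at a local node
mc alias set garage http://localhost:3900 \
  GKyourAccessKey yourSecretKey --api S3v4
```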
@@ -161,6 +171,107 @@ s3cmd get s3://my-bucket/hello.txt hello.txt

 ## Cyberduck & duck

-TODO
+Both Cyberduck (the GUI) and duck (the CLI) have a concept of "Connection Profiles" that contain some presets for a specific provider.
+We wrote the following connection profile for Garage:
+
+```xml
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
+<plist version="1.0">
+    <dict>
+        <key>Protocol</key>
+        <string>s3</string>
+        <key>Vendor</key>
+        <string>garage</string>
+        <key>Scheme</key>
+        <string>https</string>
+        <key>Description</key>
+        <string>GarageS3</string>
+        <key>Default Hostname</key>
+        <string>127.0.0.1</string>
+        <key>Default Port</key>
+        <string>4443</string>
+        <key>Hostname Configurable</key>
+        <false/>
+        <key>Port Configurable</key>
+        <false/>
+        <key>Username Configurable</key>
+        <true/>
+        <key>Username Placeholder</key>
+        <string>Access Key ID (GK...)</string>
+        <key>Password Placeholder</key>
+        <string>Secret Key</string>
+        <key>Properties</key>
+        <array>
+            <string>s3service.disable-dns-buckets=true</string>
+        </array>
+        <key>Region</key>
+        <string>garage</string>
+        <key>Regions</key>
+        <array>
+            <string>garage</string>
+        </array>
+    </dict>
+</plist>
+```
+
+*Note: If your garage instance is configured with vhost access style, you can remove `s3service.disable-dns-buckets=true`.*
+
+### Instructions for the GUI
+
+Copy the connection profile, and save it anywhere as `garage.cyberduckprofile`.
+Then find this file with your file explorer and double click on it: Cyberduck will open a connection wizard for this profile.
+Simply follow the wizard and you should be done!
+
+### Instructions for the CLI
+
+To configure duck (Cyberduck's CLI tool), start by creating its folder hierarchy:
+
+```
+mkdir -p ~/.duck/profiles/
+```
+
+Then, save the connection profile for Garage in `~/.duck/profiles/garage.cyberduckprofile`.
+To set your credentials in `~/.duck/credentials`, use the following commands to generate the appropriate string:
+
+```bash
+export AWS_ACCESS_KEY_ID="GK..."
+export AWS_SECRET_ACCESS_KEY="..."
+export HOST="s3.garage.localhost"
+export PORT="4443"
+export PROTOCOL="https"
+cat > ~/.duck/credentials <<EOF
+$PROTOCOL\://$AWS_ACCESS_KEY_ID@$HOST\:$PORT=$AWS_SECRET_ACCESS_KEY
+EOF
+```
+
+And finally, we recommend appending a small wrapper to your `~/.bashrc` to avoid setting the username on each command (do not forget to replace `GK...` with your access key):
+
+```bash
+function duck { command duck --username GK... $@ ; }
+```
+
+You can then use `duck` as follows:
+
+```bash
+# List buckets
+duck --list garage:/
+# List objects in a bucket
+duck --list garage:/my-files/
+# Download an object
+duck --download garage:/my-files/an-object.txt /tmp/object.txt
+# Upload an object
+duck --upload /tmp/object.txt garage:/my-files/another-object.txt
+# Delete an object
+duck --delete garage:/my-files/an-object.txt
+```
+
+## WinSCP (libs3)
+
+*No instructions yet. You can find some in French [in our wiki](https://wiki.deuxfleurs.fr/fr/Guide/Garage/WinSCP).*


@@ -4,11 +4,12 @@ Garage implements the Amazon S3 protocol, which makes it compatible with many ex

 In particular, you will find here instructions to connect it with:

-- [web applications](./apps.md)
-- [website hosting](./websites.md)
-- [software repositories](./repositories.md)
-- [CLI tools](./cli.md)
-- [your own code](./code.md)
+- [Browsing tools](./cli.md)
+- [Applications](./apps.md)
+- [Website hosting](./websites.md)
+- [Software repositories](./repositories.md)
+- [Your own code](./code.md)
+- [FUSE](./fs.md)

 ### Generic instructions

@@ -30,9 +31,9 @@ you will need the following parameters:

 Most S3 clients can be configured easily with these parameters,
 provided that you follow the following guidelines:

-- **Force path style:** Garage does not support DNS-style buckets, which are now by default
-  on Amazon S3. Instead, Garage uses the legacy path-style bucket addressing.
-  Remember to configure your client to acknowledge this fact.
+- **Be careful with DNS-style/path-style access:** Garage supports both DNS-style buckets, which are now the default
+  on Amazon S3, and legacy path-style buckets. If you use a reverse proxy in front of Garage,
+  make sure that you configured it to support the access style required by the software you want to use.

 - **Configuring the S3 region:** Garage requires your client to talk to the correct "S3 region",
   which is set in the configuration file. This is often set just to `garage`.
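
For instance — a hypothetical awscli setup, with placeholder keys and a local endpoint — the two guidelines above translate to:

```bash
# Hypothetical sketch: keys and endpoint URL are placeholders
aws configure set aws_access_key_id GK...
aws configure set aws_secret_access_key xxxx
aws configure set default.region garage          # must match s3_region
aws --endpoint-url http://localhost:3900 s3 ls   # list buckets
```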

(New binary file — an image, 134 KiB — not shown.)


@@ -3,6 +3,15 @@

 Whether you need to store and serve binary packages or source code, you may want to deploy a tool referred to as a repository or registry.
 Garage can also help you serve this content.

+| Name | Status | Note |
+|------|--------|------|
+| [Gitea](#gitea) | ✅ | |
+| [Docker](#docker) | ✅ | Requires garage >= v0.6.0 |
+| [Nix](#nix) | ✅ | |
+| [Gitlab](#gitlab) | ❓ | Not yet tested |

 ## Gitea

 You can use Garage with Gitea to store your [git LFS](https://git-lfs.github.com/) data, your users' avatars, and their attachments.

@@ -52,18 +61,42 @@ $ aws s3 ls s3://gitea/avatars/

 *External link:* [Gitea Documentation > Configuration Cheat Sheet](https://docs.gitea.io/en-us/config-cheat-sheet/)
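
For reference — a hypothetical sketch of the relevant `app.ini` section; the key names follow Gitea's documented MinIO-compatible storage settings, and every value is a placeholder:

```bash
# Hypothetical sketch: append an S3-backed storage section to app.ini
cat >> /etc/gitea/app.ini <<'EOF'
[storage]
STORAGE_TYPE = minio
MINIO_ENDPOINT = localhost:3900
MINIO_ACCESS_KEY_ID = GK...
MINIO_SECRET_ACCESS_KEY = xxxx
MINIO_BUCKET = gitea
MINIO_LOCATION = garage
MINIO_USE_SSL = false
EOF
```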
-## Gitlab
-
-*External link:* [Gitlab Documentation > Object storage](https://docs.gitlab.com/ee/administration/object_storage.html)
+## Private NPM Registry (Verdaccio)
+
+*External link:* [Verdaccio Github Repository > aws-storage plugin](https://github.com/verdaccio/verdaccio/tree/master/packages/plugins/aws-storage)

 ## Docker

-Not yet compatible, follow [#103](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/103).
+Create a bucket and a key for your docker registry, then create `config.yml` with the following content:
+
+```yml
+version: 0.1
+http:
+  addr: 0.0.0.0:5000
+  secret: asecretforlocaldevelopment
+  debug:
+    addr: localhost:5001
+storage:
+  s3:
+    accesskey: GKxxxx
+    secretkey: yyyyy
+    region: garage
+    regionendpoint: http://localhost:3900
+    bucket: docker
+    secure: false
+    v4auth: true
+    rootdirectory: /
+```
+
+Replace the `accesskey`, `secretkey`, `bucket`, `regionendpoint` and `secure` values with the ones fitting your deployment.
+
+Then simply run the docker registry:
+
+```bash
+docker run \
+  --net=host \
+  -v `pwd`/config.yml:/etc/docker/registry/config.yml \
+  registry:2
+```
+
+*We started a plain text registry but docker clients require encrypted registries. You must either [set up TLS](https://docs.docker.com/registry/deploying/#run-an-externally-accessible-registry) on your registry or add `--insecure-registry=localhost:5000` to your docker daemon parameters.*
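
A sketch of the second option — assuming a systemd-managed Docker daemon; the file path and restart command may differ on your system:

```bash
# Hypothetical sketch: let the Docker daemon talk to the plaintext registry
cat > /etc/docker/daemon.json <<'EOF'
{ "insecure-registries": ["localhost:5000"] }
EOF
systemctl restart docker
```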
 *External link:* [Docker Documentation > Registry storage drivers > S3 storage driver](https://docs.docker.com/registry/storage-drivers/s3/)

@@ -167,3 +200,9 @@ on the binary cache, the client will download the result from the cache instead

 Channels additionally serve Nix definitions, i.e. a `.nix` file referencing
 all the derivations you want to serve.
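
For context — a hypothetical upload to such a binary cache with `nix copy`, assuming a bucket named `nix` and a local endpoint (`endpoint`, `region` and `scheme` are parameters of Nix's S3 store URL syntax):

```bash
# Hypothetical sketch: push a store path to the S3-backed binary cache
nix copy --to 's3://nix?endpoint=localhost:3900&region=garage&scheme=http' \
  $(nix-build '<nixpkgs>' -A hello)
```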
+## Gitlab
+
+*External link:* [Gitlab Documentation > Object storage](https://docs.gitlab.com/ee/administration/object_storage.html)


@@ -3,6 +3,12 @@

 Garage is also suitable to host static websites.
 While they can be deployed with traditional CLI tools, some static website generators have integrated options to ease your workflow.

+| Name | Status | Note |
+|------|--------|------|
+| [Hugo](#hugo) | ✅ | Publishing logic is integrated in the tool |
+| [Publii](#publii) | ✅ | Requires a correctly configured S3 vhost endpoint |
+| [Generic Static Site Generator](#generic-static-site-generator) | ✅ | Works for Jekyll, Zola, Gatsby, Pelican, etc. |

 ## Hugo

 Add to your `config.toml` the following section:

@@ -39,39 +45,38 @@ hugo deploy

 ## Publii

-It would require a patch either on Garage or on Publii to make both systems work.
-
-Currently, the proposed workaround is to deploy your website manually:
-  - On the left menu, click on Server, choose Manual Deployment (the logo looks like a compressed file)
-  - Set your website URL, keep Output type as "Non-compressed catalog"
-  - Click on Save changes
-  - Click on Sync your website (bottom left of the app)
-  - On the new page, click again on Sync your website
-  - Click on Get website files
-  - You need to synchronize the output folder you see in your file explorer, we will use minio client.
-
-Be sure that you [configured minio client](cli.html#minio-client-recommended).
-
-Then copy this output folder
-
-```bash
-mc mirror --overwrite output garage/my-site
-```
-
-## Generic (eg. Jekyll)
+[![A screenshot of Publii's GUI](./publii.png)](./publii.png)
+
+Deploying a website to Garage from Publii is natively supported.
+First, make sure that your Garage administrator allowed and configured Garage to support vhost access style.
+We also suppose that your bucket ("my-bucket") and key are already created and configured.
+
+Then, from the left menu, click on Server. Choose "S3" as the protocol.
+In the configuration window, enter:
+  - Your final website URL (e.g. "http://my-bucket.web.garage.localhost:3902")
+  - Tick "Use a custom S3 provider"
+  - Set the S3 endpoint (e.g. "http://s3.garage.localhost:3900")
+  - Then put your access key (e.g. "GK..."), your secret key, and your bucket (e.g. "my-bucket")
+  - And hit the button "Save settings"
+
+Now, each time you want to publish your website from Publii, just hit the bottom left button "Sync your website"!
+
+## Generic Static Site Generator

 Some tools do not support sending to a S3 backend but output a compiled folder on your system.
 We can then use any CLI tool to upload this content to our S3 target.

 First, start by [configuring minio client](cli.html#minio-client-recommended).

-Then build your website:
+Then build your website (example for Jekyll):

 ```bash
 jekyll build
 ```

-And copy jekyll's output folder on S3:
+And copy its output folder (`_site` for Jekyll) to S3:

 ```bash
 mc mirror --overwrite _site garage/my-site


@@ -1,6 +1,6 @@
 # Configuring a reverse proxy

-The main reason to add a reverse proxy in front of Garage is to provide TLS to your users.
+The main reason to add a reverse proxy in front of Garage is to provide TLS to your users and serve multiple web services on port 443.

 In production you will likely need your certificates signed by a certificate authority.
 The most automated way is to use a provider supporting the [ACME protocol](https://datatracker.ietf.org/doc/html/rfc8555)
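
For example — a hypothetical issuance with certbot; the vhost access style needs a wildcard certificate, which in turn requires the DNS challenge, and all domain names below are placeholders:

```bash
# Hypothetical sketch: wildcard certificates require the dns-01 challenge
certbot certonly --manual --preferred-challenges dns \
  -d 's3.garage.tld' -d '*.s3.garage.tld'
```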
@@ -55,16 +55,15 @@ If you directly put the instructions in the root `nginx.conf`, keep in mind that

 And do not forget to reload nginx with `systemctl reload nginx` or `nginx -s reload`.

-### Defining backends
+### Exposing the S3 endpoints

 First, we need to tell nginx how to access our Garage cluster.
 Because we have multiple nodes, we want to leverage all of them by spreading the load.

-In nginx, we can do that with the upstream directive.
-
-Because we have 2 endpoints: one for the S3 API and one to serve websites,
-we create 2 backends named respectively `s3_backend` and `web_backend`.
+In nginx, we can do that with the `upstream` directive.
+Then in a `server` directive, we define the vhosts, the TLS certificates and the proxy rule.

-A documented example for the `s3_backend` assuming you chose port 3900:
+A possible configuration:

 ```nginx
 upstream s3_backend {
@@ -78,9 +77,34 @@ upstream s3_backend {
   # that are more powerful than others
   server garage2.example.com:3900 weight=2;
 }
+
+server {
+  listen [::]:443 http2 ssl;
+  ssl_certificate /tmp/garage.crt;
+  ssl_certificate_key /tmp/garage.key;
+
+  # You need multiple server names here:
+  # - s3.garage.tld is used for path-based s3 requests
+  # - *.s3.garage.tld is used for vhost-based s3 requests
+  server_name s3.garage.tld *.s3.garage.tld;
+
+  location / {
+    proxy_pass http://s3_backend;
+    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+    proxy_set_header Host $host;
+  }
+}
 ```

-A similar example for the `web_backend` assuming you chose port 3902:
+### Exposing the web endpoint
+
+To better understand the logic involved, you can refer to the [Exposing buckets as websites](/cookbook/exposing_websites.html) section.
+Otherwise, the configuration is very similar to the S3 endpoint.
+You must only adapt the `upstream` with the web port instead of the s3 port and change the `server_name` and `proxy_pass` entries.
+
+A possible configuration:

 ```nginx
 upstream web_backend {
@@ -89,65 +113,19 @@ upstream web_backend {
   server garage1.example.com:3902;
   server garage2.example.com:3902 weight=2;
 }
-```
-
-### Exposing the S3 API
-
-The configuration section for the S3 API is simple as we only support path-access style yet.
-We simply configure the TLS parameters and forward all the requests to the backend:
-
-```nginx
 server {
   listen [::]:443 http2 ssl;
   ssl_certificate /tmp/garage.crt;
   ssl_certificate_key /tmp/garage.key;

-  # should be the endpoint you want
-  # aws uses s3.amazonaws.com for example
-  server_name garage.example.com;
+  # You need multiple server names here:
+  # - *.web.garage.tld is used for your users wanting a website without reserving a domain name
+  # - example.com, my-site.tld, etc. are domain names reserved by your users that chose to host their website as a garage's bucket
+  server_name *.web.garage.tld example.com my-site.tld;

   location / {
-    proxy_pass http://s3_backend;
-    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-    proxy_set_header Host $host;
-  }
-}
-```
-
-### Exposing the web endpoint
-
-The web endpoint is a bit more complicated to configure as it listens on many different `Host` fields.
-To better understand the logic involved, you can refer to the [Exposing buckets as websites](/cookbook/exposing_websites.html) section.
-Also, for some applications, you may need to serve CORS headers: Garage can not serve them directly but we show how we can use nginx to serve them.
-You can use the following example as your starting point:
-
-```nginx
-server {
-  listen [::]:443 http2 ssl;
-  ssl_certificate /tmp/garage.crt;
-  ssl_certificate_key /tmp/garage.key;
-
-  # We list all the Hosts fields that can access our buckets
-  server_name *.web.garage
-              example.com
-              my-site.tld
-  ;
-
-  location / {
-    # Add these headers only if you want to allow CORS requests
-    # For production use, more specific rules would be better for your security
-    add_header Access-Control-Allow-Origin *;
-    add_header Access-Control-Max-Age 3600;
-    add_header Access-Control-Expose-Headers Content-Length;
-    add_header Access-Control-Allow-Headers Range;
-
-    # We do not forward OPTIONS requests to Garage
-    # as it does not support them but they are needed for CORS.
-    if ($request_method = OPTIONS) {
-      return 200;
-    }
-
     proxy_pass http://web_backend;
     proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
     proxy_set_header Host $host;
@@ -155,7 +133,6 @@ server {
   }
 }
 ```

 ## Apache httpd

 @TODO


@@ -49,11 +49,11 @@ bootstrap_peers = []

 [s3_api]
 s3_region = "garage"
 api_bind_addr = "[::]:3900"
-root_domain = ".s3.garage"
+root_domain = ".s3.garage.localhost"

 [s3_web]
 bind_addr = "[::]:3902"
-root_domain = ".web.garage"
+root_domain = ".web.garage.localhost"
 index = "index.html"
 ```
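
The `.localhost` suffix is convenient for local testing because many resolvers (e.g. systemd-resolved) map any `*.localhost` name to the loopback address, so vhost-style requests work without extra DNS entries. A hypothetical probe, with a placeholder bucket name:

```bash
# Hypothetical check: the vhost-style hostname resolves to 127.0.0.1 on many
# systems; expect an S3 error document rather than a connection failure
curl http://my-bucket.s3.garage.localhost:3900/
```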


@@ -9,7 +9,8 @@ Implemented:

 - putting and getting objects in buckets
 - multipart uploads
 - listing objects
-- access control on a per-key-per-bucket basis
+- access control on a per-access-key-per-bucket basis
+- CORS headers on web endpoint

 Not implemented:

@@ -31,9 +32,11 @@ All APIs that are not mentionned are not implemented and will return a 501 Not I

 | CreateBucket | Implemented |
 | CreateMultipartUpload | Implemented |
 | DeleteBucket | Implemented |
+| DeleteBucketCors | Implemented |
 | DeleteBucketWebsite | Implemented |
 | DeleteObject | Implemented |
 | DeleteObjects | Implemented |
+| GetBucketCors | Implemented |
 | GetBucketLocation | Implemented |
 | GetBucketVersioning | Stub (see below) |
 | GetBucketWebsite | Implemented |

@@ -46,6 +49,7 @@ All APIs that are not mentionned are not implemented and will return a 501 Not I

 | ListMultipartUpload | Implemented |
 | ListParts | Missing |
 | PutObject | Implemented |
+| PutBucketCors | Implemented |
 | PutBucketWebsite | Partially implemented (see below)|
 | UploadPart | Implemented |
 | UploadPartCopy | Implemented |


@@ -24,16 +24,17 @@ your motivations for doing so in the PR message.

 | | CompleteMultipartUpload |
 | | AbortMultipartUpload |
 | | UploadPart |
-| | [*ListMultipartUploads*](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/103) |
-| | [*ListParts*](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/103) |
-| **A-tier** (will implement) | |
-| | [*GetBucketCors*](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/138) |
-| | [*PutBucketCors*](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/138) |
-| | [*DeleteBucketCors*](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/138) |
+| | ListMultipartUploads |
+| | ListParts |
+| **A-tier** | |
+| | GetBucketCors |
+| | PutBucketCors |
+| | DeleteBucketCors |
 | | UploadPartCopy |
 | | GetBucketWebsite |
 | | PutBucketWebsite |
 | | DeleteBucketWebsite |
+| | [PostObject](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOST.html) |
 | ~~~~~~~~~~~~~~~~~~~~~~~~~~ | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
 | **B-tier** | |
 | | GetBucketAcl |


@@ -13,7 +13,7 @@ garage -c /tmp/config.1.toml bucket create eprouvette
 KEY_INFO=$(garage -c /tmp/config.1.toml key new --name opérateur)
 ACCESS_KEY=`echo $KEY_INFO|grep -Po 'GK[a-f0-9]+'`
 SECRET_KEY=`echo $KEY_INFO|grep -Po 'Secret key: [a-f0-9]+'|grep -Po '[a-f0-9]+$'`
-garage -c /tmp/config.1.toml bucket allow eprouvette --read --write --key $ACCESS_KEY
+garage -c /tmp/config.1.toml bucket allow eprouvette --owner --read --write --key $ACCESS_KEY
 echo "$ACCESS_KEY $SECRET_KEY" > /tmp/garage.s3
 echo "Bucket s3://eprouvette created. Credentials stored in /tmp/garage.s3."


@@ -38,10 +38,11 @@ rpc_secret = "$NETWORK_SECRET"

 [s3_api]
 api_bind_addr = "0.0.0.0:$((3910+$count))" # the S3 API port, HTTP without TLS. Add a reverse proxy for the TLS part.
 s3_region = "garage" # set this to anything. S3 API calls will fail if they are not made against the region set here.
+root_domain = ".s3.garage.localhost"

 [s3_web]
 bind_addr = "0.0.0.0:$((3920+$count))"
-root_domain = ".garage.tld"
+root_domain = ".web.garage.localhost"
 index = "index.html"
 EOF


@@ -302,6 +302,25 @@ EOF
 rm /tmp/garage.test_multipart
 rm /tmp/garage.test_multipart_reference
 rm /tmp/garage.test_multipart_diff
+
+echo "Test CORS endpoints"
+# @FIXME remove bucket allow if/when testing on s3 endpoint
+garage -c /tmp/config.1.toml bucket website --allow eprouvette
+aws s3api put-object --bucket eprouvette --key index.html
+CORS='{"CORSRules":[{"AllowedHeaders":["*"],"AllowedMethods":["GET","PUT"],"AllowedOrigins":["*"]}]}'
+aws s3api put-bucket-cors --bucket eprouvette --cors-configuration $CORS
+[ `aws s3api get-bucket-cors --bucket eprouvette | jq -c` == $CORS ]
+# @FIXME should we really return these CORS on the WEB endpoint and not on the S3 endpoint?
+curl -s -i -H 'Origin: http://example.com' http://eprouvette.web.garage.localhost:3921 | grep access-control-allow-origin
+curl -s -i -X OPTIONS -H 'Access-Control-Request-Method: PUT' -H 'Origin: http://example.com' http://eprouvette.web.garage.localhost:3921|grep access-control-allow-methods
+curl -s -i -X OPTIONS -H 'Access-Control-Request-Method: DELETE' -H 'Origin: http://example.com' http://eprouvette.web.garage.localhost:3921 |grep '403 Forbidden'
+aws s3api delete-bucket-cors --bucket eprouvette
+! [ -s `aws s3api get-bucket-cors --bucket eprouvette` ]
+curl -s -i -X OPTIONS -H 'Access-Control-Request-Method: PUT' -H 'Origin: http://example.com' http://eprouvette.web.garage.localhost:3921|grep '403 Forbidden'
+aws s3api delete-object --bucket eprouvette --key index.html
+garage -c /tmp/config.1.toml bucket website --deny eprouvette
 fi

 rm /tmp/garage.{1..3}.{rnd,b64}

@@ -325,11 +344,11 @@ if [ -z "$SKIP_AWS" ]; then
 echo "🧪 Website Testing"
 echo "<h1>hello world</h1>" > /tmp/garage-index.html
 aws s3 cp /tmp/garage-index.html s3://eprouvette/index.html
-[ `curl -s -o /dev/null -w "%{http_code}" --header "Host: eprouvette.garage.tld" http://127.0.0.1:3921/ ` == 404 ]
+[ `curl -s -o /dev/null -w "%{http_code}" --header "Host: eprouvette.web.garage.localhost" http://127.0.0.1:3921/ ` == 404 ]
 garage -c /tmp/config.1.toml bucket website --allow eprouvette
-[ `curl -s -o /dev/null -w "%{http_code}" --header "Host: eprouvette.garage.tld" http://127.0.0.1:3921/ ` == 200 ]
+[ `curl -s -o /dev/null -w "%{http_code}" --header "Host: eprouvette.web.garage.localhost" http://127.0.0.1:3921/ ` == 200 ]
 garage -c /tmp/config.1.toml bucket website --deny eprouvette
-[ `curl -s -o /dev/null -w "%{http_code}" --header "Host: eprouvette.garage.tld" http://127.0.0.1:3921/ ` == 404 ]
+[ `curl -s -o /dev/null -w "%{http_code}" --header "Host: eprouvette.web.garage.localhost" http://127.0.0.1:3921/ ` == 404 ]
 aws s3 rm s3://eprouvette/index.html
 rm /tmp/garage-index.html
 fi


@@ -20,6 +20,7 @@ use crate::signature::check_signature;
 use crate::helpers::*;
 use crate::s3_bucket::*;
 use crate::s3_copy::*;
+use crate::s3_cors::*;
 use crate::s3_delete::*;
 use crate::s3_get::*;
 use crate::s3_list::*;

@@ -310,6 +311,11 @@ async fn handler_inner(garage: Arc<Garage>, req: Request<Body>) -> Result<Respon
             handle_put_website(garage, bucket_id, req, content_sha256).await
         }
         Endpoint::DeleteBucketWebsite { .. } => handle_delete_website(garage, bucket_id).await,
+        Endpoint::GetBucketCors { .. } => handle_get_cors(garage, bucket_id).await,
+        Endpoint::PutBucketCors { .. } => {
+            handle_put_cors(garage, bucket_id, req, content_sha256).await
+        }
+        Endpoint::DeleteBucketCors { .. } => handle_delete_cors(garage, bucket_id).await,
         endpoint => Err(Error::NotImplemented(endpoint.name().to_owned())),
     }
 }


@@ -15,6 +15,7 @@ mod signature;
 pub mod helpers;
 mod s3_bucket;
 mod s3_copy;
+pub mod s3_cors;
 mod s3_delete;
 pub mod s3_get;
 mod s3_list;

src/api/s3_cors.rs (new file, 339 lines)

@@ -0,0 +1,339 @@
use quick_xml::de::from_reader;
use std::sync::Arc;

use http::header::{
    ACCESS_CONTROL_ALLOW_HEADERS, ACCESS_CONTROL_ALLOW_METHODS, ACCESS_CONTROL_ALLOW_ORIGIN,
    ACCESS_CONTROL_EXPOSE_HEADERS,
};
use hyper::{header::HeaderName, Body, Method, Request, Response, StatusCode};
use serde::{Deserialize, Serialize};

use crate::error::*;
use crate::s3_xml::{to_xml_with_header, xmlns_tag, IntValue, Value};
use crate::signature::verify_signed_content;

use garage_model::bucket_table::CorsRule as GarageCorsRule;
use garage_model::garage::Garage;
use garage_table::*;
use garage_util::data::*;

pub async fn handle_get_cors(
    garage: Arc<Garage>,
    bucket_id: Uuid,
) -> Result<Response<Body>, Error> {
    let bucket = garage
        .bucket_table
        .get(&EmptyKey, &bucket_id)
        .await?
        .ok_or(Error::NoSuchBucket)?;

    let param = bucket
        .params()
        .ok_or_internal_error("Bucket should not be deleted at this point")?;

    if let Some(cors) = param.cors_config.get() {
        let wc = CorsConfiguration {
            xmlns: (),
            cors_rules: cors
                .iter()
                .map(CorsRule::from_garage_cors_rule)
                .collect::<Vec<_>>(),
        };
        let xml = to_xml_with_header(&wc)?;
        Ok(Response::builder()
            .status(StatusCode::OK)
            .header(http::header::CONTENT_TYPE, "application/xml")
            .body(Body::from(xml))?)
    } else {
        Ok(Response::builder()
            .status(StatusCode::NO_CONTENT)
            .body(Body::empty())?)
    }
}

pub async fn handle_delete_cors(
    garage: Arc<Garage>,
    bucket_id: Uuid,
) -> Result<Response<Body>, Error> {
    let mut bucket = garage
        .bucket_table
        .get(&EmptyKey, &bucket_id)
        .await?
        .ok_or(Error::NoSuchBucket)?;

    let param = bucket
        .params_mut()
        .ok_or_internal_error("Bucket should not be deleted at this point")?;

    param.cors_config.update(None);
    garage.bucket_table.insert(&bucket).await?;

    Ok(Response::builder()
        .status(StatusCode::NO_CONTENT)
        .body(Body::empty())?)
}

pub async fn handle_put_cors(
    garage: Arc<Garage>,
    bucket_id: Uuid,
    req: Request<Body>,
    content_sha256: Option<Hash>,
) -> Result<Response<Body>, Error> {
    let body = hyper::body::to_bytes(req.into_body()).await?;
    verify_signed_content(content_sha256, &body[..])?;

    let mut bucket = garage
        .bucket_table
        .get(&EmptyKey, &bucket_id)
        .await?
        .ok_or(Error::NoSuchBucket)?;

    let param = bucket
        .params_mut()
        .ok_or_internal_error("Bucket should not be deleted at this point")?;

    let conf: CorsConfiguration = from_reader(&body as &[u8])?;
    conf.validate()?;

    param
        .cors_config
        .update(Some(conf.into_garage_cors_config()?));
    garage.bucket_table.insert(&bucket).await?;

    Ok(Response::builder()
        .status(StatusCode::OK)
        .body(Body::empty())?)
}

#[derive(Debug, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
#[serde(rename = "CORSConfiguration")]
pub struct CorsConfiguration {
    #[serde(serialize_with = "xmlns_tag", skip_deserializing)]
    pub xmlns: (),
    #[serde(rename = "CORSRule")]
    pub cors_rules: Vec<CorsRule>,
}

#[derive(Debug, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
pub struct CorsRule {
    #[serde(rename = "ID")]
    pub id: Option<Value>,
    #[serde(rename = "MaxAgeSeconds")]
    pub max_age_seconds: Option<IntValue>,
    #[serde(rename = "AllowedOrigin")]
    pub allowed_origins: Vec<Value>,
    #[serde(rename = "AllowedMethod")]
    pub allowed_methods: Vec<Value>,
    #[serde(rename = "AllowedHeader", default)]
    pub allowed_headers: Vec<Value>,
    #[serde(rename = "ExposeHeader", default)]
    pub expose_headers: Vec<Value>,
}

#[derive(Debug, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
pub struct AllowedMethod {
    #[serde(rename = "AllowedMethod")]
    pub allowed_method: Value,
}

#[derive(Debug, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
pub struct AllowedHeader {
    #[serde(rename = "AllowedHeader")]
    pub allowed_header: Value,
}

#[derive(Debug, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
pub struct ExposeHeader {
    #[serde(rename = "ExposeHeader")]
    pub expose_header: Value,
}

impl CorsConfiguration {
    pub fn validate(&self) -> Result<(), Error> {
        for r in self.cors_rules.iter() {
            r.validate()?;
        }
        Ok(())
    }

    pub fn into_garage_cors_config(self) -> Result<Vec<GarageCorsRule>, Error> {
        Ok(self
            .cors_rules
            .iter()
            .map(CorsRule::to_garage_cors_rule)
            .collect())
    }
}

impl CorsRule {
    pub fn validate(&self) -> Result<(), Error> {
        for method in self.allowed_methods.iter() {
            method
                .0
                .parse::<Method>()
                .ok_or_bad_request("Invalid CORSRule method")?;
        }
        for header in self
            .allowed_headers
            .iter()
            .chain(self.expose_headers.iter())
        {
            header
                .0
                .parse::<HeaderName>()
                .ok_or_bad_request("Invalid HTTP header name")?;
        }
        Ok(())
    }

    pub fn to_garage_cors_rule(&self) -> GarageCorsRule {
        let convert_vec =
            |vval: &[Value]| vval.iter().map(|x| x.0.to_owned()).collect::<Vec<String>>();
        GarageCorsRule {
            id: self.id.as_ref().map(|x| x.0.to_owned()),
            max_age_seconds: self.max_age_seconds.as_ref().map(|x| x.0 as u64),
            allow_origins: convert_vec(&self.allowed_origins),
            allow_methods: convert_vec(&self.allowed_methods),
            allow_headers: convert_vec(&self.allowed_headers),
            expose_headers: convert_vec(&self.expose_headers),
        }
    }

    pub fn from_garage_cors_rule(rule: &GarageCorsRule) -> Self {
        let convert_vec = |vval: &[String]| {
            vval.iter()
                .map(|x| Value(x.clone()))
                .collect::<Vec<Value>>()
        };
        Self {
            id: rule.id.as_ref().map(|x| Value(x.clone())),
            max_age_seconds: rule.max_age_seconds.map(|x| IntValue(x as i64)),
            allowed_origins: convert_vec(&rule.allow_origins),
            allowed_methods: convert_vec(&rule.allow_methods),
            allowed_headers: convert_vec(&rule.allow_headers),
            expose_headers: convert_vec(&rule.expose_headers),
        }
    }
}

pub fn cors_rule_matches<'a, HI, S>(
    rule: &GarageCorsRule,
    origin: &'a str,
    method: &'a str,
    mut request_headers: HI,
) -> bool
where
    HI: Iterator<Item = S>,
    S: AsRef<str>,
{
    rule.allow_origins.iter().any(|x| x == "*" || x == origin)
        && rule.allow_methods.iter().any(|x| x == "*" || x == method)
        && request_headers.all(|h| {
            rule.allow_headers
                .iter()
                .any(|x| x == "*" || x == h.as_ref())
        })
}

pub fn add_cors_headers(
    resp: &mut Response<Body>,
    rule: &GarageCorsRule,
) -> Result<(), http::header::InvalidHeaderValue> {
    let h = resp.headers_mut();
    h.insert(
        ACCESS_CONTROL_ALLOW_ORIGIN,
        rule.allow_origins.join(", ").parse()?,
    );
    h.insert(
        ACCESS_CONTROL_ALLOW_METHODS,
        rule.allow_methods.join(", ").parse()?,
    );
    h.insert(
        ACCESS_CONTROL_ALLOW_HEADERS,
        rule.allow_headers.join(", ").parse()?,
    );
    h.insert(
        ACCESS_CONTROL_EXPOSE_HEADERS,
        rule.expose_headers.join(", ").parse()?,
    );
    Ok(())
}

#[cfg(test)]
mod tests {
    use super::*;

    use quick_xml::de::from_str;

    #[test]
    fn test_deserialize() -> Result<(), Error> {
        let message = r#"<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
 <CORSRule>
   <AllowedOrigin>http://www.example.com</AllowedOrigin>
   <AllowedMethod>PUT</AllowedMethod>
   <AllowedMethod>POST</AllowedMethod>
   <AllowedMethod>DELETE</AllowedMethod>
   <AllowedHeader>*</AllowedHeader>
 </CORSRule>
 <CORSRule>
   <AllowedOrigin>*</AllowedOrigin>
   <AllowedMethod>GET</AllowedMethod>
 </CORSRule>
 <CORSRule>
   <ID>qsdfjklm</ID>
   <MaxAgeSeconds>12345</MaxAgeSeconds>
   <AllowedOrigin>https://perdu.com</AllowedOrigin>
   <AllowedMethod>GET</AllowedMethod>
   <AllowedMethod>DELETE</AllowedMethod>
   <AllowedHeader>*</AllowedHeader>
   <ExposeHeader>*</ExposeHeader>
 </CORSRule>
</CORSConfiguration>"#;
        let conf: CorsConfiguration = from_str(message).unwrap();
        let ref_value = CorsConfiguration {
            xmlns: (),
            cors_rules: vec![
                CorsRule {
                    id: None,
                    max_age_seconds: None,
                    allowed_origins: vec!["http://www.example.com".into()],
                    allowed_methods: vec!["PUT".into(), "POST".into(), "DELETE".into()],
                    allowed_headers: vec!["*".into()],
                    expose_headers: vec![],
                },
                CorsRule {
                    id: None,
                    max_age_seconds: None,
                    allowed_origins: vec!["*".into()],
                    allowed_methods: vec!["GET".into()],
                    allowed_headers: vec![],
                    expose_headers: vec![],
                },
                CorsRule {
                    id: Some("qsdfjklm".into()),
                    max_age_seconds: Some(IntValue(12345)),
                    allowed_origins: vec!["https://perdu.com".into()],
                    allowed_methods: vec!["GET".into(), "DELETE".into()],
                    allowed_headers: vec!["*".into()],
                    expose_headers: vec!["*".into()],
                },
            ],
        };
        assert_eq! {
            ref_value,
            conf
        };

        let message2 = to_xml_with_header(&ref_value)?;

        let cleanup = |c: &str| c.replace(char::is_whitespace, "");
        assert_eq!(cleanup(message), cleanup(&message2));

        Ok(())
    }
}


@@ -773,7 +773,6 @@ impl Endpoint {
             GetBucketAccelerateConfiguration,
             GetBucketAcl,
             GetBucketAnalyticsConfiguration,
-            GetBucketCors,
             GetBucketEncryption,
             GetBucketIntelligentTieringConfiguration,
             GetBucketInventoryConfiguration,
@@ -821,6 +820,9 @@ impl Endpoint {
             GetBucketWebsite,
             PutBucketWebsite,
             DeleteBucketWebsite,
+            GetBucketCors,
+            PutBucketCors,
+            DeleteBucketCors,
         ]
     }
     .is_some();
@@ -1134,7 +1136,7 @@ mod tests {
         OWNER_DELETE "/" => DeleteBucket
         DELETE "/?analytics&id=list1" => DeleteBucketAnalyticsConfiguration
         DELETE "/?analytics&id=Id" => DeleteBucketAnalyticsConfiguration
-        DELETE "/?cors" => DeleteBucketCors
+        OWNER_DELETE "/?cors" => DeleteBucketCors
         DELETE "/?encryption" => DeleteBucketEncryption
         DELETE "/?intelligent-tiering&id=Id" => DeleteBucketIntelligentTieringConfiguration
        DELETE "/?inventory&id=list1" => DeleteBucketInventoryConfiguration
@@ -1157,7 +1159,7 @@ mod tests {
         GET "/?accelerate" => GetBucketAccelerateConfiguration
         GET "/?acl" => GetBucketAcl
         GET "/?analytics&id=Id" => GetBucketAnalyticsConfiguration
-        GET "/?cors" => GetBucketCors
+        OWNER_GET "/?cors" => GetBucketCors
         GET "/?encryption" => GetBucketEncryption
         GET "/?intelligent-tiering&id=Id" => GetBucketIntelligentTieringConfiguration
         GET "/?inventory&id=list1" => GetBucketInventoryConfiguration
@@ -1233,7 +1235,7 @@ mod tests {
         PUT "/?acl" => PutBucketAcl
         PUT "/?analytics&id=report1" => PutBucketAnalyticsConfiguration
         PUT "/?analytics&id=Id" => PutBucketAnalyticsConfiguration
-        PUT "/?cors" => PutBucketCors
+        OWNER_PUT "/?cors" => PutBucketCors
         PUT "/?encryption" => PutBucketEncryption
         PUT "/?intelligent-tiering&id=Id" => PutBucketIntelligentTieringConfiguration
         PUT "/?inventory&id=report1" => PutBucketInventoryConfiguration


@@ -5,7 +5,7 @@ use hyper::{Body, Request, Response, StatusCode};
 use serde::{Deserialize, Serialize};

 use crate::error::*;
-use crate::s3_xml::{xmlns_tag, IntValue, Value};
+use crate::s3_xml::{to_xml_with_header, xmlns_tag, IntValue, Value};
 use crate::signature::verify_signed_content;

 use garage_model::bucket_table::*;
@@ -39,7 +39,7 @@ pub async fn handle_get_website(
         redirect_all_requests_to: None,
         routing_rules: None,
     };
-    let xml = quick_xml::se::to_string(&wc)?;
+    let xml = to_xml_with_header(&wc)?;
     Ok(Response::builder()
         .status(StatusCode::OK)
         .header(http::header::CONTENT_TYPE, "application/xml")
@@ -303,7 +303,7 @@ mod tests {
     use quick_xml::de::from_str;

     #[test]
-    fn test_deserialize() {
+    fn test_deserialize() -> Result<(), Error> {
         let message = r#"<?xml version="1.0" encoding="UTF-8"?>
 <WebsiteConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <ErrorDocument>
@@ -365,7 +365,12 @@ mod tests {
             ref_value,
             conf
         }
-        // TODO verify result is ok
-        // TODO cycle back and verify if ok
+
+        let message2 = to_xml_with_header(&ref_value)?;
+
+        let cleanup = |c: &str| c.replace(char::is_whitespace, "");
+        assert_eq!(cleanup(message), cleanup(&message2));
+
+        Ok(())
     }
 }


@ -16,6 +16,12 @@ pub fn xmlns_tag<S: Serializer>(_v: &(), s: S) -> Result<S::Ok, S::Error> {
 #[derive(Debug, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
 pub struct Value(#[serde(rename = "$value")] pub String);
+impl From<&str> for Value {
+    fn from(s: &str) -> Value {
+        Value(s.to_string())
+    }
+}
 #[derive(Debug, Serialize, Deserialize, PartialEq, Eq, PartialOrd, Ord)]
 pub struct IntValue(#[serde(rename = "$value")] pub i64);
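The `From<&str>` impl is ergonomic sugar: XML-building code can now write `"my-bucket".into()` instead of `Value("my-bucket".to_string())`. A minimal standalone sketch of the same pattern, with the serde attributes omitted:

```rust
#[derive(Debug, PartialEq)]
pub struct Value(pub String);

impl From<&str> for Value {
    fn from(s: &str) -> Value {
        Value(s.to_string())
    }
}

fn main() {
    let v: Value = "my-bucket".into();
    assert_eq!(v, Value("my-bucket".to_string()));
}
```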

View file

@ -27,10 +27,7 @@ pub struct BucketParams {
     pub creation_date: u64,
     /// Map of key with access to the bucket, and what kind of access they give
     pub authorized_keys: crdt::Map<String, BucketKeyPerm>,
-    /// Whether this bucket is allowed for website access
-    /// (under all of its global alias names),
-    /// and if so, the website configuration XML document
-    pub website_config: crdt::Lww<Option<WebsiteConfig>>,
     /// Map of aliases that are or have been given to this bucket
     /// in the global namespace
     /// (not authoritative: this is just used as an indication to
@ -40,6 +37,14 @@ pub struct BucketParams {
     /// in namespaces local to keys
     /// key = (access key id, alias name)
     pub local_aliases: crdt::LwwMap<(String, String), bool>,
+    /// Whether this bucket is allowed for website access
+    /// (under all of its global alias names),
+    /// and if so, the website configuration XML document
+    pub website_config: crdt::Lww<Option<WebsiteConfig>>,
+    /// CORS rules
+    #[serde(default)]
+    pub cors_config: crdt::Lww<Option<Vec<CorsRule>>>,
 }
 #[derive(PartialEq, Eq, Clone, Debug, Serialize, Deserialize)]
@ -48,15 +53,26 @@ pub struct WebsiteConfig {
     pub error_document: Option<String>,
 }
+#[derive(PartialEq, Eq, Clone, Debug, Serialize, Deserialize)]
+pub struct CorsRule {
+    pub id: Option<String>,
+    pub max_age_seconds: Option<u64>,
+    pub allow_origins: Vec<String>,
+    pub allow_methods: Vec<String>,
+    pub allow_headers: Vec<String>,
+    pub expose_headers: Vec<String>,
+}
 impl BucketParams {
     /// Create an empty BucketParams with no authorized keys and no website accesss
     pub fn new() -> Self {
         BucketParams {
             creation_date: now_msec(),
             authorized_keys: crdt::Map::new(),
-            website_config: crdt::Lww::new(None),
             aliases: crdt::LwwMap::new(),
             local_aliases: crdt::LwwMap::new(),
+            website_config: crdt::Lww::new(None),
+            cors_config: crdt::Lww::new(None),
         }
     }
 }
@ -65,9 +81,12 @@ impl Crdt for BucketParams {
     fn merge(&mut self, o: &Self) {
         self.creation_date = std::cmp::min(self.creation_date, o.creation_date);
         self.authorized_keys.merge(&o.authorized_keys);
-        self.website_config.merge(&o.website_config);
         self.aliases.merge(&o.aliases);
         self.local_aliases.merge(&o.local_aliases);
+        self.website_config.merge(&o.website_config);
+        self.cors_config.merge(&o.cors_config);
     }
 }
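Storing the rules as `crdt::Lww<Option<Vec<CorsRule>>>` means concurrent updates from different nodes resolve by timestamp: the newest whole-document write wins, so a later `DeleteBucketCors` (writing `None`) beats an earlier `PutBucketCors`. A simplified stand-in for the register, not the real `garage_table` type, which stamps writes with `now_msec()` and also breaks timestamp ties deterministically:

```rust
#[derive(Clone, Debug)]
struct Lww<T> {
    ts: u64, // timestamp of the last write
    v: T,    // current value
}

impl<T: Clone> Lww<T> {
    // Merge is commutative and idempotent: keep whichever write is newest.
    // (Tie-breaking on equal timestamps is omitted in this sketch.)
    fn merge(&mut self, other: &Self) {
        if other.ts > self.ts {
            self.ts = other.ts;
            self.v = other.v.clone();
        }
    }
}

fn main() {
    let mut a = Lww { ts: 1, v: Some(vec!["allow https://example.com"]) };
    let b = Lww { ts: 2, v: None }; // a later DeleteBucketCors
    a.merge(&b);
    assert_eq!(a.v, None);
}
```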

View file

@ -69,9 +69,10 @@ impl Migrate {
     state: Deletable::Present(BucketParams {
         creation_date: now_msec(),
         authorized_keys: Map::new(),
-        website_config: Lww::new(website),
         aliases: LwwMap::new(),
         local_aliases: LwwMap::new(),
+        website_config: Lww::new(website),
+        cors_config: Lww::new(None),
     }),
 })
 .await?;

View file

@ -125,3 +125,15 @@ where
         }
     }
 }
+impl<T> Default for Lww<T>
+where
+    T: Default,
+{
+    fn default() -> Self {
+        Self {
+            ts: 0,
+            v: T::default(),
+        }
+    }
+}
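This `Default` impl is the companion of the `#[serde(default)]` attribute on `cors_config` above: bucket records persisted before this change have no such field, so deserialization must conjure one, and giving it timestamp 0 guarantees that any genuine write (stamped with a current time) wins the subsequent CRDT merge. A sketch of the mechanism, with `serde_json` standing in for Garage's actual on-disk encoding:

```rust
use serde::Deserialize;

#[derive(Debug, Default, Deserialize)]
struct Lww<T> {
    ts: u64,
    v: T,
}

#[derive(Debug, Deserialize)]
struct BucketParams {
    creation_date: u64,
    #[serde(default)] // absent in records written by older versions
    cors_config: Lww<Option<Vec<String>>>,
}

fn main() {
    // A record serialized before cors_config existed:
    let old = r#"{ "creation_date": 42 }"#;
    let params: BucketParams = serde_json::from_str(old).unwrap();
    assert_eq!(params.creation_date, 42);
    // The missing field materializes as Lww { ts: 0, v: None },
    // which loses to any real write on merge.
    assert_eq!(params.cors_config.ts, 0);
    assert!(params.cors_config.v.is_none());
}
```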

View file

@ -2,19 +2,22 @@ use std::{borrow::Cow, convert::Infallible, net::SocketAddr, sync::Arc};
 use futures::future::Future;
+use http::header::{ACCESS_CONTROL_REQUEST_HEADERS, ACCESS_CONTROL_REQUEST_METHOD};
 use hyper::{
     header::{HeaderValue, HOST},
     server::conn::AddrStream,
     service::{make_service_fn, service_fn},
-    Body, Method, Request, Response, Server,
+    Body, Method, Request, Response, Server, StatusCode,
 };
 use crate::error::*;
-use garage_api::error::{Error as ApiError, OkOrBadRequest};
+use garage_api::error::{Error as ApiError, OkOrBadRequest, OkOrInternalError};
 use garage_api::helpers::{authority_to_host, host_to_bucket};
+use garage_api::s3_cors::{add_cors_headers, cors_rule_matches};
 use garage_api::s3_get::{handle_get, handle_head};
+use garage_model::bucket_table::Bucket;
 use garage_model::garage::Garage;
 use garage_table::*;
@ -132,72 +135,136 @@ async fn serve_file(garage: Arc<Garage>, req: &Request<Body>) -> Result<Response
     );
     let ret_doc = match *req.method() {
-        Method::HEAD => handle_head(garage.clone(), req, bucket_id, &key).await,
+        Method::OPTIONS => return handle_options(&bucket, req),
+        Method::HEAD => {
+            return handle_head(garage.clone(), req, bucket_id, &key)
+                .await
+                .map_err(Error::from)
+        }
         Method::GET => handle_get(garage.clone(), req, bucket_id, &key).await,
         _ => Err(ApiError::BadRequest("HTTP method not supported".into())),
     }
     .map_err(Error::from);
-    if let Err(error) = ret_doc {
-        if *req.method() == Method::HEAD || !error.http_status_code().is_client_error() {
-            // Do not return the error document in the following cases:
-            // - the error is not a 4xx error code
-            // - the request is a HEAD method
-            // In this case we just return the error code and the error message in the body,
-            // by relying on err_to_res that is called above when we return an Err.
-            return Err(error);
-        }
-        // Same if no error document is set: just return the error directly
-        let error_document = match &website_config.error_document {
-            Some(ed) => ed.trim_start_matches('/').to_owned(),
-            None => return Err(error),
-        };
-        // We want to return the error document
-        // Create a fake HTTP request with path = the error document
-        let req2 = Request::builder()
-            .uri(format!("http://{}/{}", host, &error_document))
-            .body(Body::empty())
-            .unwrap();
-        match handle_get(garage, &req2, bucket_id, &error_document).await {
-            Ok(mut error_doc) => {
-                // The error won't be logged back in handle_request,
-                // so log it here
-                info!(
-                    "{} {} {} {}",
-                    req.method(),
-                    req.uri(),
-                    error.http_status_code(),
-                    error
-                );
-                *error_doc.status_mut() = error.http_status_code();
-                error.add_headers(error_doc.headers_mut());
-                // Preserve error message in a special header
-                for error_line in error.to_string().split('\n') {
-                    if let Ok(v) = HeaderValue::from_bytes(error_line.as_bytes()) {
-                        error_doc.headers_mut().append("X-Garage-Error", v);
-                    }
-                }
-                Ok(error_doc)
-            }
-            Err(error_doc_error) => {
-                warn!(
-                    "Couldn't get error document {} for bucket {:?}: {}",
-                    error_document, bucket_id, error_doc_error
-                );
-                Err(error)
-            }
-        }
-    } else {
-        ret_doc
-    }
+    match ret_doc {
+        Err(error) => {
+            // For a HEAD or OPTIONS method, we don't return the error document
+            // as content, we return above and just return the error message
+            // by relying on err_to_res that is called when we return an Err.
+            assert!(*req.method() != Method::HEAD && *req.method() != Method::OPTIONS);
+            if !error.http_status_code().is_client_error() {
+                // Do not return the error document if it is not a 4xx error code.
+                return Err(error);
+            }
+            // If no error document is set: just return the error directly
+            let error_document = match &website_config.error_document {
+                Some(ed) => ed.trim_start_matches('/').to_owned(),
+                None => return Err(error),
+            };
+            // We want to return the error document
+            // Create a fake HTTP request with path = the error document
+            let req2 = Request::builder()
+                .uri(format!("http://{}/{}", host, &error_document))
+                .body(Body::empty())
+                .unwrap();
+            match handle_get(garage, &req2, bucket_id, &error_document).await {
+                Ok(mut error_doc) => {
+                    // The error won't be logged back in handle_request,
+                    // so log it here
+                    info!(
+                        "{} {} {} {}",
+                        req.method(),
+                        req.uri(),
+                        error.http_status_code(),
+                        error
+                    );
+                    *error_doc.status_mut() = error.http_status_code();
+                    error.add_headers(error_doc.headers_mut());
+                    // Preserve error message in a special header
+                    for error_line in error.to_string().split('\n') {
+                        if let Ok(v) = HeaderValue::from_bytes(error_line.as_bytes()) {
+                            error_doc.headers_mut().append("X-Garage-Error", v);
+                        }
+                    }
+                    Ok(error_doc)
+                }
+                Err(error_doc_error) => {
+                    warn!(
+                        "Couldn't get error document {} for bucket {:?}: {}",
+                        error_document, bucket_id, error_doc_error
+                    );
+                    Err(error)
+                }
+            }
+        }
+        Ok(mut resp) => {
+            // Maybe add CORS headers
+            if let Some(cors_config) = bucket.params().unwrap().cors_config.get() {
+                if let Some(origin) = req.headers().get("Origin") {
+                    let origin = origin.to_str()?;
+                    let request_headers = match req.headers().get(ACCESS_CONTROL_REQUEST_HEADERS) {
+                        Some(h) => h.to_str()?.split(',').map(|h| h.trim()).collect::<Vec<_>>(),
+                        None => vec![],
+                    };
+                    let matching_rule = cors_config.iter().find(|rule| {
+                        cors_rule_matches(
+                            rule,
+                            origin,
+                            &req.method().to_string(),
+                            request_headers.iter(),
+                        )
+                    });
+                    if let Some(rule) = matching_rule {
+                        add_cors_headers(&mut resp, rule)
+                            .ok_or_internal_error("Invalid CORS configuration")?;
+                    }
+                }
+            }
+            Ok(resp)
+        }
+    }
 }
+fn handle_options(bucket: &Bucket, req: &Request<Body>) -> Result<Response<Body>, Error> {
+    let origin = req
+        .headers()
+        .get("Origin")
+        .ok_or_bad_request("Missing Origin header")?
+        .to_str()?;
+    let request_method = req
+        .headers()
+        .get(ACCESS_CONTROL_REQUEST_METHOD)
+        .ok_or_bad_request("Missing Access-Control-Request-Method header")?
+        .to_str()?;
+    let request_headers = match req.headers().get(ACCESS_CONTROL_REQUEST_HEADERS) {
+        Some(h) => h.to_str()?.split(',').map(|h| h.trim()).collect::<Vec<_>>(),
+        None => vec![],
+    };
+    if let Some(cors_config) = bucket.params().unwrap().cors_config.get() {
+        let matching_rule = cors_config
+            .iter()
+            .find(|rule| cors_rule_matches(rule, origin, request_method, request_headers.iter()));
+        if let Some(rule) = matching_rule {
+            let mut resp = Response::builder()
+                .status(StatusCode::OK)
+                .body(Body::empty())
+                .map_err(ApiError::from)?;
+            add_cors_headers(&mut resp, rule).ok_or_internal_error("Invalid CORS configuration")?;
+            return Ok(resp);
+        }
+    }
+    Err(ApiError::Forbidden("No matching CORS rule".into()).into())
+}
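Both call sites above delegate the actual matching to `cors_rule_matches` from `garage_api::s3_cors`, which lies outside this diff. Judging from how it is called, the predicate plausibly reduces to: the `Origin` must be listed (or matched by `*`), the method must be listed, and every requested header must be allowed. A simplified, self-contained sketch of such a predicate, not Garage's actual implementation:

```rust
struct CorsRule {
    allow_origins: Vec<String>,
    allow_methods: Vec<String>,
    allow_headers: Vec<String>,
}

// Sketch: does `rule` authorize this (origin, method, requested headers) triple?
fn cors_rule_matches<'a>(
    rule: &CorsRule,
    origin: &str,
    method: &str,
    mut request_headers: impl Iterator<Item = &'a str>,
) -> bool {
    rule.allow_origins.iter().any(|o| o == "*" || o == origin)
        && rule.allow_methods.iter().any(|m| m == "*" || m == method)
        && request_headers.all(|h| {
            rule.allow_headers
                .iter()
                .any(|ah| ah == "*" || ah.eq_ignore_ascii_case(h))
        })
}

fn main() {
    let rule = CorsRule {
        allow_origins: vec!["*".into()],
        allow_methods: vec!["GET".into()],
        allow_headers: vec!["content-type".into()],
    };
    assert!(cors_rule_matches(&rule, "https://example.com", "GET", std::iter::empty()));
    assert!(!cors_rule_matches(&rule, "https://example.com", "DELETE", std::iter::empty()));
}
```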
 /// Path to key
 ///
 /// Convert the provided path to the internal key