forked from Deuxfleurs/garage
Improve integration part of the doc
parent 1c0ba930b8, commit 5a1fb7cce7
7 changed files with 282 additions and 238 deletions
@@ -10,11 +10,12 @@ Garage implements the Amazon S3 protocol, which makes it compatible with many ex

In particular, you will find here instructions to connect it with:

- [web applications](@/documentation/connect/apps/index.md)
- [website hosting](@/documentation/connect/websites.md)
- [software repositories](@/documentation/connect/repositories.md)
- [CLI tools](@/documentation/connect/cli.md)
- [your own code](@/documentation/connect/code.md)
- [Browsing tools](@/documentation/connect/cli.md)
- [Applications](@/documentation/connect/apps/index.md)
- [Website hosting](@/documentation/connect/websites.md)
- [Software repositories](@/documentation/connect/repositories.md)
- [Your own code](@/documentation/connect/code.md)
- [FUSE](@/documentation/connect/fs.md)

### Generic instructions
@@ -36,9 +37,9 @@ you will need the following parameters:
Most S3 clients can be configured easily with these parameters,
provided that you follow these guidelines:

- **Force path style:** Garage does not support DNS-style buckets, which are now the default
on Amazon S3. Instead, Garage uses the legacy path-style bucket addressing.
Remember to configure your client to acknowledge this fact.
- **Be careful about DNS-style/path-style access:** Garage supports both DNS-style buckets, which are now the default
on Amazon S3, and legacy path-style buckets. If you use a reverse proxy in front of Garage,
make sure it is configured to support the access style required by the software you want to use.

- **Configuring the S3 region:** Garage requires your client to talk to the correct "S3 region",
which is set in the configuration file. This is often set just to `garage`.
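
For instance, these guidelines translate roughly as follows with the AWS CLI (a sketch only: the profile name, keys, and endpoint below are placeholders to adapt to your deployment):

```bash
# Store credentials and the "garage" region under a dedicated profile (placeholder values).
aws configure set aws_access_key_id GK0123456789 --profile garage
aws configure set aws_secret_access_key xxxxxxxx --profile garage
# The region must match the one set in your Garage configuration file (often "garage").
aws configure set region garage --profile garage
# Force path-style bucket addressing if you do not use DNS-style buckets.
aws configure set s3.addressing_style path --profile garage

# Requests must then be sent to your Garage endpoint.
aws --profile garage --endpoint-url http://s3.garage.localhost:3900 s3 ls
```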
@@ -3,7 +3,21 @@ title = "Apps (Nextcloud, Peertube...)"
weight = 5
+++

In this section, we cover the following software: [Nextcloud](#nextcloud), [Peertube](#peertube), [Mastodon](#mastodon), [Matrix](#matrix)
In this section, we cover the following web applications:

| Name | Status | Note |
|------|--------|------|
| [Nextcloud](#nextcloud) | ✅ | Both Primary Storage and External Storage are supported |
| [Peertube](#peertube) | ✅ | Must be configured with the website endpoint |
| [Mastodon](#mastodon) | ❓ | Not yet tested |
| [Matrix](#matrix) | ✅ | Tested with `synapse-s3-storage-provider` |
| [Pixelfed](#pixelfed) | ❓ | Not yet tested |
| [Pleroma](#pleroma) | ❓ | Not yet tested |
| [Lemmy](#lemmy) | ❓ | Not yet tested |
| [Funkwhale](#funkwhale) | ❓ | Not yet tested |
| [Misskey](#misskey) | ❓ | Not yet tested |
| [Prismo](#prismo) | ❓ | Not yet tested |
| [Owncloud OCIS](#owncloud-infinite-scale-ocis) | ❓ | Not yet tested |

## Nextcloud

@@ -111,109 +125,8 @@ Do not change the `use_path_style` and `legacy_auth` entries, other configuratio

Peertube proposes a clever integration of S3 by directly exposing its endpoint instead of proxying requests through the application.
In other words, Peertube is only responsible for the "control plane" and offloads the "data plane" to Garage.
In return, this system is a bit harder to configure, especially with Garage, which supports fewer features than other, older S3 backends.
We show that it is still possible to configure Garage with Peertube, allowing you to spread the load and the bandwidth usage on the Garage cluster.

### Enable path-style access by patching Peertube

First, you will need to apply a small patch on Peertube ([#4510](https://github.com/Chocobozzz/PeerTube/pull/4510)):

```diff
From e3b4c641bdf67e07d406a1d49d6aa6b1fbce2ab4 Mon Sep 17 00:00:00 2001
From: Martin Honermeyer <maze@strahlungsfrei.de>
Date: Sun, 31 Oct 2021 12:34:04 +0100
Subject: [PATCH] Allow setting path-style access for object storage

---
config/default.yaml | 4 ++++
config/production.yaml.example | 4 ++++
server/initializers/config.ts | 1 +
server/lib/object-storage/shared/client.ts | 3 ++-
.../production/config/custom-environment-variables.yaml | 2 ++
5 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/config/default.yaml b/config/default.yaml
index cf9d69a6211..4efd56fb804 100644
--- a/config/default.yaml
+++ b/config/default.yaml
@@ -123,6 +123,10 @@ object_storage:
# You can also use AWS_SECRET_ACCESS_KEY env variable
secret_access_key: ''

+ # Reference buckets via path rather than subdomain
+ # (i.e. "my-endpoint.com/bucket" instead of "bucket.my-endpoint.com")
+ force_path_style: false
+
# Maximum amount to upload in one request to object storage
max_upload_part: 2GB

diff --git a/config/production.yaml.example b/config/production.yaml.example
index 70993bf57a3..9ca2de5f4c9 100644
--- a/config/production.yaml.example
+++ b/config/production.yaml.example
@@ -121,6 +121,10 @@ object_storage:
# You can also use AWS_SECRET_ACCESS_KEY env variable
secret_access_key: ''

+ # Reference buckets via path rather than subdomain
+ # (i.e. "my-endpoint.com/bucket" instead of "bucket.my-endpoint.com")
+ force_path_style: false
+
# Maximum amount to upload in one request to object storage
max_upload_part: 2GB

diff --git a/server/initializers/config.ts b/server/initializers/config.ts
index 8375bf4304c..d726c59a4b6 100644
--- a/server/initializers/config.ts
+++ b/server/initializers/config.ts
@@ -91,6 +91,7 @@ const CONFIG = {
ACCESS_KEY_ID: config.get<string>('object_storage.credentials.access_key_id'),
SECRET_ACCESS_KEY: config.get<string>('object_storage.credentials.secret_access_key')
},
+ FORCE_PATH_STYLE: config.get<boolean>('object_storage.force_path_style'),
VIDEOS: {
BUCKET_NAME: config.get<string>('object_storage.videos.bucket_name'),
PREFIX: config.get<string>('object_storage.videos.prefix'),
diff --git a/server/lib/object-storage/shared/client.ts b/server/lib/object-storage/shared/client.ts
index c9a61459336..eadad02f93f 100644
--- a/server/lib/object-storage/shared/client.ts
+++ b/server/lib/object-storage/shared/client.ts
@@ -26,7 +26,8 @@ function getClient () {
accessKeyId: OBJECT_STORAGE.CREDENTIALS.ACCESS_KEY_ID,
secretAccessKey: OBJECT_STORAGE.CREDENTIALS.SECRET_ACCESS_KEY
}
- : undefined
+ : undefined,
+ forcePathStyle: CONFIG.OBJECT_STORAGE.FORCE_PATH_STYLE
})

logger.info('Initialized S3 client %s with region %s.', getEndpoint(), OBJECT_STORAGE.REGION, lTags())
diff --git a/support/docker/production/config/custom-environment-variables.yaml b/support/docker/production/config/custom-environment-variables.yaml
index c7cd28e6521..a960bab0bc9 100644
--- a/support/docker/production/config/custom-environment-variables.yaml
+++ b/support/docker/production/config/custom-environment-variables.yaml
@@ -54,6 +54,8 @@ object_storage:

region: "PEERTUBE_OBJECT_STORAGE_REGION"

+ force_path_style: "PEERTUBE_OBJECT_STORAGE_FORCE_PATH_STYLE"
+
max_upload_part:
__name: "PEERTUBE_OBJECT_STORAGE_MAX_UPLOAD_PART"
__format: "json"
```

You can then recompile it with:

```
npm run build
```

And it can be started with:

```
NODE_ENV=production NODE_CONFIG_DIR=/srv/peertube/config node dist/server.js
```
In return, this system is a bit harder to configure.
We show how it is still possible to configure Garage with Peertube, allowing you to spread the load and the bandwidth usage on the Garage cluster.

### Create resources in Garage

@@ -235,30 +148,32 @@ garage bucket create peertube-playlist
Now we allow our key to read and write on these buckets:

```
garage bucket allow peertube-playlist --read --write --key peertube-key
garage bucket allow peertube-video --read --write --key peertube-key
garage bucket allow peertube-playlists --read --write --owner --key peertube-key
garage bucket allow peertube-videos --read --write --owner --key peertube-key
```

Finally, we need to expose these buckets publicly to serve their content to users:
We also need to expose these buckets publicly to serve their content to users:

```bash
garage bucket website --allow peertube-playlist
garage bucket website --allow peertube-video
garage bucket website --allow peertube-playlists
garage bucket website --allow peertube-videos
```

Finally, we must allow Cross-Origin Resource Sharing (CORS).
CORS headers are required by your browser to allow requests triggered from the Peertube website (e.g. peertube.tld) to your bucket's domain (e.g. peertube-videos.web.garage.tld).

```bash
export CORS='{"CORSRules":[{"AllowedHeaders":["*"],"AllowedMethods":["GET"],"AllowedOrigins":["*"]}]}'
aws --endpoint http://s3.garage.localhost s3api put-bucket-cors --bucket peertube-playlists --cors-configuration $CORS
aws --endpoint http://s3.garage.localhost s3api put-bucket-cors --bucket peertube-videos --cors-configuration $CORS
```

These buckets are now accessible on the web port (by default 3902) with the following URL: `http://<bucket><root_domain>:<web_port>` where the root domain is defined in your configuration file (by default `.web.garage`). So we currently have the following URLs:
* http://peertube-playlist.web.garage:3902
* http://peertube-video.web.garage:3902
* http://peertube-playlists.web.garage:3902
* http://peertube-videos.web.garage:3902

Make sure you (will) have a corresponding DNS entry for them.

### Configure a Reverse Proxy to serve CORS

Now we will configure a reverse proxy in front of Garage.
This is required as we have no other way to serve CORS headers yet.
Check the [Configuring a reverse proxy](@/documentation/cookbook/reverse-proxy.md) section to know how.

Now make sure that your two DNS entries are pointing to your reverse proxy.
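
Once the reverse proxy is in place, you can quickly check that the CORS headers are actually served (a sketch; replace the domain names with the ones you configured):

```bash
# Simulate a cross-origin request coming from the Peertube web page:
# the response should contain an Access-Control-Allow-Origin header.
curl -sI -H 'Origin: https://peertube.tld' https://peertube-videos.web.garage.tld/ | grep -i 'access-control'
```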

### Configure Peertube

@@ -271,9 +186,6 @@ object_storage:
  # Put localhost only if you have a garage instance running on that node
  endpoint: 'http://localhost:3900' # or "garage.example.com" if you have TLS on port 443

  # This entry has been added by our patch, must be set to true
  force_path_style: true

  # Garage supports only one region for now, named garage
  region: 'garage'
@@ -290,28 +202,23 @@ object_storage:
    prefix: ''

    # You must fill this field to make Peertube use our reverse proxy/website logic
    base_url: 'http://peertube-playlist.web.garage' # Example: 'https://mirror.example.com'
    base_url: 'http://peertube-playlists.web.garage.localhost' # Example: 'https://mirror.example.com'

  # Same settings but for webtorrent videos
  videos:
    bucket_name: 'peertube-video'
    prefix: ''
    # You must fill this field to make Peertube use our reverse proxy/website logic
    base_url: 'http://peertube-video.web.garage'
    base_url: 'http://peertube-videos.web.garage.localhost'
```

### That's all

Everything should now be configured; simply restart Peertube and try to upload a video.
You should see in your browser console that data are fetched directly from our bucket (through the reverse proxy).

### Miscellaneous

*Known bug:* The playback does not start and some 400 Bad Request errors appear in your browser console and on Garage.
If the description of the error contains HTTP Invalid Range: InvalidRange, the error is due to a buggy ffmpeg version.
You must avoid version 4.4.0 and use either a newer or older version.

*Associated issues:* [#137](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/137), [#138](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/138), [#140](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/140). These issues are non-blocking.
Peertube will start by serving the video from its own domain while it is encoding.
Once the encoding is done, the video is uploaded to Garage.
You can now reload the page and see in your browser console that data are fetched directly from your bucket.
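
If you prefer checking from the command line rather than the browser console, you can also list the bucket contents once the encoding has finished (a sketch, reusing the endpoint and bucket names from above):

```bash
# The transcoded files should show up in the videos bucket.
aws --endpoint-url http://s3.garage.localhost s3 ls s3://peertube-videos/ --recursive
```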

*External link:* [Peertube Documentation > Remote Storage](https://docs.joinpeertube.org/admin-remote-storage)

@@ -432,31 +339,34 @@ And add a new line. For example, to run it every 10 minutes:

## Pixelfed

https://docs.pixelfed.org/technical-documentation/env.html#filesystem
[Pixelfed Technical Documentation > Configuration](https://docs.pixelfed.org/technical-documentation/env.html#filesystem)

## Pleroma

https://docs-develop.pleroma.social/backend/configuration/cheatsheet/#pleromauploaderss3
[Pleroma Documentation > Pleroma.Uploaders.S3](https://docs-develop.pleroma.social/backend/configuration/cheatsheet/#pleromauploaderss3)

## Lemmy

via pict-rs
https://git.asonix.dog/asonix/pict-rs/commit/f9f4fc63d670f357c93f24147c2ee3e1278e2d97
Lemmy uses pict-rs, which [supports S3 backends](https://git.asonix.dog/asonix/pict-rs/commit/f9f4fc63d670f357c93f24147c2ee3e1278e2d97).

## Funkwhale

https://docs.funkwhale.audio/admin/configuration.html#s3-storage
[Funkwhale Documentation > S3 Storage](https://docs.funkwhale.audio/admin/configuration.html#s3-storage)

## Misskey

https://github.com/misskey-dev/misskey/commit/9d944243a3a59e8880a360cbfe30fd5a3ec8d52d
[Misskey Github > commit 9d94424](https://github.com/misskey-dev/misskey/commit/9d944243a3a59e8880a360cbfe30fd5a3ec8d52d)

## Prismo

https://gitlab.com/prismosuite/prismo/-/blob/dev/.env.production.sample#L26-33
[Prismo Gitlab > .env.production.sample](https://gitlab.com/prismosuite/prismo/-/blob/dev/.env.production.sample#L26-33)

## Owncloud Infinite Scale (ocis)

OCIS could be compatible with S3:
- [Deploying OCIS with S3](https://owncloud.dev/ocis/deployment/ocis_s3/)
- [OCIS 1.7 release note](https://central.owncloud.org/t/owncloud-infinite-scale-tech-preview-1-7-enables-s3-storage/32514/3)

## Unsupported

- Mobilizon: No S3 integration
@@ -1,12 +1,22 @@
+++
title = "CLI tools"
title = "Browsing tools"
weight = 20
+++

CLI tools allow you to query the S3 API without too many abstractions.
Browsing tools allow you to query the S3 API without too many abstractions.
These tools are particularly suitable for debugging, backups, website deployments, or any scripted task that needs to handle data.

## Minio client (recommended)
| Name | Status | Note |
|------|--------|------|
| [Minio client](#minio-client-recommended) | ✅ | Recommended |
| [AWS CLI](#aws-cli) | ✅ | Recommended |
| [rclone](#rclone) | ✅ | |
| [s3cmd](#s3cmd) | ✅ | |
| [(Cyber)duck](#cyberduck--duck) | ✅ | |
| [WinSCP (libs3)](#winscp) | ✅ | No instructions yet |

## Minio client

Use the following command to set an "alias", i.e. define a new S3 server to be
used by the Minio client:
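
The exact values depend on your deployment; as an illustration, an alias for a local Garage instance typically looks like this (endpoint and keys below are placeholders):

```bash
# Placeholder endpoint and credentials; adapt them to your cluster.
mc alias set garage http://s3.garage.localhost:3900 GK0123456789 your-secret-key --api S3v4
```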
@@ -169,6 +179,107 @@ s3cmd get s3://my-bucket/hello.txt hello.txt

## Cyberduck & duck

TODO
Both Cyberduck (the GUI) and duck (the CLI) have a concept of "Connection Profiles" that contain some presets for a specific provider.
We wrote the following connection profile for Garage:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>Protocol</key>
    <string>s3</string>
    <key>Vendor</key>
    <string>garage</string>
    <key>Scheme</key>
    <string>https</string>
    <key>Description</key>
    <string>GarageS3</string>
    <key>Default Hostname</key>
    <string>127.0.0.1</string>
    <key>Default Port</key>
    <string>4443</string>
    <key>Hostname Configurable</key>
    <false/>
    <key>Port Configurable</key>
    <false/>
    <key>Username Configurable</key>
    <true/>
    <key>Username Placeholder</key>
    <string>Access Key ID (GK...)</string>
    <key>Password Placeholder</key>
    <string>Secret Key</string>
    <key>Properties</key>
    <array>
      <string>s3service.disable-dns-buckets=true</string>
    </array>
    <key>Region</key>
    <string>garage</string>
    <key>Regions</key>
    <array>
      <string>garage</string>
    </array>
  </dict>
</plist>
```

*Note: If your garage instance is configured with vhost access style, you can remove `s3service.disable-dns-buckets=true`.*

### Instructions for the GUI

Copy the connection profile, and save it anywhere as `garage.cyberduckprofile`.
Then find this file with your file explorer and double click on it: Cyberduck will open a connection wizard for this profile.
Simply follow the wizard and you should be done!

### Instructions for the CLI

To configure duck (Cyberduck's CLI tool), start by creating its folder hierarchy:

```
mkdir -p ~/.duck/profiles/
```

Then, save the connection profile for Garage in `~/.duck/profiles/garage.cyberduckprofile`.
To set your credentials in `~/.duck/credentials`, use the following commands to generate the appropriate string:

```bash
export AWS_ACCESS_KEY_ID="GK..."
export AWS_SECRET_ACCESS_KEY="..."
export HOST="s3.garage.localhost"
export PORT="4443"
export PROTOCOL="https"

cat > ~/.duck/credentials <<EOF
$PROTOCOL\://$AWS_ACCESS_KEY_ID@$HOST\:$PORT=$AWS_SECRET_ACCESS_KEY
EOF
```

And finally, we recommend appending a small wrapper to your `~/.bashrc` to avoid setting the username on each command (do not forget to replace `GK...` by your access key):

```bash
function duck { command duck --username GK... $@ ; }
```

You can then use `duck` as follows:

```bash
# List buckets
duck --list garage:/

# List objects in a bucket
duck --list garage:/my-files/

# Download an object
duck --download garage:/my-files/an-object.txt /tmp/object.txt

# Upload an object
duck --upload /tmp/object.txt garage:/my-files/another-object.txt

# Delete an object
duck --delete garage:/my-files/an-object.txt
```

## WinSCP (libs3)

*No instructions yet. You can find some in French [in our wiki](https://wiki.deuxfleurs.fr/fr/Guide/Garage/WinSCP).*

@@ -6,6 +6,15 @@ weight = 15
Whether you need to store and serve binary packages or source code, you may want to deploy a tool referred to as a repository or registry.
Garage can also help you serve this content.

| Name | Status | Note |
|------|--------|------|
| [Gitea](#gitea) | ✅ | |
| [Docker](#docker) | ✅ | Requires garage >= v0.6.0 |
| [Nix](#nix) | ✅ | |
| [Gitlab](#gitlab) | ❓ | Not yet tested |


## Gitea

You can use Garage with Gitea to store your [git LFS](https://git-lfs.github.com/) data, your users' avatars, and their attachments.
@@ -55,18 +64,42 @@ $ aws s3 ls s3://gitea/avatars/

*External link:* [Gitea Documentation > Configuration Cheat Sheet](https://docs.gitea.io/en-us/config-cheat-sheet/)

## Gitlab

*External link:* [Gitlab Documentation > Object storage](https://docs.gitlab.com/ee/administration/object_storage.html)


## Private NPM Registry (Verdaccio)

*External link:* [Verdaccio Github Repository > aws-storage plugin](https://github.com/verdaccio/verdaccio/tree/master/packages/plugins/aws-storage)

## Docker

Not yet compatible, follow [#103](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/103).
Create a bucket and a key for your docker registry, then create `config.yml` with the following content:

```yml
version: 0.1
http:
  addr: 0.0.0.0:5000
  secret: asecretforlocaldevelopment
  debug:
    addr: localhost:5001
storage:
  s3:
    accesskey: GKxxxx
    secretkey: yyyyy
    region: garage
    regionendpoint: http://localhost:3900
    bucket: docker
    secure: false
    v4auth: true
    rootdirectory: /
```

Replace the `accesskey`, `secretkey`, `bucket`, `regionendpoint` and `secure` values by the ones fitting your deployment.

Then simply run the docker registry:

```bash
docker run \
  --net=host \
  -v `pwd`/config.yml:/etc/docker/registry/config.yml \
  registry:2
```

*We started a plaintext registry but docker clients require encrypted registries. You must either [set up TLS](https://docs.docker.com/registry/deploying/#run-an-externally-accessible-registry) on your registry or add `--insecure-registry=localhost:5000` to your docker daemon parameters.*
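
To check that the registry actually writes its data to Garage, you can push a test image (a sketch; it assumes the plaintext registry started above and, if needed, the `--insecure-registry` daemon flag):

```bash
# Push any small image to the freshly started registry...
docker pull alpine
docker tag alpine localhost:5000/alpine
docker push localhost:5000/alpine

# ...and verify that the layers ended up in the "docker" bucket.
aws --endpoint-url http://localhost:3900 s3 ls s3://docker/ --recursive
```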

*External link:* [Docker Documentation > Registry storage drivers > S3 storage driver](https://docs.docker.com/registry/storage-drivers/s3/)

@@ -170,3 +203,9 @@ on the binary cache, the client will download the result from the cache instead

Channels additionally serve Nix definitions, i.e. a `.nix` file referencing
all the derivations you want to serve.
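
For the binary cache use case described above, store paths are typically pushed to the bucket with `nix copy`; this is only a sketch, with a placeholder bucket name and endpoint (credentials are taken from the usual AWS environment variables or profile):

```bash
# Build a derivation and copy it (with its closure) to the S3-backed binary cache.
# "nix-cache" and "s3.garage.tld" are placeholders for your bucket and endpoint.
nix copy --to 's3://nix-cache?endpoint=s3.garage.tld&region=garage' $(nix-build '<nixpkgs>' -A hello)
```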

## Gitlab

*External link:* [Gitlab Documentation > Object storage](https://docs.gitlab.com/ee/administration/object_storage.html)

@@ -6,6 +6,12 @@ weight = 10
Garage is also suitable to host static websites.
While they can be deployed with traditional CLI tools, some static website generators have integrated options to ease your workflow.

| Name | Status | Note |
|------|--------|------|
| [Hugo](#hugo) | ✅ | Publishing logic is integrated in the tool |
| [Publii](#publii) | ✅ | Requires a correctly configured S3 vhost endpoint |
| [Generic Static Site Generator](#generic-static-site-generator) | ✅ | Works for Jekyll, Zola, Gatsby, Pelican, etc. |

## Hugo

Add to your `config.toml` the following section:
@@ -42,39 +48,38 @@ hugo deploy

## Publii

It would require a patch either on Garage or on Publii to make both systems work.
[![A screenshot of Publii's GUI](./publii.png)](./publii.png)

Currently, the proposed workaround is to deploy your website manually:
- On the left menu, click on Server, choose Manual Deployment (the logo looks like a compressed file)
- Set your website URL, keep Output type as "Non-compressed catalog"
- Click on Save changes
- Click on Sync your website (bottom left of the app)
- On the new page, click again on Sync your website
- Click on Get website files
- You need to synchronize the output folder you see in your file explorer; we will use the minio client.
Deploying a website to Garage from Publii is natively supported.
First, make sure that your Garage administrator allowed and configured Garage to support vhost access style.
We also suppose that your bucket ("my-bucket") and key are already created and configured.

Be sure that you [configured the minio client](@/documentation/connect/cli.md#minio-client-recommended).
Then, from the left menu, click on Server. Choose "S3" as the protocol.
In the configuration window, enter:
- Your final website URL (e.g. "http://my-bucket.web.garage.localhost:3902")
- Tick "Use a custom S3 provider"
- Set the S3 endpoint (e.g. "http://s3.garage.localhost:3900")
- Then put your access key (e.g. "GK..."), your secret key, and your bucket (e.g. "my-bucket")
- And hit the button "Save settings"

Then copy this output folder
Now, each time you want to publish your website from Publii, just hit the bottom left button "Sync your website"!

```bash
mc mirror --overwrite output garage/my-site
```

## Generic (eg. Jekyll)

## Generic Static Site Generator

Some tools do not support sending to an S3 backend but output a compiled folder on your system.
We can then use any CLI tool to upload this content to our S3 target.

First, start by [configuring the minio client](@/documentation/connect/cli.md#minio-client-recommended).

Then build your website:
Then build your website (example for Jekyll):

```bash
jekyll build
```

And copy Jekyll's output folder to S3:
And copy its output folder (`_site` for Jekyll) to S3:

```bash
mc mirror --overwrite _site garage/my-site
@@ -3,7 +3,7 @@ title = "Configuring a reverse proxy"
weight = 30
+++

The main reason to add a reverse proxy in front of Garage is to provide TLS to your users.
The main reason to add a reverse proxy in front of Garage is to provide TLS to your users and serve multiple web services on port 443.

In production you will likely need your certificates signed by a certificate authority.
The most automated way is to use a provider supporting the [ACME protocol](https://datatracker.ietf.org/doc/html/rfc8555)
@@ -58,16 +58,15 @@ If you directly put the instructions in the root `nginx.conf`, keep in mind that

And do not forget to reload nginx with `systemctl reload nginx` or `nginx -s reload`.

### Defining backends
### Exposing the S3 endpoints

First, we need to tell nginx how to access our Garage cluster.
Because we have multiple nodes, we want to leverage all of them by spreading the load.
In nginx, we can do that with the `upstream` directive.

In nginx, we can do that with the upstream directive.
Because we have two endpoints, one for the S3 API and one to serve websites,
we create two backends named respectively `s3_backend` and `web_backend`.
Then in a `server` directive, we define the vhosts, the TLS certificates and the proxy rule.

A documented example for the `s3_backend` assuming you chose port 3900:
A possible configuration:

```nginx
upstream s3_backend {
@@ -81,9 +80,34 @@ upstream s3_backend {
  # that are more powerful than others
  server garage2.example.com:3900 weight=2;
}

server {
  listen [::]:443 http2 ssl;

  ssl_certificate /tmp/garage.crt;
  ssl_certificate_key /tmp/garage.key;

  # You need multiple server names here:
  # - s3.garage.tld is used for path-based s3 requests
  # - *.s3.garage.tld is used for vhost-based s3 requests
  server_name s3.garage.tld *.s3.garage.tld;

  location / {
    proxy_pass http://s3_backend;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
  }
}
```

A similar example for the `web_backend` assuming you chose port 3902:
## Exposing the web endpoint

To better understand the logic involved, you can refer to the [Exposing buckets as websites](/cookbook/exposing_websites.html) section.
Otherwise, the configuration is very similar to the S3 endpoint.
You only need to adapt `upstream` with the web port instead of the S3 port and change the `server_name` and `proxy_pass` entries.

A possible configuration:

```nginx
upstream web_backend {
@@ -92,65 +116,19 @@ upstream web_backend {
  server garage1.example.com:3902;
  server garage2.example.com:3902 weight=2;
}
```

### Exposing the S3 API

The configuration section for the S3 API is simple as we only support path-style access yet.
We simply configure the TLS parameters and forward all the requests to the backend:

```nginx
server {
  listen [::]:443 http2 ssl;

  ssl_certificate /tmp/garage.crt;
  ssl_certificate_key /tmp/garage.key;

  # should be the endpoint you want
  # aws uses s3.amazonaws.com for example
  server_name garage.example.com;
  # You need multiple server names here:
  # - *.web.garage.tld is used for your users wanting a website without reserving a domain name
  # - example.com, my-site.tld, etc. are domain names reserved by your users who chose to host their website as a garage bucket
  server_name *.web.garage.tld example.com my-site.tld;

  location / {
    proxy_pass http://s3_backend;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
  }
}

```

### Exposing the web endpoint

The web endpoint is a bit more complicated to configure as it listens on many different `Host` fields.
To better understand the logic involved, you can refer to the [Exposing buckets as websites](@/documentation/cookbook/exposing-websites.md) section.
Also, for some applications, you may need to serve CORS headers: Garage cannot serve them directly but we show how we can use nginx to serve them.
You can use the following example as your starting point:

```nginx
server {
  listen [::]:443 http2 ssl;
  ssl_certificate /tmp/garage.crt;
  ssl_certificate_key /tmp/garage.key;

  # We list all the Hosts fields that can access our buckets
  server_name *.web.garage
              example.com
              my-site.tld
              ;

  location / {
    # Add these headers only if you want to allow CORS requests
    # For production use, more specific rules would be better for your security
    add_header Access-Control-Allow-Origin *;
    add_header Access-Control-Max-Age 3600;
    add_header Access-Control-Expose-Headers Content-Length;
    add_header Access-Control-Allow-Headers Range;

    # We do not forward OPTIONS requests to Garage
    # as it does not support them but they are needed for CORS.
    if ($request_method = OPTIONS) {
      return 200;
    }

    proxy_pass http://web_backend;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
@@ -158,7 +136,6 @@ server {
}
```

## Apache httpd

@TODO

@@ -27,8 +27,8 @@ your motivations for doing so in the PR message.
| | CompleteMultipartUpload |
| | AbortMultipartUpload |
| | UploadPart |
| | [*ListMultipartUploads*](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/103) |
| | [*ListParts*](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/103) |
| | ListMultipartUploads |
| | ListParts |
| **A-tier** | |
| | GetBucketCors |
| | PutBucketCors |

@@ -37,6 +37,7 @@ your motivations for doing so in the PR message.
| | GetBucketWebsite |
| | PutBucketWebsite |
| | DeleteBucketWebsite |
| | [PostObject](https://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectPOST.html) |
| ~~~~~~~~~~~~~~~~~~~~~~~~~~ | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ |
| **B-tier** | |
| | GetBucketAcl |