Improve the integrations section of the doc #153
8 changed files with 398 additions and 44 deletions
@@ -16,9 +16,11 @@
- [Integrations](./connect/index.md)
  - [Apps (Nextcloud, Peertube...)](./connect/apps.md)
  - [Websites (Hugo, Jekyll, Publii...)](./connect/websites.md)
  - [Repositories (Docker, Nix, Git...)](./connect/repositories.md)
  - [CLI tools (rclone, awscli, mc...)](./connect/cli.md)
  - [Backups (restic, duplicity...)](./connect/backup.md)
  - [Your code (PHP, JS, Go...)](./connect/code.md)
  - [FUSE (s3fs, goofys, s3backer...)](./connect/fs.md)
- [Reference Manual](./reference_manual/index.md)
@@ -35,6 +37,7 @@
  - [Setup your environment](./development/devenv.md)
  - [Development scripts](./development/scripts.md)
  - [Release process](./development/release_process.md)
  - [Miscellaneous notes](./development/miscellaneous_notes.md)
- [Working Documents](./working_documents/index.md)
  - [Load Balancing Data](./working_documents/load_balancing.md)
33  doc/book/src/connect/backup.md  Normal file
@@ -0,0 +1,33 @@
# Backups (restic, duplicity...)

Backups are essential for disaster recovery, but they are not trivial to manage.
Using Garage as your backup target lets you scale your storage as needed while ensuring high availability.

## Borg Backup

Borg Backup is very popular among backup tools, but it is not yet compatible with the S3 API.
We recommend using any other tool listed in this guide, as they are all compatible with the S3 API.
If you still want to use Borg, you can use it with `rclone mount`.
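For example, assuming you already have an rclone remote named `garage` configured as described on the [CLI tools](/connect/cli.html) page, a sketch of hosting a Borg repository on a mounted bucket might look like this (the `borg-repo` bucket name is a placeholder):

```shell
# mount the bucket through rclone (placeholder bucket name)
mkdir -p /mnt/borg
rclone mount --daemon garage:borg-repo /mnt/borg

# initialize a Borg repository on the mounted bucket
borg init --encryption=repokey /mnt/borg/my-machine

# create an archive
borg create /mnt/borg/my-machine::backup-{now} ~/Documents

# unmount when done
fusermount -u /mnt/borg
```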
## Restic

*External links:* [Restic Documentation > Amazon S3](https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html#amazon-s3)
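Restic can talk to Garage directly through its S3 backend. A minimal sketch, with placeholder endpoint, bucket and keys:

```shell
# placeholder credentials, endpoint and bucket; substitute your own
export AWS_ACCESS_KEY_ID=GKxxx
export AWS_SECRET_ACCESS_KEY=xxxx
export RESTIC_REPOSITORY="s3:http://localhost:3900/my-backups"
export RESTIC_PASSWORD="a-strong-repository-password"

# initialize the repository (only once)
restic init

# back up a directory
restic backup /var/lib/important-data

# list the snapshots stored in Garage
restic snapshots
```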
## Duplicity

*External links:* [Duplicity > man](https://duplicity.gitlab.io/duplicity-web/vers8/duplicity.1.html) (scroll to "URL Format" and "A note on Amazon S3")
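As a sketch, duplicity can target Garage with an S3 URL. The endpoint and bucket below are placeholders, and the exact URL format depends on your duplicity version, so check "URL Format" in the man page linked above:

```shell
# placeholder credentials, endpoint and bucket; substitute your own
export AWS_ACCESS_KEY_ID=GKxxx
export AWS_SECRET_ACCESS_KEY=xxxx

# back up a directory
duplicity /var/lib/important-data "s3://garage.example.com/my-backups"

# restore it later
duplicity restore "s3://garage.example.com/my-backups" /tmp/restored
```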
## Duplicati

*External links:* [Duplicati Documentation > Storage Providers](https://github.com/kees-z/DuplicatiDocs/blob/master/docs/05-storage-providers.md#s3-compatible)

## knoxite

*External links:* [Knoxite Documentation > Storage Backends](https://knoxite.com/docs/storage-backends/#amazon-s3)

## kopia

*External links:* [Kopia Documentation > Repositories](https://kopia.io/docs/repositories/#amazon-s3)
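A sketch of pointing kopia at Garage, with placeholder endpoint, bucket and keys (`--disable-tls` is only for a plain-HTTP endpoint):

```shell
# placeholder credentials, endpoint and bucket; substitute your own
kopia repository create s3 \
    --bucket my-backups \
    --access-key GKxxx \
    --secret-access-key xxxx \
    --endpoint localhost:3900 \
    --disable-tls

# take a snapshot of a directory
kopia snapshot create /var/lib/important-data

# list snapshots
kopia snapshot list
```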
@@ -90,10 +90,10 @@ aws s3 cp s3/my_files/cpuinfo.txt /tmp/cpuinfo.txt

## `rclone`

`rclone` can be configured using the interactive assistant invoked with `rclone config`.

You can also configure `rclone` by writing its configuration file directly.
Here is a template `rclone.ini` configuration file (mine is located at `~/.config/rclone/rclone.conf`):

```ini
[garage]
@@ -109,9 +109,25 @@ acl = private
bucket_acl = private
```

Now you can run:

```bash
# list buckets
rclone lsd garage:

# list the top-level directories of a bucket
rclone lsd garage:my-bucket

# copy from your filesystem to garage
echo hello world > /tmp/hello.txt
rclone copy /tmp/hello.txt garage:my-bucket/

# copy from garage to your filesystem
rclone copy garage:my-bucket/hello.txt .

# see all available subcommands
rclone help
```

## `s3cmd`

@@ -123,5 +139,28 @@ access_key = <access key>

```ini
secret_key = <secret key>
host_base = <endpoint without http(s)://>
host_bucket = <same as host_base>
use_https = <False or True>
```

And use it as follows:

```bash
# list buckets
s3cmd ls

# list objects inside a bucket
s3cmd ls s3://my-bucket

# copy from your filesystem to garage
echo hello world > /tmp/hello.txt
s3cmd put /tmp/hello.txt s3://my-bucket/

# copy from garage to your filesystem
s3cmd get s3://my-bucket/hello.txt hello.txt
```

## Cyberduck & duck

TODO
@@ -1 +1,79 @@
# Your code (PHP, JS, Go...)

If you are developing a new application, you may want to use Garage to store your users' media.

The S3 API that Garage uses is a standard REST API, so as long as you can make HTTP requests,
you can query it. You can check the [S3 REST API Reference](https://docs.aws.amazon.com/AmazonS3/latest/API/API_Operations_Amazon_Simple_Storage_Service.html) from Amazon to learn more.

Developing your own wrapper around the REST API is time consuming and complicated.
Instead, there are some libraries already available.

Some of them are maintained by Amazon, some by Minio, others by the community.
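To see how standard the API is, you can even query Garage with plain `curl`, which can sign requests with AWS signature v4 natively (since curl 7.75.0). The endpoint, bucket and keys below are placeholders:

```shell
# placeholder credentials, endpoint and bucket; substitute your own
# --aws-sigv4 makes curl sign the request with AWS Signature v4
# (format: provider1:provider2:region:service)
curl \
    --user 'GKxxx:xxxx' \
    --aws-sigv4 'aws:amz:garage:s3' \
    'http://localhost:3900/my-bucket/hello.txt'
```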
## PHP

- Amazon aws-sdk-php
  - [Installation](https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/getting-started_installation.html)
  - [Reference](https://docs.aws.amazon.com/aws-sdk-php/v3/api/api-s3-2006-03-01.html)
  - [Example](https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/s3-examples-creating-buckets.html)

## Javascript

- Minio SDK
  - [Reference](https://docs.min.io/docs/javascript-client-api-reference.html)

- Amazon aws-sdk-js
  - [Installation](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/getting-started.html)
  - [Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html)
  - [Example](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/s3-example-creating-buckets.html)

## Golang

- Minio minio-go-sdk
  - [Reference](https://docs.min.io/docs/golang-client-api-reference.html)

- Amazon aws-sdk-go-v2
  - [Installation](https://aws.github.io/aws-sdk-go-v2/docs/getting-started/)
  - [Reference](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/s3)
  - [Example](https://aws.github.io/aws-sdk-go-v2/docs/code-examples/s3/putobject/)

## Python

- Minio SDK
  - [Reference](https://docs.min.io/docs/python-client-api-reference.html)

- Amazon boto3
  - [Installation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html)
  - [Reference](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html)
  - [Example](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-uploading-files.html)

## Java

- Minio SDK
  - [Reference](https://docs.min.io/docs/java-client-api-reference.html)

- Amazon aws-sdk-java
  - [Installation](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/get-started.html)
  - [Reference](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3Client.html)
  - [Example](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/examples-s3-objects.html)

## Rust

- Amazon aws-rust-sdk
  - [Github](https://github.com/awslabs/aws-sdk-rust)

## .NET

- Minio SDK
  - [Reference](https://docs.min.io/docs/dotnet-client-api-reference.html)

- Amazon aws-dotnet-sdk

## C++

- Amazon aws-cpp-sdk

## Haskell

- Minio SDK
  - [Reference](https://docs.min.io/docs/haskell-client-api-reference.html)
68  doc/book/src/connect/fs.md  Normal file
@@ -0,0 +1,68 @@
# FUSE (s3fs, goofys, s3backer...)

**WARNING! Garage is not POSIX compatible.
Mounting S3 buckets as filesystems will not provide POSIX compatibility.
If you are not careful, you will lose or corrupt your data.**

Do not use these FUSE filesystems to store any database files (eg. MySQL, Postgresql, Mongo or sqlite),
any daemon cache (dovecot, openldap, gitea, etc.),
or more generally any software that uses locking, advanced filesystem features, or makes any synchronisation assumptions.
Ideally, avoid these solutions altogether for any serious or production use.

## rclone mount

rclone uses the same configuration whether it is used [as a CLI tool](/connect/cli.html) or in mount mode.
We suppose you have the following entry in your `rclone.ini` (mine is located at `~/.config/rclone/rclone.conf`):

```toml
[garage]
type = s3
provider = Other
env_auth = false
access_key_id = <access key>
secret_access_key = <secret key>
region = <region>
endpoint = <endpoint>
force_path_style = true
acl = private
bucket_acl = private
```

Then you can mount and access any bucket as follows:

```bash
# mount the bucket
mkdir /tmp/my-bucket
rclone mount --daemon garage:my-bucket /tmp/my-bucket

# set your working directory to the bucket
cd /tmp/my-bucket

# create a file
echo hello world > hello.txt

# access the file
cat hello.txt

# unmount the bucket
cd
fusermount -u /tmp/my-bucket
```

*External link:* [rclone documentation > rclone mount](https://rclone.org/commands/rclone_mount/)

## s3fs

*External link:* [s3fs github > README.md](https://github.com/s3fs-fuse/s3fs-fuse#examples)
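For instance, s3fs might be pointed at Garage as follows. The endpoint, bucket and keys are placeholders; `use_path_request_style` forces path-style requests, which Garage expects:

```shell
# placeholder credentials; substitute your own
echo 'GKxxx:xxxx' > ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs

# mount a bucket (placeholder endpoint and bucket name)
mkdir -p /tmp/my-bucket
s3fs my-bucket /tmp/my-bucket \
    -o url=http://localhost:3900 \
    -o use_path_request_style \
    -o passwd_file=$HOME/.passwd-s3fs
```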
## goofys

*External link:* [goofys github > README.md](https://github.com/kahing/goofys#usage)
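Similarly, a goofys invocation might look like this (placeholder endpoint and bucket; goofys reads credentials from the usual AWS environment variables):

```shell
# placeholder credentials, endpoint and bucket; substitute your own
export AWS_ACCESS_KEY_ID=GKxxx
export AWS_SECRET_ACCESS_KEY=xxxx

mkdir -p /tmp/my-bucket
goofys --endpoint http://localhost:3900 my-bucket /tmp/my-bucket
```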
## s3backer

*External link:* [s3backer github > manpage](https://github.com/archiecobbs/s3backer/wiki/ManPage)

## csi-s3

*External link:* [csi-s3 Github > README.md](https://github.com/ctrox/csi-s3)
@@ -1 +1,169 @@
# Repositories (Docker, Nix, Git...)

Whether you need to store and serve binary packages or source code, you may want to deploy a tool referred to as a repository or registry.
Garage can also help you serve this content.

## Gitea

You can use Garage with Gitea to store your [git LFS](https://git-lfs.github.com/) data, your users' avatars, and their attachments.
You can configure a different target for each data type (check the `[lfs]` and `[attachment]` sections of the Gitea documentation) and you can provide a default one through the `[storage]` section.

Let's start by creating a key and a bucket (your key id and secret will be needed later, so keep them somewhere):

```bash
garage key new --name gitea-key
garage bucket create gitea
garage bucket allow gitea --read --write --key gitea-key
```

Then you can edit your configuration (by default `/etc/gitea/conf/app.ini`):

```ini
[storage]
STORAGE_TYPE=minio
MINIO_ENDPOINT=localhost:3900
MINIO_ACCESS_KEY_ID=GKxxx
MINIO_SECRET_ACCESS_KEY=xxxx
MINIO_BUCKET=gitea
MINIO_LOCATION=garage
MINIO_USE_SSL=false
```

You can also pass this configuration through environment variables:

```bash
GITEA__storage__STORAGE_TYPE=minio
GITEA__storage__MINIO_ENDPOINT=localhost:3900
GITEA__storage__MINIO_ACCESS_KEY_ID=GKxxx
GITEA__storage__MINIO_SECRET_ACCESS_KEY=xxxx
GITEA__storage__MINIO_BUCKET=gitea
GITEA__storage__MINIO_LOCATION=garage
GITEA__storage__MINIO_USE_SSL=false
```

Then restart your Gitea instance and try to upload a custom avatar.
If it worked, you should see some content in your gitea bucket (you must configure your `aws` command first):

```
$ aws s3 ls s3://gitea/avatars/
2021-11-10 12:35:47     190034 616ba79ae2b84f565c33d72c2ec50861
```

*External link:* [Gitea Documentation > Configuration Cheat Sheet](https://docs.gitea.io/en-us/config-cheat-sheet/)
## Gitlab

*External link:* [Gitlab Documentation > Object storage](https://docs.gitlab.com/ee/administration/object_storage.html)

## Private NPM Registry (Verdaccio)

*External link:* [Verdaccio Github Repository > aws-storage plugin](https://github.com/verdaccio/verdaccio/tree/master/packages/plugins/aws-storage)

## Docker

Not yet compatible, follow [#103](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/103).

*External link:* [Docker Documentation > Registry storage drivers > S3 storage driver](https://docs.docker.com/registry/storage-drivers/s3/)
## Nix

Nix has no "repository" in its terminology: instead, it breaks this concept down into two parts: binary cache and channel.

**A channel** is a set of `.nix` definitions that generate definitions for all the software you want to serve.

Because we do not want all our clients to compile all these derivations by themselves,
we can compile them once and then serve them as part of our **binary cache**.

It is possible to use a **binary cache** without a channel; you only need to serve your nix definitions
through another medium, such as a git repository.

As a first step, we will need to create a bucket on Garage and enable website access on it:

```bash
garage key new --name nix-key
garage bucket create nix.example.com
garage bucket allow nix.example.com --read --write --key nix-key
garage bucket website nix.example.com --allow
```

If you need more information about exposing buckets as websites on Garage,
check [Exposing buckets as websites](/cookbook/exposing_websites.html)
and [Configuring a reverse proxy](/cookbook/reverse_proxy.html).

Next, we want to check that our bucket works:

```bash
echo nix repo > /tmp/index.html
mc cp /tmp/index.html garage/nix.example.com/
rm /tmp/index.html

curl https://nix.example.com
# output: nix repo
```

### Binary cache

To serve binaries as part of your cache, you need to sign them with a key specific to nix.
You can generate the keypair as follows:

```bash
nix-store --generate-binary-cache-key <name> cache-priv-key.pem cache-pub-key.pem
```

You can then manually sign the packages of your store with the following command:

```bash
nix sign-paths --all -k cache-priv-key.pem
```

Setting a key in `nix.conf` will do the signature at build time automatically without additional commands.
Edit the `nix.conf` of your builder:

```toml
secret-key-files = /etc/nix/cache-priv-key.pem
```

Now that your content is signed, you can copy a derivation to your cache.
For example, if you want to copy a specific derivation of your store:

```bash
nix copy /nix/store/wadmyilr414n7bimxysbny876i2vlm5r-bash-5.1-p8 --to 's3://nix.example.com?endpoint=garage.example.com&region=garage'
```

*Note that if you have not signed your packages, you can append `&secret-key=/etc/nix/cache-priv-key.pem` to the end of your S3 URL.*

Sometimes you don't want to hardcode this store path in your script.
Let's suppose that you are working on a codebase that you build with `nix-build`; you can then run:

```bash
nix copy $(nix-build) --to 's3://nix.example.com?endpoint=garage.example.com&region=garage'
```

*This command works because the only thing that `nix-build` outputs on stdout is the paths of the built derivations in your nix store.*

You can include your derivation's dependencies:

```bash
nix copy $(nix-store -qR $(nix-build)) --to 's3://nix.example.com?endpoint=garage.example.com&region=garage'
```

Now, your binary cache stores your derivation and all its dependencies.
Just inform your users that they must update their `nix.conf` file with the following lines:

```toml
substituters = https://cache.nixos.org https://nix.example.com
trusted-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY= nix.example.com:eTGL6kvaQn6cDR/F9lDYUIP9nCVR/kkshYfLDJf1yKs=
```

*You must re-add cache.nixos.org because redeclaring these keys overrides the previous configuration instead of extending it.*

Now, when your clients run `nix-build` or any other command that generates a derivation whose hash is already present
on the binary cache, they will download the result from the cache instead of compiling it, saving a lot of time and CPU!

### Channels

Channels additionally serve Nix definitions, i.e. a `.nix` file referencing
all the derivations you want to serve.
@@ -26,9 +26,10 @@ export AWS_ACCESS_KEY_ID=GKxxx

```bash
export AWS_SECRET_ACCESS_KEY=xxx
```

And finally build and deploy your website:

```bash
hugo
hugo deploy
```
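For reference, `hugo deploy` reads its target from the `[deployment]` section of your site configuration. If yours is not already set up earlier in this guide, a sketch might look like the following. The bucket, endpoint and region are placeholders, and the `endpoint` and `s3ForcePathStyle` query parameters are an assumption based on the Go CDK blob URLs that Hugo uses:

```toml
# placeholder values: bucket name, endpoint and region
[deployment]
[[deployment.targets]]
name = "garage"
URL = "s3://my-website?endpoint=https://garage.example.com&s3ForcePathStyle=true&region=garage"
```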
@@ -13,42 +13,6 @@ We have a simple [PR on cargo2nix](https://github.com/cargo2nix/cargo2nix/pull/2
Nix has no armv7 + musl toolchains but armv7l is backward compatible with armv6l.

Signing keys are generated with:

```
nix-store --generate-binary-cache-key nix.web.deuxfleurs.fr cache-priv-key.pem cache-pub-key.pem
```

We copy the secret key in our nix folder:

```
cp cache-priv-key.pem /etc/nix/signing-key.sec
```

Manually sign

We can sign the whole store with:

```
nix sign-paths --all -k /etc/nix/signing-key.sec
```

Or simply the current package and its dependencies with:

```
nix sign-paths --recursive -k /etc/nix/signing-key.sec
```

Setting a key in `nix.conf` will do the signature at build time automatically without additional commands; edit the `nix.conf` of your builder:

```toml
secret-key-files = /etc/nix/signing-key.sec
max-jobs = auto
cores = 8
```

Now you are ready to build your packages:

```bash
cat > $HOME/.awsrc <<EOF
export AWS_ACCESS_KEY_ID="xxx"