Compare commits
2 commits
a9a665e437
c430d8eaf1
347 changed files with 1483 additions and 41326 deletions
.gitignore (vendored, 1 change)

@@ -4,4 +4,3 @@ secrets/*
 cluster/*/secrets/*
 !cluster/*/secrets/*.sample
 
-adrn-notes/
README.md (183 changes)

@@ -1,55 +1,160 @@
 # Deuxfleurs on NixOS!
 
-This repository contains code to run Deuxfleurs' infrastructure on NixOS.
+This repository contains code to run Deuxfleur's infrastructure on NixOS.
 
-## Our abstraction stack
+It sets up the following:
 
-We try to build a generic abstraction stack between our different resources (CPU, RAM, disk, etc.) and our services (Chat, Storage, etc.), we develop our own tools when needed.
+- A Wireguard mesh between all nodes
+- Consul, with TLS
+- Nomad, with TLS
 
-Our first abstraction level is the NixOS level, which installs a bunch of standard components:
+## Configuring the OS
 
-* **Wireguard:** provides encrypted communication between remote nodes
-* **Nomad:** schedule containers and handle their lifecycle
-* **Consul:** distributed key value store + lock + service discovery
-* **Docker:** package, distribute and isolate applications
+This repo contains a bunch of scripts to configure NixOS on all cluster nodes.
+Most scripts are invoked with the following syntax:
 
-Then, inside our Nomad+Consul orchestrator, we deploy a number of base services:
+- for scripts that generate secrets: `./gen_<something> <cluster_name>` to generate the secrets to be used on cluster `<cluster_name>`
+- for deployment scripts:
+  - `./deploy_<something> <cluster_name>` to run the deployment script on all nodes of the cluster `<cluster_name>`
+  - `./deploy_<something> <cluster_name> <node1> <node2> ...` to run the deployment script only on nodes `node1, node2, ...` of cluster `<cluster_name>`.
 
-* Data management
-  * **[Garage](https://git.deuxfleurs.fr/Deuxfleurs/garage/):** S3-compatible lightweight object store for self-hosted geo-distributed deployments
-  * **Stolon + PostgreSQL:** distributed relational database
-* Network Control Plane
-  * **[DiploNAT](https://git.deuxfleurs.fr/Deuxfleurs/diplonat):** network automation (firewalling, upnp igd)
-  * **[D53](https://git.deuxfleurs.fr/lx/d53):** update DNS entries (A and AAAA) dynamically based on Nomad service scheduling and local node info
-  * **[Tricot](https://git.deuxfleurs.fr/Deuxfleurs/tricot):** a dynamic reverse proxy for nomad+consul inspired by traefik
-  * **[wgautomesh](https://git.deuxfleurs.fr/Deuxfleurs/wgautomesh):** a dynamic wireguard mesh configurator
-* User Management
-  * **[Bottin](https://git.deuxfleurs.fr/Deuxfleurs/bottin):** authentication and authorization (LDAP protocol, consul backend)
-  * **[Guichet](https://git.deuxfleurs.fr/Deuxfleurs/guichet):** a dashboard for our users and administrators
-* Observability
-  * **Prometheus + Grafana:** monitoring
+All deployment scripts can use the following parameters passed as environment variables:
 
-Some services we provide based on this abstraction:
+- `SUDO_PASS`: optionally, the password for `sudo` on cluster nodes. If not set, it will be asked at the beginning.
+- `SSH_USER`: optionally, the user to try to login using SSH. If not set, the username from your local machine will be used.
 
-* **Websites:** Garage (static) + fediverse blog (Plume)
-* **Chat:** Synapse + Element Web (Matrix protocol)
-* **Email:** Postfix SMTP + Dovecot IMAP + opendkim DKIM + Sogo webmail | Alps webmail (experimental)
-- **[Aerogramme](https://git.deuxfleurs.fr/Deuxfleurs/aerogramme/):** an encrypted IMAP server
-* **Visioconference:** Jitsi
-* **Collaboration:** CryptPad
+### Assumptions (how to setup your environment)
 
-As a generic abstraction is provided, deploying new services should be easy.
+- you have an SSH access to all of your cluster nodes (listed in `cluster/<cluster_name>/ssh_config`)
 
-## How to use this?
+- your account is in group `wheel` and you know its password (you need it to become root using `sudo`);
+  the password is the same on all cluster nodes (see below for password management tools)
 
-See the following documentation topics:
+- you have a clone of the secrets repository in your `pass` password store, for instance at `~/.password-store/deuxfleurs`
+  (scripts in this repo will read and write all secrets in `pass` under `deuxfleurs/cluster/<cluster_name>/`)
 
-- [Quick start and onboarding for new administrators](doc/onboarding.md)
-- [How to add new nodes to a cluster (rapid overview)](doc/adding-nodes.md)
-- [Architecture of this repo, how the scripts work](doc/architecture.md)
-- [List of TCP and UDP ports used by services](doc/ports)
-- [Why not Ansible?](doc/why-not-ansible.md)
+### Deploying the NixOS configuration
 
-## Got personal services in addition to Deuxfleurs at home?
+The NixOS configuration makes use of a certain number of files:
 
-Go check [`cluster/prod/register_external_services.sh`](./cluster/prod/register_external_services.sh). In bash, we register a redirect from Tricot to your own services or your personal reverse proxy.
+- files in `nix/` that are the same for all deployments on all clusters
+- the file `cluster/<cluster_name>/cluster.nix`, a Nix configuration file that is specific to the cluster but is copied the same on all cluster nodes
+- files in `cluster/<cluster_name>/site/`, which are specific to the various sites on which Nix nodes are deployed
+- files in `cluster/<cluster_name>/node/` which are specific to each node
 
+To deploy the NixOS configuration on the cluster, simply do:
+
+```
+./deploy_nixos <cluster_name>
+```
+
+or to deploy only on a single node:
+
+```
+./deploy_nixos <cluster_name> <node_name>
+```
+
+To upgrade NixOS, use the `./upgrade_nixos` script instead (it has the same syntax).
+
+**When adding a node to the cluster:** just do `./deploy_nixos <cluster_name> <name_of_new_node>`
+
+### Deploying Wesher
+
+We use Wesher to provide an encrypted overlay network between nodes in the cluster.
+This is useful in particular for securing services that are not able to do mTLS,
+but as a security-in-depth measure, we make all traffic go through Wesher even when
+TLS is done correctly. It is thus mandatory to have a working Wesher installation
+in the cluster for it to run correctly.
+
+First, if no Wesher shared secret key has been generated for this cluster yet,
+generate it with:
+
+```
+./gen_wesher_key <cluster_name>
+```
+
+This key will be stored in `pass`, so you must have a working `pass` installation
+for this script to run correctly.
+
+Then, deploy the key on all nodes with:
+
+```
+./deploy_wesher_key <cluster_name>
+```
+
+This should be done after `./deploy_nixos` has run successfully on all nodes.
+You should now have a working Wesher network between all your nodes!
+
+**When adding a node to the cluster:** just do `./deploy_wesher_key <cluster_name> <name_of_new_node>`
+
+### Generating and deploying a PKI for Consul and Nomad
+
+This is very similar to how we do for Wesher.
+
+First, if the PKI has not yet been created, create it with:
+
+```
+./gen_pki <cluster_name>
+```
+
+Then, deploy the PKI on all nodes with:
+
+```
+./deploy_pki <cluster_name>
+```
+
+**When adding a node to the cluster:** just do `./deploy_pki <cluster_name> <name_of_new_node>`
+
+### Adding administrators and password management
+
+Administrators are defined in the `cluster.nix` file for each cluster (they could also be defined in the site-specific Nix files if necessary).
+This is where their public SSH keys for remote access are put.
+
+Administrators will also need passwords to administrate the cluster, as we are not using passwordless sudo.
+To set the password for a new administrator, they must have a working `pass` installation as specified above.
+They must then run:
+
+```
+./passwd <cluster_name> <user_name>
+```
+
+to set their password in the `pass` database (the password is hashed, so other administrators cannot learn their password even if they have access to the `pass` db).
+
+Then, an administrator that already has root access must run the following (after syncing the `pass` db) to set the password correctly on all cluster nodes:
+
+```
+./deploy_passwords <cluster_name>
+```
+
+## Deploying stuff on Nomad
+
+### Connecting to Nomad
+
+Connect using SSH to one of the cluster nodes, forwarding port 14646 to port 4646 on localhost, and port 8501 to port 8501 on localhost.
+
+You can for instance use an entry in your `~/.ssh/config` that looks like this:
+
+```
+Host caribou
+	HostName 2a01:e0a:c:a720::23
+	LocalForward 14646 127.0.0.1:4646
+	LocalForward 8501 127.0.0.1:8501
+```
+
+Then, in a separate window, launch `./tlsproxy <cluster_name>`: this will
+launch `socat` proxies that strip the TLS layer and allow you to simply access
+Nomad and Consul on the regular, unencrypted URLs: `http://localhost:4646` for
+Nomad and `http://localhost:8500` for Consul. Keep this terminal window for as
+long as you need to access Nomad and Consul on the cluster.
+
+### Launching services
+
+Stuff should be started in this order:
+
+- `app/core`
+- `app/frontend`
+- `app/garage-staging`
+
+At this point, we are able to have a systemd service called `mountgarage` that mounts Garage buckets in `/mnt/garage-staging`. This is used by the following services that can be launched afterwards:
+
+- `app/im`
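Read end to end, the new README amounts to a bootstrap recipe. As a compact reference, here is a minimal sketch stringing together the scripts it names, in the order it prescribes; the cluster name `staging` and the node name `caribou` are illustrative (taken from the README's own examples), and a populated `pass` store plus SSH access are assumed:

```bash
# One-time cluster bootstrap, following the README above (names illustrative).
./deploy_nixos staging          # push the NixOS configuration to every node
./gen_wesher_key staging        # once per cluster: create the Wesher shared key
./deploy_wesher_key staging     # run after deploy_nixos has succeeded everywhere
./gen_pki staging               # once per cluster: create the Consul/Nomad PKI
./deploy_pki staging
./passwd staging jdoe           # a new admin sets their password in pass
./deploy_passwords staging      # an existing admin pushes passwords to all nodes

# Later, when a node joins the cluster, only the per-node variants are needed:
./deploy_nixos staging caribou
./deploy_wesher_key staging caribou
./deploy_pki staging caribou
```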
@@ -1,37 +1,41 @@
-job "core-diplonat" {
-  datacenters = ["neptune", "scorpio", "bespin", "corrin"]
+job "core" {
+  datacenters = ["dc1", "neptune"]
   type = "system"
   priority = 90
 
+  constraint {
+    attribute = "${attr.cpu.arch}"
+    value = "amd64"
+  }
 
   update {
-    max_parallel = 2
+    max_parallel = 1
     stagger = "1m"
   }
 
-  group "diplonat" {
+  group "network" {
     task "diplonat" {
       driver = "docker"
 
       config {
-        image = "lxpz/amd64_diplonat:7"
+        image = "lxpz/amd64_diplonat:3"
         network_mode = "host"
         readonly_rootfs = true
-        privileged = true
         volumes = [
          "secrets:/etc/diplonat",
         ]
       }
 
       restart {
-        interval = "5m"
-        attempts = 10
+        interval = "30m"
+        attempts = 2
         delay = "15s"
         mode = "delay"
       }
 
       template {
-        data = "{{ key \"secrets/consul/consul.crt\" }}"
-        destination = "secrets/consul.crt"
+        data = "{{ key \"secrets/consul/consul-ca.crt\" }}"
+        destination = "secrets/consul-ca.crt"
       }
 
       template {

@@ -49,8 +53,8 @@ job "core-diplonat" {
 DIPLONAT_REFRESH_TIME=60
 DIPLONAT_EXPIRATION_TIME=300
 DIPLONAT_CONSUL_NODE_NAME={{ env "attr.unique.hostname" }}
-DIPLONAT_CONSUL_URL=https://consul.service.prod.consul:8501
-DIPLONAT_CONSUL_TLS_SKIP_VERIFY=true
+DIPLONAT_CONSUL_URL=https://localhost:8501
+DIPLONAT_CONSUL_CA_CERT=/etc/diplonat/consul-ca.crt
 DIPLONAT_CONSUL_CLIENT_CERT=/etc/diplonat/consul-client.crt
 DIPLONAT_CONSUL_CLIENT_KEY=/etc/diplonat/consul-client.key
 RUST_LOG=debug

@@ -60,8 +64,7 @@ EOH
       }
 
       resources {
-        memory = 100
-        memory_max = 200
+        memory = 40
       }
     }
  }
app/dummy/deploy/.gitignore (vendored, new file, 1 line)

@@ -0,0 +1 @@
+dummy-volume.hcl
app/dummy/deploy/dummy-nginx.hcl (new file, 35 lines)

@@ -0,0 +1,35 @@
+job "dummy-nginx" {
+  datacenters = ["neptune"]
+  type = "service"
+
+  group "nginx" {
+    count = 1
+
+    network {
+      port "http" {
+        to = 80
+      }
+    }
+
+    task "nginx" {
+      driver = "docker"
+      config {
+        image = "nginx"
+        ports = [ "http" ]
+      }
+    }
+
+    service {
+      port = "http"
+      tags = [
+        "tricot home.adnab.me 100",
+      ]
+      check {
+        type = "http"
+        path = "/"
+        interval = "10s"
+        timeout = "2s"
+      }
+    }
+  }
+}
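The `dummy-nginx` job added above is the smallest instance of the service pattern used throughout this compare: Docker driver, a `tricot <domain>` service tag for routing, and an HTTP check. A hedged sketch of submitting it through the forwarded, TLS-stripped Nomad endpoint described in the README (assuming the SSH forward and `./tlsproxy` are running):

```bash
# Assumes the SSH LocalForward and ./tlsproxy setup from the README,
# so Nomad answers unencrypted on localhost:4646.
export NOMAD_ADDR=http://localhost:4646
nomad job plan app/dummy/deploy/dummy-nginx.hcl   # dry run; shows the placement diff
nomad job run app/dummy/deploy/dummy-nginx.hcl
nomad job status dummy-nginx
```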
@@ -1,45 +1,30 @@
-job "core-tricot" {
-  datacenters = ["neptune", "dathomir", "corrin", "bespin"]
-  type = "system"
+job "frontend" {
+  datacenters = ["neptune"]
+  type = "service"
   priority = 90
 
-  constraint {
-    attribute = "${attr.cpu.arch}"
-    value = "amd64"
-  }
-
-  update {
-    max_parallel = 1
-    stagger = "1m"
-  }
-
   group "tricot" {
     network {
       port "http_port" { static = 80 }
       port "https_port" { static = 443 }
-      port "metrics_port" { static = 9334 }
     }
 
     task "server" {
       driver = "docker"
 
       config {
-        image = "armael/tricot:8sa24l6pxdppb5gq0nnj9kvcl9mijliy-block_user_agent"
+        image = "lxpz/amd64_tricot:36"
         network_mode = "host"
         readonly_rootfs = true
         ports = [ "http_port", "https_port" ]
         volumes = [
           "secrets:/etc/tricot",
         ]
-        ulimit {
-          nofile = "65535:65535"
-        }
       }
 
      resources {
-        cpu = 500
+        cpu = 2000
         memory = 200
-        memory_max = 500
       }
 
       restart {

@@ -68,17 +53,12 @@ job "core-tricot" {
       data = <<EOH
 TRICOT_NODE_NAME={{ env "attr.unique.consul.name" }}
 TRICOT_LETSENCRYPT_EMAIL=alex@adnab.me
-#TRICOT_ENABLE_COMPRESSION=true
+TRICOT_ENABLE_COMPRESSION=true
 TRICOT_CONSUL_HOST=https://localhost:8501
 TRICOT_CONSUL_CA_CERT=/etc/tricot/consul-ca.crt
 TRICOT_CONSUL_CLIENT_CERT=/etc/tricot/consul-client.crt
 TRICOT_CONSUL_CLIENT_KEY=/etc/tricot/consul-client.key
-TRICOT_HTTP_BIND_ADDR=[::]:80
-TRICOT_HTTPS_BIND_ADDR=[::]:443
-TRICOT_METRICS_BIND_ADDR=[::]:9334
-TRICOT_WARMUP_CERT_MEMORY_STORE=true
 RUST_LOG=tricot=debug
-RUST_BACKTRACE=1
 EOH
       destination = "secrets/env"
       env = true

@@ -87,27 +67,14 @@ EOH
     service {
       name = "tricot-http"
       port = "http_port"
-      tags = [
-        "(diplonat (tcp_port 80))"
-      ]
+      tags = [ "(diplonat (tcp_port 80))" ]
       address_mode = "host"
     }
 
     service {
       name = "tricot-https"
       port = "https_port"
-      tags = [
-        "(diplonat (tcp_port 443))",
-        "d53-aaaa ${attr.unique.hostname}.machine.staging.deuxfleurs.org",
-        "d53-aaaa ${meta.site}.site.staging.deuxfleurs.org",
-        "d53-aaaa staging.deuxfleurs.org"
-      ]
-      address_mode = "host"
-    }
-
-    service {
-      name = "tricot-metrics"
-      port = "metrics_port"
+      tags = [ "(diplonat (tcp_port 443))" ]
       address_mode = "host"
     }
   }
 }
app/garage-staging/config/garage.toml (new file, 27 lines)

@@ -0,0 +1,27 @@
+block_size = 1048576
+
+metadata_dir = "/meta"
+data_dir = "/data"
+
+replication_mode = "3"
+
+rpc_bind_addr = "0.0.0.0:3991"
+rpc_secret = "{{ key "secrets/garage-staging/rpc_secret" | trimSpace }}"
+
+consul_host = "localhost:8500"
+consul_service_name = "garage-staging-rpc-self-advertised"
+
+bootstrap_peers = []
+
+[s3_api]
+s3_region = "garage-staging"
+api_bind_addr = "0.0.0.0:3990"
+
+[s3_web]
+bind_addr = "0.0.0.0:3992"
+root_domain = ".garage-staging-web.home.adnab.me"
+index = "index.html"
+
+[admin]
+api_bind_addr = "0.0.0.0:3909"
+trace_sink = "http://{{ env "attr.unique.network.ip-address" }}:4317"
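This configuration exposes the S3 API on port 3990 with region `garage-staging`. As a smoke test, here is a hedged sketch in the same `aws` CLI style the repository's backup notes use; the credentials and bucket name are placeholders that must first be created in Garage:

```bash
# Placeholders: a key must exist in Garage and be granted access on the bucket.
export AWS_ACCESS_KEY_ID=GK...
export AWS_SECRET_ACCESS_KEY=...
aws --endpoint-url http://localhost:3990 --region garage-staging s3 ls
aws --endpoint-url http://localhost:3990 --region garage-staging s3 ls s3://my-bucket
```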
app/garage-staging/deploy/garage.hcl (new file, 139 lines)

@@ -0,0 +1,139 @@
+job "garage-staging" {
+  type = "system"
+  #datacenters = [ "neptune", "pluton" ]
+  datacenters = [ "neptune" ]
+
+  priority = 80
+
+  constraint {
+    attribute = "${attr.cpu.arch}"
+    value = "amd64"
+  }
+
+  group "garage-staging" {
+    network {
+      port "s3" { static = 3990 }
+      port "rpc" { static = 3991 }
+      port "web" { static = 3992 }
+      port "admin" { static = 3909 }
+    }
+
+    update {
+      max_parallel = 1
+      min_healthy_time = "30s"
+      healthy_deadline = "5m"
+    }
+
+    task "server" {
+      driver = "docker"
+
+      config {
+        image = "dxflrs/amd64_garage:v0.7.0"
+        command = "/garage"
+        args = [ "server" ]
+        network_mode = "host"
+        volumes = [
+          "/mnt/storage/garage-staging/data:/data",
+          "/mnt/ssd/garage-staging/meta:/meta",
+          "secrets/garage.toml:/etc/garage.toml",
+        ]
+      }
+
+      template {
+        data = file("../config/garage.toml")
+        destination = "secrets/garage.toml"
+      }
+
+      resources {
+        memory = 1000
+        cpu = 1000
+      }
+
+      kill_signal = "SIGINT"
+      kill_timeout = "20s"
+
+      service {
+        tags = [
+          "garage-staging-api",
+          "tricot garage-staging.home.adnab.me",
+          "tricot-add-header Access-Control-Allow-Origin *",
+        ]
+        port = 3990
+        address_mode = "driver"
+        name = "garage-staging-api"
+        check {
+          type = "tcp"
+          port = 3990
+          address_mode = "driver"
+          interval = "60s"
+          timeout = "5s"
+          check_restart {
+            limit = 3
+            grace = "90s"
+            ignore_warnings = false
+          }
+        }
+      }
+
+      service {
+        tags = ["garage-staging-rpc"]
+        port = 3991
+        address_mode = "driver"
+        name = "garage-staging-rpc"
+        check {
+          type = "tcp"
+          port = 3991
+          address_mode = "driver"
+          interval = "60s"
+          timeout = "5s"
+          check_restart {
+            limit = 3
+            grace = "90s"
+            ignore_warnings = false
+          }
+        }
+      }
+
+      service {
+        tags = [
+          "garage-staging-web",
+          "tricot *.garage-staging-web.home.adnab.me",
+          "tricot matrix.home.adnab.me/.well-known/matrix/server",
+          "tricot rust-docs",
+          "tricot-add-header Access-Control-Allow-Origin *",
+        ]
+        port = 3992
+        address_mode = "driver"
+        name = "garage-staging-web"
+        check {
+          type = "tcp"
+          port = 3992
+          address_mode = "driver"
+          interval = "60s"
+          timeout = "5s"
+          check_restart {
+            limit = 3
+            grace = "90s"
+            ignore_warnings = false
+          }
+        }
+      }
+
+      service {
+        tags = [
+          "garage-staging-admin",
+        ]
+        port = 3909
+        address_mode = "driver"
+        name = "garage-staging-admin"
+      }
+
+      restart {
+        interval = "30m"
+        attempts = 10
+        delay = "15s"
+        mode = "delay"
+      }
+    }
+  }
+}
app/garage-staging/secrets/garage-staging/rpc_secret (new file, 1 line)

@@ -0,0 +1 @@
+CMD_ONCE openssl rand -hex 32
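The leading `CMD_ONCE` token reads like a directive for the secret-management tooling (see `app/secretmgr.py` below): generate the value once with the given command. The Nomad template resolves `{{ key "secrets/garage-staging/rpc_secret" }}` from Consul, so a hand-rolled equivalent would look like the following sketch (it is an assumption that the tooling ultimately stores the value in Consul KV at that path):

```bash
# Manual stand-in for the CMD_ONCE directive; assumes secrets are synced into
# Consul KV at the path the Nomad template reads.
consul kv put secrets/garage-staging/rpc_secret "$(openssl rand -hex 32)"
consul kv get secrets/garage-staging/rpc_secret
```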
@@ -1,4 +1,4 @@
-FROM amd64/debian:trixie AS builder
+FROM amd64/debian:buster as builder
 
 ARG VERSION
 ARG S3_VERSION

@@ -16,34 +16,29 @@ RUN apt-get update && \
     libjpeg62-turbo-dev \
     libxml2-dev \
     zlib1g-dev \
-    rustc \
-    cargo \
     # postgresql-dev \
     libpq-dev \
     virtualenv \
     libxslt1-dev \
     git
 
-RUN virtualenv /root/matrix-env -p /usr/bin/python3 && \
-    . /root/matrix-env/bin/activate && \
+RUN virtualenv /root/matrix-env -p /usr/bin/python3
+RUN . /root/matrix-env/bin/activate && \
     pip3 install \
-      https://github.com/element-hq/synapse/archive/${VERSION}.tar.gz#egg=matrix-synapse[matrix-synapse-ldap3,postgres,resources.consent,saml2,url_preview] && \
+      https://github.com/matrix-org/synapse/archive/v${VERSION}.tar.gz#egg=matrix-synapse[matrix-synapse-ldap3,postgres,resources.consent,saml2,url_preview] && \
     pip3 install \
       git+https://github.com/matrix-org/synapse-s3-storage-provider.git@${S3_VERSION}
 
-# WARNING: trixie is not an LTS
-# but we have to use the same version as the builder,
-# and the builder needs a rustc version that is not in bookworm (the latest LTS at the time of writing)
-FROM amd64/debian:trixie
+FROM amd64/debian:buster
 
 RUN apt-get update && \
     apt-get -qq -y full-upgrade && \
     apt-get install -y \
       python3 \
-      python3-setuptools \
-      libffi8 \
+      python3-distutils \
+      libffi6 \
       libjpeg62-turbo \
-      libssl3 \
+      libssl1.1 \
       libxslt1.1 \
       libpq5 \
       zlib1g \

@@ -53,6 +48,7 @@ RUN apt-get update && \
 ENV LD_PRELOAD /usr/lib/x86_64-linux-gnu/libjemalloc.so.2
 COPY --from=builder /root/matrix-env /root/matrix-env
 COPY matrix-s3-async /usr/local/bin/matrix-s3-async
+COPY matrix-s3-async-sqlite /usr/local/bin/matrix-s3-async-sqlite
 COPY entrypoint.sh /usr/local/bin/entrypoint
 
 ENTRYPOINT ["/usr/local/bin/entrypoint"]
app/im/build/matrix-synapse/matrix-s3-async-sqlite (new executable file, 13 lines)

@@ -0,0 +1,13 @@
+#!/bin/bash
+
+cat > database.yaml <<EOF
+sqlite:
+  database: $SYNAPSE_SQLITE_DB
+EOF
+
+while true; do
+  /root/matrix-env/bin/s3_media_upload update-db 0d
+  /root/matrix-env/bin/s3_media_upload --no-progress check-deleted $SYNAPSE_MEDIA_STORE
+  /root/matrix-env/bin/s3_media_upload --no-progress upload $SYNAPSE_MEDIA_STORE $SYNAPSE_MEDIA_S3_BUCKET --delete --endpoint-url $S3_ENDPOINT
+  sleep 600
+done
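The loop above takes its configuration entirely from the environment. A hedged sketch of the variables it needs, with names taken from the script itself and from the `media-async-upload` task's template further down; all values are illustrative placeholders:

```bash
export SYNAPSE_SQLITE_DB=/ephemeral/homeserver.db   # restored there by litestream
export SYNAPSE_MEDIA_STORE=/ephemeral/media_store   # assumed media path
export SYNAPSE_MEDIA_S3_BUCKET=synapse-media        # placeholder bucket name
export AWS_ACCESS_KEY_ID=...                        # from secrets/synapse/s3_access_key
export AWS_SECRET_ACCESS_KEY=...                    # from secrets/synapse/s3_secret_key
export AWS_DEFAULT_REGION=garage-staging
export S3_ENDPOINT=http://localhost:3990            # the garage-staging S3 API
```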
@@ -7,7 +7,7 @@ job "im" {
 
   network {
     port "http" {
-      static = 8008
+      to = 8008
     }
   }
 

@@ -26,64 +26,46 @@ job "im" {
       sidecar = false
     }
 
-    driver = "nix2"
+    driver = "docker"
     config {
-      packages = [
-        "#litestream"
-      ]
-      command = "litestream"
+      image = "litestream/litestream:0.3.7"
       args = [
         "restore", "-config", "/etc/litestream.yml", "/ephemeral/homeserver.db"
       ]
-      bind = {
-        "../alloc/data" = "/ephemeral",
-      }
+      volumes = [
+        "../alloc/data:/ephemeral",
+        "secrets/litestream.yml:/etc/litestream.yml"
+      ]
     }
 
     template {
       data = file("../config/litestream.yml")
-      destination = "etc/litestream.yml"
+      destination = "secrets/litestream.yml"
     }
 
     resources {
-      memory = 100
-      memory_max = 500
+      memory = 200
      cpu = 1000
     }
   }
 
   task "synapse" {
-    driver = "nix2"
+    driver = "docker"
     config {
-      nixpkgs = "github:nixos/nixpkgs/nixos-23.11"
-      packages = [
-        "#cacert",
-        "#bash",
-        "#coreutils",
-        "#sqlite",
-        ".#synapse",
-      ]
-      command = "synapse_homeserver"
+      image = "lxpz/amd64_synapse:1.49.2-3"
+      ports = [ "http" ]
+      command = "python"
       args = [
+        "-m", "synapse.app.homeserver",
         "-n",
         "-c", "/etc/matrix-synapse/homeserver.yaml"
       ]
-      bind = {
-        "./secrets" = "/etc/matrix-synapse",
-        "../alloc/data" = "/ephemeral",
-      }
-    }
-    env = {
-      SSL_CERT_FILE = "/etc/ssl/certs/ca-bundle.crt"
-    }
 
-    template {
-      data = file("flake.nix")
-      destination = "flake.nix"
-    }
-    template {
-      data = file("flake.lock")
-      destination = "flake.lock"
+      volumes = [
+        "secrets:/etc/matrix-synapse",
+        "../alloc/data:/ephemeral",
+      ]
     }
 
     template {

@@ -102,8 +84,7 @@ job "im" {
     }
 
     resources {
-      memory = 2000
-      memory_max = 3000
+      memory = 2500
       cpu = 1000
     }
 

@@ -124,37 +105,21 @@ job "im" {
     }
 
     task "media-async-upload" {
-      driver = "nix2"
+      driver = "docker"
 
       config {
-        packages = [
-          "#bash",
-          "#coreutils",
-          ".#matrix_s3_async_sqlite",
+        image = "lxpz/amd64_synapse:1.49.2-4"
+        readonly_rootfs = true
+        command = "/usr/local/bin/matrix-s3-async-sqlite"
+        work_dir = "/ephemeral"
+        volumes = [
+          "../alloc/data:/ephemeral",
         ]
-        command = "sh"
-        args = [
-          "-c",
-          "cd /ephemeral; matrix-s3-async-sqlite"
-        ]
-        bind = {
-          "../alloc/data" = "/ephemeral",
-        }
-      }
 
-      template {
-        data = file("flake.nix")
-        destination = "flake.nix"
-      }
-      template {
-        data = file("flake.lock")
-        destination = "flake.lock"
       }
 
       resources {
         cpu = 100
-        memory = 100
-        memory_max = 500
+        memory = 200
       }
 
       template {

@@ -166,6 +131,7 @@ AWS_ACCESS_KEY_ID={{ key "secrets/synapse/s3_access_key" | trimSpace }}
 AWS_SECRET_ACCESS_KEY={{ key "secrets/synapse/s3_secret_key" | trimSpace }}
 AWS_DEFAULT_REGION=garage-staging
 S3_ENDPOINT=http://{{ env "attr.unique.network.ip-address" }}:3990
+
 EOH
       destination = "secrets/env"
       env = true

@@ -173,28 +139,25 @@ EOH
     }
 
     task "replicate-db" {
-      driver = "nix2"
+      driver = "docker"
       config {
-        packages = [
-          "#litestream"
-        ]
-        command = "litestream"
+        image = "litestream/litestream:0.3.7"
         args = [
           "replicate", "-config", "/etc/litestream.yml"
         ]
-        bind = {
-          "../alloc/data" = "/ephemeral",
-        }
+        volumes = [
+          "../alloc/data:/ephemeral",
+          "secrets/litestream.yml:/etc/litestream.yml"
+        ]
       }
 
       template {
         data = file("../config/litestream.yml")
-        destination = "etc/litestream.yml"
+        destination = "secrets/litestream.yml"
       }
 
       resources {
-        memory = 500
-        memory_max = 500
+        memory = 200
         cpu = 100
       }
     }
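The job wires Litestream in the usual restore-then-replicate shape: a prestart task restores `/ephemeral/homeserver.db` from object storage, and the `replicate-db` task streams changes back. The referenced `../config/litestream.yml` is not part of this compare; here is a hypothetical sketch in standard Litestream syntax, where every value is a placeholder:

```bash
# Hypothetical ../config/litestream.yml; not taken from this compare.
cat > litestream.yml <<'EOF'
dbs:
  - path: /ephemeral/homeserver.db
    replicas:
      - type: s3
        endpoint: http://localhost:3990
        bucket: synapse-db
        path: homeserver.db
EOF
```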
app/im/secrets/synapse/form_secret (new file, 1 line)

@@ -0,0 +1 @@
+USER Synapse's `form_secret` configuration parameter

app/im/secrets/synapse/macaroon_secret_key (new file, 1 line)

@@ -0,0 +1 @@
+USER Synapse's `macaroon_secret_key` parameter

app/im/secrets/synapse/registration_shared_secret (new file, 1 line)

@@ -0,0 +1 @@
+USER Synapse's `registration_shared_secret` parameter

app/im/secrets/synapse/s3_access_key (new file, 1 line)

@@ -0,0 +1 @@
+USER S3 access key ID for database storage

app/im/secrets/synapse/s3_secret_key (new file, 1 line)

@@ -0,0 +1 @@
+USER S3 secret key for database storage

app/im/secrets/synapse/signing_key (new file, 1 line)

@@ -0,0 +1 @@
+USER Signing key for messages

app/secretmgr.py (new symbolic link)

@@ -0,0 +1 @@
+../../infrastructure/app/secretmgr.py
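These one-line files are descriptors rather than secret values: the leading token (`USER` here, `CMD_ONCE` for the Garage RPC secret) appears to tell `app/secretmgr.py` how each secret is obtained, while the actual values live in the administrators' `pass` store as described in the README. A sketch of reading one by hand, assuming the `deuxfleurs/cluster/<cluster_name>/` layout the README documents (the exact sub-path is an assumption):

```bash
pass show deuxfleurs/cluster/staging/synapse/s3_access_key
```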
@@ -8,8 +8,8 @@ output.elasticsearch:
   # In case you specify and additional path, the scheme is required: `http://localhost:9200/path`.
   # IPv6 addresses should always be defined as: `https://[2001:db8::1]:9200`.
   hosts: ["localhost:9200"]
-  username: "elastic"
-  password: "{{ key "secrets/telemetry/elastic_passwords/elastic" }}"
+  username: "apm"
+  password: "{{ key "secrets/telemetry/elastic_passwords/apm" }}"
 
 instrumentation:
   enabled: true
@@ -5,13 +5,13 @@ datasources:
     type: elasticsearch
     access: proxy
     url: http://localhost:9200
-    password: '{{ key "secrets/telemetry/elastic_passwords/elastic" }}'
-    user: 'elastic'
-    database: metrics-*
+    password: '{{ key "secrets/telemetry/elastic_passwords/grafana" }}'
+    user: 'grafana'
+    database: apm-*
     basicAuth: false
     isDefault: true
     jsonData:
-      esVersion: "8.2.0"
+      esVersion: "7.10.0"
       includeFrozen: false
       logLevelField: ''
       logMessageField: ''
@@ -15,11 +15,10 @@ job "telemetry-system" {
     task "elastic" {
       driver = "docker"
       config {
-        image = "docker.elastic.co/elasticsearch/elasticsearch:8.2.0"
+        image = "docker.elastic.co/elasticsearch/elasticsearch:7.17.0"
         network_mode = "host"
         volumes = [
           "/mnt/ssd/telemetry/es_data:/usr/share/elasticsearch/data",
-          "secrets/elastic-certificates.p12:/usr/share/elasticsearch/config/elastic-certificates.p12",
         ]
         ports = [ "elastic", "elastic_internal" ]
         sysctl = {

@@ -30,25 +29,18 @@ job "telemetry-system" {
         }
       }
 
-      user = "1000"
-
       resources {
         memory = 1500
         cpu = 500
       }
 
-      template {
-        data = "{{ key \"secrets/telemetry/elasticsearch/elastic-certificates.p12\" }}"
-        destination = "secrets/elastic-certificates.p12"
-      }
-
       template {
         data = <<EOH
 node.name={{ env "attr.unique.hostname" }}
 http.port=9200
 transport.port=9300
 cluster.name=es-deuxfleurs
-cluster.initial_master_nodes=carcajou,caribou,cariacou
+cluster.initial_master_nodes=caribou,cariacou,carcajou
 discovery.seed_hosts=carcajou,caribou,cariacou
 bootstrap.memory_lock=true
 xpack.security.enabled=true

@@ -56,8 +48,8 @@ xpack.security.authc.api_key.enabled=true
 xpack.security.transport.ssl.enabled=true
 xpack.security.transport.ssl.verification_mode=certificate
 xpack.security.transport.ssl.client_authentication=required
-xpack.security.transport.ssl.keystore.path=/usr/share/elasticsearch/config/elastic-certificates.p12
-xpack.security.transport.ssl.truststore.path=/usr/share/elasticsearch/config/elastic-certificates.p12
+xpack.security.transport.ssl.keystore.path=/usr/share/elasticsearch/data/elastic-certificates.p12
+xpack.security.transport.ssl.truststore.path=/usr/share/elasticsearch/data/elastic-certificates.p12
 cluster.routing.allocation.disk.watermark.high=75%
 cluster.routing.allocation.disk.watermark.low=65%
 ES_JAVA_OPTS=-Xms512M -Xmx512M

@@ -101,7 +93,7 @@ EOH
       }
 
       resources {
-        memory = 100
+        memory = 200
         cpu = 100
       }
     }

@@ -109,7 +101,7 @@ EOH
     task "apm" {
       driver = "docker"
       config {
-        image = "docker.elastic.co/apm/apm-server:8.2.0"
+        image = "docker.elastic.co/apm/apm-server:7.17.1"
         network_mode = "host"
         ports = [ "apm" ]
         args = [ "--strict.perms=false" ]

@@ -124,7 +116,7 @@ EOH
       }
 
       resources {
-        memory = 100
+        memory = 200
         cpu = 100
       }
     }

@@ -152,7 +144,7 @@ EOH
     task "filebeat" {
       driver = "docker"
       config {
-        image = "docker.elastic.co/beats/filebeat:8.2.0"
+        image = "docker.elastic.co/beats/filebeat:7.17.1"
         network_mode = "host"
         volumes = [
           "/mnt/ssd/telemetry/filebeat:/usr/share/filebeat/data",

@@ -171,11 +163,6 @@ EOH
         data = file("../config/filebeat.yml")
         destination = "secrets/filebeat.yml"
       }
-
-      resources {
-        memory = 100
-        cpu = 100
-      }
     }
   }
 }
@@ -14,7 +14,7 @@ job "telemetry" {
     task "kibana" {
       driver = "docker"
       config {
-        image = "docker.elastic.co/kibana/kibana:8.2.0"
+        image = "docker.elastic.co/kibana/kibana:7.17.0"
         network_mode = "host"
         ports = [ "kibana" ]
       }

@@ -39,7 +39,7 @@ EOH
       service {
         tags = [
           "kibana",
-          "tricot kibana.staging.deuxfleurs.org",
+          "tricot kibana.home.adnab.me",
         ]
         port = 5601
         address_mode = "driver"

@@ -133,7 +133,7 @@ EOH
       service {
         tags = [
           "grafana",
-          "tricot grafana.staging.deuxfleurs.org",
+          "tricot grafana.home.adnab.me",
         ]
         port = 3333
         address_mode = "driver"
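On both sides of this diff, Elasticsearch keeps X-Pack security enabled; the new side just switches APM and Grafana to dedicated accounts instead of `elastic`. A hedged health probe against the node-local port used throughout these configs, reading the superuser password from the Consul path they reference:

```bash
ES_PASS=$(consul kv get secrets/telemetry/elastic_passwords/elastic)
curl -u "elastic:$ES_PASS" "http://localhost:9200/_cluster/health?pretty"
```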
@@ -1,32 +0,0 @@
-## Restoring locally a PSQL backup made by Nomad (backup-weekly.hcl)
-
-```bash
-export AWS_BUCKET=backups-pgbasebackup
-export AWS_ENDPOINT=s3.deuxfleurs.shirokumo.net
-export AWS_ACCESS_KEY_ID=$(consul kv get "secrets/postgres/backup/aws_access_key_id")
-export AWS_SECRET_ACCESS_KEY=$(consul kv get secrets/postgres/backup/aws_secret_access_key)
-export CRYPT_PUBLIC_KEY=$(consul kv get secrets/postgres/backup/crypt_public_key)
-```
-
-And here is the result:
-
-```bash
-$ aws s3 --endpoint https://$AWS_ENDPOINT ls
-2022-04-14 17:00:50 backups-pgbasebackup
-
-$ aws s3 --endpoint https://$AWS_ENDPOINT ls s3://backups-pgbasebackup
-    PRE 2024-07-28 00:00:36.140539/
-    PRE 2024-08-04 00:00:21.291551/
-    PRE 2024-08-11 00:00:26.589762/
-    PRE 2024-08-18 00:00:40.873939/
-    PRE 2024-08-25 01:03:54.672763/
-    PRE 2024-09-01 00:00:20.019605/
-    PRE 2024-09-08 00:00:16.969740/
-    PRE 2024-09-15 00:00:37.951459/
-    PRE 2024-09-22 00:00:21.030452/
-
-$ aws s3 --endpoint https://$AWS_ENDPOINT ls "s3://backups-pgbasebackup/2024-09-22 00:00:21.030452/"
-2024-09-22 03:23:28     623490 backup_manifest
-2024-09-22 03:25:32 6037121487 base.tar.gz
-2024-09-22 03:25:33   19948939 pg_wal.tar.gz
-```
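The removed note stops at listing the backup objects. Since `backup-psql.py` (removed further below) encrypts each file with `age` to `CRYPT_PUBLIC_KEY` before upload, despite the plain object names, a restore also needs a download-and-decrypt step; a hedged sketch, where the identity file path is illustrative and must correspond to `CRYPT_PUBLIC_KEY`:

```bash
aws s3 --endpoint https://$AWS_ENDPOINT cp \
  "s3://backups-pgbasebackup/2024-09-22 00:00:21.030452/base.tar.gz" base.tar.gz.age
age -d -i backup_key.txt base.tar.gz.age > base.tar.gz
mkdir -p restored-pgdata && tar xzf base.tar.gz -C restored-pgdata
```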
@@ -1,28 +0,0 @@
-FROM golang:buster as builder
-
-WORKDIR /root
-RUN git clone https://filippo.io/age && cd age/cmd/age && go build -o age .
-
-FROM amd64/debian:buster
-
-COPY --from=builder /root/age/cmd/age/age /usr/local/bin/age
-
-RUN apt-get update && \
-    apt-get -qq -y full-upgrade && \
-    apt-get install -y rsync wget openssh-client unzip && \
-    apt-get clean && \
-    rm -f /var/lib/apt/lists/*_*
-
-RUN mkdir -p /root/.ssh
-WORKDIR /root
-
-RUN wget https://releases.hashicorp.com/consul/1.8.5/consul_1.8.5_linux_amd64.zip && \
-    unzip consul_1.8.5_linux_amd64.zip && \
-    chmod +x consul && \
-    mv consul /usr/local/bin && \
-    rm consul_1.8.5_linux_amd64.zip
-
-COPY do_backup.sh /root/do_backup.sh
-
-CMD "/root/do_backup.sh"
@@ -1,20 +0,0 @@
-#!/bin/sh
-
-set -x -e
-
-cd /root
-
-chmod 0600 .ssh/id_ed25519
-
-cat > .ssh/config <<EOF
-Host backuphost
-	HostName $TARGET_SSH_HOST
-	Port $TARGET_SSH_PORT
-	User $TARGET_SSH_USER
-EOF
-
-consul kv export | \
-	gzip | \
-	age -r "$(cat /root/.ssh/id_ed25519.pub)" | \
-	ssh backuphost "cat > $TARGET_SSH_DIR/consul/$(date --iso-8601=minute)_consul_kv_export.gz.age"
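The removed export pipeline (`consul kv export | gzip | age | ssh`) has a natural mirror image for restores; a sketch, assuming the same SSH key serves as the age decryption identity (age accepts ssh-ed25519 keys as identities) and using a placeholder file name:

```bash
ssh backuphost "cat $TARGET_SSH_DIR/consul/<timestamp>_consul_kv_export.gz.age" \
  | age -d -i /root/.ssh/id_ed25519 \
  | gunzip \
  | consul kv import -
```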
@@ -1,7 +0,0 @@
-FROM alpine:3.17
-
-RUN apk add rclone curl bash jq
-
-COPY do-backup.sh /do-backup.sh
-
-CMD bash /do-backup.sh
@@ -1,83 +0,0 @@
-#!/usr/bin/env bash
-
-# DESCRIPTION:
-#   Script to backup all buckets on a Garage cluster using rclone.
-#
-# REQUIREMENTS:
-#   An access key for the backup script must be created in Garage beforehand.
-#   This script will use the Garage administration API to grant read access
-#   to this key on all buckets.
-#
-#   A rclone configuration file is expected to be located at `/etc/secrets/rclone.conf`,
-#   which contains credentials to the following two remotes:
-#     garage: the Garage server, for read access (using the backup access key)
-#     backup: the backup location
-#
-# DEPENDENCIES: (see Dockerfile)
-#   curl
-#   jq
-#   rclone
-#
-# PARAMETERS (environment variables)
-#   $GARAGE_ADMIN_API_URL => Garage administration API URL (e.g. http://localhost:3903)
-#   $GARAGE_ADMIN_TOKEN   => Garage administration access token
-#   $GARAGE_ACCESS_KEY    => Garage access key ID
-#   $TARGET_BACKUP_DIR    => Folder on the backup remote where to store buckets
-
-if [ -z "$GARAGE_ACCESS_KEY" -o -z "$GARAGE_ADMIN_TOKEN" -o -z "$GARAGE_ADMIN_API_URL" ]; then
-  echo "Missing parameters"
-fi
-
-# copy potentially immutable file to a mutable location,
-# otherwise rclone complains
-mkdir -p /root/.config/rclone
-cp /etc/secrets/rclone.conf /root/.config/rclone/rclone.conf
-
-function gcurl {
-  curl -s -H "Authorization: Bearer $GARAGE_ADMIN_TOKEN" $@
-}
-
-BUCKETS=$(gcurl "$GARAGE_ADMIN_API_URL/v0/bucket" | jq -r '.[].id')
-
-mkdir -p /tmp/buckets-info
-
-for BUCKET in $BUCKETS; do
-  echo "==== BUCKET $BUCKET ===="
-
-  gcurl "http://localhost:3903/v0/bucket?id=$BUCKET" > "/tmp/buckets-info/$BUCKET.json"
-  rclone copy "/tmp/buckets-info/$BUCKET.json" "backup:$TARGET_BACKUP_DIR/" 2>&1
-
-  ALIASES=$(jq -r '.globalAliases[]' < "/tmp/buckets-info/$BUCKET.json")
-  echo "(aka. $ALIASES)"
-
-  case $ALIASES in
-    *backup*)
-      echo "Skipping $BUCKET (not doing backup of backup)"
-      ;;
-    *cache*)
-      echo "Skipping $BUCKET (not doing backup of cache)"
-      ;;
-    *)
-      echo "Backing up $BUCKET"
-
-      gcurl -X POST -H "Content-Type: application/json" --data @- "http://localhost:3903/v0/bucket/allow" >/dev/null <<EOF
-{
-  "bucketId": "$BUCKET",
-  "accessKeyId": "$GARAGE_ACCESS_KEY",
-  "permissions": {"read": true}
-}
-EOF
-
-      rclone sync \
-        --transfers 32 \
-        --fast-list \
-        --stats-one-line \
-        --stats 10s \
-        --stats-log-level NOTICE \
-        "garage:$BUCKET" "backup:$TARGET_BACKUP_DIR/$BUCKET" 2>&1
-      ;;
-  esac
-done
-
-echo "========= DONE SYNCHRONIZING =========="
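The removed backup script assumed two preconfigured rclone remotes. A hypothetical `/etc/secrets/rclone.conf` matching what it expects; every value is a placeholder, and the `backup` remote could be any rclone backend:

```bash
cat > /etc/secrets/rclone.conf <<'EOF'
[garage]
type = s3
provider = Other
endpoint = http://localhost:3900
access_key_id = GK...
secret_access_key = ...

[backup]
type = sftp
host = backup.example.org
user = backup
EOF
```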
@@ -1 +0,0 @@
-result
@@ -1,8 +0,0 @@
-## Build
-
-```bash
-docker load < $(nix-build docker.nix)
-docker push superboum/backup-psql:???
-```
@@ -1,108 +0,0 @@
-#!/usr/bin/env python3
-import shutil,sys,os,datetime,minio,subprocess
-
-working_directory = "."
-if 'CACHE_DIR' in os.environ: working_directory = os.environ['CACHE_DIR']
-required_space_in_bytes = 20 * 1024 * 1024 * 1024
-bucket = os.environ['AWS_BUCKET']
-key = os.environ['AWS_ACCESS_KEY_ID']
-secret = os.environ['AWS_SECRET_ACCESS_KEY']
-endpoint = os.environ['AWS_ENDPOINT']
-pubkey = os.environ['CRYPT_PUBLIC_KEY']
-psql_host = os.environ['PSQL_HOST']
-psql_user = os.environ['PSQL_USER']
-s3_prefix = str(datetime.datetime.now())
-files = [ "backup_manifest", "base.tar.gz", "pg_wal.tar.gz" ]
-clear_paths = [ os.path.join(working_directory, f) for f in files ]
-crypt_paths = [ os.path.join(working_directory, f) + ".age" for f in files ]
-s3_keys = [ s3_prefix + "/" + f for f in files ]
-
-def abort(msg):
-    for p in clear_paths + crypt_paths:
-        if os.path.exists(p):
-            print(f"Remove {p}")
-            os.remove(p)
-
-    if msg: sys.exit(msg)
-    else: print("success")
-
-# Check we have enough space on disk
-if shutil.disk_usage(working_directory).free < required_space_in_bytes:
-    abort(f"Not enough space on disk at path {working_directory} to perform a backup, aborting")
-
-# Check postgres password is set
-if 'PGPASSWORD' not in os.environ:
-    abort(f"You must pass postgres' password through the environment variable PGPASSWORD")
-
-# Check our working directory is empty
-if len(os.listdir(working_directory)) != 0:
-    abort(f"Working directory {working_directory} is not empty, aborting")
-
-# Check Minio
-client = minio.Minio(endpoint, key, secret)
-if not client.bucket_exists(bucket):
-    abort(f"Bucket {bucket} does not exist or its access is forbidden, aborting")
-
-# Perform the backup locally
-# Via command-line:
-#   pg_basebackup --host=localhost --username=$PSQL_USER --pgdata=. --format=tar --wal-method=stream --gzip --compress=6 --progress --max-rate=5M
-try:
-    ret = subprocess.run(["pg_basebackup",
-        f"--host={psql_host}",
-        f"--username={psql_user}",
-        f"--pgdata={working_directory}",
-        f"--format=tar",
-        "--wal-method=stream",
-        "--gzip",
-        "--compress=6",
-        "--progress",
-        "--max-rate=5M",
-    ])
-    if ret.returncode != 0:
-        abort(f"pg_basebackup exited, expected return code 0, got {ret.returncode}. aborting")
-except Exception as e:
-    abort(f"pg_basebackup raised exception {e}. aborting")
-
-# Check that the expected files are here
-for p in clear_paths:
-    print(f"Checking that {p} exists locally")
-    if not os.path.exists(p):
-        abort(f"File {p} expected but not found, aborting")
-
-# Cipher them
-for c, e in zip(clear_paths, crypt_paths):
-    print(f"Ciphering {c} to {e}")
-    try:
-        ret = subprocess.run(["age", "-r", pubkey, "-o", e, c])
-        if ret.returncode != 0:
-            abort(f"age exit code is {ret}, 0 expected. aborting")
-    except Exception as e:
-        abort(f"aged raised an exception. {e}. aborting")
-
-# Upload the backup to S3
-for p, k in zip(crypt_paths, s3_keys):
-    try:
-        print(f"Uploading {p} to {k}")
-        result = client.fput_object(bucket, k, p)
-        print(
-            "created {0} object; etag: {1}, version-id: {2}".format(
-                result.object_name, result.etag, result.version_id,
-            ),
-        )
-    except Exception as e:
-        abort(f"Exception {e} occured while upload {p}. aborting")
-
-# Check that the files have been uploaded
-for k in s3_keys:
-    try:
-        print(f"Checking that {k} exists remotely")
-        result = client.stat_object(bucket, k)
-        print(
-            "last-modified: {0}, size: {1}".format(
-                result.last_modified, result.size,
-            ),
-        )
-    except Exception as e:
-        abort(f"{k} not found on S3. {e}. aborting")
-
-abort(None)
@@ -1,8 +0,0 @@
-{
-  pkgsSrc = fetchTarball {
-    # Latest commit on https://github.com/NixOS/nixpkgs/tree/nixos-21.11
-    # As of 2022-04-15
-    url = "https://github.com/NixOS/nixpkgs/archive/2f06b87f64bc06229e05045853e0876666e1b023.tar.gz";
-    sha256 = "sha256:1d7zg96xw4qsqh7c89pgha9wkq3rbi9as3k3d88jlxy2z0ns0cy2";
-  };
-}
@@ -1,37 +0,0 @@
-let
-  common = import ./common.nix;
-  pkgs = import common.pkgsSrc {};
-  python-with-my-packages = pkgs.python3.withPackages (p: with p; [
-    minio
-  ]);
-in
-pkgs.stdenv.mkDerivation {
-  name = "backup-psql";
-  src = pkgs.lib.sourceFilesBySuffices ./. [ ".py" ];
-
-  buildInputs = [
-    python-with-my-packages
-    pkgs.age
-    pkgs.postgresql_14
-  ];
-
-  buildPhase = ''
-    cat > backup-psql <<EOF
-    #!${pkgs.bash}/bin/bash
-
-    export PYTHONPATH=${python-with-my-packages}/${python-with-my-packages.sitePackages}
-    export PATH=${python-with-my-packages}/bin:${pkgs.age}/bin:${pkgs.postgresql_14}/bin
-
-    ${python-with-my-packages}/bin/python3 $out/lib/backup-psql.py
-    EOF
-
-    chmod +x backup-psql
-  '';
-
-  installPhase = ''
-    mkdir -p $out/{bin,lib}
-    cp *.py $out/lib/backup-psql.py
-    cp backup-psql $out/bin/backup-psql
-  '';
-}
@@ -1,11 +0,0 @@
-let
-  common = import ./common.nix;
-  app = import ./default.nix;
-  pkgs = import common.pkgsSrc {};
-in
-pkgs.dockerTools.buildImage {
-  name = "superboum/backup-psql-docker";
-  config = {
-    Cmd = [ "${app}/bin/backup-psql" ];
-  };
-}
@@ -1,196 +0,0 @@
job "backup_daily" {
  datacenters = ["neptune", "scorpio", "bespin"]
  type = "batch"

  priority = "60"

  periodic {
    cron = "@daily"
    // Do not allow overlapping runs.
    prohibit_overlap = true
  }

  group "backup-dovecot" {
    constraint {
      attribute = "${attr.unique.hostname}"
      operator  = "="
      value     = "ananas"
    }

    task "main" {
      driver = "docker"

      config {
        image = "restic/restic:0.16.4"
        entrypoint = [ "/bin/sh", "-c" ]
        args = [ "restic backup /mail && restic forget --group-by paths --keep-within 1m1d --keep-within-weekly 3m --keep-within-monthly 1y && restic prune --max-unused 50% --max-repack-size 2G && restic check" ]
        volumes = [
          "/mnt/ssd/mail:/mail"
        ]
      }

      template {
        data = <<EOH
AWS_ACCESS_KEY_ID={{ key "secrets/email/dovecot/backup_aws_access_key_id" }}
AWS_SECRET_ACCESS_KEY={{ key "secrets/email/dovecot/backup_aws_secret_access_key" }}
RESTIC_REPOSITORY={{ key "secrets/email/dovecot/backup_restic_repository" }}
RESTIC_PASSWORD={{ key "secrets/email/dovecot/backup_restic_password" }}
EOH

        destination = "secrets/env_vars"
        env = true
      }

      resources {
        cpu = 500
        memory = 100
        memory_max = 1000
      }

      restart {
        attempts = 2
        interval = "30m"
        delay = "15s"
        mode = "fail"
      }
    }
  }

  group "backup-consul" {
    task "consul-kv-export" {
      driver = "docker"

      lifecycle {
        hook = "prestart"
        sidecar = false
      }

      config {
        image = "consul:1.13.1"
        network_mode = "host"
        entrypoint = [ "/bin/sh", "-c" ]
        args = [ "/bin/consul kv export > $NOMAD_ALLOC_DIR/consul.json" ]
        volumes = [
          "secrets:/etc/consul",
        ]
      }

      env {
        CONSUL_HTTP_ADDR = "https://consul.service.prod.consul:8501"
        CONSUL_HTTP_SSL = "true"
        CONSUL_CACERT = "/etc/consul/consul.crt"
        CONSUL_CLIENT_CERT = "/etc/consul/consul-client.crt"
        CONSUL_CLIENT_KEY = "/etc/consul/consul-client.key"
      }

      resources {
        cpu = 200
        memory = 200
      }

      template {
        data = "{{ key \"secrets/consul/consul.crt\" }}"
        destination = "secrets/consul.crt"
      }

      template {
        data = "{{ key \"secrets/consul/consul-client.crt\" }}"
        destination = "secrets/consul-client.crt"
      }

      template {
        data = "{{ key \"secrets/consul/consul-client.key\" }}"
        destination = "secrets/consul-client.key"
      }

      restart {
        attempts = 2
        interval = "30m"
        delay = "15s"
        mode = "fail"
      }
    }

    task "restic-backup" {
      driver = "docker"

      config {
        image = "restic/restic:0.16.4"
        entrypoint = [ "/bin/sh", "-c" ]
        args = [ "restic backup $NOMAD_ALLOC_DIR/consul.json && restic forget --group-by paths --keep-within 1m1d --keep-within-weekly 3m --keep-within-monthly 1y && restic prune --max-unused 50% --max-repack-size 2G && restic check" ]
      }

      template {
        data = <<EOH
AWS_ACCESS_KEY_ID={{ key "secrets/backup/consul/backup_aws_access_key_id" }}
AWS_SECRET_ACCESS_KEY={{ key "secrets/backup/consul/backup_aws_secret_access_key" }}
RESTIC_REPOSITORY={{ key "secrets/backup/consul/backup_restic_repository" }}
RESTIC_PASSWORD={{ key "secrets/backup/consul/backup_restic_password" }}
EOH

        destination = "secrets/env_vars"
        env = true
      }

      resources {
        cpu = 200
        memory = 200
      }

      restart {
        attempts = 2
        interval = "30m"
        delay = "15s"
        mode = "fail"
      }
    }
  }

  group "backup-cryptpad" {
    constraint {
      attribute = "${attr.unique.hostname}"
      operator  = "="
      value     = "abricot"
    }

    task "main" {
      driver = "docker"

      config {
        image = "restic/restic:0.16.4"
        entrypoint = [ "/bin/sh", "-c" ]
        args = [ "restic backup /cryptpad && restic forget --group-by paths --keep-within 1m1d --keep-within-weekly 3m --keep-within-monthly 1y && restic prune --max-unused 50% --max-repack-size 2G && restic check" ]
        volumes = [
          "/mnt/ssd/cryptpad:/cryptpad"
        ]
      }

      template {
        data = <<EOH
AWS_ACCESS_KEY_ID={{ key "secrets/backup/cryptpad/backup_aws_access_key_id" }}
AWS_SECRET_ACCESS_KEY={{ key "secrets/backup/cryptpad/backup_aws_secret_access_key" }}
RESTIC_REPOSITORY={{ key "secrets/backup/cryptpad/backup_restic_repository" }}
RESTIC_PASSWORD={{ key "secrets/backup/cryptpad/backup_restic_password" }}
EOH

        destination = "secrets/env_vars"
        env = true
      }

      resources {
        cpu = 500
        memory = 100
        memory_max = 1000
      }

      restart {
        attempts = 2
        interval = "30m"
        delay = "15s"
        mode = "fail"
      }
    }
  }
}
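The single-line restic invocations in the `args` above chain four operations. Unpacked for readability (a sketch using the dovecot group as an example; the environment variables `RESTIC_REPOSITORY` and `RESTIC_PASSWORD` come from the job's templates, and the comments are interpretations of restic's retention semantics):

```shell
# 1. Snapshot the mounted directory
restic backup /mail
# 2. Apply the retention policy: keep every snapshot from the last month (plus a day),
#    then one weekly snapshot for 3 months and one monthly snapshot for 1 year
restic forget --group-by paths --keep-within 1m1d --keep-within-weekly 3m --keep-within-monthly 1y
# 3. Reclaim space, repacking at most 2G and tolerating up to 50% unused data
restic prune --max-unused 50% --max-repack-size 2G
# 4. Verify repository integrity
restic check
```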
@@ -1,72 +0,0 @@
job "backup-garage" {
  datacenters = ["neptune", "bespin", "scorpio"]
  type = "batch"

  priority = "60"

  periodic {
    cron = "@daily"
    // Do not allow overlapping runs.
    prohibit_overlap = true
  }

  group "backup-garage" {
    task "main" {
      driver = "docker"

      config {
        image = "lxpz/backup_garage:9"
        network_mode = "host"
        volumes = [
          "secrets/rclone.conf:/etc/secrets/rclone.conf"
        ]
      }

      template {
        data = <<EOH
GARAGE_ADMIN_TOKEN={{ key "secrets/garage/admin_token" }}
GARAGE_ADMIN_API_URL=http://localhost:3903
GARAGE_ACCESS_KEY={{ key "secrets/backup/garage/s3_access_key_id" }}
TARGET_BACKUP_DIR={{ key "secrets/backup/garage/target_sftp_directory" }}
EOH
        destination = "secrets/env_vars"
        env = true
      }

      template {
        data = <<EOH
[garage]
type = s3
provider = Other
env_auth = false
access_key_id = {{ key "secrets/backup/garage/s3_access_key_id" }}
secret_access_key = {{ key "secrets/backup/garage/s3_secret_access_key" }}
endpoint = http://localhost:3900
region = garage

[backup]
type = sftp
host = {{ key "secrets/backup/garage/target_sftp_host" }}
user = {{ key "secrets/backup/garage/target_sftp_user" }}
port = {{ key "secrets/backup/garage/target_sftp_port" }}
key_pem = {{ key "secrets/backup/garage/target_sftp_key_pem" | replaceAll "\n" "\\n" }}
shell_type = unix
EOH
        destination = "secrets/rclone.conf"
      }

      resources {
        cpu = 500
        memory = 200
        memory_max = 4000
      }

      restart {
        attempts = 2
        interval = "30m"
        delay = "15s"
        mode = "fail"
      }
    }
  }
}
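The rclone.conf template above defines two remotes: `garage` (the local S3 API of the cluster) and `backup` (the off-site SFTP target). The actual backup script shipped inside the `lxpz/backup_garage` image is not shown in this diff, but with this configuration a transfer presumably boils down to something like the following sketch (the bucket name is a placeholder):

```shell
# Mirror a bucket from the local Garage S3 endpoint to the SFTP backup target
rclone sync garage:some-bucket "backup:$TARGET_BACKUP_DIR/some-bucket" \
    --config /etc/secrets/rclone.conf
```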
@@ -1,55 +0,0 @@
job "backup_weekly" {
  datacenters = ["scorpio", "neptune", "bespin"]
  type = "batch"

  priority = "60"

  periodic {
    cron = "@weekly"
    // Do not allow overlapping runs.
    prohibit_overlap = true
  }

  group "backup-psql" {
    task "main" {
      driver = "docker"

      config {
        image = "superboum/backup-psql-docker:gyr3aqgmhs0hxj0j9hkrdmm1m07i8za2"
        volumes = [
          // Mount a cache on the hard disk to avoid filling up the SSD
          "/mnt/storage/tmp_bckp_psql:/mnt/cache"
        ]
      }

      template {
        data = <<EOH
CACHE_DIR=/mnt/cache
AWS_BUCKET=backups-pgbasebackup
AWS_ENDPOINT=s3.deuxfleurs.shirokumo.net
AWS_ACCESS_KEY_ID={{ key "secrets/postgres/backup/aws_access_key_id" }}
AWS_SECRET_ACCESS_KEY={{ key "secrets/postgres/backup/aws_secret_access_key" }}
CRYPT_PUBLIC_KEY={{ key "secrets/postgres/backup/crypt_public_key" }}
PSQL_HOST={{ env "meta.site" }}.psql-proxy.service.prod.consul
PSQL_USER={{ key "secrets/postgres/keeper/pg_repl_username" }}
PGPASSWORD={{ key "secrets/postgres/keeper/pg_repl_pwd" }}
EOH

        destination = "secrets/env_vars"
        env = true
      }

      resources {
        cpu = 200
        memory = 200
      }

      restart {
        attempts = 2
        interval = "30m"
        delay = "15s"
        mode = "fail"
      }
    }
  }
}
@@ -1,92 +0,0 @@
# Cryptpad backup

[secrets."backup/cryptpad/backup_restic_password"]
type = 'user'
description = 'Restic password to encrypt backups'

[secrets."backup/cryptpad/backup_aws_secret_access_key"]
type = 'user'
description = 'Backup AWS secret access key'

[secrets."backup/cryptpad/backup_restic_repository"]
type = 'user'
description = 'Restic repository'
example = 's3:https://s3.garage.tld'

[secrets."backup/cryptpad/backup_aws_access_key_id"]
type = 'user'
description = 'Backup AWS access key ID'


# Consul backup

[secrets."backup/consul/backup_restic_password"]
type = 'user'
description = 'Restic password to encrypt backups'

[secrets."backup/consul/backup_aws_secret_access_key"]
type = 'user'
description = 'Backup AWS secret access key'

[secrets."backup/consul/backup_restic_repository"]
type = 'user'
description = 'Restic repository'
example = 's3:https://s3.garage.tld'

[secrets."backup/consul/backup_aws_access_key_id"]
type = 'user'
description = 'Backup AWS access key ID'


# Postgresql backup

[secrets."postgres/backup/aws_access_key_id"]
type = 'user'
description = 'Minio access key'

[secrets."postgres/backup/aws_secret_access_key"]
type = 'user'
description = 'Minio secret key'

[secrets."postgres/backup/crypt_public_key"]
type = 'user'
description = 'A public key to encrypt backups with age'


# Plume backup

[secrets."plume/backup_restic_repository"]
type = 'user'
description = 'Restic repository'
example = 's3:https://s3.garage.tld'

[secrets."plume/backup_restic_password"]
type = 'user'
description = 'Restic password to encrypt backups'

[secrets."plume/backup_aws_secret_access_key"]
type = 'user'
description = 'Backup AWS secret access key'

[secrets."plume/backup_aws_access_key_id"]
type = 'user'
description = 'Backup AWS access key ID'


# Dovecot backup

[secrets."email/dovecot/backup_restic_password"]
type = 'user'
description = 'Restic backup password to encrypt data'

[secrets."email/dovecot/backup_aws_secret_access_key"]
type = 'user'
description = 'AWS Secret Access Key'

[secrets."email/dovecot/backup_restic_repository"]
type = 'user'
description = 'Restic Repository URL, check op_guide/backup-minio to see the format'

[secrets."email/dovecot/backup_aws_access_key_id"]
type = 'user'
description = 'AWS Access Key ID'
@@ -1,88 +0,0 @@
job "bagage" {
  datacenters = ["corrin", "neptune", "scorpio"]
  type = "service"
  priority = 90

  constraint {
    attribute = "${attr.cpu.arch}"
    value = "amd64"
  }

  group "main" {
    count = 1

    network {
      port "web_port" {
        static = 8080
        to = 8080
      }
      port "ssh_port" {
        static = 2222
        to = 2222
      }
    }

    task "server" {
      driver = "docker"
      config {
        image = "lxpz/amd64_bagage:20231016-3"
        readonly_rootfs = false
        network_mode = "host"
        volumes = [
          "secrets/id_rsa:/id_rsa"
        ]
        ports = [ "web_port", "ssh_port" ]
      }

      env {
        BAGAGE_LDAP_ENDPOINT = "bottin.service.prod.consul:389"
      }

      resources {
        memory = 200
        cpu = 100
      }

      template {
        data = "{{ key \"secrets/bagage/id_rsa\" }}"
        destination = "secrets/id_rsa"
      }

      service {
        name = "bagage-ssh"
        port = "ssh_port"
        address_mode = "host"
        tags = [
          "bagage",
          "(diplonat (tcp_port 2222))",
          "d53-a sftp.deuxfleurs.fr",
          "d53-aaaa sftp.deuxfleurs.fr",
        ]
      }

      service {
        name = "bagage-webdav"
        tags = [
          "bagage",
          "tricot bagage.deuxfleurs.fr",
          "d53-cname bagage.deuxfleurs.fr",
        ]
        port = "web_port"
        address_mode = "host"
        check {
          type = "tcp"
          port = "web_port"
          address_mode = "host"
          interval = "60s"
          timeout = "5s"
          check_restart {
            limit = 3
            grace = "90s"
            ignore_warnings = false
          }
        }
      }
    }
  }
}
@@ -1,4 +0,0 @@
[secrets."bagage/id_rsa"]
type = 'command'
rotate = true
command = 'ssh-keygen -q -f >(cat) -N "" <<< y 2>/dev/null 1>&2 ; true'
@@ -1,11 +0,0 @@
HOST=0.0.0.0
PORT={{ env "NOMAD_PORT_web_port" }}
SESSION_SECRET={{ key "secrets/cms/teabag/session" | trimSpace }}

GITEA_KEY={{ key "secrets/cms/teabag/gitea_key" | trimSpace }}
GITEA_SECRET={{ key "secrets/cms/teabag/gitea_secret" | trimSpace }}
GITEA_BASE_URL=https://git.deuxfleurs.fr
GITEA_AUTH_URI=login/oauth/authorize
GITEA_TOKEN_URI=login/oauth/access_token
GITEA_USER_URI=api/v1/user
CALLBACK_URI=https://teabag.deuxfleurs.fr/callback
@@ -1,74 +0,0 @@
job "cms" {
  datacenters = ["corrin", "neptune", "scorpio"]
  type = "service"

  priority = 100

  constraint {
    attribute = "${attr.cpu.arch}"
    value = "amd64"
  }

  group "auth" {
    count = 1

    network {
      port "web_port" { }
    }

    task "teabag" {
      driver = "docker"
      config {
        # Using a digest to pin the container as no tag is provided
        # https://github.com/denyskon/teabag/pkgs/container/teabag
        image = "ghcr.io/denyskon/teabag@sha256:d5af7c6caf172727fbfa047c8ee82f9087ef904f0f3bffdeec656be04e9e0a14"
        ports = [ "web_port" ]
        volumes = [
          "secrets/teabag.env:/etc/teabag/teabag.env",
        ]
      }

      template {
        data = file("../config/teabag.env")
        destination = "secrets/teabag.env"
      }

      resources {
        memory = 20
        memory_max = 50
        cpu = 50
      }

      service {
        name = "teabag"
        tags = [
          "teabag",
          "tricot teabag.deuxfleurs.fr",
          "d53-cname teabag.deuxfleurs.fr",
        ]
        port = "web_port"
        check {
          type = "http"
          protocol = "http"
          port = "web_port"
          path = "/"
          interval = "60s"
          timeout = "5s"
          check_restart {
            limit = 3
            grace = "600s"
            ignore_warnings = false
          }
        }
      }

      restart {
        interval = "30m"
        attempts = 20
        delay = "15s"
        mode = "delay"
      }
    }
  }
}
@@ -1,17 +0,0 @@
# HTTP Session Encryption Key
[secrets."cms/teabag/session"]
type = 'command'
rotate = true
command = 'openssl rand -base64 32'

# Gitea Application Token
[secrets."cms/teabag/gitea_key"]
type = 'user'
description = 'Gitea Application Key'
example = '4fea0...'

[secrets."cms/teabag/gitea_secret"]
type = 'user'
description = 'Gitea Secret Key'
example = 'gto_bz6f...'
@@ -1,26 +0,0 @@
{
  "suffix": "{{ key "secrets/directory/ldap_base_dn" }}",
  "bind": "0.0.0.0:389",
  "log_level": "debug",
  "acl": [
    "*,{{ key "secrets/directory/ldap_base_dn" }}::read:*:* !userpassword !user_secret !alternate_user_secrets !garage_s3_secret_key",
    "*::read modify:SELF:*",
    "ANONYMOUS::bind:*,ou=users,{{ key "secrets/directory/ldap_base_dn" }}:",
    "ANONYMOUS::bind:cn=admin,{{ key "secrets/directory/ldap_base_dn" }}:",
    "*,ou=services,ou=users,{{ key "secrets/directory/ldap_base_dn" }}::bind:*,ou=users,{{ key "secrets/directory/ldap_base_dn" }}:*",
    "*,ou=services,ou=users,{{ key "secrets/directory/ldap_base_dn" }}::read:*:*",

    "*:cn=asso_deuxfleurs,ou=groups,{{ key "secrets/directory/ldap_base_dn" }}:add:*,ou=invitations,{{ key "secrets/directory/ldap_base_dn" }}:*",
    "ANONYMOUS::bind:*,ou=invitations,{{ key "secrets/directory/ldap_base_dn" }}:",
    "*,ou=invitations,{{ key "secrets/directory/ldap_base_dn" }}::delete:SELF:*",

    "*:cn=asso_deuxfleurs,ou=groups,{{ key "secrets/directory/ldap_base_dn" }}:add:*,ou=users,{{ key "secrets/directory/ldap_base_dn" }}:*",
    "*,ou=invitations,{{ key "secrets/directory/ldap_base_dn" }}::add:*,ou=users,{{ key "secrets/directory/ldap_base_dn" }}:*",

    "*:cn=asso_deuxfleurs,ou=groups,{{ key "secrets/directory/ldap_base_dn" }}:modifyAdd:cn=email,ou=groups,{{ key "secrets/directory/ldap_base_dn" }}:*",
    "*,ou=invitations,{{ key "secrets/directory/ldap_base_dn" }}::modifyAdd:cn=email,ou=groups,{{ key "secrets/directory/ldap_base_dn" }}:*",

    "cn=admin,{{ key "secrets/directory/ldap_base_dn" }}::read add modify delete:*:*",
    "*:cn=admin,ou=groups,{{ key "secrets/directory/ldap_base_dn" }}:read add modify delete:*:*"
  ]
}
@@ -1,100 +0,0 @@
job "core-bottin" {
  datacenters = ["corrin", "neptune", "scorpio", "bespin"]
  type = "system"
  priority = 90

  update {
    max_parallel = 1
    stagger = "1m"
  }

  group "bottin" {
    constraint {
      distinct_property = "${meta.site}"
      value = "1"
    }

    network {
      port "ldap_port" {
        static = 389
        to = 389
      }
    }

    task "bottin" {
      driver = "docker"
      config {
        image = "dxflrs/bottin:7h18i30cckckaahv87d3c86pn4a7q41z"
        network_mode = "host"
        readonly_rootfs = true
        ports = [ "ldap_port" ]
        volumes = [
          "secrets/config.json:/config.json",
          "secrets:/etc/bottin",
        ]
      }

      restart {
        interval = "5m"
        attempts = 10
        delay = "15s"
        mode = "delay"
      }

      resources {
        memory = 100
        memory_max = 200
      }

      template {
        data = file("../config/bottin/config.json.tpl")
        destination = "secrets/config.json"
      }

      template {
        data = "{{ key \"secrets/consul/consul.crt\" }}"
        destination = "secrets/consul.crt"
      }

      template {
        data = "{{ key \"secrets/consul/consul-client.crt\" }}"
        destination = "secrets/consul-client.crt"
      }

      template {
        data = "{{ key \"secrets/consul/consul-client.key\" }}"
        destination = "secrets/consul-client.key"
      }

      template {
        data = <<EOH
CONSUL_HTTP_ADDR=https://consul.service.prod.consul:8501
CONSUL_HTTP_SSL=true
CONSUL_CACERT=/etc/bottin/consul.crt
CONSUL_CLIENT_CERT=/etc/bottin/consul-client.crt
CONSUL_CLIENT_KEY=/etc/bottin/consul-client.key
EOH
        destination = "secrets/env"
        env = true
      }

      service {
        tags = [ "${meta.site}" ]
        port = "ldap_port"
        address_mode = "host"
        name = "bottin"
        check {
          type = "tcp"
          port = "ldap_port"
          interval = "60s"
          timeout = "5s"
          check_restart {
            limit = 3
            grace = "90s"
            ignore_warnings = false
          }
        }
      }
    }
  }
}
@@ -1,102 +0,0 @@
job "core-d53" {
  datacenters = ["neptune", "scorpio", "bespin", "corrin"]
  type = "service"
  priority = 90

  group "D53" {
    count = 1

    task "d53" {
      driver = "docker"

      config {
        image = "lxpz/amd64_d53:4"
        network_mode = "host"
        readonly_rootfs = true
        volumes = [
          "secrets:/etc/d53",
        ]
      }

      resources {
        cpu = 100
        memory = 100
      }

      restart {
        interval = "3m"
        attempts = 10
        delay = "15s"
        mode = "delay"
      }

      template {
        data = "{{ key \"secrets/consul/consul-ca.crt\" }}"
        destination = "secrets/consul-ca.crt"
      }

      template {
        data = "{{ key \"secrets/consul/consul-client.crt\" }}"
        destination = "secrets/consul-client.crt"
      }

      template {
        data = "{{ key \"secrets/consul/consul-client.key\" }}"
        destination = "secrets/consul-client.key"
      }

      template {
        data = <<EOH
D53_CONSUL_HOST=https://localhost:8501
D53_CONSUL_CA_CERT=/etc/d53/consul-ca.crt
D53_CONSUL_CLIENT_CERT=/etc/d53/consul-client.crt
D53_CONSUL_CLIENT_KEY=/etc/d53/consul-client.key
D53_PROVIDERS=deuxfleurs.fr:gandi
D53_GANDI_API_KEY={{ key "secrets/d53/gandi_api_key" }}
D53_ALLOWED_DOMAINS=deuxfleurs.fr
RUST_LOG=d53=info
EOH
        destination = "secrets/env"
        env = true
      }
    }
  }

  # Dummy task for Gitea (still on an external VM), runs on any bespin node
  # and allows D53 to automatically update the A record for git.deuxfleurs.fr
  # to the IPv4 address of the bespin site (which changes occasionally)
  group "gitea-dummy" {
    count = 1

    network {
      port "dummy" {
        to = 999
      }
    }

    task "main" {
      driver = "docker"

      constraint {
        attribute = "${meta.site}"
        operator = "="
        value = "bespin"
      }

      config {
        image = "alpine"
        command = "sh"
        args = ["-c", "while true; do echo x; sleep 60; done"]
        ports = [ "dummy" ]
      }

      service {
        name = "gitea-dummy"
        port = "dummy"
        tags = [
          "d53-a git.deuxfleurs.fr",
        ]
      }
    }
  }
}
@@ -1,123 +0,0 @@
job "core-tricot" {
  # Not on bespin for now: we have SSL issues with gitea.
  # We can add bespin once gitea has been migrated from its VM to the cluster;
  # until then, the two are unable to share SSL certificates,
  # so we let the gitea VM manage the certificates and take all http(s) traffic.
  datacenters = ["corrin", "neptune", "scorpio"]
  type = "system"
  priority = 90

  update {
    max_parallel = 1
    stagger = "5m"
  }

  group "tricot" {
    constraint {
      distinct_property = "${meta.site}"
      value = "1"
    }

    network {
      port "http_port" { static = 80 }
      port "https_port" { static = 443 }
      port "metrics_port" { static = 9334 }
    }

    task "server" {
      driver = "docker"

      config {
        image = "armael/tricot:n6dk1b5xrdww12zf12jbcmihqs6g1brz"
        network_mode = "host"
        readonly_rootfs = true
        ports = [ "http_port", "https_port" ]
        volumes = [
          "secrets:/etc/tricot",
        ]
        ulimit {
          nofile = "65535:65535"
        }
      }

      resources {
        cpu = 1000
        memory = 200
        memory_max = 500
      }

      restart {
        interval = "5m"
        attempts = 10
        delay = "15s"
        mode = "delay"
      }

      template {
        data = "{{ key \"secrets/consul/consul-ca.crt\" }}"
        destination = "secrets/consul-ca.crt"
      }

      template {
        data = "{{ key \"secrets/consul/consul-client.crt\" }}"
        destination = "secrets/consul-client.crt"
      }

      template {
        data = "{{ key \"secrets/consul/consul-client.key\" }}"
        destination = "secrets/consul-client.key"
      }

      template {
        data = <<EOH
TRICOT_NODE_NAME={{ env "attr.unique.hostname" }}
TRICOT_LETSENCRYPT_EMAIL=prod-sysadmin@deuxfleurs.fr
TRICOT_ENABLE_COMPRESSION=true
TRICOT_CONSUL_HOST=https://consul.service.prod.consul:8501
TRICOT_CONSUL_TLS_SKIP_VERIFY=true
TRICOT_CONSUL_CLIENT_CERT=/etc/tricot/consul-client.crt
TRICOT_CONSUL_CLIENT_KEY=/etc/tricot/consul-client.key
TRICOT_HTTP_BIND_ADDR=[::]:80
TRICOT_HTTPS_BIND_ADDR=[::]:443
TRICOT_METRICS_BIND_ADDR=[::]:9334
TRICOT_WARMUP_CERT_MEMORY_STORE=true
RUST_LOG=tricot=debug
EOH
        destination = "secrets/env"
        env = true
      }

      service {
        name = "tricot-http"
        port = "http_port"
        tags = [
          "(diplonat (tcp_port 80))",
          "${meta.site}"
        ]
        address_mode = "host"
      }

      service {
        name = "tricot-https"
        port = "https_port"
        tags = [
          "(diplonat (tcp_port 443))",
          "${meta.site}",
          "d53-a global.site.deuxfleurs.fr",
          "d53-aaaa global.site.deuxfleurs.fr",
          "d53-a ${meta.site}.site.deuxfleurs.fr",
          "d53-aaaa ${meta.site}.site.deuxfleurs.fr",
          "d53-a v4.${meta.site}.site.deuxfleurs.fr",
          "d53-aaaa v6.${meta.site}.site.deuxfleurs.fr",
        ]
        address_mode = "host"
      }

      service {
        name = "tricot-metrics"
        port = "metrics_port"
        address_mode = "host"
      }
    }
  }
}
@@ -1,9 +0,0 @@
[secrets."directory/ldap_base_dn"]
type = 'user'
description = 'LDAP base DN for everything'
example = 'dc=example,dc=com'

[secrets."d53/gandi_api_key"]
type = 'user'
description = 'Gandi API key'
@@ -1,15 +0,0 @@
#!/bin/sh

turnserver \
  -n \
  --external-ip=$(detect-external-ip) \
  --min-port=49160 \
  --max-port=49169 \
  --log-file=stdout \
  --use-auth-secret \
  --realm turn.deuxfleurs.fr \
  --no-cli \
  --no-tls \
  --no-dtls \
  --prometheus \
  --static-auth-secret '{{ key "secrets/coturn/static-auth-secret" | trimSpace }}'
@@ -1,85 +0,0 @@
job "coturn" {
  datacenters = ["corrin", "neptune", "scorpio"]
  type = "service"

  priority = 100

  constraint {
    attribute = "${attr.cpu.arch}"
    value = "amd64"
  }

  group "main" {
    count = 1

    network {
      port "prometheus" { static = 9641 }
      port "turn_ctrl" { static = 3478 }
      port "turn_data0" { static = 49160 }
      port "turn_data1" { static = 49161 }
      port "turn_data2" { static = 49162 }
      port "turn_data3" { static = 49163 }
      port "turn_data4" { static = 49164 }
      port "turn_data5" { static = 49165 }
      port "turn_data6" { static = 49166 }
      port "turn_data7" { static = 49167 }
      port "turn_data8" { static = 49168 }
      port "turn_data9" { static = 49169 }
    }

    task "turnserver" {
      driver = "docker"
      config {
        image = "coturn/coturn:4.6.1-r2-alpine"
        ports = [ "prometheus", "turn_ctrl", "turn_data0", "turn_data1", "turn_data2",
                  "turn_data3", "turn_data4", "turn_data5", "turn_data6", "turn_data7",
                  "turn_data8", "turn_data9" ]
        entrypoint = ["/local/docker-entrypoint.sh"]
        network_mode = "host"
      }

      template {
        data = file("../config/docker-entrypoint.sh")
        destination = "local/docker-entrypoint.sh"
        perms = 555
      }

      resources {
        memory = 20
        memory_max = 50
        cpu = 50
      }

      service {
        name = "coturn"
        tags = [
          "coturn",
          "d53-cname turn.deuxfleurs.fr",
          "(diplonat (tcp_port 3478) (udp_port 3478 49160 49161 49162 49163 49164 49165 49166 49167 49168 49169))",
        ]
        port = "turn_ctrl"
        check {
          type = "http"
          protocol = "http"
          port = "prometheus"
          path = "/"
          interval = "60s"
          timeout = "5s"
          check_restart {
            limit = 3
            grace = "600s"
            ignore_warnings = false
          }
        }
      }

      restart {
        interval = "30m"
        attempts = 20
        delay = "15s"
        mode = "delay"
      }
    }
  }
}
@@ -1,7 +0,0 @@
docker run \
  --name coturn \
  --rm \
  -it \
  -v `pwd`/docker-entrypoint.sh:/usr/local/bin/docker-entrypoint.sh \
  --network=host \
  coturn/coturn:4.6.1-r2-alpine
@@ -1,6 +0,0 @@
stun+turn
tcp: 3478
udp: 49160-49169

prometheus:
tcp: 9641
@@ -1,5 +0,0 @@
# coturn
[secrets."coturn/static-auth-secret"]
type = 'command'
rotate = true
command = "openssl rand -base64 64|tr -d '\n'"
@@ -1,52 +0,0 @@
# CryptPad for NixOS with Deuxfleurs flavour

## Building

The `default.nix` file follows the nixpkgs `callPackage` convention for fetching dependencies, so you need to either:

- Run `nix-build --expr '{ ... }@args: (import <nixpkgs> {}).callPackage ./default.nix args'`
- Do the `callPackage` from a higher-level directory importing your package

### Docker

The `docker.nix` file derives into a Docker image you can load simply by running:

```shell
docker load -i $(nix-build docker.nix)
```

You can then test the built Docker image using the provided `docker-compose.yml` and `config.js` files, which are
configured to render the instance accessible at `http://localhost:3000` with data stored in the `_data` folder.

### Deuxfleurs flavour

The `deuxfleurs.nix` file derives into two derivations, the CryptPad derivation itself and a Docker image,
which can be chosen by passing the `-A [name]` flag to `nix-build`.

For example, to build and load the Deuxfleurs-flavoured CryptPad Docker image, you run:

```shell
docker load -i $(nix-build deuxfleurs.nix -A docker)
```

## OnlyOffice integration

Apart from `deuxfleurs.nix`, both the `default.nix` and `docker.nix` files build CryptPad with a copy of OnlyOffice pre-built and
used by CryptPad, which can result in a large Docker image (~2.6 GiB).

This behaviour is configurable by passing the `--arg withOnlyOffice false` flag to `nix-build` when building them, as shown below.
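For example, to build and load the plain Docker image without the OnlyOffice component (a sketch combining the flag above with the build command from the Docker section):

```shell
docker load -i $(nix-build docker.nix --arg withOnlyOffice false)
```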
## Updating the Deuxfleurs pinned nixpkgs

The pinned sources files are generated with the [npins](https://github.com/andir/npins) tool.

To update the pinned nixpkgs, you simply run the following command:

```shell
npins update
```

To modify the pinned nixpkgs, remove it and re-add it using the new target, for example for `nixos-unstable`:

```shell
npins remove nixpkgs
npins add --name nixpkgs channel nixos-unstable
```
@@ -1,132 +0,0 @@
{ lib
, stdenvNoCC

, buildNpmPackage
, fetchFromGitHub
, fetchzip

, nodejs

, withOnlyOffice ? true
}: let
  onlyOfficeVersions = {
    v1 = {
      rev = "4f370bebe96e3a0d4054df87412ee5b2c6ed8aaa";
      hash = "sha256-TE/99qOx4wT2s0op9wi+SHwqTPYq/H+a9Uus9Zj4iSY=";
    };
    v2b = {
      rev = "d9da72fda95daf93b90ffa345757c47eb5b919dd";
      hash = "sha256-SiRDRc2vnLwCVnvtk+C8PKw7IeuSzHBaJmZHogRe3hQ=";
    };
    v4 = {
      rev = "6ebc6938b6841440ffad2efc1e23f1dc1ceda964";
      hash = "sha256-eto1+8Tk/s3kbUCpbUh8qCS8EOq700FYG1/KiHyynaA=";
    };
    v5 = {
      rev = "88a356f08ded2f0f4620bda66951caf1d7f02c21";
      hash = "sha256-8j1rlAyHlKx6oAs2pIhjPKcGhJFj6ZzahOcgenyeOCc=";
    };
    v6 = {
      rev = "abd8a309f6dd37289f950cd8cea40df4492d8a15";
      hash = "sha256-BZdExj2q/bqUD3k9uluOot2dlrWKA+vpad49EdgXKww=";
    };
    v7 = {
      rev = "e1267803ea749cd93e9d5f81438011ea620d04af";
      hash = "sha256-iIds0GnCHAyeIEdSD4aCCgDtnnwARh3NE470CywseS0=";
    };
  };
  mkOnlyOffice = {
    pname, version
  }: stdenvNoCC.mkDerivation (final: {
    pname = "${pname}-onlyoffice";
    inherit version;

    x2t = let
      version = "v7.3+1";
    in fetchzip {
      url = "https://github.com/cryptpad/onlyoffice-x2t-wasm/releases/download/${version}/x2t.zip";
      hash = "sha256-d5raecsTOflo0UpjSEZW5lker4+wdkTb6IyHNq5iBg8=";
      stripRoot = false;
    };

    srcs = lib.mapAttrsToList (version: { rev, hash ? lib.fakeHash }: fetchFromGitHub {
      name = "${final.pname}-${version}-source";
      owner = "cryptpad";
      repo = "onlyoffice-builds";
      inherit rev hash;
    }) onlyOfficeVersions;

    dontBuild = true;

    sourceRoot = ".";

    installPhase = ''
      mkdir -p $out
      ${lib.concatLines (map
        (version: "cp -Tr ${final.pname}-${version}-source $out/${version}")
        (builtins.attrNames onlyOfficeVersions)
      )}
      cp -Tr $x2t $out/x2t
    '';
  });
in buildNpmPackage rec {
  pname = "cryptpad";
  version = "2024.9.0";

  src = fetchFromGitHub {
    owner = "cryptpad";
    repo = "cryptpad";
    rev = version;
    hash = "sha256-OUtWaDVLRUbKS0apwY0aNq4MalGFv+fH9VA7LvWWYRs=";
  };

  npmDepsHash = "sha256-pK0b7q1kJja9l8ANwudbfo3jpldwuO56kuulS8X9A5s=";

  inherit nodejs;

  onlyOffice = lib.optional withOnlyOffice (mkOnlyOffice {
    inherit pname version;
  });

  makeCacheWritable = true;
  dontFixup = true;

  preBuild = ''
    npm run install:components
  '' + lib.optionalString withOnlyOffice ''
    ln -s $onlyOffice www/common/onlyoffice/dist
  '';

  postBuild = ''
    rm -rf customize
  '';

  installPhase = ''
    runHook preInstall

    mkdir -p $out
    cp -R . $out/

    substituteInPlace $out/lib/workers/index.js \
      --replace-warn "lib/workers/db-worker" "$out/lib/workers/db-worker"

    makeWrapper ${lib.getExe nodejs} $out/bin/cryptpad-server \
      --chdir $out \
      --add-flags server.js

    runHook postInstall
  '';

  passthru = {
    inherit onlyOffice;
  };

  meta = {
    description = "Collaborative office suite, end-to-end encrypted and open-source.";
    homepage = "https://cryptpad.org";
    changelog = "https://github.com/cryptpad/cryptpad/releases/tag/${version}";
    license = lib.licenses.agpl3Plus;
    platforms = lib.platforms.all;
    mainProgram = "cryptpad-server";
  };
}
@@ -1,14 +0,0 @@
{ name ? "deuxfleurs/cryptpad"
, tag ? "nix-latest"
}: let
  sources = import ./npins;
  pkgs = import sources.nixpkgs {};
in rec {
  cryptpad = pkgs.callPackage ./default.nix {};
  docker = import ./docker.nix {
    inherit pkgs;
    inherit name tag;
    inherit cryptpad;
    withOnlyOffice = true;
  };
}
@@ -1,27 +0,0 @@
{ pkgs ? import <nixpkgs> {}

, name ? "cryptpad"
, tag ? "nix-latest"

, withOnlyOffice ? true

, cryptpad ? pkgs.callPackage ./default.nix { inherit withOnlyOffice; }
}: let
  cryptpad' = cryptpad.overrideAttrs {
    postInstall = ''
      ln -sf /cryptpad/customize $out/customize
    '';
  };
in pkgs.dockerTools.buildImage {
  inherit name tag;

  config = {
    Cmd = [
      (pkgs.lib.getExe cryptpad')
    ];

    Volumes = {
      "/cryptpad/customize" = {};
    };
  };
}
@@ -1,80 +0,0 @@
# Generated by npins. Do not modify; will be overwritten regularly
let
  data = builtins.fromJSON (builtins.readFile ./sources.json);
  version = data.version;

  mkSource =
    spec:
    assert spec ? type;
    let
      path =
        if spec.type == "Git" then
          mkGitSource spec
        else if spec.type == "GitRelease" then
          mkGitSource spec
        else if spec.type == "PyPi" then
          mkPyPiSource spec
        else if spec.type == "Channel" then
          mkChannelSource spec
        else
          builtins.throw "Unknown source type ${spec.type}";
    in
    spec // { outPath = path; };

  mkGitSource =
    {
      repository,
      revision,
      url ? null,
      hash,
      branch ? null,
      ...
    }:
    assert repository ? type;
    # At the moment, either it is a plain git repository (which has an url), or it is a GitHub/GitLab repository
    # In the latter case, there will always be a url to the tarball
    if url != null then
      (builtins.fetchTarball {
        inherit url;
        sha256 = hash; # FIXME: check nix version & use SRI hashes
      })
    else
      assert repository.type == "Git";
      let
        urlToName =
          url: rev:
          let
            matched = builtins.match "^.*/([^/]*)(\\.git)?$" repository.url;

            short = builtins.substring 0 7 rev;

            appendShort = if (builtins.match "[a-f0-9]*" rev) != null then "-${short}" else "";
          in
          "${if matched == null then "source" else builtins.head matched}${appendShort}";
        name = urlToName repository.url revision;
      in
      builtins.fetchGit {
        url = repository.url;
        rev = revision;
        inherit name;
        # hash = hash;
      };

  mkPyPiSource =
    { url, hash, ... }:
    builtins.fetchurl {
      inherit url;
      sha256 = hash;
    };

  mkChannelSource =
    { url, hash, ... }:
    builtins.fetchTarball {
      inherit url;
      sha256 = hash;
    };
in
if version == 3 then
  builtins.mapAttrs (_: mkSource) data.pins
else
  throw "Unsupported format version ${toString version} in sources.json. Try running `npins upgrade`"
@@ -1,11 +0,0 @@
{
  "pins": {
    "nixpkgs": {
      "type": "Channel",
      "name": "nixos-24.05",
      "url": "https://releases.nixos.org/nixos/24.05/nixos-24.05.5385.1719f27dd95f/nixexprs.tar.xz",
      "hash": "0f7i315g1z8kjh10hvj2zv7y2vfqxmwvd96hwlcrr8aig6qq5gzm"
    }
  },
  "version": 3
}
@@ -1,59 +0,0 @@
# SPDX-FileCopyrightText: 2023 XWiki CryptPad Team <contact@cryptpad.org> and contributors
#
# SPDX-License-Identifier: AGPL-3.0-or-later
#
# Tweaks by Deuxfleurs

# Multistage build to reduce image size and increase security
FROM node:lts-slim AS build
ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update && apt-get install --no-install-recommends -y \
    ca-certificates tar wget

# Download the release tarball
RUN wget https://github.com/cryptpad/cryptpad/archive/refs/tags/2024.9.0.tar.gz -O cryptpad.tar.gz

# Create folder for CryptPad
RUN mkdir /cryptpad

# Extract the release into /cryptpad
RUN tar xvzf cryptpad.tar.gz -C /cryptpad --strip-components 1

# Go to /cryptpad
WORKDIR /cryptpad

# Install dependencies
RUN npm install --production && npm run install:components

# Create the actual CryptPad image
FROM node:lts-slim
ENV DEBIAN_FRONTEND=noninteractive

# Install curl for healthcheck
# Install git, rdfind and unzip for install-onlyoffice.sh
RUN apt-get update && apt-get install --no-install-recommends -y \
    curl ca-certificates git rdfind unzip && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# Copy cryptpad with installed modules
COPY --from=build /cryptpad /cryptpad

# Set workdir to cryptpad
WORKDIR /cryptpad

# Install onlyoffice
RUN ./install-onlyoffice.sh --accept-license --trust-repository

# Build static pages (?) unsure we need this
RUN npm run build

# Healthcheck
HEALTHCHECK --interval=1m CMD curl -f http://localhost:3000/ || exit 1

# Ports
EXPOSE 3000 3003

# Run cryptpad on startup
CMD ["npm", "start"]
@@ -1,40 +0,0 @@
/*
 * You can override the configurable values from this file.
 * The recommended method is to make a copy of this file (/customize.dist/application_config.js)
   in a 'customize' directory (/customize/application_config.js).
 * If you want to check all the configurable values, you can open the internal configuration file
   but you should not change it directly (/common/application_config_internal.js)
 */
define(['/common/application_config_internal.js'], function (AppConfig) {
    // To inform users of the support ticket panel which languages your admins speak:
    AppConfig.supportLanguages = [ 'en', 'fr' ];

    /* Select the buttons displayed on the main page to create new collaborative sessions.
     * Removing apps from the list will prevent users from accessing them. They will instead be
     * redirected to the drive.
     * You should never remove the drive from this list.
     */
    AppConfig.availablePadTypes = ['drive', 'teams', 'doc', 'presentation', 'pad', 'kanban', 'code', 'form', 'poll', 'whiteboard',
                                   'file', 'contacts', 'slide', 'convert'];
    // disabled: sheet

    /* You can display a link to your own privacy policy in the static pages footer.
     * Since this is different for each individual or organization there is no default value.
     * See the comments above for a description of possible configurations.
     */
    AppConfig.privacy = {
        "default": "https://deuxfleurs.fr/CGU.html",
    };

    /* You can display a link to your instance's terms of service in the static pages footer.
     * A default is included for backwards compatibility, but we recommend replacing this
     * with your own terms.
     *
     * See the comments above for a description of possible configurations.
     */
    AppConfig.terms = {
        "default": "https://deuxfleurs.fr/CGU.html",
    };

    return AppConfig;
});
@ -1,296 +0,0 @@
|
||||||
/* globals module */
|
|
||||||
|
|
||||||
/* DISCLAIMER:
|
|
||||||
|
|
||||||
There are two recommended methods of running a CryptPad instance:
|
|
||||||
|
|
||||||
1. Using a standalone nodejs server without HTTPS (suitable for local development)
|
|
||||||
2. Using NGINX to serve static assets and to handle HTTPS for API server's websocket traffic
|
|
||||||
|
|
||||||
We do not officially recommend or support Apache, Docker, Kubernetes, Traefik, or any other configuration.
|
|
||||||
Support requests for such setups should be directed to their authors.
|
|
||||||
|
|
||||||
If you're having difficulty difficulty configuring your instance
|
|
||||||
we suggest that you join the project's IRC/Matrix channel.
|
|
||||||
|
|
||||||
If you don't have any difficulty configuring your instance and you'd like to
|
|
||||||
support us for the work that went into making it pain-free we are quite happy
|
|
||||||
to accept donations via our opencollective page: https://opencollective.com/cryptpad
|
|
||||||
|
|
||||||
*/
|
|
||||||
module.exports = {
|
|
||||||
/* CryptPad is designed to serve its content over two domains.
|
|
||||||
* Account passwords and cryptographic content is handled on the 'main' domain,
|
|
||||||
* while the user interface is loaded on a 'sandbox' domain
|
|
||||||
* which can only access information which the main domain willingly shares.
|
|
||||||
*
|
|
||||||
* In the event of an XSS vulnerability in the UI (that's bad)
|
|
||||||
* this system prevents attackers from gaining access to your account (that's good).
|
|
||||||
*
|
|
||||||
* Most problems with new instances are related to this system blocking access
|
|
||||||
* because of incorrectly configured sandboxes. If you only see a white screen
|
|
||||||
* when you try to load CryptPad, this is probably the cause.
|
|
||||||
*
|
|
||||||
* PLEASE READ THE FOLLOWING COMMENTS CAREFULLY.
|
|
||||||
*
|
|
||||||
*/
|
|
||||||
|
|
||||||
/* httpUnsafeOrigin is the URL that clients will enter to load your instance.
|
|
||||||
* Any other URL that somehow points to your instance is supposed to be blocked.
|
|
||||||
* The default provided below assumes you are loading CryptPad from a server
|
|
||||||
* which is running on the same machine, using port 3000.
|
|
||||||
*
|
|
||||||
* In a production instance this should be available ONLY over HTTPS
|
|
||||||
* using the default port for HTTPS (443) ie. https://cryptpad.fr
|
|
||||||
* In such a case this should be also handled by NGINX, as documented in
|
|
||||||
* cryptpad/docs/example.nginx.conf (see the $main_domain variable)
|
|
||||||
*
|
|
||||||
*/
|
|
||||||
httpUnsafeOrigin: 'https://pad-debug.deuxfleurs.fr',
|
|
||||||
|
|
||||||
/* httpSafeOrigin is the URL that is used for the 'sandbox' described above.
|
|
||||||
* If you're testing or developing with CryptPad on your local machine then
|
|
||||||
* it is appropriate to leave this blank. The default behaviour is to serve
|
|
||||||
* the main domain over port 3000 and to serve the sandbox content over port 3001.
|
|
||||||
*
|
|
||||||
* This is not appropriate in a production environment where invasive networks
|
|
||||||
* may filter traffic going over abnormal ports.
|
|
||||||
* To correctly configure your production instance you must provide a URL
|
|
||||||
* with a different domain (a subdomain is sufficient).
|
|
||||||
* It will be used to load the UI in our 'sandbox' system.
|
|
||||||
*
|
|
||||||
* This value corresponds to the $sandbox_domain variable
|
|
||||||
* in the example nginx file.
|
|
||||||
*
|
|
||||||
* Note that in order for the sandboxing system to be effective
|
|
||||||
* httpSafeOrigin must be different from httpUnsafeOrigin.
|
|
||||||
*
|
|
||||||
* CUSTOMIZE AND UNCOMMENT THIS FOR PRODUCTION INSTALLATIONS.
|
|
||||||
*/
|
|
||||||
httpSafeOrigin: "https://pad-sandbox-debug.deuxfleurs.fr",
|
|
||||||
|
|
||||||
/* httpAddress specifies the address on which the nodejs server
|
|
||||||
* should be accessible. By default it will listen on 127.0.0.1
|
|
||||||
* (IPv4 localhost on most systems). If you want it to listen on
|
|
||||||
* all addresses, including IPv6, set this to '::'.
|
|
||||||
*
|
|
||||||
*/
|
|
||||||
httpAddress: '::',
|
|
||||||
|
|
||||||
/* httpPort specifies on which port the nodejs server should listen.
|
|
||||||
* By default it will serve content over port 3000, which is suitable
|
|
||||||
* for both local development and for use with the provided nginx example,
|
|
||||||
* which will proxy websocket traffic to your node server.
|
|
||||||
*
|
|
||||||
*/
|
|
||||||
httpPort: 3000,
|
|
||||||
|
|
||||||
/* httpSafePort allows you to specify an alternative port from which
|
|
||||||
* the node process should serve sandboxed assets. The default value is
|
|
||||||
* that of your httpPort + 1. You probably don't need to change this.
|
|
||||||
*
|
|
||||||
*/
|
|
||||||
// httpSafePort: 3001,
|
|
||||||
|
|
||||||
/* CryptPad will launch a child process for every core available
|
|
||||||
* in order to perform CPU-intensive tasks in parallel.
|
|
||||||
* Some host environments may have a very large number of cores available
|
|
||||||
* or you may want to limit how much computing power CryptPad can take.
|
|
||||||
* If so, set 'maxWorkers' to a positive integer.
|
|
||||||
*/
|
|
||||||
// maxWorkers: 4,
|
|
||||||
|
|
||||||
/* =====================
|
|
||||||
* Admin
|
|
||||||
* ===================== */
|
|
||||||
|
|
||||||
/*
|
|
||||||
* CryptPad contains an administration panel. Its access is restricted to specific
|
|
||||||
* users using the following list.
|
|
||||||
* To give access to the admin panel to a user account, just add their public signing
|
|
||||||
* key, which can be found on the settings page for registered users.
|
|
||||||
* Entries should be strings separated by a comma.
|
|
||||||
*/
|
|
||||||
adminKeys: [
|
|
||||||
"[quentin@pad.deuxfleurs.fr/EWtzm-CiqJnM9RZL9mj-YyTgAtX-Zh76sru1K5bFpN8=]",
|
|
||||||
"[adrn@pad.deuxfleurs.fr/PxDpkPwd-jDJWkfWdAzFX7wtnLpnPlBeYZ4MmoEYS6E=]",
|
|
||||||
"[lx@pad.deuxfleurs.fr/FwQzcXywx1FIb83z6COB7c3sHnz8rNSDX1xhjPuH3Fg=]",
|
|
||||||
"[trinity-1686a@pad-debug.deuxfleurs.fr/Pu6Ef03jEsAGBbZI6IOdKd6+5pORD5N51QIYt4-Ys1c=]",
|
|
||||||
"[Jill@pad.deuxfleurs.fr/tLW7W8EVNB2KYETXEaOYR+HmNiBQtZj7u+SOxS3hGmg=]",
|
|
||||||
"[vincent@pad.deuxfleurs.fr/07FQiE8w1iztRWwzbRJzEy3xIqnNR31mUFjLNiGXjwU=]",
|
|
||||||
"[boris@pad.deuxfleurs.fr/kHo5LIhSxDFk39GuhGRp+XKlMjNe+lWfFWM75cINoTQ=]",
|
|
||||||
"[maximilien@pad.deuxfleurs.fr/UoXHLejYRUjvX6t55hAQKpjMdU-3ecg4eDhAeckZmyE=]",
|
|
||||||
"[armael@pad-debug.deuxfleurs.fr/CIKMvNdFxGavwTmni0TnR3x9GM0ypgx3DMcFyzppplU=]",
|
|
||||||
"[bjonglez@pad-debug.deuxfleurs.fr/+RRzwcLPj5ZCWELUXMjmt3u+-lvYnyhpDt4cqAn9nh8=]"
|
|
||||||
],
|
|
||||||
|
|
||||||
/* =====================
|
|
||||||
* STORAGE
|
|
||||||
* ===================== */
|
|
||||||
|
|
||||||
/* Pads that are not 'pinned' by any registered user can be set to expire
|
|
||||||
* after a configurable number of days of inactivity (default 90 days).
|
|
||||||
* The value can be changed or set to false to remove expiration.
|
|
||||||
* Expired pads can then be removed using a cron job calling the
|
|
||||||
* `evict-inactive.js` script with node
|
|
||||||
*
|
|
||||||
* defaults to 90 days if nothing is provided
|
|
||||||
*/
|
|
||||||
//inactiveTime: 90, // days
|
|
||||||
|
|
||||||
/* CryptPad archives some data instead of deleting it outright.
|
|
||||||
* This archived data still takes up space and so you'll probably still want to
|
|
||||||
* remove these files after a brief period.
|
|
||||||
*
|
|
||||||
* cryptpad/scripts/evict-inactive.js is intended to be run daily
|
|
||||||
* from a crontab or similar scheduling service.
|
|
||||||
*
|
|
||||||
* The intent with this feature is to provide a safety net in case of accidental
|
|
||||||
* deletion. Set this value to the number of days you'd like to retain
|
|
||||||
* archived data before it's removed permanently.
|
|
||||||
*
|
|
||||||
* defaults to 15 days if nothing is provided
|
|
||||||
*/
|
|
||||||
//archiveRetentionTime: 15,
|
|
||||||
|
|
||||||
/* It's possible to configure your instance to remove data
|
|
||||||
* stored on behalf of inactive accounts. Set 'accountRetentionTime'
|
|
||||||
* to the number of days an account can remain idle before its
|
|
||||||
* documents and other account data is removed.
|
|
||||||
*
|
|
||||||
* Leave this value commented out to preserve all data stored
|
|
||||||
* by user accounts regardless of inactivity.
|
|
||||||
*/
|
|
||||||
//accountRetentionTime: 365,
|
|
||||||
|
|
||||||
/* Starting with CryptPad 3.23.0, the server automatically runs
|
|
||||||
* the script responsible for removing inactive data according to
|
|
||||||
* your configured definition of inactivity. Set this value to `true`
|
|
||||||
* if you prefer not to remove inactive data, or if you prefer to
|
|
||||||
* do so manually using `scripts/evict-inactive.js`.
|
|
||||||
*/
|
|
||||||
//disableIntegratedEviction: true,

/* Max Upload Size (bytes)
 * this sets the maximum size of any one file uploaded to the server.
 * anything larger than this size will be rejected
 * defaults to 20MB if no value is provided
 */
//maxUploadSize: 20 * 1024 * 1024,

/* Users with premium accounts (those with a plan included in their customLimit)
 * can benefit from an increased upload size limit. By default they are restricted to the same
 * upload size as any other registered user.
 */
//premiumUploadSize: 100 * 1024 * 1024,

/* =====================
 * DATABASE VOLUMES
 * ===================== */

/*
 * We need this config entry, else CryptPad apparently tries to mkdir
 * directories inside the (read-only) Nix store.
 */
base: '/mnt/data',

/*
 * CryptPad stores each document in an individual file on your hard drive.
 * Specify a directory where files should be stored.
 * It will be created automatically if it does not already exist.
 */
filePath: '/mnt/datastore/',

/* CryptPad offers the ability to archive data for a configurable period
 * before deleting it, allowing a means of recovering data in the event
 * that it was deleted accidentally.
 *
 * To set the location of this archive directory to a custom value, change
 * the path below:
 */
archivePath: '/mnt/data/archive',

/* CryptPad allows logged in users to request that particular documents be
 * stored by the server indefinitely. This is called 'pinning'.
 * Pin requests are stored in a pin-store. The location of this store is
 * defined here.
 */
pinPath: '/mnt/data/pins',

/* if you would like the list of scheduled tasks to be stored in
   a custom location, change the path below:
*/
taskPath: '/mnt/data/tasks',

/* if you would like users' authenticated blocks to be stored in
   a custom location, change the path below:
*/
blockPath: '/mnt/block',

/* CryptPad allows logged in users to upload encrypted files. Files/blobs
 * are stored in a 'blob-store'. Set its location here.
 */
blobPath: '/mnt/blob',

/* CryptPad stores incomplete blobs in a 'staging' area until they are
 * fully uploaded. Set its location here.
 */
blobStagingPath: '/mnt/data/blobstage',

decreePath: '/mnt/data/decrees',
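
All of these paths live below the `/mnt` volume that the Nomad job further down binds to `/mnt/ssd/cryptpad` on the host. CryptPad creates missing directories itself, so this is optional, but the expected host-side layout can be sketched as follows (purely illustrative):

```
# Sketch: the host directories behind the paths configured above.
mkdir -p /mnt/ssd/cryptpad/{data,datastore,block,blob}
mkdir -p /mnt/ssd/cryptpad/data/{archive,pins,tasks,blobstage,decrees}
```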

/* CryptPad supports logging events directly to the disk in a 'logs' directory
 * Set its location here, or set it to false (or nothing) if you'd rather not log
 */
logPath: false,

/* =====================
 * Debugging
 * ===================== */

/* CryptPad can log activity to stdout
 * This may be useful for debugging
 */
logToStdout: true,

/* CryptPad can be configured to log more or less
 * the various settings are listed below by order of importance
 *
 * silly, verbose, debug, feedback, info, warn, error
 *
 * Choose the least important level of logging you wish to see.
 * For example, a 'silly' logLevel will display everything,
 * while 'info' will display 'info', 'warn', and 'error' logs
 *
 * This will affect both logging to the console and the disk.
 */
logLevel: 'silly',

/* clients can use the /settings/ app to opt out of usage feedback
 * which informs the server of things like how much each app is being
 * used, and whether certain clientside features are supported by
 * the client's browser. The intent is to provide feedback to the admin
 * such that the service can be improved. Enable this with `true`
 * and ignore feedback with `false` or by commenting the attribute
 *
 * You will need to set your logLevel to include 'feedback'. Set this
 * to false if you'd like to exclude feedback from your logs.
 */
logFeedback: false,

/* CryptPad supports verbose logging
 * (false by default)
 */
verbose: true,

/* Surplus information:
 *
 * 'installMethod' is included in server telemetry to voluntarily
 * indicate how many instances are using unofficial installation methods
 * such as Docker.
 */
installMethod: 'deuxfleurs.fr',
};

@ -1,296 +0,0 @@
/* globals module */

/* DISCLAIMER:

There are two recommended methods of running a CryptPad instance:

1. Using a standalone nodejs server without HTTPS (suitable for local development)
2. Using NGINX to serve static assets and to handle HTTPS for API server's websocket traffic

We do not officially recommend or support Apache, Docker, Kubernetes, Traefik, or any other configuration.
Support requests for such setups should be directed to their authors.

If you're having difficulty configuring your instance
we suggest that you join the project's IRC/Matrix channel.

If you don't have any difficulty configuring your instance and you'd like to
support us for the work that went into making it pain-free we are quite happy
to accept donations via our opencollective page: https://opencollective.com/cryptpad

*/
module.exports = {
/* CryptPad is designed to serve its content over two domains.
 * Account passwords and cryptographic content is handled on the 'main' domain,
 * while the user interface is loaded on a 'sandbox' domain
 * which can only access information which the main domain willingly shares.
 *
 * In the event of an XSS vulnerability in the UI (that's bad)
 * this system prevents attackers from gaining access to your account (that's good).
 *
 * Most problems with new instances are related to this system blocking access
 * because of incorrectly configured sandboxes. If you only see a white screen
 * when you try to load CryptPad, this is probably the cause.
 *
 * PLEASE READ THE FOLLOWING COMMENTS CAREFULLY.
 *
 */

/* httpUnsafeOrigin is the URL that clients will enter to load your instance.
 * Any other URL that somehow points to your instance is supposed to be blocked.
 * The default provided below assumes you are loading CryptPad from a server
 * which is running on the same machine, using port 3000.
 *
 * In a production instance this should be available ONLY over HTTPS
 * using the default port for HTTPS (443) ie. https://cryptpad.fr
 * In such a case this should be also handled by NGINX, as documented in
 * cryptpad/docs/example.nginx.conf (see the $main_domain variable)
 *
 */
httpUnsafeOrigin: 'https://pad.deuxfleurs.fr',

/* httpSafeOrigin is the URL that is used for the 'sandbox' described above.
 * If you're testing or developing with CryptPad on your local machine then
 * it is appropriate to leave this blank. The default behaviour is to serve
 * the main domain over port 3000 and to serve the sandbox content over port 3001.
 *
 * This is not appropriate in a production environment where invasive networks
 * may filter traffic going over abnormal ports.
 * To correctly configure your production instance you must provide a URL
 * with a different domain (a subdomain is sufficient).
 * It will be used to load the UI in our 'sandbox' system.
 *
 * This value corresponds to the $sandbox_domain variable
 * in the example nginx file.
 *
 * Note that in order for the sandboxing system to be effective
 * httpSafeOrigin must be different from httpUnsafeOrigin.
 *
 * CUSTOMIZE AND UNCOMMENT THIS FOR PRODUCTION INSTALLATIONS.
 */
httpSafeOrigin: "https://pad-sandbox.deuxfleurs.fr",
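
Once deployed, a quick sanity check of the two-domain setup is to confirm that both origins answer over HTTPS; a minimal sketch:

```
# Sketch: both the main and the sandbox origin should return an HTTP status line.
curl -sI https://pad.deuxfleurs.fr | head -n 1
curl -sI https://pad-sandbox.deuxfleurs.fr | head -n 1
```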

/* httpAddress specifies the address on which the nodejs server
 * should be accessible. By default it will listen on 127.0.0.1
 * (IPv4 localhost on most systems). If you want it to listen on
 * all addresses, including IPv6, set this to '::'.
 *
 */
httpAddress: '::',

/* httpPort specifies on which port the nodejs server should listen.
 * By default it will serve content over port 3000, which is suitable
 * for both local development and for use with the provided nginx example,
 * which will proxy websocket traffic to your node server.
 *
 */
httpPort: 3000,

/* httpSafePort allows you to specify an alternative port from which
 * the node process should serve sandboxed assets. The default value is
 * that of your httpPort + 1. You probably don't need to change this.
 *
 */
// httpSafePort: 3001,

/* CryptPad will launch a child process for every core available
 * in order to perform CPU-intensive tasks in parallel.
 * Some host environments may have a very large number of cores available
 * or you may want to limit how much computing power CryptPad can take.
 * If so, set 'maxWorkers' to a positive integer.
 */
// maxWorkers: 4,

/* =====================
 * Admin
 * ===================== */

/*
 * CryptPad contains an administration panel. Its access is restricted to specific
 * users using the following list.
 * To give access to the admin panel to a user account, just add their public signing
 * key, which can be found on the settings page for registered users.
 * Entries should be strings separated by a comma.
 */
adminKeys: [
    "[quentin@pad.deuxfleurs.fr/EWtzm-CiqJnM9RZL9mj-YyTgAtX-Zh76sru1K5bFpN8=]",
    "[adrn@pad.deuxfleurs.fr/PxDpkPwd-jDJWkfWdAzFX7wtnLpnPlBeYZ4MmoEYS6E=]",
    "[lx@pad.deuxfleurs.fr/FwQzcXywx1FIb83z6COB7c3sHnz8rNSDX1xhjPuH3Fg=]",
    "[trinity-1686a@pad.deuxfleurs.fr/Pu6Ef03jEsAGBbZI6IOdKd6+5pORD5N51QIYt4-Ys1c=]",
    "[Jill@pad.deuxfleurs.fr/tLW7W8EVNB2KYETXEaOYR+HmNiBQtZj7u+SOxS3hGmg=]",
    "[vincent@pad.deuxfleurs.fr/07FQiE8w1iztRWwzbRJzEy3xIqnNR31mUFjLNiGXjwU=]",
    "[boris@pad.deuxfleurs.fr/kHo5LIhSxDFk39GuhGRp+XKlMjNe+lWfFWM75cINoTQ=]",
    "[maximilien@pad.deuxfleurs.fr/UoXHLejYRUjvX6t55hAQKpjMdU-3ecg4eDhAeckZmyE=]",
    "[armael@pad.deuxfleurs.fr/CIKMvNdFxGavwTmni0TnR3x9GM0ypgx3DMcFyzppplU=]",
    "[bjonglez@pad.deuxfleurs.fr/+RRzwcLPj5ZCWELUXMjmt3u+-lvYnyhpDt4cqAn9nh8=]"
],

/* =====================
 * STORAGE
 * ===================== */

/* Pads that are not 'pinned' by any registered user can be set to expire
 * after a configurable number of days of inactivity (default 90 days).
 * The value can be changed or set to false to remove expiration.
 * Expired pads can then be removed using a cron job calling the
 * `evict-inactive.js` script with node.
 *
 * defaults to 90 days if nothing is provided
 */
//inactiveTime: 90, // days

/* CryptPad archives some data instead of deleting it outright.
 * This archived data still takes up space and so you'll probably still want to
 * remove these files after a brief period.
 *
 * cryptpad/scripts/evict-inactive.js is intended to be run daily
 * from a crontab or similar scheduling service.
 *
 * The intent with this feature is to provide a safety net in case of accidental
 * deletion. Set this value to the number of days you'd like to retain
 * archived data before it's removed permanently.
 *
 * defaults to 15 days if nothing is provided
 */
//archiveRetentionTime: 15,

/* It's possible to configure your instance to remove data
 * stored on behalf of inactive accounts. Set 'accountRetentionTime'
 * to the number of days an account can remain idle before its
 * documents and other account data is removed.
 *
 * Leave this value commented out to preserve all data stored
 * by user accounts regardless of inactivity.
 */
//accountRetentionTime: 365,

/* Starting with CryptPad 3.23.0, the server automatically runs
 * the script responsible for removing inactive data according to
 * your configured definition of inactivity. Set this value to `true`
 * if you prefer not to remove inactive data, or if you prefer to
 * do so manually using `scripts/evict-inactive.js`.
 */
//disableIntegratedEviction: true,

/* Max Upload Size (bytes)
 * this sets the maximum size of any one file uploaded to the server.
 * anything larger than this size will be rejected
 * defaults to 20MB if no value is provided
 */
//maxUploadSize: 20 * 1024 * 1024,

/* Users with premium accounts (those with a plan included in their customLimit)
 * can benefit from an increased upload size limit. By default they are restricted to the same
 * upload size as any other registered user.
 */
//premiumUploadSize: 100 * 1024 * 1024,

/* =====================
 * DATABASE VOLUMES
 * ===================== */

/*
 * We need this config entry, else CryptPad apparently tries to mkdir
 * directories inside the (read-only) Nix store.
 */
base: '/mnt/data',

/*
 * CryptPad stores each document in an individual file on your hard drive.
 * Specify a directory where files should be stored.
 * It will be created automatically if it does not already exist.
 */
filePath: '/mnt/datastore/',

/* CryptPad offers the ability to archive data for a configurable period
 * before deleting it, allowing a means of recovering data in the event
 * that it was deleted accidentally.
 *
 * To set the location of this archive directory to a custom value, change
 * the path below:
 */
archivePath: '/mnt/data/archive',

/* CryptPad allows logged in users to request that particular documents be
 * stored by the server indefinitely. This is called 'pinning'.
 * Pin requests are stored in a pin-store. The location of this store is
 * defined here.
 */
pinPath: '/mnt/data/pins',

/* if you would like the list of scheduled tasks to be stored in
   a custom location, change the path below:
*/
taskPath: '/mnt/data/tasks',

/* if you would like users' authenticated blocks to be stored in
   a custom location, change the path below:
*/
blockPath: '/mnt/block',

/* CryptPad allows logged in users to upload encrypted files. Files/blobs
 * are stored in a 'blob-store'. Set its location here.
 */
blobPath: '/mnt/blob',

/* CryptPad stores incomplete blobs in a 'staging' area until they are
 * fully uploaded. Set its location here.
 */
blobStagingPath: '/mnt/data/blobstage',

decreePath: '/mnt/data/decrees',

/* CryptPad supports logging events directly to the disk in a 'logs' directory
 * Set its location here, or set it to false (or nothing) if you'd rather not log
 */
logPath: false,

/* =====================
 * Debugging
 * ===================== */

/* CryptPad can log activity to stdout
 * This may be useful for debugging
 */
logToStdout: true,

/* CryptPad can be configured to log more or less
 * the various settings are listed below by order of importance
 *
 * silly, verbose, debug, feedback, info, warn, error
 *
 * Choose the least important level of logging you wish to see.
 * For example, a 'silly' logLevel will display everything,
 * while 'info' will display 'info', 'warn', and 'error' logs
 *
 * This will affect both logging to the console and the disk.
 */
logLevel: 'silly',

/* clients can use the /settings/ app to opt out of usage feedback
 * which informs the server of things like how much each app is being
 * used, and whether certain clientside features are supported by
 * the client's browser. The intent is to provide feedback to the admin
 * such that the service can be improved. Enable this with `true`
 * and ignore feedback with `false` or by commenting the attribute
 *
 * You will need to set your logLevel to include 'feedback'. Set this
 * to false if you'd like to exclude feedback from your logs.
 */
logFeedback: false,

/* CryptPad supports verbose logging
 * (false by default)
 */
verbose: true,

/* Surplus information:
 *
 * 'installMethod' is included in server telemetry to voluntarily
 * indicate how many instances are using unofficial installation methods
 * such as Docker.
 */
installMethod: 'deuxfleurs.fr',
};

@ -1,80 +0,0 @@
job "cryptpad" {
  datacenters = ["scorpio"]
  type = "service"

  group "cryptpad" {
    count = 1

    network {
      port "http" {
        to = 3000
      }
    }

    restart {
      attempts = 10
      delay = "30s"
    }

    task "main" {
      driver = "docker"

      constraint {
        attribute = "${attr.unique.hostname}"
        operator  = "="
        value     = "abricot"
      }

      config {
        image = "kokakiwi/cryptpad:2024.9.0"
        ports = [ "http" ]

        volumes = [
          "/mnt/ssd/cryptpad:/mnt",
          "secrets/config.js:/cryptpad/config.js",
        ]
      }
      env {
        CRYPTPAD_CONFIG = "/cryptpad/config.js"
      }

      template {
        data = file("../config/config.js")
        destination = "secrets/config.js"
      }

      /* Disabled because it requires modifications to the docker image and I do not want to invest the time yet
      template {
        data = file("../config/application_config.js")
        destination = "secrets/config.js"
      }
      */

      resources {
        memory = 1000
        cpu = 500
      }

      service {
        name = "cryptpad"
        port = "http"
        tags = [
          "tricot pad.deuxfleurs.fr",
          "tricot pad-sandbox.deuxfleurs.fr",
          "tricot-add-header Cross-Origin-Resource-Policy cross-origin",
          "tricot-add-header Cross-Origin-Embedder-Policy require-corp",
          "tricot-add-header Access-Control-Allow-Origin *",
          "tricot-add-header Access-Control-Allow-Credentials true",
          "d53-cname pad.deuxfleurs.fr",
          "d53-cname pad-sandbox.deuxfleurs.fr",
        ]
        check {
          type = "http"
          path = "/"
          interval = "10s"
          timeout = "2s"
        }
      }
    }
  }
}
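
Assuming the usual Nomad workflow, this job would be deployed and verified roughly as follows (the file name is illustrative; the final curl exercises the same `/` path as the health check above):

```
# Sketch: deploy the job and verify the HTTP health check passes.
nomad job run cryptpad.hcl
nomad job status cryptpad
curl -s -o /dev/null -w '%{http_code}\n' https://pad.deuxfleurs.fr/
```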

@ -1,20 +0,0 @@
FROM golang:1.19.3-buster as builder

ARG VERSION

ENV CGO_ENABLED=0 GOOS=linux GOARCH=amd64
WORKDIR /tmp/alps

RUN git init && \
    git remote add origin https://git.deuxfleurs.fr/Deuxfleurs/alps.git && \
    git fetch --depth 1 origin ${VERSION} && \
    git checkout FETCH_HEAD

RUN go build -a -o /usr/local/bin/alps ./cmd/alps

FROM scratch
COPY --from=builder /usr/local/bin/alps /alps
COPY --from=builder /tmp/alps/themes /themes
COPY --from=builder /tmp/alps/plugins /plugins
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
ENTRYPOINT ["/alps"]
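
The VERSION build argument selects which commit of alps to fetch. A sketch of building this image by hand, using the commit and tag pinned in the docker-compose.yml below:

```
# Sketch: build alps at the commit pinned in docker-compose.yml.
docker build \
  --build-arg VERSION=bf9ccc6ed17e8b50a230e9f5809d820e9de8562f \
  -t lxpz/amd64_alps:v4 .
```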

@ -1,36 +0,0 @@
version: '3.4'
services:

  # Email
  sogo:
    build:
      context: ./sogo
      args:
        # fake for now
        VERSION: 5.0.0
    image: superboum/amd64_sogo:v7

  alps:
    build:
      context: ./alps
      args:
        VERSION: bf9ccc6ed17e8b50a230e9f5809d820e9de8562f
    image: lxpz/amd64_alps:v4

  dovecot:
    build:
      context: ./dovecot
    image: superboum/amd64_dovecot:v6

  postfix:
    build:
      context: ./postfix
      args:
        # https://packages.debian.org/fr/trixie/postfix
        VERSION: 3.8.4-1
    image: superboum/amd64_postfix:v4

  opendkim:
    build:
      context: ./opendkim
    image: superboum/amd64_opendkim:v6
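
With this file, the email images can be rebuilt in one pass, or one service at a time; a minimal sketch:

```
# Sketch: rebuild every image defined above, or just one of them.
docker-compose build
docker-compose build alps
```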

@ -1 +0,0 @@
dovecot-ldap.conf

@ -1,16 +0,0 @@
FROM amd64/debian:bullseye

RUN apt-get update && \
    apt-get install -y \
      dovecot-antispam \
      dovecot-core \
      dovecot-imapd \
      dovecot-ldap \
      dovecot-managesieved \
      dovecot-sieve \
      dovecot-lmtpd && \
    rm -rf /etc/dovecot/*
RUN useradd mailstore
COPY entrypoint.sh /usr/local/bin/entrypoint

ENTRYPOINT ["/usr/local/bin/entrypoint"]

@ -1,18 +0,0 @@
```
sudo docker build -t superboum/amd64_dovecot:v2 .
```

```
sudo docker run -t -i \
  -e TLSINFO="/C=FR/ST=Bretagne/L=Rennes/O=Deuxfleurs/CN=www.deuxfleurs.fr" \
  -p 993:993 \
  -p 143:143 \
  -p 24:24 \
  -p 1337:1337 \
  -v /mnt/glusterfs/email/ssl:/etc/ssl/ \
  -v /mnt/glusterfs/email/mail:/var/mail \
  -v `pwd`/dovecot-ldap.conf:/etc/dovecot/dovecot-ldap.conf \
  superboum/amd64_dovecot:v2 \
  dovecot -F
```

@ -1,27 +0,0 @@
#!/bin/bash

if [[ ! -f /etc/ssl/certs/dovecot.crt || ! -f /etc/ssl/private/dovecot.key ]]; then
  cd /root
  openssl req \
    -new \
    -newkey rsa:4096 \
    -days 3650 \
    -nodes \
    -x509 \
    -subj ${TLSINFO} \
    -keyout dovecot.key \
    -out dovecot.crt

  mkdir -p /etc/ssl/{certs,private}/

  cp dovecot.crt /etc/ssl/certs/dovecot.crt
  cp dovecot.key /etc/ssl/private/dovecot.key
  chmod 400 /etc/ssl/certs/dovecot.crt
  chmod 400 /etc/ssl/private/dovecot.key
fi

if [[ $(stat -c '%U' /var/mail/) != "mailstore" ]]; then
  chown -R mailstore /var/mail
fi

exec "$@"
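
To see what the entrypoint actually generated, the self-signed certificate can be inspected from inside the container; a sketch:

```
# Sketch: print the subject and validity window of the generated certificate.
openssl x509 -in /etc/ssl/certs/dovecot.crt -noout -subject -dates
```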

@ -1,5 +0,0 @@
require ["fileinto", "mailbox"];
if header :contains "X-Spam-Flag" "YES" {
  fileinto :create "Junk";
}

@ -1,8 +0,0 @@
hosts = ldap.example.com
dn = cn=admin,dc=example,dc=com
dnpass = s3cr3t
base = dc=example,dc=com
scope = subtree
user_filter = (&(mail=%u)(&(objectClass=inetOrgPerson)(memberOf=cn=email,ou=groups,dc=example,dc=com)))
pass_filter = (&(mail=%u)(&(objectClass=inetOrgPerson)(memberOf=cn=email,ou=groups,dc=example,dc=com)))
user_attrs = mail=/var/mail/%{ldap:mail}
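
Once a real dovecot-ldap.conf is in place, the passdb/userdb lookups it configures can be exercised with doveadm; a hedged sketch (address and password are placeholders):

```
# Sketch: test LDAP authentication and the user lookup configured above.
doveadm auth test someone@example.com 's3cr3t'
doveadm user someone@example.com
```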

@ -1,17 +0,0 @@
require ["vnd.dovecot.pipe", "copy", "imapsieve", "environment", "variables", "vnd.dovecot.debug"];

if environment :matches "imap.mailbox" "*" {
  set "mailbox" "${1}";
}

if string "${mailbox}" "Trash" {
  stop;
}

if environment :matches "imap.user" "*" {
  set "username" "${1}";
}

pipe :copy "sa-learn" [ "--ham", "-u", "debian-spamd" ];
debug_log "ham reported by ${username}";
@ -1,9 +0,0 @@
require ["vnd.dovecot.pipe", "copy", "imapsieve", "environment", "variables", "vnd.dovecot.debug"];

if environment :matches "imap.user" "*" {
  set "username" "${1}";
}

pipe :copy "sa-learn" [ "--spam", "-u", "debian-spamd" ];
debug_log "spam reported by ${username}";
@ -1,9 +0,0 @@
FROM amd64/debian:bullseye

RUN apt-get update && \
    apt-get dist-upgrade -y && \
    apt-get install -y opendkim opendkim-tools

COPY ./opendkim.conf /etc/opendkim.conf
COPY ./entrypoint /entrypoint
CMD ["/entrypoint"]

@ -1,12 +0,0 @@
```
sudo docker build -t superboum/amd64_opendkim:v1 .
```

```
sudo docker run -t -i \
  -v `pwd`/conf:/etc/dkim \
  -v /dev/log:/dev/log \
  -p 8999:8999 \
  superboum/amd64_opendkim:v1 \
  opendkim -f -v -x /etc/opendkim.conf
```

@ -1,8 +0,0 @@
#!/bin/bash

chown 0:0 /etc/dkim/*
chown 0:0 /etc/dkim
chmod 400 /etc/dkim/*
chmod 700 /etc/dkim

opendkim -f -v -x /etc/opendkim.conf

@ -1,12 +0,0 @@
Syslog yes
SyslogSuccess yes
LogWhy yes
UMask 007
Mode sv
OversignHeaders From
TrustAnchorFile /usr/share/dns/root.key
KeyTable refile:/etc/dkim/keytable
SigningTable refile:/etc/dkim/signingtable
ExternalIgnoreList refile:/etc/dkim/trusted
InternalHosts refile:/etc/dkim/trusted
Socket inet:8999
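
The keytable and signingtable files under /etc/dkim are expected to exist already. A sketch of generating a key pair with the opendkim-tools installed in the image above (domain and selector are placeholders):

```
# Sketch: generate a DKIM key pair for a hypothetical domain and selector.
opendkim-genkey -D /etc/dkim -d example.com -s smtp
# Produces smtp.private (signing key) and smtp.txt (the DNS TXT record to publish).
```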

@ -1,13 +0,0 @@
FROM amd64/debian:trixie

ARG VERSION

RUN apt-get update && \
    apt-get install -y \
    postfix=$VERSION \
    postfix-ldap

COPY entrypoint.sh /usr/local/bin/entrypoint

ENTRYPOINT ["/usr/local/bin/entrypoint"]
CMD ["postfix", "start-fg"]
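
The pinned Debian package version comes in through the VERSION build argument; a sketch matching the value used in docker-compose.yml:

```
# Sketch: build the postfix image with the package version pinned in docker-compose.yml.
docker build --build-arg VERSION=3.8.4-1 -t superboum/amd64_postfix:v4 .
```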

@ -1,18 +0,0 @@
```
sudo docker build -t superboum/amd64_postfix:v1 .
```

```
sudo docker run -t -i \
  -e TLSINFO="/C=FR/ST=Bretagne/L=Rennes/O=Deuxfleurs/CN=smtp.deuxfleurs.fr" \
  -e MAILNAME="smtp.deuxfleurs.fr" \
  -p 25:25 \
  -p 465:465 \
  -p 587:587 \
  -v `pwd`/../../ansible/roles/container_conf/files/email/postfix-conf:/etc/postfix-conf \
  -v /mnt/glusterfs/email/postfix-ssl/private:/etc/ssl/private \
  -v /mnt/glusterfs/email/postfix-ssl/certs:/etc/ssl/certs \
  superboum/amd64_postfix:v1 \
  bash
```

@ -1,31 +0,0 @@
#!/bin/bash

if [[ ! -f /etc/ssl/certs/postfix.crt || ! -f /etc/ssl/private/postfix.key ]]; then
  cd /root
  openssl req \
    -new \
    -newkey rsa:4096 \
    -days 3650 \
    -nodes \
    -x509 \
    -subj ${TLSINFO} \
    -keyout postfix.key \
    -out postfix.crt

  mkdir -p /etc/ssl/{certs,private}/

  cp postfix.crt /etc/ssl/certs/postfix.crt
  cp postfix.key /etc/ssl/private/postfix.key
  chmod 400 /etc/ssl/certs/postfix.crt
  chmod 400 /etc/ssl/private/postfix.key
fi

# A way to map files inside the postfix folder :s
for file in $(ls /etc/postfix-conf); do
  cp /etc/postfix-conf/${file} /etc/postfix/${file}
done

echo ${MAILNAME} > /etc/mailname
postmap /etc/postfix/transport

exec "$@"

@ -1,17 +0,0 @@
#FROM amd64/debian:stretch as builder

FROM amd64/debian:buster

RUN mkdir ~/.gnupg && echo "disable-ipv6" >> ~/.gnupg/dirmngr.conf

RUN apt-get update && \
    apt-get install -y apt-transport-https gnupg2 sudo nginx && \
    rm -rf /etc/nginx/sites-enabled/* && \
    apt-key adv --keyserver keys.gnupg.net --recv-key 0x810273C4 && \
    echo "deb http://packages.inverse.ca/SOGo/nightly/5/debian/ buster buster" > /etc/apt/sources.list.d/sogo.list && \
    apt-get update && \
    apt-get install -y sogo sogo-activesync sope4.9-gdl1-postgresql postgresql-client

COPY sogo.nginx.conf /etc/nginx/sites-enabled/sogo.conf
COPY entrypoint /usr/sbin/entrypoint
ENTRYPOINT ["/usr/sbin/entrypoint"]
@ -1,20 +0,0 @@
```
docker build -t superboum/amd64_sogo:v6 .

# privileged is only for debug
docker run --rm -ti \
  --privileged \
  -p 8080:8080 \
  -v /tmp/sogo/log:/var/log/sogo \
  -v /tmp/sogo/run:/var/run/sogo \
  -v /tmp/sogo/spool:/var/spool/sogo \
  -v /tmp/sogo/tmp:/tmp \
  -v `pwd`/sogo:/etc/sogo:ro \
  superboum/amd64_sogo:v6
```

The PostgreSQL password must be URL-encoded in sogo.conf.
An nginx instance is needed in front of SOGo: http://wiki.sogo.nu/nginxSettings

This Traefik label might (or might not) be needed:
traefik.frontend.headers.customRequestHeaders=x-webobjects-server-port:443||x-webobjects-server-name=sogo.deuxfleurs.fr||x-webobjects-server-url:https://sogo.deuxfleurs.fr
Some files were not shown because too many files have changed in this diff.