Compare commits
11 commits: main...hammerhead
165 changed files with 2818 additions and 1635 deletions

Commit SHA1s: c9e3f01b34, 9acdec272b, 6aa3369341, 1beced4c65, 213e42f4ad, 66818430bb, 8c565aac6f, 7275c5b156, 560e1f1d90, fab59e7a7a, efd6069af4
.gitmodules (vendored): 3 changes

@@ -1,3 +1,6 @@
[submodule "docker/static/goStatic"]
path = app/build/static/goStatic
url = https://github.com/PierreZ/goStatic
[submodule "docker/blog/quentin.dufour.io"]
path = docker/blog-quentin/quentin.dufour.io
url = git@gitlab.com:superboum/quentin.dufour.io.git

README.md: 93 changes

@@ -1,8 +1,31 @@
deuxfleurs.fr
|
||||
=============
|
||||
|
||||
**OBSOLETION NOTICE:** We are progressively migrating our stack to NixOS to replace Ansible. Most of the files in this repository are outdated or obsolete;
|
||||
the current code for our infrastructure is at: <https://git.deuxfleurs.fr/Deuxfleurs/nixcfg>.
|
||||
*Many things are still missing here, including proper documentation. Please stay nice, it is a volunteer project. Feel free to open pull/merge requests to improve it. Thanks.*
|
||||
|
||||
## Our abstraction stack
|
||||
|
||||
We try to build a generic abstraction stack between our different resources (CPU, RAM, disk, etc.) and our services (Chat, Storage, etc.); we develop our own tools when needed:
|
||||
|
||||
* **[garage](https://git.deuxfleurs.fr/Deuxfleurs/garage/):** S3-compatible lightweight object store for self-hosted geo-distributed deployments (we also have a legacy glusterfs cluster)
|
||||
* **[diplonat](https://git.deuxfleurs.fr/Deuxfleurs/diplonat):** network automation (firewalling, upnp igd)
|
||||
* **[bottin](https://git.deuxfleurs.fr/Deuxfleurs/bottin):** authentication and authorization (LDAP protocol, consul backend)
|
||||
* **[guichet](https://git.deuxfleurs.fr/Deuxfleurs/guichet):** a dashboard for our users and administrators
|
||||
* **ansible:** physical node configuration
|
||||
* **nomad:** schedule containers and handle their lifecycle
|
||||
* **consul:** distributed key value store + lock + service discovery
|
||||
* **stolon + postgresql:** distributed relational database
|
||||
* **docker:** package, distribute and isolate applications
|
||||
|
||||
Some services we provide:
|
||||
|
||||
* **Websites:** garage (static) + fediverse blog (plume)
|
||||
* **Chat:** Synapse + Element Web (Matrix protocol)
|
||||
* **Email:** Postfix SMTP + Dovecot IMAP + opendkim DKIM + Sogo webmail (legacy) | Alps webmail (experimental)
|
||||
* **Storage:** Seafile (legacy) | Nextcloud (experimental)
|
||||
* **Videoconferencing:** Jitsi
|
||||
|
||||
As a generic abstraction is provided, deploying new services should be easy.
|
||||
|
||||
## I am lost, how does this repo work?
|
||||
|
||||
|
@ -19,3 +42,69 @@ To ease the development, we make the choice of a fully integrated environment
|
|||
3. `op_guide`: Guides explaining operations you can perform cluster-wide (like configuring postgres)
|
||||
|
||||
|
||||
## Start hacking
|
||||
|
||||
### Deploying/Updating new services is done from your machine
|
||||
|
||||
*The following instructions are provided for ops that already have access to the servers (meaning: their SSH public key is known by the cluster).*
|
||||
|
||||
Deploy Nomad on your machine:
|
||||
|
||||
```bash
|
||||
export NOMAD_VER=1.0.1
|
||||
wget https://releases.hashicorp.com/nomad/${NOMAD_VER}/nomad_${NOMAD_VER}_linux_amd64.zip
|
||||
unzip nomad_${NOMAD_VER}_linux_amd64.zip
|
||||
sudo mv nomad /usr/local/bin
|
||||
rm nomad_${NOMAD_VER}_linux_amd64.zip
|
||||
```
|
||||
|
||||
Deploy Consul on your machine:
|
||||
|
||||
```bash
|
||||
export CONSUL_VER=1.9.0
|
||||
wget https://releases.hashicorp.com/consul/${CONSUL_VER}/consul_${CONSUL_VER}_linux_amd64.zip
|
||||
unzip consul_${CONSUL_VER}_linux_amd64.zip
|
||||
sudo mv consul /usr/local/bin
|
||||
rm consul_${CONSUL_VER}_linux_amd64.zip
|
||||
```
|
||||
|
||||
Create an alias (and put it in your `.bashrc`) to bind APIs on your machine:
|
||||
|
||||
```bash
|
||||
alias bind_df="ssh \
|
||||
-p110 \
|
||||
-N \
|
||||
-L 1389:bottin2.service.2.cluster.deuxfleurs.fr:389 \
|
||||
-L 4646:127.0.0.1:4646 \
|
||||
-L 5432:psql-proxy.service.2.cluster.deuxfleurs.fr:5432 \
|
||||
-L 8082:traefik-admin.service.2.cluster.deuxfleurs.fr:8082 \
|
||||
-L 8500:127.0.0.1:8500 \
|
||||
<a server from the cluster>"
|
||||
```
|
||||
|
||||
and run:
|
||||
|
||||
bind_df
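Once the tunnel is up, the forwarded Nomad and Consul APIs answer on localhost. A minimal sanity check could look like the following sketch, assuming the ports forwarded by the alias above:

```bash
# Point the CLIs at the SSH-forwarded ports opened by bind_df
# (port numbers taken from the alias above)
export NOMAD_ADDR=http://127.0.0.1:4646
export CONSUL_HTTP_ADDR=http://127.0.0.1:8500

nomad server members   # should list the cluster's Nomad servers
consul members         # should list the cluster's nodes
```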
|
||||
|
||||
Adrien uses an `.ssh/config` configuration instead. It works basically the same. Here it goes:
|
||||
|
||||
```
|
||||
# in ~/.ssh/config
|
||||
|
||||
Host deuxfleurs
|
||||
User adrien
|
||||
Hostname deuxfleurs.fr
|
||||
# If you don't use the default ~/.ssh/id_rsa to connect to Deuxfleurs
|
||||
IdentityFile <some_key_path>
|
||||
PubKeyAuthentication yes
|
||||
ForwardAgent No
|
||||
LocalForward 1389 bottin2.service.2.cluster.deuxfleurs.fr:389
|
||||
LocalForward 4646 127.0.0.1:4646
|
||||
LocalForward 5432 psql-proxy.service.2.cluster.deuxfleurs.fr:5432
|
||||
LocalForward 8082 traefik-admin.service.2.cluster.deuxfleurs.fr:8082
|
||||
LocalForward 8500 127.0.0.1:8500
|
||||
```
|
||||
|
||||
Now, to connect, do the following:
|
||||
|
||||
ssh deuxfleurs -N
|
||||
|
|
app/backup/build/backup-matrix/Dockerfile (new file): 22 lines

@@ -0,0 +1,22 @@
|
|||
FROM golang:buster as builder
|
||||
|
||||
WORKDIR /root
|
||||
RUN git clone https://filippo.io/age && cd age/cmd/age && go build -o age .
|
||||
|
||||
FROM amd64/debian:buster
|
||||
|
||||
COPY --from=builder /root/age/cmd/age/age /usr/local/bin/age
|
||||
|
||||
RUN apt-get update && \
|
||||
apt-get -qq -y full-upgrade && \
|
||||
apt-get install -y rsync wget openssh-client postgresql-client && \
|
||||
apt-get clean && \
|
||||
rm -f /var/lib/apt/lists/*_*
|
||||
|
||||
RUN mkdir -p /root/.ssh
|
||||
WORKDIR /root
|
||||
|
||||
COPY do_backup.sh /root/do_backup.sh
|
||||
|
||||
CMD "/root/do_backup.sh"
|
||||
|
app/backup/build/backup-matrix/do_backup.sh (new executable file): 40 lines

@@ -0,0 +1,40 @@
|
|||
#!/bin/sh
|
||||
|
||||
set -x -e
|
||||
|
||||
cd /root
|
||||
|
||||
chmod 0600 .ssh/id_ed25519
|
||||
|
||||
cat > .ssh/config <<EOF
|
||||
Host backuphost
|
||||
HostName $TARGET_SSH_HOST
|
||||
Port $TARGET_SSH_PORT
|
||||
User $TARGET_SSH_USER
|
||||
EOF
|
||||
|
||||
echo "export sql"
|
||||
export PGPASSWORD=$REPL_PSQL_PWD
|
||||
pg_basebackup \
|
||||
--pgdata=- \
|
||||
--format=tar \
|
||||
--max-rate=1M \
|
||||
--no-slot \
|
||||
--wal-method=none \
|
||||
--gzip \
|
||||
--compress=8 \
|
||||
--checkpoint=spread \
|
||||
--progress \
|
||||
--verbose \
|
||||
--status-interval=10 \
|
||||
--username=$REPL_PSQL_USER \
|
||||
--port=5432 \
|
||||
--host=psql-proxy.service.2.cluster.deuxfleurs.fr | \
|
||||
age -r "$(cat /root/.ssh/id_ed25519.pub)" | \
|
||||
ssh backuphost "cat > $TARGET_SSH_DIR/matrix/db-$(date --iso-8601=minute).gz.age"
|
||||
|
||||
MATRIX_MEDIA="/mnt/glusterfs/chat/matrix/synapse/media"
|
||||
echo "export local_content"
|
||||
tar -vzcf - ${MATRIX_MEDIA} | \
|
||||
age -r "$(cat /root/.ssh/id_ed25519.pub)" | \
|
||||
ssh backuphost "cat > $TARGET_SSH_DIR/matrix/media-$(date --iso-8601=minute).gz.age"
|
app/backup/build/backup-psql/.gitignore (vendored): 1 change

@@ -1 +0,0 @@
|
|||
result
|
|
@ -1,8 +0,0 @@
|
|||
## Build
|
||||
|
||||
```bash
|
||||
docker load < $(nix-build docker.nix)
|
||||
docker push superboum/backup-psql:???
|
||||
```
|
||||
|
||||
|
|
@ -1,106 +0,0 @@
|
|||
#!/usr/bin/env python3
|
||||
import shutil,sys,os,datetime,minio,subprocess
|
||||
|
||||
working_directory = "."
|
||||
if 'CACHE_DIR' in os.environ: working_directory = os.environ['CACHE_DIR']
|
||||
required_space_in_bytes = 20 * 1024 * 1024 * 1024
|
||||
bucket = os.environ['AWS_BUCKET']
|
||||
key = os.environ['AWS_ACCESS_KEY_ID']
|
||||
secret = os.environ['AWS_SECRET_ACCESS_KEY']
|
||||
endpoint = os.environ['AWS_ENDPOINT']
|
||||
pubkey = os.environ['CRYPT_PUBLIC_KEY']
|
||||
psql_host = os.environ['PSQL_HOST']
|
||||
psql_user = os.environ['PSQL_USER']
|
||||
s3_prefix = str(datetime.datetime.now())
|
||||
files = [ "backup_manifest", "base.tar.gz", "pg_wal.tar.gz" ]
|
||||
clear_paths = [ os.path.join(working_directory, f) for f in files ]
|
||||
crypt_paths = [ os.path.join(working_directory, f) + ".age" for f in files ]
|
||||
s3_keys = [ s3_prefix + "/" + f for f in files ]
|
||||
|
||||
def abort(msg):
|
||||
for p in clear_paths + crypt_paths:
|
||||
if os.path.exists(p):
|
||||
print(f"Remove {p}")
|
||||
os.remove(p)
|
||||
|
||||
if msg: sys.exit(msg)
|
||||
else: print("success")
|
||||
|
||||
# Check we have enough space on disk
|
||||
if shutil.disk_usage(working_directory).free < required_space_in_bytes:
|
||||
abort(f"Not enough space on disk at path {working_directory} to perform a backup, aborting")
|
||||
|
||||
# Check postgres password is set
|
||||
if 'PGPASSWORD' not in os.environ:
|
||||
abort(f"You must pass postgres' password through the environment variable PGPASSWORD")
|
||||
|
||||
# Check our working directory is empty
|
||||
if len(os.listdir(working_directory)) != 0:
|
||||
abort(f"Working directory {working_directory} is not empty, aborting")
|
||||
|
||||
# Check Minio
|
||||
client = minio.Minio(endpoint, key, secret)
|
||||
if not client.bucket_exists(bucket):
|
||||
abort(f"Bucket {bucket} does not exist or its access is forbidden, aborting")
|
||||
|
||||
# Perform the backup locally
|
||||
try:
|
||||
ret = subprocess.run(["pg_basebackup",
|
||||
f"--host={psql_host}",
|
||||
f"--username={psql_user}",
|
||||
f"--pgdata={working_directory}",
|
||||
f"--format=tar",
|
||||
"--wal-method=stream",
|
||||
"--gzip",
|
||||
"--compress=6",
|
||||
"--progress",
|
||||
"--max-rate=5M",
|
||||
])
|
||||
if ret.returncode != 0:
|
||||
abort(f"pg_basebackup exited, expected return code 0, got {ret.returncode}. aborting")
|
||||
except Exception as e:
|
||||
abort(f"pg_basebackup raised exception {e}. aborting")
|
||||
|
||||
# Check that the expected files are here
|
||||
for p in clear_paths:
|
||||
print(f"Checking that {p} exists locally")
|
||||
if not os.path.exists(p):
|
||||
abort(f"File {p} expected but not found, aborting")
|
||||
|
||||
# Cipher them
|
||||
for c, e in zip(clear_paths, crypt_paths):
|
||||
print(f"Ciphering {c} to {e}")
|
||||
try:
|
||||
ret = subprocess.run(["age", "-r", pubkey, "-o", e, c])
|
||||
if ret.returncode != 0:
|
||||
abort(f"age exit code is {ret}, 0 expected. aborting")
|
||||
except Exception as e:
|
||||
abort(f"aged raised an exception. {e}. aborting")
|
||||
|
||||
# Upload the backup to S3
|
||||
for p, k in zip(crypt_paths, s3_keys):
|
||||
try:
|
||||
print(f"Uploading {p} to {k}")
|
||||
result = client.fput_object(bucket, k, p)
|
||||
print(
|
||||
"created {0} object; etag: {1}, version-id: {2}".format(
|
||||
result.object_name, result.etag, result.version_id,
|
||||
),
|
||||
)
|
||||
except Exception as e:
|
||||
abort(f"Exception {e} occured while upload {p}. aborting")
|
||||
|
||||
# Check that the files have been uploaded
|
||||
for k in s3_keys:
|
||||
try:
|
||||
print(f"Checking that {k} exists remotely")
|
||||
result = client.stat_object(bucket, k)
|
||||
print(
|
||||
"last-modified: {0}, size: {1}".format(
|
||||
result.last_modified, result.size,
|
||||
),
|
||||
)
|
||||
except Exception as e:
|
||||
abort(f"{k} not found on S3. {e}. aborting")
|
||||
|
||||
abort(None)
|
|
@ -1,8 +0,0 @@
|
|||
{
|
||||
pkgsSrc = fetchTarball {
|
||||
# Latest commit on https://github.com/NixOS/nixpkgs/tree/nixos-21.11
|
||||
# As of 2022-04-15
|
||||
url ="https://github.com/NixOS/nixpkgs/archive/2f06b87f64bc06229e05045853e0876666e1b023.tar.gz";
|
||||
sha256 = "sha256:1d7zg96xw4qsqh7c89pgha9wkq3rbi9as3k3d88jlxy2z0ns0cy2";
|
||||
};
|
||||
}
|
|
@ -1,37 +0,0 @@
|
|||
let
|
||||
common = import ./common.nix;
|
||||
pkgs = import common.pkgsSrc {};
|
||||
python-with-my-packages = pkgs.python3.withPackages (p: with p; [
|
||||
minio
|
||||
]);
|
||||
in
|
||||
pkgs.stdenv.mkDerivation {
|
||||
name = "backup-psql";
|
||||
src = pkgs.lib.sourceFilesBySuffices ./. [ ".py" ];
|
||||
|
||||
buildInputs = [
|
||||
python-with-my-packages
|
||||
pkgs.age
|
||||
pkgs.postgresql_14
|
||||
];
|
||||
|
||||
buildPhase = ''
|
||||
cat > backup-psql <<EOF
|
||||
#!${pkgs.bash}/bin/bash
|
||||
|
||||
export PYTHONPATH=${python-with-my-packages}/${python-with-my-packages.sitePackages}
|
||||
export PATH=${python-with-my-packages}/bin:${pkgs.age}/bin:${pkgs.postgresql_14}/bin
|
||||
|
||||
${python-with-my-packages}/bin/python3 $out/lib/backup-psql.py
|
||||
EOF
|
||||
|
||||
chmod +x backup-psql
|
||||
'';
|
||||
|
||||
installPhase = ''
|
||||
mkdir -p $out/{bin,lib}
|
||||
cp *.py $out/lib/backup-psql.py
|
||||
cp backup-psql $out/bin/backup-psql
|
||||
'';
|
||||
}
|
||||
|
|
@ -1,11 +0,0 @@
|
|||
let
|
||||
common = import ./common.nix;
|
||||
app = import ./default.nix;
|
||||
pkgs = import common.pkgsSrc {};
|
||||
in
|
||||
pkgs.dockerTools.buildImage {
|
||||
name = "superboum/backup-psql-docker";
|
||||
config = {
|
||||
Cmd = [ "${app}/bin/backup-psql" ];
|
||||
};
|
||||
}
|
|
@ -1,171 +0,0 @@
|
|||
job "backup_daily" {
|
||||
datacenters = ["dc1"]
|
||||
type = "batch"
|
||||
|
||||
priority = "60"
|
||||
|
||||
periodic {
|
||||
cron = "@daily"
|
||||
// Do not allow overlapping runs.
|
||||
prohibit_overlap = true
|
||||
}
|
||||
|
||||
group "backup-dovecot" {
|
||||
constraint {
|
||||
attribute = "${attr.unique.hostname}"
|
||||
operator = "="
|
||||
value = "digitale"
|
||||
}
|
||||
|
||||
task "main" {
|
||||
driver = "docker"
|
||||
|
||||
config {
|
||||
image = "restic/restic:0.12.1"
|
||||
entrypoint = [ "/bin/sh", "-c" ]
|
||||
args = [ "restic backup /mail && restic forget --keep-within 1m1d --keep-within-weekly 3m --keep-within-monthly 1y && restic prune --max-unused 50% --max-repack-size 2G && restic check" ]
|
||||
volumes = [
|
||||
"/mnt/ssd/mail:/mail"
|
||||
]
|
||||
}
|
||||
|
||||
template {
|
||||
data = <<EOH
|
||||
AWS_ACCESS_KEY_ID={{ key "secrets/email/dovecot/backup_aws_access_key_id" }}
|
||||
AWS_SECRET_ACCESS_KEY={{ key "secrets/email/dovecot/backup_aws_secret_access_key" }}
|
||||
RESTIC_REPOSITORY={{ key "secrets/email/dovecot/backup_restic_repository" }}
|
||||
RESTIC_PASSWORD={{ key "secrets/email/dovecot/backup_restic_password" }}
|
||||
EOH
|
||||
|
||||
destination = "secrets/env_vars"
|
||||
env = true
|
||||
}
|
||||
|
||||
resources {
|
||||
cpu = 500
|
||||
memory = 200
|
||||
}
|
||||
|
||||
restart {
|
||||
attempts = 2
|
||||
interval = "30m"
|
||||
delay = "15s"
|
||||
mode = "fail"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
group "backup-plume" {
|
||||
constraint {
|
||||
attribute = "${attr.unique.hostname}"
|
||||
operator = "="
|
||||
value = "digitale"
|
||||
}
|
||||
|
||||
task "main" {
|
||||
driver = "docker"
|
||||
|
||||
config {
|
||||
image = "restic/restic:0.12.1"
|
||||
entrypoint = [ "/bin/sh", "-c" ]
|
||||
args = [ "restic backup /plume && restic forget --keep-within 1m1d --keep-within-weekly 3m --keep-within-monthly 1y && restic prune --max-unused 50% --max-repack-size 2G && restic check" ]
|
||||
volumes = [
|
||||
"/mnt/ssd/plume/media:/plume"
|
||||
]
|
||||
}
|
||||
|
||||
template {
|
||||
data = <<EOH
|
||||
AWS_ACCESS_KEY_ID={{ key "secrets/plume/backup_aws_access_key_id" }}
|
||||
AWS_SECRET_ACCESS_KEY={{ key "secrets/plume/backup_aws_secret_access_key" }}
|
||||
RESTIC_REPOSITORY={{ key "secrets/plume/backup_restic_repository" }}
|
||||
RESTIC_PASSWORD={{ key "secrets/plume/backup_restic_password" }}
|
||||
EOH
|
||||
|
||||
destination = "secrets/env_vars"
|
||||
env = true
|
||||
}
|
||||
|
||||
resources {
|
||||
cpu = 500
|
||||
memory = 200
|
||||
}
|
||||
|
||||
restart {
|
||||
attempts = 2
|
||||
interval = "30m"
|
||||
delay = "15s"
|
||||
mode = "fail"
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
group "backup-consul" {
|
||||
task "consul-kv-export" {
|
||||
driver = "docker"
|
||||
|
||||
lifecycle {
|
||||
hook = "prestart"
|
||||
sidecar = false
|
||||
}
|
||||
|
||||
config {
|
||||
image = "consul:1.11.2"
|
||||
network_mode = "host"
|
||||
entrypoint = [ "/bin/sh", "-c" ]
|
||||
args = [ "/bin/consul kv export > $NOMAD_ALLOC_DIR/consul.json" ]
|
||||
}
|
||||
|
||||
env {
|
||||
CONSUL_HTTP_ADDR = "http://consul.service.2.cluster.deuxfleurs.fr:8500"
|
||||
}
|
||||
|
||||
resources {
|
||||
cpu = 200
|
||||
memory = 200
|
||||
}
|
||||
|
||||
restart {
|
||||
attempts = 2
|
||||
interval = "30m"
|
||||
delay = "15s"
|
||||
mode = "fail"
|
||||
}
|
||||
}
|
||||
|
||||
task "restic-backup" {
|
||||
driver = "docker"
|
||||
|
||||
config {
|
||||
image = "restic/restic:0.12.1"
|
||||
entrypoint = [ "/bin/sh", "-c" ]
|
||||
args = [ "restic backup $NOMAD_ALLOC_DIR/consul.json && restic forget --keep-within 1m1d --keep-within-weekly 3m --keep-within-monthly 1y && restic prune --max-unused 50% --max-repack-size 2G && restic check" ]
|
||||
}
|
||||
|
||||
|
||||
template {
|
||||
data = <<EOH
|
||||
AWS_ACCESS_KEY_ID={{ key "secrets/backup/consul/backup_aws_access_key_id" }}
|
||||
AWS_SECRET_ACCESS_KEY={{ key "secrets/backup/consul/backup_aws_secret_access_key" }}
|
||||
RESTIC_REPOSITORY={{ key "secrets/backup/consul/backup_restic_repository" }}
|
||||
RESTIC_PASSWORD={{ key "secrets/backup/consul/backup_restic_password" }}
|
||||
EOH
|
||||
|
||||
destination = "secrets/env_vars"
|
||||
env = true
|
||||
}
|
||||
|
||||
resources {
|
||||
cpu = 200
|
||||
memory = 200
|
||||
}
|
||||
|
||||
restart {
|
||||
attempts = 2
|
||||
interval = "30m"
|
||||
delay = "15s"
|
||||
mode = "fail"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
app/backup/deploy/backup-matrix.hcl (new file): 62 lines

@@ -0,0 +1,62 @@
|
|||
job "backup_manual_matrix" {
|
||||
datacenters = ["dc1"]
|
||||
|
||||
type = "batch"
|
||||
|
||||
task "backup-matrix" {
|
||||
driver = "docker"
|
||||
|
||||
config {
|
||||
image = "superboum/backup_matrix:4"
|
||||
volumes = [
|
||||
"secrets/id_ed25519:/root/.ssh/id_ed25519",
|
||||
"secrets/id_ed25519.pub:/root/.ssh/id_ed25519.pub",
|
||||
"secrets/known_hosts:/root/.ssh/known_hosts",
|
||||
"/mnt/glusterfs/chat/matrix/synapse/media:/mnt/glusterfs/chat/matrix/synapse/media"
|
||||
]
|
||||
network_mode = "host"
|
||||
}
|
||||
|
||||
env {
|
||||
CONSUL_HTTP_ADDR = "http://consul.service.2.cluster.deuxfleurs.fr:8500"
|
||||
}
|
||||
|
||||
template {
|
||||
data = <<EOH
|
||||
TARGET_SSH_USER={{ key "secrets/backup/target_ssh_user" }}
|
||||
TARGET_SSH_PORT={{ key "secrets/backup/target_ssh_port" }}
|
||||
TARGET_SSH_HOST={{ key "secrets/backup/target_ssh_host" }}
|
||||
TARGET_SSH_DIR={{ key "secrets/backup/target_ssh_dir" }}
|
||||
REPL_PSQL_USER={{ key "secrets/postgres/keeper/pg_repl_username" }}
|
||||
REPL_PSQL_PWD={{ key "secrets/postgres/keeper/pg_repl_pwd" }}
|
||||
EOH
|
||||
|
||||
destination = "secrets/env_vars"
|
||||
env = true
|
||||
}
|
||||
|
||||
template {
|
||||
data = "{{ key \"secrets/backup/id_ed25519\" }}"
|
||||
destination = "secrets/id_ed25519"
|
||||
}
|
||||
template {
|
||||
data = "{{ key \"secrets/backup/id_ed25519.pub\" }}"
|
||||
destination = "secrets/id_ed25519.pub"
|
||||
}
|
||||
template {
|
||||
data = "{{ key \"secrets/backup/target_ssh_fingerprint\" }}"
|
||||
destination = "secrets/known_hosts"
|
||||
}
|
||||
|
||||
resources {
|
||||
memory = 200
|
||||
}
|
||||
|
||||
restart {
|
||||
attempts = 2
|
||||
interval = "30m"
|
||||
delay = "15s"
|
||||
mode = "fail"
|
||||
}
|
||||
}
|
||||
}
|
|
@ -1,55 +0,0 @@
|
|||
job "backup_weekly" {
|
||||
datacenters = ["dc1"]
|
||||
type = "batch"
|
||||
|
||||
priority = "60"
|
||||
|
||||
periodic {
|
||||
cron = "@weekly"
|
||||
// Do not allow overlapping runs.
|
||||
prohibit_overlap = true
|
||||
}
|
||||
|
||||
group "backup-psql" {
|
||||
task "main" {
|
||||
driver = "docker"
|
||||
|
||||
config {
|
||||
image = "superboum/backup-psql-docker:gyr3aqgmhs0hxj0j9hkrdmm1m07i8za2"
|
||||
volumes = [
|
||||
// Mount a cache on the hard disk to avoid filling the SSD
|
||||
"/mnt/storage/tmp_bckp_psql:/mnt/cache"
|
||||
]
|
||||
}
|
||||
|
||||
template {
|
||||
data = <<EOH
|
||||
CACHE_DIR=/mnt/cache
|
||||
AWS_BUCKET=backups-pgbasebackup
|
||||
AWS_ENDPOINT=s3.deuxfleurs.shirokumo.net
|
||||
AWS_ACCESS_KEY_ID={{ key "secrets/backup/psql/aws_access_key_id" }}
|
||||
AWS_SECRET_ACCESS_KEY={{ key "secrets/backup/psql/aws_secret_access_key" }}
|
||||
CRYPT_PUBLIC_KEY={{ key "secrets/backup/psql/crypt_public_key" }}
|
||||
PSQL_HOST=psql-proxy.service.2.cluster.deuxfleurs.fr
|
||||
PSQL_USER={{ key "secrets/postgres/keeper/pg_repl_username" }}
|
||||
PGPASSWORD={{ key "secrets/postgres/keeper/pg_repl_pwd" }}
|
||||
EOH
|
||||
|
||||
destination = "secrets/env_vars"
|
||||
env = true
|
||||
}
|
||||
|
||||
resources {
|
||||
cpu = 200
|
||||
memory = 200
|
||||
}
|
||||
|
||||
restart {
|
||||
attempts = 2
|
||||
interval = "30m"
|
||||
delay = "15s"
|
||||
mode = "fail"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
|
@ -1 +0,0 @@
|
|||
USER Backup AWS access key ID
|
|
@ -1 +0,0 @@
|
|||
USER Backup AWS secret access key
|
|
@ -1 +0,0 @@
|
|||
USER Restic password to encrypt backups
|
|
@ -1 +0,0 @@
|
|||
USER Restic repository, e.g. s3:https://s3.garage.tld
|
|
@ -1 +0,0 @@
|
|||
USER Minio access key
|
|
@ -1 +0,0 @@
|
|||
USER Minio secret key
|
|
@ -1 +0,0 @@
|
|||
USER A private key to decrypt backups from age
|
|
@ -1 +0,0 @@
|
|||
USER A public key to encrypt backups with age
|
|
@ -1,83 +0,0 @@
|
|||
job "bagage" {
|
||||
datacenters = ["dc1"]
|
||||
type = "service"
|
||||
priority = 90
|
||||
|
||||
constraint {
|
||||
attribute = "${attr.cpu.arch}"
|
||||
value = "amd64"
|
||||
}
|
||||
|
||||
group "main" {
|
||||
count = 1
|
||||
|
||||
network {
|
||||
port "web_port" { to = 8080 }
|
||||
port "ssh_port" {
|
||||
static = 2222
|
||||
to = 2222
|
||||
}
|
||||
}
|
||||
|
||||
task "server" {
|
||||
driver = "docker"
|
||||
config {
|
||||
image = "superboum/amd64_bagage:v11"
|
||||
readonly_rootfs = false
|
||||
volumes = [
|
||||
"secrets/id_rsa:/id_rsa"
|
||||
]
|
||||
ports = [ "web_port", "ssh_port" ]
|
||||
}
|
||||
|
||||
env {
|
||||
BAGAGE_LDAP_ENDPOINT = "bottin2.service.2.cluster.deuxfleurs.fr:389"
|
||||
}
|
||||
|
||||
resources {
|
||||
memory = 500
|
||||
}
|
||||
|
||||
template {
|
||||
data = "{{ key \"secrets/bagage/id_rsa\" }}"
|
||||
destination = "secrets/id_rsa"
|
||||
}
|
||||
|
||||
service {
|
||||
name = "bagage-ssh"
|
||||
port = "ssh_port"
|
||||
address_mode = "host"
|
||||
tags = [
|
||||
"bagage",
|
||||
"(diplonat (tcp_port 2222))"
|
||||
]
|
||||
}
|
||||
|
||||
service {
|
||||
name = "bagage-webdav"
|
||||
tags = [
|
||||
"bagage",
|
||||
"traefik.enable=true",
|
||||
"traefik.frontend.entryPoints=https,http",
|
||||
"traefik.frontend.rule=Host:bagage.deuxfleurs.fr",
|
||||
"tricot bagage.deuxfleurs.fr",
|
||||
]
|
||||
port = "web_port"
|
||||
address_mode = "host"
|
||||
check {
|
||||
type = "tcp"
|
||||
port = "web_port"
|
||||
address_mode = "host"
|
||||
interval = "60s"
|
||||
timeout = "5s"
|
||||
check_restart {
|
||||
limit = 3
|
||||
grace = "90s"
|
||||
ignore_warnings = false
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
@ -1 +0,0 @@
|
|||
CMD ssh-keygen -q -f >(cat) -N "" <<< y 2>/dev/null 1>&2 ; true
|
|
@ -1,5 +1,5 @@
|
|||
job "core" {
|
||||
datacenters = ["dc1", "neptune"]
|
||||
datacenters = ["dc1"]
|
||||
type = "system"
|
||||
priority = 90
|
||||
|
||||
|
@ -18,21 +18,15 @@ job "core" {
|
|||
driver = "docker"
|
||||
|
||||
config {
|
||||
image = "lxpz/amd64_diplonat:3"
|
||||
image = "darkgallium/amd64_diplonat:v2"
|
||||
network_mode = "host"
|
||||
readonly_rootfs = true
|
||||
privileged = true
|
||||
}
|
||||
|
||||
restart {
|
||||
interval = "30m"
|
||||
attempts = 2
|
||||
delay = "15s"
|
||||
mode = "delay"
|
||||
}
|
||||
|
||||
template {
|
||||
data = <<EOH
|
||||
DIPLONAT_PRIVATE_IP={{ env "attr.unique.network.ip-address" }}
|
||||
DIPLONAT_REFRESH_TIME=60
|
||||
DIPLONAT_EXPIRATION_TIME=300
|
||||
DIPLONAT_CONSUL_NODE_NAME={{ env "attr.unique.hostname" }}
|
||||
|
|
|
@ -1,2 +0,0 @@
|
|||
docker load < $(nix-build docker.nix)
|
||||
docker push superboum/cryptpad:???
|
|
@ -1,8 +0,0 @@
|
|||
{
|
||||
pkgsSrc = fetchTarball {
|
||||
# Latest commit on https://github.com/NixOS/nixpkgs/tree/nixos-21.11
|
||||
# As of 2022-04-15
|
||||
url ="https://github.com/NixOS/nixpkgs/archive/2f06b87f64bc06229e05045853e0876666e1b023.tar.gz";
|
||||
sha256 = "sha256:1d7zg96xw4qsqh7c89pgha9wkq3rbi9as3k3d88jlxy2z0ns0cy2";
|
||||
};
|
||||
}
|
|
@ -1,10 +0,0 @@
|
|||
let
|
||||
common = import ./common.nix;
|
||||
pkgs = import common.pkgsSrc {};
|
||||
in
|
||||
pkgs.dockerTools.buildImage {
|
||||
name = "superboum/cryptpad";
|
||||
config = {
|
||||
Cmd = [ "${pkgs.cryptpad}/bin/cryptpad" ];
|
||||
};
|
||||
}
|
|
@ -1,283 +0,0 @@
|
|||
/* globals module */
|
||||
|
||||
/* DISCLAIMER:
|
||||
|
||||
There are two recommended methods of running a CryptPad instance:
|
||||
|
||||
1. Using a standalone nodejs server without HTTPS (suitable for local development)
|
||||
2. Using NGINX to serve static assets and to handle HTTPS for API server's websocket traffic
|
||||
|
||||
We do not officially recommend or support Apache, Docker, Kubernetes, Traefik, or any other configuration.
|
||||
Support requests for such setups should be directed to their authors.
|
||||
|
||||
If you're having difficulty configuring your instance
|
||||
we suggest that you join the project's IRC/Matrix channel.
|
||||
|
||||
If you don't have any difficulty configuring your instance and you'd like to
|
||||
support us for the work that went into making it pain-free we are quite happy
|
||||
to accept donations via our opencollective page: https://opencollective.com/cryptpad
|
||||
|
||||
*/
|
||||
module.exports = {
|
||||
/* CryptPad is designed to serve its content over two domains.
|
||||
* Account passwords and cryptographic content is handled on the 'main' domain,
|
||||
* while the user interface is loaded on a 'sandbox' domain
|
||||
* which can only access information which the main domain willingly shares.
|
||||
*
|
||||
* In the event of an XSS vulnerability in the UI (that's bad)
|
||||
* this system prevents attackers from gaining access to your account (that's good).
|
||||
*
|
||||
* Most problems with new instances are related to this system blocking access
|
||||
* because of incorrectly configured sandboxes. If you only see a white screen
|
||||
* when you try to load CryptPad, this is probably the cause.
|
||||
*
|
||||
* PLEASE READ THE FOLLOWING COMMENTS CAREFULLY.
|
||||
*
|
||||
*/
|
||||
|
||||
/* httpUnsafeOrigin is the URL that clients will enter to load your instance.
|
||||
* Any other URL that somehow points to your instance is supposed to be blocked.
|
||||
* The default provided below assumes you are loading CryptPad from a server
|
||||
* which is running on the same machine, using port 3000.
|
||||
*
|
||||
* In a production instance this should be available ONLY over HTTPS
|
||||
* using the default port for HTTPS (443) ie. https://cryptpad.fr
|
||||
* In such a case this should be also handled by NGINX, as documented in
|
||||
* cryptpad/docs/example.nginx.conf (see the $main_domain variable)
|
||||
*
|
||||
*/
|
||||
httpUnsafeOrigin: 'http://localhost:3000',
|
||||
|
||||
/* httpSafeOrigin is the URL that is used for the 'sandbox' described above.
|
||||
* If you're testing or developing with CryptPad on your local machine then
|
||||
* it is appropriate to leave this blank. The default behaviour is to serve
|
||||
* the main domain over port 3000 and to serve the sandbox content over port 3001.
|
||||
*
|
||||
* This is not appropriate in a production environment where invasive networks
|
||||
* may filter traffic going over abnormal ports.
|
||||
* To correctly configure your production instance you must provide a URL
|
||||
* with a different domain (a subdomain is sufficient).
|
||||
* It will be used to load the UI in our 'sandbox' system.
|
||||
*
|
||||
* This value corresponds to the $sandbox_domain variable
|
||||
* in the example nginx file.
|
||||
*
|
||||
* Note that in order for the sandboxing system to be effective
|
||||
* httpSafeOrigin must be different from httpUnsafeOrigin.
|
||||
*
|
||||
* CUSTOMIZE AND UNCOMMENT THIS FOR PRODUCTION INSTALLATIONS.
|
||||
*/
|
||||
// httpSafeOrigin: "https://some-other-domain.xyz",
|
||||
|
||||
/* httpAddress specifies the address on which the nodejs server
|
||||
* should be accessible. By default it will listen on 127.0.0.1
|
||||
* (IPv4 localhost on most systems). If you want it to listen on
|
||||
* all addresses, including IPv6, set this to '::'.
|
||||
*
|
||||
*/
|
||||
httpAddress: '::',
|
||||
|
||||
/* httpPort specifies on which port the nodejs server should listen.
|
||||
* By default it will serve content over port 3000, which is suitable
|
||||
* for both local development and for use with the provided nginx example,
|
||||
* which will proxy websocket traffic to your node server.
|
||||
*
|
||||
*/
|
||||
//httpPort: 3000,
|
||||
|
||||
/* httpSafePort allows you to specify an alternative port from which
|
||||
* the node process should serve sandboxed assets. The default value is
|
||||
* that of your httpPort + 1. You probably don't need to change this.
|
||||
*
|
||||
*/
|
||||
//httpSafePort: 3001,
|
||||
|
||||
/* CryptPad will launch a child process for every core available
|
||||
* in order to perform CPU-intensive tasks in parallel.
|
||||
* Some host environments may have a very large number of cores available
|
||||
* or you may want to limit how much computing power CryptPad can take.
|
||||
* If so, set 'maxWorkers' to a positive integer.
|
||||
*/
|
||||
// maxWorkers: 4,
|
||||
|
||||
/* =====================
|
||||
* Admin
|
||||
* ===================== */
|
||||
|
||||
/*
|
||||
* CryptPad contains an administration panel. Its access is restricted to specific
|
||||
* users using the following list.
|
||||
* To give access to the admin panel to a user account, just add their public signing
|
||||
* key, which can be found on the settings page for registered users.
|
||||
* Entries should be strings separated by a comma.
|
||||
*/
|
||||
/*
|
||||
adminKeys: [
|
||||
//"[cryptpad-user1@my.awesome.website/YZgXQxKR0Rcb6r6CmxHPdAGLVludrAF2lEnkbx1vVOo=]",
|
||||
],
|
||||
*/
|
||||
|
||||
/* =====================
|
||||
* STORAGE
|
||||
* ===================== */
|
||||
|
||||
/* Pads that are not 'pinned' by any registered user can be set to expire
|
||||
* after a configurable number of days of inactivity (default 90 days).
|
||||
* The value can be changed or set to false to remove expiration.
|
||||
* Expired pads can then be removed using a cron job calling the
|
||||
* `evict-inactive.js` script with node
|
||||
*
|
||||
* defaults to 90 days if nothing is provided
|
||||
*/
|
||||
//inactiveTime: 90, // days
|
||||
|
||||
/* CryptPad archives some data instead of deleting it outright.
|
||||
* This archived data still takes up space and so you'll probably still want to
|
||||
* remove these files after a brief period.
|
||||
*
|
||||
* cryptpad/scripts/evict-inactive.js is intended to be run daily
|
||||
* from a crontab or similar scheduling service.
|
||||
*
|
||||
* The intent with this feature is to provide a safety net in case of accidental
|
||||
* deletion. Set this value to the number of days you'd like to retain
|
||||
* archived data before it's removed permanently.
|
||||
*
|
||||
* defaults to 15 days if nothing is provided
|
||||
*/
|
||||
//archiveRetentionTime: 15,
|
||||
|
||||
/* It's possible to configure your instance to remove data
|
||||
* stored on behalf of inactive accounts. Set 'accountRetentionTime'
|
||||
* to the number of days an account can remain idle before its
|
||||
* documents and other account data is removed.
|
||||
*
|
||||
* Leave this value commented out to preserve all data stored
|
||||
* by user accounts regardless of inactivity.
|
||||
*/
|
||||
//accountRetentionTime: 365,
|
||||
|
||||
/* Starting with CryptPad 3.23.0, the server automatically runs
|
||||
* the script responsible for removing inactive data according to
|
||||
* your configured definition of inactivity. Set this value to `true`
|
||||
* if you prefer not to remove inactive data, or if you prefer to
|
||||
* do so manually using `scripts/evict-inactive.js`.
|
||||
*/
|
||||
//disableIntegratedEviction: true,
|
||||
|
||||
|
||||
/* Max Upload Size (bytes)
|
||||
* this sets the maximum size of any one file uploaded to the server.
|
||||
* anything larger than this size will be rejected
|
||||
* defaults to 20MB if no value is provided
|
||||
*/
|
||||
//maxUploadSize: 20 * 1024 * 1024,
|
||||
|
||||
/* Users with premium accounts (those with a plan included in their customLimit)
|
||||
* can benefit from an increased upload size limit. By default they are restricted to the same
|
||||
* upload size as any other registered user.
|
||||
*
|
||||
*/
|
||||
//premiumUploadSize: 100 * 1024 * 1024,
|
||||
|
||||
/* =====================
|
||||
* DATABASE VOLUMES
|
||||
* ===================== */
|
||||
|
||||
/*
|
||||
* CryptPad stores each document in an individual file on your hard drive.
|
||||
* Specify a directory where files should be stored.
|
||||
* It will be created automatically if it does not already exist.
|
||||
*/
|
||||
filePath: './root/tmp/mut/datastore/',
|
||||
|
||||
/* CryptPad offers the ability to archive data for a configurable period
|
||||
* before deleting it, allowing a means of recovering data in the event
|
||||
* that it was deleted accidentally.
|
||||
*
|
||||
* To set the location of this archive directory to a custom value, change
|
||||
* the path below:
|
||||
*/
|
||||
archivePath: './root/tmp/mut/data/archive',
|
||||
|
||||
/* CryptPad allows logged in users to request that particular documents be
|
||||
* stored by the server indefinitely. This is called 'pinning'.
|
||||
* Pin requests are stored in a pin-store. The location of this store is
|
||||
* defined here.
|
||||
*/
|
||||
pinPath: './root/tmp/mut/data/pins',
|
||||
|
||||
/* if you would like the list of scheduled tasks to be stored in
|
||||
a custom location, change the path below:
|
||||
*/
|
||||
taskPath: './root/tmp/mut/data/tasks',
|
||||
|
||||
/* if you would like users' authenticated blocks to be stored in
|
||||
a custom location, change the path below:
|
||||
*/
|
||||
blockPath: './root/tmp/mut/block',
|
||||
|
||||
/* CryptPad allows logged in users to upload encrypted files. Files/blobs
|
||||
* are stored in a 'blob-store'. Set its location here.
|
||||
*/
|
||||
blobPath: './root/tmp/mut/blob',
|
||||
|
||||
/* CryptPad stores incomplete blobs in a 'staging' area until they are
|
||||
* fully uploaded. Set its location here.
|
||||
*/
|
||||
blobStagingPath: './root/tmp/mut/data/blobstage',
|
||||
|
||||
decreePath: './root/tmp/mut/data/decrees',
|
||||
|
||||
/* CryptPad supports logging events directly to the disk in a 'logs' directory
|
||||
* Set its location here, or set it to false (or nothing) if you'd rather not log
|
||||
*/
|
||||
logPath: './root/tmp/mut/data/logs',
|
||||
|
||||
/* =====================
|
||||
* Debugging
|
||||
* ===================== */
|
||||
|
||||
/* CryptPad can log activity to stdout
|
||||
* This may be useful for debugging
|
||||
*/
|
||||
logToStdout: true,
|
||||
|
||||
/* CryptPad can be configured to log more or less
|
||||
* the various settings are listed below by order of importance
|
||||
*
|
||||
* silly, verbose, debug, feedback, info, warn, error
|
||||
*
|
||||
* Choose the least important level of logging you wish to see.
|
||||
* For example, a 'silly' logLevel will display everything,
|
||||
* while 'info' will display 'info', 'warn', and 'error' logs
|
||||
*
|
||||
* This will affect both logging to the console and the disk.
|
||||
*/
|
||||
logLevel: 'debug',
|
||||
|
||||
/* clients can use the /settings/ app to opt out of usage feedback
|
||||
* which informs the server of things like how much each app is being
|
||||
* used, and whether certain clientside features are supported by
|
||||
* the client's browser. The intent is to provide feedback to the admin
|
||||
* such that the service can be improved. Enable this with `true`
|
||||
* and ignore feedback with `false` or by commenting the attribute
|
||||
*
|
||||
* You will need to set your logLevel to include 'feedback'. Set this
|
||||
* to false if you'd like to exclude feedback from your logs.
|
||||
*/
|
||||
logFeedback: false,
|
||||
|
||||
/* CryptPad supports verbose logging
|
||||
* (false by default)
|
||||
*/
|
||||
verbose: true,
|
||||
|
||||
/* Surplus information:
|
||||
*
|
||||
* 'installMethod' is included in server telemetry to voluntarily
|
||||
* indicate how many instances are using unofficial installation methods
|
||||
* such as Docker.
|
||||
*
|
||||
*/
|
||||
installMethod: 'unspecified',
|
||||
};
|
|
@ -4,7 +4,7 @@
|
|||
"consul_host": "http://consul.service.2.cluster.deuxfleurs.fr:8500",
|
||||
"log_level": "debug",
|
||||
"acl": [
|
||||
"*,dc=deuxfleurs,dc=fr::read:*:* !userpassword !user_secret !alternate_user_secrets !garage_s3_secret_key",
|
||||
"*,dc=deuxfleurs,dc=fr::read:*:* !userpassword",
|
||||
"*::read modify:SELF:*",
|
||||
"ANONYMOUS::bind:*,ou=users,dc=deuxfleurs,dc=fr:",
|
||||
"ANONYMOUS::bind:cn=admin,dc=deuxfleurs,dc=fr:",
|
||||
|
@ -20,6 +20,10 @@
|
|||
|
||||
"*:cn=asso_deuxfleurs,ou=groups,dc=deuxfleurs,dc=fr:modifyAdd:cn=email,ou=groups,dc=deuxfleurs,dc=fr:*",
|
||||
"*,ou=invitations,dc=deuxfleurs,dc=fr::modifyAdd:cn=email,ou=groups,dc=deuxfleurs,dc=fr:*",
|
||||
"*:cn=asso_deuxfleurs,ou=groups,dc=deuxfleurs,dc=fr:modifyAdd:cn=seafile,ou=groups,dc=deuxfleurs,dc=fr:*",
|
||||
"*,ou=invitations,dc=deuxfleurs,dc=fr::modifyAdd:cn=seafile,ou=groups,dc=deuxfleurs,dc=fr:*",
|
||||
"*:cn=asso_deuxfleurs,ou=groups,dc=deuxfleurs,dc=fr:modifyAdd:cn=nextcloud,ou=groups,dc=deuxfleurs,dc=fr:*",
|
||||
"*,ou=invitations,dc=deuxfleurs,dc=fr::modifyAdd:cn=seafile,ou=nextcloud,dc=deuxfleurs,dc=fr:*",
|
||||
|
||||
"cn=admin,dc=deuxfleurs,dc=fr::read add modify delete:*:*",
|
||||
"*:cn=admin,ou=groups,dc=deuxfleurs,dc=fr:read add modify delete:*:*"
|
||||
|
|
|
@ -12,7 +12,9 @@
|
|||
"invitation_name_attr": "cn",
|
||||
"invited_mail_format": "{}@deuxfleurs.fr",
|
||||
"invited_auto_groups": [
|
||||
"cn=email,ou=groups,dc=deuxfleurs,dc=fr"
|
||||
"cn=email,ou=groups,dc=deuxfleurs,dc=fr",
|
||||
"cn=seafile,ou=groups,dc=deuxfleurs,dc=fr",
|
||||
"cn=nextcloud,ou=groups,dc=deuxfleurs,dc=fr"
|
||||
],
|
||||
|
||||
"web_address": "https://guichet.deuxfleurs.fr",
|
||||
|
@ -23,12 +25,6 @@
|
|||
|
||||
"admin_account": "cn=admin,dc=deuxfleurs,dc=fr",
|
||||
"group_can_admin": "cn=admin,ou=groups,dc=deuxfleurs,dc=fr",
|
||||
"group_can_invite": "cn=asso_deuxfleurs,ou=groups,dc=deuxfleurs,dc=fr",
|
||||
|
||||
"s3_endpoint": "garage.deuxfleurs.fr",
|
||||
"s3_access_key": "{{ key "secrets/directory/guichet/s3_access_key" | trimSpace }}",
|
||||
"s3_secret_key": "{{ key "secrets/directory/guichet/s3_secret_key" | trimSpace }}",
|
||||
"s3_region": "garage",
|
||||
"s3_bucket": "bottin-pictures"
|
||||
"group_can_invite": "cn=asso_deuxfleurs,ou=groups,dc=deuxfleurs,dc=fr"
|
||||
}
|
||||
|
||||
|
|
|
@ -21,7 +21,7 @@ job "directory" {
|
|||
task "bottin" {
|
||||
driver = "docker"
|
||||
config {
|
||||
image = "superboum/bottin_amd64:22"
|
||||
image = "lxpz/bottin_amd64:21"
|
||||
network_mode = "host"
|
||||
readonly_rootfs = true
|
||||
ports = [ "ldap_port" ]
|
||||
|
@ -59,7 +59,6 @@ job "directory" {
|
|||
}
|
||||
}
|
||||
|
||||
/*
|
||||
group "guichet" {
|
||||
count = 1
|
||||
|
||||
|
@ -70,7 +69,7 @@ job "directory" {
|
|||
task "guichet" {
|
||||
driver = "docker"
|
||||
config {
|
||||
image = "dxflrs/guichet:6y7pv4kgfsn02iijj55kf5af0rbksgrn"
|
||||
image = "lxpz/guichet_amd64:10"
|
||||
readonly_rootfs = true
|
||||
ports = [ "web_port" ]
|
||||
volumes = [
|
||||
|
@ -94,7 +93,6 @@ job "directory" {
|
|||
"traefik.enable=true",
|
||||
"traefik.frontend.entryPoints=https,http",
|
||||
"traefik.frontend.rule=Host:guichet.deuxfleurs.fr",
|
||||
"tricot guichet.deuxfleurs.fr",
|
||||
]
|
||||
port = "web_port"
|
||||
address_mode = "host"
|
||||
|
@ -112,6 +110,5 @@ job "directory" {
|
|||
}
|
||||
}
|
||||
}
|
||||
*/
|
||||
}
|
||||
|
||||
|
|
|
@ -1 +0,0 @@
|
|||
USER Garage access key for Guichet profile pictures
|
|
@ -1 +0,0 @@
|
|||
USER Garage secret key for Guichet profile pictures
|
|
@ -1 +0,0 @@
|
|||
USER SMTP password
|
|
@ -1 +0,0 @@
|
|||
USER SMTP username
|
|
@ -1,27 +1,29 @@
|
|||
version: '3.4'
|
||||
services:
|
||||
|
||||
mariadb:
|
||||
build:
|
||||
context: ./seafile/build/mariadb
|
||||
args:
|
||||
VERSION: 4 # fake for now
|
||||
image: superboum/amd64_mariadb:v4
|
||||
|
||||
# Instant Messaging
|
||||
riot:
|
||||
build:
|
||||
context: ./im/build/riotweb
|
||||
args:
|
||||
# https://github.com/vector-im/riot-web/releases
|
||||
VERSION: 1.10.15
|
||||
image: superboum/amd64_riotweb:v30
|
||||
VERSION: 1.7.24
|
||||
image: superboum/amd64_riotweb:v22
|
||||
|
||||
synapse:
|
||||
build:
|
||||
context: ./im/build/matrix-synapse
|
||||
args:
|
||||
# https://github.com/matrix-org/synapse/releases
|
||||
VERSION: 1.61.1
|
||||
# https://github.com/matrix-org/synapse-s3-storage-provider/commits/main
|
||||
# Update with the latest commit on main each time you update the synapse version
|
||||
# otherwise synapse may fail to launch due to incompatibility issues
|
||||
# see this issue for an example: https://github.com/matrix-org/synapse-s3-storage-provider/issues/64
|
||||
S3_VERSION: ffd3fa477321608e57d27644197e721965e0e858
|
||||
image: superboum/amd64_synapse:v53
|
||||
VERSION: 1.31.0
|
||||
image: superboum/amd64_synapse:v43
|
||||
|
||||
# Email
|
||||
sogo:
|
||||
|
@ -39,27 +41,22 @@ services:
|
|||
VERSION: 9bafa64b9d
|
||||
image: superboum/amd64_alps:v1
|
||||
|
||||
dovecot:
|
||||
build:
|
||||
context: ./email/build/dovecot
|
||||
image: superboum/amd64_dovecot:v6
|
||||
|
||||
# VoIP
|
||||
jitsi-meet:
|
||||
build:
|
||||
context: ./jitsi/build/jitsi-meet
|
||||
args:
|
||||
# https://github.com/jitsi/jitsi-meet
|
||||
MEET_TAG: stable/jitsi-meet_6826
|
||||
image: superboum/amd64_jitsi_meet:v5
|
||||
MEET_TAG: jitsi-meet_5463
|
||||
image: superboum/amd64_jitsi_meet:v4
|
||||
|
||||
jitsi-conference-focus:
|
||||
build:
|
||||
context: ./jitsi/build/jitsi-conference-focus
|
||||
args:
|
||||
# https://github.com/jitsi/jicofo
|
||||
JICOFO_TAG: stable/jitsi-meet_6826
|
||||
image: superboum/amd64_jitsi_conference_focus:v9
|
||||
JICOFO_TAG: jitsi-meet_5463
|
||||
image: superboum/amd64_jitsi_conference_focus:v7
|
||||
|
||||
jitsi-videobridge:
|
||||
build:
|
||||
|
@ -67,23 +64,23 @@ services:
|
|||
args:
|
||||
# https://github.com/jitsi/jitsi-videobridge
|
||||
# note: JVB is not tagged with non-stable tags
|
||||
JVB_TAG: stable/jitsi-meet_6826
|
||||
image: superboum/amd64_jitsi_videobridge:v20
|
||||
JVB_TAG: stable/jitsi-meet_5390
|
||||
image: superboum/amd64_jitsi_videobridge:v17
|
||||
|
||||
jitsi-xmpp:
|
||||
build:
|
||||
context: ./jitsi/build/jitsi-xmpp
|
||||
args:
|
||||
MEET_TAG: stable/jitsi-meet_6826
|
||||
PROSODY_VERSION: 0.11.12-1
|
||||
image: superboum/amd64_jitsi_xmpp:v10
|
||||
MEET_TAG: jitsi-meet_5463
|
||||
PROSODY_VERSION: 0.11.7-1~buster4
|
||||
image: superboum/amd64_jitsi_xmpp:v9
|
||||
|
||||
plume:
|
||||
build:
|
||||
context: ./plume/build/plume
|
||||
args:
|
||||
VERSION: 8709f6cf9f8ff7e3c5ee7ea699ee7c778e92fefc
|
||||
image: superboum/plume:v8
|
||||
VERSION: 5424f9110f8749eb7d9f01b44ac8074fc13e0e68
|
||||
image: superboum/plume:v3
|
||||
|
||||
postfix:
|
||||
build:
|
||||
|
@ -97,12 +94,18 @@ services:
|
|||
build:
|
||||
args:
|
||||
# https://github.com/sorintlab/stolon/releases
|
||||
STOLON_VERSION: 3bb7499f815f77140551eb762b200cf4557f57d3
|
||||
STOLON_VERSION: 2d0b8e516a4eaec01f3a9509cdc50a1d4ce8709c
|
||||
# https://packages.debian.org/fr/stretch/postgresql-all
|
||||
PG_VERSION: 9.6+181+deb9u3
|
||||
context: ./postgres/build/postgres
|
||||
image: superboum/amd64_postgres:v11
|
||||
image: superboum/amd64_postgres:v5
|
||||
|
||||
backup-consul:
|
||||
build:
|
||||
context: ./backup/build/backup-consul
|
||||
image: lxpz/backup_consul:12
|
||||
|
||||
backup-matrix:
|
||||
build:
|
||||
context: ./backup/build/backup-matrix
|
||||
image: superboum/backup_matrix:4
|
||||
|
|
|
@ -14,7 +14,7 @@ job "drone-ci" {
|
|||
task "drone_server" {
|
||||
driver = "docker"
|
||||
config {
|
||||
image = "drone/drone:2.12.0"
|
||||
image = "drone/drone:latest"
|
||||
ports = [ "web_port" ]
|
||||
}
|
||||
|
||||
|
@ -38,7 +38,6 @@ DRONE_S3_PATH_STYLE=true
|
|||
DRONE_DATABASE_DRIVER=postgres
|
||||
DRONE_DATABASE_DATASOURCE=postgres://{{ key "secrets/drone-ci/db_user" }}:{{ key "secrets/drone-ci/db_pass" }}@psql-proxy.service.2.cluster.deuxfleurs.fr:5432/drone?sslmode=disable
|
||||
DRONE_USER_CREATE=username:lx-admin,admin:true
|
||||
DRONE_REGISTRATION_CLOSED=true
|
||||
DRONE_LOGS_TEXT=true
|
||||
DRONE_LOGS_PRETTY=true
|
||||
DRONE_LOGS_DEBUG=true
|
||||
|
@ -60,7 +59,6 @@ EOH
|
|||
"traefik.enable=true",
|
||||
"traefik.frontend.entryPoints=https,http",
|
||||
"traefik.frontend.rule=Host:drone.deuxfleurs.fr",
|
||||
"tricot drone.deuxfleurs.fr",
|
||||
]
|
||||
port = "web_port"
|
||||
address_mode = "host"
|
||||
|
|
|
@ -1,69 +0,0 @@
|
|||
## Install Debian
|
||||
|
||||
We recommend Debian Bullseye
|
||||
|
||||
## Install Docker CE from docker.io
|
||||
|
||||
Do not use the docker engine shipped by Debian
|
||||
|
||||
Doc:
|
||||
|
||||
- https://docs.docker.com/engine/install/debian/
|
||||
- https://docs.docker.com/compose/install/
|
||||
|
||||
On a fresh install, as root:
|
||||
|
||||
```bash
|
||||
apt-get remove -y docker docker-engine docker.io containerd runc
|
||||
apt-get update
|
||||
apt-get install apt-transport-https ca-certificates curl gnupg lsb-release
|
||||
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
|
||||
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
|
||||
apt-get update
|
||||
apt-get install -y docker-ce docker-ce-cli containerd.io
|
||||
|
||||
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
|
||||
chmod +x /usr/local/bin/docker-compose
|
||||
```
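A quick sanity check after the install could look like this sketch (assuming the packages above installed cleanly):

```bash
# Verify the Docker engine and the docker-compose binary are usable
docker --version
docker-compose --version
systemctl is-active docker   # should print "active"
```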
|
||||
|
||||
## Install the runner
|
||||
|
||||
*This is our Nix runner version 2; previously we had another way to start Nix runners. This one has a proper way to handle concurrency, requires less boilerplate, and should be safer and more idiomatic.*
|
||||
|
||||
|
||||
```bash
|
||||
wget https://git.deuxfleurs.fr/Deuxfleurs/infrastructure/raw/branch/main/app/drone-ci/integration/nix.conf
|
||||
wget https://git.deuxfleurs.fr/Deuxfleurs/infrastructure/raw/branch/main/app/drone-ci/integration/docker-compose.yml
|
||||
|
||||
# Edit the docker-compose.yml to adapt its variables to your needs,
|
||||
# especially the capacity value and its name.
|
||||
COMPOSE_PROJECT_NAME=drone DRONE_SECRET=xxx docker-compose up -d
|
||||
```
|
||||
|
||||
That's all folks.
|
||||
|
||||
## Check if a given job is built by your runner
|
||||
|
||||
```bash
|
||||
export URL=https://drone.deuxfleurs.fr
|
||||
export REPO=Deuxfleurs/garage
|
||||
export BUILD=1312
|
||||
curl ${URL}/api/repos/${REPO}/builds/${BUILD} \
|
||||
| jq -c '[.stages[] | { name: .name, machine: .machine }]'
|
||||
```
|
||||
|
||||
It will give you the following result:
|
||||
|
||||
```json
|
||||
[{"name":"default","machine":"1686a"},{"name":"release-linux-x86_64","machine":"vimaire"},{"name":"release-linux-i686","machine":"carcajou"},{"name":"release-linux-aarch64","machine":"caribou"},{"name":"release-linux-armv6l","machine":"cariacou"},{"name":"refresh-release-page","machine":null}]
|
||||
```
|
||||
|
||||
## Random note
|
||||
|
||||
*This part might be deprecated!*
|
||||
|
||||
This setup is done mainly to allow nix builds with some cache.
|
||||
To use the cache in Drone, you must set your repository as trusted.
|
||||
The command line tool does not work (it says it successfully set your repository as trusted but it did nothing):
|
||||
the only way to set your repository as trusted is to connect to the DB and set the `repo_trusted` field of your repo to true.
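As a sketch of that manual edit (table and column names `repos` and `repo_slug` are assumed from Drone's default Postgres schema and should be verified against your instance; `repo_trusted` comes from the note above):

```bash
# Hypothetical example: mark a repository as trusted directly in Drone's Postgres database.
# Assumes you export the same connection string the server uses as DRONE_DATABASE_DATASOURCE.
psql "$DRONE_DATABASE_DATASOURCE" \
  -c "UPDATE repos SET repo_trusted = true WHERE repo_slug = 'Deuxfleurs/garage';"
```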
|
||||
|
|
@ -1,54 +0,0 @@
|
|||
version: '3.4'
|
||||
services:
|
||||
nix-daemon:
|
||||
image: nixpkgs/nix:nixos-22.05
|
||||
restart: always
|
||||
command: nix-daemon
|
||||
privileged: true
|
||||
volumes:
|
||||
- "nix:/nix"
|
||||
- "./nix.conf:/etc/nix/nix.conf:ro"
|
||||
|
||||
drone-runner:
|
||||
image: drone/drone-runner-docker:latest
|
||||
restart: always
|
||||
environment:
|
||||
- DRONE_RPC_PROTO=https
|
||||
- DRONE_RPC_HOST=drone.deuxfleurs.fr
|
||||
- DRONE_RPC_SECRET=${DRONE_SECRET}
|
||||
- DRONE_RUNNER_CAPACITY=3
|
||||
- DRONE_DEBUG=true
|
||||
- DRONE_LOGS_TRACE=true
|
||||
- DRONE_RPC_DUMP_HTTP=true
|
||||
- DRONE_RPC_DUMP_HTTP_BODY=true
|
||||
- DRONE_RUNNER_NAME=i_forgot_to_change_my_runner_name
|
||||
- DRONE_RUNNER_LABELS=nix-daemon:1
|
||||
# we should put "nix:/nix:ro" but it is not supported by
|
||||
# drone-runner-docker because the dependency envconfig does
|
||||
# not support having two colons (:) in the same stanza.
|
||||
# Without the RO flag (or using docker userns), build isolation
|
||||
# is broken.
|
||||
# https://discourse.drone.io/t/allow-mounting-a-host-volume-as-read-only/10071
|
||||
# https://github.com/kelseyhightower/envconfig/pull/153
|
||||
#
|
||||
# A workaround for isolation is to configure docker with a userns,
|
||||
# so even if the folder is writable to root, it is not to any non
|
||||
# privileged docker daemon ran by drone!
|
||||
- DRONE_RUNNER_VOLUMES=drone_nix:/nix
|
||||
- DRONE_RUNNER_ENVIRON=NIX_REMOTE:daemon
|
||||
ports:
|
||||
- "3000:3000/tcp"
|
||||
volumes:
|
||||
- "/var/run/docker.sock:/var/run/docker.sock"
|
||||
|
||||
drone-gc:
|
||||
image: drone/gc:latest
|
||||
restart: always
|
||||
environment:
|
||||
- GC_DEBUG=true
|
||||
- GC_CACHE=10gb
|
||||
- GC_INTERVAL=10m
|
||||
volumes:
|
||||
- "/var/run/docker.sock:/var/run/docker.sock"
|
||||
volumes:
|
||||
nix:
|
|
@ -1,9 +0,0 @@
|
|||
substituters = https://cache.nixos.org https://nix.web.deuxfleurs.fr
|
||||
trusted-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY= nix.web.deuxfleurs.fr:eTGL6kvaQn6cDR/F9lDYUIP9nCVR/kkshYfLDJf1yKs=
|
||||
max-jobs = auto
|
||||
cores = 0
|
||||
log-lines = 200
|
||||
filter-syscalls = true
|
||||
sandbox = true
|
||||
keep-outputs = true
|
||||
keep-derivations = true
|
|
@ -15,6 +15,5 @@ RUN go build -a -o /usr/local/bin/alps ./cmd/alps
|
|||
FROM scratch
|
||||
COPY --from=builder /usr/local/bin/alps /alps
|
||||
COPY --from=builder /tmp/alps/themes /themes
|
||||
COPY --from=builder /tmp/alps/plugins /plugins
|
||||
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
|
||||
ENTRYPOINT ["/alps"]
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
FROM amd64/debian:bullseye
|
||||
FROM amd64/debian:stretch
|
||||
|
||||
RUN apt-get update && \
|
||||
apt-get install -y \
|
||||
|
@ -11,6 +11,7 @@ RUN apt-get update && \
|
|||
dovecot-lmtpd && \
|
||||
rm -rf /etc/dovecot/*
|
||||
RUN useradd mailstore
|
||||
COPY ./conf/* /etc/dovecot/
|
||||
COPY entrypoint.sh /usr/local/bin/entrypoint
|
||||
|
||||
ENTRYPOINT ["/usr/local/bin/entrypoint"]
|
||||
|
|
|
@ -19,7 +19,10 @@ service auth {
|
|||
}
|
||||
}
|
||||
|
||||
|
||||
passdb {
|
||||
args = /etc/dovecot/dovecot-ldap.conf
|
||||
driver = ldap
|
||||
}
|
||||
|
||||
service lmtp {
|
||||
inet_listener lmtp {
|
||||
|
@ -28,23 +31,7 @@ service lmtp {
|
|||
}
|
||||
}
|
||||
|
||||
# https://doc.dovecot.org/configuration_manual/authentication/ldap_authentication/
|
||||
passdb {
|
||||
args = /etc/dovecot/dovecot-ldap.conf
|
||||
driver = ldap
|
||||
}
|
||||
userdb {
|
||||
driver = prefetch
|
||||
}
|
||||
userdb {
|
||||
args = /etc/dovecot/dovecot-ldap.conf
|
||||
driver = ldap
|
||||
}
|
||||
|
||||
|
||||
service imap-login {
|
||||
service_count = 0 # performance mode. set to 1 for secure mode
|
||||
process_min_avail = 1
|
||||
inet_listener imap {
|
||||
port = 143
|
||||
}
|
||||
|
@ -53,6 +40,11 @@ service imap-login {
|
|||
}
|
||||
}
|
||||
|
||||
userdb {
|
||||
args = uid=mailstore gid=mailstore home=/var/mail/%u
|
||||
driver = static
|
||||
}
|
||||
|
||||
protocol imap {
|
||||
mail_plugins = $mail_plugins imap_sieve
|
||||
}
|
|
@ -5,8 +5,4 @@ base = dc=deuxfleurs,dc=fr
|
|||
scope = subtree
|
||||
user_filter = (&(mail=%u)(&(objectClass=inetOrgPerson)(memberOf=cn=email,ou=groups,dc=deuxfleurs,dc=fr)))
|
||||
pass_filter = (&(mail=%u)(&(objectClass=inetOrgPerson)(memberOf=cn=email,ou=groups,dc=deuxfleurs,dc=fr)))
|
||||
user_attrs = \
|
||||
=user=%{ldap:cn}, \
|
||||
=mail=maildir:/var/mail/%{ldap:cn}, \
|
||||
=uid=1000, \
|
||||
=gid=1000
|
||||
user_attrs = mail=/var/mail/%{ldap:mail}
|
||||
|
|
|
@ -21,9 +21,8 @@ compatibility_level = 2
|
|||
#===
|
||||
# TLS parameters
|
||||
#===
|
||||
smtpd_tls_cert_file=/etc/ssl/postfix.crt
|
||||
smtpd_tls_key_file=/etc/ssl/postfix.key
|
||||
smtpd_tls_dh1024_param_file=auto
|
||||
smtpd_tls_cert_file=/etc/ssl/certs/postfix.crt
|
||||
smtpd_tls_key_file=/etc/ssl/private/postfix.key
|
||||
smtpd_use_tls=yes
|
||||
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
|
||||
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
|
||||
|
|
|
@ -28,14 +28,8 @@ job "email" {
|
|||
task "server" {
|
||||
driver = "docker"
|
||||
|
||||
constraint {
|
||||
attribute = "${attr.unique.hostname}"
|
||||
operator = "="
|
||||
value = "digitale"
|
||||
}
|
||||
|
||||
config {
|
||||
image = "superboum/amd64_dovecot:v6"
|
||||
image = "superboum/amd64_dovecot:v2"
|
||||
readonly_rootfs = false
|
||||
ports = [ "zauthentication_port", "imaps_port", "imap_port", "lmtp_port" ]
|
||||
command = "dovecot"
|
||||
|
@ -43,8 +37,8 @@ job "email" {
|
|||
volumes = [
|
||||
"secrets/ssl/certs:/etc/ssl/certs",
|
||||
"secrets/ssl/private:/etc/ssl/private",
|
||||
"secrets/conf/:/etc/dovecot/",
|
||||
"/mnt/ssd/mail:/var/mail/",
|
||||
"secrets/conf/dovecot-ldap.conf:/etc/dovecot/dovecot-ldap.conf",
|
||||
"/mnt/glusterfs/email/mail:/var/mail/",
|
||||
]
|
||||
}
|
||||
|
||||
|
@ -141,22 +135,15 @@ job "email" {
|
|||
destination = "secrets/conf/dovecot-ldap.conf"
|
||||
perms = "400"
|
||||
}
|
||||
template {
|
||||
data = file("../config/dovecot/dovecot.conf")
|
||||
destination = "secrets/conf/dovecot.conf"
|
||||
perms = "400"
|
||||
}
|
||||
|
||||
# ----- secrets ------
|
||||
template {
|
||||
# data = "{{ key \"secrets/email/dovecot/dovecot.crt\" }}"
|
||||
data = "{{ with $d := key \"tricot/certs/imap.deuxfleurs.fr\" | parseJSON }}{{ $d.cert_pem }}{{ end }}"
|
||||
data = "{{ key \"secrets/email/dovecot/dovecot.crt\" }}"
|
||||
destination = "secrets/ssl/certs/dovecot.crt"
|
||||
perms = "400"
|
||||
}
|
||||
template {
|
||||
# data = "{{ key \"secrets/email/dovecot/dovecot.key\" }}"
|
||||
data = "{{ with $d := key \"tricot/certs/imap.deuxfleurs.fr\" | parseJSON }}{{ $d.key_pem }}{{ end }}"
|
||||
data = "{{ key \"secrets/email/dovecot/dovecot.key\" }}"
|
||||
destination = "secrets/ssl/private/dovecot.key"
|
||||
perms = "400"
|
||||
}
|
||||
|
@@ -261,7 +248,8 @@ job "email" {
command = "postfix"
args = [ "start-fg" ]
volumes = [
"secrets/ssl:/etc/ssl",
"secrets/ssl/certs:/etc/ssl/certs",
"secrets/ssl/private:/etc/ssl/private",
"secrets/postfix:/etc/postfix-conf",
"/dev/log:/dev/log"
]
@@ -382,16 +370,14 @@ job "email" {

# --- secrets ---
template {
# data = "{{ key \"secrets/email/postfix/postfix.crt\" }}"
data = "{{ with $d := key \"tricot/certs/smtp.deuxfleurs.fr\" | parseJSON }}{{ $d.cert_pem }}{{ end }}"
destination = "secrets/ssl/postfix.crt"
data = "{{ key \"secrets/email/postfix/postfix.crt\" }}"
destination = "secrets/ssl/certs/postfix.crt"
perms = "400"
}

template {
# data = "{{ key \"secrets/email/postfix/postfix.key\" }}"
data = "{{ with $d := key \"tricot/certs/smtp.deuxfleurs.fr\" | parseJSON }}{{ $d.key_pem }}{{ end }}"
destination = "secrets/ssl/postfix.key"
data = "{{ key \"secrets/email/postfix/postfix.key\" }}"
destination = "secrets/ssl/private/postfix.key"
perms = "400"
}
}
@@ -432,8 +418,7 @@ job "email" {
"alps",
"traefik.enable=true",
"traefik.frontend.entryPoints=https,http",
"traefik.frontend.rule=Host:alps.deuxfleurs.fr",
"tricot alps.deuxfleurs.fr",
"traefik.frontend.rule=Host:alps.deuxfleurs.fr"
]
check {
type = "tcp"
@@ -487,9 +472,7 @@ job "email" {
"sogo",
"traefik.enable=true",
"traefik.frontend.entryPoints=https,http",
"traefik.frontend.rule=Host:www.sogo.deuxfleurs.fr,sogo.deuxfleurs.fr;PathPrefix:/",
"tricot www.sogo.deuxfleurs.fr",
"tricot sogo.deuxfleurs.fr",
"traefik.frontend.rule=Host:www.sogo.deuxfleurs.fr,sogo.deuxfleurs.fr;PathPrefix:/"
]
check {
type = "tcp"
@@ -1 +0,0 @@
USER AWS Acces Key ID
@@ -1 +0,0 @@
USER AWS Secret Access key
@@ -1 +0,0 @@
USER Restic backup password to encrypt data
@@ -1 +0,0 @@
USER Restic Repository URL, check op_guide/backup-minio to see the format
@@ -1,60 +0,0 @@
job "frontend" {
datacenters = ["dc1", "neptune"]
type = "service"
priority = 90

group "tricot" {
network {
port "http_port" { static = 80 }
port "https_port" { static = 443 }
}

task "server" {
driver = "docker"

config {
image = "lxpz/amd64_tricot:37"
network_mode = "host"
readonly_rootfs = true
ports = [ "http_port", "https_port" ]
}

resources {
cpu = 2000
memory = 500
}

restart {
interval = "30m"
attempts = 2
delay = "15s"
mode = "delay"
}

template {
data = <<EOH
TRICOT_NODE_NAME={{ env "attr.unique.hostname" }}
TRICOT_LETSENCRYPT_EMAIL=alex@adnab.me
TRICOT_ENABLE_COMPRESSION=true
RUST_LOG=tricot=debug
EOH
destination = "secrets/env"
env = true
}

service {
name = "tricot-http"
port = "http_port"
tags = [ "(diplonat (tcp_port 80))" ]
address_mode = "host"
}

service {
name = "tricot-https"
port = "https_port"
tags = [ "(diplonat (tcp_port 443))" ]
address_mode = "host"
}
}
}
}
@@ -1,24 +1,30 @@
block_size = 1048576

metadata_dir = "/meta"
data_dir = "/data"

replication_mode = "3"
metadata_dir = "/garage/meta"
data_dir = "/garage/data"

rpc_bind_addr = "[::]:3901"
rpc_secret = "{{ key "secrets/garage/rpc_secret" | trimSpace }}"

sled_cache_capacity = 536870912
sled_sync_interval_ms = 10000
consul_host = "consul.service.2.cluster.deuxfleurs.fr:8500"
consul_service_name = "garage-rpc"

bootstrap_peers = []

max_concurrent_rpc_requests = 12
data_replication_factor = 3
meta_replication_factor = 3
meta_epidemic_fanout = 3

[rpc_tls]
ca_cert = "/garage/garage-ca.crt"
node_cert = "/garage/garage.crt"
node_key = "/garage/garage.key"

[s3_api]
s3_region = "garage"
api_bind_addr = "[::]:3900"
root_domain = ".garage.deuxfleurs.fr"

[s3_web]
bind_addr = "[::]:3902"
root_domain = ".web.deuxfleurs.fr"

[admin]
api_bind_addr = "[::1]:3903"
index = "index.html"
@@ -1,5 +1,5 @@
job "garage" {
datacenters = ["dc1", "saturne", "neptune"]
datacenters = ["dc1", "belair", "saturne"]
type = "system"
priority = 80
@ -25,18 +25,16 @@ job "garage" {
|
|||
driver = "docker"
|
||||
config {
|
||||
advertise_ipv6_address = true
|
||||
image = "dxflrs/amd64_garage:v0.7.1"
|
||||
command = "/garage"
|
||||
args = [ "server" ]
|
||||
image = "lxpz/garage_amd64:v0.2.1.6"
|
||||
network_mode = "host"
|
||||
volumes = [
|
||||
"/mnt/storage/garage/data:/data",
|
||||
"/mnt/ssd/garage/meta:/meta",
|
||||
"secrets/garage.toml:/etc/garage.toml",
|
||||
"/mnt/storage/garage/data:/garage/data",
|
||||
"/mnt/ssd/garage/meta:/garage/meta",
|
||||
"secrets/garage.toml:/garage/config.toml",
|
||||
"secrets/garage-ca.crt:/garage/garage-ca.crt",
|
||||
"secrets/garage.crt:/garage/garage.crt",
|
||||
"secrets/garage.key:/garage/garage.key",
|
||||
]
|
||||
logging {
|
||||
type = "journald"
|
||||
}
|
||||
}
|
||||
|
||||
template {
|
||||
|
@ -44,8 +42,22 @@ job "garage" {
|
|||
destination = "secrets/garage.toml"
|
||||
}
|
||||
|
||||
# --- secrets ---
|
||||
template {
|
||||
data = "{{ key \"secrets/garage/garage-ca.crt\" }}"
|
||||
destination = "secrets/garage-ca.crt"
|
||||
}
|
||||
template {
|
||||
data = "{{ key \"secrets/garage/garage.crt\" }}"
|
||||
destination = "secrets/garage.crt"
|
||||
}
|
||||
template {
|
||||
data = "{{ key \"secrets/garage/garage.key\" }}"
|
||||
destination = "secrets/garage.key"
|
||||
}
|
||||
|
||||
resources {
|
||||
memory = 1500
|
||||
memory = 800
|
||||
cpu = 1000
|
||||
}
|
||||
|
||||
|
@ -55,8 +67,9 @@ job "garage" {
|
|||
service {
|
||||
tags = [
|
||||
"garage_api",
|
||||
"tricot garage.deuxfleurs.fr",
|
||||
"tricot *.garage.deuxfleurs.fr",
|
||||
"traefik.enable=true",
|
||||
"traefik.frontend.entryPoints=https,http",
|
||||
"traefik.frontend.rule=Host:garage.deuxfleurs.fr"
|
||||
]
|
||||
port = 3900
|
||||
address_mode = "driver"
|
||||
|
@ -93,39 +106,6 @@ job "garage" {
|
|||
}
|
||||
}
|
||||
}
|
||||
|
||||
service {
|
||||
tags = [
|
||||
"garage-web",
|
||||
"tricot * 1",
|
||||
"tricot-add-header Content-Security-Policy default-src 'self' 'unsafe-inline'; script-src 'self' 'unsafe-inline' https://code.jquery.com/; frame-ancestors 'self'",
|
||||
"tricot-add-header Strict-Transport-Security max-age=63072000; includeSubDomains; preload",
|
||||
"tricot-add-header X-Frame-Options SAMEORIGIN",
|
||||
"tricot-add-header X-XSS-Protection 1; mode=block",
|
||||
]
|
||||
port = 3902
|
||||
address_mode = "driver"
|
||||
name = "garage-web"
|
||||
check {
|
||||
type = "tcp"
|
||||
port = 3902
|
||||
address_mode = "driver"
|
||||
interval = "60s"
|
||||
timeout = "5s"
|
||||
check_restart {
|
||||
limit = 3
|
||||
grace = "90s"
|
||||
ignore_warnings = false
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
restart {
|
||||
interval = "30m"
|
||||
attempts = 10
|
||||
delay = "15s"
|
||||
mode = "delay"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
1 app/garage/secrets/garage/garage-ca.crt Normal file
@@ -0,0 +1 @@
USER_LONG garage-ca.crt (generated with Garage's genkeys.sh script)
1 app/garage/secrets/garage/garage-ca.key Normal file
@@ -0,0 +1 @@
USER_LONG garage-ca.key (generated with Garage's genkeys.sh script)
1 app/garage/secrets/garage/garage.crt Normal file
@@ -0,0 +1 @@
USER_LONG garage.crt (generated with Garage's genkeys.sh script)
1 app/garage/secrets/garage/garage.key Normal file
@@ -0,0 +1 @@
USER_LONG garage.key (generated with Garage's genkeys.sh script)
@@ -1 +0,0 @@
CMD_ONCE openssl rand -hex 32
@ -1,7 +1,6 @@
|
|||
FROM amd64/debian:buster as builder
|
||||
|
||||
ARG VERSION
|
||||
ARG S3_VERSION
|
||||
RUN apt-get update && \
|
||||
apt-get -qq -y full-upgrade && \
|
||||
apt-get install -y \
|
||||
|
@ -19,14 +18,11 @@ RUN apt-get update && \
|
|||
# postgresql-dev \
|
||||
libpq-dev \
|
||||
virtualenv \
|
||||
libxslt1-dev \
|
||||
git && \
|
||||
libxslt1-dev && \
|
||||
virtualenv /root/matrix-env -p /usr/bin/python3 && \
|
||||
. /root/matrix-env/bin/activate && \
|
||||
pip3 install \
|
||||
https://github.com/matrix-org/synapse/archive/v${VERSION}.tar.gz#egg=matrix-synapse[matrix-synapse-ldap3,postgres,resources.consent,saml2,url_preview] && \
|
||||
pip3 install \
|
||||
git+https://github.com/matrix-org/synapse-s3-storage-provider.git@${S3_VERSION}
|
||||
https://github.com/matrix-org/synapse/archive/v${VERSION}.tar.gz#egg=matrix-synapse[matrix-synapse-ldap3,postgres,resources.consent,saml2,url_preview]
|
||||
|
||||
FROM amd64/debian:buster
|
||||
|
||||
|
@ -46,7 +42,6 @@ RUN apt-get update && \
|
|||
|
||||
ENV LD_PRELOAD /usr/lib/x86_64-linux-gnu/libjemalloc.so.2
|
||||
COPY --from=builder /root/matrix-env /root/matrix-env
|
||||
COPY matrix-s3-async /usr/local/bin/matrix-s3-async
|
||||
COPY entrypoint.sh /usr/local/bin/entrypoint
|
||||
|
||||
ENTRYPOINT ["/usr/local/bin/entrypoint"]
|
||||
|
|
|
@ -1,16 +0,0 @@
|
|||
#!/bin/bash
|
||||
|
||||
cat > database.yaml <<EOF
|
||||
user: $PG_USER
|
||||
password: $PG_PASS
|
||||
database: $PG_DB
|
||||
host: $PG_HOST
|
||||
port: $PG_PORT
|
||||
EOF
|
||||
|
||||
while true; do
|
||||
/root/matrix-env/bin/s3_media_upload update-db 0d
|
||||
/root/matrix-env/bin/s3_media_upload --no-progress check-deleted /var/lib/matrix-synapse/media
|
||||
/root/matrix-env/bin/s3_media_upload --no-progress upload /var/lib/matrix-synapse/media matrix --delete --endpoint-url https://garage.deuxfleurs.fr
|
||||
sleep 600
|
||||
done
|
133 app/im/config/fb2mx/config.yaml Normal file
@@ -0,0 +1,133 @@
# Homeserver details
|
||||
homeserver:
|
||||
# The address that this appservice can use to connect to the homeserver.
|
||||
address: https://im.deuxfleurs.fr
|
||||
# The domain of the homeserver (for MXIDs, etc).
|
||||
domain: deuxfleurs.fr
|
||||
# Whether or not to verify the SSL certificate of the homeserver.
|
||||
# Only applies if address starts with https://
|
||||
verify_ssl: true
|
||||
|
||||
# Application service host/registration related details
|
||||
# Changing these values requires regeneration of the registration.
|
||||
appservice:
|
||||
# The address that the homeserver can use to connect to this appservice.
|
||||
address: http://fb2mx.service.2.cluster.deuxfleurs.fr:29319
|
||||
|
||||
# The hostname and port where this appservice should listen.
|
||||
hostname: 0.0.0.0
|
||||
port: 29319
|
||||
# The maximum body size of appservice API requests (from the homeserver) in mebibytes
|
||||
# Usually 1 is enough, but on high-traffic bridges you might need to increase this to avoid 413s
|
||||
max_body_size: 1
|
||||
|
||||
# The full URI to the database. SQLite and Postgres are fully supported.
|
||||
# Other DBMSes supported by SQLAlchemy may or may not work.
|
||||
# Format examples:
|
||||
# SQLite: sqlite:///filename.db
|
||||
# Postgres: postgres://username:password@hostname/dbname
|
||||
database: '{{ key "secrets/chat/fb2mx/db_url" | trimSpace }}'
|
||||
|
||||
# The unique ID of this appservice.
|
||||
id: facebook
|
||||
# Username of the appservice bot.
|
||||
bot_username: facebookbot
|
||||
# Display name and avatar for bot. Set to "remove" to remove display name/avatar, leave empty
|
||||
# to leave display name/avatar as-is.
|
||||
bot_displayname: Facebook bridge bot
|
||||
bot_avatar: mxc://maunium.net/ddtNPZSKMNqaUzqrHuWvUADv
|
||||
|
||||
# Community ID for bridged users (changes registration file) and rooms.
|
||||
# Must be created manually.
|
||||
community_id: "+fbusers:deuxfleurs.fr"
|
||||
|
||||
# Authentication tokens for AS <-> HS communication. Autogenerated; do not modify.
|
||||
as_token: '{{ key "secrets/chat/fb2mx/as_token" | trimSpace }}'
|
||||
hs_token: '{{ key "secrets/chat/fb2mx/hs_token" | trimSpace }}'
|
||||
|
||||
# Bridge config
|
||||
bridge:
|
||||
# Localpart template of MXIDs for Facebook users.
|
||||
# {userid} is replaced with the user ID of the Facebook user.
|
||||
username_template: "facebook_{userid}"
|
||||
# Localpart template for per-user room grouping community IDs.
|
||||
# The bridge will create these communities and add all of the specific user's portals to the community.
|
||||
# {localpart} is the MXID localpart and {server} is the MXID server part of the user.
|
||||
#
|
||||
# `facebook_{localpart}={server}` is a good value.
|
||||
community_template: "facebook_{localpart}={server}"
|
||||
# Displayname template for Facebook users.
|
||||
# {displayname} is replaced with the display name of the Facebook user
|
||||
# as defined below in displayname_preference.
|
||||
# Keys available for displayname_preference are also available here.
|
||||
displayname_template: "{displayname} (FB)"
|
||||
# Available keys:
|
||||
# "name" (full name)
|
||||
# "first_name"
|
||||
# "last_name"
|
||||
# "nickname"
|
||||
# "own_nickname" (user-specific!)
|
||||
displayname_preference:
|
||||
- name
|
||||
|
||||
# The prefix for commands. Only required in non-management rooms.
|
||||
command_prefix: "!fb"
|
||||
|
||||
# Number of chats to sync (and create portals for) on startup/login.
|
||||
# Maximum 20, set 0 to disable automatic syncing.
|
||||
initial_chat_sync: 10
|
||||
# Whether or not the Facebook users of logged in Matrix users should be
|
||||
# invited to private chats when the user sends a message from another client.
|
||||
invite_own_puppet_to_pm: false
|
||||
# Whether or not to use /sync to get presence, read receipts and typing notifications when using
|
||||
# your own Matrix account as the Matrix puppet for your Facebook account.
|
||||
sync_with_custom_puppets: true
|
||||
# Whether or not to bridge presence in both directions. Facebook allows users not to broadcast
|
||||
# presence, but then it won't send other users' presence to the client.
|
||||
presence: true
|
||||
# Whether or not to update avatars when syncing all contacts at startup.
|
||||
update_avatar_initial_sync: true
|
||||
|
||||
# Permissions for using the bridge.
|
||||
# Permitted values:
|
||||
# user - Use the bridge with puppeting.
|
||||
# admin - Use and administrate the bridge.
|
||||
# Permitted keys:
|
||||
# * - All Matrix users
|
||||
# domain - All users on that homeserver
|
||||
# mxid - Specific user
|
||||
permissions:
|
||||
"deuxfleurs.fr": "user"
|
||||
|
||||
# Python logging configuration.
|
||||
#
|
||||
# See section 16.7.2 of the Python documentation for more info:
|
||||
# https://docs.python.org/3.6/library/logging.config.html#configuration-dictionary-schema
|
||||
logging:
|
||||
version: 1
|
||||
formatters:
|
||||
colored:
|
||||
(): mautrix_facebook.util.ColorFormatter
|
||||
format: "[%(asctime)s] [%(levelname)s@%(name)s] %(message)s"
|
||||
normal:
|
||||
format: "[%(asctime)s] [%(levelname)s@%(name)s] %(message)s"
|
||||
handlers:
|
||||
file:
|
||||
class: logging.handlers.RotatingFileHandler
|
||||
formatter: normal
|
||||
filename: ./mautrix-facebook.log
|
||||
maxBytes: 10485760
|
||||
backupCount: 10
|
||||
console:
|
||||
class: logging.StreamHandler
|
||||
formatter: colored
|
||||
loggers:
|
||||
mau:
|
||||
level: DEBUG
|
||||
fbchat:
|
||||
level: DEBUG
|
||||
aiohttp:
|
||||
level: INFO
|
||||
root:
|
||||
level: DEBUG
|
||||
handlers: [file, console]
|
11 app/im/config/fb2mx/registration.yaml Normal file
@@ -0,0 +1,11 @@
id: facebook
|
||||
as_token: '{{ key "secrets/chat/fb2mx/as_token" | trimSpace }}'
|
||||
hs_token: '{{ key "secrets/chat/fb2mx/hs_token" | trimSpace }}'
|
||||
namespaces:
|
||||
users:
|
||||
- exclusive: true
|
||||
regex: '@facebook_.+:deuxfleurs.fr'
|
||||
group_id: '+fbusers:deuxfleurs.fr'
|
||||
url: http://fb2mx.service.2.cluster.deuxfleurs.fr:29319
|
||||
sender_localpart: facebookbot
|
||||
rate_limited: false
|
|
@ -59,7 +59,7 @@ listeners:
|
|||
x_forwarded: false
|
||||
|
||||
resources:
|
||||
- names: [client, federation]
|
||||
- names: [client]
|
||||
compress: true
|
||||
|
||||
- port: 8448
|
||||
|
@ -83,7 +83,6 @@ listeners:
|
|||
# Database configuration
|
||||
database:
|
||||
name: psycopg2
|
||||
allow_unsafe_locale: false
|
||||
args:
|
||||
user: {{ key "secrets/chat/synapse/postgres_user" | trimSpace }}
|
||||
password: {{ key "secrets/chat/synapse/postgres_pwd" | trimSpace }}
|
||||
|
@ -138,29 +137,6 @@ federation_rc_concurrent: 3
|
|||
media_store_path: "/var/lib/matrix-synapse/media"
|
||||
uploads_path: "/var/lib/matrix-synapse/uploads"
|
||||
|
||||
media_storage_providers:
|
||||
- module: s3_storage_provider.S3StorageProviderBackend
|
||||
store_local: True
|
||||
store_remote: True
|
||||
store_synchronous: True
|
||||
config:
|
||||
bucket: matrix
|
||||
# All of the below options are optional, for use with non-AWS S3-like
|
||||
# services, or to specify access tokens here instead of some external method.
|
||||
region_name: garage
|
||||
endpoint_url: https://garage.deuxfleurs.fr
|
||||
access_key_id: {{ key "secrets/chat/synapse/s3_access_key" | trimSpace }}
|
||||
secret_access_key: {{ key "secrets/chat/synapse/s3_secret_key" | trimSpace }}
|
||||
|
||||
# The object storage class used when uploading files to the bucket.
|
||||
# Default is STANDARD.
|
||||
#storage_class: "STANDARD_IA"
|
||||
|
||||
# The maximum number of concurrent threads which will be used to connect
|
||||
# to S3. Each thread manages a single connection. Default is 40.
|
||||
#
|
||||
#threadpool_size: 20
|
||||
|
||||
# The largest allowed upload size in bytes
|
||||
max_upload_size: "100M"
|
||||
|
||||
|
@ -315,7 +291,7 @@ bcrypt_rounds: 12
|
|||
# Allows users to register as guests without a password/email/etc, and
|
||||
# participate in rooms hosted on this server which have been made
|
||||
# accessible to anonymous users.
|
||||
allow_guest_access: False
|
||||
allow_guest_access: True
|
||||
|
||||
# The list of identity servers trusted to verify third party
|
||||
# identifiers by this server.
|
||||
|
@ -332,38 +308,11 @@ enable_metrics: False
|
|||
## API Configuration ##
|
||||
|
||||
# A list of event types that will be included in the room_invite_state
|
||||
#room_invite_state_types:
|
||||
# - "m.room.join_rules"
|
||||
# - "m.room.canonical_alias"
|
||||
# - "m.room.avatar"
|
||||
# - "m.room.name"
|
||||
|
||||
# Controls for the state that is shared with users who receive an invite
|
||||
# to a room
|
||||
#
|
||||
room_prejoin_state:
|
||||
# By default, the following state event types are shared with users who
|
||||
# receive invites to the room:
|
||||
#
|
||||
# - m.room.join_rules
|
||||
# - m.room.canonical_alias
|
||||
# - m.room.avatar
|
||||
# - m.room.encryption
|
||||
# - m.room.name
|
||||
# - m.room.create
|
||||
#
|
||||
# Uncomment the following to disable these defaults (so that only the event
|
||||
# types listed in 'additional_event_types' are shared). Defaults to 'false'.
|
||||
#
|
||||
#disable_default_event_types: true
|
||||
|
||||
# Additional state event types to share with users when they are invited
|
||||
# to a room.
|
||||
#
|
||||
# By default, this list is empty (so only the default event types are shared).
|
||||
#
|
||||
#additional_event_types:
|
||||
# - org.example.custom.event.type
|
||||
room_invite_state_types:
|
||||
- "m.room.join_rules"
|
||||
- "m.room.canonical_alias"
|
||||
- "m.room.avatar"
|
||||
- "m.room.name"
|
||||
|
||||
|
||||
# A list of application service config file to use
|
||||
|
@ -469,21 +418,3 @@ password_config:
|
|||
report_stats: false
|
||||
suppress_key_server_warning: true
|
||||
enable_group_creation: true
|
||||
|
||||
#experimental_features:
|
||||
# spaces_enabled: true
|
||||
|
||||
presence:
|
||||
enabled: false
|
||||
limit_remote_rooms:
|
||||
enabled: true
|
||||
complexity: 3.0
|
||||
complexity_error: "Ce salon de discussion a trop d'activité, le serveur n'est pas assez puissant pour le rejoindre. N'hésitez pas à remonter l'information à l'équipe technique, nous pourrons ajuster la limitation au besoin."
|
||||
admins_can_join: false
|
||||
retention:
|
||||
enabled: true
|
||||
# no default policy for now, this is intended.
|
||||
# DO NOT ADD ONE BECAUSE THIS IS DANGEROUS AND WILL DELETE CONTENT WE WANT TO KEEP!
|
||||
purge_jobs:
|
||||
- interval: 1d
|
||||
|
||||
|
|
|
@ -15,7 +15,7 @@ job "im" {
|
|||
driver = "docker"
|
||||
|
||||
config {
|
||||
image = "superboum/amd64_synapse:v53"
|
||||
image = "superboum/amd64_synapse:v43"
|
||||
network_mode = "host"
|
||||
readonly_rootfs = true
|
||||
ports = [ "client_port", "federation_port" ]
|
||||
|
@ -27,8 +27,8 @@ job "im" {
|
|||
]
|
||||
volumes = [
|
||||
"secrets/conf:/etc/matrix-synapse",
|
||||
"/tmp/synapse-media:/var/lib/matrix-synapse/media",
|
||||
"/tmp/synapse-uploads:/var/lib/matrix-synapse/uploads",
|
||||
"/mnt/glusterfs/chat/matrix/synapse/media:/var/lib/matrix-synapse/media",
|
||||
"/mnt/glusterfs/chat/matrix/synapse/uploads:/var/lib/matrix-synapse/uploads",
|
||||
"/tmp/synapse-logs:/var/log/matrix-synapse",
|
||||
"/tmp/synapse:/tmp"
|
||||
]
|
||||
|
@ -86,7 +86,7 @@ job "im" {
|
|||
|
||||
resources {
|
||||
cpu = 1000
|
||||
memory = 2000
|
||||
memory = 4000
|
||||
}
|
||||
|
||||
service {
|
||||
|
@ -95,10 +95,11 @@ job "im" {
|
|||
address_mode = "host"
|
||||
tags = [
|
||||
"matrix",
|
||||
"tricot im.deuxfleurs.fr/_matrix 100",
|
||||
"tricot im.deuxfleurs.fr:443/_matrix 100",
|
||||
"tricot im.deuxfleurs.fr/_synapse 100",
|
||||
"tricot-add-header Access-Control-Allow-Origin *",
|
||||
"traefik.enable=true",
|
||||
"traefik.frontend.entryPoints=https",
|
||||
"traefik.frontend.rule=Host:im.deuxfleurs.fr;PathPrefix:/_matrix",
|
||||
"traefik.frontend.headers.customResponseHeaders=Access-Control-Allow-Origin: *",
|
||||
"traefik.frontend.priority=100"
|
||||
]
|
||||
check {
|
||||
type = "tcp"
|
||||
|
@ -119,46 +120,91 @@ job "im" {
|
|||
address_mode = "host"
|
||||
tags = [
|
||||
"matrix",
|
||||
"tricot deuxfleurs.fr/_matrix 100",
|
||||
"tricot deuxfleurs.fr:443/_matrix 100",
|
||||
"traefik.enable=true",
|
||||
"traefik.frontend.entryPoints=https",
|
||||
"traefik.frontend.rule=Host:deuxfleurs.fr;PathPrefix:/_matrix",
|
||||
"traefik.frontend.priority=100"
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
group "easybridge" {
|
||||
count = 1
|
||||
|
||||
task "media-async-upload" {
|
||||
network {
|
||||
port "api_port" {
|
||||
static = 8321
|
||||
to = 8321
|
||||
}
|
||||
port "web_port" { to = 8281 }
|
||||
}
|
||||
|
||||
task "easybridge" {
|
||||
driver = "docker"
|
||||
|
||||
config {
|
||||
image = "superboum/amd64_synapse:v53"
|
||||
readonly_rootfs = true
|
||||
command = "/usr/local/bin/matrix-s3-async"
|
||||
work_dir = "/tmp"
|
||||
image = "lxpz/easybridge_amd64:33"
|
||||
ports = [ "api_port", "web_port" ]
|
||||
volumes = [
|
||||
"/tmp/synapse-media:/var/lib/matrix-synapse/media",
|
||||
"/tmp/synapse-uploads:/var/lib/matrix-synapse/uploads",
|
||||
"/tmp/synapse:/tmp"
|
||||
"secrets/conf:/data"
|
||||
]
|
||||
}
|
||||
|
||||
resources {
|
||||
cpu = 100
|
||||
memory = 200
|
||||
args = [ "./easybridge", "-config", "/data/config.json" ]
|
||||
}
|
||||
|
||||
template {
|
||||
data = <<EOH
|
||||
AWS_ACCESS_KEY_ID={{ key "secrets/chat/synapse/s3_access_key" | trimSpace }}
|
||||
AWS_SECRET_ACCESS_KEY={{ key "secrets/chat/synapse/s3_secret_key" | trimSpace }}
|
||||
AWS_DEFAULT_REGION=garage
|
||||
PG_USER={{ key "secrets/chat/synapse/postgres_user" | trimSpace }}
|
||||
PG_PASS={{ key "secrets/chat/synapse/postgres_pwd" | trimSpace }}
|
||||
PG_DB={{ key "secrets/chat/synapse/postgres_db" | trimSpace }}
|
||||
PG_HOST=psql-proxy.service.2.cluster.deuxfleurs.fr
|
||||
PG_PORT=5432
|
||||
EOH
|
||||
destination = "secrets/env"
|
||||
env = true
|
||||
data = file("../config/easybridge/registration.yaml.tpl")
|
||||
destination = "secrets/conf/registration.yaml"
|
||||
}
|
||||
|
||||
template {
|
||||
data = file("../config/easybridge/config.json.tpl")
|
||||
destination = "secrets/conf/config.json"
|
||||
}
|
||||
|
||||
resources {
|
||||
memory = 250
|
||||
cpu = 100
|
||||
}
|
||||
|
||||
service {
|
||||
name = "easybridge-api"
|
||||
tags = ["easybridge-api"]
|
||||
port = "api_port"
|
||||
address_mode = "host"
|
||||
check {
|
||||
type = "tcp"
|
||||
port = "api_port"
|
||||
interval = "60s"
|
||||
timeout = "5s"
|
||||
check_restart {
|
||||
limit = 3
|
||||
grace = "90s"
|
||||
ignore_warnings = false
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
service {
|
||||
name = "easybridge-web"
|
||||
tags = [
|
||||
"easybridge-web",
|
||||
"traefik.enable=true",
|
||||
"traefik.frontend.entryPoints=https,http",
|
||||
"traefik.frontend.rule=Host:easybridge.deuxfleurs.fr",
|
||||
]
|
||||
port = "web_port"
|
||||
address_mode = "host"
|
||||
check {
|
||||
type = "tcp"
|
||||
port = "web_port"
|
||||
interval = "60s"
|
||||
timeout = "5s"
|
||||
check_restart {
|
||||
limit = 3
|
||||
grace = "90s"
|
||||
ignore_warnings = false
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
@ -174,7 +220,7 @@ EOH
|
|||
task "server" {
|
||||
driver = "docker"
|
||||
config {
|
||||
image = "superboum/amd64_riotweb:v30"
|
||||
image = "superboum/amd64_riotweb:v22"
|
||||
ports = [ "web_port" ]
|
||||
volumes = [
|
||||
"secrets/config.json:/srv/http/config.json"
|
||||
|
@ -193,8 +239,10 @@ EOH
|
|||
service {
|
||||
tags = [
|
||||
"webstatic",
|
||||
"tricot im.deuxfleurs.fr 10",
|
||||
"tricot riot.deuxfleurs.fr 10",
|
||||
"traefik.enable=true",
|
||||
"traefik.frontend.entryPoints=https",
|
||||
"traefik.frontend.rule=Host:im.deuxfleurs.fr,riot.deuxfleurs.fr;PathPrefix:/",
|
||||
"traefik.frontend.priority=10"
|
||||
]
|
||||
port = "web_port"
|
||||
address_mode = "host"
|
||||
|
|
|
@@ -1 +0,0 @@
USER matrix
@@ -1 +0,0 @@
USER matrix
@ -1,4 +1,4 @@
|
|||
FROM debian:bookworm AS builder
|
||||
FROM debian:buster AS builder
|
||||
|
||||
# unzip is required when executing the mvn package command
|
||||
RUN apt-get update && \
|
||||
|
@ -15,7 +15,7 @@ RUN mvn package -DskipTests -Dassembly.skipAssembly=false
|
|||
RUN unzip target/jicofo-1.1-SNAPSHOT-archive.zip && \
|
||||
mv jicofo-1.1-SNAPSHOT /srv/build
|
||||
|
||||
FROM debian:bookworm
|
||||
FROM debian:buster
|
||||
|
||||
RUN apt-get update && \
|
||||
apt-get install -y openjdk-11-jre-headless ca-certificates
|
||||
|
|
|
@ -3,7 +3,6 @@
|
|||
update-ca-certificates -f
|
||||
|
||||
exec java \
|
||||
-Dlog4j2.formatMsgNoLookups=true \
|
||||
-Djdk.tls.ephemeralDHKeySize=2048 \
|
||||
-Djava.util.logging.config.file=/usr/share/jicofo/lib/logging.properties \
|
||||
-Dconfig.file=/etc/jitsi/jicofo.conf \
|
||||
|
|
|
@ -1,8 +1,8 @@
|
|||
FROM debian:bookworm AS builder
|
||||
FROM debian:buster AS builder
|
||||
|
||||
RUN apt-get update && \
|
||||
apt-get install -y curl && \
|
||||
curl -sL https://deb.nodesource.com/setup_16.x | bash - && \
|
||||
curl -sL https://deb.nodesource.com/setup_14.x | bash - && \
|
||||
apt-get install -y git nodejs make git unzip
|
||||
|
||||
ARG MEET_TAG
|
||||
|
@ -12,7 +12,7 @@ WORKDIR jitsi-meet
|
|||
RUN npm install && \
|
||||
make
|
||||
|
||||
FROM debian:bookworm
|
||||
FROM debian:buster
|
||||
|
||||
COPY --from=builder /jitsi-meet /srv/jitsi-meet
|
||||
RUN apt-get update && \
|
||||
|
|
|
@ -0,0 +1,31 @@
|
|||
From b327e580ab83110cdb52bc1d11687a096b8fc1df Mon Sep 17 00:00:00 2001
|
||||
From: Quentin Dufour <quentin@dufour.io>
|
||||
Date: Mon, 1 Feb 2021 07:16:50 +0100
|
||||
Subject: [PATCH] Disable legacy parameters
|
||||
|
||||
---
|
||||
jvb/src/main/kotlin/org/jitsi/videobridge/Main.kt | 8 --------
|
||||
1 file changed, 8 deletions(-)
|
||||
|
||||
diff --git a/jvb/src/main/kotlin/org/jitsi/videobridge/Main.kt b/jvb/src/main/kotlin/org/jitsi/videobridge/Main.kt
|
||||
index df71f480..8f0ef9a5 100644
|
||||
--- a/jvb/src/main/kotlin/org/jitsi/videobridge/Main.kt
|
||||
+++ b/jvb/src/main/kotlin/org/jitsi/videobridge/Main.kt
|
||||
@@ -62,14 +62,6 @@ fun main(args: Array<String>) {
|
||||
// to be passed.
|
||||
System.setProperty("org.eclipse.jetty.util.log.class", "org.eclipse.jetty.util.log.JavaUtilLog")
|
||||
|
||||
- // Before initializing the application programming interfaces (APIs) of
|
||||
- // Jitsi Videobridge, set any System properties which they use and which
|
||||
- // may be specified by the command-line arguments.
|
||||
- System.setProperty(
|
||||
- Videobridge.REST_API_PNAME,
|
||||
- cmdLine.getOptionValue("--apis").contains(Videobridge.REST_API).toString()
|
||||
- )
|
||||
-
|
||||
// Reload the Typesafe config used by ice4j, because the original was initialized before the new system
|
||||
// properties were set.
|
||||
JitsiConfig.reloadNewConfig()
|
||||
--
|
||||
2.25.1
|
||||
|
|
@ -1,40 +0,0 @@
|
|||
From 01507442620e5a57624c921b508eac7d572440d0 Mon Sep 17 00:00:00 2001
|
||||
From: Quentin Dufour <quentin@deuxfleurs.fr>
|
||||
Date: Tue, 25 Jan 2022 14:46:22 +0100
|
||||
Subject: [PATCH] Remove deprecated argument
|
||||
|
||||
---
|
||||
.../main/kotlin/org/jitsi/videobridge/Main.kt | 17 -----------------
|
||||
1 file changed, 17 deletions(-)
|
||||
|
||||
diff --git a/jvb/src/main/kotlin/org/jitsi/videobridge/Main.kt b/jvb/src/main/kotlin/org/jitsi/videobridge/Main.kt
|
||||
index 4f6cb78..3db00f2 100644
|
||||
--- a/jvb/src/main/kotlin/org/jitsi/videobridge/Main.kt
|
||||
+++ b/jvb/src/main/kotlin/org/jitsi/videobridge/Main.kt
|
||||
@@ -52,23 +52,6 @@ import org.jitsi.videobridge.websocket.singleton as webSocketServiceSingleton
|
||||
fun main(args: Array<String>) {
|
||||
val logger = LoggerImpl("org.jitsi.videobridge.Main")
|
||||
|
||||
- // We only support command line arguments for backward compatibility. The --apis options is the last one supported,
|
||||
- // and it is only used to enable/disable the REST API (XMPP is only controlled through the config files).
|
||||
- // TODO: fully remove support for --apis
|
||||
- CmdLine().apply {
|
||||
- parse(args)
|
||||
- getOptionValue("--apis")?.let {
|
||||
- logger.warn(
|
||||
- "A deprecated command line argument (--apis) is present. Please use the config file to control the " +
|
||||
- "REST API instead (see rest.md). Support for --apis will be removed in a future version."
|
||||
- )
|
||||
- System.setProperty(
|
||||
- Videobridge.REST_API_PNAME,
|
||||
- it.contains(Videobridge.REST_API).toString()
|
||||
- )
|
||||
- }
|
||||
- }
|
||||
-
|
||||
setupMetaconfigLogger()
|
||||
|
||||
setSystemPropertyDefaults()
|
||||
--
|
||||
2.33.1
|
||||
|
|
@ -1,4 +1,4 @@
|
|||
FROM debian:bookworm AS builder
|
||||
FROM debian:buster AS builder
|
||||
|
||||
RUN apt-get update && \
|
||||
apt-get install -y git unzip maven openjdk-11-jdk-headless
|
||||
|
@ -8,15 +8,15 @@ RUN git clone --depth 1 --branch ${JVB_TAG} https://github.com/jitsi/jitsi-video
|
|||
|
||||
WORKDIR jitsi-videobridge
|
||||
COPY *.patch .
|
||||
RUN git apply 0001-Remove-deprecated-argument.patch
|
||||
RUN git apply 0001-Disable-legacy-parameters.patch
|
||||
RUN mvn package -DskipTests
|
||||
RUN unzip jvb/target/jitsi-videobridge*.zip && \
|
||||
mv jitsi-videobridge-*-SNAPSHOT build
|
||||
|
||||
FROM debian:bookworm
|
||||
FROM debian:buster
|
||||
|
||||
RUN apt-get update && \
|
||||
apt-get install -y openjdk-11-jre-headless curl iproute2
|
||||
apt-get install -y openjdk-11-jre-headless curl
|
||||
|
||||
COPY --from=builder /jitsi-videobridge/build /usr/share/jvb
|
||||
COPY jvb_run /usr/local/bin/jvb_run
|
||||
|
|
|
@ -12,7 +12,6 @@ fi
|
|||
echo "NAT config: ${JITSI_NAT_LOCAL_IP} -> ${JITSI_NAT_PUBLIC_IP}"
|
||||
|
||||
exec java \
|
||||
-Dlog4j2.formatMsgNoLookups=true \
|
||||
-Djdk.tls.ephemeralDHKeySize=2048 \
|
||||
-Djava.util.logging.config.file=/usr/share/jvb/lib/logging.properties \
|
||||
-Dconfig.file=/etc/jitsi/videobridge.conf \
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
FROM debian:bookworm as builder
|
||||
FROM debian:buster as builder
|
||||
|
||||
RUN apt-get update && \
|
||||
apt-get install -y git unzip
|
||||
|
@ -6,7 +6,7 @@ RUN apt-get update && \
|
|||
ARG MEET_TAG
|
||||
RUN git clone --depth 1 --branch ${MEET_TAG} https://github.com/jitsi/jitsi-meet/
|
||||
|
||||
FROM debian:bookworm
|
||||
FROM debian:buster
|
||||
|
||||
ARG PROSODY_VERSION
|
||||
RUN apt-get update && \
|
||||
|
|
|
@ -1,5 +1,5 @@
|
|||
# some doc: https://www.nginx.com/resources/wiki/start/topics/examples/full/
|
||||
error_log /dev/stderr info;
|
||||
error_log /dev/stderr;
|
||||
|
||||
events {}
|
||||
|
||||
|
@ -39,10 +39,8 @@ http {
|
|||
|
||||
# inspired by https://raw.githubusercontent.com/jitsi/docker-jitsi-meet/master/web/rootfs/defaults/meet.conf
|
||||
server {
|
||||
#listen 0.0.0.0:{{ env "NOMAD_PORT_https_port" }} ssl http2 default_server;
|
||||
#listen [::]:{{ env "NOMAD_PORT_https_port" }} ssl http2 default_server;
|
||||
listen 0.0.0.0:{{ env "NOMAD_PORT_https_port" }} default_server;
|
||||
listen [::]:{{ env "NOMAD_PORT_https_port" }} default_server;
|
||||
listen 0.0.0.0:{{ env "NOMAD_PORT_https_port" }} ssl http2 default_server;
|
||||
listen [::]:{{ env "NOMAD_PORT_https_port" }} ssl http2 default_server;
|
||||
client_max_body_size 0;
|
||||
server_name _;
|
||||
|
||||
|
@ -50,8 +48,8 @@ http {
|
|||
ssi on;
|
||||
ssi_types application/x-javascript application/javascript;
|
||||
|
||||
#ssl_certificate /etc/nginx/jitsi.crt;
|
||||
#ssl_certificate_key /etc/nginx/jitsi.key;
|
||||
ssl_certificate /etc/nginx/jitsi.crt;
|
||||
ssl_certificate_key /etc/nginx/jitsi.key;
|
||||
root /srv/jitsi-meet;
|
||||
index index.html;
|
||||
error_page 404 /static/404.html;
|
||||
|
@ -92,7 +90,7 @@ http {
|
|||
add_header 'Access-Control-Allow-Origin' '*';
|
||||
proxy_pass http://{{ env "NOMAD_ADDR_bosh_port" }}/http-bind;
|
||||
proxy_set_header X-Forwarded-For \$remote_addr;
|
||||
#proxy_set_header Host \$http_host;
|
||||
proxy_set_header Host \$http_host;
|
||||
}
|
||||
|
||||
# not used yet VVV
|
||||
|
|
|
@ -21,7 +21,7 @@ job "jitsi" {
|
|||
task "xmpp" {
|
||||
driver = "docker"
|
||||
config {
|
||||
image = "superboum/amd64_jitsi_xmpp:v10"
|
||||
image = "superboum/amd64_jitsi_xmpp:v9"
|
||||
ports = [ "bosh_port", "xmpp_port" ]
|
||||
network_mode = "host"
|
||||
volumes = [
|
||||
|
@ -102,7 +102,7 @@ EOF
|
|||
task "front" {
|
||||
driver = "docker"
|
||||
config {
|
||||
image = "superboum/amd64_jitsi_meet:v5"
|
||||
image = "superboum/amd64_jitsi_meet:v4"
|
||||
network_mode = "host"
|
||||
ports = [ "https_port" ]
|
||||
volumes = [
|
||||
|
@ -144,8 +144,7 @@ EOF
|
|||
"traefik.enable=true",
|
||||
"traefik.frontend.entryPoints=https",
|
||||
"traefik.frontend.rule=Host:jitsi.deuxfleurs.fr;PathPrefix:/",
|
||||
"traefik.protocol=https",
|
||||
"tricot jitsi.deuxfleurs.fr",
|
||||
"traefik.protocol=https"
|
||||
]
|
||||
port = "https_port"
|
||||
address_mode = "host"
|
||||
|
@ -167,7 +166,7 @@ EOF
|
|||
task "jicofo" {
|
||||
driver = "docker"
|
||||
config {
|
||||
image = "superboum/amd64_jitsi_conference_focus:v9"
|
||||
image = "superboum/amd64_jitsi_conference_focus:v7"
|
||||
network_mode = "host"
|
||||
volumes = [
|
||||
"secrets/certs/jitsi.crt:/usr/local/share/ca-certificates/jitsi.crt",
|
||||
|
@ -201,7 +200,7 @@ EOF
|
|||
task "videobridge" {
|
||||
driver = "docker"
|
||||
config {
|
||||
image = "superboum/amd64_jitsi_videobridge:v20"
|
||||
image = "superboum/amd64_jitsi_videobridge:v17"
|
||||
network_mode = "host"
|
||||
ports = [ "video_port" ]
|
||||
ulimit {
|
||||
|
|
|
@ -1,44 +0,0 @@
|
|||
# Bridge accounts on various services
|
||||
|
||||
[rocketchat]
|
||||
[rocketchat.dravedev]
|
||||
Server = "https://rocketchat.drave.quebec:443"
|
||||
Login = "{{ key "secrets/matterbridge/rocketchat.drave.quebec_user" | trimSpace }}"
|
||||
Password = "{{ key "secrets/matterbridge/rocketchat.drave.quebec_pass" | trimSpace }}"
|
||||
PrefixMessagesWithNick=false
|
||||
RemoteNickFormat="{NICK}"
|
||||
|
||||
[matrix]
|
||||
[matrix.deuxfleurs]
|
||||
Server = "https://im.deuxfleurs.fr"
|
||||
Login = "{{ key "secrets/matterbridge/im.deuxfleurs.fr_user" | trimSpace }}"
|
||||
Password = "{{ key "secrets/matterbridge/im.deuxfleurs.fr_pass" | trimSpace }}"
|
||||
PrefixMessagesWithNick=true
|
||||
RemoteNickFormat="<{NICK}> "
|
||||
|
||||
[discord]
|
||||
[discord.la-console]
|
||||
Token = "{{ key "secrets/matterbridge/discord.com_token" | trimSpace }}"
|
||||
Server = "872244032443678730"
|
||||
RemoteNickFormat="{NICK}"
|
||||
PrefixMessagesWithNick=false
|
||||
AutoWebhooks = true
|
||||
|
||||
# Rooms we are bridging
|
||||
|
||||
[[gateway]]
|
||||
name = "rfid"
|
||||
enable = true
|
||||
|
||||
[[gateway.inout]]
|
||||
account = "rocketchat.dravedev"
|
||||
channel = "rfid"
|
||||
|
||||
[[gateway.inout]]
|
||||
account = "matrix.deuxfleurs"
|
||||
channel = "#rfid:deuxfleurs.fr"
|
||||
|
||||
[[gateway.inout]]
|
||||
account = "discord.la-console"
|
||||
channel = "rfid"
|
||||
|
|
@ -1,40 +0,0 @@
|
|||
job "matterbridge" {
|
||||
datacenters = ["dc1"]
|
||||
type = "service"
|
||||
priority = 90
|
||||
|
||||
constraint {
|
||||
attribute = "${attr.cpu.arch}"
|
||||
value = "amd64"
|
||||
}
|
||||
|
||||
group "main" {
|
||||
count = 1
|
||||
|
||||
task "bridge" {
|
||||
driver = "docker"
|
||||
config {
|
||||
image = "42wim/matterbridge:1.23"
|
||||
readonly_rootfs = true
|
||||
volumes = [
|
||||
"secrets/matterbridge.toml:/etc/matterbridge/matterbridge.toml"
|
||||
]
|
||||
}
|
||||
|
||||
resources {
|
||||
memory = 200
|
||||
}
|
||||
|
||||
template {
|
||||
data = file("../config/matterbridge.toml")
|
||||
destination = "secrets/matterbridge.toml"
|
||||
}
|
||||
|
||||
restart {
|
||||
attempts = 10
|
||||
delay = "30s"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
27 app/nextcloud/build/nextcloud/Dockerfile Normal file
@@ -0,0 +1,27 @@
FROM debian:10
|
||||
|
||||
RUN apt-get update && \
|
||||
apt-get -qq -y full-upgrade
|
||||
|
||||
RUN apt-get install -y apache2 php php-gd php-mbstring php-pgsql php-curl php-dom php-xml php-zip \
|
||||
php-intl php-ldap php-fileinfo php-exif php-apcu php-redis php-imagick unzip curl wget && \
|
||||
phpenmod gd && \
|
||||
phpenmod curl && \
|
||||
phpenmod mbstring && \
|
||||
phpenmod pgsql && \
|
||||
phpenmod dom && \
|
||||
phpenmod zip && \
|
||||
phpenmod intl && \
|
||||
phpenmod ldap && \
|
||||
phpenmod fileinfo && \
|
||||
phpenmod exif && \
|
||||
phpenmod apcu && \
|
||||
phpenmod redis && \
|
||||
phpenmod imagick && \
|
||||
phpenmod xml
|
||||
|
||||
COPY container-setup.sh /tmp
|
||||
RUN /tmp/container-setup.sh
|
||||
|
||||
COPY entrypoint.sh /
|
||||
CMD /entrypoint.sh
|
37 app/nextcloud/build/nextcloud/container-setup.sh Executable file
@@ -0,0 +1,37 @@
#!/bin/sh
|
||||
|
||||
set -ex
|
||||
|
||||
curl https://download.nextcloud.com/server/releases/nextcloud-19.0.0.zip > /tmp/nextcloud.zip
|
||||
cd /var/www
|
||||
unzip /tmp/nextcloud.zip
|
||||
rm /tmp/nextcloud.zip
|
||||
mv html html.old
|
||||
mv nextcloud html
|
||||
|
||||
cd html
|
||||
mkdir data
|
||||
|
||||
cd apps
|
||||
wget https://github.com/nextcloud/tasks/releases/download/v0.13.1/tasks.tar.gz
|
||||
tar xf tasks.tar.gz
|
||||
wget https://github.com/nextcloud/maps/releases/download/v0.1.6/maps-0.1.6.tar.gz
|
||||
tar xf maps-0.1.6.tar.gz
|
||||
wget https://github.com/nextcloud/calendar/releases/download/v2.0.3/calendar.tar.gz
|
||||
tar xf calendar.tar.gz
|
||||
wget https://github.com/nextcloud/news/releases/download/14.1.11/news.tar.gz
|
||||
tar xf news.tar.gz
|
||||
wget https://github.com/nextcloud/notes/releases/download/v3.6.0/notes.tar.gz
|
||||
tar xf notes.tar.gz
|
||||
wget https://github.com/nextcloud/contacts/releases/download/v3.3.0/contacts.tar.gz
|
||||
tar xf contacts.tar.gz
|
||||
wget https://github.com/nextcloud/mail/releases/download/v1.4.0/mail.tar.gz
|
||||
tar xf mail.tar.gz
|
||||
wget https://github.com/nextcloud/groupfolders/releases/download/v6.0.6/groupfolders.tar.gz
|
||||
tar xf groupfolders.tar.gz
|
||||
rm *.tar.gz
|
||||
|
||||
chown -R www-data:www-data /var/www/html
|
||||
|
||||
cd /var/www/html
|
||||
php occ
|
8 app/nextcloud/build/nextcloud/entrypoint.sh Executable file
@@ -0,0 +1,8 @@
#!/bin/sh

set -xe

chown www-data:www-data /var/www/html/config/config.php
touch /var/www/html/data/.ocdata

exec apachectl -DFOREGROUND
49 app/nextcloud/config/config.php.tpl Normal file
@@ -0,0 +1,49 @@
<?php
|
||||
$CONFIG = array (
|
||||
'appstoreenabled' => false,
|
||||
'instanceid' => '{{ key "secrets/nextcloud/instance_id" | trimSpace }}',
|
||||
'passwordsalt' => '{{ key "secrets/nextcloud/password_salt" | trimSpace }}',
|
||||
'secret' => '{{ key "secrets/nextcloud/secret" | trimSpace }}',
|
||||
'trusted_domains' => array (
|
||||
0 => 'nextcloud.deuxfleurs.fr',
|
||||
),
|
||||
'memcache.local' => '\\OC\\Memcache\\APCu',
|
||||
|
||||
'objectstore' => array(
|
||||
'class' => '\\OC\\Files\\ObjectStore\\S3',
|
||||
'arguments' => array(
|
||||
'bucket' => 'nextcloud',
|
||||
'autocreate' => false,
|
||||
'key' => '{{ key "secrets/nextcloud/garage_access_key" | trimSpace }}',
|
||||
'secret' => '{{ key "secrets/nextcloud/garage_secret_key" | trimSpace }}',
|
||||
'hostname' => 'garage.deuxfleurs.fr',
|
||||
'port' => 443,
|
||||
'use_ssl' => true,
|
||||
'region' => 'garage',
|
||||
// required for some non Amazon S3 implementations
|
||||
'use_path_style' => true
|
||||
),
|
||||
),
|
||||
|
||||
'dbtype' => 'pgsql',
|
||||
'dbhost' => 'psql-proxy.service.2.cluster.deuxfleurs.fr',
|
||||
'dbname' => 'nextcloud',
|
||||
'dbtableprefix' => 'nc_',
|
||||
'dbuser' => '{{ key "secrets/nextcloud/db_user" | trimSpace }}',
|
||||
'dbpassword' => '{{ key "secrets/nextcloud/db_pass" | trimSpace }}',
|
||||
|
||||
'default_language' => 'fr',
|
||||
'default_locale' => 'fr_FR',
|
||||
|
||||
'mail_domain' => 'deuxfleurs.fr',
|
||||
'mail_from_address' => 'nextcloud@deuxfleurs.fr',
|
||||
// TODO SMTP CONFIG
|
||||
|
||||
// TODO REDIS CACHE
|
||||
|
||||
'version' => '19.0.0.12',
|
||||
'overwrite.cli.url' => 'https://nextcloud.deuxfleurs.fr',
|
||||
|
||||
'installed' => true,
|
||||
);
|
||||
|
65 app/nextcloud/deploy/nextcloud.hcl Normal file
@@ -0,0 +1,65 @@
job "nextcloud" {
|
||||
datacenters = ["dc1", "belair"]
|
||||
type = "service"
|
||||
priority = 40
|
||||
|
||||
constraint {
|
||||
attribute = "${attr.cpu.arch}"
|
||||
value = "amd64"
|
||||
}
|
||||
|
||||
group "nextcloud" {
|
||||
count = 1
|
||||
|
||||
network {
|
||||
port "web_port" {
|
||||
to = 80
|
||||
}
|
||||
}
|
||||
|
||||
task "nextcloud" {
|
||||
driver = "docker"
|
||||
config {
|
||||
image = "lxpz/deuxfleurs_nextcloud_amd64:8"
|
||||
ports = [ "web_port" ]
|
||||
volumes = [
|
||||
"secrets/config.php:/var/www/html/config/config.php"
|
||||
]
|
||||
}
|
||||
|
||||
template {
|
||||
data = file("../config/config.php.tpl")
|
||||
destination = "secrets/config.php"
|
||||
}
|
||||
|
||||
resources {
|
||||
memory = 1000
|
||||
cpu = 2000
|
||||
}
|
||||
|
||||
service {
|
||||
name = "nextcloud"
|
||||
tags = [
|
||||
"nextcloud",
|
||||
"traefik.enable=true",
|
||||
"traefik.frontend.entryPoints=https,http",
|
||||
"traefik.frontend.rule=Host:nextcloud.deuxfleurs.fr",
|
||||
]
|
||||
port = "web_port"
|
||||
address_mode = "host"
|
||||
check {
|
||||
type = "tcp"
|
||||
port = "web_port"
|
||||
interval = "60s"
|
||||
timeout = "5s"
|
||||
check_restart {
|
||||
limit = 3
|
||||
grace = "90s"
|
||||
ignore_warnings = false
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
20 app/nextcloud/integration/README.md Normal file
@@ -0,0 +1,20 @@
Install Owncloud CLI:

php ./occ \
--no-interaction \
--verbose \
maintenance:install \
--database pgsql \
--database-name nextcloud \
--database-host postgres \
--database-user nextcloud \
--database-pass nextcloud \
--admin-user nextcloud \
--admin-pass nextcloud \
--admin-email coucou@deuxfleurs.fr

Official image entrypoint:

https://github.com/nextcloud/docker/blob/master/20.0/fpm/entrypoint.sh
31 app/nextcloud/integration/bottin.json Normal file
@@ -0,0 +1,31 @@
{
|
||||
"suffix": "dc=deuxfleurs,dc=fr",
|
||||
"bind": "0.0.0.0:389",
|
||||
"consul_host": "http://consul:8500",
|
||||
"log_level": "debug",
|
||||
"acl": [
|
||||
"*,dc=deuxfleurs,dc=fr::read:*:* !userpassword",
|
||||
"*::read modify:SELF:*",
|
||||
"ANONYMOUS::bind:*,ou=users,dc=deuxfleurs,dc=fr:",
|
||||
"ANONYMOUS::bind:cn=admin,dc=deuxfleurs,dc=fr:",
|
||||
"*,ou=services,ou=users,dc=deuxfleurs,dc=fr::bind:*,ou=users,dc=deuxfleurs,dc=fr:*",
|
||||
"*,ou=services,ou=users,dc=deuxfleurs,dc=fr::read:*:*",
|
||||
|
||||
"*:cn=asso_deuxfleurs,ou=groups,dc=deuxfleurs,dc=fr:add:*,ou=invitations,dc=deuxfleurs,dc=fr:*",
|
||||
"ANONYMOUS::bind:*,ou=invitations,dc=deuxfleurs,dc=fr:",
|
||||
"*,ou=invitations,dc=deuxfleurs,dc=fr::delete:SELF:*",
|
||||
|
||||
"*:cn=asso_deuxfleurs,ou=groups,dc=deuxfleurs,dc=fr:add:*,ou=users,dc=deuxfleurs,dc=fr:*",
|
||||
"*,ou=invitations,dc=deuxfleurs,dc=fr::add:*,ou=users,dc=deuxfleurs,dc=fr:*",
|
||||
|
||||
"*:cn=asso_deuxfleurs,ou=groups,dc=deuxfleurs,dc=fr:modifyAdd:cn=email,ou=groups,dc=deuxfleurs,dc=fr:*",
|
||||
"*,ou=invitations,dc=deuxfleurs,dc=fr::modifyAdd:cn=email,ou=groups,dc=deuxfleurs,dc=fr:*",
|
||||
"*:cn=asso_deuxfleurs,ou=groups,dc=deuxfleurs,dc=fr:modifyAdd:cn=seafile,ou=groups,dc=deuxfleurs,dc=fr:*",
|
||||
"*,ou=invitations,dc=deuxfleurs,dc=fr::modifyAdd:cn=seafile,ou=groups,dc=deuxfleurs,dc=fr:*",
|
||||
"*:cn=asso_deuxfleurs,ou=groups,dc=deuxfleurs,dc=fr:modifyAdd:cn=nextcloud,ou=groups,dc=deuxfleurs,dc=fr:*",
|
||||
"*,ou=invitations,dc=deuxfleurs,dc=fr::modifyAdd:cn=seafile,ou=nextcloud,dc=deuxfleurs,dc=fr:*",
|
||||
|
||||
"cn=admin,dc=deuxfleurs,dc=fr::read add modify delete:*:*",
|
||||
"*:cn=admin,ou=groups,dc=deuxfleurs,dc=fr:read add modify delete:*:*"
|
||||
]
|
||||
}
|
27 app/nextcloud/integration/docker-compose.yml Normal file
@@ -0,0 +1,27 @@
version: '3.4'
|
||||
services:
|
||||
php:
|
||||
image: lxpz/deuxfleurs_nextcloud_amd64:8
|
||||
depends_on:
|
||||
- bottin
|
||||
- postgres
|
||||
ports:
|
||||
- "80:80"
|
||||
|
||||
postgres:
|
||||
image: postgres:9.6.19
|
||||
environment:
|
||||
- POSTGRES_DB=nextcloud
|
||||
- POSTGRES_USER=nextcloud
|
||||
- POSTGRES_PASSWORD=nextcloud
|
||||
|
||||
bottin:
|
||||
image: lxpz/bottin_amd64:14
|
||||
depends_on:
|
||||
- consul
|
||||
volumes:
|
||||
- ./bottin.json:/config.json
|
||||
|
||||
consul:
|
||||
image: consul:1.8.4
|
||||
|
|
@@ -41,8 +41,7 @@ EOH
"platoo",
"traefik.enable=true",
"traefik.frontend.entryPoints=https",
"traefik.frontend.rule=Host:platoo.deuxfleurs.fr;PathPrefix:/",
"tricot platoo.deuxfleurs.fr",
"traefik.frontend.rule=Host:platoo.deuxfleurs.fr;PathPrefix:/"
]
port = "web_port"
address_mode = "host"
@ -1,4 +1,4 @@
|
|||
FROM rust:1.58.1-slim-bullseye as builder
|
||||
FROM rust:1.47.0-slim-buster as builder
|
||||
|
||||
RUN apt-get update && \
|
||||
apt-get install -y \
|
||||
|
@ -10,7 +10,6 @@ RUN apt-get update && \
|
|||
libpq-dev \
|
||||
gettext \
|
||||
git \
|
||||
python \
|
||||
curl \
|
||||
gcc \
|
||||
make \
|
||||
|
@ -26,11 +25,11 @@ WORKDIR /opt/plume
|
|||
RUN git checkout ${VERSION}
|
||||
|
||||
WORKDIR /opt/plume/script
|
||||
RUN chmod a+x ./wasm-deps.sh && ./wasm-deps.sh
|
||||
RUN chmod a+x ./wasm-deps.sh && sleep 1 && ./wasm-deps.sh
|
||||
|
||||
WORKDIR /opt/plume
|
||||
RUN cargo install wasm-pack
|
||||
RUN chmod a+x ./script/plume-front.sh && ./script/plume-front.sh
|
||||
RUN chmod a+x ./script/plume-front.sh && sleep 1 && ./script/plume-front.sh
|
||||
RUN cargo install --path ./ --force --no-default-features --features postgres
|
||||
RUN cargo install --path plume-cli --force --no-default-features --features postgres
|
||||
RUN cargo clean
|
||||
|
@ -41,14 +40,13 @@ FROM debian:bullseye-slim
|
|||
RUN apt-get update && apt-get install -y --no-install-recommends \
|
||||
ca-certificates \
|
||||
libpq5 \
|
||||
libssl1.1 \
|
||||
rclone \
|
||||
fuse
|
||||
libssl1.1
|
||||
|
||||
WORKDIR /app
|
||||
|
||||
COPY --from=builder /opt/plume /app
|
||||
COPY --from=builder /usr/local/cargo/bin/plm /usr/local/bin/
|
||||
COPY --from=builder /usr/local/cargo/bin/plume /usr/local/bin/
|
||||
COPY plm-start /usr/local/bin/
|
||||
|
||||
CMD ["plume"]
|
||||
CMD ["plm-start"]
|
||||
|
|
9 app/plume/build/plume/plm-start Executable file
@@ -0,0 +1,9 @@
#!/bin/bash

until plm migration run;
do sleep 2;
done
plm search init
plm instance new --domain "$DOMAIN_NAME" --name "$INSTANCE_NAME" --private

plume
@@ -28,5 +28,3 @@ LDAP_USER_NAME_ATTR=cn
LDAP_USER_MAIL_ATTR=mail
LDAP_TLS=false

RUST_BACKTRACE=1
RUST_LOG=info
|
|
@ -1,4 +1,4 @@
|
|||
job "plume-blog" {
|
||||
job "plume" {
|
||||
datacenters = ["dc1"]
|
||||
type = "service"
|
||||
|
||||
|
@ -15,22 +15,16 @@ job "plume-blog" {
|
|||
}
|
||||
|
||||
task "plume" {
|
||||
constraint {
|
||||
attribute = "${attr.unique.hostname}"
|
||||
operator = "="
|
||||
value = "digitale"
|
||||
}
|
||||
|
||||
driver = "docker"
|
||||
config {
|
||||
image = "superboum/plume:v8"
|
||||
image = "superboum/plume:v3"
|
||||
network_mode = "host"
|
||||
ports = [ "web_port" ]
|
||||
#command = "cat"
|
||||
#args = [ "/dev/stdout" ]
|
||||
volumes = [
|
||||
"/mnt/ssd/plume/search_index:/app/search_index",
|
||||
"/mnt/ssd/plume/media:/app/static/media"
|
||||
"/mnt/glusterfs/plume/media:/app/static/media",
|
||||
"/mnt/glusterfs/plume/search:/app/search_index"
|
||||
]
|
||||
}
|
||||
|
||||
|
@ -41,7 +35,7 @@ job "plume-blog" {
|
|||
}
|
||||
|
||||
resources {
|
||||
memory = 500
|
||||
memory = 100
|
||||
cpu = 100
|
||||
}
|
||||
|
||||
|
@ -52,7 +46,6 @@ job "plume-blog" {
|
|||
"traefik.enable=true",
|
||||
"traefik.frontend.entryPoints=https,http",
|
||||
"traefik.frontend.rule=Host:plume.deuxfleurs.fr",
|
||||
"tricot plume.deuxfleurs.fr",
|
||||
]
|
||||
port = "web_port"
|
||||
address_mode = "host"
|
||||
|
@ -70,12 +63,6 @@ job "plume-blog" {
|
|||
}
|
||||
}
|
||||
}
|
||||
restart {
|
||||
interval = "30m"
|
||||
attempts = 20
|
||||
delay = "15s"
|
||||
mode = "delay"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
|
|
@@ -1 +0,0 @@
USER Backup AWS access key ID
@@ -1 +0,0 @@
USER Backup AWS secret access key
@@ -1 +0,0 @@
USER Restic password to encrypt backups
@@ -1 +0,0 @@
USER Restic repository, eg. s3:https://s3.garage.tld
@@ -1,4 +1,4 @@
FROM golang:1.19.0-bullseye AS builder
FROM golang:1.13-buster AS builder

ARG STOLON_VERSION
WORKDIR /stolon
@@ -9,8 +9,10 @@ COPY 0001-Add-max-rate-to-pg_basebackup.patch .
RUN git apply 0001-Add-max-rate-to-pg_basebackup.patch
RUN make && chmod +x /stolon/bin/*

FROM postgres:14.5-bullseye
FROM amd64/debian:stretch
ARG PG_VERSION
RUN apt-get update && \
apt-get install -y postgresql-all=${PG_VERSION}
COPY --from=builder /stolon/bin /usr/local/bin
USER postgres
ENTRYPOINT []
CMD ["/bin/bash"]
Some files were not shown because too many files have changed in this diff.