Compare commits


No commits in common. "main" and "feature/enable-traefik-metrics" have entirely different histories.

355 changed files with 4157 additions and 6788 deletions

3
.gitmodules vendored

@ -1,3 +1,6 @@
[submodule "docker/static/goStatic"]
path = app/build/static/goStatic
url = https://github.com/PierreZ/goStatic
[submodule "docker/blog/quentin.dufour.io"]
path = docker/blog-quentin/quentin.dufour.io
url = git@gitlab.com:superboum/quentin.dufour.io.git


@ -1,8 +1,27 @@
deuxfleurs.fr
=============
**OBSOLETION NOTICE:** We are progressively migrating our stack to NixOS, to replace Ansible. Most of the files present in this repository are outdated or obsolete;
the current code for our infrastructure is at: <https://git.deuxfleurs.fr/Deuxfleurs/nixcfg>.
*Many things are still missing here, including proper documentation. Please stay nice, it is a volunteer project. Feel free to open pull/merge requests to improve it. Thanks.*
## Our abstraction stack
We try to build a generic abstraction stack between our different resources (CPU, RAM, disk, etc.) and our services (Chat, Storage, etc.):
* ansible (physical node conf)
* nomad (schedule containers)
* consul (distributed key value store / lock / service discovery)
* garage/glusterfs (file storage)
* stolon + postgresql (distributed relational database)
* docker (container tool)
* bottin (LDAP server, auth)
Some services we provide:
* Chat (Matrix/Riot)
* Email (Postfix/Dovecot/Sogo)
* Storage (Seafile)
As a generic abstraction is provided, deploying new services should be easy.
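As a very rough sketch (the module name and paths below are hypothetical, not an existing service), deploying a new service amounts to writing a Nomad job file and submitting it once the API tunnels described further down are in place:
```bash
# Hypothetical example: a module whose job file lives in app/dummy/deploy/
nomad job plan app/dummy/deploy/dummy.hcl   # dry-run: show what the scheduler would change
nomad job run  app/dummy/deploy/dummy.hcl   # submit the job to the cluster
consul catalog services                     # check that the new service registered itself
```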
## I am lost, how does this repo work?
@ -19,3 +38,55 @@ To ease the development, we make the choice of a fully integrated environment
3. `op_guide`: Guides explaining operations you can run cluster-wide (like configuring postgres)
## Start hacking
### Clone the repository
```
git clone https://gitlab.com/superboum/deuxfleurs.fr.git
git submodule init
git submodule update
```
### Deploying/Updating new services is done from your machine
*The following instructions are provided for ops that already have access to the servers.*
Deploy Nomad on your machine:
```bash
export NOMAD_VER=0.9.1
wget https://releases.hashicorp.com/nomad/${NOMAD_VER}/nomad_${NOMAD_VER}_linux_amd64.zip
unzip nomad_${NOMAD_VER}_linux_amd64.zip
sudo mv nomad /usr/local/bin
rm nomad_${NOMAD_VER}_linux_amd64.zip
```
Deploy Consul on your machine:
```bash
export CONSUL_VER=1.5.1
wget https://releases.hashicorp.com/consul/${CONSUL_VER}/consul_${CONSUL_VER}_linux_amd64.zip
unzip consul_${CONSUL_VER}_linux_amd64.zip
sudo mv consul /usr/local/bin
rm consul_${CONSUL_VER}_linux_amd64.zip
```
Create an alias (and put it in your `.bashrc`) to bind APIs on your machine:
```
alias bind_df="ssh \
-p110 \
-N \
-L 4646:127.0.0.1:4646 \
-L 8500:127.0.0.1:8500 \
-L 8082:traefik.service.2.cluster.deuxfleurs.fr:8082 \
-L 5432:psql-proxy.service.2.cluster.deuxfleurs.fr:5432 \
<a server from the cluster>"
```
and run:
```
bind_df
```
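Once the tunnel is up, the Nomad and Consul CLIs installed above should answer on the forwarded local ports, which gives a quick sanity check:
```bash
nomad status     # talks to the cluster through 127.0.0.1:4646
consul members   # talks to the cluster through 127.0.0.1:8500
```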

2
app/.gitignore vendored

@ -1,2 +0,0 @@
env/
__pycache__


@ -1,66 +0,0 @@
# Folder hierarchy
- `<module>/build/<image_name>/`: folders with dockerfiles and other necessary resources for building container images
- `<module>/config/`: folder containing configuration files, referenced by deployment file
- `<module>/secrets/`: folder containing secrets, which can be synchronized with Consul using `secretmgr.py`
- `<module>/deploy/`: folder containing the HCL file(s) necessary for deploying the module
- `<module>/integration/`: folder containing files for integration testing using docker-compose
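For orientation, a hypothetical module `dummy` laid out along these conventions would look like:
```
dummy/
├── build/dummy/Dockerfile
├── config/dummy.conf
├── secrets/dummy/some_secret
├── deploy/dummy.hcl
└── integration/docker-compose.yml
```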
# Secret Manager `secretmgr.py`
The Secret Manager ensures that all secrets are present where they should be in the cluster.
**You need access to the cluster** (SSH port forwarding) for it to find any secret on the cluster. Refer to the [README](../README.md) in the parent directory, at the bottom of the file.
## How to install `secretmgr.py` dependencies
```bash
### Install system dependencies first:
## On fedora
dnf install -y openldap-devel cyrus-sasl-devel
## On ubuntu
apt-get install -y libldap2-dev libsasl2-dev
### Now install the Python dependencies from requirements.txt:
## Either using a virtual environment
# (requires virtualenv python module)
python3 -m virtualenv env
# Must be done every time you create a new terminal window in this folder:
. env/bin/activate
# Install the deps
pip install -r requirements.txt
## Or by installing the dependencies for your system user:
pip3 install --user -r requirements.txt
```
## How to use `secretmgr.py`
Check that all secrets are correctly deployed for app `dummy`:
```bash
./secretmgr.py check dummy
```
Generate secrets for app `dummy` if they don't already exist:
```bash
./secretmgr.py gen dummy
```
Rotate secrets for app `dummy`, overwriting existing ones (be careful, this is dangerous!):
```bash
./secretmgr.py regen dummy
```
# Upgrading one of our packaged apps to a new version
1. Edit `docker-compose.yml`
2. Change the `VERSION` variable to the desired version
3. Increment the docker image tag by 1 (eg: superboum/riot:v13 -> superboum/riot:v14)
4. Run `docker-compose build`
5. Run `docker-compose push`
6. Done


@ -1 +0,0 @@
result


@ -1,8 +0,0 @@
## Build
```bash
docker load < $(nix-build docker.nix)
docker push superboum/backup-psql:???
```


@ -1,106 +0,0 @@
#!/usr/bin/env python3
import shutil,sys,os,datetime,minio,subprocess
working_directory = "."
if 'CACHE_DIR' in os.environ: working_directory = os.environ['CACHE_DIR']
required_space_in_bytes = 20 * 1024 * 1024 * 1024
bucket = os.environ['AWS_BUCKET']
key = os.environ['AWS_ACCESS_KEY_ID']
secret = os.environ['AWS_SECRET_ACCESS_KEY']
endpoint = os.environ['AWS_ENDPOINT']
pubkey = os.environ['CRYPT_PUBLIC_KEY']
psql_host = os.environ['PSQL_HOST']
psql_user = os.environ['PSQL_USER']
s3_prefix = str(datetime.datetime.now())
files = [ "backup_manifest", "base.tar.gz", "pg_wal.tar.gz" ]
clear_paths = [ os.path.join(working_directory, f) for f in files ]
crypt_paths = [ os.path.join(working_directory, f) + ".age" for f in files ]
s3_keys = [ s3_prefix + "/" + f for f in files ]
def abort(msg):
for p in clear_paths + crypt_paths:
if os.path.exists(p):
print(f"Remove {p}")
os.remove(p)
if msg: sys.exit(msg)
else: print("success")
# Check we have enough space on disk
if shutil.disk_usage(working_directory).free < required_space_in_bytes:
abort(f"Not enough space on disk at path {working_directory} to perform a backup, aborting")
# Check postgres password is set
if 'PGPASSWORD' not in os.environ:
abort(f"You must pass postgres' password through the environment variable PGPASSWORD")
# Check our working directory is empty
if len(os.listdir(working_directory)) != 0:
abort(f"Working directory {working_directory} is not empty, aborting")
# Check Minio
client = minio.Minio(endpoint, key, secret)
if not client.bucket_exists(bucket):
abort(f"Bucket {bucket} does not exist or its access is forbidden, aborting")
# Perform the backup locally
try:
ret = subprocess.run(["pg_basebackup",
f"--host={psql_host}",
f"--username={psql_user}",
f"--pgdata={working_directory}",
f"--format=tar",
"--wal-method=stream",
"--gzip",
"--compress=6",
"--progress",
"--max-rate=5M",
])
if ret.returncode != 0:
abort(f"pg_basebackup exited, expected return code 0, got {ret.returncode}. aborting")
except Exception as e:
abort(f"pg_basebackup raised exception {e}. aborting")
# Check that the expected files are here
for p in clear_paths:
print(f"Checking that {p} exists locally")
if not os.path.exists(p):
abort(f"File {p} expected but not found, aborting")
# Cipher them
for c, e in zip(clear_paths, crypt_paths):
print(f"Ciphering {c} to {e}")
try:
ret = subprocess.run(["age", "-r", pubkey, "-o", e, c])
if ret.returncode != 0:
abort(f"age exit code is {ret}, 0 expected. aborting")
except Exception as e:
abort(f"aged raised an exception. {e}. aborting")
# Upload the backup to S3
for p, k in zip(crypt_paths, s3_keys):
try:
print(f"Uploading {p} to {k}")
result = client.fput_object(bucket, k, p)
print(
"created {0} object; etag: {1}, version-id: {2}".format(
result.object_name, result.etag, result.version_id,
),
)
except Exception as e:
abort(f"Exception {e} occurred while uploading {p}. aborting")
# Check that the files have been uploaded
for k in s3_keys:
try:
print(f"Checking that {k} exists remotely")
result = client.stat_object(bucket, k)
print(
"last-modified: {0}, size: {1}".format(
result.last_modified, result.size,
),
)
except Exception as e:
abort(f"{k} not found on S3. {e}. aborting")
abort(None)


@ -1,8 +0,0 @@
{
pkgsSrc = fetchTarball {
# Latest commit on https://github.com/NixOS/nixpkgs/tree/nixos-21.11
# As of 2022-04-15
url ="https://github.com/NixOS/nixpkgs/archive/2f06b87f64bc06229e05045853e0876666e1b023.tar.gz";
sha256 = "sha256:1d7zg96xw4qsqh7c89pgha9wkq3rbi9as3k3d88jlxy2z0ns0cy2";
};
}


@ -1,37 +0,0 @@
let
common = import ./common.nix;
pkgs = import common.pkgsSrc {};
python-with-my-packages = pkgs.python3.withPackages (p: with p; [
minio
]);
in
pkgs.stdenv.mkDerivation {
name = "backup-psql";
src = pkgs.lib.sourceFilesBySuffices ./. [ ".py" ];
buildInputs = [
python-with-my-packages
pkgs.age
pkgs.postgresql_14
];
buildPhase = ''
cat > backup-psql <<EOF
#!${pkgs.bash}/bin/bash
export PYTHONPATH=${python-with-my-packages}/${python-with-my-packages.sitePackages}
export PATH=${python-with-my-packages}/bin:${pkgs.age}/bin:${pkgs.postgresql_14}/bin
${python-with-my-packages}/bin/python3 $out/lib/backup-psql.py
EOF
chmod +x backup-psql
'';
installPhase = ''
mkdir -p $out/{bin,lib}
cp *.py $out/lib/backup-psql.py
cp backup-psql $out/bin/backup-psql
'';
}


@ -1,11 +0,0 @@
let
common = import ./common.nix;
app = import ./default.nix;
pkgs = import common.pkgsSrc {};
in
pkgs.dockerTools.buildImage {
name = "superboum/backup-psql-docker";
config = {
Cmd = [ "${app}/bin/backup-psql" ];
};
}


@ -1,171 +0,0 @@
job "backup_daily" {
datacenters = ["dc1"]
type = "batch"
priority = "60"
periodic {
cron = "@daily"
// Do not allow overlapping runs.
prohibit_overlap = true
}
group "backup-dovecot" {
constraint {
attribute = "${attr.unique.hostname}"
operator = "="
value = "digitale"
}
task "main" {
driver = "docker"
config {
image = "restic/restic:0.12.1"
entrypoint = [ "/bin/sh", "-c" ]
args = [ "restic backup /mail && restic forget --keep-within 1m1d --keep-within-weekly 3m --keep-within-monthly 1y && restic prune --max-unused 50% --max-repack-size 2G && restic check" ]
volumes = [
"/mnt/ssd/mail:/mail"
]
}
template {
data = <<EOH
AWS_ACCESS_KEY_ID={{ key "secrets/email/dovecot/backup_aws_access_key_id" }}
AWS_SECRET_ACCESS_KEY={{ key "secrets/email/dovecot/backup_aws_secret_access_key" }}
RESTIC_REPOSITORY={{ key "secrets/email/dovecot/backup_restic_repository" }}
RESTIC_PASSWORD={{ key "secrets/email/dovecot/backup_restic_password" }}
EOH
destination = "secrets/env_vars"
env = true
}
resources {
cpu = 500
memory = 200
}
restart {
attempts = 2
interval = "30m"
delay = "15s"
mode = "fail"
}
}
}
group "backup-plume" {
constraint {
attribute = "${attr.unique.hostname}"
operator = "="
value = "digitale"
}
task "main" {
driver = "docker"
config {
image = "restic/restic:0.12.1"
entrypoint = [ "/bin/sh", "-c" ]
args = [ "restic backup /plume && restic forget --keep-within 1m1d --keep-within-weekly 3m --keep-within-monthly 1y && restic prune --max-unused 50% --max-repack-size 2G && restic check" ]
volumes = [
"/mnt/ssd/plume/media:/plume"
]
}
template {
data = <<EOH
AWS_ACCESS_KEY_ID={{ key "secrets/plume/backup_aws_access_key_id" }}
AWS_SECRET_ACCESS_KEY={{ key "secrets/plume/backup_aws_secret_access_key" }}
RESTIC_REPOSITORY={{ key "secrets/plume/backup_restic_repository" }}
RESTIC_PASSWORD={{ key "secrets/plume/backup_restic_password" }}
EOH
destination = "secrets/env_vars"
env = true
}
resources {
cpu = 500
memory = 200
}
restart {
attempts = 2
interval = "30m"
delay = "15s"
mode = "fail"
}
}
}
group "backup-consul" {
task "consul-kv-export" {
driver = "docker"
lifecycle {
hook = "prestart"
sidecar = false
}
config {
image = "consul:1.11.2"
network_mode = "host"
entrypoint = [ "/bin/sh", "-c" ]
args = [ "/bin/consul kv export > $NOMAD_ALLOC_DIR/consul.json" ]
}
env {
CONSUL_HTTP_ADDR = "http://consul.service.2.cluster.deuxfleurs.fr:8500"
}
resources {
cpu = 200
memory = 200
}
restart {
attempts = 2
interval = "30m"
delay = "15s"
mode = "fail"
}
}
task "restic-backup" {
driver = "docker"
config {
image = "restic/restic:0.12.1"
entrypoint = [ "/bin/sh", "-c" ]
args = [ "restic backup $NOMAD_ALLOC_DIR/consul.json && restic forget --keep-within 1m1d --keep-within-weekly 3m --keep-within-monthly 1y && restic prune --max-unused 50% --max-repack-size 2G && restic check" ]
}
template {
data = <<EOH
AWS_ACCESS_KEY_ID={{ key "secrets/backup/consul/backup_aws_access_key_id" }}
AWS_SECRET_ACCESS_KEY={{ key "secrets/backup/consul/backup_aws_secret_access_key" }}
RESTIC_REPOSITORY={{ key "secrets/backup/consul/backup_restic_repository" }}
RESTIC_PASSWORD={{ key "secrets/backup/consul/backup_restic_password" }}
EOH
destination = "secrets/env_vars"
env = true
}
resources {
cpu = 200
memory = 200
}
restart {
attempts = 2
interval = "30m"
delay = "15s"
mode = "fail"
}
}
}
}


@ -1,55 +0,0 @@
job "backup_weekly" {
datacenters = ["dc1"]
type = "batch"
priority = "60"
periodic {
cron = "@weekly"
// Do not allow overlapping runs.
prohibit_overlap = true
}
group "backup-psql" {
task "main" {
driver = "docker"
config {
image = "superboum/backup-psql-docker:gyr3aqgmhs0hxj0j9hkrdmm1m07i8za2"
volumes = [
// Mount a cache on the hard disk to avoid filling the SSD
"/mnt/storage/tmp_bckp_psql:/mnt/cache"
]
}
template {
data = <<EOH
CACHE_DIR=/mnt/cache
AWS_BUCKET=backups-pgbasebackup
AWS_ENDPOINT=s3.deuxfleurs.shirokumo.net
AWS_ACCESS_KEY_ID={{ key "secrets/backup/psql/aws_access_key_id" }}
AWS_SECRET_ACCESS_KEY={{ key "secrets/backup/psql/aws_secret_access_key" }}
CRYPT_PUBLIC_KEY={{ key "secrets/backup/psql/crypt_public_key" }}
PSQL_HOST=psql-proxy.service.2.cluster.deuxfleurs.fr
PSQL_USER={{ key "secrets/postgres/keeper/pg_repl_username" }}
PGPASSWORD={{ key "secrets/postgres/keeper/pg_repl_pwd" }}
EOH
destination = "secrets/env_vars"
env = true
}
resources {
cpu = 200
memory = 200
}
restart {
attempts = 2
interval = "30m"
delay = "15s"
mode = "fail"
}
}
}
}


@ -1 +0,0 @@
USER Backup AWS access key ID


@ -1 +0,0 @@
USER Backup AWS secret access key


@ -1 +0,0 @@
USER Restic password to encrypt backups


@ -1 +0,0 @@
USER Restic repository, eg. s3:https://s3.garage.tld


@ -1 +0,0 @@
USER_LONG Private ed25519 key of the container doing the backup


@ -1 +0,0 @@
USER Public ed25519 key of the container doing the backup (this key must be in authorized_keys on the backup target host)


@ -1 +0,0 @@
USER Minio access key


@ -1 +0,0 @@
USER Minio secret key


@ -1 +0,0 @@
USER A private key to decrypt backups with age


@ -1 +0,0 @@
USER A public key to encrypt backups with age


@ -1 +0,0 @@
USER Directory where to store backups on target host


@ -1 +0,0 @@
USER SSH fingerprint of the target machine (format: copy here the corresponding line from your known_hosts file)


@ -1 +0,0 @@
USER Hostname of the backup target host


@ -1 +0,0 @@
USER SSH port number to connect to the target host


@ -1 +0,0 @@
USER SSH username to log in as on the target host


@ -1,83 +0,0 @@
job "bagage" {
datacenters = ["dc1"]
type = "service"
priority = 90
constraint {
attribute = "${attr.cpu.arch}"
value = "amd64"
}
group "main" {
count = 1
network {
port "web_port" { to = 8080 }
port "ssh_port" {
static = 2222
to = 2222
}
}
task "server" {
driver = "docker"
config {
image = "superboum/amd64_bagage:v11"
readonly_rootfs = false
volumes = [
"secrets/id_rsa:/id_rsa"
]
ports = [ "web_port", "ssh_port" ]
}
env {
BAGAGE_LDAP_ENDPOINT = "bottin2.service.2.cluster.deuxfleurs.fr:389"
}
resources {
memory = 500
}
template {
data = "{{ key \"secrets/bagage/id_rsa\" }}"
destination = "secrets/id_rsa"
}
service {
name = "bagage-ssh"
port = "ssh_port"
address_mode = "host"
tags = [
"bagage",
"(diplonat (tcp_port 2222))"
]
}
service {
name = "bagage-webdav"
tags = [
"bagage",
"traefik.enable=true",
"traefik.frontend.entryPoints=https,http",
"traefik.frontend.rule=Host:bagage.deuxfleurs.fr",
"tricot bagage.deuxfleurs.fr",
]
port = "web_port"
address_mode = "host"
check {
type = "tcp"
port = "web_port"
address_mode = "host"
interval = "60s"
timeout = "5s"
check_restart {
limit = 3
grace = "90s"
ignore_warnings = false
}
}
}
}
}
}


@ -1 +0,0 @@
CMD ssh-keygen -q -f >(cat) -N "" <<< y 2>/dev/null 1>&2 ; true

8
app/build/README.md Normal file

@ -0,0 +1,8 @@
## How to upgrade our packaged apps to a new version?
1. Edit `docker-compose.yml`
2. Change the `VERSION` variable to the desired version
3. Increment the docker image tag by 1 (eg: superboum/riot:v13 -> superboum/riot:v14)
4. Run `docker-compose build`
5. Run `docker-compose push`
6. Done
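For example, restricted to a single service such as `riot` (defined in `docker-compose.yml`), steps 4 and 5 come down to something like:
```bash
# after editing docker-compose.yml (new VERSION build arg, image tag bumped by one)
docker-compose build riot
docker-compose push riot
```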


@ -6,15 +6,16 @@ ENV CGO_ENABLED=0 GOOS=linux GOARCH=amd64
WORKDIR /tmp/alps
RUN git init && \
git remote add origin https://git.deuxfleurs.fr/Deuxfleurs/alps.git && \
git remote add origin https://git.sr.ht/~migadu/alps && \
git fetch --depth 1 origin ${VERSION} && \
git checkout FETCH_HEAD
RUN go build -a -o /usr/local/bin/alps ./cmd/alps
COPY skipverify.patch skipverify.patch
RUN git apply skipverify.patch && \
go build -a -o /usr/local/bin/alps ./cmd/alps
FROM scratch
COPY --from=builder /usr/local/bin/alps /alps
COPY --from=builder /tmp/alps/themes /themes
COPY --from=builder /tmp/alps/plugins /plugins
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
ENTRYPOINT ["/alps"]


@ -0,0 +1,55 @@
From 47765c10f1af2013556f76dc63dfa056167ae5e8 Mon Sep 17 00:00:00 2001
From: Quentin <quentin@deuxfleurs.fr>
Date: Fri, 4 Dec 2020 13:19:24 +0100
Subject: [PATCH] Skip CA verification
---
imap.go | 3 ++-
smtp.go | 3 ++-
2 files changed, 4 insertions(+), 2 deletions(-)
diff --git a/imap.go b/imap.go
index 7554331..1a4931d 100644
--- a/imap.go
+++ b/imap.go
@@ -3,6 +3,7 @@ package alps
import (
"fmt"
+ "crypto/tls"
"github.com/emersion/go-imap"
imapclient "github.com/emersion/go-imap/client"
"github.com/emersion/go-message/charset"
@@ -16,7 +17,7 @@ func (s *Server) dialIMAP() (*imapclient.Client, error) {
var c *imapclient.Client
var err error
if s.imap.tls {
- c, err = imapclient.DialTLS(s.imap.host, nil)
+ c, err = imapclient.DialTLS(s.imap.host, &tls.Config{InsecureSkipVerify: true})
if err != nil {
return nil, fmt.Errorf("failed to connect to IMAPS server: %v", err)
}
diff --git a/smtp.go b/smtp.go
index 5e178f2..8d22f1d 100644
--- a/smtp.go
+++ b/smtp.go
@@ -3,6 +3,7 @@ package alps
import (
"fmt"
+ "crypto/tls"
"github.com/emersion/go-smtp"
)
@@ -14,7 +15,7 @@ func (s *Server) dialSMTP() (*smtp.Client, error) {
var c *smtp.Client
var err error
if s.smtp.tls {
- c, err = smtp.DialTLS(s.smtp.host, nil)
+ c, err = smtp.DialTLS(s.smtp.host, &tls.Config{InsecureSkipVerify: true})
if err != nil {
return nil, fmt.Errorf("failed to connect to SMTPS server: %v", err)
}
--
2.28.0


@ -0,0 +1,16 @@
FROM amd64/debian:stretch as builder
COPY ./quentin.dufour.io/Gemfile /root/quentin.dufour.io/Gemfile
WORKDIR /root/quentin.dufour.io
RUN apt-get update && \
apt-get install -y ruby-dev gem build-essential bundler zlib1g-dev libxml2-dev && \
bundle install
COPY ./quentin.dufour.io/ /root/quentin.dufour.io/
RUN bundle exec jekyll build
FROM superboum/amd64_webserver:v2
COPY --from=builder /root/quentin.dufour.io/_site /srv/http


@ -0,0 +1 @@
sudo docker build -t superboum/amd64_blog:v19 .


@ -0,0 +1,8 @@
FROM amd64/debian:buster
RUN apt-get update && \
apt-get dist-upgrade -y && \
apt-get install -y \
coturn
CMD ["/usr/bin/turnserver"]


@ -0,0 +1,17 @@
## Build the image
```
sudo docker build -t registry.gitlab.com/superboum/ankh-morpork/amd64_coturn:v1 .
```
## Run bash in the container
```
sudo docker run --rm -t -i registry.gitlab.com/superboum/ankh-morpork/amd64_coturn:v1 bash
sudo docker run --rm -t -i -p 3478:3478/udp -p 3479:3479/udp -p 3478:3478/tcp -p 3479:3479/tcp registry.gitlab.com/superboum/ankh-morpork/amd64_coturn:v1
```
## Used ports
- udp/tcp 3478 3479
## Publish
```
sudo docker push registry.gitlab.com/superboum/ankh-morpork/amd64_coturn:v1
```


@ -0,0 +1,84 @@
version: '3.4'
services:
mariadb:
build:
context: ./mariadb
args:
VERSION: 4 # fake for now
image: superboum/amd64_mariadb:v4
# Instant Messaging
riot:
build:
context: ./riotweb
args:
# https://github.com/vector-im/riot-web/releases
VERSION: 1.7.14
image: particallydone/amd64_riotweb:v18
synapse:
build:
context: ./matrix-synapse
args:
# https://github.com/matrix-org/synapse/releases
VERSION: 1.24.0
image: particallydone/amd64_synapse:v39
# Email
sogo:
build:
context: ./sogo
args:
# fake for now
VERSION: 5.0.0
image: superboum/amd64_sogo:v7
alps:
build:
context: ./alps
args:
VERSION: 5cef0aaff2b8b6ee3e00b566123517e241d8cfb8
image: superboum/amd64_alps:v1
# VoIP
jitsi-meet:
build:
context: ./jitsi-meet
args:
# https://github.com/jitsi/jitsi-meet
PREFIXV: stable/jitsi-meet_
VERSION: 4966
image: superboum/amd64_jitsi_meet:v1
jitsi-conference-focus:
build:
context: ./jitsi-conference-focus
args:
# https://github.com/jitsi/jicofo
PREFIXV: stable/jitsi-meet_
VERSION: 4966
image: superboum/amd64_jitsi_conference_focus:v5
jitsi-videobridge:
build:
context: ./jitsi-videobridge
args:
# https://github.com/jitsi/jitsi-videobridge
PREFIXV: stable/jitsi-meet_
VERSION: 4966
image: superboum/amd64_jitsi_videobridge:v15
jitsi-xmpp:
build:
context: ./jitsi-xmpp
args:
VERSION: fake-1
image: superboum/amd64_jitsi_xmpp:v4
plume:
build:
context: ./plume
args:
VERSION: 0cd26dfbf4ab7be467325ed77230cf371147a98e
image: superboum/plume:v1


@ -1,4 +1,4 @@
FROM amd64/debian:bullseye
FROM amd64/debian:stretch
RUN apt-get update && \
apt-get install -y \
@ -11,6 +11,7 @@ RUN apt-get update && \
dovecot-lmtpd && \
rm -rf /etc/dovecot/*
RUN useradd mailstore
COPY ./conf/* /etc/dovecot/
COPY entrypoint.sh /usr/local/bin/entrypoint
ENTRYPOINT ["/usr/local/bin/entrypoint"]


@ -19,7 +19,10 @@ service auth {
}
}
passdb {
args = /etc/dovecot/dovecot-ldap.conf
driver = ldap
}
service lmtp {
inet_listener lmtp {
@ -28,23 +31,7 @@ service lmtp {
}
}
# https://doc.dovecot.org/configuration_manual/authentication/ldap_authentication/
passdb {
args = /etc/dovecot/dovecot-ldap.conf
driver = ldap
}
userdb {
driver = prefetch
}
userdb {
args = /etc/dovecot/dovecot-ldap.conf
driver = ldap
}
service imap-login {
service_count = 0 # performance mode. set to 1 for secure mode
process_min_avail = 1
inet_listener imap {
port = 143
}
@ -53,6 +40,11 @@ service imap-login {
}
}
userdb {
args = uid=mailstore gid=mailstore home=/var/mail/%u
driver = static
}
protocol imap {
mail_plugins = $mail_plugins imap_sieve
}


@ -0,0 +1,27 @@
FROM debian:buster AS builder
ARG PREFIXV
ARG VERSION
RUN apt-get update && \
apt-get install -y openjdk-11-jdk maven wget unzip && \
wget https://github.com/jitsi/jicofo/archive/${PREFIXV}${VERSION}.zip -O jicofo.zip
RUN unzip jicofo.zip && \
mv jicofo*${VERSION} jicofo && \
cd jicofo && \
mvn package -DskipTests -Dassembly.skipAssembly=false && \
unzip target/jicofo-1.1-SNAPSHOT-archive.zip && \
mv jicofo-1.1-SNAPSHOT /srv/build
FROM debian:buster
RUN apt-get update && \
apt-get install -y openjdk-11-jre-headless ca-certificates
ENV JAVA_SYS_PROPS="-Dnet.java.sip.communicator.SC_HOME_DIR_LOCATION=/root -Dnet.java.sip.communicator.SC_HOME_DIR_NAME=.sip-communicator -Dnet.java.sip.communicator.SC_LOG_DIR_LOCATION=/var/log/jitsi"
COPY --from=builder /srv/build /srv/jicofo
COPY jicofo /usr/local/bin/jicofo
COPY sip-communicator.properties /root/.sip-communicator/sip-communicator.properties
CMD ["/usr/local/bin/jicofo"]


@ -0,0 +1,16 @@
#!/bin/bash
cp ${JITSI_CERTS_FOLDER}/auth.jitsi.deuxfleurs.fr.crt /usr/local/share/ca-certificates/auth.jitsi.deuxfleurs.fr.crt
update-ca-certificates -f
cat >> /etc/hosts <<EOF
${JITSI_PROSODY_HOST} jitsi.deuxfleurs.fr conference.jitsi.deuxfleurs.fr jitsi-videobridge.jitsi.deuxfleurs.fr focus.jitsi.deuxfleurs.fr auth.jitsi.deuxfleurs.fr
127.0.0.1 `hostname`
EOF
/srv/jicofo/jicofo.sh \
--host=${JITSI_PROSODY_HOST} \
--domain=jitsi.deuxfleurs.fr \
--secret=${JITSI_SECRET_JICOFO_COMPONENT} \
--user_domain=auth.jitsi.deuxfleurs.fr \
--user_password=${JITSI_SECRET_JICOFO_USER}


@ -0,0 +1,2 @@
org.jitsi.jicofo.SHORT_ID=1
org.jitsi.jicofo.BRIDGE_MUC=JvbBrewery@internal.auth.jitsi.deuxfleurs.fr


@ -0,0 +1,28 @@
FROM debian:buster AS builder
ARG PREFIXV
ARG VERSION
RUN apt-get update && \
apt-get install -y curl && \
curl -sL https://deb.nodesource.com/setup_14.x | bash - && \
apt-get install -y git nodejs make wget unzip && \
wget https://github.com/jitsi/jitsi-meet/archive/${PREFIXV}${VERSION}.zip -O jitsi-meet.zip
RUN unzip jitsi-meet.zip && \
mv jitsi-meet-*${VERSION} jitsi-meet && \
cd jitsi-meet && \
npm install && \
make
FROM debian:buster
COPY --from=builder /jitsi-meet /srv/jitsi-meet
RUN apt-get update && \
apt-get install -y nginx && \
rm /etc/nginx/sites-enabled/*
COPY config.js /srv/jitsi-meet/config.js
COPY entrypoint.sh /usr/local/bin/entrypoint
ENTRYPOINT ["/usr/local/bin/entrypoint"]
CMD ["/usr/sbin/nginx", "-g", "daemon off;"]


@ -0,0 +1,517 @@
/* eslint-disable no-unused-vars, no-var */
var config = {
// Connection
//
hosts: {
// XMPP domain.
domain: 'jitsi.deuxfleurs.fr',
// When using authentication, domain for guest users.
// anonymousdomain: 'guest.example.com',
// Domain for authenticated users. Defaults to <domain>.
// authdomain: 'jitsi-meet.example.com',
// Jirecon recording component domain.
// jirecon: 'jirecon.jitsi-meet.example.com',
// Call control component (Jigasi).
// call_control: 'callcontrol.jitsi-meet.example.com',
// Focus component domain. Defaults to focus.<domain>.
// focus: 'focus.jitsi-meet.example.com',
// XMPP MUC domain. FIXME: use XEP-0030 to discover it.
muc: 'conference.jitsi.deuxfleurs.fr'
},
// BOSH URL. FIXME: use XEP-0156 to discover it.
bosh: '//jitsi.deuxfleurs.fr/http-bind',
// Websocket URL
// websocket: 'wss://jitsi-meet.example.com/xmpp-websocket',
// The name of client node advertised in XEP-0115 'c' stanza
clientNode: 'http://jitsi.org/jitsimeet',
// The real JID of focus participant - can be overridden here
// focusUserJid: 'focus@auth.jitsi-meet.example.com',
// Testing / experimental features.
//
testing: {
// Enables experimental simulcast support on Firefox.
enableFirefoxSimulcast: false,
// P2P test mode disables automatic switching to P2P when there are 2
// participants in the conference.
p2pTestMode: false
// Enables the test specific features consumed by jitsi-meet-torture
// testMode: false
// Disables the auto-play behavior of *all* newly created video element.
// This is useful when the client runs on a host with limited resources.
// noAutoPlayVideo: false
},
// Disables ICE/UDP by filtering out local and remote UDP candidates in
// signalling.
// webrtcIceUdpDisable: false,
// Disables ICE/TCP by filtering out local and remote TCP candidates in
// signalling.
// webrtcIceTcpDisable: false,
// Media
//
// Audio
// Disable measuring of audio levels.
// disableAudioLevels: false,
// audioLevelsInterval: 200,
// Enabling this will run the lib-jitsi-meet no audio detection module which
// will notify the user if the current selected microphone has no audio
// input and will suggest another valid device if one is present.
enableNoAudioDetection: true,
// Enabling this will run the lib-jitsi-meet noise detection module which will
// notify the user if there is noise, other than voice, coming from the current
// selected microphone. The purpose it to let the user know that the input could
// be potentially unpleasant for other meeting participants.
enableNoisyMicDetection: true,
// Start the conference in audio only mode (no video is being received nor
// sent).
// startAudioOnly: false,
// Every participant after the Nth will start audio muted.
// startAudioMuted: 10,
// Start calls with audio muted. Unlike the option above, this one is only
// applied locally. FIXME: having these 2 options is confusing.
// startWithAudioMuted: false,
// Enabling it (with #params) will disable local audio output of remote
// participants and to enable it back a reload is needed.
// startSilent: false
// Video
// Sets the preferred resolution (height) for local video. Defaults to 720.
resolution: 480,
// w3c spec-compliant video constraints to use for video capture. Currently
// used by browsers that return true from lib-jitsi-meet's
// util#browser#usesNewGumFlow. The constraints are independency from
// this config's resolution value. Defaults to requesting an ideal aspect
// ratio of 16:9 with an ideal resolution of 720.
constraints: {
video: {
aspectRatio: 16 / 9,
height: {
ideal: 480,
max: 720,
min: 240
}
}
},
// Enable / disable simulcast support.
// disableSimulcast: false,
// Enable / disable layer suspension. If enabled, endpoints whose HD
// layers are not in use will be suspended (no longer sent) until they
// are requested again.
// enableLayerSuspension: false,
// Every participant after the Nth will start video muted.
// startVideoMuted: 10,
// Start calls with video muted. Unlike the option above, this one is only
// applied locally. FIXME: having these 2 options is confusing.
// startWithVideoMuted: false,
// If set to true, prefer to use the H.264 video codec (if supported).
// Note that it's not recommended to do this because simulcast is not
// supported when using H.264. For 1-to-1 calls this setting is enabled by
// default and can be toggled in the p2p section.
// preferH264: true,
// If set to true, disable H.264 video codec by stripping it out of the
// SDP.
// disableH264: false,
// Desktop sharing
// The ID of the jidesha extension for Chrome.
desktopSharingChromeExtId: null,
// Whether desktop sharing should be disabled on Chrome.
// desktopSharingChromeDisabled: false,
// The media sources to use when using screen sharing with the Chrome
// extension.
desktopSharingChromeSources: [ 'screen', 'window', 'tab' ],
// Required version of Chrome extension
desktopSharingChromeMinExtVersion: '0.1',
// Whether desktop sharing should be disabled on Firefox.
// desktopSharingFirefoxDisabled: false,
// Optional desktop sharing frame rate options. Default value: min:5, max:5.
// desktopSharingFrameRate: {
// min: 5,
// max: 5
// },
// Try to start calls with screen-sharing instead of camera video.
// startScreenSharing: false,
// Recording
// Whether to enable file recording or not.
// fileRecordingsEnabled: false,
// Enable the dropbox integration.
// dropbox: {
// appKey: '<APP_KEY>' // Specify your app key here.
// // A URL to redirect the user to, after authenticating
// // by default uses:
// // 'https://jitsi-meet.example.com/static/oauth.html'
// redirectURI:
// 'https://jitsi-meet.example.com/subfolder/static/oauth.html'
// },
// When integrations like dropbox are enabled only that will be shown,
// by enabling fileRecordingsServiceEnabled, we show both the integrations
// and the generic recording service (its configuration and storage type
// depends on jibri configuration)
// fileRecordingsServiceEnabled: false,
// Whether to show the possibility to share file recording with other people
// (e.g. meeting participants), based on the actual implementation
// on the backend.
// fileRecordingsServiceSharingEnabled: false,
// Whether to enable live streaming or not.
// liveStreamingEnabled: false,
// Transcription (in interface_config,
// subtitles and buttons can be configured)
// transcribingEnabled: false,
// Enables automatic turning on captions when recording is started
// autoCaptionOnRecord: false,
// Misc
// Default value for the channel "last N" attribute. -1 for unlimited.
channelLastN: -1,
// Disables or enables RTX (RFC 4588) (defaults to false).
// disableRtx: false,
// Disables or enables TCC (the default is in Jicofo and set to true)
// (draft-holmer-rmcat-transport-wide-cc-extensions-01). This setting
// affects congestion control, it practically enables send-side bandwidth
// estimations.
// enableTcc: true,
// Disables or enables REMB (the default is in Jicofo and set to false)
// (draft-alvestrand-rmcat-remb-03). This setting affects congestion
// control, it practically enables recv-side bandwidth estimations. When
// both TCC and REMB are enabled, TCC takes precedence. When both are
// disabled, then bandwidth estimations are disabled.
// enableRemb: false,
// Defines the minimum number of participants to start a call (the default
// is set in Jicofo and set to 2).
// minParticipants: 2,
// Use XEP-0215 to fetch STUN and TURN servers.
// useStunTurn: true,
// Enable IPv6 support.
// useIPv6: true,
// Enables / disables a data communication channel with the Videobridge.
// Values can be 'datachannel', 'websocket', true (treat it as
// 'datachannel'), undefined (treat it as 'datachannel') and false (don't
// open any channel).
// openBridgeChannel: true,
// UI
//
// Use display name as XMPP nickname.
// useNicks: false,
// Require users to always specify a display name.
// requireDisplayName: true,
// Whether to use a welcome page or not. In case it's false a random room
// will be joined when no room is specified.
enableWelcomePage: true,
// Enabling the close page will ignore the welcome page redirection when
// a call is hangup.
// enableClosePage: false,
// Disable hiding of remote thumbnails when in a 1-on-1 conference call.
// disable1On1Mode: false,
// Default language for the user interface.
defaultLanguage: 'fr',
// If true all users without a token will be considered guests and all users
// with token will be considered non-guests. Only guests will be allowed to
// edit their profile.
enableUserRolesBasedOnToken: false,
// Whether or not some features are checked based on token.
// enableFeaturesBasedOnToken: false,
// Enable lock room for all moderators, even when userRolesBasedOnToken is enabled and participants are guests.
// lockRoomGuestEnabled: false,
// When enabled the password used for locking a room is restricted to up to the number of digits specified
// roomPasswordNumberOfDigits: 10,
// default: roomPasswordNumberOfDigits: false,
// Message to show the users. Example: 'The service will be down for
// maintenance at 01:00 AM GMT,
// noticeMessage: '',
// Enables calendar integration, depends on googleApiApplicationClientID
// and microsoftApiApplicationClientID
// enableCalendarIntegration: false,
// Stats
//
// Whether to enable stats collection or not in the TraceablePeerConnection.
// This can be useful for debugging purposes (post-processing/analysis of
// the webrtc stats) as it is done in the jitsi-meet-torture bandwidth
// estimation tests.
// gatherStats: false,
// The interval at which PeerConnection.getStats() is called. Defaults to 10000
// pcStatsInterval: 10000,
// To enable sending statistics to callstats.io you must provide the
// Application ID and Secret.
// callStatsID: '',
// callStatsSecret: '',
// enables sending participants display name to callstats
// enableDisplayNameInStats: false
// enables sending participants email if available to callstats and other analytics
// enableEmailInStats: false
// Privacy
//
// If third party requests are disabled, no other server will be contacted.
// This means avatars will be locally generated and callstats integration
// will not function.
// disableThirdPartyRequests: false,
// Peer-To-Peer mode: used (if enabled) when there are just 2 participants.
//
p2p: {
// Enables peer to peer mode. When enabled the system will try to
// establish a direct connection when there are exactly 2 participants
// in the room. If that succeeds the conference will stop sending data
// through the JVB and use the peer to peer connection instead. When a
// 3rd participant joins the conference will be moved back to the JVB
// connection.
enabled: true,
// Use XEP-0215 to fetch STUN and TURN servers.
// useStunTurn: true,
// The STUN servers that will be used in the peer to peer connections
stunServers: [
// { urls: 'stun:jitsi-meet.example.com:443' },
{ urls: 'stun:stun.l.google.com:19302' },
{ urls: 'stun:stun1.l.google.com:19302' },
{ urls: 'stun:stun2.l.google.com:19302' }
],
// Sets the ICE transport policy for the p2p connection. At the time
// of this writing the list of possible values are 'all' and 'relay',
// but that is subject to change in the future. The enum is defined in
// the WebRTC standard:
// https://www.w3.org/TR/webrtc/#rtcicetransportpolicy-enum.
// If not set, the effective value is 'all'.
// iceTransportPolicy: 'all',
// If set to true, it will prefer to use H.264 for P2P calls (if H.264
// is supported).
preferH264: true,
// If set to true, disable H.264 video codec by stripping it out of the
// SDP.
// disableH264: false,
// How long we're going to wait, before going back to P2P after the 3rd
// participant has left the conference (to filter out page reload).
backToP2PDelay: 60
},
analytics: {
// The Google Analytics Tracking ID:
// googleAnalyticsTrackingId: 'your-tracking-id-UA-123456-1'
// The Amplitude APP Key:
// amplitudeAPPKey: '<APP_KEY>'
// Array of script URLs to load as lib-jitsi-meet "analytics handlers".
// scriptURLs: [
// "libs/analytics-ga.min.js", // google-analytics
// "https://example.com/my-custom-analytics.js"
// ],
},
// Information about the jitsi-meet instance we are connecting to, including
// the user region as seen by the server.
deploymentInfo: {
// shard: "shard1",
// region: "europe",
// userRegion: "asia"
}
// Information for the chrome extension banner
// chromeExtensionBanner: {
// // The chrome extension to be installed address
// url: 'https://chrome.google.com/webstore/detail/jitsi-meetings/kglhbbefdnlheedjiejgomgmfplipfeb',
// // Extensions info which allows checking if they are installed or not
// chromeExtensionsInfo: [
// {
// id: 'kglhbbefdnlheedjiejgomgmfplipfeb',
// path: 'jitsi-logo-48x48.png'
// }
// ]
// }
// Local Recording
//
// localRecording: {
// Enables local recording.
// Additionally, 'localrecording' (all lowercase) needs to be added to
// TOOLBAR_BUTTONS in interface_config.js for the Local Recording
// button to show up on the toolbar.
//
// enabled: true,
//
// The recording format, can be one of 'ogg', 'flac' or 'wav'.
// format: 'flac'
//
// }
// Options related to end-to-end (participant to participant) ping.
// e2eping: {
// // The interval in milliseconds at which pings will be sent.
// // Defaults to 10000, set to <= 0 to disable.
// pingInterval: 10000,
//
// // The interval in milliseconds at which analytics events
// // with the measured RTT will be sent. Defaults to 60000, set
// // to <= 0 to disable.
// analyticsInterval: 60000,
// }
// If set, will attempt to use the provided video input device label when
// triggering a screenshare, instead of proceeding through the normal flow
// for obtaining a desktop stream.
// NOTE: This option is experimental and is currently intended for internal
// use only.
// _desktopSharingSourceDevice: 'sample-id-or-label'
// If true, any checks to handoff to another application will be prevented
// and instead the app will continue to display in the current browser.
// disableDeepLinking: false
// A property to disable the right click context menu for localVideo
// the menu has option to flip the locally seen video for local presentations
// disableLocalVideoFlip: false
// Deployment specific URLs.
// deploymentUrls: {
// // If specified a 'Help' button will be displayed in the overflow menu with a link to the specified URL for
// // user documentation.
// userDocumentationURL: 'https://docs.example.com/video-meetings.html',
// // If specified a 'Download our apps' button will be displayed in the overflow menu with a link
// // to the specified URL for an app download page.
// downloadAppsUrl: 'https://docs.example.com/our-apps.html'
// }
// List of undocumented settings used in jitsi-meet
/**
_immediateReloadThreshold
autoRecord
autoRecordToken
debug
debugAudioLevels
deploymentInfo
dialInConfCodeUrl
dialInNumbersUrl
dialOutAuthUrl
dialOutCodesUrl
disableRemoteControl
displayJids
etherpad_base
externalConnectUrl
firefox_fake_device
googleApiApplicationClientID
iAmRecorder
iAmSipGateway
microsoftApiApplicationClientID
peopleSearchQueryTypes
peopleSearchUrl
requireDisplayName
tokenAuthUrl
*/
// List of undocumented settings used in lib-jitsi-meet
/**
_peerConnStatusOutOfLastNTimeout
_peerConnStatusRtcMuteTimeout
abTesting
avgRtpStatsN
callStatsConfIDNamespace
callStatsCustomScriptUrl
desktopSharingSources
disableAEC
disableAGC
disableAP
disableHPF
disableNS
enableLipSync
enableTalkWhileMuted
forceJVB121Ratio
hiddenDomain
ignoreStartMuted
nick
startBitrate
*/
};
/* eslint-enable no-unused-vars, no-var */


@ -0,0 +1,38 @@
#!/bin/bash
cat > /etc/nginx/sites-available/jitsi <<EOF
server_names_hash_bucket_size 64;
server {
listen 0.0.0.0:443 ssl http2 default_server;
listen [::]:443 ssl http2 default_server;
server_name _;
ssl_certificate ${JITSI_CERTS_FOLDER}/jitsi.deuxfleurs.fr.crt;
ssl_certificate_key ${JITSI_CERTS_FOLDER}/jitsi.deuxfleurs.fr.key;
root /srv/jitsi-meet;
index index.html;
location ~ ^/([a-zA-Z0-9=\?]+)$ {
rewrite ^/(.*)$ / break;
}
location / {
ssi on;
}
# BOSH, Bidirectional-streams Over Synchronous HTTP
# https://en.wikipedia.org/wiki/BOSH_(protocol)
location /http-bind {
proxy_pass http://${JITSI_PROSODY_BOSH_HOST}:${JITSI_PROSODY_BOSH_PORT}/http-bind;
proxy_set_header X-Forwarded-For \$remote_addr;
proxy_set_header Host \$http_host;
}
# external_api.js must be accessible from the root of the
# installation for the electron version of Jitsi Meet to work
# https://github.com/jitsi/jitsi-meet-electron
location /external_api.js {
alias /srv/jitsi-meet/libs/external_api.min.js;
}
}
EOF
ln -sf /etc/nginx/sites-available/jitsi /etc/nginx/sites-enabled/jitsi
exec "$@"


@ -0,0 +1,30 @@
FROM debian:buster AS builder
ARG PREFIXV
ARG VERSION
RUN apt-get update && \
apt-get install -y wget unzip maven openjdk-11-jdk && \
wget https://github.com/jitsi/jitsi-videobridge/archive/${PREFIXV}${VERSION}.zip -O jvb.zip
RUN unzip jvb.zip && \
mv jitsi-videobridge*${VERSION} jvb && \
cd jvb && \
mvn package -DskipTests && \
ls jvb/target && \
unzip jvb/target/jitsi-videobridge*.zip && \
mv jitsi-videobridge-*-SNAPSHOT build
FROM debian:buster
RUN apt-get update && \
apt-get install -y openjdk-11-jre-headless
COPY --from=builder /jvb/build /srv/jvb
ENV HOME=/root
WORKDIR /root
COPY jvb_run /usr/local/bin/jvb_run
ENV JAVA_SYS_PROPS="-Dnet.java.sip.communicator.SC_HOME_DIR_LOCATION=/root -Dnet.java.sip.communicator.SC_HOME_DIR_NAME=.sip-communicator -Dnet.java.sip.communicator.SC_LOG_DIR_LOCATION=/var/log/jitsi"
CMD ["/usr/local/bin/jvb_run"]


@ -0,0 +1,54 @@
#!/bin/bash
cat >> /etc/hosts <<EOF
${JITSI_PROSODY_HOST} jitsi.deuxfleurs.fr conference.jitsi.deuxfleurs.fr jitsi-videobridge.jitsi.deuxfleurs.fr focus.jitsi.deuxfleurs.fr auth.jitsi.deuxfleurs.fr
127.0.0.1 `hostname`
EOF
mkdir -p /root/.sip-communicator
cat > /root/.sip-communicator/sip-communicator.properties <<EOF
# Enable broadcasting stats/presence in a MUC
org.jitsi.videobridge.ENABLE_STATISTICS=true
org.jitsi.videobridge.STATISTICS_TRANSPORT=muc
# Connect to the first XMPP server
org.jitsi.videobridge.xmpp.user.shard.HOSTNAME=jitsi.deuxfleurs.fr
org.jitsi.videobridge.xmpp.user.shard.DOMAIN=auth.jitsi.deuxfleurs.fr
org.jitsi.videobridge.xmpp.user.shard.USERNAME=jvb
org.jitsi.videobridge.xmpp.user.shard.PASSWORD=${JITSI_SECRET_VIDEOBRIDGE}
org.jitsi.videobridge.xmpp.user.shard.MUC_JIDS=JvbBrewery@internal.auth.jitsi.deuxfleurs.fr
org.jitsi.videobridge.xmpp.user.shard.MUC=JvbBrewery@internal.auth.jitsi.deuxfleurs.fr
org.jitsi.videobridge.xmpp.user.shard.MUC_NICKNAME=singleton
org.jitsi.videobridge.xmpp.user.shard.DISABLE_CERTIFICATE_VERIFICATION=true
# Do we need it? @FIXME
org.jitsi.impl.neomedia.transform.srtp.SRTPCryptoContext.checkReplay=false
# NAT things, two times just in case...
org.ice4j.ice.harvest.TCP_HARVESTER_PORT=${JITSI_VIDEO_TCP}
org.ice4j.ice.harvest.NAT_HARVESTER_LOCAL_ADDRESS=${JITSI_NAT_LOCAL_IP}
org.ice4j.ice.harvest.NAT_HARVESTER_PUBLIC_ADDRESS=${JITSI_NAT_PUBLIC_IP}
org.jitsi.videobridge.TCP_HARVESTER_PORT=${JITSI_VIDEO_TCP}
org.jitsi.videobridge.NAT_HARVESTER_LOCAL_ADDRESS=${JITSI_NAT_LOCAL_IP}
org.jitsi.videobridge.NAT_HARVESTER_PUBLIC_ADDRESS=${JITSI_NAT_PUBLIC_IP}
org.jitsi.videobridge.DISABLE_TCP_HARVESTER=false
EOF
[ -v JITSI_DEBUG ] && cat >> /root/.sip-communicator/sip-communicator.properties <<EOF
net.java.sip.communicator.packetlogging.PACKET_LOGGING_ENABLED=true
net.java.sip.communicator.packetlogging.PACKET_LOGGING_ARBITRARY_ENABLED=true
net.java.sip.communicator.packetlogging.PACKET_LOGGING_SIP_ENABLED=true
net.java.sip.communicator.packetlogging.PACKET_LOGGING_JABBER_ENABLED=true
net.java.sip.communicator.packetlogging.PACKET_LOGGING_RTP_ENABLED=true
net.java.sip.communicator.packetlogging.PACKET_LOGGING_ICE4j_ENABLED=true
net.java.sip.communicator.packetlogging.PACKET_LOGGING_FILE_COUNT=1
net.java.sip.communicator.packetlogging.PACKET_LOGGING_FILE_SIZE=-1
EOF
/srv/jvb/jvb.sh \
--host=${JITSI_PROSODY_HOST} \
--domain=jitsi.deuxfleurs.fr \
--port=5347 \
--secret=${JITSI_SECRET_VIDEOBRIDGE} \
--apis=xmpp,rest


@ -0,0 +1,11 @@
FROM debian:buster
RUN apt-get update && \
apt-get install -y prosody
COPY external_components.cfg.lua /etc/prosody/conf.d/external_components.cfg.lua
COPY xmpp_conf /usr/local/bin/xmpp_conf
COPY xmpp_gen /usr/local/bin/xmpp_gen
COPY xmpp_run /usr/local/bin/xmpp_run
CMD ["/usr/local/bin/xmpp_run"]

View file

@ -0,0 +1,2 @@
component_ports = { 5347 }
component_interface = "0.0.0.0"

47
app/build/jitsi-xmpp/xmpp_conf Executable file

@ -0,0 +1,47 @@
#!/bin/bash
cat >> /etc/hosts <<EOF
${JITSI_PROSODY_HOST} jitsi.deuxfleurs.fr conference.jitsi.deuxfleurs.fr jitsi-videobridge.jitsi.deuxfleurs.fr focus.jitsi.deuxfleurs.fr auth.jitsi.deuxfleurs.fr
127.0.0.1 `hostname`
EOF
mkdir -p /etc/prosody/conf.{d,avail}/
cat > /etc/prosody/conf.avail/jitsi.deuxfleurs.fr.cfg.lua <<EOF
VirtualHost "jitsi.deuxfleurs.fr"
authentication = "anonymous"
ssl = {
key = "/var/lib/prosody/jitsi.deuxfleurs.fr.key";
certificate = "/var/lib/prosody/jitsi.deuxfleurs.fr.crt";
}
modules_enabled = {
"bosh";
"pubsub";
}
c2s_require_encryption = false
VirtualHost "auth.jitsi.deuxfleurs.fr"
ssl = {
key = "/var/lib/prosody/auth.jitsi.deuxfleurs.fr.key";
certificate = "/var/lib/prosody/auth.jitsi.deuxfleurs.fr.crt";
}
authentication = "internal_plain"
admins = { "focus@auth.jitsi.deuxfleurs.fr"}
Component "conference.jitsi.deuxfleurs.fr" "muc"
Component "internal.auth.jitsi.deuxfleurs.fr" "muc"
storage = "memory"
modules_enabled = { "ping"; }
admins = { "focus@auth.jitsi.deuxfleurs.fr", "jvb@auth.jitsi.deuxfleurs.fr" }
Component "jitsi-videobridge.jitsi.deuxfleurs.fr"
component_secret = "${JITSI_SECRET_VIDEOBRIDGE}"
Component "focus.jitsi.deuxfleurs.fr"
component_secret = "${JITSI_SECRET_JICOFO_COMPONENT}"
EOF
ln -sf \
/etc/prosody/conf.avail/jitsi.deuxfleurs.fr.cfg.lua \
/etc/prosody/conf.d/jitsi.deuxfleurs.fr.cfg.lua

9
app/build/jitsi-xmpp/xmpp_gen Executable file

@ -0,0 +1,9 @@
#!/bin/bash
/usr/local/bin/xmpp_conf
prosodyctl cert generate jitsi.deuxfleurs.fr
prosodyctl cert generate auth.jitsi.deuxfleurs.fr
cp /var/lib/prosody/*.crt ${JITSI_CERTS_FOLDER}
cp /var/lib/prosody/*.key ${JITSI_CERTS_FOLDER}

20
app/build/jitsi-xmpp/xmpp_run Executable file

@ -0,0 +1,20 @@
#!/bin/bash
/usr/local/bin/xmpp_conf
cp ${JITSI_CERTS_FOLDER}/* /var/lib/prosody/
chown -R prosody:prosody /var/lib/prosody
mkdir -p /usr/local/share/ca-certificates/
ln -sf \
/var/lib/prosody/auth.jitsi.deuxfleurs.fr.crt \
/usr/local/share/ca-certificates/auth.jitsi.deuxfleurs.fr.crt
prosodyctl register focus auth.jitsi.deuxfleurs.fr ${JITSI_SECRET_JICOFO_USER}
prosodyctl register jvb auth.jitsi.deuxfleurs.fr ${JITSI_SECRET_VIDEOBRIDGE}
mkdir /run/prosody
touch /run/prosody/prosody.pid
chown -R prosody:prosody /run/prosody
cd /var/lib/prosody
su - prosody -s /bin/bash -c prosody


@ -0,0 +1,3 @@
```
docker build -t superboum/amd64_landing:v8 .
```


@ -0,0 +1,3 @@
[mariadb]
pam_use_cleartext_plugin
bind-address = 0.0.0.0


@ -0,0 +1,3 @@
[mariadb]
plugin-load=auth_pam.so


@ -0,0 +1,2 @@
[mysqld]
bind-address = *


@ -0,0 +1,14 @@
FROM debian:stretch
RUN apt-get update && \
apt-get dist-upgrade -y && \
DEBIAN_FRONTEND=noninteractive apt-get install -y mariadb-server mariadb-client libnss-ldapd
COPY 60-ldap.cnf /etc/mysql/mariadb.conf.d/60-ldap.cnf
COPY 60-remote.cnf /etc/mysql/mariadb.conf.d/60-remote.cnf
COPY 60-disable-dialog.cnf /etc/mysql/mariadb.conf.d/60-disable-dialog.cnf
COPY pam-mariadb /etc/pam.d/mariadb
COPY nsswitch.conf /etc/nsswitch.conf
COPY entrypoint.sh /usr/local/bin/entrypoint
ENTRYPOINT ["/usr/local/bin/entrypoint"]


@ -0,0 +1,19 @@
```
sudo docker build -t superboum/amd64_mariadb:v3 .
sudo docker run \
-t -i \
-p 3306:3306 \
-v /tmp/mysql:/var/lib/mysql \
-e LDAP_URI='ldap://bottin.service.2.cluster.deuxfleurs.fr' \
-e LDAP_BASE='ou=users,dc=deuxfleurs,dc=fr' \
-e LDAP_VERSION=3 \
-e LDAP_BIND_DN='cn=admin,dc=deuxfleurs,dc=fr' \
-e LDAP_BIND_PW='xxxx' \
-e MYSQL_PASSWORD='xxxx' \
superboum/amd64_mariadb:v1 \
tail -f /var/log/mysql/error.log
CREATE USER quentin@localhost IDENTIFIED VIA pam USING 'mariadb';
```

50
app/build/mariadb/entrypoint.sh Executable file

@ -0,0 +1,50 @@
#!/bin/bash
set -e
cat > /etc/nslcd.conf <<EOF
# /etc/nslcd.conf
# nslcd configuration file. See nslcd.conf(5)
# for details.
# The user and group nslcd should run as.
uid nslcd
gid nslcd
# The location at which the LDAP server(s) should be reachable.
uri ${LDAP_URI}
# The search base that will be used for all queries.
base ${LDAP_BASE}
# The LDAP protocol version to use.
ldap_version ${LDAP_VERSION}
# The DN to bind with for normal lookups.
binddn ${LDAP_BIND_DN}
bindpw ${LDAP_BIND_PW}
# The DN used for password modifications by root.
#rootpwmoddn cn=admin,dc=example,dc=com
# SSL options
#ssl off
#tls_reqcert never
tls_cacertfile /etc/ssl/certs/ca-certificates.crt
# The search scope.
#scope sub
EOF
/usr/sbin/nslcd
chown mysql:mysql /var/lib/mysql
[ -z "$(ls -A /var/lib/mysql)" ] && mysql_install_db --user=mysql --basedir=/usr --datadir=/var/lib/mysql
/usr/bin/mysqld_safe &
until ls /var/run/mysqld/mysqld.sock; do sleep 1; done
/usr/bin/mysqladmin -u root password ${MYSQL_PASSWORD} || true
exec "$@"


@ -0,0 +1,21 @@
# /etc/nsswitch.conf
#
# Example configuration of GNU Name Service Switch functionality.
# If you have the `glibc-doc-reference' and `info' packages installed, try:
# `info libc "Name Service Switch"' for information about this file.
passwd: files ldap
group: files ldap
shadow: files ldap
gshadow: files
hosts: files dns
networks: files
protocols: db files
services: db files
ethers: db files
rpc: db files
netgroup: nis


@ -0,0 +1,2 @@
auth required pam_ldap.so
account required pam_ldap.so


@ -1,7 +1,6 @@
FROM amd64/debian:buster as builder
ARG VERSION
ARG S3_VERSION
RUN apt-get update && \
apt-get -qq -y full-upgrade && \
apt-get install -y \
@ -19,14 +18,11 @@ RUN apt-get update && \
# postgresql-dev \
libpq-dev \
virtualenv \
libxslt1-dev \
libxslt1-dev && \
git && \
virtualenv /root/matrix-env -p /usr/bin/python3 && \
. /root/matrix-env/bin/activate && \
pip3 install \
https://github.com/matrix-org/synapse/archive/v${VERSION}.tar.gz#egg=matrix-synapse[matrix-synapse-ldap3,postgres,resources.consent,saml2,url_preview] && \
https://github.com/matrix-org/synapse/archive/v${VERSION}.tar.gz#egg=matrix-synapse[matrix-synapse-ldap3,postgres,resources.consent,saml2,url_preview]
pip3 install \
git+https://github.com/matrix-org/synapse-s3-storage-provider.git@${S3_VERSION}
FROM amd64/debian:buster
@ -46,7 +42,6 @@ RUN apt-get update && \
ENV LD_PRELOAD /usr/lib/x86_64-linux-gnu/libjemalloc.so.2
COPY --from=builder /root/matrix-env /root/matrix-env
COPY matrix-s3-async /usr/local/bin/matrix-s3-async
COPY entrypoint.sh /usr/local/bin/entrypoint
ENTRYPOINT ["/usr/local/bin/entrypoint"]


@ -0,0 +1,27 @@
FROM debian:10
RUN apt-get update && \
apt-get -qq -y full-upgrade
RUN apt-get install -y apache2 php php-gd php-mbstring php-pgsql php-curl php-dom php-xml php-zip \
php-intl php-ldap php-fileinfo php-exif php-apcu php-redis php-imagick unzip curl wget && \
phpenmod gd && \
phpenmod curl && \
phpenmod mbstring && \
phpenmod pgsql && \
phpenmod dom && \
phpenmod zip && \
phpenmod intl && \
phpenmod ldap && \
phpenmod fileinfo && \
phpenmod exif && \
phpenmod apcu && \
phpenmod redis && \
phpenmod imagick && \
phpenmod xml
COPY container-setup.sh /tmp
RUN /tmp/container-setup.sh
COPY entrypoint.sh /
CMD /entrypoint.sh


@ -0,0 +1,37 @@
#!/bin/sh
set -ex
curl https://download.nextcloud.com/server/releases/nextcloud-19.0.0.zip > /tmp/nextcloud.zip
cd /var/www
unzip /tmp/nextcloud.zip
rm /tmp/nextcloud.zip
mv html html.old
mv nextcloud html
cd html
mkdir data
cd apps
wget https://github.com/nextcloud/tasks/releases/download/v0.13.1/tasks.tar.gz
tar xf tasks.tar.gz
wget https://github.com/nextcloud/maps/releases/download/v0.1.6/maps-0.1.6.tar.gz
tar xf maps-0.1.6.tar.gz
wget https://github.com/nextcloud/calendar/releases/download/v2.0.3/calendar.tar.gz
tar xf calendar.tar.gz
wget https://github.com/nextcloud/news/releases/download/14.1.11/news.tar.gz
tar xf news.tar.gz
wget https://github.com/nextcloud/notes/releases/download/v3.6.0/notes.tar.gz
tar xf notes.tar.gz
wget https://github.com/nextcloud/contacts/releases/download/v3.3.0/contacts.tar.gz
tar xf contacts.tar.gz
wget https://github.com/nextcloud/mail/releases/download/v1.4.0/mail.tar.gz
tar xf mail.tar.gz
wget https://github.com/nextcloud/groupfolders/releases/download/v6.0.6/groupfolders.tar.gz
tar xf groupfolders.tar.gz
rm *.tar.gz
chown -R www-data:www-data /var/www/html
cd /var/www/html
php occ


@ -0,0 +1,8 @@
#!/bin/sh
set -xe
chown www-data:www-data /var/www/html/config/config.php
touch /var/www/html/data/.ocdata
exec apachectl -DFOREGROUND

Binary file not shown.


@ -0,0 +1,4 @@
FROM amd64/openjdk:13-alpine
COPY pithos-0.7.5-standalone.jar /srv/pithos.jar
ENTRYPOINT ["/opt/openjdk-13/bin/java", "-jar", "/srv/pithos.jar"]


@ -0,0 +1,9 @@
This project is considered "dangerous" because it is tagged as "Project not under active development".
Consequently, just in case, I am backing up the .jar and the sources in this git repo.
Better safe than sorry or pretty.
```
sudo docker build -t superboum/amd64_pithos:v1 .
sudo docker push superboum/amd64_pithos:v1
sudo docker run --rm -it -p 8080:8080 -v pithos.yaml:/etc/pithos/pithos.yaml superboum/amd64_pithos:v1
```

Binary file not shown.


@ -1,4 +1,4 @@
FROM rust:1.58.1-slim-bullseye as builder
FROM rust:1.47.0-slim-buster as builder
RUN apt-get update && \
apt-get install -y \
@ -10,7 +10,6 @@ RUN apt-get update && \
libpq-dev \
gettext \
git \
python \
curl \
gcc \
make \
@ -20,19 +19,20 @@ RUN apt-get update && \
ARG VERSION ARG VERSION
WORKDIR /opt WORKDIR /opt
RUN git clone -n https://git.joinplu.me/Plume/Plume.git plume RUN git clone -n https://git.deuxfleurs.fr/Deuxfleurs/plume.git
WORKDIR /opt/plume WORKDIR /opt/plume
RUN git checkout ${VERSION} RUN git checkout ${VERSION}
WORKDIR /opt/plume/script RUN cargo install diesel_cli --no-default-features --features postgres --version '=1.3.0'
RUN chmod a+x ./wasm-deps.sh && ./wasm-deps.sh
WORKDIR /opt/plume # frontend
RUN cargo install wasm-pack RUN cargo install cargo-web
RUN chmod a+x ./script/plume-front.sh && ./script/plume-front.sh RUN cargo web deploy -p plume-front --release
RUN cargo install --path ./ --force --no-default-features --features postgres # backend
RUN cargo install --path plume-cli --force --no-default-features --features postgres RUN cargo install --no-default-features --features postgres -f --path .
# cli
RUN cargo install --no-default-features --features postgres --path plume-cli
RUN cargo clean RUN cargo clean
#----------------------------- #-----------------------------
@ -41,14 +41,16 @@ FROM debian:bullseye-slim
RUN apt-get update && apt-get install -y --no-install-recommends \ RUN apt-get update && apt-get install -y --no-install-recommends \
ca-certificates \ ca-certificates \
libpq5 \ libpq5 \
libssl1.1 \ libssl1.1
rclone \
fuse
WORKDIR /app WORKDIR /app
COPY --from=builder /opt/plume /app COPY --from=builder /opt/plume /app
COPY --from=builder /usr/local/cargo/bin/diesel /usr/local/bin/
COPY --from=builder /usr/local/cargo/bin/plm /usr/local/bin/ COPY --from=builder /usr/local/cargo/bin/plm /usr/local/bin/
COPY --from=builder /usr/local/cargo/bin/plume /usr/local/bin/ COPY --from=builder /usr/local/cargo/bin/plume /usr/local/bin/
COPY plm-start /usr/local/bin/
CMD ["plume"] CMD ["plm-start"]
EXPOSE 7878
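
On both sides of this diff the image takes a VERSION build argument that is passed to `git checkout`, so it can point at any ref of the cloned repository. A hedged build sketch (the ref and the image tag are assumptions):

```bash
sudo docker build --build-arg VERSION=main -t superboum/amd64_plume:v1 .
```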

9
app/build/plume/plm-start Executable file
View file

@ -0,0 +1,9 @@
#!/bin/bash
# retry until the database is reachable and pending migrations have been applied
until plm migration run;
do sleep 2;
done
plm search init
plm instance new --domain "$DOMAIN_NAME" --name "$INSTANCE_NAME" --private
plume

View file

@ -1,10 +1,8 @@
FROM amd64/debian:buster
-ARG VERSION
RUN apt-get update && \
apt-get install -y \
-postfix=$VERSION \
+postfix \
postfix-ldap
COPY entrypoint.sh /usr/local/bin/entrypoint
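
On the main side, VERSION pins the exact Debian package revision of postfix, which makes rebuilds reproducible. A hedged build sketch (the version string and image tag are illustrative; `apt-cache madison postfix` inside a buster container shows the real candidates):

```bash
sudo docker build --build-arg VERSION=3.4.14-0+deb10u1 -t superboum/amd64_postfix:v1 .
```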

View file

@ -26,6 +26,5 @@ for file in $(ls /etc/postfix-conf); do
done
echo ${MAILNAME} > /etc/mailname
-postmap /etc/postfix/transport
exec "$@"

View file

@ -0,0 +1,19 @@
FROM amd64/debian:stretch
RUN echo "deb http://deb.debian.org/debian stretch-backports main contrib non-free # available after stretch release" > /etc/apt/sources.list.d/stretch-backports.list && \
apt-get update && \
apt-get -qq -y full-upgrade && \
apt-get install -y postgresql-all golang-1.11 git && \
export GOPATH=/usr/local/go && \
mkdir -p /usr/local/go/src/github.com/sorintlab && \
cd /usr/local/go/src/github.com/sorintlab && \
git clone --depth=1 https://github.com/sorintlab/stolon && \
ln -s /usr/lib/go-1.11/bin/go /usr/bin/go && \
ln -s /usr/lib/go-1.11/bin/gofmt /usr/bin/gofmt && \
cd ./stolon && \
./build && \
mv /usr/local/go/src/github.com/sorintlab/stolon/bin/* /usr/local/bin/ && \
rm -rf /usr/local/go
USER postgres

View file

@ -0,0 +1,4 @@
```
docker build -t superboum/arm32v7_postgres .
docker build -t superboum/amd64_postgres:v2 .
```

22
app/build/postgres/start.sh Executable file
View file

@ -0,0 +1,22 @@
#!/bin/bash
if [ -f /local/pg_hba.conf ]; then
echo "Copying Nomad configuration..."
cp /local/pg_hba.conf /etc/postgresql/9.6/main/
echo "Done"
fi
if [ -z "$(ls -A /var/lib/postgresql/9.6/main)" ]; then
echo "Copying base"
cp -r /var/lib/postgresql/9.6/base/* /var/lib/postgresql/9.6/main
echo "Done"
fi
chmod -R 700 /var/lib/postgresql/9.6/main
chown -R postgres /var/lib/postgresql/9.6/main
echo "Starting postgres..."
. /usr/share/postgresql-common/init.d-functions
start 9.6
tail -f /var/log/postgresql/postgresql-9.6-main.log
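
The /local/pg_hba.conf picked up here is rendered by a Nomad template. A minimal illustrative sketch of what such a file could contain (the networks and auth methods are assumptions, not the cluster's actual policy):

```
# TYPE  DATABASE  USER      ADDRESS       METHOD
local   all       postgres                peer
host    all       all       10.0.0.0/8    md5
```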

View file

@ -5,9 +5,9 @@ WORKDIR /root
RUN apt-get update && \
apt-get install -y wget && \
-wget https://github.com/vector-im/element-web/releases/download/v${VERSION}/element-v${VERSION}.tar.gz && \
-tar xf element-v${VERSION}.tar.gz && \
-mv element-v${VERSION}/ riot/
+wget https://github.com/vector-im/riot-web/releases/download/v${VERSION}/riot-v${VERSION}.tar.gz && \
+tar xf riot-v${VERSION}.tar.gz && \
+mv riot-v${VERSION}/ riot/
FROM superboum/amd64_webserver:v3
COPY --from=builder /root/riot /srv/http
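
Both sides of this diff build the same way; only the upstream archive name differs. A hedged example (the release number and image tag are illustrative):

```bash
sudo docker build --build-arg VERSION=1.6.8 -t superboum/amd64_riotweb:v1 .
```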

View file

@ -0,0 +1,46 @@
FROM amd64/debian:buster as builder
ENV VERSION 7.0.5
RUN apt-get update && \
apt-get dist-upgrade -y && \
DEBIAN_FRONTEND=noninteractive apt-get install -y wget tar && \
wget https://download.seadrive.org/seafile-server_${VERSION}_x86-64.tar.gz -O ./seafile.tar.gz && \
tar xf ./seafile.tar.gz && \
mv seafile-server-${VERSION} seafile-server
FROM amd64/debian:buster
COPY --from=builder ./seafile-server /srv/webstore/seafile-server
RUN apt-get update && \
apt-get dist-upgrade -y && \
DEBIAN_FRONTEND=noninteractive apt-get install -y \
python \
mariadb-client \
python2.7 \
libpython2.7 \
python-setuptools \
python-ldap \
python-urllib3 \
ffmpeg \
python-pip \
python-mysqldb \
python-memcache \
procps \
python-requests && \
pip install Pillow==4.3.0 && \
pip install moviepy && \
useradd -u 1000 -d /srv/webstore seauser && \
chown -R seauser:1000 /srv/webstore/
RUN mkdir -p /usr/local/lib/mariadb/plugin/ && \
ln -s /usr/lib/x86_64-linux-gnu/mariadb*/plugin/mysql_clear_password.so /usr/local/lib/mariadb/plugin/ && \
ln -s /usr/lib/x86_64-linux-gnu/mariadb*/plugin/dialog.so /usr/local/lib/mariadb/plugin/
WORKDIR /srv/webstore/seafile-server
COPY seadocker /usr/local/bin/seadocker
COPY seaenv /usr/local/bin/seaenv
ENTRYPOINT ["/usr/local/bin/seaenv"]
CMD ["/usr/local/bin/seadocker"]

View file

@ -0,0 +1,27 @@
```bash
sudo docker build -t superboum/amd64_seafile:v5 .
```
When upgrading, connect to a production server and run:
```bash
nomad stop seafile
sudo docker build -t superboum/amd64_seafile:v6 .
sudo docker run -t -i \
-v /mnt/glusterfs/seafile:/mnt/seafile-data \
-v /mnt/glusterfs/seaconf/conf:/srv/webstore/conf \
-v /mnt/glusterfs/seaconf/ccnet:/srv/webstore/ccnet \
superboum/amd64_seafile:v5
# See:
# * https://download.seafile.com/published/seafile-manual/deploy/upgrade.md
# * https://download.seafile.com/published/seafile-manual/changelog/server-changelog.md
nomad run seafile.hcl
```
When upgrading, remember to bump the image tags in these commands (and in the Nomad job file) to the new version before starting the job again.
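
Seafile ships its migration scripts inside the server tarball, so the actual upgrade step is normally run from within the container before the job is restarted. A sketch (the script name depends on the source and target versions, and the paths assume this image's layout):

```bash
# inside the running container, e.g. via docker exec
cd /srv/webstore/seafile-server
./upgrade/upgrade_6.3_7.0.sh
```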

4
app/build/seafile/seadocker Executable file
View file

@ -0,0 +1,4 @@
#!/bin/bash
/srv/webstore/seafile-server/seafile.sh start
/srv/webstore/seafile-server/seahub.sh start
tail -f /srv/webstore/logs/*

7
app/build/seafile/seaenv Executable file
View file

@ -0,0 +1,7 @@
#!/bin/bash
chown seauser /srv/webstore
chown seauser -R /srv/webstore/ccnet
chown seauser -R /srv/webstore/conf
runuser -u seauser -- "$@"

Some files were not shown because too many files have changed in this diff.