Infrastructure code for Deuxfleurs

Many things are still missing here, including proper documentation. Please stay nice, it is a volunteer project. Feel free to open pull/merge requests to improve it. Thanks.

Our abstraction stack

We try to build a generic abstraction stack between our different resources (CPU, RAM, disk, etc.) and our services (Chat, Storage, etc.), developing our own tools when needed:

  • garage: S3-compatible lightweight object store for self-hosted geo-distributed deployments (we also have a legacy glusterfs cluster)
  • diplonat: network automation (firewalling, upnp igd)
  • bottin: authentication and authorization (LDAP protocol, consul backend)
  • guichet: a dashboard for our users and administrators
  • ansible: physical node configuration
  • nomad: schedule containers and handle their lifecycle
  • consul: distributed key value store + lock + service discovery
  • stolon + postgresql: distributed relational database
  • docker: package, distribute and isolate applications

Some services we provide:

  • Websites: garage (static) + fediverse blog (plume)
  • Chat: Synapse + Element Web (Matrix protocol)
  • Email: Postfix SMTP + Dovecot IMAP + opendkim DKIM + Sogo webmail (legacy) | Alps webmail (experimental)
  • Storage: Seafile (legacy) | Nextcloud (experimental)
  • Video conferencing: Jitsi

As a generic abstraction is provided, deploying new services should be easy.

I am lost, how does this repo work?

To ease development, we chose a fully integrated environment:

  1. os: the base OS for the cluster
    1. build: where you will build our Debian-based OS image to install on your servers
    2. config: our Ansible recipes to configure and update your freshly installed server
  2. app: the applications we deploy on the cluster
    1. build: our Dockerfiles to build immutable images of our applications
    2. integration: our Docker Compose files to test locally how our built images interact together
    3. config: application configuration files to be deployed on the Consul key-value store
    4. deployment: application definitions to be deployed on the Nomad scheduler
  3. op_guide: guides explaining operations you can perform cluster-wide (like configuring PostgreSQL)
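As a sketch of how the app pieces fit together (the application name and file paths below are hypothetical, for illustration only), steps 2.3 and 2.4 boil down to commands like these:

```shell
# Hypothetical application name and Consul KV path.
APP=myapp
KV_PATH="app/${APP}/config"

# Step 2.3: load the application's configuration into Consul's KV store:
#   consul kv put "${KV_PATH}" @config/${APP}.json
# Step 2.4: submit the application's definition to the Nomad scheduler:
#   nomad job run deployment/${APP}.hcl
echo "${KV_PATH}"
```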

Start hacking

Deploying or updating services is done from your machine.

The following instructions are provided for ops that already have access to the servers (meaning: their SSH public key is known by the cluster).

Deploy Nomad on your machine:

export NOMAD_VER=1.0.1
wget https://releases.hashicorp.com/nomad/${NOMAD_VER}/nomad_${NOMAD_VER}_linux_amd64.zip
unzip nomad_${NOMAD_VER}_linux_amd64.zip
sudo mv nomad /usr/local/bin
rm nomad_${NOMAD_VER}_linux_amd64.zip

Deploy Consul on your machine:

export CONSUL_VER=1.9.0
wget https://releases.hashicorp.com/consul/${CONSUL_VER}/consul_${CONSUL_VER}_linux_amd64.zip
unzip consul_${CONSUL_VER}_linux_amd64.zip
sudo mv consul /usr/local/bin
rm consul_${CONSUL_VER}_linux_amd64.zip
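Both installs follow the same pattern; as a sketch (assumption: HashiCorp's standard releases.hashicorp.com URL layout and a linux/amd64 machine), the download URL can be derived from the tool name and version:

```shell
# Build the release zip URL for a HashiCorp tool
# (assumption: the standard releases.hashicorp.com naming scheme).
hashicorp_url() {
  tool=$1
  ver=$2
  echo "https://releases.hashicorp.com/${tool}/${ver}/${tool}_${ver}_linux_amd64.zip"
}

# Download, extract, and install would then be e.g.:
#   wget "$(hashicorp_url nomad 1.0.1)"
hashicorp_url consul 1.9.0
```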

Create an alias (and put it in your .bashrc) to bind APIs on your machine:

alias bind_df="ssh \
  -p110 \
  -N \
  -L 1389:<LDAP (bottin) service>:389 \
  -L 4646:127.0.0.1:4646 \
  -L 5432:<PostgreSQL service>:5432 \
  -L 8082:<Traefik admin service>:8082 \
  -L 8500:127.0.0.1:8500 \
  <a server from the cluster>"

and run:

bind_df
Adrien uses an ~/.ssh/config configuration instead. It works basically the same. Here it goes:

# in ~/.ssh/config 

Host deuxfleurs
    HostName <a server from the cluster>
    Port 110
    User adrien
    # If you don't use the default ~/.ssh/id_rsa to connect to Deuxfleurs
    IdentityFile <some_key_path>
    PubkeyAuthentication yes
    ForwardAgent no
    LocalForward 1389 <LDAP (bottin) service>:389
    LocalForward 4646 127.0.0.1:4646
    LocalForward 5432 <PostgreSQL service>:5432
    LocalForward 8082 <Traefik admin service>:8082
    LocalForward 8500 127.0.0.1:8500

Now, to connect, do the following:

ssh deuxfleurs -N
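With the tunnel up, you can point the Nomad and Consul command-line tools at the forwarded ports through their standard environment variables (the port numbers below match the forwards configured above):

```shell
# NOMAD_ADDR and CONSUL_HTTP_ADDR are the standard environment
# variables read by the nomad and consul CLIs.
export NOMAD_ADDR=http://127.0.0.1:4646
export CONSUL_HTTP_ADDR=http://127.0.0.1:8500

# Sanity checks against the cluster would then be e.g.:
#   nomad node status
#   consul members
echo "${NOMAD_ADDR}"
```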