deuxfleurs.fr
OBSOLESCENCE NOTICE: We are progressively migrating our stack from Ansible to NixOS. Most of the files in this repository are outdated or obsolete; the current code for our infrastructure lives at: https://git.deuxfleurs.fr/Deuxfleurs/nixcfg.
Our abstraction stack
We try to build a generic abstraction stack between our different resources (CPU, RAM, disk, etc.) and our services (Chat, Storage, etc.), developing our own tools when needed:
- garage: S3-compatible lightweight object store for self-hosted geo-distributed deployments (we also have a legacy glusterfs cluster)
- diplonat: network automation (firewalling, UPnP IGD)
- bottin: authentication and authorization (LDAP protocol, consul backend)
- guichet: a dashboard for our users and administrators
- ansible: physical node configuration
- nomad: schedule containers and handle their lifecycle
- consul: distributed key value store + lock + service discovery
- stolon + postgresql: distributed relational database
- docker: package, distribute and isolate applications
Some services we provide:
- Websites: garage (static) + fediverse blog (plume)
- Chat: Synapse + Element Web (Matrix protocol)
- Email: Postfix SMTP + Dovecot IMAP + OpenDKIM + SOGo webmail (legacy) | Alps webmail (experimental)
- Storage: Seafile (legacy) | Nextcloud (experimental)
- Videoconferencing: Jitsi
As a generic abstraction is provided, deploying new services should be easy.
I am lost, how does this repo work?
To ease development, we chose a fully integrated environment:
- os: the base OS for the cluster
  - build: where you will build our OS image, based on Debian, that you will install on your servers
  - config: our Ansible recipes to configure and update your freshly installed server
- apps: the apps we deploy on the cluster
  - build: our Dockerfiles to build immutable images of our applications
  - integration: our Docker Compose files to test locally how our built images interact together
  - config: files containing application configurations to be deployed on the Consul key-value store (see the example below)
  - deployment: files containing application definitions to be deployed on the Nomad scheduler (see the example below)
- op_guide: guides explaining operations you can perform cluster-wide (like configuring PostgreSQL)
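To give a rough idea of how these pieces fit together, here is a minimal sketch of a deployment using the standard Consul and Nomad CLIs. The key path and file names are hypothetical examples, not the actual layout of this repository:

```
# Push an application's configuration into the Consul key-value store
# (key path and file name are hypothetical examples)
consul kv put app/example/config.json @apps/config/example/config.json

# Submit the corresponding job definition to the Nomad scheduler
# (job file name is a hypothetical example)
nomad job run apps/deployment/example.hcl
```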
Start hacking
Deploying or updating services is done from your machine.
The following instructions are provided for ops that already have access to the servers (meaning: their SSH public key is known by the cluster).
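Before going further, you can check that your key is accepted. A minimal sketch, assuming (as in the alias below) that SSH listens on port 110 on the cluster machines:

```
# Replace the placeholder with an actual cluster hostname;
# prints "SSH access OK" if the connection and key are accepted
ssh -p110 <a server from the cluster> true && echo "SSH access OK"
```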
Install Nomad on your machine:
export NOMAD_VER=1.0.1
wget https://releases.hashicorp.com/nomad/${NOMAD_VER}/nomad_${NOMAD_VER}_linux_amd64.zip
unzip nomad_${NOMAD_VER}_linux_amd64.zip
sudo mv nomad /usr/local/bin
rm nomad_${NOMAD_VER}_linux_amd64.zip
Install Consul on your machine:
export CONSUL_VER=1.9.0
wget https://releases.hashicorp.com/consul/${CONSUL_VER}/consul_${CONSUL_VER}_linux_amd64.zip
unzip consul_${CONSUL_VER}_linux_amd64.zip
sudo mv consul /usr/local/bin
rm consul_${CONSUL_VER}_linux_amd64.zip
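To check that both binaries are installed and on your PATH, you can run:

```
# Both commands should print the versions you just downloaded
nomad version
consul version
```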
Create an alias (and put it in your .bashrc) to bind APIs on your machine:
alias bind_df="ssh \
-p110 \
-N \
-L 1389:bottin2.service.2.cluster.deuxfleurs.fr:389 \
-L 4646:127.0.0.1:4646 \
-L 5432:psql-proxy.service.2.cluster.deuxfleurs.fr:5432 \
-L 8082:traefik-admin.service.2.cluster.deuxfleurs.fr:8082 \
-L 8500:127.0.0.1:8500 \
<a server from the cluster>"
and run:
bind_df
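With the tunnel up, your local Nomad and Consul CLIs can reach the cluster through the forwarded ports (4646 and 8500). A minimal sketch, assuming the port mappings above:

```
# These are the CLI defaults anyway, but being explicit makes it clear
# that we are going through the SSH tunnel
export NOMAD_ADDR=http://127.0.0.1:4646
export CONSUL_HTTP_ADDR=http://127.0.0.1:8500

nomad node status        # list the cluster's client nodes
consul catalog services  # list services registered in Consul
```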
Adrien uses a .ssh/config configuration instead. It works basically the same. Here it is:
# in ~/.ssh/config
Host deuxfleurs
User adrien
Hostname deuxfleurs.fr
# If you don't use the default ~/.ssh/id_rsa to connect to Deuxfleurs
IdentityFile <some_key_path>
    PubkeyAuthentication yes
    ForwardAgent no
LocalForward 1389 bottin2.service.2.cluster.deuxfleurs.fr:389
LocalForward 4646 127.0.0.1:4646
LocalForward 5432 psql-proxy.service.2.cluster.deuxfleurs.fr:5432
LocalForward 8082 traefik-admin.service.2.cluster.deuxfleurs.fr:8082
LocalForward 8500 127.0.0.1:8500
Now, to connect, do the following:
ssh deuxfleurs -N
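Whichever method you use, you can sanity-check the other forwarded ports once connected. A sketch, assuming the port mappings above and that the PostgreSQL client tools and ldap-utils are installed locally:

```
# PostgreSQL reachable through the psql-proxy forward
pg_isready -h 127.0.0.1 -p 5432

# LDAP (bottin) reachable through the 1389 forward: query the root DSE
# anonymously (may be rejected depending on the server's ACLs)
ldapsearch -x -H ldap://127.0.0.1:1389 -s base -b "" "(objectclass=*)"
```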
Test cluster
Configured machines available for testing are listed in the test_cluster Ansible inventory.
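A minimal sketch of targeting that inventory with Ansible, assuming you run the commands from the directory containing the test_cluster inventory file (the exact path in this repository may differ):

```
# Ping all test machines to check SSH connectivity and Python availability
ansible -i test_cluster all -m ping

# Run a playbook against the test machines only (site.yml is a hypothetical name)
ansible-playbook -i test_cluster site.yml
```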