Compare commits


10 Commits
main ... alexvm

Author SHA1 Message Date
Alex edb0a3737a WIP NextCloud using Garage backend, fix app download urls 2020-07-15 16:06:28 +02:00
Alex 24118ab426 Make things work on cluster devx.adnab.me 2020-07-15 16:06:08 +02:00
Alex 65af077d5a Fix iptables not liking comment on same line 2020-07-15 16:03:51 +02:00
Alex d3ada90d83 Fix nomad ip address
Remove the network_interface parameter in nomad config
This means that nomad will now autodetect its own ip address
by looking at the default route.
Thus nodes in a LAN behind a NAT will get their LAN address,
and internet nodes will get their public address.
They won't get their VPN addresses.
This seems not to break Consul's use of VPN addresses to address
services, and fixes attr.unique.network.ip-address for DiploNAT.
2020-07-15 16:03:51 +02:00
Alex 3bf830713f don't retrieve wireguard privkeys in ansible 2020-07-15 16:03:51 +02:00
Alex 207d1fa278 Allow external VPN nodes, make multi-DC deployment work 2020-07-15 16:03:42 +02:00
Alex bee7e10256 Document Wireguard config 2020-07-15 16:03:42 +02:00
Alex a4f9aa2d98 Set up wireguard in dev cluster 2020-07-15 16:03:33 +02:00
Alex 1a16fc7f9e Add gitea config example 2020-07-15 15:49:52 +02:00
Alex 3174179100 Achieve a working install on my VMs 2020-07-15 15:49:52 +02:00
389 changed files with 4558 additions and 7647 deletions

5
.gitmodules vendored
View File

@ -1,3 +1,6 @@
[submodule "docker/static/goStatic"]
path = app/build/static/goStatic
path = docker/static/goStatic
url = https://github.com/PierreZ/goStatic
[submodule "docker/blog/quentin.dufour.io"]
path = docker/blog-quentin/quentin.dufour.io
url = git@gitlab.com:superboum/quentin.dufour.io.git

View File

@ -1,21 +1,76 @@
deuxfleurs.fr
=============
**OBSOLETION NOTICE:** We are progressively migrating our stack to NixOS to replace Ansible. Most of the files in this repository are outdated or obsolete; the current code for our infrastructure is at: <https://git.deuxfleurs.fr/Deuxfleurs/nixcfg>.
*Many things are still missing here, including proper documentation. Please stay nice, it is a volunteer project. Feel free to open pull/merge requests to improve it. Thanks.*
## I am lost, how does this repo work?
## Our abstraction stack
To ease development, we have chosen a fully integrated environment.
We try to build a generic abstraction stack between our different resources (CPU, RAM, disk, etc.) and our services (Chat, Storage, etc.):
1. `os`: the base OS for the cluster
   1. `build`: where you build our Debian-based OS image to install on your server
   2. `config`: our Ansible recipes to configure and update your freshly installed server
2. `apps`: the apps we deploy on the cluster
   1. `build`: our Docker files to build immutable images of our applications
   2. `integration`: our Docker compose files to test locally how our built images interact together
   3. `config`: files containing application configurations to be deployed on the Consul key-value store
   4. `deployment`: files containing application definitions to be deployed on the Nomad scheduler
3. `op_guide`: guides explaining operations you can perform cluster-wide (like configuring postgres)
* ansible (physical node conf)
* nomad (schedule containers)
* consul (distributed key value store / lock / service discovery)
* glusterfs (file storage)
* stolon + postgresql (distributed relational database)
* docker (container tool)
* bottin (LDAP server, auth)
Some services we provide:
* Chat (Matrix/Riot)
* Email (Postfix/Dovecot/Sogo)
* Storage (Seafile)
As a generic abstraction is provided, deploying new services should be easy.
## Start hacking
### Clone the repository
```
git clone https://gitlab.com/superboum/deuxfleurs.fr.git
git submodule init
git submodule update
```
### Deploying/Updating new services is done from your machine
*The following instructions are provided for ops that already have access to the servers.*
Deploy Nomad on your machine:
```bash
export NOMAD_VER=0.9.1
wget https://releases.hashicorp.com/nomad/${NOMAD_VER}/nomad_${NOMAD_VER}_linux_amd64.zip
unzip nomad_${NOMAD_VER}_linux_amd64.zip
sudo mv nomad /usr/local/bin
rm nomad_${NOMAD_VER}_linux_amd64.zip
```
Deploy Consul on your machine:
```bash
export CONSUL_VER=1.5.1
wget https://releases.hashicorp.com/consul/${CONSUL_VER}/consul_${CONSUL_VER}_linux_amd64.zip
unzip consul_${CONSUL_VER}_linux_amd64.zip
sudo mv consul /usr/local/bin
rm consul_${CONSUL_VER}_linux_amd64.zip
```
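A quick sanity check that both binaries landed on your `PATH` (both tools have a `version` subcommand):
```bash
nomad version
consul version
```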
Create an alias (and put it in your `.bashrc`) to bind APIs on your machine:
```
alias bind_df="ssh \
-p110 \
-N \
-L 4646:127.0.0.1:4646 \
-L 8500:127.0.0.1:8500 \
-L 8082:traefik.service.2.cluster.deuxfleurs.fr:8082 \
<a server from the cluster>"
```
and run:
```
bind_df
```
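With the tunnel up, the forwarded APIs should answer locally; a minimal check against the standard Nomad and Consul status endpoints:
```bash
curl -s http://127.0.0.1:4646/v1/status/leader   # Nomad
curl -s http://127.0.0.1:8500/v1/status/leader   # Consul
```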

5
administratif/.gitignore vendored Normal file
View File

@ -0,0 +1,5 @@
*.aux
*.fdb_latexmk
*.fls
*.log
*.pdf

View File

@ -0,0 +1,68 @@
\documentclass[a4paper,DIV=12]{scrartcl}
\usepackage[french]{babel}
% We abuse komafont to reduce the space taken by the title
\addtokomafont{title}{\vspace*{-3em}}
\addtokomafont{author}{\vspace*{-1em}}
\addtokomafont{date}{\vspace*{-0.5em}}
% Prepend "Article" to section headings
\renewcommand\sectionformat{Article\enskip\thesection~:\hspace{1em}}
% Reduce the section heading size
\addtokomafont{section}{\large}
% Add a little space between paragraphs
\setlength{\parskip}{.8em}
% Remove some space after headings
% (I could not use the dedicated titlesec package because it causes lots of errors)
%\titlespacing\section{1pt}{*4}{*1.5}
\let\oldsection\section
\renewcommand{\section}[1]{\oldsection{#1}\vspace{-1em}}
\title{Minutes of the constitutive general assembly of the Deuxfleurs association}
\date{13 January 2020}
\author{Association Deuxfleurs\\10A Allée de Lanvaux, 35700 Rennes}
\begin{document}
\maketitle
On 13 January 2020 at 7 p.m., the founders of the Deuxfleurs association met in a constitutive general assembly at 24 rue des Tanneurs in Rennes. Present were Adrien, Alex, Anaïs, Axelle, Louison, Maximilien, Quentin, Rémi and Vincent.
The general assembly appointed Adrien Luxey as chair of the session and Quentin Dufour as secretary of the session.
The chair made the draft statutes of the association and the record of acts performed on behalf of the association in formation available to those present.
He then recalled that the constitutive general assembly was called to decide on the following agenda:
\begin{itemize}
\item presentation of the project of forming the association;
\item presentation of the draft statutes;
\item adoption of the statutes;
\item appointment of the first members of the board;
\item powers for the declaration and publication formalities.
\end{itemize}
Finally, the chair set out the motives for the project of creating the association and commented on the draft statutes.
He opened the discussion. A debate took place among the members of the assembly.
After which, nobody asking to speak any further, the chair put the following resolutions to the vote one after the other.
\paragraph{First resolution} The general assembly adopts the statutes whose draft was submitted to it.
This resolution is adopted unanimously.
\paragraph{Second resolution} The constitutive general assembly appoints as first members of the board of directors:
\begin{itemize}
\item Adrien Luxey
\item Alex Auvolat
\item Maximilien Richer
\item Quentin Dufour
\item Vincent Giraud
\end{itemize}
In accordance with the statutes, this appointment is made for a term expiring at the general assembly called to decide on the accounts of the financial year ending 13 January 2021.
The board members thus appointed accept their duties.
Surname, first name and signature of the chair and of the secretary of the session.
\end{document}

View File

@ -0,0 +1,104 @@
\documentclass[a4paper,DIV=12]{scrartcl}
\usepackage[frenchb]{babel}
% We abuse komafont to reduce the space taken by the title
\addtokomafont{title}{\vspace*{-3em}}
\addtokomafont{author}{\vspace*{-1em}}
\addtokomafont{date}{\vspace*{-2em}}
% Prepend "Article" to section headings
\renewcommand\sectionformat{Article\enskip\thesection~:\hspace{1em}}
% Reduce the section heading size
\addtokomafont{section}{\large}
% Add a little space between paragraphs
\setlength{\parskip}{.8em}
% Remove some space after headings
% (I could not use the dedicated titlesec package because it causes lots of errors)
%\titlespacing\section{1pt}{*4}{*1.5}
\let\oldsection\section
\renewcommand{\section}[1]{\oldsection{#1}\vspace{-1em}}
\title{Statutes of the Deuxfleurs association}
\date{13 January 2020}
\begin{document}
\maketitle
\section{Constitution and name}
An association governed by the French law of 1901, named Deuxfleurs, is founded between the adherents to these statutes.
\section{Purpose}
The purpose of this association is to defend and promote individual and collective liberties through the deployment of free digital infrastructure.
\section{Registered office}
The registered office is located at 10A, Allée de Lanvaux, 35700 Rennes.
It may be transferred following a vote by the general assembly.
\section{Duration of the association}
The association continues to exist as long as it has at least one member, or until its dissolution is decided in a general assembly.
\section{Admission and membership}\label{article:admission}
To become a member of the association, one must be co-opted by a member of the association, adhere to these statutes and pay the annual membership fee of 10 euros.
\section{Composition of the association}
The association is composed exclusively of members admitted under the provisions of Article~\ref{article:admission} and up to date with their membership fee.
Every active member has one vote in general assembly ballots.
Any member present at the general assembly (physically, by videoconference or by written proxy given to another member of the association) is considered active.
\section{Loss of membership}
Membership is lost through:
\begin{itemize}
\item resignation,
\item non-renewal of the membership fee within two months after 1 January of the current year,
\item death,
\item expulsion pronounced by two thirds of the votes cast, in an extraordinary ballot or at the general assembly.
\end{itemize}
\section{The general assembly}\label{article:ag}
The ordinary general assembly meets at least once a year, convened by the board of directors.
An extraordinary general assembly is convened by the board of directors, on its own initiative or at the request of at least one quarter of the association's members.
The general assembly (ordinary or extraordinary) comprises all members of the association up to date with their membership fee.
At least fifteen days before the chosen date, the members of the association are convened via the association's mailing list, and the agenda is included in the convocations.
The board of directors chairs the general assembly.
The general assembly, after deliberation, votes on the moral and/or activity report.
The board of directors reports on the closed financial year and submits the balance sheet of the closed year for the assembly's approval within six months of the closing of the accounts.
The general assembly deliberates on the orientations to come and votes on the provisional budget for the current year.
It elects or renews, by secret ballot, the members of the board of directors using a Randomized Condorcet vote.
It sets the amount of the annual membership fee.
Decisions of the assembly are taken by a majority of the members present or represented.
Each member present may hold no more than one proxy.
\section{Minor members}
Minors may join the association subject to the tacit agreement or the written authorization of their parents or legal guardians.
They are full members of the association.
Only members aged at least 16 on the day of an election are entitled to vote in it, notably during a general assembly.
For the others, their voting right is transferred to their legal representative.
\section{The board of directors}
The association is administered by a board of directors of 3 to 6 members, elected for 1 year under the conditions set in Article~\ref{article:ag}.
All members of the association up to date with their membership fee are eligible.
In the event of a vacancy, the board of directors may provisionally replace its members. This replacement is mandatory when the board of directors has fewer than 3 members.
Their definitive replacement takes place at the next general assembly.
The powers of the members thus elected end at the time when the mandate of the replaced members would normally have expired.
The board of directors implements the decisions of the general assembly, and organizes and animates the life of the association, within the framework set by the statutes.
Each of its members may be authorized by the board to complete all declaration and publication formalities prescribed by law, and any other act necessary to the functioning of the association and decided by the board of directors.
All members of the board of directors are responsible for the commitments contracted by the association.
Any contract or agreement between the association on the one hand, and a member of the board of directors, their spouse or a relative on the other hand, is submitted to the board of directors for authorization and presented for information at the next general assembly.
The board of directors meets at least 4 times a year, and whenever it is convened by one third of its members.
The presence of at least half of the board members is required for the board of directors to deliberate validly.
Decisions are taken by consensus and, failing that, by a majority of the votes of those present. Voting by proxy is not permitted.
\section{Amendment of the association's statutes}
At the request of one third of the active members, or at the request of the board of directors, amendments to the association's statutes may be discussed and put to a vote at a general assembly, under the terms of Article~\ref{article:ag}.
\end{document}

3
administratif/README.md Normal file
View File

@ -0,0 +1,3 @@
# Administrative documents
__Statuts__: To compile the statutes, run `latexmk -pdf statuts.tex`

71
ansible/README.md Normal file
View File

@ -0,0 +1,71 @@
# ANSIBLE
## How to proceed
For each machine, **one by one** do:
- Check that the cluster is healthy
  - `sudo gluster peer status`
  - `sudo gluster volume status all` (check the Online column; only `Y` must appear)
- Check that Nomad is healthy
- Check that Consul is healthy
- Check that Postgres is healthy
- Run `ansible-playbook -i production --limit <machine> site.yml`
- Reboot
- Check that the cluster is healthy (a quick health-check sketch follows)
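A minimal sketch of such a health check, assuming you are logged in on a cluster node and the services run locally:
```bash
# Gluster peers and volumes
sudo gluster peer status
sudo gluster volume status all
# Nomad and Consul cluster membership
nomad server members
consul members
```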
## New configuration with Wireguard
This configuration makes all of the cluster nodes appear in a single virtual
private network, enabling them to communicate on all ports even if they are
behind NATs at different locations. The VPN also provides a layer of security,
encrypting all communications that occur over the internet.
### Prerequisites
Nodes must all have two publicly accessible ports (potentially routed through a NAT):
- A port that maps to the SSH port (port 22) of the machine, allowing TCP connections
- A port that maps to the Wireguard port (port 51820) of the machine, allowing UDP connections
### Configuration
The network role sets up a Wireguard interface, called `wgdeuxfleurs`, and
establishes a full mesh between all cluster machines. The following
configuration variables are necessary in the node list:
- `ansible_host`: hostname to which Ansible connects, usually the same as `public_ip`
- `ansible_user`: username Ansible connects as to run commands over SSH
- `ansible_port`: if SSH is not bound publicly on port 22, set the port here
- `public_ip`: the public IP of the machine, or of the NATing router behind which the machine sits
- `public_vpn_port`: the public port number on `public_ip` that maps to port 51820 of the machine
- `vpn_ip`: the IP address to assign to the node on the VPN (each node must have a different one)
- `dns_server`: any DNS resolver, typically your ISP's DNS or a public one such as OpenDNS
The new iptables configuration now prevents direct communication between
cluster machines, except on port 51820 which is used to transmit VPN packets.
All intra-cluster communications must now go through the VPN interface (thus
machines refer to one another using their VPN IP addresses and never their
public or LAN addresses).
### Restarting Nomad
When switching to the Wireguard configuration, machines will stop using their
LAN addresses and switch to using their VPN addresses. Consul seems to handle
this correctly; Nomad, however, does not. For Nomad to restart correctly, its
Raft protocol module must be informed of the new IP addresses of the cluster
members. This is done by creating, on all nodes, the file
`/var/lib/nomad/server/raft/peers.json` containing the list of IP addresses
of the cluster. Here is an example of such a file:
```
["10.68.70.11:4647","10.68.70.12:4647","10.68.70.13:4647"]
```
Once this file is created and is the same on all nodes, restart Nomad on all
nodes. The cluster should resume operation normally.
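A hedged sketch of that distribution step, assuming root SSH access and the three node addresses from the example above:
```bash
# Adapt NODES to your inventory
NODES="10.68.70.11 10.68.70.12 10.68.70.13"
PEERS='["10.68.70.11:4647","10.68.70.12:4647","10.68.70.13:4647"]'
for n in $NODES; do
  echo "$PEERS" | ssh root@$n 'cat > /var/lib/nomad/server/raft/peers.json'
done
for n in $NODES; do ssh root@$n 'systemctl restart nomad'; done
```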
The same procedure can also be applied to fix Consul; however, in my tests it
did not break when IP addresses changed (it just took a while to come back
up).

View File

@ -14,12 +14,6 @@
- role: network
tags: net
- hosts: extra_nodes
serial: 1
roles:
- role: common
tags: base
- role: users
tags: account
- role: network
tags: net
# UNSAFE!! This section configures glusterfs. Once done, don't run it ever again as it may break stuff.
# - role: storage
# tags: sto

6
ansible/lxvm Normal file
View File

@ -0,0 +1,6 @@
[cluster_nodes]
#ubuntu1 ansible_host=192.168.42.10
debian1 ansible_host=192.168.42.20 ansible_user=root public_ip=192.168.42.20 dns_server=208.67.222.222 vpn_ip=10.68.70.11 public_vpn_port=51820 datacenter=belair interface=enp1s0
debian2 ansible_host=192.168.42.21 ansible_user=root public_ip=192.168.42.21 dns_server=208.67.222.222 vpn_ip=10.68.70.12 public_vpn_port=51820 datacenter=belair interface=enp1s0
debian3 ansible_host=192.168.42.22 ansible_user=root public_ip=192.168.42.22 dns_server=208.67.222.222 vpn_ip=10.68.70.13 public_vpn_port=51820 datacenter=belair interface=enp1s0
ovh1 ansible_host=51.75.4.20 ansible_user=debian ansible_become=yes public_ip=51.75.4.20 dns_server=208.67.222.222 vpn_ip=10.68.70.20 public_vpn_port=51820 datacenter=saturne interface=eth0

4
ansible/production Normal file
View File

@ -0,0 +1,4 @@
[cluster_nodes]
veterini ansible_host=fbx-rennes2.machine.deuxfleurs.fr ansible_port=110 ansible_user=root public_ip=192.168.1.2 private_ip=192.168.1.2 interface=eno1 dns_server=80.67.169.40
silicareux ansible_host=fbx-rennes2.machine.deuxfleurs.fr ansible_port=111 ansible_user=root public_ip=192.168.1.3 private_ip=192.168.1.3 interface=eno1 dns_server=80.67.169.40
wonse ansible_host=fbx-rennes2.machine.deuxfleurs.fr ansible_port=112 ansible_user=root public_ip=192.168.1.4 private_ip=192.168.1.4 interface=eno1 dns_server=80.67.169.40

View File

@ -3,6 +3,7 @@
that:
- "ansible_architecture == 'aarch64' or ansible_architecture == 'armv7l' or ansible_architecture == 'x86_64'"
- "ansible_os_family == 'Debian'"
- "ansible_distribution_version == '10'"
- name: "Upgrade system"
apt:
@ -15,6 +16,7 @@
- name: "Install base tools"
apt:
name:
- sudo
- vim
- htop
- screen
@ -28,30 +30,17 @@
- bmon
- iftop
- iotop
# - docker.io # The bad way of installing Docker
- locales
- docker.io
- unzip
- tar
- tcpdump
- less
- parted
# - btrfs-tools # not in Debian 11
- btrfs-tools
- libnss-resolve
- net-tools
- strace
- sudo
- ethtool
- pciutils
- pv
- zstd
- miniupnpc
- rsync
- ncdu
- smartmontools
- ioping
- lm-sensors
- netcat
- sysstat
state: present
- name: "Passwordless sudo"

View File

@ -1,6 +1,6 @@
- name: "Set consul version"
set_fact:
consul_version: 1.11.4
consul_version: 1.8.0
- name: "Download and install Consul for x86_64"
unarchive:
@ -21,3 +21,6 @@
- name: "Enable consul systemd service at boot"
service: name=consul state=started enabled=yes daemon_reload=yes
- name: "Deploy resolv.conf to use Consul"
template: src=resolv.conf.j2 dest=/etc/resolv.conf

View File

@ -0,0 +1,31 @@
{
"datacenter": "deuxfleurs",
"data_dir": "/var/lib/consul",
"bind_addr": "0.0.0.0",
"advertise_addr": "{{ vpn_ip }}",
"addresses": {
"dns": "0.0.0.0",
"http": "0.0.0.0"
},
"retry_join": [
{% for selected_host in groups['cluster_nodes']|difference([inventory_hostname]) %}{# @FIXME: Reject doesn't work #}
"{{ hostvars[selected_host]['vpn_ip'] }}" {{ "," if not loop.last else "" }}
{% endfor %}
],
"bootstrap_expect": {{ groups['cluster_nodes']|length }},
"server": true,
"ui": true,
"ports": {
"dns": 53
},
"recursors": [
"{{ dns_server }}"
],
"encrypt": "{{ consul_gossip_encrypt }}",
"domain": "2.cluster.deuxfleurs.fr",
"performance": {
"raft_multiplier": 10,
"rpc_hold_timeout": "30s",
"leave_drain_time": "30s"
}
}
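Since this template binds Consul's DNS interface on port 53 with the domain `2.cluster.deuxfleurs.fr`, a quick check from a node (the `consul` service is registered by Consul itself):
```bash
dig +short @127.0.0.1 consul.service.2.cluster.deuxfleurs.fr
```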

View File

@ -0,0 +1,2 @@
nameserver {{ vpn_ip }}
nameserver {{ dns_server }}

View File

@ -0,0 +1,12 @@
# WARNING!! When rules.{v4,v6} are changed, the whole iptables configuration is reloaded.
# This creates issues with Docker, which injects its own configuration in iptables when it starts.
# In practice, most (all?) containers will break if rules.{v4,v6} are changed,
# and Docker will have to be restarted.
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
COMMIT
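A hedged recovery sequence after editing `rules.{v4,v6}`, assuming the rules are loaded through iptables-persistent as on these Debian hosts:
```bash
sudo netfilter-persistent reload   # re-read rules.v4 / rules.v6
sudo systemctl restart docker      # re-inject Docker's chains, see warning above
```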

View File

@ -0,0 +1,5 @@
---
- name: reload wireguard
service:
name: wg-quick@wgdeuxfleurs
state: restarted

View File

@ -0,0 +1,59 @@
- name: "Create iptables configuration directory"
file: path=/etc/iptables/ state=directory
- name: "Deploy iptablesv4 configuration"
template: src=rules.v4.j2 dest=/etc/iptables/rules.v4
- name: "Deploy iptablesv6 configuration"
copy: src=rules.v6 dest=/etc/iptables/rules.v6
- name: "Activate IP forwarding"
sysctl:
name: net.ipv4.ip_forward
value: "1"
sysctl_set: yes
# Wireguard configuration
- name: "Enable backports repository"
apt_repository:
repo: deb http://deb.debian.org/debian buster-backports main
state: present
- name: "Install wireguard"
apt:
name:
- wireguard
- wireguard-tools
- "linux-headers-{{ ansible_kernel }}"
state: present
- name: "Create wireguard configuration directory"
file: path=/etc/wireguard/ state=directory
- name: "Check if wireguard private key exists"
stat: path=/etc/wireguard/privkey
register: wireguard_privkey
- name: "Create wireguard private key"
shell: wg genkey > /etc/wireguard/privkey
when: wireguard_privkey.stat.exists == false
notify:
- reload wireguard
- name: "Secure wireguard private key"
file: path=/etc/wireguard/privkey mode=0600
- name: "Retrieve wireguard public key"
shell: wg pubkey < /etc/wireguard/privkey
register: wireguard_pubkey
- name: "Deploy wireguard configuration"
template: src=wireguard.conf.j2 dest=/etc/wireguard/wgdeuxfleurs.conf mode=0600
notify:
- reload wireguard
- name: "Enable Wireguard systemd service at boot"
service: name=wg-quick@wgdeuxfleurs state=started enabled=yes daemon_reload=yes
- name: "Create /tmp/wgdeuxfleurs.template.conf example configuration file for external nodes"
local_action: template src=wireguard_external.conf.j2 dest=/tmp/wgdeuxfleurs.template.conf

View File

@ -3,21 +3,25 @@
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# Internet Control Message Protocol
-A INPUT -p icmp -j ACCEPT
# Administration
-A INPUT -p tcp --dport 22 -j ACCEPT
# Diplonat needs everything open to communicate with IGD with the router
-A INPUT -s 192.168.0.254 -j ACCEPT
# Cluster
-A INPUT -s 192.168.0.2 -j ACCEPT
-A INPUT -s 192.168.0.3 -j ACCEPT
-A INPUT -s 192.168.0.4 -j ACCEPT
{% for selected_host in groups['cluster_nodes'] %}
-A INPUT -s {{ hostvars[selected_host]['public_ip'] }} -p udp --dport 51820 -j ACCEPT
-A INPUT -s {{ hostvars[selected_host]['vpn_ip'] }} -j ACCEPT
{% endfor %}
{% for host in other_vpn_nodes %}
-A INPUT -s {{ host.public_ip }} -p udp --dport 51820 -j ACCEPT
-A INPUT -s {{ host.vpn_ip }} -j ACCEPT
{% endfor %}
# Rennes
-A INPUT -s 93.2.173.168 -j ACCEPT
-A INPUT -s 82.253.205.190 -j ACCEPT
# router
-A INPUT -s 192.168.1.254 -j ACCEPT
# Local
-A INPUT -i docker0 -j ACCEPT
-A INPUT -s 127.0.0.1/8 -j ACCEPT
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

View File

@ -0,0 +1,20 @@
[Interface]
Address = {{ vpn_ip }}
PostUp = wg set %i private-key <(cat /etc/wireguard/privkey)
ListenPort = 51820
{% for selected_host in groups['cluster_nodes']|difference([inventory_hostname]) %}
[Peer]
PublicKey = {{ hostvars[selected_host].wireguard_pubkey.stdout }}
Endpoint = {{ hostvars[selected_host].public_ip }}:{{ hostvars[selected_host].public_vpn_port }}
AllowedIPs = {{ hostvars[selected_host].vpn_ip }}/32
PersistentKeepalive = 25
{% endfor %}
{% for host in other_vpn_nodes %}
[Peer]
PublicKey = {{ host.pubkey }}
Endpoint = {{ host.public_ip }}:{{ host.public_vpn_port }}
AllowedIPs = {{ host.vpn_ip }}/32
PersistentKeepalive = 25
{% endfor %}
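Once the template is rendered and deployed, the mesh can be checked on any node with standard wireguard-tools commands; a recent handshake time per peer means the tunnel works:
```bash
sudo systemctl start wg-quick@wgdeuxfleurs
sudo wg show wgdeuxfleurs latest-handshakes
```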

View File

@ -0,0 +1,27 @@
# Template configuration file for VPN nodes that are not in the cluster
# The private key should be stored as /etc/wireguard/privkey
# External nodes should be registered in network/vars/main.yml
[Interface]
Address = <INSERT YOUR IP HERE, IT SHOULD MATCH THE ONE IN vars/main.yml>
PostUp = wg set %i private-key <(cat /etc/wireguard/privkey)
ListenPort = 51820
# Cluster nodes
{% for selected_host in groups['cluster_nodes'] %}
[Peer]
PublicKey = {{ hostvars[selected_host].wireguard_pubkey.stdout }}
Endpoint = {{ hostvars[selected_host].public_ip }}:{{ hostvars[selected_host].public_vpn_port }}
AllowedIPs = {{ hostvars[selected_host].vpn_ip }}/32
PersistentKeepalive = 25
{% endfor %}
# External nodes
# TODO: remove yourself from here
{% for host in other_vpn_nodes %}
[Peer]
PublicKey = {{ host.pubkey }}
Endpoint = {{ host.public_ip }}:{{ host.public_vpn_port }}
AllowedIPs = {{ host.vpn_ip }}/32
PersistentKeepalive = 25
{% endfor %}

View File

@ -0,0 +1,6 @@
---
other_vpn_nodes:
- pubkey: "QUiUNMk70TEQ75Ut7Uqikr5uGVSXmx8EGNkGM6tANlg="
public_ip: "37.187.118.206"
public_vpn_port: "51820"
vpn_ip: "10.68.70.101"

View File

@ -1,6 +1,10 @@
- name: "Set nomad version"
- name: "Set Nomad version"
set_fact:
nomad_version: 1.2.6
nomad_version: 0.12.0-beta2
- name: "Set CNI version"
set_fact:
cni_plugins_version: 0.8.6
- name: "Download and install Nomad for x86_64"
unarchive:
@ -10,6 +14,19 @@
when:
- "ansible_architecture == 'x86_64'"
- name: "Create /opt/cni/bin"
file: path=/opt/cni/bin state=directory
- name: "Download and install CNI plugins for x86_64"
unarchive:
src: "https://github.com/containernetworking/plugins/releases/download/v{{ cni_plugins_version }}/cni-plugins-linux-amd64-v{{ cni_plugins_version }}.tgz"
dest: /opt/cni/bin
remote_src: yes
when:
- "ansible_architecture == 'x86_64'"
notify:
- restart nomad
- name: "Create Nomad configuration directory"
file: path=/etc/nomad/ state=directory

View File

@ -0,0 +1,46 @@
datacenter = "{{ datacenter }}"
addresses {
http = "0.0.0.0"
rpc = "0.0.0.0"
serf = "0.0.0.0"
}
advertise {
http = "{{ vpn_ip }}"
rpc = "{{ vpn_ip }}"
serf = "{{ vpn_ip }}"
}
data_dir = "/var/lib/nomad"
server {
enabled = true
bootstrap_expect = {{ groups['cluster_nodes']|length }}
}
consul {
address="127.0.0.1:8500"
}
client {
enabled = true
#cpu_total_compute = 4000
servers = ["127.0.0.1:4648"]
options {
docker.privileged.enabled = "true"
docker.volumes.enabled = "true"
}
network_interface = "wgdeuxfleurs"
host_network "default" {
#cidr = "{{ vpn_ip }}/24"
interface = "wgdeuxfleurs"
}
host_network "public" {
#cidr = "{{ public_ip }}/32"
interface = "{{ interface }}"
}
}

View File

@ -0,0 +1,3 @@
---
- name: umount gluster
shell: umount --force --lazy /mnt/glusterfs ; true

View File

@ -0,0 +1,72 @@
- name: "Add GlusterFS Repo Key"
apt_key:
url: https://download.gluster.org/pub/gluster/glusterfs/5/rsa.pub
state: present
- name: "Add GlusterFS official repository"
apt_repository:
repo: "deb [arch=amd64] https://download.gluster.org/pub/gluster/glusterfs/5/LATEST/Debian/buster/amd64/apt buster main"
state: present
filename: gluster
- name: "Install GlusterFS"
apt:
name:
- glusterfs-server
- glusterfs-client
state: present
- name: "Ensure Gluster Daemon started and enabled"
service:
name: glusterd
enabled: yes
state: started
- name: "Create directory for GlusterFS bricks"
file: path=/mnt/storage/glusterfs/brick1 recurse=yes state=directory
- name: "Create GlusterFS volumes"
gluster_volume:
state: present
name: donnees
bricks: /mnt/storage/glusterfs/brick1/g1
#rebalance: yes
redundancies: 1
disperses: 3
#replicas: 3
force: yes
options:
client.event-threads: "8"
server.event-threads: "8"
performance.stat-prefetch: "on"
nfs.disable: "on"
features.cache-invalidation: "on"
performance.client-io-threads: "on"
config.transport: tcp
performance.quick-read: "on"
performance.io-cache: "on"
nfs.export-volumes: "off"
cluster.lookup-optimize: "on"
cluster: "{% for selected_host in groups['cluster_nodes'] %}{{ hostvars[selected_host]['vpn_ip'] }}{{ ',' if not loop.last else '' }}{% endfor %}"
run_once: true
- name: "Create mountpoint"
file: path=/mnt/glusterfs recurse=yes state=directory
- name: "Flush handlers (umount glusterfs and restart ganesha)"
meta: flush_handlers
- name: "Add fstab entry"
tags: gluster-fstab
mount:
path: /mnt/glusterfs
src: "{{ vpn_ip }}:/donnees"
fstype: glusterfs
opts: "defaults,_netdev,noauto,x-systemd.automount"
state: present
- name: Mount everything
command: mount -a
args:
warn: no
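After this playbook runs, a few sanity checks on any node (the volume name `donnees` and the mountpoint come from the tasks above):
```bash
sudo gluster peer status
sudo gluster volume info donnees
df -h /mnt/glusterfs
```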

View File

@ -10,8 +10,7 @@ active_users:
is_admin: true
ssh_keys:
- 'alex-key1.pub'
#- 'alex-key2.pub'
- 'alex-key3.pub'
- 'alex-key2.pub'
- username: 'maximilien'
is_admin: true
@ -21,18 +20,9 @@ active_users:
- username: 'florian'
is_admin: false
ssh_keys:
- 'florian-key1.pub'
- 'florian-key2.pub'
- username: 'adrien'
is_admin: true
ssh_keys:
- 'adrien-key1.pub'
- username: 'kokakiwi'
is_admin: true
ssh_keys:
- 'jill-key1.pub'
- 'quentin-key1.pub'
#- 'florian-key1.pub'
#- 'florian-key2.pub'
disabled_users:
- 'john.doe'

2
app/.gitignore vendored
View File

@ -1,2 +0,0 @@
env/
__pycache__

View File

@ -1,66 +0,0 @@
# Folder hierarchy
- `<module>/build/<image_name>/`: folders with dockerfiles and other necessary resources for building container images
- `<module>/config/`: folder containing configuration files, referenced by deployment file
- `<module>/secrets/`: folder containing secrets, which can be synchronized with Consul using `secretmgr.py`
- `<module>/deploy/`: folder containing the HCL file(s) necessary for deploying the module
- `<module>/integration/`: folder containing files for integration testing using docker-compose
# Secret Manager `secretmgr.py`
The Secret Manager ensures that all secrets are present where they should be in the cluster.
**You need access to the cluster** (SSH port forwarding) for it to find any secret on the cluster. Refer to the previous directory's [README](../README.md), at the bottom of the file.
## How to install `secretmgr.py` dependencies
```bash
### Install system dependencies first:
## On fedora
dnf install -y openldap-devel cyrus-sasl-devel
## On ubuntu
apt-get install -y libldap2-dev libsasl2-dev
### Now install the Python dependencies from requirements.txt:
## Either using a virtual environment
# (requires virtualenv python module)
python3 -m virtualenv env
# Must be done every time you create a new terminal window in this folder:
. env/bin/activate
# Install the deps
pip install -r requirements.txt
## Or by installing the dependencies for your system user:
pip3 install --user -r requirements.txt
```
## How to use `secretmgr.py`
Check that all secrets are correctly deployed for app `dummy`:
```bash
./secretmgr.py check dummy
```
Generate secrets for app `dummy` if they don't already exist:
```bash
./secretmgr.py gen dummy
```
Rotate secrets for app `dummy`, overwriting existing ones (be careful, this is dangerous!):
```bash
./secretmgr.py regen dummy
```
# Upgrading one of our packaged apps to a new version
1. Edit `docker-compose.yml`
2. Change the `VERSION` variable to the desired version
3. Increment the docker image tag by 1 (e.g. superboum/riot:v13 -> superboum/riot:v14)
4. Run `docker-compose build`
5. Run `docker-compose push`
6. Done (a hedged sketch of steps 2 to 5 follows)
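A minimal sketch of the flow for the riot service, with illustrative version and tag values:
```bash
# Steps 2-3: bump the app version and the image tag in docker-compose.yml
sed -i 's/VERSION: 1.10.14/VERSION: 1.10.15/' docker-compose.yml
sed -i 's|superboum/amd64_riotweb:v30|superboum/amd64_riotweb:v31|' docker-compose.yml
# Steps 4-5: rebuild and push the image
docker-compose build riot
docker-compose push riot
```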

View File

@ -1,28 +0,0 @@
FROM golang:buster as builder
WORKDIR /root
RUN git clone https://filippo.io/age && cd age/cmd/age && go build -o age .
FROM amd64/debian:buster
COPY --from=builder /root/age/cmd/age/age /usr/local/bin/age
RUN apt-get update && \
apt-get -qq -y full-upgrade && \
apt-get install -y rsync wget openssh-client unzip && \
apt-get clean && \
rm -f /var/lib/apt/lists/*_*
RUN mkdir -p /root/.ssh
WORKDIR /root
RUN wget https://releases.hashicorp.com/consul/1.8.5/consul_1.8.5_linux_amd64.zip && \
unzip consul_1.8.5_linux_amd64.zip && \
chmod +x consul && \
mv consul /usr/local/bin && \
rm consul_1.8.5_linux_amd64.zip
COPY do_backup.sh /root/do_backup.sh
CMD "/root/do_backup.sh"

View File

@ -1,20 +0,0 @@
#!/bin/sh
set -x -e
cd /root
chmod 0600 .ssh/id_ed25519
cat > .ssh/config <<EOF
Host backuphost
HostName $TARGET_SSH_HOST
Port $TARGET_SSH_PORT
User $TARGET_SSH_USER
EOF
consul kv export | \
gzip | \
age -r "$(cat /root/.ssh/id_ed25519.pub)" | \
ssh backuphost "cat > $TARGET_SSH_DIR/consul/$(date --iso-8601=minute)_consul_kv_export.gz.age"
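To restore such an export, a hedged sketch: age accepts ssh-ed25519 identities, so the private half of the key used above decrypts the archive (the file name below is illustrative):
```bash
age -d -i /root/.ssh/id_ed25519 2020-07-15T16:00_consul_kv_export.gz.age \
  | gunzip | consul kv import -
```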

View File

@ -1 +0,0 @@
result

View File

@ -1,8 +0,0 @@
## Build
```bash
docker load < $(nix-build docker.nix)
docker push superboum/backup-psql:???
```

View File

@ -1,106 +0,0 @@
#!/usr/bin/env python3
import shutil,sys,os,datetime,minio,subprocess
working_directory = "."
if 'CACHE_DIR' in os.environ: working_directory = os.environ['CACHE_DIR']
required_space_in_bytes = 20 * 1024 * 1024 * 1024
bucket = os.environ['AWS_BUCKET']
key = os.environ['AWS_ACCESS_KEY_ID']
secret = os.environ['AWS_SECRET_ACCESS_KEY']
endpoint = os.environ['AWS_ENDPOINT']
pubkey = os.environ['CRYPT_PUBLIC_KEY']
psql_host = os.environ['PSQL_HOST']
psql_user = os.environ['PSQL_USER']
s3_prefix = str(datetime.datetime.now())
files = [ "backup_manifest", "base.tar.gz", "pg_wal.tar.gz" ]
clear_paths = [ os.path.join(working_directory, f) for f in files ]
crypt_paths = [ os.path.join(working_directory, f) + ".age" for f in files ]
s3_keys = [ s3_prefix + "/" + f for f in files ]
def abort(msg):
for p in clear_paths + crypt_paths:
if os.path.exists(p):
print(f"Remove {p}")
os.remove(p)
if msg: sys.exit(msg)
else: print("success")
# Check we have enough space on disk
if shutil.disk_usage(working_directory).free < required_space_in_bytes:
abort(f"Not enough space on disk at path {working_directory} to perform a backup, aborting")
# Check postgres password is set
if 'PGPASSWORD' not in os.environ:
abort(f"You must pass postgres' password through the environment variable PGPASSWORD")
# Check our working directory is empty
if len(os.listdir(working_directory)) != 0:
abort(f"Working directory {working_directory} is not empty, aborting")
# Check Minio
client = minio.Minio(endpoint, key, secret)
if not client.bucket_exists(bucket):
abort(f"Bucket {bucket} does not exist or its access is forbidden, aborting")
# Perform the backup locally
try:
ret = subprocess.run(["pg_basebackup",
f"--host={psql_host}",
f"--username={psql_user}",
f"--pgdata={working_directory}",
f"--format=tar",
"--wal-method=stream",
"--gzip",
"--compress=6",
"--progress",
"--max-rate=5M",
])
if ret.returncode != 0:
abort(f"pg_basebackup exited, expected return code 0, got {ret.returncode}. aborting")
except Exception as e:
abort(f"pg_basebackup raised exception {e}. aborting")
# Check that the expected files are here
for p in clear_paths:
print(f"Checking that {p} exists locally")
if not os.path.exists(p):
abort(f"File {p} expected but not found, aborting")
# Cipher them
for c, e in zip(clear_paths, crypt_paths):
print(f"Ciphering {c} to {e}")
try:
ret = subprocess.run(["age", "-r", pubkey, "-o", e, c])
if ret.returncode != 0:
abort(f"age exit code is {ret}, 0 expected. aborting")
except Exception as e:
abort(f"aged raised an exception. {e}. aborting")
# Upload the backup to S3
for p, k in zip(crypt_paths, s3_keys):
try:
print(f"Uploading {p} to {k}")
result = client.fput_object(bucket, k, p)
print(
"created {0} object; etag: {1}, version-id: {2}".format(
result.object_name, result.etag, result.version_id,
),
)
except Exception as e:
abort(f"Exception {e} occurred while uploading {p}. aborting")
# Check that the files have been uploaded
for k in s3_keys:
try:
print(f"Checking that {k} exists remotely")
result = client.stat_object(bucket, k)
print(
"last-modified: {0}, size: {1}".format(
result.last_modified, result.size,
),
)
except Exception as e:
abort(f"{k} not found on S3. {e}. aborting")
abort(None)
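For the reverse path, a hedged restore sketch (the key file and target directory are illustrative; the private key is the counterpart of `CRYPT_PUBLIC_KEY`):
```bash
for f in backup_manifest base.tar.gz pg_wal.tar.gz; do
  age -d -i age_private_key.txt "$f.age" > "$f"
done
# Unpack the base backup into an empty data directory,
# then handle the WALs as described in the pg_basebackup documentation
mkdir -p /var/lib/postgresql/restore
tar -xzf base.tar.gz -C /var/lib/postgresql/restore
```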

View File

@ -1,8 +0,0 @@
{
pkgsSrc = fetchTarball {
# Latest commit on https://github.com/NixOS/nixpkgs/tree/nixos-21.11
# As of 2022-04-15
url ="https://github.com/NixOS/nixpkgs/archive/2f06b87f64bc06229e05045853e0876666e1b023.tar.gz";
sha256 = "sha256:1d7zg96xw4qsqh7c89pgha9wkq3rbi9as3k3d88jlxy2z0ns0cy2";
};
}

View File

@ -1,37 +0,0 @@
let
common = import ./common.nix;
pkgs = import common.pkgsSrc {};
python-with-my-packages = pkgs.python3.withPackages (p: with p; [
minio
]);
in
pkgs.stdenv.mkDerivation {
name = "backup-psql";
src = pkgs.lib.sourceFilesBySuffices ./. [ ".py" ];
buildInputs = [
python-with-my-packages
pkgs.age
pkgs.postgresql_14
];
buildPhase = ''
cat > backup-psql <<EOF
#!${pkgs.bash}/bin/bash
export PYTHONPATH=${python-with-my-packages}/${python-with-my-packages.sitePackages}
export PATH=${python-with-my-packages}/bin:${pkgs.age}/bin:${pkgs.postgresql_14}/bin
${python-with-my-packages}/bin/python3 $out/lib/backup-psql.py
EOF
chmod +x backup-psql
'';
installPhase = ''
mkdir -p $out/{bin,lib}
cp *.py $out/lib/backup-psql.py
cp backup-psql $out/bin/backup-psql
'';
}

View File

@ -1,11 +0,0 @@
let
common = import ./common.nix;
app = import ./default.nix;
pkgs = import common.pkgsSrc {};
in
pkgs.dockerTools.buildImage {
name = "superboum/backup-psql-docker";
config = {
Cmd = [ "${app}/bin/backup-psql" ];
};
}

View File

@ -1,171 +0,0 @@
job "backup_daily" {
datacenters = ["dc1"]
type = "batch"
priority = "60"
periodic {
cron = "@daily"
// Do not allow overlapping runs.
prohibit_overlap = true
}
group "backup-dovecot" {
constraint {
attribute = "${attr.unique.hostname}"
operator = "="
value = "digitale"
}
task "main" {
driver = "docker"
config {
image = "restic/restic:0.12.1"
entrypoint = [ "/bin/sh", "-c" ]
args = [ "restic backup /mail && restic forget --keep-within 1m1d --keep-within-weekly 3m --keep-within-monthly 1y && restic prune --max-unused 50% --max-repack-size 2G && restic check" ]
volumes = [
"/mnt/ssd/mail:/mail"
]
}
template {
data = <<EOH
AWS_ACCESS_KEY_ID={{ key "secrets/email/dovecot/backup_aws_access_key_id" }}
AWS_SECRET_ACCESS_KEY={{ key "secrets/email/dovecot/backup_aws_secret_access_key" }}
RESTIC_REPOSITORY={{ key "secrets/email/dovecot/backup_restic_repository" }}
RESTIC_PASSWORD={{ key "secrets/email/dovecot/backup_restic_password" }}
EOH
destination = "secrets/env_vars"
env = true
}
resources {
cpu = 500
memory = 200
}
restart {
attempts = 2
interval = "30m"
delay = "15s"
mode = "fail"
}
}
}
group "backup-plume" {
constraint {
attribute = "${attr.unique.hostname}"
operator = "="
value = "digitale"
}
task "main" {
driver = "docker"
config {
image = "restic/restic:0.12.1"
entrypoint = [ "/bin/sh", "-c" ]
args = [ "restic backup /plume && restic forget --keep-within 1m1d --keep-within-weekly 3m --keep-within-monthly 1y && restic prune --max-unused 50% --max-repack-size 2G && restic check" ]
volumes = [
"/mnt/ssd/plume/media:/plume"
]
}
template {
data = <<EOH
AWS_ACCESS_KEY_ID={{ key "secrets/plume/backup_aws_access_key_id" }}
AWS_SECRET_ACCESS_KEY={{ key "secrets/plume/backup_aws_secret_access_key" }}
RESTIC_REPOSITORY={{ key "secrets/plume/backup_restic_repository" }}
RESTIC_PASSWORD={{ key "secrets/plume/backup_restic_password" }}
EOH
destination = "secrets/env_vars"
env = true
}
resources {
cpu = 500
memory = 200
}
restart {
attempts = 2
interval = "30m"
delay = "15s"
mode = "fail"
}
}
}
group "backup-consul" {
task "consul-kv-export" {
driver = "docker"
lifecycle {
hook = "prestart"
sidecar = false
}
config {
image = "consul:1.11.2"
network_mode = "host"
entrypoint = [ "/bin/sh", "-c" ]
args = [ "/bin/consul kv export > $NOMAD_ALLOC_DIR/consul.json" ]
}
env {
CONSUL_HTTP_ADDR = "http://consul.service.2.cluster.deuxfleurs.fr:8500"
}
resources {
cpu = 200
memory = 200
}
restart {
attempts = 2
interval = "30m"
delay = "15s"
mode = "fail"
}
}
task "restic-backup" {
driver = "docker"
config {
image = "restic/restic:0.12.1"
entrypoint = [ "/bin/sh", "-c" ]
args = [ "restic backup $NOMAD_ALLOC_DIR/consul.json && restic forget --keep-within 1m1d --keep-within-weekly 3m --keep-within-monthly 1y && restic prune --max-unused 50% --max-repack-size 2G && restic check" ]
}
template {
data = <<EOH
AWS_ACCESS_KEY_ID={{ key "secrets/backup/consul/backup_aws_access_key_id" }}
AWS_SECRET_ACCESS_KEY={{ key "secrets/backup/consul/backup_aws_secret_access_key" }}
RESTIC_REPOSITORY={{ key "secrets/backup/consul/backup_restic_repository" }}
RESTIC_PASSWORD={{ key "secrets/backup/consul/backup_restic_password" }}
EOH
destination = "secrets/env_vars"
env = true
}
resources {
cpu = 200
memory = 200
}
restart {
attempts = 2
interval = "30m"
delay = "15s"
mode = "fail"
}
}
}
}
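Operating this job is standard Nomad periodic-batch handling; a hedged sketch (the HCL file name is illustrative):
```bash
nomad job run backup_daily.hcl        # submit or update the job
nomad job periodic force backup_daily # trigger an immediate run for testing
```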

View File

@ -1,55 +0,0 @@
job "backup_weekly" {
datacenters = ["dc1"]
type = "batch"
priority = "60"
periodic {
cron = "@weekly"
// Do not allow overlapping runs.
prohibit_overlap = true
}
group "backup-psql" {
task "main" {
driver = "docker"
config {
image = "superboum/backup-psql-docker:gyr3aqgmhs0hxj0j9hkrdmm1m07i8za2"
volumes = [
// Mount a cache on the hard disk to avoid filling the SSD
"/mnt/storage/tmp_bckp_psql:/mnt/cache"
]
}
template {
data = <<EOH
CACHE_DIR=/mnt/cache
AWS_BUCKET=backups-pgbasebackup
AWS_ENDPOINT=s3.deuxfleurs.shirokumo.net
AWS_ACCESS_KEY_ID={{ key "secrets/backup/psql/aws_access_key_id" }}
AWS_SECRET_ACCESS_KEY={{ key "secrets/backup/psql/aws_secret_access_key" }}
CRYPT_PUBLIC_KEY={{ key "secrets/backup/psql/crypt_public_key" }}
PSQL_HOST=psql-proxy.service.2.cluster.deuxfleurs.fr
PSQL_USER={{ key "secrets/postgres/keeper/pg_repl_username" }}
PGPASSWORD={{ key "secrets/postgres/keeper/pg_repl_pwd" }}
EOH
destination = "secrets/env_vars"
env = true
}
resources {
cpu = 200
memory = 200
}
restart {
attempts = 2
interval = "30m"
delay = "15s"
mode = "fail"
}
}
}
}

View File

@ -1,67 +0,0 @@
job "backup_periodic" {
datacenters = ["dc1"]
type = "batch"
periodic {
// Launch every hour
cron = "0 * * * * *"
// Do not allow overlapping runs.
prohibit_overlap = true
}
task "backup-consul" {
driver = "docker"
config {
image = "lxpz/backup_consul:12"
volumes = [
"secrets/id_ed25519:/root/.ssh/id_ed25519",
"secrets/id_ed25519.pub:/root/.ssh/id_ed25519.pub",
"secrets/known_hosts:/root/.ssh/known_hosts"
]
network_mode = "host"
}
env {
CONSUL_HTTP_ADDR = "http://consul.service.2.cluster.deuxfleurs.fr:8500"
}
template {
data = <<EOH
TARGET_SSH_USER={{ key "secrets/backup/target_ssh_user" }}
TARGET_SSH_PORT={{ key "secrets/backup/target_ssh_port" }}
TARGET_SSH_HOST={{ key "secrets/backup/target_ssh_host" }}
TARGET_SSH_DIR={{ key "secrets/backup/target_ssh_dir" }}
EOH
destination = "secrets/env_vars"
env = true
}
template {
data = "{{ key \"secrets/backup/id_ed25519\" }}"
destination = "secrets/id_ed25519"
}
template {
data = "{{ key \"secrets/backup/id_ed25519.pub\" }}"
destination = "secrets/id_ed25519.pub"
}
template {
data = "{{ key \"secrets/backup/target_ssh_fingerprint\" }}"
destination = "secrets/known_hosts"
}
resources {
memory = 200
}
restart {
attempts = 2
interval = "30m"
delay = "15s"
mode = "fail"
}
}
}

View File

@ -1 +0,0 @@
USER Backup AWS access key ID

View File

@ -1 +0,0 @@
USER Backup AWS secret access key

View File

@ -1 +0,0 @@
USER Restic password to encrypt backups

View File

@ -1 +0,0 @@
USER Restic repository, e.g. s3:https://s3.garage.tld

View File

@ -1 +0,0 @@
USER_LONG Private ed25519 key of the container doing the backup

View File

@ -1 +0,0 @@
USER Public ed25519 key of the container doing the backup (this key must be in authorized_keys on the backup target host)

View File

@ -1 +0,0 @@
USER Minio access key

View File

@ -1 +0,0 @@
USER Minio secret key

View File

@ -1 +0,0 @@
USER A private key to decrypt backups from age

View File

@ -1 +0,0 @@
USER A public key to encrypt backups with age

View File

@ -1 +0,0 @@
USER Directory where to store backups on target host

View File

@ -1 +0,0 @@
USER SSH fingerprint of the target machine (format: copy here the corresponding line from your known_hosts file)

View File

@ -1 +0,0 @@
USER Hostname of the backup target host

View File

@ -1 +0,0 @@
USER SSH port number to connect to the target host

View File

@ -1 +0,0 @@
USER SSH username to log in as on the target host

View File

@ -1,83 +0,0 @@
job "bagage" {
datacenters = ["dc1"]
type = "service"
priority = 90
constraint {
attribute = "${attr.cpu.arch}"
value = "amd64"
}
group "main" {
count = 1
network {
port "web_port" { to = 8080 }
port "ssh_port" {
static = 2222
to = 2222
}
}
task "server" {
driver = "docker"
config {
image = "superboum/amd64_bagage:v11"
readonly_rootfs = false
volumes = [
"secrets/id_rsa:/id_rsa"
]
ports = [ "web_port", "ssh_port" ]
}
env {
BAGAGE_LDAP_ENDPOINT = "bottin2.service.2.cluster.deuxfleurs.fr:389"
}
resources {
memory = 500
}
template {
data = "{{ key \"secrets/bagage/id_rsa\" }}"
destination = "secrets/id_rsa"
}
service {
name = "bagage-ssh"
port = "ssh_port"
address_mode = "host"
tags = [
"bagage",
"(diplonat (tcp_port 2222))"
]
}
service {
name = "bagage-webdav"
tags = [
"bagage",
"traefik.enable=true",
"traefik.frontend.entryPoints=https,http",
"traefik.frontend.rule=Host:bagage.deuxfleurs.fr",
"tricot bagage.deuxfleurs.fr",
]
port = "web_port"
address_mode = "host"
check {
type = "tcp"
port = "web_port"
address_mode = "host"
interval = "60s"
timeout = "5s"
check_restart {
limit = 3
grace = "90s"
ignore_warnings = false
}
}
}
}
}
}

View File

@ -1 +0,0 @@
CMD ssh-keygen -q -f >(cat) -N "" <<< y 2>/dev/null 1>&2 ; true

View File

@ -1,2 +0,0 @@
docker load < $(nix-build docker.nix)
docker push superboum/cryptpad:???

View File

@ -1,8 +0,0 @@
{
pkgsSrc = fetchTarball {
# Latest commit on https://github.com/NixOS/nixpkgs/tree/nixos-21.11
# As of 2022-04-15
url ="https://github.com/NixOS/nixpkgs/archive/2f06b87f64bc06229e05045853e0876666e1b023.tar.gz";
sha256 = "sha256:1d7zg96xw4qsqh7c89pgha9wkq3rbi9as3k3d88jlxy2z0ns0cy2";
};
}

View File

@ -1,10 +0,0 @@
let
common = import ./common.nix;
pkgs = import common.pkgsSrc {};
in
pkgs.dockerTools.buildImage {
name = "superboum/cryptpad";
config = {
Cmd = [ "${pkgs.cryptpad}/bin/cryptpad" ];
};
}

View File

@ -1,283 +0,0 @@
/* globals module */
/* DISCLAIMER:
There are two recommended methods of running a CryptPad instance:
1. Using a standalone nodejs server without HTTPS (suitable for local development)
2. Using NGINX to serve static assets and to handle HTTPS for API server's websocket traffic
We do not officially recommend or support Apache, Docker, Kubernetes, Traefik, or any other configuration.
Support requests for such setups should be directed to their authors.
If you're having difficulty configuring your instance,
we suggest that you join the project's IRC/Matrix channel.
If you don't have any difficulty configuring your instance and you'd like to
support us for the work that went into making it pain-free we are quite happy
to accept donations via our opencollective page: https://opencollective.com/cryptpad
*/
module.exports = {
/* CryptPad is designed to serve its content over two domains.
* Account passwords and cryptographic content is handled on the 'main' domain,
* while the user interface is loaded on a 'sandbox' domain
* which can only access information which the main domain willingly shares.
*
* In the event of an XSS vulnerability in the UI (that's bad)
* this system prevents attackers from gaining access to your account (that's good).
*
* Most problems with new instances are related to this system blocking access
* because of incorrectly configured sandboxes. If you only see a white screen
* when you try to load CryptPad, this is probably the cause.
*
* PLEASE READ THE FOLLOWING COMMENTS CAREFULLY.
*
*/
/* httpUnsafeOrigin is the URL that clients will enter to load your instance.
* Any other URL that somehow points to your instance is supposed to be blocked.
* The default provided below assumes you are loading CryptPad from a server
* which is running on the same machine, using port 3000.
*
* In a production instance this should be available ONLY over HTTPS
* using the default port for HTTPS (443) ie. https://cryptpad.fr
* In such a case this should be also handled by NGINX, as documented in
* cryptpad/docs/example.nginx.conf (see the $main_domain variable)
*
*/
httpUnsafeOrigin: 'http://localhost:3000',
/* httpSafeOrigin is the URL that is used for the 'sandbox' described above.
* If you're testing or developing with CryptPad on your local machine then
* it is appropriate to leave this blank. The default behaviour is to serve
* the main domain over port 3000 and to serve the sandbox content over port 3001.
*
* This is not appropriate in a production environment where invasive networks
* may filter traffic going over abnormal ports.
* To correctly configure your production instance you must provide a URL
* with a different domain (a subdomain is sufficient).
* It will be used to load the UI in our 'sandbox' system.
*
* This value corresponds to the $sandbox_domain variable
* in the example nginx file.
*
* Note that in order for the sandboxing system to be effective
* httpSafeOrigin must be different from httpUnsafeOrigin.
*
* CUSTOMIZE AND UNCOMMENT THIS FOR PRODUCTION INSTALLATIONS.
*/
// httpSafeOrigin: "https://some-other-domain.xyz",
/* httpAddress specifies the address on which the nodejs server
* should be accessible. By default it will listen on 127.0.0.1
* (IPv4 localhost on most systems). If you want it to listen on
* all addresses, including IPv6, set this to '::'.
*
*/
httpAddress: '::',
/* httpPort specifies on which port the nodejs server should listen.
* By default it will serve content over port 3000, which is suitable
* for both local development and for use with the provided nginx example,
* which will proxy websocket traffic to your node server.
*
*/
//httpPort: 3000,
/* httpSafePort allows you to specify an alternative port from which
* the node process should serve sandboxed assets. The default value is
* that of your httpPort + 1. You probably don't need to change this.
*
*/
//httpSafePort: 3001,
/* CryptPad will launch a child process for every core available
* in order to perform CPU-intensive tasks in parallel.
* Some host environments may have a very large number of cores available
* or you may want to limit how much computing power CryptPad can take.
* If so, set 'maxWorkers' to a positive integer.
*/
// maxWorkers: 4,
/* =====================
* Admin
* ===================== */
/*
* CryptPad contains an administration panel. Its access is restricted to specific
* users using the following list.
* To give access to the admin panel to a user account, just add their public signing
* key, which can be found on the settings page for registered users.
* Entries should be strings separated by a comma.
*/
/*
adminKeys: [
//"[cryptpad-user1@my.awesome.website/YZgXQxKR0Rcb6r6CmxHPdAGLVludrAF2lEnkbx1vVOo=]",
],
*/
/* =====================
* STORAGE
* ===================== */
/* Pads that are not 'pinned' by any registered user can be set to expire
* after a configurable number of days of inactivity (default 90 days).
* The value can be changed or set to false to remove expiration.
* Expired pads can then be removed using a cron job calling the
* `evict-inactive.js` script with node
*
* defaults to 90 days if nothing is provided
*/
//inactiveTime: 90, // days
/* CryptPad archives some data instead of deleting it outright.
* This archived data still takes up space and so you'll probably still want to
* remove these files after a brief period.
*
* cryptpad/scripts/evict-inactive.js is intended to be run daily
* from a crontab or similar scheduling service.
*
* The intent with this feature is to provide a safety net in case of accidental
* deletion. Set this value to the number of days you'd like to retain
* archived data before it's removed permanently.
*
* defaults to 15 days if nothing is provided
*/
//archiveRetentionTime: 15,
/* It's possible to configure your instance to remove data
* stored on behalf of inactive accounts. Set 'accountRetentionTime'
* to the number of days an account can remain idle before its
* documents and other account data is removed.
*
* Leave this value commented out to preserve all data stored
* by user accounts regardless of inactivity.
*/
//accountRetentionTime: 365,
/* Starting with CryptPad 3.23.0, the server automatically runs
* the script responsible for removing inactive data according to
* your configured definition of inactivity. Set this value to `true`
* if you prefer not to remove inactive data, or if you prefer to
* do so manually using `scripts/evict-inactive.js`.
*/
//disableIntegratedEviction: true,
/* Max Upload Size (bytes)
* this sets the maximum size of any one file uploaded to the server.
* anything larger than this size will be rejected
* defaults to 20MB if no value is provided
*/
//maxUploadSize: 20 * 1024 * 1024,
/* Users with premium accounts (those with a plan included in their customLimit)
* can benefit from an increased upload size limit. By default they are restricted to the same
* upload size as any other registered user.
*
*/
//premiumUploadSize: 100 * 1024 * 1024,
/* =====================
* DATABASE VOLUMES
* ===================== */
/*
* CryptPad stores each document in an individual file on your hard drive.
* Specify a directory where files should be stored.
* It will be created automatically if it does not already exist.
*/
filePath: './root/tmp/mut/datastore/',
/* CryptPad offers the ability to archive data for a configurable period
* before deleting it, allowing a means of recovering data in the event
* that it was deleted accidentally.
*
* To set the location of this archive directory to a custom value, change
* the path below:
*/
archivePath: './root/tmp/mut/data/archive',
/* CryptPad allows logged in users to request that particular documents be
* stored by the server indefinitely. This is called 'pinning'.
* Pin requests are stored in a pin-store. The location of this store is
* defined here.
*/
pinPath: './root/tmp/mut/data/pins',
/* if you would like the list of scheduled tasks to be stored in
a custom location, change the path below:
*/
taskPath: './root/tmp/mut/data/tasks',
/* if you would like users' authenticated blocks to be stored in
a custom location, change the path below:
*/
blockPath: './root/tmp/mut/block',
/* CryptPad allows logged in users to upload encrypted files. Files/blobs
* are stored in a 'blob-store'. Set its location here.
*/
blobPath: './root/tmp/mut/blob',
/* CryptPad stores incomplete blobs in a 'staging' area until they are
* fully uploaded. Set its location here.
*/
blobStagingPath: './root/tmp/mut/data/blobstage',
decreePath: './root/tmp/mut/data/decrees',
/* CryptPad supports logging events directly to the disk in a 'logs' directory
* Set its location here, or set it to false (or nothing) if you'd rather not log
*/
logPath: './root/tmp/mut/data/logs',
/* =====================
* Debugging
* ===================== */
/* CryptPad can log activity to stdout
* This may be useful for debugging
*/
logToStdout: true,
/* CryptPad can be configured to log more or less
* the various settings are listed below by order of importance
*
* silly, verbose, debug, feedback, info, warn, error
*
* Choose the least important level of logging you wish to see.
* For example, a 'silly' logLevel will display everything,
* while 'info' will display 'info', 'warn', and 'error' logs
*
* This will affect both logging to the console and the disk.
*/
logLevel: 'debug',
/* clients can use the /settings/ app to opt out of usage feedback
* which informs the server of things like how much each app is being
* used, and whether certain clientside features are supported by
* the client's browser. The intent is to provide feedback to the admin
* such that the service can be improved. Enable this with `true`
* and ignore feedback with `false` or by commenting the attribute
*
* You will need to set your logLevel to include 'feedback'. Set this
* to false if you'd like to exclude feedback from your logs.
*/
logFeedback: false,
/* CryptPad supports verbose logging
* (false by default)
*/
verbose: true,
/* Surplus information:
*
* 'installMethod' is included in server telemetry to voluntarily
* indicate how many instances are using unofficial installation methods
* such as Docker.
*
*/
installMethod: 'unspecified',
};
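For reference, the integrated eviction mentioned in the comments above can also be triggered by hand with the script they point at; a minimal sketch, assuming the CryptPad checkout lives at /path/to/cryptpad (the path is an assumption):

```bash
cd /path/to/cryptpad            # assumption: wherever this config's CryptPad checkout lives
node scripts/evict-inactive.js  # applies your inactivity rules (e.g. accountRetentionTime) once
```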

@@ -1,27 +0,0 @@
{
"suffix": "dc=deuxfleurs,dc=fr",
"bind": "0.0.0.0:389",
"consul_host": "http://consul.service.2.cluster.deuxfleurs.fr:8500",
"log_level": "debug",
"acl": [
"*,dc=deuxfleurs,dc=fr::read:*:* !userpassword !user_secret !alternate_user_secrets !garage_s3_secret_key",
"*::read modify:SELF:*",
"ANONYMOUS::bind:*,ou=users,dc=deuxfleurs,dc=fr:",
"ANONYMOUS::bind:cn=admin,dc=deuxfleurs,dc=fr:",
"*,ou=services,ou=users,dc=deuxfleurs,dc=fr::bind:*,ou=users,dc=deuxfleurs,dc=fr:*",
"*,ou=services,ou=users,dc=deuxfleurs,dc=fr::read:*:*",
"*:cn=asso_deuxfleurs,ou=groups,dc=deuxfleurs,dc=fr:add:*,ou=invitations,dc=deuxfleurs,dc=fr:*",
"ANONYMOUS::bind:*,ou=invitations,dc=deuxfleurs,dc=fr:",
"*,ou=invitations,dc=deuxfleurs,dc=fr::delete:SELF:*",
"*:cn=asso_deuxfleurs,ou=groups,dc=deuxfleurs,dc=fr:add:*,ou=users,dc=deuxfleurs,dc=fr:*",
"*,ou=invitations,dc=deuxfleurs,dc=fr::add:*,ou=users,dc=deuxfleurs,dc=fr:*",
"*:cn=asso_deuxfleurs,ou=groups,dc=deuxfleurs,dc=fr:modifyAdd:cn=email,ou=groups,dc=deuxfleurs,dc=fr:*",
"*,ou=invitations,dc=deuxfleurs,dc=fr::modifyAdd:cn=email,ou=groups,dc=deuxfleurs,dc=fr:*",
"cn=admin,dc=deuxfleurs,dc=fr::read add modify delete:*:*",
"*:cn=admin,ou=groups,dc=deuxfleurs,dc=fr:read add modify delete:*:*"
]
}
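Once bottin is running with these ACLs, a quick way to sanity-check them is a plain LDAP bind and search; a sketch, assuming the server is reachable on localhost:389 and you know the cn=admin password (both assumptions):

```bash
# -x: simple bind, -W: prompt for the admin password
ldapsearch -x -H ldap://localhost:389 \
  -D "cn=admin,dc=deuxfleurs,dc=fr" -W \
  -b "ou=users,dc=deuxfleurs,dc=fr" "(objectClass=*)"
```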

@@ -1 +0,0 @@
USER Garage access key for Guichet profile pictures

@@ -1 +0,0 @@
USER Garage secret key for Guichet profile pictures

@@ -1 +0,0 @@
USER SMTP password

@@ -1 +0,0 @@
USER SMTP username

@@ -1,108 +0,0 @@
version: '3.4'
services:
# Instant Messaging
riot:
build:
context: ./im/build/riotweb
args:
# https://github.com/vector-im/riot-web/releases
VERSION: 1.10.15
image: superboum/amd64_riotweb:v30
synapse:
build:
context: ./im/build/matrix-synapse
args:
# https://github.com/matrix-org/synapse/releases
VERSION: 1.61.1
# https://github.com/matrix-org/synapse-s3-storage-provider/commits/main
# Update with the latest commit on main each time you update the synapse version
# otherwise synapse may fail to launch due to incompatibility issues
# see this issue for an example: https://github.com/matrix-org/synapse-s3-storage-provider/issues/64
S3_VERSION: ffd3fa477321608e57d27644197e721965e0e858
image: superboum/amd64_synapse:v53
# Email
sogo:
build:
context: ./email/build/sogo
args:
# fake for now
VERSION: 5.0.0
image: superboum/amd64_sogo:v7
alps:
build:
context: ./email/build/alps
args:
VERSION: 9bafa64b9d
image: superboum/amd64_alps:v1
dovecot:
build:
context: ./email/build/dovecot
image: superboum/amd64_dovecot:v6
# VoIP
jitsi-meet:
build:
context: ./jitsi/build/jitsi-meet
args:
# https://github.com/jitsi/jitsi-meet
MEET_TAG: stable/jitsi-meet_6826
image: superboum/amd64_jitsi_meet:v5
jitsi-conference-focus:
build:
context: ./jitsi/build/jitsi-conference-focus
args:
# https://github.com/jitsi/jicofo
JICOFO_TAG: stable/jitsi-meet_6826
image: superboum/amd64_jitsi_conference_focus:v9
jitsi-videobridge:
build:
context: ./jitsi/build/jitsi-videobridge
args:
# https://github.com/jitsi/jitsi-videobridge
# note: JVB is not tagged with non-stable tags
JVB_TAG: stable/jitsi-meet_6826
image: superboum/amd64_jitsi_videobridge:v20
jitsi-xmpp:
build:
context: ./jitsi/build/jitsi-xmpp
args:
MEET_TAG: stable/jitsi-meet_6826
PROSODY_VERSION: 0.11.12-1
image: superboum/amd64_jitsi_xmpp:v10
plume:
build:
context: ./plume/build/plume
args:
VERSION: 8709f6cf9f8ff7e3c5ee7ea699ee7c778e92fefc
image: superboum/plume:v8
postfix:
build:
context: ./email/build/postfix
args:
# https://packages.debian.org/fr/buster/postfix
VERSION: 3.4.14-0+deb10u1
image: superboum/amd64_postfix:v3
postgres:
build:
args:
# https://github.com/sorintlab/stolon/releases
STOLON_VERSION: 3bb7499f815f77140551eb762b200cf4557f57d3
context: ./postgres/build/postgres
image: superboum/amd64_postgres:v11
backup-consul:
build:
context: ./backup/build/backup-consul
image: lxpz/backup_consul:12
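Each service pins an upstream VERSION (or commit) as a build arg and tags the resulting image; after bumping a version, rebuilding and publishing a single image looks like this (service names are taken from the file above; pushing assumes you are logged in to the registry):

```bash
docker-compose build synapse   # rebuilds superboum/amd64_synapse:v53 with the pinned VERSION
docker-compose push synapse    # publish the freshly built tag
```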

@@ -1,127 +0,0 @@
job "drone-ci" {
datacenters = ["dc1"]
type = "service"
group "server" {
count = 1
network {
port "web_port" {
to = 80
}
}
task "drone_server" {
driver = "docker"
config {
image = "drone/drone:2.12.0"
ports = [ "web_port" ]
}
template {
data = <<EOH
DRONE_GITEA_SERVER=https://git.deuxfleurs.fr
DRONE_GITEA_CLIENT_ID={{ key "secrets/drone-ci/oauth_client_id" }}
DRONE_GITEA_CLIENT_SECRET={{ key "secrets/drone-ci/oauth_client_secret" }}
DRONE_RPC_SECRET={{ key "secrets/drone-ci/rpc_secret" }}
DRONE_SERVER_HOST=drone.deuxfleurs.fr
DRONE_SERVER_PROTO=https
DRONE_DATABASE_SECRET={{ key "secrets/drone-ci/db_enc_secret" }}
DRONE_COOKIE_SECRET={{ key "secrets/drone-ci/cookie_secret" }}
AWS_ACCESS_KEY_ID={{ key "secrets/drone-ci/s3_ak" }}
AWS_SECRET_ACCESS_KEY={{ key "secrets/drone-ci/s3_sk" }}
AWS_DEFAULT_REGION=garage
AWS_REGION=garage
DRONE_S3_BUCKET={{ key "secrets/drone-ci/s3_bucket" }}
DRONE_S3_ENDPOINT=https://garage.deuxfleurs.fr
DRONE_S3_PATH_STYLE=true
DRONE_DATABASE_DRIVER=postgres
DRONE_DATABASE_DATASOURCE=postgres://{{ key "secrets/drone-ci/db_user" }}:{{ key "secrets/drone-ci/db_pass" }}@psql-proxy.service.2.cluster.deuxfleurs.fr:5432/drone?sslmode=disable
DRONE_USER_CREATE=username:lx-admin,admin:true
DRONE_REGISTRATION_CLOSED=true
DRONE_LOGS_TEXT=true
DRONE_LOGS_PRETTY=true
DRONE_LOGS_DEBUG=true
DOCKER_API_VERSION=1.39
EOH
destination = "secrets/env"
env = true
}
resources {
cpu = 100
memory = 100
}
service {
name = "drone"
tags = [
"drone",
"traefik.enable=true",
"traefik.frontend.entryPoints=https,http",
"traefik.frontend.rule=Host:drone.deuxfleurs.fr",
"tricot drone.deuxfleurs.fr",
]
port = "web_port"
address_mode = "host"
check {
type = "http"
protocol = "http"
port = "web_port"
path = "/"
interval = "60s"
timeout = "5s"
check_restart {
limit = 3
grace = "600s"
ignore_warnings = false
}
}
}
}
}
/*
group "runner" {
count = 3
constraint {
operator = "distinct_hosts"
value = "true"
}
task "drone_runner" {
driver = "docker"
config {
network_mode = "host"
#image = "drone/drone-runner-nomad:latest"
image = "drone/drone-runner-docker:1.6.3"
volumes = [
"/var/run/docker.sock:/var/run/docker.sock"
]
}
template {
data = <<EOH
DRONE_RPC_SECRET={{ key "secrets/drone-ci/rpc_secret" }}
DRONE_RPC_HOST=drone.deuxfleurs.fr
DRONE_RPC_PROTO=https
DRONE_RUNNER_NAME={{ env "node.unique.name" }}
DRONE_DEBUG=true
NOMAD_ADDR=http://nomad-client.service.2.cluster.deuxfleurs.fr:4646
DOCKER_API_VERSION=1.39
EOH
destination = "secrets/env"
env = true
}
resources {
memory = 40
cpu = 50
}
}
}
*/
}
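A job file like this goes through the usual Nomad workflow; a sketch, assuming it is saved as drone-ci.hcl and that the address below (taken from the runner config in this job) points at your cluster:

```bash
export NOMAD_ADDR=http://nomad-client.service.2.cluster.deuxfleurs.fr:4646
nomad job plan drone-ci.hcl   # dry run: shows the scheduler diff without applying it
nomad job run drone-ci.hcl    # submits the job; the 'server' group gets scheduled
```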

@@ -1,69 +0,0 @@
## Install Debian
We recommend Debian Bullseye
## Install Docker CE from docker.io
Do not use the docker engine shipped by Debian
Doc:
- https://docs.docker.com/engine/install/debian/
- https://docs.docker.com/compose/install/
On a fresh install, as root:
```bash
apt-get remove -y docker docker-engine docker.io containerd runc
apt-get update
apt-get install apt-transport-https ca-certificates curl gnupg lsb-release
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian $(lsb_release -cs) stable" | tee /etc/apt/sources.list.d/docker.list > /dev/null
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
```
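A quick sanity check that the Docker CE packages (and not Debian's docker.io build) are now in charge:

```bash
docker --version             # should report the docker-ce version you just installed
docker-compose --version     # 1.29.2, per the pinned download above
docker run --rm hello-world  # verifies the daemon can pull and run containers
```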
## Install the runner
*This is our Nix runner, version 2; previously we had another way to start Nix runners. This one has a proper way to handle concurrency, requires less boilerplate, and should be safer and more idiomatic.*
```bash
wget https://git.deuxfleurs.fr/Deuxfleurs/infrastructure/raw/branch/main/app/drone-ci/integration/nix.conf
wget https://git.deuxfleurs.fr/Deuxfleurs/infrastructure/raw/branch/main/app/drone-ci/integration/docker-compose.yml
# Edit the docker-compose.yml to adapt its variables to your needs,
# especially the capacity value and its name.
COMPOSE_PROJECT_NAME=drone DRONE_SECRET=xxx docker-compose up -d
```
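Before moving on, it is worth checking that the runner actually registered against the server; a sketch using the services defined in the docker-compose.yml you just downloaded:

```bash
COMPOSE_PROJECT_NAME=drone docker-compose ps   # nix-daemon and drone-runner should be Up
COMPOSE_PROJECT_NAME=drone docker-compose logs --tail=20 drone-runner
# a healthy runner typically logs: successfully pinged the remote server
```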
That's all folks.
## Check if a given job is built by your runner
```bash
export URL=https://drone.deuxfleurs.fr
export REPO=Deuxfleurs/garage
export BUILD=1312
curl ${URL}/api/repos/${REPO}/builds/${BUILD} \
| jq -c '[.stages[] | { name: .name, machine: .machine }]'
```
It will give you a result like the following:
```json
[{"name":"default","machine":"1686a"},{"name":"release-linux-x86_64","machine":"vimaire"},{"name":"release-linux-i686","machine":"carcajou"},{"name":"release-linux-aarch64","machine":"caribou"},{"name":"release-linux-armv6l","machine":"cariacou"},{"name":"refresh-release-page","machine":null}]
```
## Random note
*This part might be deprecated!*
This setup is mainly there to allow Nix builds with some caching.
To use the cache in Drone, you must mark your repository as trusted.
The command-line tool does not work (it claims to have marked your repository as trusted, but it did nothing):
the only way is to connect to the DB and set the `repo_trusted` field of your repo to true.
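A minimal sketch of that manual step, assuming psql access to the drone database from the Nomad job above; the table and column names (repos, repo_slug, repo_trusted) follow Drone's Postgres schema and are worth double-checking against your version:

```bash
psql "postgres://<db_user>:<db_pass>@psql-proxy.service.2.cluster.deuxfleurs.fr:5432/drone" \
  -c "UPDATE repos SET repo_trusted = true WHERE repo_slug = 'Deuxfleurs/garage';"
```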

@@ -1,54 +0,0 @@
version: '3.4'
services:
nix-daemon:
image: nixpkgs/nix:nixos-22.05
restart: always
command: nix-daemon
privileged: true
volumes:
- "nix:/nix"
- "./nix.conf:/etc/nix/nix.conf:ro"
drone-runner:
image: drone/drone-runner-docker:latest
restart: always
environment:
- DRONE_RPC_PROTO=https
- DRONE_RPC_HOST=drone.deuxfleurs.fr
- DRONE_RPC_SECRET=${DRONE_SECRET}
- DRONE_RUNNER_CAPACITY=3
- DRONE_DEBUG=true
- DRONE_LOGS_TRACE=true
- DRONE_RPC_DUMP_HTTP=true
- DRONE_RPC_DUMP_HTTP_BODY=true
- DRONE_RUNNER_NAME=i_forgot_to_change_my_runner_name
- DRONE_RUNNER_LABELS=nix-daemon:1
# we should put "nix:/nix:ro" but it is not supported by
# drone-runner-docker because the dependency envconfig does
# not support having two colons (:) in the same stanza.
# Without the RO flag (or using docker userns), build isolation
# is broken.
# https://discourse.drone.io/t/allow-mounting-a-host-volume-as-read-only/10071
# https://github.com/kelseyhightower/envconfig/pull/153
#
# A workaround for isolation is to configure docker with a userns,
# so even if the folder is writable to root, it is not writable by any
# non-privileged docker daemon run by Drone!
- DRONE_RUNNER_VOLUMES=drone_nix:/nix
- DRONE_RUNNER_ENVIRON=NIX_REMOTE:daemon
ports:
- "3000:3000/tcp"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
drone-gc:
image: drone/gc:latest
restart: always
environment:
- GC_DEBUG=true
- GC_CACHE=10gb
- GC_INTERVAL=10m
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
volumes:
nix:
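The userns workaround mentioned in the comments above is configured on the host daemon, not in this compose file; a sketch, assuming no other daemon options are already set:

```bash
# /etc/docker/daemon.json -- remap container root to an unprivileged subordinate UID range
cat > /etc/docker/daemon.json <<'EOF'
{
  "userns-remap": "default"
}
EOF
systemctl restart docker
```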

@@ -1,9 +0,0 @@
substituters = https://cache.nixos.org https://nix.web.deuxfleurs.fr
trusted-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY= nix.web.deuxfleurs.fr:eTGL6kvaQn6cDR/F9lDYUIP9nCVR/kkshYfLDJf1yKs=
max-jobs = auto
cores = 0
log-lines = 200
filter-syscalls = true
sandbox = true
keep-outputs = true
keep-derivations = true
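To check that the extra substituter is reachable (and what priority it advertises), you can fetch its cache-info document; the values shown here are illustrative:

```bash
curl -s https://nix.web.deuxfleurs.fr/nix-cache-info
# StoreDir: /nix/store
# WantMassQuery: 1
# Priority: 50   <- illustrative; depends on the cache configuration
```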

@@ -1 +0,0 @@
CMD openssl rand -hex 16

@@ -1 +0,0 @@
CMD_ONCE openssl rand -hex 16

@@ -1 +0,0 @@
SERVICE_PASSWORD drone

@@ -1 +0,0 @@
CONST drone

@@ -1 +0,0 @@
USER OAuth client ID (on Gitea)

@@ -1 +0,0 @@
USER OAuth client secret (for gitea)

@@ -1 +0,0 @@
CMD openssl rand -hex 16

@@ -1 +0,0 @@
USER S3 (garage) access key for Drone

@@ -1 +0,0 @@
CONST drone

@@ -1 +0,0 @@
USER S3 (garage) secret key for Drone

@@ -1 +0,0 @@
CMD head -c 10 /dev/urandom | base64

@@ -1 +0,0 @@
CONST this is a constant

@@ -1,5 +0,0 @@
CONST_LONG
this is a
constant
on several
lines

@@ -1 +0,0 @@
SERVICE_DN dummy Dummy service for testing secretmgr.py
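Read together, these one-line files look like secret descriptors for secretmgr.py: the first token appears to select how the value is obtained (USER prompts the operator, CMD/CMD_ONCE run a shell command, CONST/CONST_LONG embed a literal, SERVICE_PASSWORD/SERVICE_DN derive service credentials) and the remainder is an argument or human-readable prompt; this reading is inferred from the files above rather than documented. The CMD variants, for instance, amount to:

```bash
openssl rand -hex 16               # 32 hex characters (presumably regenerated per deployment)
head -c 10 /dev/urandom | base64   # short random token
```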

Some files were not shown because too many files have changed in this diff.