Compare commits


467 commits

Author SHA1 Message Date
ff7462b2c7 prod: update nomad to 1.6 2024-04-20 12:29:26 +02:00
972fc4ea7c prod: nixos 23.11 and nomad 1.5 2024-04-20 10:58:36 +02:00
444306aa54 prod: allow woodpecker on neptune now with good ipv6 2024-04-20 10:20:04 +02:00
c6a1bb341f prod: update nixos to 23.05 2024-04-20 10:09:55 +02:00
eddc95c5df prod: update ip config for Free ISP at Neptune 2024-04-20 09:37:24 +02:00
fb871fd350 staging: accept nomad bsl license 2024-04-19 08:54:11 +02:00
27df86a7e5 fix pad when not in neptune, and allow android7 email to move to bespin 2024-04-19 08:53:48 +02:00
d817ad7b15 Merge branch 'poil' 2024-04-18 19:36:32 +02:00
1871f7bbff add Jill & Trinity as CryptPad admins 2024-04-18 19:36:07 +02:00
18e73b18f3 Merge pull request 'cluster/prod(app): Upgrade CryptPad to 2024.3.0' (#23) from KokaKiwi/nixcfg:crytptpad-upgrade-1 into main
Reviewed-on: Deuxfleurs/nixcfg#23
2024-04-18 17:35:36 +00:00
a817d764d3 move the cryptpad service concombre -> abricot 2024-04-18 19:07:08 +02:00
9111997f84 cluster/prod(app): Add new CryptPad build files 2024-04-18 18:56:19 +02:00
d41e10bd25 cluster/prod(app): Upgrade CryptPad to 2024.3.0 2024-04-18 18:45:07 +02:00
718a23b74b cluster/prod: Add kokakiwi to adminAccounts 2024-04-18 17:57:24 +02:00
96ead9a597 prod: garage v1.0.0-rc1 2024-04-01 20:11:24 +02:00
6152dc18d6 remove notice message for moderation 2024-03-29 15:48:21 +01:00
1a1ad0a8ad staging: garage v1.0 rc1 2024-03-28 17:17:21 +01:00
5b89004c0f staging: deploy garage 0.10 beta + fix monitoring 2024-03-28 11:56:51 +01:00
e4708a325d add trinity.fr.eu.org to DKIM 2024-03-24 13:42:47 +00:00
05dcd1c6a6 Courderec.re domain in the DKIM table 2024-03-24 14:23:47 +01:00
8fdffdf12f prod: remove drone-ci 2024-03-17 11:35:07 +01:00
d55c9610a9 add marion and darkgallium 2024-03-16 18:53:18 +01:00
18af714330 Merge conflict 2024-03-16 18:53:11 +01:00
f228592473 Also add the regex in the http-bind query parameter 2024-03-11 08:37:40 +01:00
263dad0243 add nginx redirect for suspicious Jitsi rooms 2024-03-10 21:05:43 +01:00
aaf95aa110 added notice message on Jitsi about our monitoring 2024-03-10 20:39:41 +01:00
6544cd3e14 increased Jitsi logs a bit 2024-03-09 12:56:34 +01:00
691299b5ed Merge pull request 'Update lightstream and grafana' (#20) from telemetry-update into main
Reviewed-on: Deuxfleurs/nixcfg#20
2024-03-09 10:49:52 +00:00
54f7cb670d Update lightstream and grafana 2024-03-09 11:41:46 +01:00
3ca0203753 store real IP from Jitsi 2024-03-08 21:25:43 +01:00
dde6ece4db prod: give more memory to prometheus 2024-03-08 12:03:48 +01:00
3d75b5a0bd remove orsay extra service 2024-03-06 15:15:21 +01:00
eb40718bee force woodpecker on scorpio 2024-03-04 15:38:21 +01:00
62bd80a346 garage: update to v0.9.2 final 2024-03-01 18:11:36 +01:00
71e959ee79 prod: update to garage 0.9.2-rc1 2024-02-29 16:19:21 +01:00
ae632bfecf staging: deploy garage v0.9.2-rc1 2024-02-29 15:32:16 +01:00
5f0cec7d3e woodpecker-ci: higher affinity to scorpio 2024-02-28 11:42:39 +01:00
74668a31b2 staging: update garage to test release 2024-02-19 12:46:22 +01:00
f724e81239 add automatic subdomains for v4 and v6 per site for dashboard 2024-02-14 09:28:31 +01:00
82500758f6 prod: unpin woodpecker 2024-02-13 17:32:01 +01:00
c2e0e12dc8 add woodpecker agent instructions 2024-02-09 11:29:03 +01:00
52cfe54129 prod: install woodpecker-ci 2024-02-08 16:10:39 +01:00
47d33c1773 remove unused remote-unlock.nix 2024-02-06 17:46:55 +01:00
9d77b5863a added URL to redirect 2024-02-05 00:43:14 +01:00
4cddb15fa4 prod: update external services 2024-01-31 19:04:02 +01:00
1bf356e49d staging: remove node carcajou 2024-01-31 09:33:12 +01:00
e98ec690b9 staging: updates 2024-01-22 23:21:26 +01:00
e89d1c82bb tlsproxy: bind on 127.0.0.1 explicitly to avoid ipv6 issues 2024-01-22 23:21:12 +01:00
27242fbf70 staging: cluster upgrades 2024-01-22 17:15:29 +01:00
6db49e0059 staging: remove nix mutual cache 2024-01-18 00:05:40 +01:00
3ff35c5527 staging: new hostnames in known_hosts 2024-01-17 20:44:23 +01:00
572822093c Update the onboarding guide with a lovingly tuned ssh config 2024-01-17 19:33:33 +00:00
ab481c5e70 staging: use dynamic dns names to connect to nodes for deployment 2024-01-17 20:30:00 +01:00
88f8f9fd1e staging: add automatic dns names for staging machines 2024-01-17 20:25:35 +01:00
be0cbea19b add ssh keys for boris, aeddis and vincent 2024-01-17 20:07:48 +01:00
afb28a690b tlsproxy: temporary fix for year 2024 (TODO fix before mid-2024) 2024-01-17 20:07:20 +01:00
a21493745d prod: update diplonat and make garage restart on template changes again
Diplonat update prevents unnecessary flapping of autodiscovered ip
addresses, which was the cause of useless restarts of the garage daemon.
But in principle we want Garage to be restarted if the ipv6 address
changes as it indicates changes in the network.
2024-01-17 12:38:53 +01:00
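The "restart on template changes" behavior mentioned above is, in Nomad, a property of the job's `template` stanza. A minimal sketch of the mechanism (job, task, and file names here are illustrative, not taken from this repository):

```hcl
job "garage" {
  group "garage" {
    task "server" {
      template {
        # Hypothetical template rendering garage.toml with the
        # autodiscovered IPv6 address. change_mode = "restart" makes
        # Nomad restart the task whenever the rendered file changes,
        # e.g. when the node's address changes.
        data        = file("garage.toml.tpl")
        destination = "local/garage.toml"
        change_mode = "restart"
      }
    }
  }
}
```

With a flapping address source, every re-render triggers a restart, which is why fixing the flapping in diplonat was the actual cure here.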
56e4dd954f staging: add ram for im replicate-db 2024-01-16 16:30:33 +01:00
102152a14e staging: garage v0.9.1-pre (not yet released nor tagged), diplonat with STUN flapping fix 2024-01-16 16:10:29 +01:00
3b34e3c2f5 upgraded postfix to fix smtp smuggling cve
https://security-tracker.debian.org/tracker/source-package/postfix
https://www.postfix.org/smtp-smuggling.html
2023-12-25 14:09:57 +01:00
ac42e95f1a update smtp server security conf 2023-12-25 14:00:36 +01:00
2472a6b61a added Quentin's control loop diagram of the infrastructural services 2023-12-21 14:49:18 +01:00
Baptiste Jonglez
55c9b89cb2 Revert "Revert "garage prod: use dynamically determined ipv6 addresses""
Quentin's fix seems to work fine.

This reverts commit e5f3b6ef0a.
2023-12-19 09:27:40 +01:00
Baptiste Jonglez
e5f3b6ef0a Revert "garage prod: use dynamically determined ipv6 addresses"
This partially reverts commit 47e982b29d.

This leads to invalid config:

    Dec 19 08:23:09 courgette 25f10ae4271c[781]: 2023-12-19T07:23:09.087813Z  INFO garage::server: Loading configuration...
    Dec 19 08:23:09 courgette 25f10ae4271c[781]: Error: TOML decode error: TOML parse error at line 16, column 17
    Dec 19 08:23:09 courgette 25f10ae4271c[781]:    |
    Dec 19 08:23:09 courgette 25f10ae4271c[781]: 16 | rpc_bind_addr = "[<no value>]:3901"
    Dec 19 08:23:09 courgette 25f10ae4271c[781]:    |                 ^^^^^^^^^^^^^^^^^^^
    Dec 19 08:23:09 courgette 25f10ae4271c[781]: invalid socket address syntax
    Dec 19 08:23:09 courgette 25f10ae4271c[781]:
2023-12-19 08:38:12 +01:00
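The `[<no value>]` in the log above is the template failing to resolve the dynamic IPv6 address; with a concrete address the line parses fine. A sketch of what the relevant line of `garage.toml` should look like once rendered (the address is illustrative):

```toml
# rpc_bind_addr must be a valid socket address; an IPv6 address
# goes in brackets before the port.
rpc_bind_addr = "[2001:db8::1]:3901"
```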
516ab9ad91 stop reloading config file 2023-12-19 08:36:26 +01:00
16168b916e tricot upgrade 2023-12-14 10:59:40 +01:00
47e982b29d garage prod: use dynamically determined ipv6 addresses 2023-12-13 17:33:56 +01:00
d694ddbe2c Move garage's redirections to a dedicated service
Reason:
 - do not slow down the garage web endpoint
 - required now that we map domain name to a garage bucket
2023-12-04 12:32:46 +01:00
0c3db22de6 fix bagage 2023-12-04 12:19:00 +01:00
af242486a3 add degrowth 2023-12-04 12:16:41 +01:00
23690238c9 add an sftp domain name 2023-12-02 11:52:35 +01:00
7da4510ee8 tricot update 2023-12-01 16:02:09 +01:00
52044402ac add some redirections 2023-11-29 17:08:13 +01:00
d14fc2516c Upgrade tricot 2023-11-29 16:58:37 +01:00
c1d307d7a9 matrix: add memory to async media upload after oom crash 2023-11-27 13:56:47 +01:00
9c6f98f4b8 fix cryptpad backup 2023-11-27 13:43:42 +01:00
a315d5d1af fix typo 2023-11-22 19:43:42 +01:00
a2654529c7 prod: update synapse and element 2023-11-15 16:39:11 +01:00
b1e0397265 revert prometheus scraping on openwrt 2023-11-08 16:21:20 +01:00
a46aa03fe2 prod: add monitoring of openwrt router 2023-11-08 16:14:33 +01:00
a6b84527b0 fix typo 2023-10-30 12:15:30 +01:00
3c22659d90 add Esther's domains 2023-10-30 12:00:21 +01:00
79f380c72d directory 2023-10-30 11:55:25 +01:00
b0fecddaec correct doc links 2023-10-23 10:40:37 +02:00
Baptiste Jonglez
a214496d8c [staging] Update known_hosts 2023-10-22 21:28:10 +02:00
Baptiste Jonglez
b1630cfa8e [staging] Update garage to v0.9.0 2023-10-22 21:27:55 +02:00
Baptiste Jonglez
d396f35235 Update IP for piranha.corrin 2023-10-22 20:17:33 +02:00
78ed3864d7 update bagage version with cors allow all 2023-10-16 16:16:18 +02:00
ea8b2e8c82 update garage prod 2023-10-16 14:54:16 +02:00
fbffe1f0dc staging: update guichet with website management 2023-10-05 18:51:13 +02:00
c790f6f3e1 staging: reaffect raft leaders 2023-10-05 13:48:29 +02:00
e94cb54661 prod: add matrix syncv3 daemon 2023-10-04 11:51:04 +02:00
525f04515e staging: deploy garage v0.9.0-rc1 2023-10-04 10:44:17 +02:00
2e3725e8a2 staging: disable jaeger; update diplonat 2023-10-03 22:56:41 +02:00
56e19ff2e5 remove default HTTP CSP, put your CSP in your HTML 2023-10-03 16:00:11 +02:00
9e113416ac fix update guichet 2023-10-03 15:58:20 +02:00
7c7adc76b4 Set sogo as debug 2023-10-03 08:33:29 +02:00
c4f3dece14 update tricot 2023-10-02 16:59:01 +02:00
4e20eb43b3 cryptpad: add alex as admin 2023-09-22 15:42:02 +02:00
f139238c17 staging: update garage to 0.8.4 2023-09-11 23:28:29 +02:00
ba3e24c41e added Adrien in admins for CryptPad 2023-09-08 11:31:49 +02:00
9b8882c250 add missing d53 tags for sogo and alps 2023-09-04 19:15:09 +02:00
a490f082bc prod: remove all apps from orion, add some missing in scorpio 2023-09-04 19:05:18 +02:00
e42ed08788 fix Jitsi public IPv4 config 2023-08-31 18:08:46 +02:00
1340fb6962 upgraded backups 2023-08-29 11:51:18 +02:00
3d925a4505 move emails to lille 2023-08-29 11:43:45 +02:00
b688a1bbb9 increase sogo RAM 2023-08-28 09:50:46 +02:00
7dd8153653 update tricot 2023-08-27 18:07:30 +02:00
ecb4cabcf0 prod garage: add health check using admin api's '/health' 2023-08-27 13:56:51 +02:00
8e304e8f5f staging im-nix: add sqlite 2023-08-27 13:36:36 +02:00
be8484b494 [tricot] warmup memory store on boot 2023-08-09 10:40:08 +02:00
ca3283d6a7 upgrade matrix 2023-08-07 12:13:56 +02:00
0c9ea6bc56 disable network fingerprinting in nomad 2023-08-07 11:17:40 +02:00
e7a3582c4e Update telemetry stack to grafana 10.0.3 & co 2023-08-06 13:45:46 +02:00
aaa80ae678 final csp 2023-07-23 14:36:04 +02:00
233556e9ef Simpler IPv6 config for Garage 2023-07-23 14:06:36 +02:00
132ad670a1 lines 2023-07-23 13:59:35 +02:00
1048456fbf switch postfix to ipv4 as we have no reverse dns on ipv6 2023-07-08 14:48:34 +02:00
919004ae79 albatros 0.9-rc3 2023-07-08 14:38:00 +02:00
03658e8f7b add pointecouteau 2023-06-28 15:35:37 +02:00
8ebd35730c added estherbouquet.com to DKIM signing table 2023-06-24 18:02:29 +02:00
effe155248 Add armael to staging and ssh key for max 2023-06-24 17:14:34 +02:00
6c12a71ecb Deploy nixos 23.05 on staging and other staging fixes 2023-06-13 11:56:10 +02:00
1d19bae7a1 remove postgres replica on concombre 2023-06-12 19:58:03 +02:00
3fcda94aa0 undo remove postgres from diplotaxis 2023-06-12 16:19:57 +02:00
3e40bfcca9 add stolon replica on abricot instead of diplotaxis 2023-06-12 13:41:42 +02:00
e06d6b14a3 add ananas, set it raft server instead of dahlia 2023-06-12 13:41:34 +02:00
e71ca8fe11 rename wgautomesh config to deuxfleurs namespace to avoid conflict 2023-06-12 13:40:53 +02:00
1a11ff4202 staging: updated garage with new consul registration 2023-06-02 16:37:13 +02:00
14b59ba4b0 update gitea config 2023-06-02 15:40:43 +02:00
c31de0e94f tricot passthrough of external services at neptune 2023-05-24 10:18:02 +02:00
ADRN
7022b768e4 added a note about forwarding to personal services in the readme (I struggled to find where this was) 2023-05-23 09:36:22 +02:00
ff13616887 staging: dev garage with fixed k2v double-urlencoding 2023-05-19 12:53:10 +02:00
efd5ec3323 Remove plume backup job (no longer useful) 2023-05-16 15:39:36 +02:00
8a75be4d43 Merge pull request 'prod: Plume with S3 storage backend' (#13) from plume-s3 into main
Reviewed-on: Deuxfleurs/nixcfg#13
2023-05-16 13:38:07 +00:00
4ca45cf1d4 updated d53 on prod 2023-05-16 15:35:06 +02:00
aee3a09471 Merge pull request 'Simplify network configuration' (#11) from simplify-network-config into main
Reviewed-on: Deuxfleurs/nixcfg#11
2023-05-16 13:19:33 +00:00
76b7f86d22 use RA on orion as well 2023-05-16 14:14:27 +02:00
560486bc50 prod plume with s3 backend 2023-05-15 17:30:41 +02:00
2488ad0ac2 staging plume: cleanup and update 2023-05-15 13:36:38 +02:00
9cef48a6c2 Merge branch 'main' into simplify-network-config 2023-05-12 18:45:58 +02:00
5c7a8c72d8 first plume on staging with S3 backend 2023-05-12 18:45:20 +02:00
258d27c566 deploy tricot at bespin, register gitea (not accessed yet) 2023-05-09 15:12:03 +02:00
04464f632f Export all Grafana dashboards 2023-05-09 12:29:37 +02:00
24cf7ddd91 Merge branch 'main' into simplify-network-config 2023-05-09 12:20:35 +02:00
24192cc61a Update telemetry stack apps 2023-05-07 23:46:48 +02:00
b73c39c7c1 multi-zone matrix 2023-05-04 17:00:31 +02:00
e375304c38 orient SoGo and Synapse to closest psql-proxy; psql backup anywhere 2023-05-04 16:48:22 +02:00
f3cd2e98b4 multisite postgres, orient plume to correct db 2023-05-04 16:39:25 +02:00
6c07a42978 different wgautomesh gossip ports for prod and staging 2023-05-04 13:39:33 +02:00
Baptiste Jonglez
e23b523467 Add infinite restart policy for postgresql 2023-05-03 08:53:59 +02:00
3befdea206 nix: allow wireguard + logs 2023-04-28 09:26:32 +02:00
607add3161 make specifying an ipv6 fully optional 2023-04-21 14:36:10 +02:00
c4598bd84f Diplonat on bespin, ipv6-only 2023-04-21 12:03:35 +02:00
0b3332fd32 break out core services into separate files 2023-04-21 11:55:24 +02:00
a9e9149739 Fix unbound; remove Nixos firewall (use only diplonat) 2023-04-21 11:29:15 +02:00
529480b133 Merge branch 'main' into simplify-network-config 2023-04-21 10:31:05 +02:00
b4e82e37e4 diplonat with fixed iptables thing 2023-04-20 15:13:13 +02:00
af82308e84 Garage backup to SFTP target hosted by Max 2023-04-20 12:10:07 +02:00
e5f9f3c849 increase diplonat ram 2023-04-19 21:05:47 +02:00
0372df95b5 staging: fix consul server addresses 2023-04-19 20:36:24 +02:00
9737c661a4 Merge branch 'main' into simplify-network-config 2023-04-19 20:15:03 +02:00
57aa2ce1d2 guichet website management interface 2023-04-19 15:20:49 +02:00
a614f495ad allow memory overprovisioning 2023-04-08 10:43:42 +02:00
07f50f297a D53 with addresses from DiploNAT autodiscovery; diplonat fw opening for tricot 2023-04-05 16:30:28 +02:00
0e4c641db7 redeploy bagage 2023-04-05 15:50:53 +02:00
c08bc17cc0 Adapt prod config to new parameters 2023-04-05 14:09:04 +02:00
16422d2809 reintroduce static ipv4 prefix length but with default value 2023-04-05 14:04:11 +02:00
bb25797d2f make script clearer and add documentation 2023-04-05 13:44:38 +02:00
dec4ea479d Allow for IPv6 with RA disabled by manually providing gateway 2023-04-05 13:27:18 +02:00
cb8d7e92d2 staging: ipv6-only diplonat for automatic address discovery 2023-04-05 10:25:22 +02:00
c9f122bcd3 diplonat with ipv6 firewall support; email ipv6 addresses in dns 2023-04-04 14:13:57 +02:00
a31c6d109e remove obsolete directives 2023-03-31 16:27:08 +02:00
d83d230aee added luxeylab to dkim signingtable 2023-03-30 18:09:12 +02:00
3a883b51df better classification 2023-03-27 12:26:01 +02:00
3ce25b880a update descriptions 2023-03-27 12:24:12 +02:00
4c903a2447 update readme 2023-03-27 12:22:00 +02:00
2de291e9b7 upgrade bottin + remove bespin 2023-03-26 10:14:04 +02:00
ecfab3c628 Merge branch 'main' into simplify-network-config 2023-03-24 15:35:27 +01:00
96566ae523 refactor configuration syntax 2023-03-24 15:26:39 +01:00
e2aea648cf greatly simplify ipv4 and ipv6 configuration 2023-03-24 14:42:36 +01:00
Baptiste Jonglez
8ae9ec6514 Update piranha IP again 2023-03-24 13:01:24 +01:00
a0db30ca26 Sanitize DNS configuration
- get rid of outside nameserver, unbound does the recursive resolving
  itself (and it checks DNSSEC)
- remove CAP_NET_BIND_SERVICE for Consul as it is no longer binding on
  port 53 (was already obsolete)
- make unbound config independent of LAN IPv4 address
2023-03-24 12:58:44 +01:00
76c8e8f0b0 Merge pull request 'Move wgautomesh to prod' (#9) from wgautomesh into main
Reviewed-on: Deuxfleurs/nixcfg#9
2023-03-24 11:05:29 +00:00
53b9cfd838 wgautomesh actually on prod 2023-03-24 12:01:38 +01:00
5cd69a9ba1 Merge branch 'main' into wgautomesh 2023-03-24 11:29:14 +01:00
8e29ee3b0b backup memory 2023-03-24 11:29:07 +01:00
4a56b3360f upgrade matrix 2023-03-22 22:23:37 +01:00
b7c4f94ebd Add Garage backup script running on Abricot 2023-03-20 16:47:22 +01:00
6ffaa0ed91 use nix enum type 2023-03-20 11:17:38 +01:00
eec09724fe socat proxy 2023-03-20 10:45:40 +01:00
bebbf5bd8b wip rsa-ecc proxy 2023-03-20 09:45:05 +01:00
90efd9155b wgautomesh variable log level (debug for staging) 2023-03-17 18:21:50 +01:00
39254cca0e keep wg-quick code as reference 2023-03-17 18:18:25 +01:00
f629f4c171 wgautomesh from static binary hosted on gitea 2023-03-17 18:01:35 +01:00
f9b94f0b47 update wgautomesh 2023-03-17 17:17:56 +01:00
bb2660792f wgautomesh persist state to file 2023-03-17 17:17:56 +01:00
6664affaa0 wgautomesh gossip secret file 2023-03-17 17:17:56 +01:00
a3edbb4100 document wgautomesh port 2023-03-17 17:17:56 +01:00
baae97b192 sample deployment of wgautomesh on staging (don't deploy prod with this commit) 2023-03-17 17:17:56 +01:00
870511931a abricot fixed ipv6 2023-03-17 16:22:24 +01:00
a6c791d342 remove email-in 2023-03-17 13:44:48 +01:00
28e7503b27 commaaaa 2023-03-17 10:04:21 +01:00
fd4f601ee0 Merge pull request 'configuration for imap.deuxfleurs.fr & smtp.deuxfleurs.fr as part of email service for d53 + convert tabs into spaces (couldn't help myself)' (#8) from feat/d53-email into main
Reviewed-on: Deuxfleurs/nixcfg#8
2023-03-17 08:53:27 +00:00
551988c808 do not allow stale information reading 2023-03-16 17:01:17 +01:00
6fe8ef6eed update albatros 2023-03-16 16:53:16 +01:00
8b67c48c52 Fix consul port 2023-03-16 16:19:35 +01:00
7bf1467cb1 add albatros 2023-03-16 15:52:13 +01:00
fe2eda1702 configuration for imap.deuxfleurs.fr & smtp.deuxfleurs.fr as part of email service for d53 + convert tabs into spaces (couldn't help myself) 2023-03-16 15:48:52 +01:00
81d3c0e03a d53 for email-in.deuxfleurs.fr (A only, AAAA missing firewall) 2023-03-16 14:42:47 +01:00
1c623c796a update garage and let it use more ram 2023-03-16 14:18:59 +01:00
e4065dade8 added Consul Registration of personal services (for Adrien's personal stuff) 2023-03-15 18:55:09 +01:00
f7be968531 TODOs in deuxfleurs.nix because the old world is maybe mixing with the new 2023-03-15 18:19:01 +01:00
1a2ff3f6b9 upgrade nixos 2023-03-15 17:50:06 +01:00
2a0eff07c0 fix cleanup of deploypass 2023-03-15 17:49:31 +01:00
f6c4576b6c added forgotten new files for scorpio/abricot 2023-03-15 17:30:35 +01:00
85595a9205 there was a little problem 2023-03-15 17:27:26 +01:00
031d029e10 added scorpio site and abricot node 2023-03-15 17:10:38 +01:00
c681f63222 alloc more mem 2023-03-14 18:37:28 +01:00
d2b8b0c517 wip homemade ci? 2023-03-14 17:32:49 +01:00
385882c74c Changes in prod:
- migrate courgette and concombre to M710q machines with SSD+HDD
- migrate prod/c* to nixos 22.11
2023-03-13 19:58:37 +01:00
d56f895a1c integrate turn in matrix 2023-03-11 12:37:57 +01:00
6b8a94ba2e wip coturn 2023-03-11 11:44:17 +01:00
850ea784e7 staging updates 2023-03-09 11:08:33 +01:00
6a287ffb57 prod: garage v0.8.1 2023-03-06 14:39:12 +01:00
Baptiste Jonglez
3eb5e21f9d New IP for piranha 2023-03-06 14:30:22 +01:00
49cc83db21 use https links 2023-02-28 10:51:34 +01:00
4ef04f7971 add teabag (for static cms) 2023-02-27 18:42:38 +01:00
a4eb0b2b56 increased jitsi's priority so that it is above Matrix's 2023-02-20 16:43:29 +01:00
0b1fccac1c Prod: guichet with mailing list edition interface 2023-02-08 16:58:12 +01:00
69f1950b55 bespin 2023-02-03 13:39:48 +01:00
87fc43d5e6 remove feature flags 2023-02-02 16:30:24 +01:00
a3ade938e0 update config with some flags, not sure 2023-02-02 16:21:43 +01:00
67bcd07056 upgrade prod, attempt 1 2023-02-02 15:37:43 +01:00
a3ca27055d fix integration 2023-02-02 15:32:40 +01:00
2d6616195f upgrade the building logic 2023-02-02 14:48:59 +01:00
6445d55e3e upgrade jitsi config 2023-02-02 08:48:19 +01:00
535b28945d improve jitsi conf 2023-02-02 08:24:50 +01:00
2d55b1dfcc updated garage and d53 on staging 2023-01-26 17:52:27 +01:00
8e76707c44 fix tricot hostname on prod 2023-01-11 22:18:52 +01:00
0da378d053 staging: remove constraint on im 2023-01-05 11:15:30 +01:00
9fabb5844a staging: remove node cariacou, update garage 2023-01-04 17:06:39 +01:00
3a8588a1ea Open ports 80 and 443 on all Orion nodes 2023-01-04 11:10:10 +01:00
da78f3671e staging: deploy things on bespin 2023-01-04 10:06:06 +01:00
26f78872e6 staging: add node df-pw5 at bespin 2023-01-04 10:02:21 +01:00
c11b6499b8 prod: deploy d53 2023-01-04 09:35:40 +01:00
6478560087 prod: update tricot 2023-01-03 21:14:02 +01:00
fe805b6bab Fix prometheus ssl certs 2023-01-03 21:00:10 +01:00
606668e25e fill in cname_target and public_ipv4 for prod cluster 2023-01-03 19:27:35 +01:00
18eef6e8e7 Staging: Reduce resource requirements to pack more things 2023-01-03 18:25:32 +01:00
af73126f45 fix deploy_pki 2023-01-02 13:51:13 +01:00
d588764748 don't rotate grafana password 2023-01-01 20:44:28 +01:00
3847c08181 Merge pull request 'updated version of secretmgr' (#5) from new-secretmgr into main
Reviewed-on: Deuxfleurs/nixcfg#5
2023-01-01 18:47:34 +00:00
ad6db2f1c5 Remove hardcoded years in deuxfleurs.nix 2023-01-01 19:43:35 +01:00
Baptiste Jonglez
95540260cb Fix doc, app/frontend has been merged in app/core 2022-12-29 18:27:12 +01:00
Baptiste Jonglez
08c324f1c4 Add new zone to core services 2022-12-29 18:26:52 +01:00
Baptiste Jonglez
de41f3db4e Document how to run jobs 2022-12-29 14:22:28 +01:00
Baptiste Jonglez
1c48fd4ae4 Add new staging zone and node 2022-12-28 16:49:43 +01:00
0d8c6a2d45 Remove obsolete Matrix TLS keys 2022-12-25 23:54:55 +01:00
0becfc2571 Merge branch 'main' into new-secretmgr 2022-12-25 23:47:52 +01:00
b63c03f635 refactor ssh config and move known_hosts 2022-12-25 23:45:53 +01:00
40f5670753 Remove old way of doing email certs (self-signed) 2022-12-25 23:03:37 +01:00
2bbf540945 Remove convertsecrets script, we're done with that 2022-12-25 22:57:33 +01:00
3b74376191 update drone secrets for rotation 2022-12-25 22:50:20 +01:00
8cee3b0043 Update prod secret files 2022-12-25 22:45:05 +01:00
87bb031ed0 Migrate prod cluster secrets to new format 2022-12-25 22:31:18 +01:00
6d6e48c8fa Improve secretmgr more, update secrets for staging 2022-12-25 22:12:38 +01:00
8d0a7a806d New secretmgr 2022-12-25 21:03:16 +01:00
7fd81f3470 WIP new secretmgr 2022-12-25 19:52:28 +01:00
11f87a3cd2 staging: add missing secrets, update existing ones to autogen/autorotate 2022-12-24 23:58:38 +01:00
8d17a07c9b reorganize some things 2022-12-24 22:59:37 +01:00
4b527c4db8 document scheduler config 2022-12-23 00:24:17 +01:00
827987d201 cleanup 2022-12-23 00:07:02 +01:00
94a9c8afa8 security for deployment on prod 2022-12-22 23:59:51 +01:00
0e1574a82b More doc reorganization 2022-12-22 23:44:00 +01:00
3e5e2d60cd reorganize documentation 2022-12-22 23:33:10 +01:00
912753c7ad remove useless lines in caribou,origan.nix 2022-12-22 23:16:15 +01:00
4d637c91b1 remove outdated telemetry doc 2022-12-22 18:01:46 +01:00
b47334d7d7 Replace deploy_wg by a NixOS activation script 2022-12-14 18:02:30 +01:00
cc70cdc660 write about why not ansible 2022-12-14 17:52:36 +01:00
8513003388 staging: garage update 2022-12-14 17:52:13 +01:00
7ab91a16e9 Proper nat on origan 2022-12-13 16:01:36 +01:00
3af066397e Replace carcajou by origan for raft server 2022-12-11 23:13:04 +01:00
dca2e53442 run a bunch of things on new Origan node 2022-12-11 23:02:14 +01:00
578075a925 Add origan node in staging cluster (+ refactor system.stateVersion) 2022-12-11 22:37:28 +01:00
36e6756b3c staging: update D53 tags to new (simpler) syntax 2022-12-11 21:27:16 +01:00
a1fc396412 Add possible public_ipv4 node tag 2022-12-07 17:13:03 +01:00
4c50dd57f1 staging: reorganize core services and add D53 2022-12-07 16:35:21 +01:00
ab97a7bffd Staging: Add CNAME target meta parameter, will be used for diplonat auto dns update 2022-12-07 12:32:21 +01:00
1d4599fc1c prod: update tricot and reduce resource constraints 2022-12-07 12:03:15 +01:00
93e66389f7 staging: update Tricot 2022-12-07 11:21:51 +01:00
4e3db0cd5e staging: correct public IPs through NAT for wireguard 2022-12-07 11:21:39 +01:00
Baptiste Jonglez
c9bcfb5e46 sshtool: quote password to fix shell interpretation 2022-12-06 23:13:32 +01:00
5bed1e66db update alps 2022-12-06 16:14:57 +01:00
724f0ccfec Tricot: updated with enough bins for histogram data 2022-12-06 15:11:35 +01:00
14bea296da prod: enable site load balancing in tricot 2022-12-06 14:43:58 +01:00
6036f5a1b7 deploy tricot metrics on production 2022-12-06 14:41:53 +01:00
e1ddb2d1d3 staging: tricot do load balancing of garage requests to local nodes 2022-12-06 12:41:12 +01:00
27b23e15ec Staging: tricot with metrics 2022-12-05 23:42:53 +01:00
b260b01915 staging garage: use new health check endpoint 2022-12-05 16:25:46 +01:00
1e32bebd38 Document used port numbers 2022-12-02 12:14:55 +01:00
a1a2a83727 Staging: let nodes use each other as Nix caches (only inside same site) 2022-12-02 11:59:32 +01:00
88ddfea4d5 staging: run grafana from nixpkgs 2022-12-02 00:14:31 +01:00
2482a2f819 staging: run prometheus from nixpkgs 2022-12-01 23:48:46 +01:00
b0405d47a6 staging: remove hcl file for garage on docker 2022-12-01 23:33:16 +01:00
db8638223f staging: also run Guichet from nix 2022-12-01 23:30:12 +01:00
e67b460ae2 staging: run bottin as nix job 2022-12-01 22:49:55 +01:00
bc88622ea2 Staging: run diplonat as nix job 2022-12-01 22:32:02 +01:00
d3fac34e63 staging: simplify litestream config on nix 2022-12-01 17:35:19 +01:00
18ab08a86c staging: run node_exporter from nixos; run synapse as non-root 2022-12-01 17:25:53 +01:00
195e340f56 prod: more aggressive restart on core services 2022-12-01 17:03:20 +01:00
9d0a2d8914 Run Tricot as Nix flake instead of Docker image 2022-12-01 16:04:47 +01:00
e4684ae169 staging: reduce litestream memory_max because it uses it all 2022-11-30 10:04:42 +01:00
6db4ec5311 staging: update garage 2022-11-29 22:59:55 +01:00
1ac9790806 Staging: remove Docker-based synapse config 2022-11-29 22:03:48 +01:00
ab7a770168 Synapse on Nix works great 2022-11-29 22:02:21 +01:00
55e407a3a4 First version of Matrix-synapse in Nix 2022-11-29 21:19:57 +01:00
4036a2d951 Clean stuff up and update nix driver 2022-11-29 16:21:38 +01:00
fb4c2ef55a Remove old nomad-driver-nix 2022-11-29 15:41:35 +01:00
da07fee575 Use nix driver moved to Deuxfleurs namespace 2022-11-29 14:46:42 +01:00
14e3e6deff Staging: cleanup garage job 2022-11-29 14:42:53 +01:00
c9f9ed4c71 Deploy garage on staging using nix2 driver 2022-11-29 14:21:12 +01:00
105c081728 Staging: ability to run Nix jobs using exec2 driver 2022-11-28 22:58:39 +01:00
a327876e25 Remove root, add wg-quick-wg0 after unbound 2022-11-28 10:19:48 +01:00
c4ed69336b Remove spurious nixos result 2022-11-26 10:13:10 +01:00
bedfae8424 Fix wg-quick MTU because it does bad stuff by default 2022-11-22 16:22:05 +01:00
8d363d2e66 Add after config on nomad and consul 2022-11-22 13:30:00 +01:00
6659deb544 Add Baptiste ; fix wireguard 2022-11-22 12:09:28 +01:00
945dd4fa9a Run Garage as a Nomad Nix job on staging cluster 2022-11-17 00:17:56 +01:00
3c5f4b55e6 fix typo 2022-11-17 00:00:13 +01:00
78440a03d2 add+cleanup config 2022-11-16 16:52:38 +01:00
49b0dc2d5b poc 2 for nix containers: use nomad-driver-nix 2022-11-16 16:28:18 +01:00
eac950c47f Upgrade to garage v0.8.0-rc2 2022-11-16 11:57:11 +01:00
7df8162913 nix volumes RO 2022-11-16 00:12:14 +01:00
2cd4bf1ee7 Demo running directly a service from the nix store 2022-11-15 23:13:55 +01:00
13fac2b446 edited passwd command to set bash as interpreter 2022-11-09 19:02:02 +01:00
359c1a1e40 edited README: added more info to 'how to operate a node' 2022-11-09 18:57:49 +01:00
45fc3f4dd4 changed shebang of tlsproxy file to bash, because trap failed with sh (trap is a builtin of bash) 2022-11-09 18:53:21 +01:00
9e19b2b5a2 Update ssh keys 2022-11-09 18:35:17 +01:00
cade21aa24 Give more resources to core stuff 2022-11-04 12:29:43 +01:00
7587024ff5 staging: change resources for im job 2022-11-04 11:22:54 +01:00
cc945340a1 update telemetry config on staging 2022-11-04 11:09:37 +01:00
b37c4b3196 Updated drone version 2022-11-04 11:09:19 +01:00
ea8185d7e6 Reinstall caribou 2022-11-03 19:25:28 +01:00
40d5665ffe Upgrade Matrix but disable URL preview 2022-10-28 09:45:00 +02:00
859813440c Automatic garage node discovery on staging through consul 2022-10-18 22:09:55 +02:00
4584b39639 Update celeri config 2022-10-18 15:44:15 +02:00
afc368421d Rebalance resource attribution on staging 2022-10-18 10:40:59 +02:00
2592dcaa2d Update telemetry on staging as well 2022-10-18 10:32:41 +02:00
7866a92e16 remove systemd-resolved 2022-10-16 19:36:15 +02:00
27214332e9 IPv6 by FDN 2022-10-16 19:10:51 +02:00
5613ed9908 Complete telemetry configuration 2022-10-16 18:12:57 +02:00
42409de1b1 Deploy garage on bespin 2022-10-16 14:17:12 +00:00
a69a71ca00 Add mounts on bespin + tlsproxy 2022-10-16 14:17:12 +00:00
554c20cc04 How to bind your consul and nomad on your machine 2022-10-16 14:17:12 +00:00
e6f118adb0 Celeri is no more a raft server 2022-10-16 14:17:12 +00:00
2eecece831 Fix typo on IP, add keys 2022-10-16 14:17:12 +00:00
fdc50fdcfd Add hint about ssh_config 2022-10-16 14:17:12 +00:00
5f08713dfb Remove additional DNS entries from docker 2022-10-16 14:17:12 +00:00
mricher
c48a7e80c3 Fix key 2022-10-16 14:17:12 +00:00
e658b79d06 Add channel selection in the deploy script 2022-10-16 14:17:12 +00:00
c4c20b691c Update README.md 2022-10-16 14:17:12 +00:00
mricher
8797d4450a Add cluster configuration 2022-10-16 14:17:12 +00:00
mricher
6bafa20bf6 Add bespin machines 2022-10-16 14:17:12 +00:00
38a544d9c4 Correctly inject dns servers in docker 2022-10-16 13:25:46 +02:00
b5a0f8bd82 Add docker 2022-10-16 13:13:43 +02:00
45a0e850ce Improve deployment doc 2022-10-16 12:02:55 +02:00
d442b9a068 Update README 2022-10-16 11:58:11 +02:00
9a8cbf9121 WIP doc 2022-10-16 11:14:50 +02:00
6942355d43 update readme.md 2022-10-16 11:04:46 +02:00
c3a30aabab Switch to systemd-networkd 2022-10-15 10:38:48 +02:00
10b0840daa Disable IPv6 RA/autoconf/temp addr 2022-10-14 08:38:19 +02:00
3247bf69cf move grafana-new. to grafana. 2022-10-13 11:01:45 +02:00
f4689d25de Change email address for let's encrypt expiry notifications 2022-10-09 22:57:55 +02:00
b4e737afdf Rotate ssh key 2022-10-09 17:46:59 +02:00
c239e34a25 IPv6 prefix at Neptune changed again 2022-10-09 17:07:47 +02:00
e8cdd6864a Split garage deployments in 2 categories
 - The ones that will receive some traffic from tricot
 - The ones "only for storage" that will not receive traffic from tricot
2022-10-08 22:23:19 +02:00
32658ff4d3 Add jaeger service to staging to view Garage traces 2022-09-26 15:53:32 +02:00
711b788eb4 Fix restic forget commands 2022-09-26 13:05:53 +02:00
5b88919746 Move cryptpad backup job to backup-daily.hcl 2022-09-26 13:02:38 +02:00
535c90b38e Replace Adrien's SSH key 2022-09-26 11:37:48 +02:00
f22e242700 SSB experiment 2022-09-21 19:29:08 +02:00
4e939f55fc Update garage staging 2022-09-21 19:28:54 +02:00
56ff4c5cfd Prod-like telemetry into staging 2022-09-20 17:13:46 +02:00
9b6bdc7092 Update to garage config 2022-09-20 17:13:36 +02:00
72606368bf Force Garage to use ipv6 connectivity 2022-09-15 11:57:24 +02:00
2dad5700d3 garage v0.8.0-beta1 on staging 2022-09-13 23:32:12 +02:00
39fbbbe863 Change ipv6 tunnel server 2022-09-09 17:23:23 +02:00
a90de2cfb9 Update garage staging 2022-09-09 12:24:29 +02:00
be0d7a7ccc Drone integration files for new version (Nix runners) 2022-09-09 12:24:11 +02:00
b23218a7f6 systemd timesyncd 2022-09-08 10:35:14 +02:00
2695fe4ae8 Force IPv4 when sending to gmail
Because Free does not provide rDNS on IPv6
so GMail complains that it does not find a PTR record
for our IPv6 address
2022-09-07 08:13:15 +02:00
02c65de5fe Restart backups 2022-09-01 18:05:50 +02:00
1749a98e86 Update LDAP configuration 2022-08-31 10:25:58 +02:00
6ec9aad801 Improve DNS configuration
Add Unbound server that separates queries between those going to Consul
and those going elsewhere.  This allows us to have DNS working even if
Consul fails for some reason. This way we can also remove the secondary
`nameserver` entry in /etc/resolv.conf, thus fixing a bug where certain
containers (Alpine-based images?) were using the secondary resolver some
of the time, making them unable to access .consul hosts.
2022-08-30 15:52:42 +02:00
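The split described in this commit could look roughly like the following `unbound.conf` fragment (a sketch with assumed addresses and ports, not the actual deployed configuration):

```
server:
  # allow forwarding queries to a resolver running on localhost (Consul)
  do-not-query-localhost: no
stub-zone:
  # queries for .consul names go to the local Consul DNS endpoint
  name: "consul."
  stub-addr: 127.0.0.1@8600
forward-zone:
  # everything else goes to an upstream resolver
  name: "."
  forward-addr: 9.9.9.9
```

With Unbound doing the dispatch, `/etc/resolv.conf` only needs a single `nameserver` entry, which avoids the secondary-resolver bug mentioned above.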
e81716e41e
Update drone config and add drone monitoring to prometheus 2022-08-30 15:48:32 +02:00
b5328c3341
Activate memory oversubscription+use it for Plume 2022-08-26 13:04:42 +02:00
72d033dcd4
Remove garage files at bad location, add basic telemetry 2022-08-25 13:59:40 +02:00
fd3ed44dad
Disable netdata on prod (useless) 2022-08-25 12:34:02 +02:00
3f9ad5edc3
Configure the final URL for Guichet 2022-08-25 04:46:42 +02:00
ec0e483d99
Add email support 2022-08-25 04:39:44 +02:00
ea1b0e9d19
Add a docker-compose for Jitsi 2022-08-25 01:06:06 +02:00
e37c1f9057
Deploy Matrix 2022-08-25 01:02:16 +02:00
3be2659aa1
Make service addressable by zones 2022-08-24 21:06:48 +02:00
243eee4322
Ask consul to use advertised address and not bind one 2022-08-24 20:03:31 +02:00
00b754727d
Add postgres + WIP plume + fix diplonat 2022-08-24 19:54:15 +02:00
1172e8e511
Fix nomad talking to consul 2022-08-24 18:51:55 +02:00
0d2d46f437
skip consul tls verify for diplonat and tricot (should be reverted?) 2022-08-24 18:19:04 +02:00
cfb1d623d9
Reconfigure services to use correct tricot url, TLS fails 2022-08-24 17:31:08 +02:00
a0c8280c02
Fix access to consul for non-server nodes 2022-08-24 16:58:50 +02:00
fe1f261738
Add another DNS to the pki 2022-08-24 16:53:02 +02:00
6ea18bf8ae
Add directory config for prod 2022-08-24 16:03:52 +02:00
41128f4c36
Clone core module in staging and prod, move bad stuff to experimental 2022-08-24 15:48:18 +02:00
981294e3d7
Move dummy nginx to cluster/staging 2022-08-24 15:44:40 +02:00
2e8923b383
Move app files into cluster subdirectories; add prod garage 2022-08-24 15:42:47 +02:00
9848f3090f
Remove courgette from raft 2022-08-24 15:25:28 +02:00
6c51a6e484
Don't make diplotaxis and doradille raft servers, fix sshtool 2022-08-24 14:29:56 +02:00
ec2020b71b
Disable bootstrap_expect unless specific deuxfleurs.bootstrap is set 2022-08-24 14:23:17 +02:00
468c6b702b
Add ipv6 gateway at neptune 2022-08-24 12:31:55 +02:00
4253fd84a5
Wireguard configuration of Orion 2022-08-24 12:06:01 +02:00
9e39677e1d
Fix IPv6 2022-08-24 11:06:55 +02:00
e50e1c407d
Move prod to wireguard and not wesher, and reaffect IPs 2022-08-24 00:31:07 +02:00
2a1459d887
Reaffect wireguard IPs in staging cluster 2022-08-24 00:07:08 +02:00
ab901fc81d
Remove wesher, reconfigure staging without it 2022-08-23 23:55:15 +02:00
a7ac31cdf5
Affect cluster_ip in d* in correct prefix (10.83.0.0/16 for prod) 2022-08-23 23:22:23 +02:00
88d57f8e34
Add new cluster nodes 2022-08-23 22:13:26 +02:00
5994e41ad1
Add jitsi 2022-08-23 18:00:07 +02:00
02b1e6200c
Disable ipv6 temporary addresses 2022-08-23 13:12:07 +02:00
8cd804a8c0
Add Drone CI server with sqlite-on-s3 thing 2022-08-23 12:10:25 +02:00
7d7efab9ee
Update to nixos 22.05 2022-07-27 11:18:23 +02:00
2453a45c74
Disable spoutnik 2022-07-27 10:39:09 +02:00
f262fa7d1b
Remove self-advertisement in consul 2022-07-18 15:36:58 +02:00
d4499bbef9
garage v0.7.99.2-k2v on staging 2022-07-18 15:31:43 +02:00
698cdefadb
Update garage (repair task in comments) 2022-07-04 11:57:06 +02:00
c81442dc01
Update README; DNS on prod 2022-06-01 15:27:11 +02:00
0dedbd2d22
Fix bottin url in guichet config 2022-06-01 14:54:02 +02:00
641a68715f
Configure Consul DNS 2022-06-01 14:48:16 +02:00
72f5c70096
Move domains of some things to staging.deuxfleurs.org 2022-06-01 14:25:45 +02:00
bee58a7891
Add directory 2022-06-01 14:04:20 +02:00
53309d3845
Add more ram to replicate-db 2022-06-01 13:21:32 +02:00
2130407a0f
Move back to using Docker runner 2022-05-31 11:59:20 +02:00
93c9e7d9ae
Make some RAM space for drone workers 2022-05-30 17:22:12 +02:00
0c015b4e0c
Drone VM works 2022-05-30 17:04:03 +02:00
4ec5cc43d4
Drone runner VM almost works 2022-05-30 16:36:17 +02:00
d47d4e93ab
Work on drone runner as VM 2022-05-30 14:57:05 +02:00
2d9adf82d0
Add admin token to garage staging 2022-05-23 19:54:20 +02:00
6639908fbd
Garage v0.7.1-k2v on staging 2022-05-18 22:28:13 +02:00
e657ebd0a0
Tricot 41 2022-05-10 16:40:17 +02:00
52f14f9da2
Backup Cryptpad 2022-05-10 15:58:09 +02:00
8cd2f72926
Working cryptpad 2022-05-10 15:18:07 +02:00
79e61b6bfd
Garage 0.7.1 on staging 2022-05-09 16:20:29 +02:00
1e23341710
Fix firewall rule for IGD 2022-05-09 00:29:17 +02:00
178107af0c
Network configuration updates 2022-05-09 00:20:02 +02:00
83dd3ea25a
Update network configuration 2022-05-08 14:42:18 +02:00
397a3fdfa9
Migrate to my Cryptpad image 2022-05-06 18:13:35 +02:00
1a6371d8d5
Mostly working Cryptpad 2022-05-06 17:55:23 +02:00
071e87a202
Own packaging of Cryptpad 2022-05-06 17:35:09 +02:00
0561fa8d5f
Tricot version 39 2022-05-06 12:39:01 +02:00
bdcb1760ed
Add required headers 2022-05-06 12:22:59 +02:00
ca55b15b57
Add Cryptpad build and config 2022-05-06 11:44:13 +02:00
b75d7c7841
WIP Cryptpad integration to Deuxfleurs 2022-05-06 11:43:49 +02:00
3df47c8440
Configuration for prod to run on Wesher & other new stuff 2022-05-04 17:38:54 +02:00
72ed2517a9
Fix passwd script 2022-05-04 16:41:07 +02:00
9cae8c8fc2
Update telemetry to ES 8.2.0 and simplify config a bit 2022-05-04 16:27:46 +02:00
1b4f96ffb2
Fix telemetry 2022-05-04 15:32:51 +02:00
d9e2465e28
Access staging cluster through IPv6
- for now DiploNAT is no longer used to transfer port
- and it is not yet capable of updating DNS AAAA record,
  so tricot is pinned to a single machine for now
2022-05-04 15:07:03 +02:00
44d3d6d19c
Tricot 37 on staging 2022-05-04 14:50:11 +02:00
316 changed files with 46493 additions and 1289 deletions

185
README.md
View file

@@ -1,160 +1,55 @@
# Deuxfleurs on NixOS!
This repository contains code to run Deuxfleur's infrastructure on NixOS.
This repository contains code to run Deuxfleurs' infrastructure on NixOS.
It sets up the following:
## Our abstraction stack
- A Wireguard mesh between all nodes
- Consul, with TLS
- Nomad, with TLS
We try to build a generic abstraction stack between our different resources (CPU, RAM, disk, etc.) and our services (Chat, Storage, etc.), developing our own tools when needed.
## Configuring the OS
Our first abstraction level is the NixOS level, which installs a bunch of standard components:
This repo contains a bunch of scripts to configure NixOS on all cluster nodes.
Most scripts are invoked with the following syntax:
* **Wireguard:** provides encrypted communication between remote nodes
* **Nomad:** schedule containers and handle their lifecycle
* **Consul:** distributed key value store + lock + service discovery
* **Docker:** package, distribute and isolate applications
Then, inside our Nomad+Consul orchestrator, we deploy a number of base services:
- for scripts that generate secrets: `./gen_<something> <cluster_name>` to generate the secrets to be used on cluster `<cluster_name>`
- for deployment scripts:
- `./deploy_<something> <cluster_name>` to run the deployment script on all nodes of the cluster `<cluster_name>`
- `./deploy_<something> <cluster_name> <node1> <node2> ...` to run the deployment script only on nodes `node1, node2, ...` of cluster `<cluster_name>`.
* Data management
* **[Garage](https://git.deuxfleurs.fr/Deuxfleurs/garage/):** S3-compatible lightweight object store for self-hosted geo-distributed deployments
* **Stolon + PostgreSQL:** distributed relational database
* Network Control Plane
* **[DiploNAT](https://git.deuxfleurs.fr/Deuxfleurs/diplonat):** network automation (firewalling, UPnP IGD)
* **[D53](https://git.deuxfleurs.fr/lx/d53)** - update DNS entries (A and AAAA) dynamically based on Nomad service scheduling and local node info
* **[Tricot](https://git.deuxfleurs.fr/Deuxfleurs/tricot)** - a dynamic reverse proxy for nomad+consul inspired by traefik
* **[wgautomesh](https://git.deuxfleurs.fr/Deuxfleurs/wgautomesh)** - a dynamic wireguard mesh configurator
* User Management
* **[Bottin](https://git.deuxfleurs.fr/Deuxfleurs/bottin):** authentication and authorization (LDAP protocol, consul backend)
* **[Guichet](https://git.deuxfleurs.fr/Deuxfleurs/guichet):** a dashboard for our users and administrators
* Observability
* **Prometheus + Grafana:** monitoring
All deployment scripts can use the following parameters passed as environment variables:
Some services we provide based on this abstraction:
- `SUDO_PASS`: optionally, the password for `sudo` on cluster nodes. If not set, it will be asked at the beginning.
- `SSH_USER`: optionally, the user to log in as over SSH. If not set, the username from your local machine will be used.
* **Websites:** Garage (static) + fediverse blog (Plume)
* **Chat:** Synapse + Element Web (Matrix protocol)
* **Email:** Postfix SMTP + Dovecot IMAP + OpenDKIM + SOGo webmail | Alps webmail (experimental)
- **[Aerogramme](https://git.deuxfleurs.fr/Deuxfleurs/aerogramme/):** an encrypted IMAP server
* **Videoconferencing:** Jitsi
* **Collaboration:** CryptPad
### Assumptions (how to setup your environment)
As a generic abstraction is provided, deploying new services should be easy.
- you have an SSH access to all of your cluster nodes (listed in `cluster/<cluster_name>/ssh_config`)
## How to use this?
- your account is in group `wheel` and you know its password (you need it to become root using `sudo`);
the password is the same on all cluster nodes (see below for password management tools)
See the following documentation topics:
- you have a clone of the secrets repository in your `pass` password store, for instance at `~/.password-store/deuxfleurs`
(scripts in this repo will read and write all secrets in `pass` under `deuxfleurs/cluster/<cluster_name>/`)
- [Quick start and onboarding for new administrators](doc/onboarding.md)
- [How to add new nodes to a cluster (rapid overview)](doc/adding-nodes.md)
- [Architecture of this repo, how the scripts work](doc/architecture.md)
- [List of TCP and UDP ports used by services](doc/ports)
- [Why not Ansible?](doc/why-not-ansible.md)
### Deploying the NixOS configuration
## Got personal services in addition to Deuxfleurs at home?
The NixOS configuration makes use of a certain number of files:
- files in `nix/` that are the same for all deployments on all clusters
- the file `cluster/<cluster_name>/cluster.nix`, a Nix configuration file that is specific to the cluster but is copied the same on all cluster nodes
- files in `cluster/<cluster_name>/site/`, which are specific to the various sites on which Nix nodes are deployed
- files in `cluster/<cluster_name>/node/` which are specific to each node
To deploy the NixOS configuration on the cluster, simply do:
```
./deploy_nixos <cluster_name>
```
or to deploy only on a single node:
```
./deploy_nixos <cluster_name> <node_name>
```
To upgrade NixOS, use the `./upgrade_nixos` script instead (it has the same syntax).
**When adding a node to the cluster:** just do `./deploy_nixos <cluster_name> <name_of_new_node>`
### Deploying Wesher
We use Wesher to provide an encrypted overlay network between nodes in the cluster.
This is useful in particular for securing services that are not able to do mTLS,
but as a defense-in-depth measure, we make all traffic go through Wesher even when
TLS is done correctly. It is thus mandatory to have a working Wesher installation
in the cluster for it to run correctly.
First, if no Wesher shared secret key has been generated for this cluster yet,
generate it with:
```
./gen_wesher_key <cluster_name>
```
This key will be stored in `pass`, so you must have a working `pass` installation
for this script to run correctly.
Then, deploy the key on all nodes with:
```
./deploy_wesher_key <cluster_name>
```
This should be done after `./deploy_nixos` has run successfully on all nodes.
You should now have a working Wesher network between all your nodes!
**When adding a node to the cluster:** just do `./deploy_wesher_key <cluster_name> <name_of_new_node>`
### Generating and deploying a PKI for Consul and Nomad
This is very similar to how we do for Wesher.
First, if the PKI has not yet been created, create it with:
```
./gen_pki <cluster_name>
```
Then, deploy the PKI on all nodes with:
```
./deploy_pki <cluster_name>
```
**When adding a node to the cluster:** just do `./deploy_pki <cluster_name> <name_of_new_node>`
### Adding administrators and password management
Administrators are defined in the `cluster.nix` file for each cluster (they could also be defined in the site-specific Nix files if necessary).
This is where their public SSH keys for remote access are put.
Administrators will also need passwords to administrate the cluster, as we are not using passwordless sudo.
To set the password for a new administrator, they must have a working `pass` installation as specified above.
They must then run:
```
./passwd <cluster_name> <user_name>
```
to set their password in the `pass` database (the password is hashed, so other administrators cannot learn their password even if they have access to the `pass` db).
Then, an administrator that already has root access must run the following (after syncing the `pass` db) to set the password correctly on all cluster nodes:
```
./deploy_passwords <cluster_name>
```
## Deploying stuff on Nomad
### Connecting to Nomad
Connect using SSH to one of the cluster nodes, forwarding port 14646 to port 4646 on localhost, and port 8501 to port 8501 on localhost.
You can for instance use an entry in your `~/.ssh/config` that looks like this:
```
Host caribou
HostName 2a01:e0a:c:a720::23
LocalForward 14646 127.0.0.1:4646
LocalForward 8501 127.0.0.1:8501
```
Then, in a separate window, launch `./tlsproxy <cluster_name>`: this will
launch `socat` proxies that strip the TLS layer and allow you to simply access
Nomad and Consul on the regular, unencrypted URLs: `http://localhost:4646` for
Nomad and `http://localhost:8500` for Consul. Keep this terminal window for as
long as you need to access Nomad and Consul on the cluster.
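Each proxy that `tlsproxy` launches is essentially a `socat` invocation of the following shape (a sketch with hypothetical certificate file names; the real script lives in this repo):

```
socat \
  TCP-LISTEN:4646,bind=127.0.0.1,fork,reuseaddr \
  OPENSSL:127.0.0.1:14646,cert=nomad-client.pem,cafile=nomad-ca.crt,verify=0
```

It listens on the plain-text port (here 4646 for Nomad) and wraps each connection in TLS towards the SSH-forwarded port (14646), with a similar pair for Consul (8500 towards 8501).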
### Launching services
Stuff should be started in this order:
- `app/core`
- `app/frontend`
- `app/garage-staging`
At this point, we are able to have a systemd service called `mountgarage` that mounts Garage buckets in `/mnt/garage-staging`. This is used by the following services that can be launched afterwards:
- `app/im`
Go check [`cluster/prod/register_external_services.sh`](./cluster/prod/register_external_services.sh): this bash script registers redirects from Tricot to your own services or your personal reverse proxy.
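Registering such an external service boils down to declaring it in the Consul catalog with a `tricot <domain>` tag, so Tricot picks it up as a routing rule. A minimal sketch using the Consul agent HTTP API (service name, domain and address here are made up):

```
curl -X PUT http://localhost:8500/v1/agent/service/register \
  -d '{
    "Name": "my-external-service",
    "Tags": ["tricot example.home.mydomain.tld"],
    "Address": "192.168.1.2",
    "Port": 8080
  }'
```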

View file

@@ -1 +0,0 @@
dummy-volume.hcl

View file

@@ -1,35 +0,0 @@
job "dummy-nginx" {
datacenters = ["neptune"]
type = "service"
group "nginx" {
count = 1
network {
port "http" {
to = 80
}
}
task "nginx" {
driver = "docker"
config {
image = "nginx"
ports = [ "http" ]
}
}
service {
port = "http"
tags = [
"tricot home.adnab.me 100",
]
check {
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
}
}
}
}

View file

@@ -1,82 +0,0 @@
job "frontend" {
datacenters = ["neptune"]
type = "service"
priority = 90
group "tricot" {
network {
port "http_port" { static = 80 }
port "https_port" { static = 443 }
}
task "server" {
driver = "docker"
config {
image = "lxpz/amd64_tricot:36"
network_mode = "host"
readonly_rootfs = true
ports = [ "http_port", "https_port" ]
volumes = [
"secrets:/etc/tricot",
]
}
resources {
cpu = 2000
memory = 200
}
restart {
interval = "30m"
attempts = 2
delay = "15s"
mode = "delay"
}
template {
data = "{{ key \"secrets/consul/consul-ca.crt\" }}"
destination = "secrets/consul-ca.crt"
}
template {
data = "{{ key \"secrets/consul/consul-client.crt\" }}"
destination = "secrets/consul-client.crt"
}
template {
data = "{{ key \"secrets/consul/consul-client.key\" }}"
destination = "secrets/consul-client.key"
}
template {
data = <<EOH
TRICOT_NODE_NAME={{ env "attr.unique.consul.name" }}
TRICOT_LETSENCRYPT_EMAIL=alex@adnab.me
TRICOT_ENABLE_COMPRESSION=true
TRICOT_CONSUL_HOST=https://localhost:8501
TRICOT_CONSUL_CA_CERT=/etc/tricot/consul-ca.crt
TRICOT_CONSUL_CLIENT_CERT=/etc/tricot/consul-client.crt
TRICOT_CONSUL_CLIENT_KEY=/etc/tricot/consul-client.key
RUST_LOG=tricot=debug
EOH
destination = "secrets/env"
env = true
}
service {
name = "tricot-http"
port = "http_port"
tags = [ "(diplonat (tcp_port 80))" ]
address_mode = "host"
}
service {
name = "tricot-https"
port = "https_port"
tags = [ "(diplonat (tcp_port 443))" ]
address_mode = "host"
}
}
}
}

View file

@@ -1,27 +0,0 @@
block_size = 1048576
metadata_dir = "/meta"
data_dir = "/data"
replication_mode = "3"
rpc_bind_addr = "0.0.0.0:3991"
rpc_secret = "{{ key "secrets/garage-staging/rpc_secret" | trimSpace }}"
consul_host = "localhost:8500"
consul_service_name = "garage-staging-rpc-self-advertised"
bootstrap_peers = []
[s3_api]
s3_region = "garage-staging"
api_bind_addr = "0.0.0.0:3990"
[s3_web]
bind_addr = "0.0.0.0:3992"
root_domain = ".garage-staging-web.home.adnab.me"
index = "index.html"
[admin]
api_bind_addr = "0.0.0.0:3909"
trace_sink = "http://{{ env "attr.unique.network.ip-address" }}:4317"

View file

@@ -1,139 +0,0 @@
job "garage-staging" {
type = "system"
#datacenters = [ "neptune", "pluton" ]
datacenters = [ "neptune" ]
priority = 80
constraint {
attribute = "${attr.cpu.arch}"
value = "amd64"
}
group "garage-staging" {
network {
port "s3" { static = 3990 }
port "rpc" { static = 3991 }
port "web" { static = 3992 }
port "admin" { static = 3909 }
}
update {
max_parallel = 1
min_healthy_time = "30s"
healthy_deadline = "5m"
}
task "server" {
driver = "docker"
config {
image = "dxflrs/amd64_garage:v0.7.0"
command = "/garage"
args = [ "server" ]
network_mode = "host"
volumes = [
"/mnt/storage/garage-staging/data:/data",
"/mnt/ssd/garage-staging/meta:/meta",
"secrets/garage.toml:/etc/garage.toml",
]
}
template {
data = file("../config/garage.toml")
destination = "secrets/garage.toml"
}
resources {
memory = 1000
cpu = 1000
}
kill_signal = "SIGINT"
kill_timeout = "20s"
service {
tags = [
"garage-staging-api",
"tricot garage-staging.home.adnab.me",
"tricot-add-header Access-Control-Allow-Origin *",
]
port = 3990
address_mode = "driver"
name = "garage-staging-api"
check {
type = "tcp"
port = 3990
address_mode = "driver"
interval = "60s"
timeout = "5s"
check_restart {
limit = 3
grace = "90s"
ignore_warnings = false
}
}
}
service {
tags = ["garage-staging-rpc"]
port = 3991
address_mode = "driver"
name = "garage-staging-rpc"
check {
type = "tcp"
port = 3991
address_mode = "driver"
interval = "60s"
timeout = "5s"
check_restart {
limit = 3
grace = "90s"
ignore_warnings = false
}
}
}
service {
tags = [
"garage-staging-web",
"tricot *.garage-staging-web.home.adnab.me",
"tricot matrix.home.adnab.me/.well-known/matrix/server",
"tricot rust-docs",
"tricot-add-header Access-Control-Allow-Origin *",
]
port = 3992
address_mode = "driver"
name = "garage-staging-web"
check {
type = "tcp"
port = 3992
address_mode = "driver"
interval = "60s"
timeout = "5s"
check_restart {
limit = 3
grace = "90s"
ignore_warnings = false
}
}
}
service {
tags = [
"garage-staging-admin",
]
port = 3909
address_mode = "driver"
name = "garage-staging-admin"
}
restart {
interval = "30m"
attempts = 10
delay = "15s"
mode = "delay"
}
}
}
}

View file

@@ -1 +0,0 @@
CMD_ONCE openssl rand -hex 32

View file

@@ -1,13 +0,0 @@
#!/bin/bash
cat > database.yaml <<EOF
sqlite:
database: $SYNAPSE_SQLITE_DB
EOF
while true; do
/root/matrix-env/bin/s3_media_upload update-db 0d
/root/matrix-env/bin/s3_media_upload --no-progress check-deleted $SYNAPSE_MEDIA_STORE
/root/matrix-env/bin/s3_media_upload --no-progress upload $SYNAPSE_MEDIA_STORE $SYNAPSE_MEDIA_S3_BUCKET --delete --endpoint-url $S3_ENDPOINT
sleep 600
done

View file

@@ -1 +0,0 @@
USER Synapse's `form_secret` configuration parameter

View file

@@ -1 +0,0 @@
USER Synapse's `macaroon_secret_key` parameter

View file

@@ -1 +0,0 @@
USER Synapse's `registration_shared_secret` parameter

View file

@@ -1 +0,0 @@
USER S3 access key ID for database storage

View file

@@ -1 +0,0 @@
USER S3 secret key for database storage

View file

@@ -1 +0,0 @@
USER Signing key for messages

View file

@@ -1 +0,0 @@
../../infrastructure/app/secretmgr.py

View file

@@ -0,0 +1,28 @@
FROM golang:buster as builder
WORKDIR /root
RUN git clone https://filippo.io/age && cd age/cmd/age && go build -o age .
FROM amd64/debian:buster
COPY --from=builder /root/age/cmd/age/age /usr/local/bin/age
RUN apt-get update && \
apt-get -qq -y full-upgrade && \
apt-get install -y rsync wget openssh-client unzip && \
apt-get clean && \
rm -f /var/lib/apt/lists/*_*
RUN mkdir -p /root/.ssh
WORKDIR /root
RUN wget https://releases.hashicorp.com/consul/1.8.5/consul_1.8.5_linux_amd64.zip && \
unzip consul_1.8.5_linux_amd64.zip && \
chmod +x consul && \
mv consul /usr/local/bin && \
rm consul_1.8.5_linux_amd64.zip
COPY do_backup.sh /root/do_backup.sh
CMD "/root/do_backup.sh"

View file

@@ -0,0 +1,20 @@
#!/bin/sh
set -x -e
cd /root
chmod 0600 .ssh/id_ed25519
cat > .ssh/config <<EOF
Host backuphost
HostName $TARGET_SSH_HOST
Port $TARGET_SSH_PORT
User $TARGET_SSH_USER
EOF
consul kv export | \
gzip | \
age -r "$(cat /root/.ssh/id_ed25519.pub)" | \
ssh backuphost "cat > $TARGET_SSH_DIR/consul/$(date --iso-8601=minute)_consul_kv_export.gz.age"
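For completeness, restoring such a dump could be sketched as follows (assuming you hold the matching ssh private key used as the `age` identity; the file name pattern follows the export command above with the date elided):

```
# decrypt a Consul KV dump and re-import it
age -d -i /root/.ssh/id_ed25519 < <date>_consul_kv_export.gz.age \
  | gunzip \
  | consul kv import -
```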

View file

@@ -0,0 +1,7 @@
FROM alpine:3.17
RUN apk add rclone curl bash jq
COPY do-backup.sh /do-backup.sh
CMD bash /do-backup.sh

View file

@@ -0,0 +1,83 @@
#!/usr/bin/env bash
# DESCRIPTION:
# Script to backup all buckets on a Garage cluster using rclone.
#
# REQUIREMENTS:
# An access key for the backup script must be created in Garage beforehand.
# This script will use the Garage administration API to grant read access
# to this key on all buckets.
#
# A rclone configuration file is expected to be located at `/etc/secrets/rclone.conf`,
# which contains credentials to the following two remotes:
# garage: the Garage server, for read access (using the backup access key)
# backup: the backup location
#
# DEPENDENCIES: (see Dockerfile)
# curl
# jq
# rclone
#
# PARAMETERS (environment variables)
# $GARAGE_ADMIN_API_URL => Garage administration API URL (e.g. http://localhost:3903)
# $GARAGE_ADMIN_TOKEN => Garage administration access token
# $GARAGE_ACCESS_KEY => Garage access key ID
# $TARGET_BACKUP_DIR => Folder on the backup remote where to store buckets
if [ -z "$GARAGE_ACCESS_KEY" -o -z "$GARAGE_ADMIN_TOKEN" -o -z "$GARAGE_ADMIN_API_URL" ]; then
echo "Missing parameters" >&2
exit 1
fi
# copy potentially immutable file to a mutable location,
# otherwise rclone complains
mkdir -p /root/.config/rclone
cp /etc/secrets/rclone.conf /root/.config/rclone/rclone.conf
function gcurl {
curl -s -H "Authorization: Bearer $GARAGE_ADMIN_TOKEN" $@
}
BUCKETS=$(gcurl "$GARAGE_ADMIN_API_URL/v0/bucket" | jq -r '.[].id')
mkdir -p /tmp/buckets-info
for BUCKET in $BUCKETS; do
echo "==== BUCKET $BUCKET ===="
gcurl "$GARAGE_ADMIN_API_URL/v0/bucket?id=$BUCKET" > "/tmp/buckets-info/$BUCKET.json"
rclone copy "/tmp/buckets-info/$BUCKET.json" "backup:$TARGET_BACKUP_DIR/" 2>&1
ALIASES=$(jq -r '.globalAliases[]' < "/tmp/buckets-info/$BUCKET.json")
echo "(aka. $ALIASES)"
case $ALIASES in
*backup*)
echo "Skipping $BUCKET (not doing backup of backup)"
;;
*cache*)
echo "Skipping $BUCKET (not doing backup of cache)"
;;
*)
echo "Backing up $BUCKET"
gcurl -X POST -H "Content-Type: application/json" --data @- "$GARAGE_ADMIN_API_URL/v0/bucket/allow" >/dev/null <<EOF
{
"bucketId": "$BUCKET",
"accessKeyId": "$GARAGE_ACCESS_KEY",
"permissions": {"read": true}
}
EOF
rclone sync \
--transfers 32 \
--fast-list \
--stats-one-line \
--stats 10s \
--stats-log-level NOTICE \
"garage:$BUCKET" "backup:$TARGET_BACKUP_DIR/$BUCKET" 2>&1
;;
esac
done
echo "========= DONE SYNCHRONIZING =========="

View file

@@ -0,0 +1 @@
result

View file

@@ -0,0 +1,8 @@
## Build
```bash
docker load < $(nix-build docker.nix)
docker push superboum/backup-psql:???
```

View file

@@ -0,0 +1,106 @@
#!/usr/bin/env python3
import shutil,sys,os,datetime,minio,subprocess
working_directory = "."
if 'CACHE_DIR' in os.environ: working_directory = os.environ['CACHE_DIR']
required_space_in_bytes = 20 * 1024 * 1024 * 1024
bucket = os.environ['AWS_BUCKET']
key = os.environ['AWS_ACCESS_KEY_ID']
secret = os.environ['AWS_SECRET_ACCESS_KEY']
endpoint = os.environ['AWS_ENDPOINT']
pubkey = os.environ['CRYPT_PUBLIC_KEY']
psql_host = os.environ['PSQL_HOST']
psql_user = os.environ['PSQL_USER']
s3_prefix = str(datetime.datetime.now())
files = [ "backup_manifest", "base.tar.gz", "pg_wal.tar.gz" ]
clear_paths = [ os.path.join(working_directory, f) for f in files ]
crypt_paths = [ os.path.join(working_directory, f) + ".age" for f in files ]
s3_keys = [ s3_prefix + "/" + f for f in files ]
def abort(msg):
for p in clear_paths + crypt_paths:
if os.path.exists(p):
print(f"Remove {p}")
os.remove(p)
if msg: sys.exit(msg)
else: print("success")
# Check we have enough space on disk
if shutil.disk_usage(working_directory).free < required_space_in_bytes:
abort(f"Not enough space on disk at path {working_directory} to perform a backup, aborting")
# Check postgres password is set
if 'PGPASSWORD' not in os.environ:
abort(f"You must pass postgres' password through the environment variable PGPASSWORD")
# Check our working directory is empty
if len(os.listdir(working_directory)) != 0:
abort(f"Working directory {working_directory} is not empty, aborting")
# Check Minio
client = minio.Minio(endpoint, key, secret)
if not client.bucket_exists(bucket):
abort(f"Bucket {bucket} does not exist or its access is forbidden, aborting")
# Perform the backup locally
try:
ret = subprocess.run(["pg_basebackup",
f"--host={psql_host}",
f"--username={psql_user}",
f"--pgdata={working_directory}",
f"--format=tar",
"--wal-method=stream",
"--gzip",
"--compress=6",
"--progress",
"--max-rate=5M",
])
if ret.returncode != 0:
abort(f"pg_basebackup exited, expected return code 0, got {ret.returncode}. aborting")
except Exception as e:
abort(f"pg_basebackup raised exception {e}. aborting")
# Check that the expected files are here
for p in clear_paths:
print(f"Checking that {p} exists locally")
if not os.path.exists(p):
abort(f"File {p} expected but not found, aborting")
# Cipher them
for c, e in zip(clear_paths, crypt_paths):
print(f"Ciphering {c} to {e}")
try:
ret = subprocess.run(["age", "-r", pubkey, "-o", e, c])
if ret.returncode != 0:
abort(f"age exit code is {ret.returncode}, 0 expected. aborting")
except Exception as e:
abort(f"age raised an exception. {e}. aborting")
# Upload the backup to S3
for p, k in zip(crypt_paths, s3_keys):
try:
print(f"Uploading {p} to {k}")
result = client.fput_object(bucket, k, p)
print(
"created {0} object; etag: {1}, version-id: {2}".format(
result.object_name, result.etag, result.version_id,
),
)
except Exception as e:
abort(f"Exception {e} occurred while uploading {p}. aborting")
# Check that the files have been uploaded
for k in s3_keys:
try:
print(f"Checking that {k} exists remotely")
result = client.stat_object(bucket, k)
print(
"last-modified: {0}, size: {1}".format(
result.last_modified, result.size,
),
)
except Exception as e:
abort(f"{k} not found on S3. {e}. aborting")
abort(None)

View file

@@ -0,0 +1,8 @@
{
pkgsSrc = fetchTarball {
# Latest commit on https://github.com/NixOS/nixpkgs/tree/nixos-21.11
# As of 2022-04-15
url = "https://github.com/NixOS/nixpkgs/archive/2f06b87f64bc06229e05045853e0876666e1b023.tar.gz";
sha256 = "sha256:1d7zg96xw4qsqh7c89pgha9wkq3rbi9as3k3d88jlxy2z0ns0cy2";
};
}

View file

@@ -0,0 +1,37 @@
let
common = import ./common.nix;
pkgs = import common.pkgsSrc {};
python-with-my-packages = pkgs.python3.withPackages (p: with p; [
minio
]);
in
pkgs.stdenv.mkDerivation {
name = "backup-psql";
src = pkgs.lib.sourceFilesBySuffices ./. [ ".py" ];
buildInputs = [
python-with-my-packages
pkgs.age
pkgs.postgresql_14
];
buildPhase = ''
cat > backup-psql <<EOF
#!${pkgs.bash}/bin/bash
export PYTHONPATH=${python-with-my-packages}/${python-with-my-packages.sitePackages}
export PATH=${python-with-my-packages}/bin:${pkgs.age}/bin:${pkgs.postgresql_14}/bin
${python-with-my-packages}/bin/python3 $out/lib/backup-psql.py
EOF
chmod +x backup-psql
'';
installPhase = ''
mkdir -p $out/{bin,lib}
cp *.py $out/lib/backup-psql.py
cp backup-psql $out/bin/backup-psql
'';
}

View file

@@ -0,0 +1,11 @@
let
common = import ./common.nix;
app = import ./default.nix;
pkgs = import common.pkgsSrc {};
in
pkgs.dockerTools.buildImage {
name = "superboum/backup-psql-docker";
config = {
Cmd = [ "${app}/bin/backup-psql" ];
};
}

View file

@@ -0,0 +1,196 @@
job "backup_daily" {
datacenters = ["neptune", "scorpio", "bespin"]
type = "batch"
priority = "60"
periodic {
cron = "@daily"
// Do not allow overlapping runs.
prohibit_overlap = true
}
group "backup-dovecot" {
constraint {
attribute = "${attr.unique.hostname}"
operator = "="
value = "ananas"
}
task "main" {
driver = "docker"
config {
image = "restic/restic:0.16.0"
entrypoint = [ "/bin/sh", "-c" ]
args = [ "restic backup /mail && restic forget --group-by paths --keep-within 1m1d --keep-within-weekly 3m --keep-within-monthly 1y && restic prune --max-unused 50% --max-repack-size 2G && restic check" ]
volumes = [
"/mnt/ssd/mail:/mail"
]
}
template {
data = <<EOH
AWS_ACCESS_KEY_ID={{ key "secrets/email/dovecot/backup_aws_access_key_id" }}
AWS_SECRET_ACCESS_KEY={{ key "secrets/email/dovecot/backup_aws_secret_access_key" }}
RESTIC_REPOSITORY={{ key "secrets/email/dovecot/backup_restic_repository" }}
RESTIC_PASSWORD={{ key "secrets/email/dovecot/backup_restic_password" }}
EOH
destination = "secrets/env_vars"
env = true
}
resources {
cpu = 500
memory = 100
memory_max = 1000
}
restart {
attempts = 2
interval = "30m"
delay = "15s"
mode = "fail"
}
}
}
group "backup-consul" {
task "consul-kv-export" {
driver = "docker"
lifecycle {
hook = "prestart"
sidecar = false
}
config {
image = "consul:1.13.1"
network_mode = "host"
entrypoint = [ "/bin/sh", "-c" ]
args = [ "/bin/consul kv export > $NOMAD_ALLOC_DIR/consul.json" ]
volumes = [
"secrets:/etc/consul",
]
}
env {
CONSUL_HTTP_ADDR = "https://consul.service.prod.consul:8501"
CONSUL_HTTP_SSL = "true"
CONSUL_CACERT = "/etc/consul/consul.crt"
CONSUL_CLIENT_CERT = "/etc/consul/consul-client.crt"
CONSUL_CLIENT_KEY = "/etc/consul/consul-client.key"
}
resources {
cpu = 200
memory = 200
}
template {
data = "{{ key \"secrets/consul/consul.crt\" }}"
destination = "secrets/consul.crt"
}
template {
data = "{{ key \"secrets/consul/consul-client.crt\" }}"
destination = "secrets/consul-client.crt"
}
template {
data = "{{ key \"secrets/consul/consul-client.key\" }}"
destination = "secrets/consul-client.key"
}
restart {
attempts = 2
interval = "30m"
delay = "15s"
mode = "fail"
}
}
task "restic-backup" {
driver = "docker"
config {
image = "restic/restic:0.16.0"
entrypoint = [ "/bin/sh", "-c" ]
args = [ "restic backup $NOMAD_ALLOC_DIR/consul.json && restic forget --group-by paths --keep-within 1m1d --keep-within-weekly 3m --keep-within-monthly 1y && restic prune --max-unused 50% --max-repack-size 2G && restic check" ]
}
template {
data = <<EOH
AWS_ACCESS_KEY_ID={{ key "secrets/backup/consul/backup_aws_access_key_id" }}
AWS_SECRET_ACCESS_KEY={{ key "secrets/backup/consul/backup_aws_secret_access_key" }}
RESTIC_REPOSITORY={{ key "secrets/backup/consul/backup_restic_repository" }}
RESTIC_PASSWORD={{ key "secrets/backup/consul/backup_restic_password" }}
EOH
destination = "secrets/env_vars"
env = true
}
resources {
cpu = 200
memory = 200
}
restart {
attempts = 2
interval = "30m"
delay = "15s"
mode = "fail"
}
}
}
group "backup-cryptpad" {
constraint {
attribute = "${attr.unique.hostname}"
operator = "="
value = "concombre"
}
task "main" {
driver = "docker"
config {
image = "restic/restic:0.16.0"
entrypoint = [ "/bin/sh", "-c" ]
args = [ "restic backup /cryptpad && restic forget --group-by paths --keep-within 1m1d --keep-within-weekly 3m --keep-within-monthly 1y && restic prune --max-unused 50% --max-repack-size 2G && restic check" ]
volumes = [
"/mnt/ssd/cryptpad:/cryptpad"
]
}
template {
data = <<EOH
AWS_ACCESS_KEY_ID={{ key "secrets/backup/cryptpad/backup_aws_access_key_id" }}
AWS_SECRET_ACCESS_KEY={{ key "secrets/backup/cryptpad/backup_aws_secret_access_key" }}
RESTIC_REPOSITORY={{ key "secrets/backup/cryptpad/backup_restic_repository" }}
RESTIC_PASSWORD={{ key "secrets/backup/cryptpad/backup_restic_password" }}
EOH
destination = "secrets/env_vars"
env = true
}
resources {
cpu = 500
memory = 100
memory_max = 1000
}
restart {
attempts = 2
interval = "30m"
delay = "15s"
mode = "fail"
}
}
}
}
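Each backup task above chains five restic commands into a single shell string passed to `sh -c`. Here is the same pipeline broken out for readability, with `restic` stubbed by a shell function so the structure can be dry-run without a repository (the stub is the only thing added here):

```shell
# Stub restic so the pipeline's structure can be exercised without a repository.
restic() { echo "restic $*"; }

restic backup /cryptpad \
  && restic forget --group-by paths \
       --keep-within 1m1d \
       --keep-within-weekly 3m \
       --keep-within-monthly 1y \
  && restic prune --max-unused 50% --max-repack-size 2G \
  && restic check
```

The retention policy keeps every snapshot from the last month (plus a day of slack), weekly snapshots for three months, and monthly snapshots for a year; `prune` then reclaims unreferenced data and `check` verifies repository integrity.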


@ -0,0 +1,72 @@
job "backup-garage" {
datacenters = ["neptune", "bespin", "scorpio"]
type = "batch"
priority = "60"
periodic {
cron = "@daily"
// Do not allow overlapping runs.
prohibit_overlap = true
}
group "backup-garage" {
task "main" {
driver = "docker"
config {
image = "lxpz/backup_garage:9"
network_mode = "host"
volumes = [
"secrets/rclone.conf:/etc/secrets/rclone.conf"
]
}
template {
data = <<EOH
GARAGE_ADMIN_TOKEN={{ key "secrets/garage/admin_token" }}
GARAGE_ADMIN_API_URL=http://localhost:3903
GARAGE_ACCESS_KEY={{ key "secrets/backup/garage/s3_access_key_id" }}
TARGET_BACKUP_DIR={{ key "secrets/backup/garage/target_sftp_directory" }}
EOH
destination = "secrets/env_vars"
env = true
}
template {
data = <<EOH
[garage]
type = s3
provider = Other
env_auth = false
access_key_id = {{ key "secrets/backup/garage/s3_access_key_id" }}
secret_access_key = {{ key "secrets/backup/garage/s3_secret_access_key" }}
endpoint = http://localhost:3900
region = garage
[backup]
type = sftp
host = {{ key "secrets/backup/garage/target_sftp_host" }}
user = {{ key "secrets/backup/garage/target_sftp_user" }}
port = {{ key "secrets/backup/garage/target_sftp_port" }}
key_pem = {{ key "secrets/backup/garage/target_sftp_key_pem" | replaceAll "\n" "\\n" }}
shell_type = unix
EOH
destination = "secrets/rclone.conf"
}
resources {
cpu = 500
memory = 200
memory_max = 4000
}
restart {
attempts = 2
interval = "30m"
delay = "15s"
mode = "fail"
}
}
}
}
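The `key_pem` template line flattens a multi-line SSH private key onto a single rclone config line with literal `\n` sequences, via `replaceAll "\n" "\\n"`. The same transformation can be sketched in plain shell; the key material below is a placeholder, not a real key:

```shell
# Placeholder key: two lines standing in for a real PEM block.
key="-----BEGIN KEY-----
-----END KEY-----"

# Replace real newlines with literal backslash-n, as the template does.
escaped=$(printf '%s' "$key" | awk 'NR > 1 { printf "\\n" } { printf "%s", $0 }')
echo "$escaped"
```

rclone then expands those literal `\n` sequences back into real newlines when it reads `key_pem`.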


@ -0,0 +1,55 @@
job "backup_weekly" {
datacenters = ["scorpio", "neptune", "bespin"]
type = "batch"
priority = "60"
periodic {
cron = "@weekly"
// Do not allow overlapping runs.
prohibit_overlap = true
}
group "backup-psql" {
task "main" {
driver = "docker"
config {
image = "superboum/backup-psql-docker:gyr3aqgmhs0hxj0j9hkrdmm1m07i8za2"
volumes = [
// Mount a cache on the hard disk to avoid filling up the SSD
"/mnt/storage/tmp_bckp_psql:/mnt/cache"
]
}
template {
data = <<EOH
CACHE_DIR=/mnt/cache
AWS_BUCKET=backups-pgbasebackup
AWS_ENDPOINT=s3.deuxfleurs.shirokumo.net
AWS_ACCESS_KEY_ID={{ key "secrets/postgres/backup/aws_access_key_id" }}
AWS_SECRET_ACCESS_KEY={{ key "secrets/postgres/backup/aws_secret_access_key" }}
CRYPT_PUBLIC_KEY={{ key "secrets/postgres/backup/crypt_public_key" }}
PSQL_HOST={{ env "meta.site" }}.psql-proxy.service.prod.consul
PSQL_USER={{ key "secrets/postgres/keeper/pg_repl_username" }}
PGPASSWORD={{ key "secrets/postgres/keeper/pg_repl_pwd" }}
EOH
destination = "secrets/env_vars"
env = true
}
resources {
cpu = 200
memory = 200
}
restart {
attempts = 2
interval = "30m"
delay = "15s"
mode = "fail"
}
}
}
}


@ -0,0 +1,92 @@
# Cryptpad backup
[secrets."backup/cryptpad/backup_restic_password"]
type = 'user'
description = 'Restic password to encrypt backups'
[secrets."backup/cryptpad/backup_aws_secret_access_key"]
type = 'user'
description = 'Backup AWS secret access key'
[secrets."backup/cryptpad/backup_restic_repository"]
type = 'user'
description = 'Restic repository'
example = 's3:https://s3.garage.tld'
[secrets."backup/cryptpad/backup_aws_access_key_id"]
type = 'user'
description = 'Backup AWS access key ID'
# Consul backup
[secrets."backup/consul/backup_restic_password"]
type = 'user'
description = 'Restic password to encrypt backups'
[secrets."backup/consul/backup_aws_secret_access_key"]
type = 'user'
description = 'Backup AWS secret access key'
[secrets."backup/consul/backup_restic_repository"]
type = 'user'
description = 'Restic repository'
example = 's3:https://s3.garage.tld'
[secrets."backup/consul/backup_aws_access_key_id"]
type = 'user'
description = 'Backup AWS access key ID'
# Postgresql backup
[secrets."postgres/backup/aws_access_key_id"]
type = 'user'
description = 'Minio access key'
[secrets."postgres/backup/aws_secret_access_key"]
type = 'user'
description = 'Minio secret key'
[secrets."postgres/backup/crypt_public_key"]
type = 'user'
description = 'A public key to encrypt backups with age'
# Plume backup
[secrets."plume/backup_restic_repository"]
type = 'user'
description = 'Restic repository'
example = 's3:https://s3.garage.tld'
[secrets."plume/backup_restic_password"]
type = 'user'
description = 'Restic password to encrypt backups'
[secrets."plume/backup_aws_secret_access_key"]
type = 'user'
description = 'Backup AWS secret access key'
[secrets."plume/backup_aws_access_key_id"]
type = 'user'
description = 'Backup AWS access key ID'
# Dovecot backup
[secrets."email/dovecot/backup_restic_password"]
type = 'user'
description = 'Restic backup password to encrypt data'
[secrets."email/dovecot/backup_aws_secret_access_key"]
type = 'user'
description = 'AWS Secret Access key'
[secrets."email/dovecot/backup_restic_repository"]
type = 'user'
description = 'Restic Repository URL, check op_guide/backup-minio to see the format'
[secrets."email/dovecot/backup_aws_access_key_id"]
type = 'user'
description = 'AWS Access Key ID'


@ -0,0 +1,88 @@
job "bagage" {
datacenters = ["scorpio", "neptune"]
type = "service"
priority = 90
constraint {
attribute = "${attr.cpu.arch}"
value = "amd64"
}
group "main" {
count = 1
network {
port "web_port" {
static = 8080
to = 8080
}
port "ssh_port" {
static = 2222
to = 2222
}
}
task "server" {
driver = "docker"
config {
image = "lxpz/amd64_bagage:20231016-3"
readonly_rootfs = false
network_mode = "host"
volumes = [
"secrets/id_rsa:/id_rsa"
]
ports = [ "web_port", "ssh_port" ]
}
env {
BAGAGE_LDAP_ENDPOINT = "bottin.service.prod.consul:389"
}
resources {
memory = 200
cpu = 100
}
template {
data = "{{ key \"secrets/bagage/id_rsa\" }}"
destination = "secrets/id_rsa"
}
service {
name = "bagage-ssh"
port = "ssh_port"
address_mode = "host"
tags = [
"bagage",
"(diplonat (tcp_port 2222))",
"d53-a sftp.deuxfleurs.fr",
"d53-aaaa sftp.deuxfleurs.fr",
]
}
service {
name = "bagage-webdav"
tags = [
"bagage",
"tricot bagage.deuxfleurs.fr",
"d53-cname bagage.deuxfleurs.fr",
]
port = "web_port"
address_mode = "host"
check {
type = "tcp"
port = "web_port"
address_mode = "host"
interval = "60s"
timeout = "5s"
check_restart {
limit = 3
grace = "90s"
ignore_warnings = false
}
}
}
}
}
}


@ -0,0 +1,4 @@
[secrets."bagage/id_rsa"]
type = 'command'
rotate = true
command = 'ssh-keygen -q -f >(cat) -N "" <<< y 2>/dev/null 1>&2 ; true'


@ -0,0 +1,11 @@
HOST=0.0.0.0
PORT={{ env "NOMAD_PORT_web_port" }}
SESSION_SECRET={{ key "secrets/cms/teabag/session" | trimSpace }}
GITEA_KEY={{ key "secrets/cms/teabag/gitea_key" | trimSpace }}
GITEA_SECRET={{ key "secrets/cms/teabag/gitea_secret" | trimSpace }}
GITEA_BASE_URL=https://git.deuxfleurs.fr
GITEA_AUTH_URI=login/oauth/authorize
GITEA_TOKEN_URI=login/oauth/access_token
GITEA_USER_URI=api/v1/user
CALLBACK_URI=https://teabag.deuxfleurs.fr/callback
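The three `GITEA_*_URI` values are paths relative to `GITEA_BASE_URL`; joined, they presumably give the standard Gitea OAuth2 endpoints. A quick sketch of the join:

```shell
GITEA_BASE_URL=https://git.deuxfleurs.fr
GITEA_AUTH_URI=login/oauth/authorize
GITEA_TOKEN_URI=login/oauth/access_token

# teabag presumably joins base URL and path with a slash:
echo "$GITEA_BASE_URL/$GITEA_AUTH_URI"
echo "$GITEA_BASE_URL/$GITEA_TOKEN_URI"
```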


@ -0,0 +1,74 @@
job "cms" {
datacenters = ["neptune", "scorpio"]
type = "service"
priority = 100
constraint {
attribute = "${attr.cpu.arch}"
value = "amd64"
}
group "auth" {
count = 1
network {
port "web_port" { }
}
task "teabag" {
driver = "docker"
config {
# Using a digest to pin the container as no tag is provided
# https://github.com/denyskon/teabag/pkgs/container/teabag
image = "ghcr.io/denyskon/teabag@sha256:d5af7c6caf172727fbfa047c8ee82f9087ef904f0f3bffdeec656be04e9e0a14"
ports = [ "web_port" ]
volumes = [
"secrets/teabag.env:/etc/teabag/teabag.env",
]
}
template {
data = file("../config/teabag.env")
destination = "secrets/teabag.env"
}
resources {
memory = 20
memory_max = 50
cpu = 50
}
service {
name = "teabag"
tags = [
"teabag",
"tricot teabag.deuxfleurs.fr",
"d53-cname teabag.deuxfleurs.fr",
]
port = "web_port"
check {
type = "http"
protocol = "http"
port = "web_port"
path = "/"
interval = "60s"
timeout = "5s"
check_restart {
limit = 3
grace = "600s"
ignore_warnings = false
}
}
}
restart {
interval = "30m"
attempts = 20
delay = "15s"
mode = "delay"
}
}
}
}


@ -0,0 +1,17 @@
# HTTP Session Encryption Key
[secrets."cms/teabag/session"]
type = 'command'
rotate = true
command = 'openssl rand -base64 32'
# Gitea Application Token
[secrets."cms/teabag/gitea_key"]
type = 'user'
description = 'Gitea Application Key'
example = '4fea0...'
[secrets."cms/teabag/gitea_secret"]
type = 'user'
description = 'Gitea Secret Key'
example = 'gto_bz6f...'


@ -0,0 +1,26 @@
{
"suffix": "{{ key "secrets/directory/ldap_base_dn" }}",
"bind": "0.0.0.0:389",
"log_level": "debug",
"acl": [
"*,{{ key "secrets/directory/ldap_base_dn" }}::read:*:* !userpassword !user_secret !alternate_user_secrets !garage_s3_secret_key",
"*::read modify:SELF:*",
"ANONYMOUS::bind:*,ou=users,{{ key "secrets/directory/ldap_base_dn" }}:",
"ANONYMOUS::bind:cn=admin,{{ key "secrets/directory/ldap_base_dn" }}:",
"*,ou=services,ou=users,{{ key "secrets/directory/ldap_base_dn" }}::bind:*,ou=users,{{ key "secrets/directory/ldap_base_dn" }}:*",
"*,ou=services,ou=users,{{ key "secrets/directory/ldap_base_dn" }}::read:*:*",
"*:cn=asso_deuxfleurs,ou=groups,{{ key "secrets/directory/ldap_base_dn" }}:add:*,ou=invitations,{{ key "secrets/directory/ldap_base_dn" }}:*",
"ANONYMOUS::bind:*,ou=invitations,{{ key "secrets/directory/ldap_base_dn" }}:",
"*,ou=invitations,{{ key "secrets/directory/ldap_base_dn" }}::delete:SELF:*",
"*:cn=asso_deuxfleurs,ou=groups,{{ key "secrets/directory/ldap_base_dn" }}:add:*,ou=users,{{ key "secrets/directory/ldap_base_dn" }}:*",
"*,ou=invitations,{{ key "secrets/directory/ldap_base_dn" }}::add:*,ou=users,{{ key "secrets/directory/ldap_base_dn" }}:*",
"*:cn=asso_deuxfleurs,ou=groups,{{ key "secrets/directory/ldap_base_dn" }}:modifyAdd:cn=email,ou=groups,{{ key "secrets/directory/ldap_base_dn" }}:*",
"*,ou=invitations,{{ key "secrets/directory/ldap_base_dn" }}::modifyAdd:cn=email,ou=groups,{{ key "secrets/directory/ldap_base_dn" }}:*",
"cn=admin,{{ key "secrets/directory/ldap_base_dn" }}::read add modify delete:*:*",
"*:cn=admin,ou=groups,{{ key "secrets/directory/ldap_base_dn" }}:read add modify delete:*:*"
]
}


@ -0,0 +1,100 @@
job "core-bottin" {
datacenters = ["neptune", "scorpio"]
type = "system"
priority = 90
update {
max_parallel = 1
stagger = "1m"
}
group "bottin" {
constraint {
distinct_property = "${meta.site}"
value = "1"
}
network {
port "ldap_port" {
static = 389
to = 389
}
}
task "bottin" {
driver = "docker"
config {
image = "dxflrs/bottin:7h18i30cckckaahv87d3c86pn4a7q41z"
network_mode = "host"
readonly_rootfs = true
ports = [ "ldap_port" ]
volumes = [
"secrets/config.json:/config.json",
"secrets:/etc/bottin",
]
}
restart {
interval = "5m"
attempts = 10
delay = "15s"
mode = "delay"
}
resources {
memory = 100
memory_max = 200
}
template {
data = file("../config/bottin/config.json.tpl")
destination = "secrets/config.json"
}
template {
data = "{{ key \"secrets/consul/consul.crt\" }}"
destination = "secrets/consul.crt"
}
template {
data = "{{ key \"secrets/consul/consul-client.crt\" }}"
destination = "secrets/consul-client.crt"
}
template {
data = "{{ key \"secrets/consul/consul-client.key\" }}"
destination = "secrets/consul-client.key"
}
template {
data = <<EOH
CONSUL_HTTP_ADDR=https://consul.service.prod.consul:8501
CONSUL_HTTP_SSL=true
CONSUL_CACERT=/etc/bottin/consul.crt
CONSUL_CLIENT_CERT=/etc/bottin/consul-client.crt
CONSUL_CLIENT_KEY=/etc/bottin/consul-client.key
EOH
destination = "secrets/env"
env = true
}
service {
tags = [ "${meta.site}" ]
port = "ldap_port"
address_mode = "host"
name = "bottin"
check {
type = "tcp"
port = "ldap_port"
interval = "60s"
timeout = "5s"
check_restart {
limit = 3
grace = "90s"
ignore_warnings = false
}
}
}
}
}
}


@ -0,0 +1,102 @@
job "core-d53" {
datacenters = ["neptune", "scorpio", "bespin"]
type = "service"
priority = 90
group "D53" {
count = 1
task "d53" {
driver = "docker"
config {
image = "lxpz/amd64_d53:4"
network_mode = "host"
readonly_rootfs = true
volumes = [
"secrets:/etc/d53",
]
}
resources {
cpu = 100
memory = 100
}
restart {
interval = "3m"
attempts = 10
delay = "15s"
mode = "delay"
}
template {
data = "{{ key \"secrets/consul/consul-ca.crt\" }}"
destination = "secrets/consul-ca.crt"
}
template {
data = "{{ key \"secrets/consul/consul-client.crt\" }}"
destination = "secrets/consul-client.crt"
}
template {
data = "{{ key \"secrets/consul/consul-client.key\" }}"
destination = "secrets/consul-client.key"
}
template {
data = <<EOH
D53_CONSUL_HOST=https://localhost:8501
D53_CONSUL_CA_CERT=/etc/d53/consul-ca.crt
D53_CONSUL_CLIENT_CERT=/etc/d53/consul-client.crt
D53_CONSUL_CLIENT_KEY=/etc/d53/consul-client.key
D53_PROVIDERS=deuxfleurs.fr:gandi
D53_GANDI_API_KEY={{ key "secrets/d53/gandi_api_key" }}
D53_ALLOWED_DOMAINS=deuxfleurs.fr
RUST_LOG=d53=info
EOH
destination = "secrets/env"
env = true
}
}
}
# Dummy task for Gitea (still on an external VM), runs on any bespin node
# and allows D53 to automatically update the A record for git.deuxfleurs.fr
# to the IPv4 address of the bespin site (which changes occasionally)
group "gitea-dummy" {
count = 1
network {
port "dummy" {
to = 999
}
}
task "main" {
driver = "docker"
constraint {
attribute = "${meta.site}"
operator = "="
value = "bespin"
}
config {
image = "alpine"
command = "sh"
args = ["-c", "while true; do echo x; sleep 60; done"]
ports = [ "dummy" ]
}
service {
name = "gitea-dummy"
port = "dummy"
tags = [
"d53-a git.deuxfleurs.fr",
]
}
}
}
}


@ -1,41 +1,37 @@
-job "core" {
-datacenters = ["dc1", "neptune"]
+job "core-diplonat" {
+datacenters = ["neptune", "scorpio", "bespin"]
type = "system"
priority = 90
constraint {
attribute = "${attr.cpu.arch}"
value = "amd64"
}
update {
-max_parallel = 1
+max_parallel = 2
stagger = "1m"
}
-group "network" {
+group "diplonat" {
task "diplonat" {
driver = "docker"
config {
-image = "lxpz/amd64_diplonat:3"
+image = "lxpz/amd64_diplonat:7"
network_mode = "host"
readonly_rootfs = true
privileged = true
volumes = [
"secrets:/etc/diplonat",
]
}
restart {
-interval = "30m"
-attempts = 2
+interval = "5m"
+attempts = 10
delay = "15s"
mode = "delay"
}
template {
-data = "{{ key \"secrets/consul/consul-ca.crt\" }}"
-destination = "secrets/consul-ca.crt"
+data = "{{ key \"secrets/consul/consul.crt\" }}"
+destination = "secrets/consul.crt"
}
template {
@ -53,8 +49,8 @@ job "core" {
DIPLONAT_REFRESH_TIME=60
DIPLONAT_EXPIRATION_TIME=300
DIPLONAT_CONSUL_NODE_NAME={{ env "attr.unique.hostname" }}
-DIPLONAT_CONSUL_URL=https://localhost:8501
-DIPLONAT_CONSUL_CA_CERT=/etc/diplonat/consul-ca.crt
+DIPLONAT_CONSUL_URL=https://consul.service.prod.consul:8501
+DIPLONAT_CONSUL_TLS_SKIP_VERIFY=true
DIPLONAT_CONSUL_CLIENT_CERT=/etc/diplonat/consul-client.crt
DIPLONAT_CONSUL_CLIENT_KEY=/etc/diplonat/consul-client.key
RUST_LOG=debug
@ -64,7 +60,8 @@ EOH
}
resources {
-memory = 40
+memory = 100
+memory_max = 200
}
}
}


@ -0,0 +1,120 @@
job "core-tricot" {
# not bespin for now: we have SSL issues with gitea
# we can add bespin once gitea has been migrated from the VM to the cluster
# in the meantime, the two are not able to share SSL certificates,
# so we let the gitea VM manage the certs and take all the http(s) traffic
datacenters = ["neptune", "scorpio"]
type = "system"
priority = 90
update {
max_parallel = 1
stagger = "5m"
}
group "tricot" {
constraint {
distinct_property = "${meta.site}"
value = "1"
}
network {
port "http_port" { static = 80 }
port "https_port" { static = 443 }
port "metrics_port" { static = 9334 }
}
task "server" {
driver = "docker"
config {
image = "superboum/amd64_tricot:54"
network_mode = "host"
readonly_rootfs = true
ports = [ "http_port", "https_port" ]
volumes = [
"secrets:/etc/tricot",
]
}
resources {
cpu = 1000
memory = 200
memory_max = 500
}
restart {
interval = "5m"
attempts = 10
delay = "15s"
mode = "delay"
}
template {
data = "{{ key \"secrets/consul/consul-ca.crt\" }}"
destination = "secrets/consul-ca.crt"
}
template {
data = "{{ key \"secrets/consul/consul-client.crt\" }}"
destination = "secrets/consul-client.crt"
}
template {
data = "{{ key \"secrets/consul/consul-client.key\" }}"
destination = "secrets/consul-client.key"
}
template {
data = <<EOH
TRICOT_NODE_NAME={{ env "attr.unique.hostname" }}
TRICOT_LETSENCRYPT_EMAIL=prod-sysadmin@deuxfleurs.fr
TRICOT_ENABLE_COMPRESSION=true
TRICOT_CONSUL_HOST=https://consul.service.prod.consul:8501
TRICOT_CONSUL_TLS_SKIP_VERIFY=true
TRICOT_CONSUL_CLIENT_CERT=/etc/tricot/consul-client.crt
TRICOT_CONSUL_CLIENT_KEY=/etc/tricot/consul-client.key
TRICOT_HTTP_BIND_ADDR=[::]:80
TRICOT_HTTPS_BIND_ADDR=[::]:443
TRICOT_METRICS_BIND_ADDR=[::]:9334
TRICOT_WARMUP_CERT_MEMORY_STORE=true
RUST_LOG=tricot=debug
EOH
destination = "secrets/env"
env = true
}
service {
name = "tricot-http"
port = "http_port"
tags = [
"(diplonat (tcp_port 80))",
"${meta.site}"
]
address_mode = "host"
}
service {
name = "tricot-https"
port = "https_port"
tags = [
"(diplonat (tcp_port 443))",
"${meta.site}",
"d53-a global.site.deuxfleurs.fr",
"d53-aaaa global.site.deuxfleurs.fr",
"d53-a ${meta.site}.site.deuxfleurs.fr",
"d53-aaaa ${meta.site}.site.deuxfleurs.fr",
"d53-a v4.${meta.site}.site.deuxfleurs.fr",
"d53-aaaa v6.${meta.site}.site.deuxfleurs.fr",
]
address_mode = "host"
}
service {
name = "tricot-metrics"
port = "metrics_port"
address_mode = "host"
}
}
}
}


@ -0,0 +1,5 @@
[secrets."directory/ldap_base_dn"]
type = 'user'
description = 'LDAP base DN for everything'
example = 'dc=example,dc=com'


@ -0,0 +1,15 @@
#!/bin/sh
turnserver \
-n \
--external-ip=$(detect-external-ip) \
--min-port=49160 \
--max-port=49169 \
--log-file=stdout \
--use-auth-secret \
--realm turn.deuxfleurs.fr \
--no-cli \
--no-tls \
--no-dtls \
--prometheus \
--static-auth-secret '{{ key "secrets/coturn/static-auth-secret" | trimSpace }}'


@ -0,0 +1,87 @@
job "coturn" {
datacenters = ["neptune", "scorpio"]
type = "service"
priority = 100
constraint {
attribute = "${attr.cpu.arch}"
value = "amd64"
}
group "main" {
count = 1
network {
port "prometheus" { static = 9641 }
port "turn_ctrl" { static = 3478 }
port "turn_data0" { static = 49160 }
port "turn_data1" { static = 49161 }
port "turn_data2" { static = 49162 }
port "turn_data3" { static = 49163 }
port "turn_data4" { static = 49164 }
port "turn_data5" { static = 49165 }
port "turn_data6" { static = 49166 }
port "turn_data7" { static = 49167 }
port "turn_data8" { static = 49168 }
port "turn_data9" { static = 49169 }
}
task "turnserver" {
driver = "docker"
config {
image = "coturn/coturn:4.6.1-r2-alpine"
ports = [ "prometheus", "turn_ctrl", "turn_data0", "turn_data1", "turn_data2",
"turn_data3", "turn_data4", "turn_data5", "turn_data6", "turn_data7",
"turn_data8", "turn_data9" ]
network_mode = "host"
volumes = [
"secrets/docker-entrypoint.sh:/usr/local/bin/docker-entrypoint.sh",
]
}
template {
data = file("../config/docker-entrypoint.sh")
destination = "secrets/docker-entrypoint.sh"
perms = 555
}
resources {
memory = 20
memory_max = 50
cpu = 50
}
service {
name = "coturn"
tags = [
"coturn",
"d53-cname turn.deuxfleurs.fr",
"(diplonat (tcp_port 3478) (udp_port 3478 49160 49161 49162 49163 49164 49165 49166 49167 49168 49169))",
]
port = "turn_ctrl"
check {
type = "http"
protocol = "http"
port = "prometheus"
path = "/"
interval = "60s"
timeout = "5s"
check_restart {
limit = 3
grace = "600s"
ignore_warnings = false
}
}
}
restart {
interval = "30m"
attempts = 20
delay = "15s"
mode = "delay"
}
}
}
}
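The ten `turn_data*` port stanzas mirror the `--min-port`/`--max-port` range in the entrypoint script above. If that range ever changes, the stanzas can be regenerated rather than edited by hand; a small sketch:

```shell
# Emit one Nomad port stanza per relay port in the 49160-49169 range.
stanzas=$(for i in $(seq 0 9); do
  echo "port \"turn_data$i\" { static = $((49160 + i)) }"
done)
echo "$stanzas"
```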


@ -0,0 +1,7 @@
docker run \
--name coturn \
--rm \
-it \
-v `pwd`/docker-entrypoint.sh:/usr/local/bin/docker-entrypoint.sh \
--network=host \
coturn/coturn:4.6.1-r2-alpine


@ -0,0 +1,6 @@
stun+turn
tcp: 3478
udp: 49160-49169
prometheus:
tcp: 9641


@ -0,0 +1,5 @@
# coturn
[secrets."coturn/static-auth-secret"]
type = 'command'
rotate = true
command = "openssl rand -base64 64|tr -d '\n'"
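The trailing `tr -d '\n'` matters because `openssl rand -base64 64` wraps its output over several lines, while the secret must end up on a single line. The same idea sketched with coreutils standing in for openssl:

```shell
# 64 random bytes, base64-encoded (wrapped), then joined onto one line.
secret=$(head -c 64 /dev/urandom | base64 | tr -d '\n')
printf '%s\n' "${#secret}"   # 64 bytes always encode to 88 base64 characters
```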


@ -0,0 +1,56 @@
# CryptPad for NixOS with Deuxfleurs flavour
## Building
The `default.nix` file follows the nixpkgs `callPackage` convention for fetching dependencies, so you need to either:
- Run `nix-build --expr '{ ... }@args: (import <nixpkgs> {}).callPackage ./default.nix args'`
- Call `callPackage` on it from a higher-level directory that imports your package
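A minimal sketch of the second option, assuming this package is vendored under a `cryptpad/` subdirectory (the wrapper path and layout are hypothetical):

```nix
# wrapper.nix (hypothetical): import nixpkgs once, then let callPackage
# supply default.nix's arguments (stdenvNoCC, buildNpmPackage, ...).
{ pkgs ? import <nixpkgs> {} }:
pkgs.callPackage ./cryptpad/default.nix { }
```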
### Docker
The `docker.nix` file builds a Docker image that you can load by running:
```shell
docker load -i $(nix-build docker.nix)
```
You can then test the built Docker image using the provided `docker-compose.yml` and `config.js` files, which are
configured to make the instance reachable at `http://localhost:3000`, with data stored in the `_data` folder.
### Deuxfleurs flavour
The `deuxfleurs.nix` file provides two derivations, the CryptPad package itself and a Docker image, which
can be chosen by passing the `-A [name]` flag to `nix-build`.
For example, to build and load the Deuxfleurs-flavoured CryptPad Docker image, you run:
```shell
docker load -i $(nix-build deuxfleurs.nix -A docker)
```
## OnlyOffice integration
In addition to `deuxfleurs.nix`, the `default.nix` and `docker.nix` files build CryptPad with a pre-built copy of
OnlyOffice, which can result in a large Docker image (~2.6GiB).
This behaviour is configurable by passing the `--arg withOnlyOffice false` flag to `nix-build` when building them.
## Updating the Deuxfleurs pinned nixpkgs
The pinned sources files are generated with the [niv](https://github.com/nmattia/niv) tool.
To update the pinned nixpkgs, you simply run the following command:
```shell
niv update
```
To modify the pinned nixpkgs, you can use the `niv modify` command; for example, to move to the nixos-unstable branch:
```shell
niv modify nixpkgs -b nixos-unstable
```
## Quirks
- The CryptPad `package-lock.json` is included here because the upstream-provided one appeared to be out of sync, so
the lockfile had to be regenerated manually


@ -0,0 +1,118 @@
{ lib
, stdenvNoCC
, buildNpmPackage
, fetchFromGitHub
, nodejs
, withOnlyOffice ? true
}: let
onlyOfficeVersions = {
v1 = {
rev = "4f370bebe96e3a0d4054df87412ee5b2c6ed8aaa";
hash = "sha256-TE/99qOx4wT2s0op9wi+SHwqTPYq/H+a9Uus9Zj4iSY=";
};
v2b = {
rev = "d9da72fda95daf93b90ffa345757c47eb5b919dd";
hash = "sha256-SiRDRc2vnLwCVnvtk+C8PKw7IeuSzHBaJmZHogRe3hQ=";
};
v4 = {
rev = "6ebc6938b6841440ffad2efc1e23f1dc1ceda964";
hash = "sha256-eto1+8Tk/s3kbUCpbUh8qCS8EOq700FYG1/KiHyynaA=";
};
v5 = {
rev = "88a356f08ded2f0f4620bda66951caf1d7f02c21";
hash = "sha256-8j1rlAyHlKx6oAs2pIhjPKcGhJFj6ZzahOcgenyeOCc=";
};
v6 = {
rev = "abd8a309f6dd37289f950cd8cea40df4492d8a15";
hash = "sha256-BZdExj2q/bqUD3k9uluOot2dlrWKA+vpad49EdgXKww=";
};
v7 = {
rev = "9d8b914a81f0f9e5d0bc3f0fc631adf4b6d480e7";
hash = "sha256-M+rPJ/Xo2olhqB5ViynGRaesMLLfG/1ltUoLnepMPnM=";
};
};
mkOnlyOffice = {
pname, version
}: stdenvNoCC.mkDerivation (final: {
pname = "${pname}-onlyoffice";
inherit version;
srcs = lib.mapAttrsToList (version: { rev, hash ? lib.fakeHash }: fetchFromGitHub {
name = "${final.pname}-${version}-source";
owner = "cryptpad";
repo = "onlyoffice-builds";
inherit rev hash;
}) onlyOfficeVersions;
dontBuild = true;
sourceRoot = ".";
installPhase = ''
mkdir -p $out
${lib.concatLines (map
(version: "cp -Tr ${final.pname}-${version}-source $out/${version}")
(builtins.attrNames onlyOfficeVersions)
)}
'';
});
in buildNpmPackage rec {
pname = "cryptpad";
version = "2024.3.0";
src = fetchFromGitHub {
owner = "cryptpad";
repo = "cryptpad";
rev = version;
hash = "sha256-VUW6KvoSatk1/hlzklMQYlSNVH/tdbH+yU4ONUQ0JSQ=";
};
npmDepsHash = "sha256-tvTkoxxioPuNoe8KIuXSP7QQbvcpxMnygsMmzKBQIY0=";
inherit nodejs;
onlyOffice = lib.optional withOnlyOffice (mkOnlyOffice {
inherit pname version;
});
makeCacheWritable = true;
dontFixup = true;
postPatch = ''
cp -T ${./package-lock.json} package-lock.json
'';
preBuild = ''
npm run install:components
'' + lib.optionalString withOnlyOffice ''
ln -s $onlyOffice www/common/onlyoffice/dist
'';
postBuild = ''
rm -rf customize
'';
installPhase = ''
runHook preInstall
mkdir -p $out
cp -R . $out/
substituteInPlace $out/lib/workers/index.js \
--replace-warn "lib/workers/db-worker" "$out/lib/workers/db-worker"
makeWrapper ${lib.getExe nodejs} $out/bin/cryptpad-server \
--chdir $out \
--add-flags server.js
runHook postInstall
'';
meta = {
homepage = "https://cryptpad.org";
mainProgram = "cryptpad-server";
};
}


@ -0,0 +1,12 @@
{ name ? "deuxfleurs/cryptpad"
, tag ? "nix-latest"
}: let
sources = import ./nix/sources.nix;
pkgs = import sources.nixpkgs {};
in rec {
cryptpad = pkgs.callPackage ./default.nix {};
docker = pkgs.callPackage ./docker.nix {
inherit name tag;
inherit cryptpad;
};
}


@ -0,0 +1,27 @@
{ pkgs ? import <nixpkgs> {}
, name ? "cryptpad"
, tag ? "nix-latest"
, withOnlyOffice ? true
, cryptpad ? pkgs.callPackage ./default.nix { inherit withOnlyOffice; }
}: let
cryptpad' = cryptpad.overrideAttrs {
postInstall = ''
ln -sf /cryptpad/customize $out/customize
'';
};
in pkgs.dockerTools.buildImage {
inherit name tag;
config = {
Cmd = [
(pkgs.lib.getExe cryptpad')
];
Volumes = {
"/cryptpad/customize" = {};
};
};
}


@ -0,0 +1,14 @@
{
"nixpkgs": {
"branch": "nixos-23.11",
"description": "Nix Packages collection",
"homepage": null,
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "53a2c32bc66f5ae41a28d7a9a49d321172af621e",
"sha256": "0yqbwqbripb1bbhlwjfbqmg9qb0lai2fc0k1vfh674d6rrc8igwv",
"type": "tarball",
"url": "https://github.com/NixOS/nixpkgs/archive/53a2c32bc66f5ae41a28d7a9a49d321172af621e.tar.gz",
"url_template": "https://github.com/<owner>/<repo>/archive/<rev>.tar.gz"
}
}


@ -0,0 +1,198 @@
# This file has been generated by Niv.
let
#
# The fetchers. fetch_<type> fetches specs of type <type>.
#
fetch_file = pkgs: name: spec:
let
name' = sanitizeName name + "-src";
in
if spec.builtin or true then
builtins_fetchurl { inherit (spec) url sha256; name = name'; }
else
pkgs.fetchurl { inherit (spec) url sha256; name = name'; };
fetch_tarball = pkgs: name: spec:
let
name' = sanitizeName name + "-src";
in
if spec.builtin or true then
builtins_fetchTarball { name = name'; inherit (spec) url sha256; }
else
pkgs.fetchzip { name = name'; inherit (spec) url sha256; };
fetch_git = name: spec:
let
ref =
spec.ref or (
if spec ? branch then "refs/heads/${spec.branch}" else
if spec ? tag then "refs/tags/${spec.tag}" else
abort "In git source '${name}': Please specify `ref`, `tag` or `branch`!"
);
submodules = spec.submodules or false;
submoduleArg =
let
nixSupportsSubmodules = builtins.compareVersions builtins.nixVersion "2.4" >= 0;
emptyArgWithWarning =
if submodules
then
builtins.trace
(
"The niv input \"${name}\" uses submodules "
+ "but your nix's (${builtins.nixVersion}) builtins.fetchGit "
+ "does not support them"
)
{ }
else { };
in
if nixSupportsSubmodules
then { inherit submodules; }
else emptyArgWithWarning;
in
builtins.fetchGit
({ url = spec.repo; inherit (spec) rev; inherit ref; } // submoduleArg);
fetch_local = spec: spec.path;
fetch_builtin-tarball = name: throw
''[${name}] The niv type "builtin-tarball" is deprecated. You should instead use `builtin = true`.
$ niv modify ${name} -a type=tarball -a builtin=true'';
fetch_builtin-url = name: throw
''[${name}] The niv type "builtin-url" will soon be deprecated. You should instead use `builtin = true`.
$ niv modify ${name} -a type=file -a builtin=true'';
#
# Various helpers
#
# https://github.com/NixOS/nixpkgs/pull/83241/files#diff-c6f540a4f3bfa4b0e8b6bafd4cd54e8bR695
sanitizeName = name:
(
concatMapStrings (s: if builtins.isList s then "-" else s)
(
builtins.split "[^[:alnum:]+._?=-]+"
((x: builtins.elemAt (builtins.match "\\.*(.*)" x) 0) name)
)
);
# The set of packages used when specs are fetched using non-builtins.
mkPkgs = sources: system:
let
sourcesNixpkgs =
import (builtins_fetchTarball { inherit (sources.nixpkgs) url sha256; }) { inherit system; };
hasNixpkgsPath = builtins.any (x: x.prefix == "nixpkgs") builtins.nixPath;
hasThisAsNixpkgsPath = <nixpkgs> == ./.;
in
if builtins.hasAttr "nixpkgs" sources
then sourcesNixpkgs
else if hasNixpkgsPath && ! hasThisAsNixpkgsPath then
import <nixpkgs> { }
else
abort
''
Please specify either <nixpkgs> (through -I or NIX_PATH=nixpkgs=...) or
add a package called "nixpkgs" to your sources.json.
'';
# The actual fetching function.
fetch = pkgs: name: spec:
if ! builtins.hasAttr "type" spec then
abort "ERROR: niv spec ${name} does not have a 'type' attribute"
else if spec.type == "file" then fetch_file pkgs name spec
else if spec.type == "tarball" then fetch_tarball pkgs name spec
else if spec.type == "git" then fetch_git name spec
else if spec.type == "local" then fetch_local spec
else if spec.type == "builtin-tarball" then fetch_builtin-tarball name
else if spec.type == "builtin-url" then fetch_builtin-url name
else
abort "ERROR: niv spec ${name} has unknown type ${builtins.toJSON spec.type}";
# If the environment variable NIV_OVERRIDE_${name} is set, then use
# the path directly as opposed to the fetched source.
replace = name: drv:
let
saneName = stringAsChars (c: if (builtins.match "[a-zA-Z0-9]" c) == null then "_" else c) name;
ersatz = builtins.getEnv "NIV_OVERRIDE_${saneName}";
in
if ersatz == "" then drv else
# this turns the string into an actual Nix path (for both absolute and
# relative paths)
if builtins.substring 0 1 ersatz == "/" then /. + ersatz else /. + builtins.getEnv "PWD" + "/${ersatz}";
# Ports of functions for older nix versions
# a Nix version of mapAttrs if the built-in doesn't exist
mapAttrs = builtins.mapAttrs or (
f: set: with builtins;
listToAttrs (map (attr: { name = attr; value = f attr set.${attr}; }) (attrNames set))
);
# https://github.com/NixOS/nixpkgs/blob/0258808f5744ca980b9a1f24fe0b1e6f0fecee9c/lib/lists.nix#L295
range = first: last: if first > last then [ ] else builtins.genList (n: first + n) (last - first + 1);
# https://github.com/NixOS/nixpkgs/blob/0258808f5744ca980b9a1f24fe0b1e6f0fecee9c/lib/strings.nix#L257
stringToCharacters = s: map (p: builtins.substring p 1 s) (range 0 (builtins.stringLength s - 1));
# https://github.com/NixOS/nixpkgs/blob/0258808f5744ca980b9a1f24fe0b1e6f0fecee9c/lib/strings.nix#L269
stringAsChars = f: s: concatStrings (map f (stringToCharacters s));
concatMapStrings = f: list: concatStrings (map f list);
concatStrings = builtins.concatStringsSep "";
# https://github.com/NixOS/nixpkgs/blob/8a9f58a375c401b96da862d969f66429def1d118/lib/attrsets.nix#L331
optionalAttrs = cond: as: if cond then as else { };
# fetchTarball version that is compatible between all the versions of Nix
builtins_fetchTarball = { url, name ? null, sha256 }@attrs:
let
inherit (builtins) lessThan nixVersion fetchTarball;
in
if lessThan nixVersion "1.12" then
fetchTarball ({ inherit url; } // (optionalAttrs (name != null) { inherit name; }))
else
fetchTarball attrs;
# fetchurl version that is compatible between all the versions of Nix
builtins_fetchurl = { url, name ? null, sha256 }@attrs:
let
inherit (builtins) lessThan nixVersion fetchurl;
in
if lessThan nixVersion "1.12" then
fetchurl ({ inherit url; } // (optionalAttrs (name != null) { inherit name; }))
else
fetchurl attrs;
# Create the final "sources" from the config
mkSources = config:
mapAttrs
(
name: spec:
if builtins.hasAttr "outPath" spec
then
abort
"The values in sources.json should not have an 'outPath' attribute"
else
spec // { outPath = replace name (fetch config.pkgs name spec); }
)
config.sources;
# The "config" used by the fetchers
mkConfig =
{ sourcesFile ? if builtins.pathExists ./sources.json then ./sources.json else null
, sources ? if sourcesFile == null then { } else builtins.fromJSON (builtins.readFile sourcesFile)
, system ? builtins.currentSystem
, pkgs ? mkPkgs sources system
}: {
# The sources, i.e. the attribute set of spec name to spec
inherit sources;
# The "pkgs" (evaluated nixpkgs) to use for e.g. non-builtin fetchers
inherit pkgs;
};
in
mkSources (mkConfig { }) // { __functor = _: settings: mkSources (mkConfig settings); }
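The `builtins_fetchTarball`/`builtins_fetchurl` wrappers above pass `name` to the builtin only when it was actually given, by merging in an attribute set built with `optionalAttrs`. A minimal Python sketch of that optional-argument pattern (hypothetical names, for illustration only — the real logic is the Nix code above):

```python
def optional_attrs(cond, attrs):
    # Mirror of the Nix helper: yield the attrs only when cond holds.
    return attrs if cond else {}

def fetch_tarball_args(url, name=None):
    # Build the argument set the way builtins_fetchTarball does on old Nix:
    # start from the mandatory url, merge in name only if it was provided.
    args = {"url": url}
    args.update(optional_attrs(name is not None, {"name": name}))
    return args
```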

File diff suppressed because it is too large

@@ -0,0 +1,40 @@
/*
* You can override the configurable values from this file.
* The recommended method is to make a copy of this file (/customize.dist/application_config.js)
in a 'customize' directory (/customize/application_config.js).
* If you want to check all the configurable values, you can open the internal configuration file
but you should not change it directly (/common/application_config_internal.js)
*/
define(['/common/application_config_internal.js'], function (AppConfig) {
// To inform users of the support ticket panel which languages your admins speak:
AppConfig.supportLanguages = [ 'en', 'fr' ];
/* Select the buttons displayed on the main page to create new collaborative sessions.
* Removing apps from the list will prevent users from accessing them. They will instead be
* redirected to the drive.
* You should never remove the drive from this list.
*/
AppConfig.availablePadTypes = ['drive', 'teams', 'doc', 'presentation', 'pad', 'kanban', 'code', 'form', 'poll', 'whiteboard',
'file', 'contacts', 'slide', 'convert'];
// disabled: sheet
/* You can display a link to your own privacy policy in the static pages footer.
* Since this is different for each individual or organization there is no default value.
* See the comments above for a description of possible configurations.
*/
AppConfig.privacy = {
"default": "https://deuxfleurs.fr/CGU.html",
};
/* You can display a link to your instance's terms of service in the static pages footer.
* A default is included for backwards compatibility, but we recommend replacing this
* with your own terms.
*
* See the comments above for a description of possible configurations.
*/
AppConfig.terms = {
"default": "https://deuxfleurs.fr/CGU.html",
};
return AppConfig;
});

@@ -0,0 +1,285 @@
/* globals module */
/* DISCLAIMER:
There are two recommended methods of running a CryptPad instance:
1. Using a standalone nodejs server without HTTPS (suitable for local development)
2. Using NGINX to serve static assets and to handle HTTPS for API server's websocket traffic
We do not officially recommend or support Apache, Docker, Kubernetes, Traefik, or any other configuration.
Support requests for such setups should be directed to their authors.
If you're having difficulty configuring your instance
we suggest that you join the project's IRC/Matrix channel.
If you don't have any difficulty configuring your instance and you'd like to
support us for the work that went into making it pain-free we are quite happy
to accept donations via our opencollective page: https://opencollective.com/cryptpad
*/
module.exports = {
/* CryptPad is designed to serve its content over two domains.
* Account passwords and cryptographic content are handled on the 'main' domain,
* while the user interface is loaded on a 'sandbox' domain
* which can only access information which the main domain willingly shares.
*
* In the event of an XSS vulnerability in the UI (that's bad)
* this system prevents attackers from gaining access to your account (that's good).
*
* Most problems with new instances are related to this system blocking access
* because of incorrectly configured sandboxes. If you only see a white screen
* when you try to load CryptPad, this is probably the cause.
*
* PLEASE READ THE FOLLOWING COMMENTS CAREFULLY.
*
*/
/* httpUnsafeOrigin is the URL that clients will enter to load your instance.
* Any other URL that somehow points to your instance is supposed to be blocked.
* The default provided below assumes you are loading CryptPad from a server
* which is running on the same machine, using port 3000.
*
* In a production instance this should be available ONLY over HTTPS
* using the default port for HTTPS (443), i.e. https://cryptpad.fr
* In such a case this should be also handled by NGINX, as documented in
* cryptpad/docs/example.nginx.conf (see the $main_domain variable)
*
*/
httpUnsafeOrigin: 'https://pad.deuxfleurs.fr',
/* httpSafeOrigin is the URL that is used for the 'sandbox' described above.
* If you're testing or developing with CryptPad on your local machine then
* it is appropriate to leave this blank. The default behaviour is to serve
* the main domain over port 3000 and to serve the sandbox content over port 3001.
*
* This is not appropriate in a production environment where invasive networks
* may filter traffic going over abnormal ports.
* To correctly configure your production instance you must provide a URL
* with a different domain (a subdomain is sufficient).
* It will be used to load the UI in our 'sandbox' system.
*
* This value corresponds to the $sandbox_domain variable
* in the example nginx file.
*
* Note that in order for the sandboxing system to be effective
* httpSafeOrigin must be different from httpUnsafeOrigin.
*
* CUSTOMIZE AND UNCOMMENT THIS FOR PRODUCTION INSTALLATIONS.
*/
httpSafeOrigin: "https://pad-sandbox.deuxfleurs.fr",
/* httpAddress specifies the address on which the nodejs server
* should be accessible. By default it will listen on 127.0.0.1
* (IPv4 localhost on most systems). If you want it to listen on
* all addresses, including IPv6, set this to '::'.
*
*/
httpAddress: '::',
/* httpPort specifies on which port the nodejs server should listen.
* By default it will serve content over port 3000, which is suitable
* for both local development and for use with the provided nginx example,
* which will proxy websocket traffic to your node server.
*
*/
httpPort: 3000,
/* httpSafePort allows you to specify an alternative port from which
* the node process should serve sandboxed assets. The default value is
* that of your httpPort + 1. You probably don't need to change this.
*
*/
// httpSafePort: 3001,
/* CryptPad will launch a child process for every core available
* in order to perform CPU-intensive tasks in parallel.
* Some host environments may have a very large number of cores available
* or you may want to limit how much computing power CryptPad can take.
* If so, set 'maxWorkers' to a positive integer.
*/
// maxWorkers: 4,
/* =====================
* Admin
* ===================== */
/*
* CryptPad contains an administration panel. Its access is restricted to specific
* users using the following list.
* To give access to the admin panel to a user account, just add their public signing
* key, which can be found on the settings page for registered users.
* Entries should be strings separated by a comma.
*/
adminKeys: [
"[quentin@pad.deuxfleurs.fr/EWtzm-CiqJnM9RZL9mj-YyTgAtX-Zh76sru1K5bFpN8=]",
"[adrn@pad.deuxfleurs.fr/PxDpkPwd-jDJWkfWdAzFX7wtnLpnPlBeYZ4MmoEYS6E=]",
"[lx@pad.deuxfleurs.fr/FwQzcXywx1FIb83z6COB7c3sHnz8rNSDX1xhjPuH3Fg=]",
"[trinity-1686a@pad.deuxfleurs.fr/Pu6Ef03jEsAGBbZI6IOdKd6+5pORD5N51QIYt4-Ys1c=]",
"[Jill@pad.deuxfleurs.fr/tLW7W8EVNB2KYETXEaOYR+HmNiBQtZj7u+SOxS3hGmg=]"
],
/* =====================
* STORAGE
* ===================== */
/* Pads that are not 'pinned' by any registered user can be set to expire
* after a configurable number of days of inactivity (default 90 days).
* The value can be changed or set to false to remove expiration.
* Expired pads can then be removed using a cron job calling the
* `evict-inactive.js` script with node
*
* defaults to 90 days if nothing is provided
*/
//inactiveTime: 90, // days
/* CryptPad archives some data instead of deleting it outright.
* This archived data still takes up space and so you'll probably still want to
* remove these files after a brief period.
*
* cryptpad/scripts/evict-inactive.js is intended to be run daily
* from a crontab or similar scheduling service.
*
* The intent with this feature is to provide a safety net in case of accidental
* deletion. Set this value to the number of days you'd like to retain
* archived data before it's removed permanently.
*
* defaults to 15 days if nothing is provided
*/
//archiveRetentionTime: 15,
/* It's possible to configure your instance to remove data
* stored on behalf of inactive accounts. Set 'accountRetentionTime'
* to the number of days an account can remain idle before its
* documents and other account data is removed.
*
* Leave this value commented out to preserve all data stored
* by user accounts regardless of inactivity.
*/
//accountRetentionTime: 365,
/* Starting with CryptPad 3.23.0, the server automatically runs
* the script responsible for removing inactive data according to
* your configured definition of inactivity. Set this value to `true`
* if you prefer not to remove inactive data, or if you prefer to
* do so manually using `scripts/evict-inactive.js`.
*/
//disableIntegratedEviction: true,
/* Max Upload Size (bytes)
* this sets the maximum size of any one file uploaded to the server.
* anything larger than this size will be rejected
* defaults to 20MB if no value is provided
*/
//maxUploadSize: 20 * 1024 * 1024,
/* Users with premium accounts (those with a plan included in their customLimit)
* can benefit from an increased upload size limit. By default they are restricted to the same
* upload size as any other registered user.
*
*/
//premiumUploadSize: 100 * 1024 * 1024,
/* =====================
* DATABASE VOLUMES
* ===================== */
/*
* CryptPad stores each document in an individual file on your hard drive.
* Specify a directory where files should be stored.
* It will be created automatically if it does not already exist.
*/
filePath: '/mnt/datastore/',
/* CryptPad offers the ability to archive data for a configurable period
* before deleting it, allowing a means of recovering data in the event
* that it was deleted accidentally.
*
* To set the location of this archive directory to a custom value, change
* the path below:
*/
archivePath: '/mnt/data/archive',
/* CryptPad allows logged in users to request that particular documents be
* stored by the server indefinitely. This is called 'pinning'.
* Pin requests are stored in a pin-store. The location of this store is
* defined here.
*/
pinPath: '/mnt/data/pins',
/* if you would like the list of scheduled tasks to be stored in
a custom location, change the path below:
*/
taskPath: '/mnt/data/tasks',
/* if you would like users' authenticated blocks to be stored in
a custom location, change the path below:
*/
blockPath: '/mnt/block',
/* CryptPad allows logged in users to upload encrypted files. Files/blobs
* are stored in a 'blob-store'. Set its location here.
*/
blobPath: '/mnt/blob',
/* CryptPad stores incomplete blobs in a 'staging' area until they are
* fully uploaded. Set its location here.
*/
blobStagingPath: '/mnt/data/blobstage',
decreePath: '/mnt/data/decrees',
/* CryptPad supports logging events directly to the disk in a 'logs' directory
* Set its location here, or set it to false (or nothing) if you'd rather not log
*/
logPath: false,
/* =====================
* Debugging
* ===================== */
/* CryptPad can log activity to stdout
* This may be useful for debugging
*/
logToStdout: true,
/* CryptPad can be configured to log more or less
* the various settings are listed below by order of importance
*
* silly, verbose, debug, feedback, info, warn, error
*
* Choose the least important level of logging you wish to see.
* For example, a 'silly' logLevel will display everything,
* while 'info' will display 'info', 'warn', and 'error' logs
*
* This will affect both logging to the console and the disk.
*/
logLevel: 'silly',
/* clients can use the /settings/ app to opt out of usage feedback
* which informs the server of things like how much each app is being
* used, and whether certain clientside features are supported by
* the client's browser. The intent is to provide feedback to the admin
* such that the service can be improved. Enable this with `true`
* and ignore feedback with `false` or by commenting the attribute
*
* You will need to set your logLevel to include 'feedback'. Set this
* to false if you'd like to exclude feedback from your logs.
*/
logFeedback: false,
/* CryptPad supports verbose logging
* (false by default)
*/
verbose: true,
/* Surplus information:
*
* 'installMethod' is included in server telemetry to voluntarily
* indicate how many instances are using unofficial installation methods
* such as Docker.
*
*/
installMethod: 'deuxfleurs.fr',
};
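The long comments in this config.js stress that the sandboxing only works if `httpSafeOrigin` is a different origin from `httpUnsafeOrigin`, and that production instances should serve both over HTTPS. A small Python sketch of that sanity check (the `check_origins` helper is hypothetical, for illustration):

```python
from urllib.parse import urlsplit

def check_origins(unsafe, safe):
    # Both origins must be HTTPS in production, and the sandbox must be a
    # different origin from the main domain for the isolation to be effective.
    u, s = urlsplit(unsafe), urlsplit(safe)
    assert u.scheme == "https" and s.scheme == "https", "serve both over HTTPS"
    assert u.netloc != s.netloc, \
        "httpSafeOrigin must be a different origin than httpUnsafeOrigin"
    return True
```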

@@ -0,0 +1,78 @@
job "cryptpad" {
datacenters = ["scorpio"]
type = "service"
group "cryptpad" {
count = 1
network {
port "http" {
to = 3000
}
}
restart {
attempts = 10
delay = "30s"
}
task "main" {
driver = "docker"
constraint {
attribute = "${attr.unique.hostname}"
operator = "="
value = "abricot"
}
config {
image = "kokakiwi/cryptpad:2024.3.0"
ports = [ "http" ]
volumes = [
"/mnt/ssd/cryptpad:/mnt",
"secrets/config.js:/cryptpad/config.js",
]
}
env {
CRYPTPAD_CONFIG = "/cryptpad/config.js"
}
template {
data = file("../config/config.js")
destination = "secrets/config.js"
}
/* Disabled because it requires modifications to the docker image and I do not want to invest the time yet
template {
data = file("../config/application_config.js")
destination = "secrets/config.js"
}
*/
resources {
memory = 1000
cpu = 500
}
service {
name = "cryptpad"
port = "http"
tags = [
"tricot pad.deuxfleurs.fr",
"tricot pad-sandbox.deuxfleurs.fr",
"tricot-add-header Cross-Origin-Resource-Policy cross-origin",
"tricot-add-header Cross-Origin-Embedder-Policy require-corp",
"d53-cname pad.deuxfleurs.fr",
"d53-cname pad-sandbox.deuxfleurs.fr",
]
check {
type = "http"
path = "/"
interval = "10s"
timeout = "2s"
}
}
}
}
}

@@ -0,0 +1,20 @@
FROM golang:1.19.3-buster as builder
ARG VERSION
ENV CGO_ENABLED=0 GOOS=linux GOARCH=amd64
WORKDIR /tmp/alps
RUN git init && \
git remote add origin https://git.deuxfleurs.fr/Deuxfleurs/alps.git && \
git fetch --depth 1 origin ${VERSION} && \
git checkout FETCH_HEAD
RUN go build -a -o /usr/local/bin/alps ./cmd/alps
FROM scratch
COPY --from=builder /usr/local/bin/alps /alps
COPY --from=builder /tmp/alps/themes /themes
COPY --from=builder /tmp/alps/plugins /plugins
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
ENTRYPOINT ["/alps"]

@@ -0,0 +1,36 @@
version: '3.4'
services:
# Email
sogo:
build:
context: ./sogo
args:
# fake for now
VERSION: 5.0.0
image: superboum/amd64_sogo:v7
alps:
build:
context: ./alps
args:
VERSION: bf9ccc6ed17e8b50a230e9f5809d820e9de8562f
image: lxpz/amd64_alps:v4
dovecot:
build:
context: ./dovecot
image: superboum/amd64_dovecot:v6
postfix:
build:
context: ./postfix
args:
# https://packages.debian.org/fr/trixie/postfix
VERSION: 3.8.4-1
image: superboum/amd64_postfix:v4
opendkim:
build:
context: ./opendkim
image: superboum/amd64_opendkim:v6

@@ -0,0 +1 @@
dovecot-ldap.conf

@@ -0,0 +1,16 @@
FROM amd64/debian:bullseye
RUN apt-get update && \
apt-get install -y \
dovecot-antispam \
dovecot-core \
dovecot-imapd \
dovecot-ldap \
dovecot-managesieved \
dovecot-sieve \
dovecot-lmtpd && \
rm -rf /etc/dovecot/*
RUN useradd mailstore
COPY entrypoint.sh /usr/local/bin/entrypoint
ENTRYPOINT ["/usr/local/bin/entrypoint"]

@@ -0,0 +1,18 @@
```
sudo docker build -t superboum/amd64_dovecot:v2 .
```
```
sudo docker run -t -i \
-e TLSINFO="/C=FR/ST=Bretagne/L=Rennes/O=Deuxfleurs/CN=www.deuxfleurs.fr" \
-p 993:993 \
-p 143:143 \
-p 24:24 \
-p 1337:1337 \
-v /mnt/glusterfs/email/ssl:/etc/ssl/ \
-v /mnt/glusterfs/email/mail:/var/mail \
-v `pwd`/dovecot-ldap.conf:/etc/dovecot/dovecot-ldap.conf \
superboum/amd64_dovecot:v1 \
dovecot -F
```

@@ -0,0 +1,27 @@
#!/bin/bash
if [[ ! -f /etc/ssl/certs/dovecot.crt || ! -f /etc/ssl/private/dovecot.key ]]; then
cd /root
openssl req \
-new \
-newkey rsa:4096 \
-days 3650 \
-nodes \
-x509 \
-subj "${TLSINFO}" \
-keyout dovecot.key \
-out dovecot.crt
mkdir -p /etc/ssl/{certs,private}/
cp dovecot.crt /etc/ssl/certs/dovecot.crt
cp dovecot.key /etc/ssl/private/dovecot.key
chmod 400 /etc/ssl/certs/dovecot.crt
chmod 400 /etc/ssl/private/dovecot.key
fi
if [[ $(stat -c '%U' /var/mail/) != "mailstore" ]]; then
chown -R mailstore /var/mail
fi
exec "$@"

@@ -0,0 +1,5 @@
require ["fileinto", "mailbox"];
if header :contains "X-Spam-Flag" "YES" {
fileinto :create "Junk";
}
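The sieve rule above files any message whose `X-Spam-Flag` header contains "YES" into the Junk mailbox. A rough Python equivalent of that decision (hypothetical `target_folder` helper, for illustration):

```python
from email.message import EmailMessage

def target_folder(msg):
    # Mirrors the sieve rule: ':contains "YES"' on X-Spam-Flag files the
    # message into Junk; everything else is delivered normally.
    flag = msg.get("X-Spam-Flag", "")
    return "Junk" if "YES" in flag else "INBOX"
```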

@@ -0,0 +1,8 @@
hosts = ldap.example.com
dn = cn=admin,dc=example,dc=com
dnpass = s3cr3t
base = dc=example,dc=com
scope = subtree
user_filter = (&(mail=%u)(&(objectClass=inetOrgPerson)(memberOf=cn=email,ou=groups,dc=example,dc=com)))
pass_filter = (&(mail=%u)(&(objectClass=inetOrgPerson)(memberOf=cn=email,ou=groups,dc=example,dc=com)))
user_attrs = mail=/var/mail/%{ldap:mail}

@@ -0,0 +1,17 @@
require ["vnd.dovecot.pipe", "copy", "imapsieve", "environment", "variables", "vnd.dovecot.debug"];
if environment :matches "imap.mailbox" "*" {
set "mailbox" "${1}";
}
if string "${mailbox}" "Trash" {
stop;
}
if environment :matches "imap.user" "*" {
set "username" "${1}";
}
pipe :copy "sa-learn" [ "--ham", "-u", "debian-spamd" ];
debug_log "ham reported by ${username}";

@@ -0,0 +1,9 @@
require ["vnd.dovecot.pipe", "copy", "imapsieve", "environment", "variables", "vnd.dovecot.debug"];
if environment :matches "imap.user" "*" {
set "username" "${1}";
}
pipe :copy "sa-learn" [ "--spam", "-u", "debian-spamd"];
debug_log "spam reported by ${username}";

@@ -0,0 +1,9 @@
FROM amd64/debian:bullseye
RUN apt-get update && \
apt-get dist-upgrade -y && \
apt-get install -y opendkim opendkim-tools
COPY ./opendkim.conf /etc/opendkim.conf
COPY ./entrypoint /entrypoint
CMD ["/entrypoint"]

@@ -0,0 +1,12 @@
```
sudo docker build -t superboum/amd64_opendkim:v1 .
```
```
sudo docker run -t -i \
-v `pwd`/conf:/etc/dkim \
-v /dev/log:/dev/log \
-p 8999:8999 \
superboum/amd64_opendkim:v1 \
opendkim -f -v -x /etc/opendkim.conf
```

@@ -0,0 +1,8 @@
#!/bin/bash
chown 0:0 /etc/dkim/*
chown 0:0 /etc/dkim
chmod 400 /etc/dkim/*
chmod 700 /etc/dkim
opendkim -f -v -x /etc/opendkim.conf

@@ -0,0 +1,12 @@
Syslog yes
SyslogSuccess yes
LogWhy yes
UMask 007
Mode sv
OversignHeaders From
TrustAnchorFile /usr/share/dns/root.key
KeyTable refile:/etc/dkim/keytable
SigningTable refile:/etc/dkim/signingtable
ExternalIgnoreList refile:/etc/dkim/trusted
InternalHosts refile:/etc/dkim/trusted
Socket inet:8999

@@ -0,0 +1,13 @@
FROM amd64/debian:trixie
ARG VERSION
RUN apt-get update && \
apt-get install -y \
postfix=$VERSION \
postfix-ldap
COPY entrypoint.sh /usr/local/bin/entrypoint
ENTRYPOINT ["/usr/local/bin/entrypoint"]
CMD ["postfix", "start-fg"]

@@ -0,0 +1,18 @@
```
sudo docker build -t superboum/amd64_postfix:v1 .
```
```
sudo docker run -t -i \
-e TLSINFO="/C=FR/ST=Bretagne/L=Rennes/O=Deuxfleurs/CN=smtp.deuxfleurs.fr" \
-e MAILNAME="smtp.deuxfleurs.fr" \
-p 25:25 \
-p 465:465 \
-p 587:587 \
-v `pwd`/../../ansible/roles/container_conf/files/email/postfix-conf:/etc/postfix-conf \
-v /mnt/glusterfs/email/postfix-ssl/private:/etc/ssl/private \
-v /mnt/glusterfs/email/postfix-ssl/certs:/etc/ssl/certs \
superboum/amd64_postfix:v1 \
bash
```

@@ -0,0 +1,31 @@
#!/bin/bash
if [[ ! -f /etc/ssl/certs/postfix.crt || ! -f /etc/ssl/private/postfix.key ]]; then
cd /root
openssl req \
-new \
-newkey rsa:4096 \
-days 3650 \
-nodes \
-x509 \
-subj "${TLSINFO}" \
-keyout postfix.key \
-out postfix.crt
mkdir -p /etc/ssl/{certs,private}/
cp postfix.crt /etc/ssl/certs/postfix.crt
cp postfix.key /etc/ssl/private/postfix.key
chmod 400 /etc/ssl/certs/postfix.crt
chmod 400 /etc/ssl/private/postfix.key
fi
# Copy the mounted configuration files into the live postfix directory
for file in /etc/postfix-conf/*; do
cp "${file}" "/etc/postfix/$(basename "${file}")"
done
echo ${MAILNAME} > /etc/mailname
postmap /etc/postfix/transport
exec "$@"

@@ -0,0 +1,17 @@
#FROM amd64/debian:stretch as builder
FROM amd64/debian:buster
RUN mkdir ~/.gnupg && echo "disable-ipv6" >> ~/.gnupg/dirmngr.conf
RUN apt-get update && \
apt-get install -y apt-transport-https gnupg2 sudo nginx && \
rm -rf /etc/nginx/sites-enabled/* && \
apt-key adv --keyserver keys.gnupg.net --recv-key 0x810273C4 && \
echo "deb http://packages.inverse.ca/SOGo/nightly/5/debian/ buster buster" > /etc/apt/sources.list.d/sogo.list && \
apt-get update && \
apt-get install -y sogo sogo-activesync sope4.9-gdl1-postgresql postgresql-client
COPY sogo.nginx.conf /etc/nginx/sites-enabled/sogo.conf
COPY entrypoint /usr/sbin/entrypoint
ENTRYPOINT ["/usr/sbin/entrypoint"]

@@ -0,0 +1,20 @@
```
docker build -t superboum/amd64_sogo:v6 .
# privileged is only for debug
docker run --rm -ti \
--privileged \
-p 8080:8080 \
-v /tmp/sogo/log:/var/log/sogo \
-v /tmp/sogo/run:/var/run/sogo \
-v /tmp/sogo/spool:/var/spool/sogo \
-v /tmp/sogo/tmp:/tmp \
-v `pwd`/sogo:/etc/sogo:ro \
superboum/amd64_sogo:v1
```
The Postgres password must be URL-encoded in sogo.conf.
You will need an nginx instance: http://wiki.sogo.nu/nginxSettings
Might (or might not) be needed:
traefik.frontend.headers.customRequestHeaders=x-webobjects-server-port:443||x-webobjects-server-name=sogo.deuxfleurs.fr||x-webobjects-server-url:https://sogo.deuxfleurs.fr

@@ -0,0 +1,13 @@
#!/bin/bash
mkdir -p /var/log/sogo
mkdir -p /var/run/sogo
mkdir -p /var/spool/sogo
chown sogo /var/log/sogo
chown sogo /var/run/sogo
chown sogo /var/spool/sogo
nginx -g 'daemon on; master_process on;'
sudo -u sogo memcached -d
sudo -u sogo sogod
sleep 10
tail -n200 -f /var/log/sogo/sogo.log

@@ -0,0 +1,83 @@
server {
listen 8080;
server_name default_server;
root /usr/lib/GNUstep/SOGo/WebServerResources/;
## requirement to create new calendars in Thunderbird ##
proxy_http_version 1.1;
# Message size limit
client_max_body_size 50m;
client_body_buffer_size 128k;
location = / {
rewrite ^ '/SOGo';
allow all;
}
location = /principals/ {
rewrite ^ '/SOGo/dav';
allow all;
}
location ^~/SOGo {
proxy_pass 'http://127.0.0.1:20000';
proxy_redirect 'http://127.0.0.1:20000' default;
# forward user's IP address
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $host;
proxy_set_header x-webobjects-server-protocol HTTP/1.0;
proxy_set_header x-webobjects-remote-host 127.0.0.1;
proxy_set_header x-webobjects-server-name $server_name;
proxy_set_header x-webobjects-server-url $scheme://$host;
proxy_set_header x-webobjects-server-port $server_port;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;
break;
}
location /SOGo.woa/WebServerResources/ {
alias /usr/lib/GNUstep/SOGo/WebServerResources/;
allow all;
expires max;
}
location /SOGo/WebServerResources/ {
alias /usr/lib/GNUstep/SOGo/WebServerResources/;
allow all;
expires max;
}
location ~ ^/SOGo/so/ControlPanel/Products/([^/]*)/Resources/(.*)$ {
alias /usr/lib/GNUstep/SOGo/$1.SOGo/Resources/$2;
expires max;
}
location ~ ^/SOGo/so/ControlPanel/Products/([^/]*UI)/Resources/(.*\.(jpg|png|gif|css|js))$ {
alias /usr/lib/GNUstep/SOGo/$1.SOGo/Resources/$2;
expires max;
}
location ^~ /Microsoft-Server-ActiveSync {
access_log /var/log/nginx/activesync.log;
error_log /var/log/nginx/activesync-error.log;
proxy_connect_timeout 75;
proxy_send_timeout 3600;
proxy_read_timeout 3600;
proxy_buffers 64 256k;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://127.0.0.1:20000/SOGo/Microsoft-Server-ActiveSync;
proxy_redirect http://127.0.0.1:20000/SOGo/Microsoft-Server-ActiveSync /;
}
}

@@ -0,0 +1 @@
smtp._domainkey.deuxfleurs.fr deuxfleurs.fr:smtp:/etc/dkim/smtp.private

@@ -0,0 +1,9 @@
*@deuxfleurs.fr smtp._domainkey.deuxfleurs.fr
*@dufour.io smtp._domainkey.deuxfleurs.fr
*@luxeylab.net smtp._domainkey.deuxfleurs.fr
*@estherbouquet.com smtp._domainkey.deuxfleurs.fr
*@pointecouteau.com smtp._domainkey.deuxfleurs.fr
*@maycausesideeffects.com smtp._domainkey.deuxfleurs.fr
*@e-x-t-r-a-c-t.me smtp._domainkey.deuxfleurs.fr
*@courderec.re smtp._domainkey.deuxfleurs.fr
*@trinity.fr.eu.org smtp._domainkey.deuxfleurs.fr
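The signing table above maps sender addresses to a DKIM key via wildcard patterns (loaded with the `refile:` form in opendkim.conf; first match wins). A sketch of that lookup using shell-style matching (hypothetical `signing_key_for` helper, for illustration — OpenDKIM's own pattern matching is richer than `fnmatch`):

```python
from fnmatch import fnmatch

# A few entries from the signing table above: pattern -> key table entry.
SIGNING_TABLE = [
    ("*@deuxfleurs.fr", "smtp._domainkey.deuxfleurs.fr"),
    ("*@dufour.io", "smtp._domainkey.deuxfleurs.fr"),
]

def signing_key_for(sender):
    # The first pattern that matches the sender address selects the key.
    for pattern, key in SIGNING_TABLE:
        if fnmatch(sender, pattern):
            return key
    return None  # unmatched senders are not signed
```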

@@ -0,0 +1,4 @@
127.0.0.1
localhost
192.168.1.0/24
172.16.0.0/12

@@ -0,0 +1,12 @@
hosts = {{ env "meta.site" }}.bottin.service.prod.consul
dn = {{ key "secrets/email/dovecot/ldap_binddn" | trimSpace }}
dnpass = {{ key "secrets/email/dovecot/ldap_bindpwd" | trimSpace }}
base = dc=deuxfleurs,dc=fr
scope = subtree
user_filter = (&(mail=%u)(&(objectClass=inetOrgPerson)(memberOf=cn=email,ou=groups,dc=deuxfleurs,dc=fr)))
pass_filter = (&(mail=%u)(&(objectClass=inetOrgPerson)(memberOf=cn=email,ou=groups,dc=deuxfleurs,dc=fr)))
user_attrs = \
=user=%{ldap:cn}, \
=mail=maildir:/var/mail/%{ldap:cn}, \
=uid=1000, \
=gid=1000

@@ -0,0 +1,87 @@
auth_mechanisms = plain login
auth_username_format = %u
log_timestamp = "%Y-%m-%d %H:%M:%S "
mail_location = maildir:/var/mail/%u
mail_privileged_group = mail
log_path = /dev/stderr
info_log_path = /dev/stdout
debug_log_path = /dev/stdout
protocols = imap sieve lmtp
ssl_cert = </etc/ssl/certs/dovecot.crt
ssl_key = </etc/ssl/private/dovecot.key
service auth {
inet_listener {
port = 1337
}
}
service lmtp {
inet_listener lmtp {
address = 0.0.0.0
port = 24
}
}
# https://doc.dovecot.org/configuration_manual/authentication/ldap_authentication/
passdb {
args = /etc/dovecot/dovecot-ldap.conf
driver = ldap
}
userdb {
driver = prefetch
}
userdb {
args = /etc/dovecot/dovecot-ldap.conf
driver = ldap
}
service imap-login {
service_count = 0 # performance mode. set to 1 for secure mode
process_min_avail = 1
inet_listener imap {
port = 143
}
inet_listener imaps {
port = 993
}
}
protocol imap {
mail_plugins = $mail_plugins imap_sieve
}
protocol lda {
auth_socket_path = /var/run/dovecot/auth-master
info_log_path = /var/log/dovecot-deliver.log
log_path = /var/log/dovecot-deliver-errors.log
postmaster_address = postmaster@deuxfleurs.fr
mail_plugins = $mail_plugins sieve
}
plugin {
sieve = file:~/sieve;active=~/dovecot.sieve
sieve_before = /etc/dovecot/all_before.sieve
# antispam learn
sieve_plugins = sieve_imapsieve sieve_extprograms
sieve_global_extensions = +vnd.dovecot.pipe +vnd.dovecot.environment +vnd.dovecot.debug
sieve_pipe_bin_dir = /usr/bin
imapsieve_mailbox1_name = Junk
imapsieve_mailbox1_causes = COPY FLAG APPEND
imapsieve_mailbox1_before = file:/etc/dovecot/report-spam.sieve
imapsieve_mailbox2_name = *
imapsieve_mailbox2_from = Spam
imapsieve_mailbox2_causes = COPY APPEND
imapsieve_mailbox2_before = file:/etc/dovecot/report-ham.sieve
}

@@ -0,0 +1,9 @@
# Postfix dynamic maps configuration file.
#
# The first match found is the one that is used. Wildcards are not supported
# as of postfix 2.0.2
#
#type location of .so file open function (mkmap func)
#==== ================================ ============= ============
ldap postfix-ldap.so dict_ldap_open
sqlite postfix-sqlite.so dict_sqlite_open

@@ -0,0 +1,3 @@
/^Received:/ IGNORE
/^X-Originating-IP:/ IGNORE
/^X-Mailer:/ IGNORE
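The three patterns above strip privacy-sensitive headers before mail is relayed: Postfix applies each regexp to every header line and executes the action of the first match (regexp matching is case-insensitive by default). A quick sketch of that behaviour (hypothetical `filter_headers` helper, for illustration):

```python
import re

# The header_checks patterns above; IGNORE means the header line is dropped.
CHECKS = [re.compile(p, re.IGNORECASE)
          for p in (r"^Received:", r"^X-Originating-IP:", r"^X-Mailer:")]

def filter_headers(headers):
    # Keep only the header lines that no IGNORE pattern matches.
    return [line for line in headers
            if not any(rx.search(line) for rx in CHECKS)]
```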

@@ -0,0 +1,12 @@
bind = yes
bind_dn = {{ key "secrets/email/postfix/ldap_binddn" | trimSpace }}
bind_pw = {{ key "secrets/email/postfix/ldap_bindpwd" | trimSpace }}
version = 3
timeout = 20
start_tls = no
tls_require_cert = no
server_host = ldap://{{ env "meta.site" }}.bottin.service.prod.consul
scope = sub
search_base = ou=users,dc=deuxfleurs,dc=fr
query_filter = mail=%s
result_attribute = mail

@@ -0,0 +1,9 @@
server_host = {{ env "meta.site" }}.bottin.service.prod.consul
server_port = 389
search_base = dc=deuxfleurs,dc=fr
query_filter = (&(objectClass=inetOrgPerson)(memberOf=cn=%s,ou=mailing_lists,ou=groups,dc=deuxfleurs,dc=fr))
result_attribute = mail
bind = yes
bind_dn = {{ key "secrets/email/postfix/ldap_binddn" | trimSpace }}
bind_pw = {{ key "secrets/email/postfix/ldap_bindpwd" | trimSpace }}
version = 3

@@ -0,0 +1,12 @@
bind = yes
bind_dn = {{ key "secrets/email/postfix/ldap_binddn" | trimSpace }}
bind_pw = {{ key "secrets/email/postfix/ldap_bindpwd" | trimSpace }}
version = 3
timeout = 20
start_tls = no
tls_require_cert = no
server_host = ldap://{{ env "meta.site" }}.bottin.service.prod.consul
scope = sub
search_base = ou=domains,ou=groups,dc=deuxfleurs,dc=fr
query_filter = (&(objectclass=dNSDomain)(domain=%s))
result_attribute = domain

@@ -0,0 +1,110 @@
#===
# Base configuration
#===
myhostname = smtp.deuxfleurs.fr
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname
mydestination = smtp.deuxfleurs.fr
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 192.168.1.0/24
mailbox_size_limit = 0
recipient_delimiter = +
inet_protocols = all
inet_interfaces = all
message_size_limit = 204800000
smtpd_banner = $myhostname
biff = no
append_dot_mydomain = no
readme_directory = no
compatibility_level = 2
#===
# TLS parameters
#===
smtpd_tls_cert_file=/etc/ssl/postfix.crt
smtpd_tls_key_file=/etc/ssl/postfix.key
smtpd_tls_dh1024_param_file=auto
smtpd_use_tls=yes
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
#smtp_tls_policy_maps = hash:/etc/postfix/tls_policy
smtp_tls_security_level = may
#===
# Remove privacy related content from emails
#===
mime_header_checks = regexp:/etc/postfix/header_checks
header_checks = regexp:/etc/postfix/header_checks
#===
# Handle user authentication (handled by dovecot)
#===
smtpd_sasl_auth_enable = yes
smtpd_sasl_path = inet:dovecot-auth.service.prod.consul:1337
smtpd_sasl_type = dovecot
#===
# Restrictions / Checks
#===
# -- Inspired by: http://www.postfix.org/SMTPD_ACCESS_README.html#lists
# Require a valid HELO
smtpd_helo_required = yes
# As we use the same postfix to send and receive,
# we can't enforce a valid HELO hostname...
#smtpd_helo_restrictions =
# reject_unknown_helo_hostname
# Require that sender email has a valid domain
smtpd_sender_restrictions =
reject_unknown_sender_domain
# Delivering email policy
# MyNetwork is required by sogo
smtpd_recipient_restrictions =
permit_sasl_authenticated
permit_mynetworks
reject_unauth_destination
reject_rbl_client zen.spamhaus.org
reject_rhsbl_reverse_client dbl.spamhaus.org
reject_rhsbl_helo dbl.spamhaus.org
reject_rhsbl_sender dbl.spamhaus.org
# Sending email policy
# MyNetwork is required by sogo
smtpd_relay_restrictions =
permit_sasl_authenticated
permit_mynetworks
reject_unauth_destination
# Disable SMTP smuggling attacks
# https://www.postfix.org/smtp-smuggling.html
smtpd_forbid_unauth_pipelining = yes
smtpd_discard_ehlo_keywords = chunking
smtpd_forbid_bare_newline = yes
smtpd_client_connection_rate_limit = 2
#===
# Rate limiting
#===
slow_destination_recipient_limit = 20
slow_destination_concurrency_limit = 2
#====
# Transport configuration
#====
default_transport = smtp-ipv4
transport_maps = hash:/etc/postfix/transport
virtual_mailbox_domains = ldap:/etc/postfix/ldap-virtual-domains.cf
virtual_mailbox_maps = ldap:/etc/postfix/ldap-account.cf
virtual_alias_maps = ldap:/etc/postfix/ldap-alias.cf
virtual_transport = lmtp:dovecot-lmtp.service.prod.consul:24
#===
# Mail filters
#===
milter_default_action = accept
milter_protocol = 6
smtpd_milters = inet:opendkim.service.prod.consul:8999
non_smtpd_milters = inet:opendkim.service.prod.consul:8999

@@ -0,0 +1,117 @@
#
# Postfix master process configuration file. For details on the format
# of the file, see the master(5) manual page (command: "man 5 master").
#
# Do not forget to execute "postfix reload" after editing this file.
#
# ==========================================================================
# service type private unpriv chroot wakeup maxproc command + args
# (yes) (yes) (yes) (never) (100)
# ==========================================================================
smtp inet n - n - - smtpd
submission inet n - n - - smtpd
-o smtpd_tls_security_level=encrypt
-o smtpd_sasl_auth_enable=yes
-o smtpd_client_restrictions=permit_sasl_authenticated,reject
-o milter_macro_daemon_name=ORIGINATING
smtps inet n - n - - smtpd
-o smtpd_tls_wrappermode=yes
-o smtpd_sasl_auth_enable=yes
-o smtpd_client_restrictions=permit_sasl_authenticated,reject
-o milter_macro_daemon_name=ORIGINATING
#628 inet n - - - - qmqpd
pickup fifo n - n 60 1 pickup
cleanup unix n - n - 0 cleanup
qmgr fifo n - n 300 1 qmgr
#qmgr fifo n - - 300 1 oqmgr
tlsmgr unix - - n 1000? 1 tlsmgr
rewrite unix - - n - - trivial-rewrite
bounce unix - - n - 0 bounce
defer unix - - n - 0 bounce
trace unix - - n - 0 bounce
verify unix - - n - 1 verify
flush unix n - n 1000? 0 flush
proxymap unix - - n - - proxymap
proxywrite unix - - n - 1 proxymap
# When relaying mail as backup MX, disable fallback_relay to avoid MX loops
smtp unix - - n - - smtp
# -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
smtp-ipv4 unix - - n - - smtp
-o syslog_name=postfix-ipv4
-o inet_protocols=ipv4
slow unix - - n - 5 smtp
-o syslog_name=postfix-slow
-o smtp_destination_concurrency_limit=3
-o slow_destination_rate_delay=1
relay unix - - n - - smtp
-o smtp_fallback_relay=
showq unix n - n - - showq
error unix - - n - - error
retry unix - - n - - error
discard unix - - n - - discard
local unix - n n - - local
virtual unix - n n - - virtual
lmtp unix - - n - - lmtp
anvil unix - - n - 1 anvil
#
# ====================================================================
# Interfaces to non-Postfix software. Be sure to examine the manual
# pages of the non-Postfix software to find out what options it wants.
#
# Many of the following services use the Postfix pipe(8) delivery
# agent. See the pipe(8) man page for information about ${recipient}
# and other message envelope options.
# ====================================================================
#
# maildrop. See the Postfix MAILDROP_README file for details.
# Also specify in main.cf: maildrop_destination_recipient_limit=1
#
scache unix - - n - 1 scache
maildrop unix - n n - - pipe
flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}
#
# ====================================================================
#
# Recent Cyrus versions can use the existing "lmtp" master.cf entry.
#
# Specify in cyrus.conf:
# lmtp cmd="lmtpd -a" listen="localhost:lmtp" proto=tcp4
#
# Specify in main.cf one or more of the following:
# mailbox_transport = lmtp:inet:localhost
# virtual_transport = lmtp:inet:localhost
#
# ====================================================================
#
# Cyrus 2.1.5 (Amos Gouaux)
# Also specify in main.cf: cyrus_destination_recipient_limit=1
#
#cyrus unix - n n - - pipe
# user=cyrus argv=/cyrus/bin/deliver -e -r ${sender} -m ${extension} ${user}
#
# ====================================================================
# Old example of delivery via Cyrus.
#
#old-cyrus unix - n n - - pipe
# flags=R user=cyrus argv=/cyrus/bin/deliver -e -m ${extension} ${user}
#
# ====================================================================
#
# See the Postfix UUCP_README file for configuration details.
#
uucp unix - n n - - pipe
flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
#
# Other external delivery methods.
#
ifmail unix - n n - - pipe
flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
bsmtp unix - n n - - pipe
flags=Fq. user=bsmtp argv=/usr/lib/bsmtp/bsmtp -t$nexthop -f$sender $recipient
scalemail-backend unix - n n - 2 pipe
flags=R user=scalemail argv=/usr/lib/scalemail/bin/scalemail-store ${nexthop} ${user} ${extension}
mailman unix - n n - - pipe
flags=FR user=list argv=/usr/lib/mailman/bin/postfix-to-mailman.py
${nexthop} ${user}


@@ -0,0 +1,6 @@
#wanadoo.com slow:
#wanadoo.fr slow:
#orange.com slow:
#orange.fr slow:
#smtp.orange.fr slow:
gmail.com smtp-ipv4:
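The transport source above maps one domain per line to a master.cf service (here only `gmail.com` is pinned to `smtp-ipv4:`; the Orange/Wanadoo entries for the `slow` transport are commented out). A minimal sketch of the lookup semantics — the sample path and `lookup` helper are hypothetical; Postfix itself resolves lookups through the compiled `hash:` map, not the source file:

```shell
# Throwaway copy of the transport source (two columns: domain, transport).
cat > /tmp/transport.sample <<'EOF'
#wanadoo.fr slow:
gmail.com smtp-ipv4:
EOF

# Hypothetical helper: print the transport for a domain, skipping comment lines.
lookup() { awk -v d="$1" '$1 !~ /^#/ && $1 == d { print $2 }' /tmp/transport.sample; }

lookup gmail.com    # prints: smtp-ipv4:
lookup wanadoo.fr   # prints nothing: the entry is commented out
```

Against a real Postfix install, the equivalent check would be `postmap -q gmail.com hash:/etc/postfix/transport` after running `postmap` on the file.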

Binary file not shown.


@@ -0,0 +1,76 @@
{
WONoDetach = NO;
WOWorkersCount = 3;
SxVMemLimit = 600;
WOPort = "127.0.0.1:20000";
SOGoProfileURL = "postgresql://{{ key "secrets/email/sogo/postgre_auth" | trimSpace }}@{{ env "meta.site" }}.psql-proxy.service.prod.consul:5432/sogo/sogo_user_profile";
OCSFolderInfoURL = "postgresql://{{ key "secrets/email/sogo/postgre_auth" | trimSpace }}@{{ env "meta.site" }}.psql-proxy.service.prod.consul:5432/sogo/sogo_folder_info";
OCSSessionsFolderURL = "postgresql://{{ key "secrets/email/sogo/postgre_auth" | trimSpace }}@{{ env "meta.site" }}.psql-proxy.service.prod.consul:5432/sogo/sogo_sessions_folder";
OCSEMailAlarmsFolderURL = "postgresql://{{ key "secrets/email/sogo/postgre_auth" | trimSpace }}@{{ env "meta.site" }}.psql-proxy.service.prod.consul:5432/sogo/sogo_alarms_folder";
OCSStoreURL = "postgresql://{{ key "secrets/email/sogo/postgre_auth" | trimSpace }}@{{ env "meta.site" }}.psql-proxy.service.prod.consul:5432/sogo/sogo_store";
OCSAclURL = "postgresql://{{ key "secrets/email/sogo/postgre_auth" | trimSpace }}@{{ env "meta.site" }}.psql-proxy.service.prod.consul:5432/sogo/sogo_acl";
OCSCacheFolderURL = "postgresql://{{ key "secrets/email/sogo/postgre_auth" | trimSpace }}@{{ env "meta.site" }}.psql-proxy.service.prod.consul:5432/sogo/sogo_cache_folder";
SOGoTimeZone = "Europe/Paris";
SOGoMailDomain = "deuxfleurs.fr";
SOGoLanguage = French;
SOGoAppointmentSendEMailNotifications = YES;
SOGoEnablePublicAccess = YES;
SOGoMailingMechanism = smtp;
SOGoSMTPServer = postfix-smtp.service.prod.consul;
SOGoSMTPAuthenticationType = PLAIN;
SOGoForceExternalLoginWithEmail = YES;
SOGoIMAPAclConformsToIMAPExt = YES;
SOGoSentFolderName = Sent;
SOGoTrashFolderName = Trash;
SOGoDraftsFolderName = Drafts;
SOGoIMAPServer = "imaps://dovecot-imaps.service.prod.consul:993/?tlsVerifyMode=none";
SOGoSieveServer = "sieve://sieve.service.prod.consul:4190/?tls=YES";
SOGoVacationEnabled = NO;
SOGoForwardEnabled = NO;
SOGoSieveScriptsEnabled = NO;
SOGoFirstDayOfWeek = 1;
SOGoRefreshViewCheck = every_5_minutes;
SOGoPasswordChangeEnabled = YES;
SOGoPageTitle = "deuxfleurs.fr";
SOGoLoginModule = Mail;
SOGoMailAddOutgoingAddresses = YES;
SOGoSelectedAddressBook = autobook;
SOGoMailAuxiliaryUserAccountsEnabled = YES;
SOGoCalendarEventsDefaultClassification = PRIVATE;
SOGoMailReplyPlacement = above;
SOGoMailSignaturePlacement = above;
SOGoMailComposeMessageType = html;
SOGoLDAPContactInfoAttribute = "displayname";
SOGoDebugRequests = YES;
//SOGoEASDebugEnabled = YES;
//ImapDebugEnabled = YES;
LDAPDebugEnabled = YES;
//MySQL4DebugEnabled = YES;
PGDebugEnabled = YES;
SOGoUserSources = (
{
type = ldap;
CNFieldName = displayname;
IDFieldName = cn;
UIDFieldName = cn;
MailFieldNames = (mail, mailForwardingAddress);
SearchFieldNames = (displayname, cn, sn, mail, telephoneNumber);
IMAPLoginFieldName = mail;
baseDN = "ou=users,dc=deuxfleurs,dc=fr";
bindDN = "{{ key "secrets/email/sogo/ldap_binddn" | trimSpace }}";
bindPassword = "{{ key "secrets/email/sogo/ldap_bindpw" | trimSpace}}";
bindFields = (cn, mail);
canAuthenticate = YES;
displayName = "Bottin";
hostname = "ldap://{{ env "meta.site" }}.bottin.service.prod.consul:389";
id = bottin;
isAddressBook = NO;
}
);
}


@@ -0,0 +1,126 @@
job "email-android7" {
datacenters = ["neptune", "bespin"]
type = "service"
priority = 100
group "rsa-ecc-proxy" {
network {
port "smtps" {
static = 465
to = 465
}
port "imaps" {
static = 993
to = 993
}
}
task "imaps-proxy" {
driver = "docker"
config {
image = "alpine/socat:1.7.4.4"
readonly_rootfs = true
ports = [ "imaps" ]
network_mode = "host"
args = [
"openssl-listen:993,reuseaddr,fork,verify=0,bind=0.0.0.0,cert=/var/secrets/rsa.crt,key=/var/secrets/rsa.key",
"openssl:imap.deuxfleurs.fr:993,verify=0",
]
volumes = [
"secrets/certs:/var/secrets"
]
}
template {
data = "{{ key \"secrets/email/tls-tls-proxy/rsa.crt\" }}"
destination = "secrets/certs/rsa.crt"
}
template {
data = "{{ key \"secrets/email/tls-tls-proxy/rsa.key\" }}"
destination = "secrets/certs/rsa.key"
}
resources {
cpu = 50
memory = 50
}
service {
name = "imap-android7"
port = "imaps"
address_mode = "host"
tags = [
"rsa-ecc-proxy",
"(diplonat (tcp_port 993))",
"d53-a imap-android7.deuxfleurs.fr",
# IPv6 is disabled for now, as socat does not yet listen on IPv6
# "d53-aaaa imap-android7.deuxfleurs.fr"
]
check {
type = "tcp"
port = "imaps"
interval = "60s"
timeout = "5s"
check_restart {
limit = 3
grace = "90s"
ignore_warnings = false
}
}
}
}
task "smtps-proxy" {
driver = "docker"
config {
image = "alpine/socat:1.7.4.4"
readonly_rootfs = true
network_mode = "host"
ports = [ "smtps" ]
args = [
"openssl-listen:465,reuseaddr,fork,verify=0,bind=0.0.0.0,cert=/var/secrets/rsa.crt,key=/var/secrets/rsa.key",
"openssl:smtp.deuxfleurs.fr:465,verify=0",
]
volumes = [
"secrets/certs:/var/secrets"
]
}
template {
data = "{{ key \"secrets/email/tls-tls-proxy/rsa.crt\" }}"
destination = "secrets/certs/rsa.crt"
}
template {
data = "{{ key \"secrets/email/tls-tls-proxy/rsa.key\" }}"
destination = "secrets/certs/rsa.key"
}
resources {
cpu = 50
memory = 50
}
service {
name = "smtp-android7"
port = "smtps"
address_mode = "host"
tags = [
"rsa-ecc-proxy",
"(diplonat (tcp_port 465))",
"d53-a smtp-android7.deuxfleurs.fr",
# IPv6 is disabled for now, as socat does not yet listen on IPv6
# "d53-aaaa smtp-android7.deuxfleurs.fr"
]
check {
type = "tcp"
port = "smtps"
interval = "60s"
timeout = "5s"
check_restart {
limit = 3
grace = "90s"
ignore_warnings = false
}
}
}
}
}
}


@@ -0,0 +1,505 @@
job "email" {
datacenters = ["scorpio"]
type = "service"
priority = 65
group "dovecot" {
count = 1
network {
port "zauthentication_port" {
static = 1337
to = 1337
}
port "imaps_port" {
static = 993
to = 993
}
port "imap_port" {
static = 143
to = 143
}
port "lmtp_port" {
static = 24
to = 24
}
}
task "server" {
driver = "docker"
constraint {
attribute = "${attr.unique.hostname}"
operator = "="
value = "ananas"
}
config {
image = "superboum/amd64_dovecot:v6"
readonly_rootfs = false
network_mode = "host"
ports = [ "zauthentication_port", "imaps_port", "imap_port", "lmtp_port" ]
command = "dovecot"
args = [ "-F" ]
volumes = [
"secrets/ssl/certs:/etc/ssl/certs",
"secrets/ssl/private:/etc/ssl/private",
"secrets/conf/:/etc/dovecot/",
"/mnt/ssd/mail:/var/mail/",
]
}
env {
TLSINFO = "/C=FR/ST=Bretagne/L=Rennes/O=Deuxfleurs/CN=imap.deuxfleurs.fr"
}
resources {
cpu = 100
memory = 200
}
service {
name = "dovecot-imap"
port = "imap_port"
tags = [
"dovecot",
]
check {
type = "tcp"
port = "imap_port"
interval = "60s"
timeout = "5s"
check_restart {
limit = 3
grace = "90s"
ignore_warnings = false
}
}
}
service {
name = "dovecot-imaps"
port = "imaps_port"
tags = [
"dovecot",
"(diplonat (tcp_port 993))",
"d53-a imap.deuxfleurs.fr",
"d53-aaaa imap.deuxfleurs.fr",
]
check {
type = "tcp"
port = "imaps_port"
interval = "60s"
timeout = "5s"
check_restart {
limit = 3
grace = "90s"
ignore_warnings = false
}
}
}
service {
name = "dovecot-lmtp"
port = "lmtp_port"
tags = [
"dovecot",
]
check {
type = "tcp"
port = "lmtp_port"
interval = "60s"
timeout = "5s"
check_restart {
limit = 3
grace = "90s"
ignore_warnings = false
}
}
}
service {
name = "dovecot-auth"
port = "zauthentication_port"
tags = [
"dovecot",
]
check {
type = "tcp"
port = "zauthentication_port"
interval = "60s"
timeout = "5s"
check_restart {
limit = 3
grace = "90s"
ignore_warnings = false
}
}
}
template {
data = file("../config/dovecot/dovecot-ldap.conf.tpl")
destination = "secrets/conf/dovecot-ldap.conf"
perms = "400"
}
template {
data = file("../config/dovecot/dovecot.conf")
destination = "secrets/conf/dovecot.conf"
perms = "400"
}
# ----- secrets ------
template {
data = "{{ with $d := key \"tricot/certs/imap.deuxfleurs.fr\" | parseJSON }}{{ $d.cert_pem }}{{ end }}"
destination = "secrets/ssl/certs/dovecot.crt"
perms = "400"
}
template {
data = "{{ with $d := key \"tricot/certs/imap.deuxfleurs.fr\" | parseJSON }}{{ $d.key_pem }}{{ end }}"
destination = "secrets/ssl/private/dovecot.key"
perms = "400"
}
}
}
group "opendkim" {
count = 1
network {
port "dkim_port" {
static = 8999
to = 8999
}
}
task "server" {
driver = "docker"
config {
image = "superboum/amd64_opendkim:v6"
readonly_rootfs = false
ports = [ "dkim_port" ]
volumes = [
"/dev/log:/dev/log",
"secrets/dkim:/etc/dkim",
]
}
resources {
cpu = 100
memory = 50
}
service {
name = "opendkim"
port = "dkim_port"
address_mode = "host"
tags = [
"opendkim",
]
check {
type = "tcp"
port = "dkim_port"
interval = "60s"
timeout = "5s"
check_restart {
limit = 3
grace = "90s"
ignore_warnings = false
}
}
}
template {
data = file("../config/dkim/keytable")
destination = "secrets/dkim/keytable"
}
template {
data = file("../config/dkim/signingtable")
destination = "secrets/dkim/signingtable"
}
template {
data = file("../config/dkim/trusted")
destination = "secrets/dkim/trusted"
}
# --- secrets ---
template {
data = "{{ key \"secrets/email/dkim/smtp.private\" }}"
destination = "secrets/dkim/smtp.private"
}
}
}
group "postfix" {
count = 1
network {
port "smtp_port" {
static = 25
to = 25
}
port "smtps_port" {
static = 465
to = 465
}
port "submission_port" {
static = 587
to = 587
}
}
task "server" {
driver = "docker"
config {
image = "superboum/amd64_postfix:v4"
readonly_rootfs = false
network_mode = "host"
ports = [ "smtp_port", "smtps_port", "submission_port" ]
command = "postfix"
args = [ "start-fg" ]
volumes = [
"secrets/ssl:/etc/ssl",
"secrets/postfix:/etc/postfix-conf",
"/dev/log:/dev/log"
]
}
env {
TLSINFO = "/C=FR/ST=Bretagne/L=Rennes/O=Deuxfleurs/CN=smtp.deuxfleurs.fr"
MAILNAME = "smtp.deuxfleurs.fr"
}
resources {
cpu = 100
memory = 200
}
service {
name = "postfix-smtp"
port = "smtp_port"
address_mode = "host"
tags = [
"postfix",
"(diplonat (tcp_port 25 465 587))",
"d53-a smtp.deuxfleurs.fr",
"d53-aaaa smtp.deuxfleurs.fr"
]
check {
type = "tcp"
port = "smtp_port"
interval = "60s"
timeout = "5s"
check_restart {
limit = 3
grace = "90s"
ignore_warnings = false
}
}
}
service {
name = "postfix-smtps"
port = "smtps_port"
address_mode = "host"
tags = [
"postfix",
]
check {
type = "tcp"
port = "smtps_port"
interval = "60s"
timeout = "5s"
check_restart {
limit = 3
grace = "90s"
ignore_warnings = false
}
}
}
service {
name = "postfix-submission"
port = "submission_port"
address_mode = "host"
tags = [
"postfix",
]
check {
type = "tcp"
port = "submission_port"
interval = "60s"
timeout = "5s"
check_restart {
limit = 3
grace = "90s"
ignore_warnings = false
}
}
}
template {
data = file("../config/postfix/ldap-account.cf.tpl")
destination = "secrets/postfix/ldap-account.cf"
}
template {
data = file("../config/postfix/ldap-alias.cf.tpl")
destination = "secrets/postfix/ldap-alias.cf"
}
template {
data = file("../config/postfix/ldap-virtual-domains.cf.tpl")
destination = "secrets/postfix/ldap-virtual-domains.cf"
}
template {
data = file("../config/postfix/dynamicmaps.cf")
destination = "secrets/postfix/dynamicmaps.cf"
}
template {
data = file("../config/postfix/header_checks")
destination = "secrets/postfix/header_checks"
}
template {
data = file("../config/postfix/main.cf")
destination = "secrets/postfix/main.cf"
}
template {
data = file("../config/postfix/master.cf")
destination = "secrets/postfix/master.cf"
}
template {
data = file("../config/postfix/transport")
destination = "secrets/postfix/transport"
}
# --- secrets ---
template {
data = "{{ with $d := key \"tricot/certs/smtp.deuxfleurs.fr\" | parseJSON }}{{ $d.cert_pem }}{{ end }}"
destination = "secrets/ssl/postfix.crt"
perms = "400"
}
template {
data = "{{ with $d := key \"tricot/certs/smtp.deuxfleurs.fr\" | parseJSON }}{{ $d.key_pem }}{{ end }}"
destination = "secrets/ssl/postfix.key"
perms = "400"
}
}
}
group "alps" {
count = 1
network {
port "alps_web_port" { to = 1323 }
}
task "main" {
driver = "docker"
config {
image = "lxpz/amd64_alps:v4"
readonly_rootfs = true
ports = [ "alps_web_port" ]
args = [
"-skiptlsverification",
"-theme",
"alps",
"imaps://imap.deuxfleurs.fr:993",
"smtps://smtp.deuxfleurs.fr:465"
]
}
resources {
cpu = 100
memory = 100
}
service {
name = "alps"
port = "alps_web_port"
address_mode = "host"
tags = [
"alps",
"tricot alps.deuxfleurs.fr",
"d53-cname alps.deuxfleurs.fr",
]
check {
type = "tcp"
port = "alps_web_port"
interval = "60s"
timeout = "5s"
check_restart {
limit = 3
grace = "5m"
ignore_warnings = false
}
}
}
}
}
group "sogo" {
count = 1
network {
port "sogo_web_port" { to = 8080 }
}
task "bundle" {
driver = "docker"
config {
image = "superboum/amd64_sogo:v7"
readonly_rootfs = false
ports = [ "sogo_web_port" ]
volumes = [
"secrets/sogo.conf:/etc/sogo/sogo.conf",
]
}
template {
data = file("../config/sogo/sogo.conf.tpl")
destination = "secrets/sogo.conf"
}
resources {
cpu = 400
memory = 1500
memory_max = 2000
}
service {
name = "sogo"
port = "sogo_web_port"
address_mode = "host"
tags = [
"sogo",
"tricot www.sogo.deuxfleurs.fr",
"tricot sogo.deuxfleurs.fr",
"d53-cname sogo.deuxfleurs.fr",
]
check {
type = "tcp"
port = "sogo_web_port"
interval = "60s"
timeout = "5s"
check_restart {
limit = 3
grace = "5m"
ignore_warnings = false
}
}
}
}
}
}


@@ -0,0 +1,23 @@
# Email
## TLS TLS Proxy
Required for Android 7.0, which does not support elliptic-curve certificates.
Generate an RSA key and self-signed certificate:
```bash
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes -keyout rsa.key -out rsa.crt -subj "/CN=imap.deuxfleurs.fr" -addext "subjectAltName=DNS:smtp.deuxfleurs.fr"
```
Run the proxy script (it expects `rsa.crt` and `rsa.key` in `/tmp/tls-tls-proxy/`):
```bash
./integration/proxy.sh imap.deuxfleurs.fr:993 1993
```
Test it:
```bash
openssl s_client -connect localhost:1993
```
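To confirm that a generated certificate really advertises an RSA key (the whole point for Android 7 clients), you can inspect it with openssl. A quick sketch using a throwaway short-lived pair; the temp directory and `imap.example.org` CN are placeholders, not the production values:

```shell
dir=$(mktemp -d)
# Same shape as the generation command above, but a 1-day throwaway pair.
openssl req -x509 -newkey rsa:2048 -sha256 -days 1 -nodes \
  -keyout "$dir/rsa.key" -out "$dir/rsa.crt" -subj "/CN=imap.example.org"
# The certificate must report an RSA public key, not an EC one:
openssl x509 -in "$dir/rsa.crt" -noout -text | grep "Public Key Algorithm"
# expect a line containing: rsaEncryption
```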


@@ -0,0 +1,13 @@
#!/usr/bin/env bash
UPSTREAM=$1
PROXY_PORT=$2
socat -dd \
"openssl-listen:${PROXY_PORT},\
reuseaddr,\
fork,\
cert=/tmp/tls-tls-proxy/rsa.crt,\
key=/tmp/tls-tls-proxy/rsa.key,\
verify=0,\
bind=0.0.0.0" \
"openssl:${UPSTREAM},\
verify=0"


@@ -0,0 +1,32 @@
# ---- POSTFIX ----
[secrets."email/dkim/smtp.private"]
type = 'RSA_PRIVATE_KEY'
name = 'dkim'
# ---- DOVECOT ----
[service_users."dovecot"]
dn_secret = "email/dovecot/ldap_binddn"
password_secret = "email/dovecot/ldap_bindpwd"
# ---- SOGO ----
[service_users."sogo"]
dn_secret = "email/sogo/ldap_binddn"
password_secret = "email/sogo/ldap_bindpw"
[secrets."email/sogo/postgre_auth"]
type = 'user'
description = 'SOGo Postgres auth (format: sogo:<password>) (TODO: replace this with two separate files and change template)'
# ---- TLS TLS PROXY ---
[secrets."email/tls-tls-proxy/rsa.crt"]
type="user"
description="PEM encoded file containing the RSA certificate"
[secrets."email/tls-tls-proxy/rsa.key"]
type="user"
description="PEM encoded file containing the RSA key"


@@ -0,0 +1,47 @@
block_size = 1048576
metadata_dir = "/meta"
data_dir = "/data"
db_engine = "lmdb"
replication_mode = "3"
metadata_auto_snapshot_interval = "24h"
# IPv6 config using the ipv6 address statically defined in Nomad's node metadata
# make sure to put back double { and } if re-enabling this
#rpc_bind_addr = "[{ env "meta.public_ipv6" }]:3901"
#rpc_public_addr = "[{ env "meta.public_ipv6" }]:3901"
# IPv6 config using the ipv6 address dynamically detected from diplonat
{{ with $a := env "attr.unique.hostname" | printf "diplonat/autodiscovery/ipv6/%s" | key | parseJSON }}
rpc_bind_addr = "[{{ $a.address }}]:3901"
rpc_public_addr = "[{{ $a.address }}]:3901"
{{ end }}
rpc_secret = "{{ key "secrets/garage/rpc_secret" | trimSpace }}"
[consul_discovery]
consul_http_addr = "https://consul.service.prod.consul:8501"
service_name = "garage-prod-discovery"
ca_cert = "/etc/garage/consul-ca.crt"
client_cert = "/etc/garage/consul-client.crt"
client_key = "/etc/garage/consul-client.key"
tls_skip_verify = true
[s3_api]
s3_region = "garage"
api_bind_addr = "[::]:3900"
root_domain = ".garage.deuxfleurs.fr"
[k2v_api]
api_bind_addr = "[::]:3904"
[s3_web]
bind_addr = "[::]:3902"
root_domain = ".web.deuxfleurs.fr"
[admin]
api_bind_addr = "[::]:3903"
metrics_token = "{{ key "secrets/garage/metrics_token" | trimSpace }}"
admin_token = "{{ key "secrets/garage/admin_token" | trimSpace }}"


@@ -0,0 +1,221 @@
job "garage" {
datacenters = [ "neptune", "bespin", "scorpio" ]
type = "system"
priority = 80
update {
max_parallel = 2
min_healthy_time = "60s"
}
group "garage" {
network {
port "s3" { static = 3900 }
port "rpc" { static = 3901 }
port "web" { static = 3902 }
port "admin" { static = 3903 }
port "k2v" { static = 3904 }
}
update {
max_parallel = 10
min_healthy_time = "30s"
healthy_deadline = "5m"
}
task "server" {
driver = "docker"
config {
image = "dxflrs/garage:v1.0.0-rc1"
command = "/garage"
args = [ "server" ]
network_mode = "host"
volumes = [
"/mnt/storage/garage/data:/data",
"/mnt/ssd/garage/meta:/meta",
"secrets/garage.toml:/etc/garage.toml",
"secrets:/etc/garage",
]
logging {
type = "journald"
}
}
template {
data = file("../config/garage.toml")
destination = "secrets/garage.toml"
#change_mode = "noop"
}
template {
data = "{{ key \"secrets/consul/consul-ca.crt\" }}"
destination = "secrets/consul-ca.crt"
}
template {
data = "{{ key \"secrets/consul/consul-client.crt\" }}"
destination = "secrets/consul-client.crt"
}
template {
data = "{{ key \"secrets/consul/consul-client.key\" }}"
destination = "secrets/consul-client.key"
}
resources {
memory = 1000
memory_max = 3000
cpu = 1000
}
kill_timeout = "20s"
restart {
interval = "30m"
attempts = 10
delay = "15s"
mode = "delay"
}
#### Configuration for service ports: admin port (internal use only)
service {
port = "admin"
address_mode = "host"
name = "garage-admin"
# Check that Garage is alive and answering TCP connections
check {
type = "tcp"
interval = "60s"
timeout = "5s"
check_restart {
limit = 3
grace = "90s"
ignore_warnings = false
}
}
}
#### Configuration for service ports: externally available ports (API, web)
service {
tags = [
"garage_api",
"tricot garage.deuxfleurs.fr",
"tricot *.garage.deuxfleurs.fr",
"tricot-site-lb",
]
port = "s3"
address_mode = "host"
name = "garage-api"
# Check 1: Garage is alive and answering TCP connections
check {
name = "garage-api-live"
type = "tcp"
interval = "60s"
timeout = "5s"
check_restart {
limit = 3
grace = "90s"
ignore_warnings = false
}
}
# Check 2: Garage is in a healthy state and requests should be routed here
check {
name = "garage-api-healthy"
port = "admin"
type = "http"
path = "/health"
interval = "60s"
timeout = "5s"
}
}
service {
tags = [
"garage-web",
"tricot * 1",
"tricot-add-header Strict-Transport-Security max-age=63072000; includeSubDomains; preload",
"tricot-add-header X-Frame-Options SAMEORIGIN",
"tricot-add-header X-XSS-Protection 1; mode=block",
"tricot-add-header X-Content-Type-Options nosniff",
"tricot-on-demand-tls-ask http://garage-admin.service.prod.consul:3903/check",
"tricot-site-lb",
]
port = "web"
address_mode = "host"
name = "garage-web"
# Check 1: Garage is alive and answering TCP connections
check {
name = "garage-web-live"
type = "tcp"
interval = "60s"
timeout = "5s"
check_restart {
limit = 3
grace = "90s"
ignore_warnings = false
}
}
# Check 2: Garage is in a healthy state and requests should be routed here
check {
name = "garage-web-healthy"
port = "admin"
type = "http"
path = "/health"
interval = "60s"
timeout = "5s"
}
}
service {
tags = [
"garage-redirect-dummy",
"tricot www.deuxfleurs.fr 2",
"tricot osuny.org 2",
"tricot www.degrowth.net 2",
"tricot-add-redirect www.deuxfleurs.fr deuxfleurs.fr 301",
"tricot-add-redirect osuny.org www.osuny.org 301",
"tricot-add-redirect www.degrowth.net degrowth.net 301",
]
name = "garage-redirect-dummy"
address_mode = "host"
port = "web"
on_update = "ignore"
}
service {
tags = [
"garage_k2v",
"tricot k2v.deuxfleurs.fr",
"tricot-site-lb",
]
port = "k2v"
address_mode = "host"
name = "garage-k2v"
# Check 1: Garage is alive and answering TCP connections
check {
name = "garage-k2v-live"
type = "tcp"
interval = "60s"
timeout = "5s"
check_restart {
limit = 3
grace = "90s"
ignore_warnings = false
}
}
# Check 2: Garage is in a healthy state and requests should be routed here
check {
name = "garage-k2v-healthy"
port = "admin"
type = "http"
path = "/health"
interval = "60s"
timeout = "5s"
}
}
}
}
}


@@ -0,0 +1,14 @@
[secrets."garage/rpc_secret"]
type = 'command'
command = 'openssl rand -hex 32'
# can't auto-rotate, because we still have some nodes outside of Nomad
[secrets."garage/admin_token"]
type = 'command'
command = 'openssl rand -hex 32'
rotate = true
[secrets."garage/metrics_token"]
type = 'command'
command = 'openssl rand -hex 32'
rotate = true


@@ -0,0 +1,40 @@
{
"http_bind_addr": ":9991",
"ldap_server_addr": "ldap://{{ env "meta.site" }}.bottin.service.prod.consul:389",
"base_dn": "{{ key "secrets/directory/ldap_base_dn" }}",
"user_base_dn": "ou=users,{{ key "secrets/directory/ldap_base_dn" }}",
"user_name_attr": "cn",
"group_base_dn": "ou=groups,{{ key "secrets/directory/ldap_base_dn" }}",
"group_name_attr": "cn",
"mailing_list_base_dn": "ou=mailing_lists,ou=groups,{{ key "secrets/directory/ldap_base_dn" }}",
"mailing_list_name_attr": "cn",
"mailing_list_guest_user_base_dn": "ou=guests,ou=users,{{ key "secrets/directory/ldap_base_dn" }}",
"invitation_base_dn": "ou=invitations,{{ key "secrets/directory/ldap_base_dn" }}",
"invitation_name_attr": "cn",
"invited_mail_format": "{}@{{ key "secrets/directory/guichet/mail_domain" | trimSpace }}",
"invited_auto_groups": [
"cn=email,ou=groups,{{ key "secrets/directory/ldap_base_dn" }}"
],
"web_address": "https://{{ key "secrets/directory/guichet/web_hostname" }}",
"mail_from": "{{ key "secrets/directory/guichet/mail_from" }}",
"smtp_server": "{{ key "secrets/directory/guichet/smtp_server" }}",
"smtp_username": "{{ key "secrets/directory/guichet/smtp_user" | trimSpace }}",
"smtp_password": "{{ key "secrets/directory/guichet/smtp_pass" | trimSpace }}",
"admin_account": "cn=admin,{{ key "secrets/directory/ldap_base_dn" }}",
"group_can_admin": "cn=admin,ou=groups,{{ key "secrets/directory/ldap_base_dn" }}",
"group_can_invite": "cn=asso_deuxfleurs,ou=groups,{{ key "secrets/directory/ldap_base_dn" }}",
"s3_admin_endpoint": "garage-admin.service.prod.consul:3903",
"s3_admin_token": "{{ key "secrets/garage/admin_token" | trimSpace }}",
"s3_endpoint": "{{ key "secrets/directory/guichet/s3_endpoint" }}",
"s3_access_key": "{{ key "secrets/directory/guichet/s3_access_key" | trimSpace }}",
"s3_secret_key": "{{ key "secrets/directory/guichet/s3_secret_key" | trimSpace }}",
"s3_region": "{{ key "secrets/directory/guichet/s3_region" }}",
"s3_bucket": "{{ key "secrets/directory/guichet/s3_bucket" }}"
}

Some files were not shown because too many files have changed in this diff.