Compare commits

..

1 commit

Author SHA1 Message Date
ef0a9577e2
Fix style gen + publish API 2022-11-13 16:51:44 +01:00
146 changed files with 9048 additions and 24346 deletions

.drone.yml (new file, +50 lines)

@ -0,0 +1,50 @@
---
# see https://docs.drone.io/pipeline/configuration/
kind: pipeline
type: docker
name: build

steps:
  - name: submodules
    image: alpine/git
    commands:
      - git submodule update --init --recursive
      - cp -rv garage/doc/book content/documentation
      - cp -rv garage/doc/api ./static/

  - name: build-css
    image: node
    commands:
      - npm install
      - npx tailwindcss -i ./src/input.css -o ./static/style.css --minify

  - name: build-zola
    image: ghcr.io/getzola/zola:v0.15.3
    entrypoint: [ "/bin/zola" ]
    command: [ "build", "-u", "https://garagehq.deuxfleurs.fr" ]

  - name: upload
    image: plugins/s3
    settings:
      bucket: garagehq.deuxfleurs.fr
      access_key:
        from_secret: aws_access_key_id
      secret_key:
        from_secret: aws_secret_access_key
      source: public/**/*
      strip_prefix: public/
      target: /
      path_style: true
      endpoint: https://garage.deuxfleurs.fr
      region: garage
    when:
      branch:
        - master
      event:
        exclude:
          - pull_request

---
kind: signature
hmac: 8420ca485ee30f847d4f7f3bf8a40eedb5b4697c91ec1e7005366cdb30282c59
...

.gitignore (vendored, 2 lines changed)

@ -1,4 +1,4 @@
node_modules
public
content/documentation
static/style.css
static/api

.gitmodules (vendored, new file, +3 lines)

@ -0,0 +1,3 @@
[submodule "garage"]
path = garage
url = https://git.deuxfleurs.fr/Deuxfleurs/garage.git


@ -1,28 +1,46 @@
# Aerogramme Website
# Garage Website
## Install
¡ Work in progress (almost done) !
```
yarn install
```
---
## Dev
## Setup
```
yarn start
zola serve
```
- Install Zola with `pacman -S zola`
- Clone this repo
- Run `npm install` to get the dev dependencies
- Run `zola build` to get the public directory
- Run `npm start` to compile styles and scripts
- Run `zola serve`
## Build
```
yarn build
zola build
yarn build
```
CSS : `28.4 kB`
## Deploy
JS : `6.8 kB (app)` + `1.2 MB (search)`*
```
aws s3 sync --delete ./public s3://aerogramme.deuxfleurs.fr/
```
*<em>The search index is loaded only when the user opens the search modal</em>
Images + Icons : `1.1 MB`
## Features & utilities
JavaScript can be disabled and the website will still work nicely.
It only brings quality-of-life improvements for the user.
A feature is marked [x] if it still works <u>without</u> JavaScript enabled.
- [x] Responsive main navigation menu (toggle)
- [x] Documentation: users can expand or collapse ToC submenus
- [ ] Documentation: expand only the current ToC submenu after a page change
- [ ] Documentation: sidebar focus effect on the current section anchor when the user scrolls
- [ ] Documentation: ToC that follows the user's scroll
- [ ] Global search
## Screenshots
<a target="_blank" href="https://git.deuxfleurs.fr/sptl/garage_website/raw/branch/master/static/screenshots/screenshot-480.png">480px</a>
<a target="_blank" href="https://git.deuxfleurs.fr/sptl/garage_website/raw/branch/master/static/screenshots/screenshot-768.png">768px</a>
<a target="_blank" href="https://git.deuxfleurs.fr/sptl/garage_website/raw/branch/master/static/screenshots/screenshot-1280.png">1280px</a>


@ -1,6 +1,6 @@
base_url = "https://aerogramme.deuxfleurs.fr"
title = "Aerogramme"
description = "A robust email server"
base_url = "https://garagehq.deuxfleurs.fr"
title = "Garage HQ"
description = "An open-source distributed object storage service tailored for self-hosting"
default_language = "en"
output_dir = "public"
compile_sass = true
@ -40,8 +40,8 @@ include_path = false
include_content = true
[extra]
katex.enabled = true
katex.auto_render = true
katex.enabled = false
katex.auto_render = false
chart.enabled = false
mermaid.enabled = true
galleria.enabled = false
@ -54,19 +54,22 @@ navbar_items = [
]
[extra.favicon]
favicon_svg = "/logo/aerogramme-blue-sq.svg"
favicon_16x16 = "/icons/favicon-16x16.png"
favicon_32x32 = "/icons/favicon-32x32.png"
apple_touch_icon = "/icons/apple-touch-icon.png"
webmanifest = "/icons/site.webmanifest"
[extra.organization]
name = "Aerogramme"
description = "A robust email server"
logo = "/logo/aerogramme-blue-hz.svg"
logo_simple = "/logo/aeogramme-blue-sq.svg"
logo_horizontal = "/logo/aerogramme-blue-sq.svg"
name = "Garage"
description = "An open-source distributed object storage service tailored for self-hosting"
logo = "/images/garage-logo.svg"
logo_simple = "/images/garage-logo-simple.svg"
logo_horizontal = "/images/garage-logo-horizontal.svg"
[extra.author]
name = "Aerogramme"
avatar = "/logo/aerogramme-blue-sq.svg"
name = "Garage"
avatar = "/images/garage-logo.svg"
[extra.social]
git = "https://git.deuxfleurs.fr/Deuxfleurs/aerogramme"
git = "https://git.deuxfleurs.fr/Deuxfleurs/garage"
email = "garagehq@deuxfleurs.fr"


@ -0,0 +1,52 @@
+++
title="Garage will be at FOSDEM'22"
date=2022-02-02
+++
*FOSDEM is an international meeting about Free Software, organized in Brussels.
Next Sunday, February 6th, 2022, we will be there to present Garage.*
<!-- more -->
---
In 2000, a Belgian free software activist going by the name of Raphael Baudin
set out to create a small event for free software developers in Brussels.
This event quickly became the "Free and Open Source Developers' European Meeting",
or FOSDEM for short. 22 years later, FOSDEM is a major event for free software developers
around the world. And for this year, we have the immense pleasure of announcing
that the Deuxfleurs association will be there to present Garage.
The event is usually hosted by the Université Libre de Bruxelles (ULB) and welcomes
around 5000 people. But due to COVID, the event has been taking place online
for the last few years. Nothing too unfamiliar to us, as the organizers use
the same tools as we do: a combination of Jitsi and Matrix.
We are of course extremely honored that our presentation was accepted.
If technical details are your thing, we invite you to come and share this event with us.
In any case, the event will be recorded and available as a VOD (Video On Demand)
afterward. Here are the practical details:
**When?** On Sunday, February 6th, 2022, from 10:30 AM to 11:00 AM CET.
**What for?** Introducing the Garage storage platform.
**By whom?** The presentation will be given by Alex;
other developers will be present to answer questions.
**For whom?** The presentation is aimed at a technical audience that is knowledgeable in software development or systems administration.
**Price:** FOSDEM'22 is an entirely free event.
**Where?** Online, in the Software Defined Storage devroom.
- [Join the room interactively (video and chat)](https://chat.fosdem.org/#/room/%23sds-devroom:fosdem.org)
- [Join the room as a spectator (video only)](https://live.fosdem.org/watch/dsds)
- [Event details on the FOSDEM'22 website](https://fosdem.org/2022/schedule/event/sds_garage_introduction)
And if you are not so much of a technical person but are dreaming of
a more ethical and emancipatory digital world,
stay tuned for news from the Deuxfleurs association,
as we will likely have other events very soon!


@ -0,0 +1,170 @@
+++
title="Introducing Garage, our self-hosted distributed object storage solution"
date=2022-02-01
+++
*Deuxfleurs is a non-profit based in France that aims to defend and promote
individual freedom and rights on the Internet. In their quest to build a
decentralized, resilient self-hosting infrastructure, they have found that
currently, existing software is often ill-suited to such a particular deployment
scenario. In the context of data storage, Garage was built to provide a highly
available data store that exploits redundancy over different geographical
locations, and does its best to not be too impacted by network latencies.*
<!-- more -->
---
Hello! We are Deuxfleurs, a non-profit based in France working to promote
self-hosting and small-scale hosting.
What does that mean? Well, we figured that big tech monopolies such as Google,
Facebook or Amazon today hold disproportionate power and are becoming quite
dangerous to us, citizens of the Internet. They know everything we are doing,
saying, and even thinking, and they are not making good use of that
information. The interests of these companies are those of the capitalist
elite: they are most interested in making huge profits by exploiting the
Earth's precious resources, producing, advertising, and selling us massive
amounts of stuff we don't need. They don't truly care about the needs of the
people, nor do they care that planetary destruction is under way because of
them.
Big tech monopolies are in a particularly strong position to influence our
behaviors, consciously or not, because we rely on them for selecting the online
content we read, watch, or listen to. Advertising is omnipresent, and because
they know us so well, they can subvert us into thinking that a mindless
consumer society is what we truly want, whereas we most likely would choose
otherwise if we had the chance to think by ourselves.
We don't want that. That's not what the Internet is for. Freedom is freedom
from influence: the ability to do things by oneself, for oneself, on one's own
terms. Self-hosting is both the means by which we reclaim this freedom on the
Internet (by not using the services of big tech monopolies, and thus removing
ourselves from their influence) and the result of applying our critical
thinking and our technical abilities to build the Internet that suits us.
Self-hosting means that we don't use cloud services. Instead, we store our
personal data on computers that we own, which we run at home. We build local
communities to share the services that we run with non-technical people. We
communicate with other groups that do the same (or, sometimes, that don't)
thanks to standard protocols such as HTTP, e-mail, or Matrix, that allow a
global community to exist outside of big tech monopolies.
### Self-hosting is a hard problem
As I said, self-hosting means running our own hardware at home, and providing
24/7 Internet services from there. We have many reasons for doing this. One is
because this is the only way we can truly control who has access to our data.
Another one is that it helps us be aware of the physical substrate of which the
Internet is made: making the Internet run has an environmental cost that we
want to evaluate and keep under control. The physical hardware also gives us a
sense of community, calling to mind all of the people that could currently be
connected and making use of our services, and reminding us of the purpose for
which we are doing this.
If you have a home, you know that bad things can happen there too. The power
grid is not infallible, and neither is your Internet connection. Fires and floods
happen. And the computers we are running can themselves crash at any moment,
for any number of reasons. Self-hosted solutions today are often not equipped
to face such challenges and might suffer from unavailability or data loss
as a consequence.
If we want to grow our communities, and attract more people that might be
sympathetic to our vision of the world, we need a baseline of quality for the
services we provide. Users can tolerate some flaws or imperfections, in the
name of defending and promoting their ideals, but if the services are
catastrophic, being unavailable at critical times, or losing users' precious
data, the compromise is much harder to make and people will be tempted to go
back to a comfortable lifestyle bestowed by big tech companies.
Fixing availability, making services reliable even when hosted at unreliable
locations or on unreliable hardware is one of the main objectives of
Deuxfleurs, and in particular of the project Garage which we are building.
### Distributed systems to the rescue
Distributed systems, or distributed computing, is a set of techniques that can
be applied to make computer services more reliable, by making them run on
several computers at once. It so happens that a few of us have studied
distributed systems, which helps a lot (some of us even have PhDs!)
The following concepts of distributed computing are particularly relevant to
us:
- **Crash tolerance** is when a service that runs on several computers at once
can continue operating normally even when one (or a small number) of the
computers stops working.
- **Geo-distribution** is when the computers that make up a distributed system
are not all located in the same facility. Ideally, they would even be spread
over different cities, so that outages affecting one region do not prevent
the rest of the system from working.
We set out to apply these concepts at Deuxfleurs to build our infrastructure,
in order to provide services that are replicated over several machines in several
geographical locations, so that we are able to provide good availability guarantees
to our users. We try to rely as much as possible on software packages that already
exist and are freely available, for example the Linux operating system
and the HashiCorp suite (Nomad and Consul).
Unfortunately, in the domain of distributed data storage, the available options
weren't entirely satisfactory in our case, which is why we launched the
development of our own solution: Garage. We will talk more in other blog
posts about why Garage is better suited to us than alternative options. In this
post, I will simply try to give a high-level overview of what Garage is.
### What is Garage, exactly?
Garage is a distributed storage solution that automatically replicates your
data on several servers. Garage takes into account the geographical location
of servers, and ensures that copies of your data are stored in different
locations when possible for maximal redundancy, a unique feature in the
landscape of distributed storage systems.
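To give an intuition of what "taking into account the geographical location of servers" means, here is a toy Python sketch of zone-aware placement (much simpler than Garage's real layout algorithm, which also balances storage capacity): copies are spread over as many distinct zones as possible.

```
from collections import defaultdict

def pick_replicas(nodes: dict, replication: int = 3) -> list:
    """Toy zone-aware placement: choose `replication` nodes located in as many
    distinct zones as possible."""
    by_zone = defaultdict(list)
    for node, zone in nodes.items():
        by_zone[zone].append(node)
    chosen = []
    # Visit zones round-robin so that copies land in different zones first.
    while len(chosen) < replication and any(by_zone.values()):
        for zone in list(by_zone):
            if by_zone[zone] and len(chosen) < replication:
                chosen.append(by_zone[zone].pop())
    return chosen

# 4 nodes in 3 zones: the 3 copies end up in 3 different zones.
print(pick_replicas({"n1": "paris", "n2": "paris", "n3": "lyon", "n4": "brussels"}))
```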
Garage implements the Amazon S3 protocol, a de-facto standard that makes it
compatible with a large variety of existing software. For instance it can be
used as a storage backend for many self-hosted web applications such as
NextCloud, Matrix, Mastodon, Peertube, and many others, replacing the local
file system of a server with a distributed storage layer. Garage can also be
used to synchronize your files or store your backups with utilities such as
Rclone or Restic. Last but not least, Garage can be used to host static
websites, such as the one you are currently reading, which is served directly
by the Garage cluster we host at Deuxfleurs.
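To make the S3 compatibility concrete, here is a minimal Python sketch using the standard `boto3` client; the endpoint URL, credentials, and bucket name are placeholders to replace with those of your own Garage cluster, and any other S3 client library or tool works the same way.

```
import boto3

# Placeholder endpoint and credentials: point them at your own Garage cluster.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:3900",   # assumed S3 API address of the cluster
    aws_access_key_id="GKyourkeyid",
    aws_secret_access_key="yoursecret",
    region_name="garage",
)

s3.create_bucket(Bucket="demo")
s3.put_object(Bucket="demo", Key="hello.txt", Body=b"Hello, Garage!")
for obj in s3.list_objects_v2(Bucket="demo").get("Contents", []):
    print(obj["Key"], obj["Size"])
```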
Garage leverages the theory of distributed systems, and in particular
*Conflict-free Replicated Data Types* (CRDTs for short), a set of mathematical
tools that help us write distributed software that runs faster, by avoiding
some kinds of unnecessary chit-chat between servers. In a future blog post,
we will show how this allows us to significantly outperform Minio, our closest
competitor (another self-hostable implementation of the S3 protocol).
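For readers who have never met a CRDT, here is a toy example in Python (a last-writer-wins register, not one of Garage's actual data types): because merging two replicas is a simple commutative operation, replicas converge without any coordination round-trip.

```
from dataclasses import dataclass

@dataclass
class LwwRegister:
    """Toy last-writer-wins register: the write with the highest timestamp wins."""
    timestamp: float
    value: str

    def merge(self, other: "LwwRegister") -> "LwwRegister":
        # Merging is commutative, associative and idempotent: no chit-chat needed.
        return self if self.timestamp >= other.timestamp else other

a = LwwRegister(1.0, "written on node A")
b = LwwRegister(2.0, "written on node B")
assert a.merge(b) == b.merge(a) == b   # both replicas converge to the same value
```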
On the side of software engineering, we are committed to making Garage
a tool that is reliable, lightweight, and easy to administrate.
Garage is written in the Rust programming language, which helps us ensure
the stability and safety of the software, and allows us to build software
that is fast and uses little memory.
### Conclusion
The current version of Garage is version 0.6, which is a *beta* release.
This means that it hasn't yet been tested by many people, and we might have
ignored some edge cases in which it would not perform as expected.
However, we are already actively using Garage at Deuxfleurs for many purposes, and
it is working exceptionally well for us. We are currently using it to store
backups of personal files, to store the media files that we send and receive
over the Matrix network, as well as to host a small but increasing number of
static websites. Our current deployment hosts about 200 000 files spread across 50
buckets, for a total size of slightly above 500 GB. These numbers may seem small
when compared to the datasets you could expect your typical cloud provider to
be handling; however, these sizes are fairly typical of the small-scale
self-hosted deployments we are targeting, and our Garage cluster is in no way
nearing its capacity limit.
Today, we are proudly releasing Garage's new website, with updated
documentation pages. Poke around to try to understand how the software works,
and try installing your own instance! Your feedback is precious to us, and we
would be glad to hear back from you on our
[issue tracker](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues), by
[e-mail](mailto:garagehq@deuxfleurs.fr), or on our
[Matrix channel](https://matrix.to/#/%23garage:deuxfleurs.fr) (`#garage:deuxfleurs.fr`).

6 binary files not shown (new images). Sizes: 420 KiB, 94 KiB, 42 KiB, 46 KiB, 51 KiB, 28 KiB.


@ -0,0 +1,267 @@
+++
title="We tried IPFS over Garage"
date=2022-07-04
+++
*Once you have spawned your Garage cluster, you might be interested in finding ways to share your content efficiently with the rest of the world,
such as by joining federated platforms.
In this blog post, we experiment with interconnecting the InterPlanetary File System (IPFS) daemon with Garage.
We discuss the different bottlenecks and limitations of the software stack in its current state.*
<!-- more -->
---
<!--Garage has been designed to be operated inside the same "administrative area", ie. operated by a single organization made of members that fully trust each other.
It is an intended design decision: trusting each other enables Garage to spread data over the machines instead of duplicating it.
Still, you might want to share and collaborate with the rest of the world, and it can be done in 2 ways with Garage: through the integrated HTTP server that can serve your bucket as a static website,
or by connecting it to an application that will act as a "proxy" between Garage and the rest of the world.
We refer as proxy software that knows how to speak federated protocols (eg. Activity Pub, Solid, RemoteStorage, etc.) or distributed/p2p protocols (eg. BitTorrent, IPFS, etc.).-->
## Some context
People often struggle to see the difference between IPFS and Garage, so let's start by making clear that these projects are complementary and not interchangeable.
Personally, I see IPFS as the intersection between BitTorrent and a file system. BitTorrent remains to this day one of the most efficient ways to deliver
a copy of a file or a folder to a very large number of destinations. It however lacks some form of interactivity: once a torrent file has been generated, you can't simply
add or remove files from it. By presenting itself more like a file system, IPFS is able to handle this use case out of the box.
<!--IPFS is a content-addressable network built in a peer-to-peer fashion.
In simple words, it means that you query the content you want with its identifier without having to know *where* it is hosted on the network, and especially on which machine.
As a side effect, you can share content over the Internet without any configuration (no firewall, NAT, fixed IP, DNS, etc.).-->
<!--However, IPFS does not enforce any property on the durability and availability of your data: the collaboration mentioned earlier is
done only on a spontaneous approach. So at first, if you want to be sure that your content remains alive, you must keep it on your node.
And if nobody makes a copy of your content, you will lose it as soon as your node goes offline and/or crashes.
Furthermore, if you need multiple nodes to store your content, IPFS is not able to automatically place content on your nodes,
enforce a given replication amount, check the integrity of your content, and so on.-->
However, you would probably not rely on BitTorrent to durably store the encrypted holiday pictures you shared with your friends,
as content on BitTorrent tends to vanish when no one in the network has a copy of it anymore. The same applies to IPFS.
Even if at some time everyone has a copy of the pictures on their hard disk, people might delete these copies after a while without you knowing it.
You also can't easily collaborate on storing this common treasure. For example, there is no automatic way to say that Alice and Bob
are in charge of storing the first half of the archive while Charlie and Eve are in charge of the second half.
➡️ **IPFS is designed to deliver content.**
*Note: the IPFS project has another project named [IPFS Cluster](https://cluster.ipfs.io/) that allows servers to collaborate on hosting IPFS content.
[Resilio](https://www.resilio.com/individuals/) and [Syncthing](https://syncthing.net/) both feature protocols inspired by BitTorrent to synchronize a tree of your file system between multiple computers.
Reviewing these solutions is out of the scope of this article, feel free to try them by yourself!*
Garage, on the other hand, is designed to automatically spread your content over all your available nodes, in a manner that makes the best possible use of your storage space.
At the same time, it ensures that your content is always replicated exactly 3 times across the cluster (or less if you change a configuration parameter),
on different geographical zones when possible.
<!--To access this content, you must have an API key, and have a correctly configured machine available over the network (including DNS/IP address/etc.). If the amount of traffic you receive is way larger than what your cluster can handle, your cluster will become simply unresponsive. Sharing content across people that do not trust each other, ie. who operate independent clusters, is not a feature of Garage: you have to rely on external software.-->
However, this means that when content is requested from a Garage cluster, there are only 3 nodes capable of returning it to the user.
As a consequence, when content becomes popular, this subset of nodes might become a bottleneck.
Moreover, all resources (keys, files, buckets) are tightly coupled to the Garage cluster on which they exist;
servers from different clusters can't collaborate to serve together the same data (without additional software).
➡️ **Garage is designed to durably store content.**
In this blog post, we will explore whether we can combine efficient delivery and strong durability by connecting an IPFS node to a Garage cluster.
## Try #1: Vanilla IPFS over Garage
<!--If you are not familiar with IPFS, is available both as a desktop app and a [CLI app](https://docs.ipfs.io/install/command-line/), in this post we will cover the CLI app as it is often easier to understand how things are working internally.
You can quickly follow the official [quick start guide](https://docs.ipfs.io/how-to/command-line-quick-start/#initialize-the-repository) to have an up and running node.-->
IPFS is available as a pre-compiled binary, but to connect it with Garage, we need a plugin named [ipfs/go-ds-s3](https://github.com/ipfs/go-ds-s3).
The Peergos project maintains a fork of this plugin, as the upstream version is apparently known for hitting Amazon's rate limits
([#105](https://github.com/ipfs/go-ds-s3/issues/105), [#205](https://github.com/ipfs/go-ds-s3/pull/205)).
Their fork is the one we will try in the following.
The easiest way to use this plugin with IPFS is to bundle it into the main IPFS daemon and recompile IPFS from source.
Following the instructions in the README file allowed me to spawn an IPFS daemon configured with S3 as the block store.
I had a small issue when adding the plugin to the `plugin/loader/preload_list` file: the given command lacks a newline.
I had to edit the file manually after running it, but the issue was directly visible and easy to fix.
After that, I just ran the daemon and accessed the web interface to upload a photo of my dog:
![A dog](./dog.jpg)
A content identifier (CID) was assigned to this picture:
```
QmNt7NSzyGkJ5K9QzyceDXd18PbLKrMAE93XuSC2487EFn
```
The photo is now accessible on the whole network.
For example, you can inspect it [from the official gateway](https://explore.ipld.io/#/explore/QmNt7NSzyGkJ5K9QzyceDXd18PbLKrMAE93XuSC2487EFn):
![A screenshot of the IPFS explorer](./explorer.png)
At the same time, I was monitoring Garage (through [the OpenTelemetry stack we implemented earlier this year](/blog/2022-v0-7-released/)).
Just after launching the daemon - and before doing anything - I was met by this surprisingly active Grafana plot:
![Grafana API request rate when IPFS is idle](./idle.png)
<center><i>Legend: y axis = requests per 10 seconds, x axis = time</i></center><p></p>
It shows that on average, we handle around 250 requests per second. Most of these requests are in fact the IPFS daemon checking whether a block exists in Garage.
These requests are triggered by IPFS's DHT service: since my node is reachable over the Internet, it acts as a public DHT server and has to answer global
block requests over the whole network. Each time it receives a request for a block, it sends a request to its storage back-end (in our case, to Garage) to see if a copy exists locally.
*We will try to tweak the IPFS configuration later - we know that we can deactivate the DHT server. For now, we will continue with the default parameters.*
When I started interacting with the IPFS node by sending a file or browsing the default proposed catalogs (i.e. the full XKCD archive),
I quickly hit limits with our monitoring stack which, in its default configuration, is not able to ingest the large amount of tracing data produced by the high number of S3 requests originating from the IPFS daemon.
We have the following error in Garage's logs:
```
OpenTelemetry trace error occurred. cannot send span to the batch span processor because the channel is full
```
At this point, I didn't feel that it would be very interesting to fix this issue just to find out exactly how many requests were being made to the cluster.
In my opinion, such a simple task as sharing a picture should not require so many requests to the storage server anyway.
As a comparison, this whole webpage, with its pictures, triggers around 10 requests on Garage when loaded, not thousands.
I think we can conclude that this first try was a failure.
The S3 storage plugin for IPFS does too many requests and would need some important work to be optimized.
However, we are aware that the people behind Peergos are known to run their IPFS-based software in production with an S3 backend,
so we should not give up too fast.
## Try #2: Peergos over Garage
[Peergos](https://peergos.org/) is designed as an end-to-end encrypted and federated alternative to Nextcloud.
Internally, it is built on IPFS and is known to have a [deep integration with the S3 API](https://peergos.org/posts/direct-s3).
One important point of this integration is that your browser is able to bypass both the Peergos daemon and the IPFS daemon
to write and read IPFS blocks directly from the S3 API server.
*I don't know exactly if Peergos is still considered alpha quality, or if a beta version was released,
but keep in mind that it might be more experimental than you'd like!*
<!--To give ourselves some courage in this adventure, let's start with a nice screenshot of their web UI:
![Peergos Web UI](./peergos.jpg)-->
Starting Peergos on top of Garage required some small patches on both sides, but in the end, I was able to get it working.
I was able to upload my file, see it in the interface, create a link to share it, rename it, move it to a folder, and so on:
![A screenshot of the Peergos interface](./upload.png)
At the same time, the fans of my computer started to become a bit loud!
A quick look at Grafana showed again a very active Garage:
![Screenshot of a grafana plot showing requests per second over time](./grafa.png)
<center><i>Legend: y axis = requests per 10 seconds on log(10) scale, x axis = time</i></center><p></p>
Again, the workload is dominated by S3 `HeadObject` requests.
After taking a look at `~/.peergos/.ipfs/config`, it seems that the IPFS configuration used by the Peergos project is quite standard,
which means that, as before, we are acting as a DHT server and have to answer thousands of block requests every second.
We also have some traffic on the `GetObject` and `OPTIONS` endpoints (with peaks up to ~45 req/sec).
This traffic is all generated by Peergos.
The `OPTIONS` HTTP verb is here because we use the direct access feature of Peergos,
meaning that our browser is talking directly to Garage and has to use CORS to validate requests for security.
Internally, IPFS splits files into blocks of less than 256 kB. My picture is thus split into 2 blocks, requiring 2 requests over Garage to fetch it.
But even knowing that IPFS splits files into small blocks, I can't explain why we have so many `GetObject` requests.
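For reference, the number of S3 requests implied by block splitting alone can be estimated with a quick back-of-the-envelope calculation (256 KB blocks, one `GetObject` per block):

```
import math

def s3_requests_for(file_size_bytes: int, block_size: int = 256 * 1024) -> int:
    """Rough lower bound: one GetObject per IPFS block of the file."""
    return math.ceil(file_size_bytes / block_size)

print(s3_requests_for(500 * 1024))         # a ~500 kB picture  -> 2 requests
print(s3_requests_for(100 * 1024 * 1024))  # a 100 MB file      -> 400 requests
```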
## Try #3: Optimizing IPFS
<!--
Routing = dhtclient
![](./grafa2.png)
-->
We have seen in our 2 previous tries that the main source of load was the federation layer, and in particular the DHT server.
In this section, we'd like to artificially remove this problem from the equation by preventing our IPFS node from federating,
and see what pressure Peergos alone puts on our local cluster.
To isolate IPFS, I have set its routing type to `none`, I have cleared its bootstrap node list,
and I configured the swarm socket to listen only on `localhost`.
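For the record, here is a small Python sketch of the isolation settings described above, applied directly to the IPFS configuration file (the path and key names assume a default `~/.ipfs` layout; the same result can also be obtained with the `ipfs config` command):

```
import json, os

# Assumed default repo location; Peergos uses its own repo under ~/.peergos/.ipfs
cfg_path = os.path.expanduser("~/.ipfs/config")
with open(cfg_path) as f:
    cfg = json.load(f)

cfg["Routing"]["Type"] = "none"                          # no DHT at all
cfg["Bootstrap"] = []                                    # don't join the public swarm
cfg["Addresses"]["Swarm"] = ["/ip4/127.0.0.1/tcp/4001"]  # listen on localhost only

with open(cfg_path, "w") as f:
    json.dump(cfg, f, indent=2)
```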
Finally, I restarted Peergos and was able to observe this more peaceful graph:
![Screenshot of a grafana plot showing requests per second over time](./grafa3.png)
<center><i>Legend: y axis = requests per 10 seconds on log(10) scale, x axis = time</i></center><p></p>
Now, for a given endpoint, we have peaks of around 10 req/sec, which is way more reasonable.
Furthermore, we are no longer hammering our back-end with requests for objects that are not there.
After discussing with the developers, it is possible to go even further by running Peergos without IPFS:
this is what they do for some of their tests. If at the same time we increased the size of data blocks,
we might have a non-federated but quite efficient end-to-end encrypted "cloud storage" that works well over Garage,
with our clients directly hitting the S3 API!
For setups where federation is a hard requirement,
the next step would be to gradually allow our node to connect to the IPFS network
while ensuring that the traffic to the Garage cluster remains low.
For example, configuring our IPFS node as a `dhtclient` instead of a `dhtserver` would exempt it from answering public DHT requests.
Keeping an in-memory index (as a hash map and/or a Bloom filter) of the blocks stored on the current node
could also drastically reduce the number of requests.
It could also be interesting to explore ways to run in one process a full IPFS node with a DHT
server on the regular file system, and reserve a second process configured with the S3 back-end to handle only our Peergos data.
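To illustrate the in-memory index idea mentioned above, here is a minimal Python sketch (using `boto3`, with placeholder endpoint and credentials; the real plugin is written in Go): negative lookups are answered from memory and never reach the S3 back-end.

```
import boto3
from botocore.exceptions import ClientError

class CachedBlockIndex:
    """Keep an in-memory set of known block keys so that lookups for blocks
    we don't have never turn into S3 requests."""

    def __init__(self, s3, bucket):
        self.s3 = s3
        self.bucket = bucket
        self.known = set()
        # Populate the index once at startup by listing the bucket.
        for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
            for obj in page.get("Contents", []):
                self.known.add(obj["Key"])

    def has(self, key):
        if key not in self.known:
            return False          # answered from memory, no S3 round-trip
        try:
            self.s3.head_object(Bucket=self.bucket, Key=key)
            return True
        except ClientError:
            self.known.discard(key)
            return False

s3 = boto3.client("s3", endpoint_url="http://localhost:3900",
                  aws_access_key_id="GKyourkeyid", aws_secret_access_key="yoursecret")
index = CachedBlockIndex(s3, "ipfs-blocks")
print(index.has("some-block-key"))
```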
However, even with these optimizations, the best we can expect is the traffic we have on the previous plot.
From a theoretical perspective, it is still higher than the optimal number of requests.
On S3, storing a file, downloading a file, and listing available files are all actions that can be done in a single request.
Even if all requests don't have the same cost on the cluster, processing a request has a non-negligible fixed cost.
## Are S3 and IPFS incompatible?
Tweaking IPFS in order to try to make it work on an S3 backend is all well and good,
but in some sense, the assumptions made by IPFS are fundamentally incompatible with using S3 as block storage.
First, data on IPFS is split into relatively small chunks: all IPFS blocks must be less than 1 MB, with most being 256 KB or less.
This means that large files or complex directory hierarchies will need thousands of blocks to be stored,
each of which is mapped to a single object in the S3 storage back-end.
On the other hand, S3 implementations such as Garage are made to handle very large objects efficiently,
and they also provide their own primitives for rapidly listing all the objects present in a bucket or a directory.
There is thus a huge loss in performance when data is stored in IPFS's block format because this format does not
take advantage of the optimizations provided by S3 back-ends in their standard usage scenarios. Instead, it
requires storing and retrieving thousands of small S3 objects even for very simple operations such
as retrieving a file or listing a directory, incurring a fixed overhead each time.
This problem is compounded by the design of the IPFS data exchange protocol,
in which nodes may request any data block from any other node in the network
in their quest to answer a user's request (such as retrieving a file).
When a node is missing a file or a directory it wants to read, it has to make as many requests to other nodes
as there are IPFS blocks in the object to be read.
On the receiving end, this means that any fully-fledged IPFS node has to answer large numbers
of requests for blocks required by users everywhere on the network, which is what we observed in our experiment above.
We were however surprised to observe that many requests coming from the IPFS network were for blocks
which our node didn't have a copy of: this means that somewhere in the IPFS protocol, an overly optimistic
assumption is made on where data could be found in the network, and this ends up translating into many requests
between nodes that return negative results.
When IPFS blocks are stored on a local filesystem, answering these requests fast might be possible.
However, when using an S3 server as a storage back-end, this becomes prohibitively costly.
If one wanted to design a distributed storage system for IPFS data blocks, they would probably need to start at a lower level.
Garage itself makes use of a block storage mechanism that allows small-sized blocks to be stored on a cluster and accessed
rapidly by nodes that need to access them.
However passing through the entire abstraction that provides an S3 API is wasteful and redundant, as this API is
designed to provide advanced functionality such as mutating objects, associating metadata with objects, listing objects, etc.
Plugging the IPFS daemon directly into a lower-level distributed block storage like
Garage's might yield way better results by bypassing all of this complexity.
## Conclusion
Running IPFS over an S3 storage backend does not quite work out of the box in terms of performance.
Having identified that the main problem is linked to the DHT service,
we proposed some improvements (disabling the DHT server, keeping an in-memory index of the blocks, and using the S3 back-end only for user data).
From an IPFS design perspective, it seems however that the numerous small blocks handled by the protocol
do not map trivially to efficient use of the S3 API, and thus could be a limiting factor to any optimization work.
As part of my testing journey, I also stumbled upon some posts about performance issues on IPFS (eg. [#6283](https://github.com/ipfs/go-ipfs/issues/6283))
that are not linked with the S3 connector. I might be negatively influenced by my failure to connect IPFS with S3,
but at this point, I'm tempted to think that IPFS is intrinsically resource-intensive from a block activity perspective.
On our side at Deuxfleurs, we will continue our investigations towards more *minimalistic* software.
This choice makes sense for us, as we want to reduce the ecological impact of our services
by deploying fewer servers that use less energy and are replaced less frequently.
After discussing with Peergos maintainers, we identified that it is possible to run Peergos without IPFS.
With some optimizations on the block size, we envision great synergies between Garage and Peergos that could lead to
an efficient and lightweight end-to-end encrypted "cloud storage" platform.
*If you happen to be working on this, please inform us!*
*We are also aware of the existence of many other software projects for file sharing
such as Nextcloud, Owncloud, Owncloud Infinite Scale, Seafile, Filestash, Pydio, SOLID, Remote Storage, etc.
Many of these could be connected to an S3 back-end such as Garage.
We might even try some of them in future blog posts, so stay tuned!*

7 binary files not shown (new images). Sizes: 221 KiB, 110 KiB, 295 KiB, 232 KiB, 144 KiB, 194 KiB, 177 KiB.


@ -0,0 +1,513 @@
+++
title="Confronting theoretical design with observed performances"
date=2022-09-26
+++
*During the past years, we have thought a lot about possible design decisions and
their theoretical trade-offs for Garage. In particular, we pondered the impacts
of data structures, networking methods, and scheduling algorithms.
Garage worked well enough for our production
cluster at Deuxfleurs, but we also knew that people started to experience some
unexpected behaviors, which motivated us to start a round of benchmarks and performance
measurements to see how Garage behaves compared to our expectations.
This post presents some of our first results, which cover
3 aspects of performance: efficient I/O, "myriads of objects", and resiliency,
reflecting the high-level properties we are seeking.*
<!-- more -->
---
## ⚠️ Disclaimer
The results presented in this blog post must be taken with a (critical) grain of salt due to some
limitations that are inherent to any benchmarking endeavor. We try to reference them as
exhaustively as possible here, but other limitations might exist.
Most of our tests were made on _simulated_ networks, which by definition cannot represent all the
diversity of _real_ networks (dynamic drop, jitter, latency, all of which could be
correlated with throughput or any other external event). We also limited
ourselves to very small workloads that are not representative of a production
cluster. Furthermore, we only benchmarked some very specific aspects of Garage:
our results are not an evaluation of the performance of Garage as a whole.
For some benchmarks, we used Minio as a reference. It must be noted that we did
not try to optimize its configuration as we have done for Garage, and more
generally, we have significantly less knowledge of Minio's internals compared to Garage, which could lead
to underrated performance measurements for Minio. It must also be noted that
Garage and Minio are systems with different feature sets. For instance, Minio supports
erasure coding for higher data density and Garage doesn't, Minio implements
way more S3 endpoints than Garage, etc. Such features necessarily have a cost
that you must keep in mind when reading the plots we will present. You should consider
Minio's results as a way to contextualize Garage's numbers, showing that our improvements
are not simply artificial in light of existing object storage implementations.
The impact of the testing environment is also not evaluated (kernel patches,
configuration, parameters, filesystem, hardware configuration, etc.). Some of
these parameters could favor one configuration or software product over another.
In particular, it must be noted that most of the tests were done on a
consumer-grade PC with only an SSD, which is different from most
production setups. Finally, our results are also provided without statistical
tests to validate their significance, and might have insufficient ground
to be claimed as reliable.
When reading this post, please keep in mind that **we are not making any
business or technical recommendations here, and this is not a scientific paper
either**; we only share bits of our development process as honestly as
possible.
Make your own tests if you need to make a decision,
remember to read [benchmarking crimes](https://gernot-heiser.org/benchmarking-crimes.html),
and remain supportive and caring with your peers ;)
## About our testing environment
We made a first batch of tests on
[Grid5000](https://www.grid5000.fr/w/Grid5000:Home), a large-scale and flexible
testbed for experiment-driven research in all areas of computer science,
which has an
[open access program](https://www.grid5000.fr/w/Grid5000:Open-Access).
During our tests, we used part of the following clusters:
[nova](https://www.grid5000.fr/w/Lyon:Hardware#nova),
[paravance](https://www.grid5000.fr/w/Rennes:Hardware#paravance), and
[econome](https://www.grid5000.fr/w/Nantes:Hardware#econome), to make a
geo-distributed topology. We used the Grid5000 testbed only during our
preliminary tests to identify issues when running Garage on many powerful
servers. We then reproduced these issues in a controlled environment
outside of Grid5000, so don't be
surprised if Grid5000 is not always mentioned on our plots.
To reproduce some environments locally, we have a small set of Python scripts
called [`mknet`](https://git.deuxfleurs.fr/Deuxfleurs/mknet) tailored to our
needs[^ref1]. Most of the following tests were run locally with `mknet` on a
single computer: a Dell Inspiron 27" 7775 AIO, with a Ryzen 5 1400, 16GB of
RAM and a 512GB SSD. In terms of software, NixOS 22.05 with the 5.15.50 kernel is
used with an ext4 encrypted filesystem. The `vm.dirty_background_ratio` and
`vm.dirty_ratio` have been reduced to `2` and `1` respectively: with default
values, the system tends to freeze under heavy I/O load.
## Efficient I/O
The main purpose of an object storage system is to store and retrieve objects
across the network, and the faster these two functions can be accomplished,
the more efficient the system as a whole will be. For this analysis, we focus on
2 aspects of performance. First, since many applications can start processing a file
before receiving it completely, we will evaluate the time-to-first-byte (TTFB)
on `GetObject` requests, i.e. the duration between the moment a request is sent
and the moment where the first bytes of the returned object are received by the client.
Second, we will evaluate generic throughput, to understand how well
Garage can leverage the underlying machine's performance.
**Time-to-First-Byte** - One specificity of Garage is that we implemented S3
web endpoints, with the idea to make it a platform of choice to publish
static websites. When publishing a website, TTFB can be directly observed
by the end user, as it will impact the perceived reactivity of the page being loaded.
Up to version 0.7.3, time-to-first-byte on Garage used to be relatively high.
This can be explained by the fact that Garage was not able to handle data internally
at a smaller granularity level than entire data blocks, which are up to 1MB chunks of a given object
(a size which [can be configured](https://garagehq.deuxfleurs.fr/documentation/reference-manual/configuration/#block-size)).
Let us take the example of a 4.5MB object, which Garage will split by default into four 1MB blocks and one 0.5MB block.
With the old design, when you sent a `GET`
request, the first block had to be _fully_ retrieved by the gateway node from the
storage node before it started sending any data to the client.
With Garage v0.8, we added data streaming logic that allows the gateway
to send the beginning of a block without having to wait for the full block to be received from
the storage node. We can visually represent the difference as follows:
<center>
<img src="schema-streaming.png" alt="A schema depicting how streaming improves the delivery of a block" />
</center>
As our default block size is only 1MB, the difference should be marginal on
fast networks: it takes only 8ms to transfer 1MB on a 1Gbps network,
adding at most 8ms of latency to a `GetObject` request (assuming no other
data transfer is happening in parallel). However,
on a very slow network, or a very congested link with many parallel requests
handled, the impact can be much more important: on a 5Mbps network, it takes at least 1.6 seconds
to transfer our 1MB block, and streaming will heavily improve user experience.
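As a side note, TTFB is easy to measure yourself; a minimal Python probe could look like this (the URL is a placeholder for an object served by a Garage web endpoint):

```
import time
import requests

url = "http://localhost:3902/some-object"   # placeholder: a Garage web endpoint
start = time.monotonic()
with requests.get(url, stream=True) as r:
    next(r.iter_content(chunk_size=1))       # block until the first byte arrives
    ttfb = time.monotonic() - start
print(f"TTFB: {ttfb * 1000:.1f} ms")
```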
We wanted to see if this theory holds in practice: we simulated a low latency
but slow network using `mknet` and did some requests with block streaming (Garage v0.8 beta) and
without (Garage v0.7.3). We also added Minio as a reference. To
benchmark this behavior, we wrote a small test named
[s3ttfb](https://git.deuxfleurs.fr/Deuxfleurs/mknet/src/branch/main/benchmarks/s3ttfb),
whose results are shown on the following figure:
![Plot showing the TTFB observed on Garage v0.8, v0.7 and Minio](ttfb.png)
Garage v0.7, which does not support block streaming, gives us a TTFB between 1.6s
and 2s, which matches the time required to transfer the full block, as calculated above.
On the other side of the plot, we can see Garage v0.8 with a very low TTFB thanks to the
streaming feature (the lowest value is 43ms). Minio sits between the two
Garage versions: we suppose that it does some form of batching, but smaller
than our initial 1MB default.
**Throughput** - As soon as we publicly released Garage, people started
benchmarking it, comparing its performance to writing directly to the
filesystem, and observed that Garage was slower (e.g.
[#288](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/288)). To improve the
situation, we did some optimizations, such as putting costly processing like hashing on a dedicated thread,
and many others
([#342](https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/342),
[#343](https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/343)), which led us to
version 0.8 "Beta 1". We also noticed that some of the logic we wrote
to better control resource usage
and detect errors, including semaphores and timeouts, was artificially limiting
performances. In another iteration, we made this logic less restrictive at the
cost of higher resource consumption under load
([#387](https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/387)), resulting in
version 0.8 "Beta 2". Finally, we currently do multiple `fsync` calls each time we
write a block. We know that this is expensive and did a test build without any
`fsync` call ([see the
commit](https://git.deuxfleurs.fr/Deuxfleurs/garage/commit/432131f5b8c2aad113df3b295072a00756da47e7))
that will not be merged, only to assess the impact of `fsync`. We refer to it
as `no-fsync` in the following plot.
*A note about `fsync`: for performance reasons, operating systems often do not
write directly to the disk when a process creates or updates a file in your
filesystem. Instead, the write is kept in memory, and flushed later in a batch
with other writes. If a power loss occurs before the OS has time to flush
data to disk, some writes will be lost. To ensure that a write is effectively
written to disk, the
[`fsync(2)`](https://man7.org/linux/man-pages/man2/fsync.2.html) system call must be used,
which effectively blocks until the file or directory on which it is called has been flushed from volatile
memory to the persistent storage device. Additionally, the exact semantic of
`fsync` [differs from one OS to another](https://mjtsai.com/blog/2022/02/17/apple-ssd-benchmarks-and-f_fullsync/)
and, even on battle-tested software like Postgres, it was
["done wrong for 20 years"](https://archive.fosdem.org/2019/schedule/event/postgresql_fsync/).
Note that on Garage, we are still working on our `fsync` policy and thus, for
now, you should expect limited data durability in case of power loss, as we are
aware of some inconsistencies on this point (which we describe in the following
and plan to solve).*
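As a concrete illustration of what the `fsync(2)` call changes, here is a minimal Python sketch of a "durable" write; without the `os.fsync` line, the data may still sit in the kernel page cache when the function returns.

```
import os

def durable_write(path: str, data: bytes) -> None:
    """Return only once the data has been handed to the storage device,
    not just to the kernel page cache."""
    with open(path, "wb") as f:
        f.write(data)
        f.flush()               # flush Python's userspace buffer to the kernel
        os.fsync(f.fileno())    # fsync(2): flush the kernel page cache to disk

durable_write("block.bin", b"some block data")
```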
To assess performance improvements, we used the benchmark tool
[minio/warp](https://github.com/minio/warp) in a non-standard configuration,
adapted for small-scale tests, and we kept only the aggregated result named
"cluster total". The goal of this experiment is to get an idea of the cluster
performance with a standardized and mixed workload.
![Plot showing IO performances of Garage configurations and Minio](io.png)
Minio, our reference point, gives us the best performances in this test.
Looking at Garage, we observe that each improvement we made had a visible
impact on performance. We also note that we still have room for improvement in
terms of performance compared to Minio: additional benchmarks, tests, and
monitoring could help us better understand the remaining gap.
## A myriad of objects
Object storage systems do not handle a single object but huge numbers of them:
Amazon claims to handle trillions of objects on their platform, and Red Hat
touts Ceph as being able to handle 10 billion objects. All these
objects must be tracked efficiently in the system to be fetched, listed,
removed, etc. In Garage, we use a "metadata engine" component to track them.
For this analysis, we compare different metadata engines in Garage and see how
well the best one scales to a million objects.
**Testing metadata engines** - With Garage, we chose not to store metadata
directly on the filesystem, as Minio does for example, but in a specialized on-disk
B-tree data structure; in other words, in an embedded database engine. Until now,
the only supported option was [sled](https://sled.rs/), but we started having
serious issues with it - and we were not alone
([#284](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/284)). With Garage
v0.8, we introduced an abstraction layer over the features we expect from our
database, allowing us to switch from one metadata back-end to another without touching
the rest of our codebase. We added two additional back-ends: LMDB
(through [heed](https://github.com/meilisearch/heed)) and SQLite
(using [Rusqlite](https://github.com/rusqlite/rusqlite)). **Keep in mind that they
are both experimental: contrarily to sled, we have yet to run them in production
for a significant time.**
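The idea of this abstraction layer can be illustrated with a simplified Python sketch (Garage's real abstraction is written in Rust and is richer than this): any back-end implementing a tiny key-value interface can be plugged in.

```
import abc
import sqlite3
from typing import Optional

class KvBackend(abc.ABC):
    """Minimal key-value interface a metadata engine must provide."""
    @abc.abstractmethod
    def put(self, key: bytes, value: bytes) -> None: ...
    @abc.abstractmethod
    def get(self, key: bytes) -> Optional[bytes]: ...

class SqliteBackend(KvBackend):
    def __init__(self, path: str):
        self.db = sqlite3.connect(path)
        self.db.execute("CREATE TABLE IF NOT EXISTS kv (k BLOB PRIMARY KEY, v BLOB)")
        # self.db.execute("PRAGMA synchronous = OFF")  # would trade durability for speed

    def put(self, key: bytes, value: bytes) -> None:
        self.db.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", (key, value))
        self.db.commit()

    def get(self, key: bytes) -> Optional[bytes]:
        row = self.db.execute("SELECT v FROM kv WHERE k = ?", (key,)).fetchone()
        return row[0] if row else None

class MemoryBackend(KvBackend):
    def __init__(self):
        self.data = {}

    def put(self, key: bytes, value: bytes) -> None:
        self.data[key] = value

    def get(self, key: bytes) -> Optional[bytes]:
        return self.data.get(key)

for backend in (SqliteBackend(":memory:"), MemoryBackend()):
    backend.put(b"bucket/object", b"metadata")
    assert backend.get(b"bucket/object") == b"metadata"
```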
Similarly to the impact of `fsync` on block writing, each database engine we use
has its own `fsync` policy. Sled flushes its writes every 2 seconds by
default (this is
[configurable](https://garagehq.deuxfleurs.fr/documentation/reference-manual/configuration/#sled-flush-every-ms)).
LMDB defaults to an `fsync` on each write, which in early tests led to
abysmal performance. We thus added 2 flags,
[MDB\_NOSYNC](http://www.lmdb.tech/doc/group__mdb__env.html#ga5791dd1adb09123f82dd1f331209e12e)
and
[MDB\_NOMETASYNC](http://www.lmdb.tech/doc/group__mdb__env.html#ga5021c4e96ffe9f383f5b8ab2af8e4b16),
to deactivate `fsync` entirely. On SQLite, it is also possible to deactivate `fsync` with
`pragma synchronous = off`, but we have not started any optimization work on it yet:
our SQLite implementation currently still calls `fsync` for all write operations. Additionally, we are
using these engines through Rust bindings that do not support async Rust,
with which Garage is built, which has an impact on performance as well.
**Our comparison will therefore not reflect the raw performances of
these database engines, but instead, our integration choices.**
Still, we think it makes sense to evaluate our implementations in their current
state in Garage. We designed a benchmark that is intensive on the metadata part
of the software, i.e. handling large numbers of tiny files. We chose again
`minio/warp` as a benchmark tool, but we
configured it with the smallest possible object size it supported, 256
bytes, to put pressure on the metadata engine. We evaluated sled twice:
with its default configuration, and with a configuration where we set a flush
interval of 10 minutes (longer than the test) to disable `fsync`.
*Note that S3 has not been designed for workloads that store huge numbers of small objects;
a regular database, like Cassandra, would be more appropriate. This test has
only been designed to stress our metadata engine and is not indicative of
real-world performances.*
![Plot of our metadata engines comparison with Warp](db_engine.png)
Unsurprisingly, we observe abysmal performance with SQLite, as it is the engine we have not yet worked on
and that still does an `fsync` for each write. Garage with the `fsync`-disabled LMDB backend performs twice as well as
with sled in its default configuration, and 60% better than the "no `fsync`" sled version, in our
benchmark. Furthermore, and not depicted on these plots, LMDB uses way less
disk storage and RAM; we would like to quantify that in the future. As we are
only at the very beginning of our work on metadata engines, it is hard to draw
strong conclusions. Still, we can say that SQLite is not ready for production
workloads, and that LMDB looks very promising both in terms of performance and resource
usage, and is a very good candidate to become Garage's default metadata engine in
future releases, once we figure out the proper `fsync` tuning. In the future, we will need to define a data policy for Garage to help us
arbitrate between performance and durability.
*To `fsync` or not to `fsync`? Performance is nothing without reliability, so we
need to better assess the impact of possibly losing a write after it has been validated.
Because Garage is a distributed system, even if a node loses its write due to a
power loss, it will fetch it back from the 2 other nodes that store it. But rare
situations can occur where 1 node is down and the 2 others validate the write and then
lose power before having time to flush to disk. What is our policy in this case? For storage durability,
we are already supposing that we never lose the storage of more than 2 nodes,
so should we also make the hypothesis that we won't lose power on more than 2 nodes at the same
time? What should we do about people hosting all of their nodes at the same
place without an uninterruptible power supply (UPS)? Historically, it seems that Minio developers also accepted
some compromises on this side
([#3536](https://github.com/minio/minio/issues/3536),
[HN Discussion](https://news.ycombinator.com/item?id=28135533)). Now, they seem to
use a combination of `O_DSYNC` and `fdatasync(3p)` - a derivative that ensures
only data and not metadata is persisted on disk - in combination with
`O_DIRECT` for direct I/O
([discussion](https://github.com/minio/minio/discussions/14339#discussioncomment-2200274),
[example in Minio source](https://github.com/minio/minio/blob/master/cmd/xl-storage.go#L1928-L1932)).*
**Storing a million objects** - Object storage systems are designed not only
for data durability and availability but also for scalability, so naturally,
some people asked us how scalable Garage is. While giving a definitive answer to this
question is out of the scope of this study, we wanted to be sure that our
metadata engine would be able to scale to a million objects. To put this
target in context, it remains small compared to other industrial solutions:
Ceph claims to scale up to [10 billion objects](https://www.redhat.com/en/resources/data-solutions-overview),
which is 4 orders of magnitude more than our current target. Of course, their
benchmarking setup has nothing in common with ours, and their tests are way
more exhaustive.
We wrote our own benchmarking tool for this test,
[s3billion](https://git.deuxfleurs.fr/Deuxfleurs/mknet/src/branch/main/benchmarks/s3billion)[^ref2].
The benchmark procedure consists in
concurrently sending a defined number of tiny objects (8192 objects of 16
bytes by default) and measuring the wall clock time to the last object upload. This step is then repeated a given
number of times (128 by default) to effectively create a target number of
objects on the cluster (1M by default). On our local setup with 3
nodes, both Minio and Garage with LMDB were able to achieve this target. In the
following plot, we show how much time it took Garage and Minio to handle
each batch.
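For readers who want a feel for the procedure, a stripped-down version of such a batch benchmark can be written in a few lines of Python with `boto3` (endpoint and credentials are placeholders; the actual tool we used is `s3billion`, linked above):

```
import time
import boto3
from concurrent.futures import ThreadPoolExecutor

s3 = boto3.client("s3", endpoint_url="http://localhost:3900",
                  aws_access_key_id="GKyourkeyid", aws_secret_access_key="yoursecret")
BUCKET, BATCHES, BATCH_SIZE = "bench", 128, 8192

def put_one(i: int) -> None:
    # 16-byte objects: we only want to stress the metadata path.
    s3.put_object(Bucket=BUCKET, Key=f"obj-{i:09d}", Body=b"0123456789abcdef")

with ThreadPoolExecutor(max_workers=64) as pool:
    for batch in range(BATCHES):
        start = time.monotonic()
        list(pool.map(put_one, range(batch * BATCH_SIZE, (batch + 1) * BATCH_SIZE)))
        print(f"batch {batch}: {time.monotonic() - start:.1f}s")
```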
Before looking at the plot, **you must keep in mind some important points regarding
the internals of both Minio and Garage**.
Minio has no metadata engine: it stores its objects directly on the filesystem.
Sending 1 million objects to Minio results in creating one million inodes on
the storage server in our current setup, so the performance of the filesystem
probably has a substantial impact on the observed results.
In our specific setup, we know that the
filesystem we used is not at all suited to Minio (encryption layer, fixed
number of inodes, etc.). Additionally, we mentioned earlier that we deactivated
`fsync` for our metadata engine in Garage, whereas Minio has some `fsync` logic here slowing down the
creation of objects. Finally, object storage is designed for big objects, for which the
costs measured here are negligible. In the end, again, we use Minio only as a
reference point to understand what performance budget we have for each part of our
software.
Conversely, Garage has an optimization for small objects. Below 3KB, a separate file is
not created on the filesystem but the object is directly stored inline in the
metadata engine. In the future, we plan to evaluate how Garage behaves at scale with
objects above 3KB, which we expect to be way closer to Minio, as it will have to create
at least one inode per object. For now, we limit ourselves to evaluating our
metadata engine and focus only on 16-byte objects.
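To picture what this optimization changes, here is an illustrative sketch of the <3KB decision; this is not Garage's real data model, just the shape of the behavior described above:
```rust
// Illustrative only: not Garage's real data model, just the shape of the
// small-object optimization described above.
const INLINE_THRESHOLD: usize = 3 * 1024;

enum StoredObject {
    Inline(Vec<u8>),     // tiny object: kept directly in the metadata engine
    Blocks(Vec<String>), // larger object: references to blocks (and inodes) on storage nodes
}

fn plan_storage(payload: &[u8]) -> StoredObject {
    if payload.len() <= INLINE_THRESHOLD {
        StoredObject::Inline(payload.to_vec())
    } else {
        // in reality the payload is chunked and each block is content-addressed;
        // the point is simply that at least one separate file gets created
        StoredObject::Blocks(vec!["block-0".to_string()])
    }
}

fn main() {
    let tiny = plan_storage(&[0u8; 16]);
    let big = plan_storage(&vec![0u8; 10 * 1024]);
    println!(
        "16B object inlined: {}, 10KB object inlined: {}",
        matches!(tiny, StoredObject::Inline(_)),
        matches!(big, StoredObject::Inline(_)),
    );
}
```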
![Showing the time to send 128 batches of 8192 objects for Minio and Garage](1million-both.png)
It appears that the performances of our metadata engine are acceptable, as we
have a comfortable margin compared to Minio (Minio is between 3 and 4 times
slower per batch). We also note that, past the 200k objects mark, Minio's
time to complete a batch of inserts is constant, while on Garage it still increases over the observed range.
It would be interesting to know whether Garage's batch completion time would cross Minio's
for a very large number of objects. If we reason per object, both Minio's and
Garage's performances remain very good: it takes respectively around 20ms and
5ms to create an object. In a real-world scenario, at 100 Mbps, the upload of a 10MB file takes
800ms, and goes up to 8 seconds for a 100MB file: in both cases
handling the object metadata would be only a fraction of the upload time. The
only cases where a difference would be noticeable would be when uploading a lot of very
small files at once, which again would be an unusual usage of the S3 API.
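As a quick sanity check of these orders of magnitude, the following snippet simply redoes the arithmetic quoted above:
```rust
// Quick sanity check of the transfer times quoted above.
fn transfer_ms(size_mb: f64, link_mbps: f64) -> f64 {
    size_mb * 8.0 / link_mbps * 1000.0
}

fn main() {
    println!("10MB at 100 Mbps:  {} ms", transfer_ms(10.0, 100.0)); // 800 ms
    println!("100MB at 100 Mbps: {} ms", transfer_ms(100.0, 100.0)); // 8000 ms
    // ...versus roughly 5 ms (Garage) or 20 ms (Minio) to create the object's metadata
}
```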
Let us now focus on Garage's metrics only to better see its specific behavior:
![Showing the time to send 128 batches of 8192 objects for Garage only](1million.png)
Two effects are now more visible: (1) batch completion time increases with the
number of objects in the bucket, and (2) measurements are scattered, at least
more than for Minio. We expected this batch completion time increase to be logarithmic,
but we don't have enough data points to confidently conclude that this is the case: additional
measurements are needed. Concerning the observed instability, it could
be a symptom of what we saw with some other experiments on this setup,
which sometimes freezes under heavy I/O load. Such freezes could lead to
request timeouts and failures. If this occurs on our testing computer, it might
occur on other servers as well: it would be interesting to better understand this
issue, document how to avoid it, and potentially change how we handle I/O
internally in Garage. But still, this was a very heavy test that will probably not be encountered in
many setups: we were adding 273 objects per second for 30 minutes straight!
To conclude this part, Garage can ingest 1 million tiny objects while remaining
usable on our local setup. To put this result in perspective, our production
cluster at [deuxfleurs.fr](https://deuxfleurs.fr) smoothly manages a bucket with
116k objects. This bucket contains real-world production data: it is used by our Matrix instance
to store people's media files (profile pictures, shared pictures, videos,
audio files, documents...). Thanks to this benchmark, we have identified two points
of vigilance: the increase of batch insert time with the number of objects already
stored in the cluster (over the observed range), and the volatility of our measurements, which
could be a symptom of our system freezing under the load. Despite these two
points, we are confident that Garage could scale way above 1M objects, although
that remains to be proven.
## In an unpredictable world, stay resilient
Supporting a variety of real-world networks and computers, especially ones that
were not designed for software-defined storage or even for server purposes, is the
core value proposition of Garage. For example, our production cluster is
hosted [on refurbished Lenovo Thinkcentre Tiny desktop computers](https://guide.deuxfleurs.fr/img/serv_neptune.jpg)
behind consumer-grade fiber links across France and Belgium (if you are reading this,
congratulations, you fetched this webpage from it!). That's why we are very
careful that our internal protocol (referred to as "RPC protocol" in our documentation)
remains as lightweight as possible. For this analysis, we quantify how network
latency and number of nodes in the cluster impact the duration of the most
important kinds of S3 requests.
**Latency amplification** - With the kind of networks we use (consumer-grade
fiber links across the EU), the observed latency between nodes is in the 50ms range.
When latency is not negligible, you will observe that request completion
time becomes a multiple of the network latency. That's to be expected: in many cases, the
node of the cluster you are contacting cannot directly answer your request, and
has to reach other nodes of the cluster to get the data. Each
of these sequential remote procedure calls - or RPCs - adds to the final S3 request duration, which can quickly become
expensive. This ratio between request duration and network latency is what we
refer to as *latency amplification*.
For example, on Garage, a `GetObject` request does two sequential calls: first,
it fetches the descriptor of the requested object from the metadata engine, which contains a reference
to the first block of data, and only then, in a second step, can it start retrieving data blocks
from storage nodes. We can therefore expect that the
request duration of a small `GetObject` request will be close to twice the
network latency.
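Put differently, under the simplifying assumption that each sequential RPC costs roughly one network round trip, the duration floor of a request is the number of sequential RPCs multiplied by the latency; a small sketch:
```rust
// A back-of-the-envelope model of latency amplification, assuming each
// sequential RPC costs roughly one network round trip (RTT).
fn duration_floor_ms(sequential_rpcs: u32, rtt_ms: f64) -> f64 {
    sequential_rpcs as f64 * rtt_ms
}

fn main() {
    // A small GetObject on Garage: 1 metadata fetch + 1 block fetch, on 50ms links.
    println!("{} ms", duration_floor_ms(2, 50.0)); // about 100 ms, before any processing
}
```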
We tested the latency amplification theory with another benchmark of our own named
[s3lat](https://git.deuxfleurs.fr/Deuxfleurs/mknet/src/branch/main/benchmarks/s3lat)
which does a single request at a time on an endpoint and measures the response
time. As we are not interested in bandwidth but latency, all our requests
involving objects are made on tiny files of around 16 bytes. Our benchmark
tests 5 standard endpoints of the S3 API: ListBuckets, ListObjects, PutObject, GetObject and
RemoveObject. Here are the results:
![Latency amplification](amplification.png)
As Garage has been optimized for this use case from the very beginning, we don't see
any significant evolution from one version to another (Garage v0.7.3 and Garage
v0.8.0 Beta 1 here). Compared to Minio, these values are either similar (for
ListObjects and ListBuckets) or significantly better (for GetObject, PutObject, and
RemoveObject). This is easily explained by the fact that Minio has not been designed for
environments with high latency. Instead, it is expected to run on clusters built
in a single data center. In a multi-DC setup, different clusters could then possibly be interconnected with their asynchronous
[bucket replication](https://min.io/docs/minio/linux/administration/bucket-replication.html?ref=docs-redirect)
feature.
*Minio also has a [multi-site active-active replication system](https://blog.min.io/minio-multi-site-active-active-replication/)
but it is even more sensitive to latency: "Multi-site replication has increased
latency sensitivity, as Minio does not consider an object as replicated until
it has synchronized to all configured remote targets. Replication latency is
therefore dictated by the slowest link in the replication mesh."*
**A cluster with many nodes** - Whether you already have many compute nodes
with unused storage, need to store a lot of data, or are experimenting with unusual
system architectures, you might be interested in deploying over a hundred Garage nodes.
However, in some distributed systems, the number of nodes in the cluster will
have an impact on performance. Theoretically, our protocol, which is inspired by distributed
hash tables (DHT), should scale fairly well, but until now, we never took the time to test it
with a hundred nodes or more.
This test was run directly on Grid5000 with 6 physical servers spread
in 3 locations in France: Lyon, Rennes, and Nantes. On each server, we ran up
to 65 instances of Garage simultaneously, for a total of 390 nodes. The
network between physical servers is the dedicated network provided by
the Grid5000 community. Nodes on the same physical machine communicate directly
through the Linux network stack without any limitation. We are aware that this is a
weakness of this test, but we still think it is relevant because, at
each step of the test, 83% (5/6) of each Garage instance's connections
are made over a real network. To measure performances for each cluster size, we used
[s3lat](https://git.deuxfleurs.fr/Deuxfleurs/mknet/src/branch/main/benchmarks/s3lat)
again:
![Impact of response time with bigger clusters](complexity.png)
Up to 250 nodes, we observed response times that remained constant. After this threshold,
results become very noisy. By looking at the servers' resource usage, we saw
that their load started to become non-negligible: it seems that we are not
hitting a limit on the protocol side, but have simply exhausted the resources
of our testing nodes. In the future, we would like to run this experiment
again, but on many more physical nodes, to confirm our hypothesis. For now, we
are confident that a Garage cluster with 100+ nodes should work.
## Conclusion and Future work
During this work, we identified some sensitive points on Garage,
on which we will have to continue working: our data durability targets and interactions with the
filesystem (`O_DSYNC`, `fsync`, `O_DIRECT`, etc.) are not yet homogeneous across
our components; our new metadata engines (LMDB, SQLite) still need some testing
and tuning; and we know that raw I/O performance (GetObject and PutObject for large objects) still has some room for
improvement.
At the same time, Garage has never been in better shape: its next version (version 0.8) will
see drastic improvements in terms of performance and reliability. We are
confident that Garage is already able to cover a wide range of deployment needs, up
to over a hundred nodes and millions of objects.
In the future, on the performance aspect, we would like to evaluate the impact
of introducing an SRPT scheduler
([#361](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/361)), define a data
durability policy and implement it, make a deeper and larger review of the
state of the art (Minio, Ceph, Swift, OpenIO, Riak CS, SeaweedFS, etc.) to
learn from them and, lastly, benchmark Garage at scale with possibly multiple
terabytes of data and billions of objects on long-lasting experiments.
In the meantime, stay tuned: we have released
[a first release candidate for Garage v0.8](https://git.deuxfleurs.fr/Deuxfleurs/garage/releases/tag/v0.8.0-rc1),
and are already working on several features for the next version.
For instance, we are working on a new layout that will have enhanced optimality properties,
as well as a theoretical proof of correctness
([#296](https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/296)). We are also
working on a Python SDK for Garage's administration API
([#379](https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/379)), and we will
soon officially introduce a new API (as a technical preview) named K2V
([see K2V on our doc for a primer](https://garagehq.deuxfleurs.fr/documentation/reference-manual/k2v/)).
## Notes
[^ref1]: Yes, we are aware of [Jepsen](https://github.com/jepsen-io/jepsen)'s
existence. Jepsen is far more complex than our set of scripts, but
it is also way more versatile.
[^ref2]: The program name contains the word "billion", although we only tested Garage
up to 1 million objects: this is not a typo, we were just a little bit too
enthusiastic when we wrote it ;)
<style>
.footnote-definition p { display: inline; }
</style>

Binary file not shown (image added, 189 KiB).
Binary file not shown (image added, 49 KiB).
Binary file not shown (image added, 128 KiB).
Binary file not shown (image added, 28 KiB).
Binary file not shown (image added, 117 KiB).
View file

@ -0,0 +1,143 @@
+++
title="Garage v0.7: Kubernetes and OpenTelemetry"
date=2022-04-04
+++
*We just published Garage v0.7, our second public beta release. In this post, we do a quick tour of its 2 new features: Kubernetes integration and OpenTelemetry support.*
<!-- more -->
---
Two months ago, we were impressed by the success of our open beta launch at FOSDEM and on Hacker News: [our initial post](https://garagehq.deuxfleurs.fr/blog/2022-introducing-garage/) led to more than 40k views in 10 days, peaking at 100 views/minute, and all requests were served by Garage, without even using a caching frontend!
Since this event, we continued to improve Garage, and — 2 months after the initial release — we are happy to announce version 0.7.0.
But first, we would like to thank the contributors that made this new release possible: Alex, Jill, Max Audron, Maximilien, Quentin, Rune Henrisken, Steam, and trinity-1686a.
This is also our first time welcoming contributors external to the core team, and as we wish for Garage to be a community-driven project, we encourage it!
You can get this release using our binaries or the package provided by your distribution.
We ship [statically compiled binaries](https://garagehq.deuxfleurs.fr/download/) for most common Linux architectures (amd64, i386, aarch64 and armv6) and associated [Docker containers](https://hub.docker.com/u/dxflrs).
Garage now is also packaged by third parties on some OS/distributions. We are currently aware of [FreeBSD](https://cgit.freebsd.org/ports/tree/www/garage/Makefile) and [AUR for Arch Linux](https://aur.archlinux.org/packages/garage).
Feel free to [reach out to us](mailto:garagehq@deuxfleurs.fr) if you are packaging (or planning to package) Garage; we welcome maintainers and will upstream specific patches if that can help. If you already did package Garage, please inform us and we'll add it to the documentation.
Speaking about the changes of this new version, it obviously includes many bug fixes.
We listed them in our [changelogs](https://git.deuxfleurs.fr/Deuxfleurs/garage/releases), so take a look, we might have fixed some issues you were having!
Besides bug fixes, there are two new major features in this release: better integration with Kubernetes, and support for observability via OpenTelemetry.
## Kubernetes integration
Before Garage v0.7.0, you had to deploy a Consul cluster or spawn a "coordinating" pod to deploy Garage on [Kubernetes](https://kubernetes.io) (K8S).
In this new version, Garage integrates a method to discover other peers by using Kubernetes [Custom Resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) (CR) to simplify cluster discovery.
CR discovery can be quickly enabled in Garage by configuring the namespace to look in (`kubernetes_namespace`) and the name of the desired service (`kubernetes_service_name`) in your Garage configuration file:
```toml
kubernetes_namespace = "default"
kubernetes_service_name = "garage-daemon"
```
Custom Resources must be defined *a priori* with [Custom Resource Definition](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) (CRD).
If the CRD does not exist, Garage will create it for you. Automatic CRD creation is enabled by default, but it requires giving additional permissions to Garage to work.
If you prefer strictly controlling access to your K8S cluster, you can create the resource manually and prevent Garage from automatically creating it:
```toml
kubernetes_skip_crd = true
```
If you want to try Garage on K8S, we currently only provide some basic [example files](https://git.deuxfleurs.fr/Deuxfleurs/garage/src/commit/7e1ac51b580afa8e900206e7cc49791ed0a00d94/script/k8s). These files register a [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/), a [ClusterRoleBinding](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding), and a [StatefulSet](https://kubernetes.io/fr/docs/concepts/workloads/controllers/statefulset/) with [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/).
Once these files are deployed, you will be able to interact with Garage as follows:
```bash
kubectl exec -it garage-0 --container garage -- /garage status
# ==== HEALTHY NODES ====
# ID Hostname Address Tags Zone Capacity
# e628.. garage-0 172.17.0.5:3901 NO ROLE ASSIGNED
# 570f.. garage-2 172.17.0.7:3901 NO ROLE ASSIGNED
# e199.. garage-1 172.17.0.6:3901 NO ROLE ASSIGNED
```
You can then follow the [regular documentation](https://garagehq.deuxfleurs.fr/documentation/cookbook/real-world/#creating-a-cluster-layout) to complete the configuration of your cluster.
If you target a production deployment, you should avoid binding admin rights to your cluster to create Garage's CRD. You will also need to expose some [Services](https://kubernetes.io/docs/concepts/services-networking/service/) to make your cluster reachable. Also keep in mind that Garage is a stateful service, so you must be very careful about how you handle your data in Kubernetes in order not to lose it. In the near future, we plan to release a proper Helm chart and write "best practices" in our documentation.
If Kubernetes is not your thing, know that we are running Garage on a Nomad+Consul cluster, which is also well supported.
We have not documented it yet but you can get a look at [our Nomad service](https://git.deuxfleurs.fr/Deuxfleurs/infrastructure/src/commit/1e5e4af35c073d04698bb10dd4ad1330d6c62a0d/app/garage/deploy/garage.hcl).
## OpenTelemetry support
[OpenTelemetry](https://opentelemetry.io/) standardizes how software generates and collects system telemetry information, namely metrics, logs, and traces.
By implementing this standard in Garage, we hope that it will help you to better monitor, manage and tune your cluster.
Note that to fully leverage this feature, you must be already familiar with monitoring stacks like [Prometheus](https://prometheus.io/)+[Grafana](https://grafana.com/) or [ElasticSearch](https://www.elastic.co/elasticsearch/)+[Kibana](https://www.elastic.co/kibana/).
To activate OpenTelemetry on Garage, you must add to your configuration file the following entries (supposing that your collector is also on localhost):
```toml
[admin]
api_bind_addr = "127.0.0.1:3903"
trace_sink = "http://localhost:4317"
```
The first line, `api_bind_addr`, instructs Garage to expose an HTTP endpoint from which metrics can be obtained in Prometheus' data format.
The second line, `trace_sink`, instructs Garage to export tracing information to an OpenTelemetry collector at the given address.
These two options work independently, and you can use them separately, depending on whether you are interested only in metrics, only in traces, or both.
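As a quick illustration of the metrics side, here is a minimal sketch (not part of Garage) that fetches and prints a few lines from the Prometheus endpoint enabled by `api_bind_addr`; the `/metrics` path and the use of the `reqwest` crate (blocking feature) are assumptions of this sketch:
```rust
// A minimal sketch (not part of Garage) of consuming the metrics endpoint
// enabled by `api_bind_addr` above. The `/metrics` path and the reqwest
// crate (with its "blocking" feature) are assumptions of this sketch.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let body = reqwest::blocking::get("http://127.0.0.1:3903/metrics")?.text()?;
    // print a few non-comment lines, e.g. per-endpoint request counters
    for line in body.lines().filter(|l| !l.starts_with('#')).take(10) {
        println!("{line}");
    }
    Ok(())
}
```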
We provide [some files](https://git.deuxfleurs.fr/Deuxfleurs/garage/src/branch/main/script/telemetry) to help you quickly bootstrap a testing monitoring stack.
It includes a docker-compose file and a pre-configured Grafana dashboard.
You can use them if you want to reproduce the following examples.
Grafana is particularly adapted to understand how your cluster is performing from a "bird's eye view".
For example, the following graph shows S3 API calls sent to your node per time unit.
You can use it to better understand how your users are interacting with your cluster.
![A screenshot of a plot made by Grafana depicting the number of requests per time units grouped by endpoints](api_rate.png)
Thanks to this graph, we know that a large upload started at 14:55.
This upload is made of many small files, as we see many PutObject calls, which are typically used for small files.
It also includes some large objects, as we observe some multipart upload requests.
Conversely, at this time, no reads are happening, as the corresponding read endpoints (ListBuckets, ListObjectsV2, etc.) receive 0 requests per time unit.
Garage also collects metrics from lower-level parts of the system.
You can use them to better understand how Garage is interacting with your OS and your hardware.
![A screenshot of a plot made by Grafana depicting the write speed (in MB/s) during test time.](writes.png)
This plot has been captured at the same moment as the previous one.
The writes correlate with the API requests only at the beginning of the upload, not over its full duration.
More precisely, they map well to the multipart upload requests, and this is expected.
Large files (sent as multipart uploads) will saturate your disk's write bandwidth, while the upload of small files (via the PutObject endpoint) will be limited by other parts of the system.
This simple example covers only 2 metrics over the 20+ ones that we already defined, but it still allowed us to precisely describe our cluster usage and identify where bottlenecks could be.
We are confident that cleverly using these metrics on a production cluster will give you many more valuable insights into your cluster.
While metrics are good for having a large, general overview of your system, they are however not adapted for digging and pinpointing a specific performance issue on a specific code path.
Thankfully, we also have a solution for this problem: tracing.
Using [Application Performance Monitoring](https://www.elastic.co/observability/application-performance-monitoring) (APM) in conjunction with Kibana,
we can get for instance the following visualization of what happens during a PutObject call (click to enlarge):
[![A screenshot of APM depicting the trace of a PutObject call](apm.png)](apm.png)
On the top of the screenshot, we see the latency distribution of all PutObject requests.
We learn that the selected request took ~1ms to execute, while 95% of all requests took less than 80ms to run.
Having some dispersion between requests is expected as Garage does not run on a strong real-time system, but in this case, you must also consider that
a request duration is impacted by the size of the object that is sent (a 10B object will be quicker to process than a 10MB one).
Consequently, this request probably corresponds to a very small file.
Below this first histogram, you can select the request you want to inspect, and then see its trace on the bottom part.
The trace shown above can be broken down in 4 parts: fetching the API key to check authentication (`key get`), fetching the bucket identifier from its name (`bucket_alias get`), fetching the bucket configuration to check authorizations (`bucket_v2 get`), and finally inserting the object in the storage (`object insert`).
With this example, we demonstrated that we can inspect Garage internals to find slow requests, then see which codepath has been taken by a request, and finally identify which part of the code took time.
Keep in mind that this is our first iteration on telemetry for Garage, so things are a bit rough around the edges (step-by-step documentation is missing, our Grafana dashboard is a work in progress, etc.).
In all cases, your feedback is welcome on our Matrix channel.
## Conclusion
This is only the first iteration of the Kubernetes and OpenTelemetry integrations in Garage, so things are still a bit rough.
We plan to polish their integration in the coming months based on our experience and your feedback.
You may also ask yourself what will be the other works we plan to conduct: stay tuned, we will soon release information on our roadmap!
In the meantime, we hope you will enjoy using Garage v0.7.

Binary file not shown (image added, 18 KiB).
View file

@ -1,57 +0,0 @@
+++
title="Aerogramme was at FOSDEM'24 modern email track"
date=2024-02-05
+++
*FOSDEM is the European conference for FLOSS developers. It was the first time Aerogramme was discussed publicly.
If you missed the presentation, a recording and the slides are available.*
<!-- more -->
---
## Recording
<div class="video">
<video controls="controls" width="75%">
<source src="https://video.fosdem.org/2024/h2213/fosdem-2024-2642--servers-aerogramme-a-multi-region-imap-server.av1.webm" type="video/webm; codecs=&quot;av01.0.08M.08.0.110.01.01.01.0&quot;">
<source src="https://video.fosdem.org/2024/h2213/fosdem-2024-2642--servers-aerogramme-a-multi-region-imap-server.mp4" type="video/mp4">
</video>
</div>
## Transcript
*(slide 1)* Aerogramme is a multi-region IMAP server, in this talk we will discuss what a "multi-region IMAP server" means and why it's important.
*(slide 2)*
Let's start with some context: my name is Quentin. I have a PhD in distributed systems,
and this talk will be a lot about distributed systems, because that's something I know.
I try to work as much as possible for a collective named Deuxfleurs, where we try to build a low-tech Internet.
If you want to know more about our project, check out yesterday's talk about Garage.
Aerogramme is part of Deuxfleurs' strategy, and, a very nice thing, this project is supported by NLnet.
*(slide 3)*
First, the problem we want to solve: we want to make it possible to reach other people when it would otherwise
be impossible (due to geographical distance, for example). We can achieve this goal only if the underlying system is working,
so we will talk a lot about availability and reliability.
*(slide 4)*
Today's talk is about 3 main ideas: 1) we should not trust cloud & hosting providers, as they can fail; 2) there is some space to explore alternative IMAP server designs; and 3) finally, I will try to convince you that such new designs can work in real life.
*(slide 5)*
In the talk title, I speak about "multi-region", so we must first define what a "region" is.
The slide depicts the Google Cloud Platform (GCP) region in Paris: it's made of 3 datacenters.
Last April, the whole region - all 3 datacenters - was unavailable for some services for 3 weeks.
The outage was due to a fire in one datacenter,
and because of the tight coupling between the 3 datacenters, the 2 other ones were unusable due to a software problem.
3 weeks without email: you can imagine it makes your life very hard if you need to do some important stuff, like seeking a new job.
*(rest of the transcript is missing currently)*
## Links
- [Download Slides](https://fosdem.org/2024/events/attachments/fosdem-2024-2642--servers-aerogramme-a-multi-region-imap-server/slides/22665/aerogramme_vmnrj3Q.pdf)
- [Aerogramme Talk Page](https://fosdem.org/2024/schedule/event/fosdem-2024-2642--servers-aerogramme-a-multi-region-imap-server/)
- [Modern Email Track](https://fosdem.org/2024/schedule/track/modern-email/)
- [2024 edition main page](https://fosdem.org/2024/)

Binary file not shown (image removed, 76 KiB).
Binary file not shown (image removed, 52 KiB).
View file

@ -1,117 +0,0 @@
+++
title="Aerogramme 0.2.2: predictability & user testing"
date=2024-02-22
+++
*Let's review how Aerogramme's performance became more predictable and why it matters, and showcase how user testing helped surface bugs.*
<!-- more -->
---
This minor version of Aerogramme puts the focus on 2 aspects of the software: predictable performance & collecting user feedback.
In the following, I describe both aspects in detail.
## Improving predictability
In the previous blog post, we asked ourselves [Does Aerogramme use too much memory?](@/blog/2024-ram-usage-encryption-s3/index.md).
From the discussion, we surfaced that it was not acceptable for some specific queries (FETCH FULL or SEARCH TEXT) to load the
full mailbox in memory. It's concerning because the mail server will be used by multiple users and has a limited amount of resources,
so we don't want a single user allocating all the memory. In other words, we want per-user resource usage to remain as stable as possible.
These ideas are developed more in depth in the Amazon article [Reliability & Constant Work](https://aws.amazon.com/fr/builders-library/reliability-and-constant-work/).
As part of the conclusion, we identified that streaming email content would solve our problem.
In practice, I rewrote the relevant part of the code to return a `futures::stream::Stream` instead of a `Vec`.
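For readers less familiar with Rust async code, here is a minimal sketch of the kind of change this represents (not Aerogramme's actual code; `load_body` is a hypothetical stand-in for the real storage fetch, and the `futures` crate is assumed):
```rust
// A minimal sketch of the kind of change described above (not Aerogramme's
// actual code): instead of collecting every email body into a Vec, expose
// them as a Stream so that only one body needs to be held in memory at a time.
use futures::stream::{self, Stream, StreamExt};

// Hypothetical accessor: in reality this would read the email from storage.
async fn load_body(_uid: u32) -> Vec<u8> {
    vec![0u8; 16]
}

// Before: the whole mailbox is materialized in memory at once.
async fn fetch_all(uids: Vec<u32>) -> Vec<Vec<u8>> {
    let mut bodies = Vec::new();
    for uid in uids {
        bodies.push(load_body(uid).await);
    }
    bodies
}

// After: bodies are produced lazily, one at a time, as the response is written.
fn fetch_stream(uids: Vec<u32>) -> impl Stream<Item = Vec<u8>> {
    stream::iter(uids).then(load_body)
}

fn main() {
    futures::executor::block_on(async {
        let all = fetch_all(vec![1, 2, 3]).await;
        let streamed: Vec<Vec<u8>> = fetch_stream(vec![1, 2, 3]).collect().await;
        println!("{} vs {} bodies", all.len(), streamed.len());
    });
}
```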
Then, for both requests, I re-ran the same benchmark to get a before/after comparison for both commands.
The first picture below is for FETCH FULL, the second one for SEARCH TEXT.
![Fetch resource usage for Aerogramme 0.2.1 & 0.2.2](fetch-full.png)
![Search resource usage for Aerogramme 0.2.1 & 0.2.2](search-full.png)
For both FETCH and SEARCH, the changes are identical. Before, the command was executed in ~5 seconds, allocated up to 300MB of RAM, and used up to 150% of the CPU.
After, the command took ~30 seconds to execute, allocated up to 400MB of RAM in total while using at most ~40MB at any given time, and sporadically used up to 80% of the CPU.
Here is why it's a positive thing: now the memory consumption of Aerogramme is capped approximately by the biggest email accepted
by your system (25MB, multiplied by a small constant).
It also has other benefits: it prevents file descriptor and other network resource exhaustion,
and it adds fairness between users. Indeed, a user can't monopolize all the resources of the server (CPU, I/O, etc.) anymore, and the requests of multiple
users are thus interleaved by the server (we assume the number of users is much greater than the number of cores). And again, it leads to better predictability, as the completion of a user's requests will be less impacted by other users' requests.
Another concern is the RAM consumption of the IDLE feature.
The root cause is that we retain the full user profile in memory (mailbox data, IO connectors, etc.): we should instead
keep only the minimum data needed to wake the user up. That's the ideal fix, the final solution, but it would take a lot of time to design and implement.
This fix is not necessary *now*; instead, we can simply try to optimize the size of a full user profile in memory.
On this aspect, the [aws-sdk-s3](https://crates.io/crates/aws-sdk-s3) crate has [the following note](https://docs.rs/aws-sdk-s3/1.16.0/aws_sdk_s3/client/index.html):
> Client construction is expensive due to connection thread pool initialization, and should be done once at application start-up.
Digging deeper in the crate dependencies, we learn from the [aws-smithy-runtime](https://crates.io/crates/aws-smithy-runtime) crate, [we can read](https://docs.rs/aws-smithy-runtime/1.1.7/aws_smithy_runtime/client/http/hyper_014/struct.HyperClientBuilder.html):
> [Constructing] a Hyper client with the default TLS implementation (rustls) [...] can be useful when you want to share a Hyper connector between multiple generated Smithy clients.
It seems to be exactly what we want to do, but to be effective, we must do the same thing for K2V.
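Conceptually, the change boils down to building the expensive connector once and sharing it between the per-user S3 and K2V accessors; here is a hedged sketch of the idea, with a placeholder type standing in for the real hyper/rustls connector:
```rust
// A hedged sketch of the idea (not Aerogramme's actual code): build the
// expensive HTTP connector once at start-up and share it between the
// per-user S3 and K2V accessors, instead of rebuilding it at every login.
use std::sync::Arc;

// Placeholder for the real connector (hyper + rustls, thread pool, TLS config...).
struct SharedHttpConnector;

struct UserSession {
    s3: Arc<SharedHttpConnector>,
    k2v: Arc<SharedHttpConnector>,
}

fn main() {
    let connector = Arc::new(SharedHttpConnector); // built once
    let _alice = UserSession { s3: connector.clone(), k2v: connector.clone() };
    let _bob = UserSession { s3: connector.clone(), k2v: connector.clone() };
    // Each additional session now costs a couple of pointers, not a new connector.
}
```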
After implementing these features, I got the following plot:
![Idle resource usage for Aerogramme 0.2.1 & 0.2.2](idle.png)
First, the spikes are more spread out because I am typing the commands by hand, not because Aerogramme is slower!
Another irrelevant artifact is the end of the memory plot: memory is not released on the left plot because I cut the recording before typing the LOGOUT command.
What's interesting is the memory usage range: on the left it's ~20MB, on the right it's ~10MB.
By sharing the HTTP client, we thus use half as much memory per user, down to around ~650kB/user for Aerogramme 0.2.2.
Back to our 1k users, we get 650MB of RAM, 6.5GB for 10k, and thus 65GB for 100k. So *in theory*, it seems OK,
but our sample and our methodology are too limited to confirm that such memory usage will be observed *in practice*.
In the end, I think IDLE RAM usage in Aerogramme is acceptable for now, and thus we can move
on to other aspects without fearing that IDLE will make the software unusable.
## Users feedbacks
*Dovecot AUTH continuation inlining* - When a username + password pair is short,
the Dovecot SASL Auth protocol allows the client (here Postfix)
to send the base64 payload inline, without having to wait for the continuation request.
This was not supported by Aerogramme and was preventing some users from authenticating.
*Pipelining limits (reported by Nicolas)* - The pipelining limit was set to 3
to avoid resource-exhaustion (DoS) attacks, but it was failing some honest clients like Mutt.
It has been bumped to 64 and will be watched over the next months.
*SASL Auth subtleties (reported by Nicolas)* - The authorization identity can be empty, or can be set to the same value as the authentication identity.
The second case was not handled but is required by FairEmail (thx Nicolas).
*Thunderbird Autodiscovery issues (reported by LX & Nicolas)* - K9 stable does not support the `%EMAILLOCALPART%` placeholder.
K9 beta (6.714) does not support some values marked as obsolete in the `authentication` field: `plain` is not supported anymore, `password-cleartext` must be used instead.
The Content-Type is also important: if a wrong one is sent, the content is silently ignored by some clients.
Today, autodiscovery is not part of Aerogramme, but given these issues, and since generating these files
on the fly could improve compatibility (the email address is passed as a query parameter), we could directly put the right username in them
instead of relying on placeholders that are not well supported.
*Broken LITERAL+ (reported by Maxime)* - It was not possible to copy more than one email at once to an Aerogramme mailbox.
This was due to the fact that we were using an old version of imap-flow that did not correctly support LITERAL+.
Upgrading imap-flow to the latest version fixed the problem.
*Broken IDLE (reported by Maxime)* - After updating imap-flow, we started noticing some timeouts in Thunderbird due to IDLE bugs.
When IDLE was implemented in Aerogramme, the code was not ready in imap-flow,
and thus I used some hacks. But upgrading the library broke my hacks for the best: now
imap-flow supports IDLE out of the box, and Aerogramme's code is cleaner and more maintainable.
Some other quality-of-life feedback not reported here was given by MrFlos & Nicolas; thanks to all the people who took part in this debugging adventure.
## Conclusion: download and test this new version!
Do not get me wrong: Aerogramme is still not ready for prime time.
But by operating it for real, we are starting to understand better how it behaves, where the rough edges are, what we want to improve, what we need, etc.
If you are interested in being an Aerogramme early adopter, it might be a good time to set up a test cluster, try Aerogramme,
and share your use cases, in order to shape its future and be ready to go to production when its public beta is announced.
```bash
docker run --net host registry.deuxfleurs.org/aerogramme:0.2.2
```
*It will launch Aerogramme in development mode. Username is `alice`, password is `hunter2`, email is `alice@example.tld`.*
[Download](/download/) - [Changelog](https://git.deuxfleurs.fr/Deuxfleurs/aerogramme/releases/tag/0.2.2) - [Getting started](/documentation/)

Binary file not shown (image removed, 76 KiB).
Binary file not shown (image removed, 52 KiB).
Binary file not shown (image removed, 40 KiB).
Binary file not shown (image removed, 50 KiB).
Binary file not shown (image removed, 40 KiB).
Binary file not shown (image removed, 25 KiB).
Binary file not shown (image removed, 22 KiB).
Binary file not shown (image removed, 30 KiB).
View file

@ -1,693 +0,0 @@
(deleted SVG plot file, 693 lines of markup: glyph paths, clip paths, and chart geometry omitted)
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 142.742188 192.882812 L 183.175781 192.882812 L 183.175781 179.816406 L 142.742188 179.816406 Z M 142.742188 192.882812 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 187.667969 192.882812 L 228.101562 192.882812 L 228.101562 192.066406 L 187.667969 192.066406 Z M 187.667969 192.882812 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 232.597656 192.882812 L 273.03125 192.882812 L 273.03125 192.472656 L 232.597656 192.472656 Z M 232.597656 192.882812 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 277.523438 192.882812 L 317.957031 192.882812 L 317.957031 191.65625 L 277.523438 191.65625 Z M 277.523438 192.882812 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 322.449219 192.882812 L 362.882812 192.882812 L 362.882812 189.34375 L 322.449219 189.34375 Z M 322.449219 192.882812 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 367.375 192.882812 L 407.808594 192.882812 L 407.808594 192.746094 L 367.375 192.746094 Z M 367.375 192.882812 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 412.300781 192.882812 L 452.734375 192.882812 L 452.734375 31.324219 L 412.300781 31.324219 Z M 412.300781 192.882812 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 457.230469 192.882812 L 497.664062 192.882812 L 497.664062 159.128906 L 457.230469 159.128906 Z M 457.230469 192.882812 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 502.15625 192.882812 L 542.589844 192.882812 L 542.589844 174.917969 L 502.15625 174.917969 Z M 502.15625 192.882812 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 547.082031 192.882812 L 587.515625 192.882812 L 587.515625 159.671875 L 547.082031 159.671875 Z M 547.082031 192.882812 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 592.007812 192.882812 L 632.441406 192.882812 L 632.441406 169.882812 L 592.007812 169.882812 Z M 592.007812 192.882812 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 636.933594 192.882812 L 677.367188 192.882812 L 677.367188 190.976562 L 636.933594 190.976562 Z M 636.933594 192.882812 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 681.863281 192.882812 L 722.296875 192.882812 L 722.296875 192.609375 L 681.863281 192.609375 Z M 681.863281 192.882812 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 726.789062 192.882812 L 767.222656 192.882812 L 767.222656 152.730469 L 726.789062 152.730469 Z M 726.789062 192.882812 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 771.714844 192.882812 L 812.148438 192.882812 L 812.148438 103.324219 L 771.714844 103.324219 Z M 771.714844 192.882812 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 816.640625 192.882812 L 857.074219 192.882812 L 857.074219 91.347656 L 816.640625 91.347656 Z M 816.640625 192.882812 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 861.566406 192.882812 L 902 192.882812 L 902 165.253906 L 861.566406 165.253906 Z M 861.566406 192.882812 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 906.496094 192.882812 L 946.929688 192.882812 L 946.929688 189.753906 L 906.496094 189.753906 Z M 906.496094 192.882812 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 951.421875 192.882812 L 991.855469 192.882812 L 991.855469 192.472656 L 951.421875 192.472656 Z M 951.421875 192.882812 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 996.347656 192.882812 L 1036.78125 192.882812 L 1036.78125 122.789062 L 996.347656 122.789062 Z M 996.347656 192.882812 "/>
<g clip-path="url(#clip-1)">
<path fill-rule="nonzero" fill="rgb(100%, 100%, 100%)" fill-opacity="1" d="M 46.152344 415.847656 L 1043.523438 415.847656 L 1043.523438 238.136719 L 46.152344 238.136719 Z M 46.152344 415.847656 "/>
</g>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 52.890625 407.769531 L 93.324219 407.769531 L 93.324219 400.195312 L 52.890625 400.195312 Z M 52.890625 407.769531 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 97.816406 407.769531 L 138.25 407.769531 L 138.25 406.507812 L 97.816406 406.507812 Z M 97.816406 407.769531 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 142.742188 407.769531 L 183.175781 407.769531 L 183.175781 406.507812 L 142.742188 406.507812 Z M 142.742188 407.769531 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 187.667969 407.769531 L 228.101562 407.769531 L 228.101562 406.507812 L 187.667969 406.507812 Z M 187.667969 407.769531 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 232.597656 407.769531 L 273.03125 407.769531 L 273.03125 403.984375 L 232.597656 403.984375 Z M 232.597656 407.769531 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 277.523438 407.769531 L 317.957031 407.769531 L 317.957031 405.246094 L 277.523438 405.246094 Z M 277.523438 407.769531 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 322.449219 407.769531 L 362.882812 407.769531 L 362.882812 406.507812 L 322.449219 406.507812 Z M 322.449219 407.769531 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 367.375 407.769531 L 407.808594 407.769531 L 407.808594 406.507812 L 367.375 406.507812 Z M 367.375 407.769531 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 412.300781 407.769531 L 452.734375 407.769531 L 452.734375 246.210938 L 412.300781 246.210938 Z M 412.300781 407.769531 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 457.230469 407.769531 L 497.664062 407.769531 L 497.664062 406.507812 L 457.230469 406.507812 Z M 457.230469 407.769531 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 502.15625 407.769531 L 542.589844 407.769531 L 542.589844 392.625 L 502.15625 392.625 Z M 502.15625 407.769531 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 547.082031 407.769531 L 587.515625 407.769531 L 587.515625 396.410156 L 547.082031 396.410156 Z M 547.082031 407.769531 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 592.007812 407.769531 L 632.441406 407.769531 L 632.441406 406.507812 L 592.007812 406.507812 Z M 592.007812 407.769531 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 636.933594 407.769531 L 677.367188 407.769531 L 677.367188 406.507812 L 636.933594 406.507812 Z M 636.933594 407.769531 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 681.863281 407.769531 L 722.296875 407.769531 L 722.296875 405.246094 L 681.863281 405.246094 Z M 681.863281 407.769531 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 726.789062 407.769531 L 767.222656 407.769531 L 767.222656 406.507812 L 726.789062 406.507812 Z M 726.789062 407.769531 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 771.714844 407.769531 L 812.148438 407.769531 L 812.148438 390.097656 L 771.714844 390.097656 Z M 771.714844 407.769531 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 816.640625 407.769531 L 857.074219 407.769531 L 857.074219 385.050781 L 816.640625 385.050781 Z M 816.640625 407.769531 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 861.566406 407.769531 L 902 407.769531 L 902 380.003906 L 861.566406 380.003906 Z M 861.566406 407.769531 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 906.496094 407.769531 L 946.929688 407.769531 L 946.929688 383.789062 L 906.496094 383.789062 Z M 906.496094 407.769531 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 951.421875 407.769531 L 991.855469 407.769531 L 991.855469 403.984375 L 951.421875 403.984375 Z M 951.421875 407.769531 "/>
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 996.347656 407.769531 L 1036.78125 407.769531 L 1036.78125 406.507812 L 996.347656 406.507812 Z M 996.347656 407.769531 "/>
<g clip-path="url(#clip-2)">
<path fill-rule="nonzero" fill="rgb(100%, 100%, 100%)" fill-opacity="1" stroke-width="2.133957" stroke-linecap="round" stroke-linejoin="round" stroke="rgb(0%, 0%, 0%)" stroke-opacity="1" stroke-miterlimit="10" d="M 46.152344 238.136719 L 1043.523438 238.136719 L 1043.523438 220.371094 L 46.152344 220.371094 Z M 46.152344 238.136719 "/>
</g>
<g fill="rgb(10.196078%, 10.196078%, 10.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-0" x="530.835938" y="231.856445"/>
<use xlink:href="#glyph-0-1" x="536.835938" y="231.856445"/>
<use xlink:href="#glyph-0-2" x="541.835938" y="231.856445"/>
<use xlink:href="#glyph-0-3" x="543.835938" y="231.856445"/>
<use xlink:href="#glyph-0-4" x="548.835938" y="231.856445"/>
<use xlink:href="#glyph-0-5" x="553.835938" y="231.856445"/>
</g>
<g clip-path="url(#clip-3)">
<path fill-rule="nonzero" fill="rgb(100%, 100%, 100%)" fill-opacity="1" stroke-width="2.133957" stroke-linecap="round" stroke-linejoin="round" stroke="rgb(0%, 0%, 0%)" stroke-opacity="1" stroke-miterlimit="10" d="M 46.152344 23.246094 L 1043.523438 23.246094 L 1043.523438 5.480469 L 46.152344 5.480469 Z M 46.152344 23.246094 "/>
</g>
<g fill="rgb(10.196078%, 10.196078%, 10.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-6" x="536.335938" y="16.96582"/>
<use xlink:href="#glyph-0-7" x="542.335938" y="16.96582"/>
<use xlink:href="#glyph-0-8" x="547.335938" y="16.96582"/>
</g>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(0%, 0%, 0%)" stroke-opacity="1" stroke-miterlimit="10" d="M 46.152344 415.847656 L 1043.519531 415.847656 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 73.105469 418.589844 L 73.105469 415.847656 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 118.03125 418.589844 L 118.03125 415.847656 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 162.960938 418.589844 L 162.960938 415.847656 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 207.886719 418.589844 L 207.886719 415.847656 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 252.8125 418.589844 L 252.8125 415.847656 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 297.738281 418.589844 L 297.738281 415.847656 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 342.667969 418.589844 L 342.667969 415.847656 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 387.59375 418.589844 L 387.59375 415.847656 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 432.519531 418.589844 L 432.519531 415.847656 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 477.445312 418.589844 L 477.445312 415.847656 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 522.371094 418.589844 L 522.371094 415.847656 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 567.300781 418.589844 L 567.300781 415.847656 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 612.226562 418.589844 L 612.226562 415.847656 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 657.152344 418.589844 L 657.152344 415.847656 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 702.078125 418.589844 L 702.078125 415.847656 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 747.003906 418.589844 L 747.003906 415.847656 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 791.933594 418.589844 L 791.933594 415.847656 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 836.859375 418.589844 L 836.859375 415.847656 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 881.785156 418.589844 L 881.785156 415.847656 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 926.710938 418.589844 L 926.710938 415.847656 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 971.636719 418.589844 L 971.636719 415.847656 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 1016.566406 418.589844 L 1016.566406 415.847656 "/>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-9" x="57.605469" y="426.883789"/>
<use xlink:href="#glyph-0-10" x="63.605469" y="426.883789"/>
<use xlink:href="#glyph-0-10" x="68.605469" y="426.883789"/>
<use xlink:href="#glyph-0-5" x="73.605469" y="426.883789"/>
<use xlink:href="#glyph-0-1" x="78.605469" y="426.883789"/>
<use xlink:href="#glyph-0-11" x="83.605469" y="426.883789"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-12" x="99.03125" y="426.883789"/>
<use xlink:href="#glyph-0-7" x="105.03125" y="426.883789"/>
<use xlink:href="#glyph-0-10" x="110.03125" y="426.883789"/>
<use xlink:href="#glyph-0-7" x="115.03125" y="426.883789"/>
<use xlink:href="#glyph-0-13" x="120.03125" y="426.883789"/>
<use xlink:href="#glyph-0-2" x="125.03125" y="426.883789"/>
<use xlink:href="#glyph-0-14" x="127.03125" y="426.883789"/>
<use xlink:href="#glyph-0-2" x="129.03125" y="426.883789"/>
<use xlink:href="#glyph-0-15" x="131.03125" y="426.883789"/>
<use xlink:href="#glyph-0-16" x="133.03125" y="426.883789"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-12" x="150.960938" y="426.883789"/>
<use xlink:href="#glyph-0-17" x="156.960938" y="426.883789"/>
<use xlink:href="#glyph-0-5" x="161.960938" y="426.883789"/>
<use xlink:href="#glyph-0-18" x="166.960938" y="426.883789"/>
<use xlink:href="#glyph-0-19" x="170.960938" y="426.883789"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-12" x="196.886719" y="426.883789"/>
<use xlink:href="#glyph-0-14" x="202.886719" y="426.883789"/>
<use xlink:href="#glyph-0-20" x="204.886719" y="426.883789"/>
<use xlink:href="#glyph-0-21" x="209.886719" y="426.883789"/>
<use xlink:href="#glyph-0-5" x="213.886719" y="426.883789"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-12" x="239.8125" y="426.883789"/>
<use xlink:href="#glyph-0-22" x="245.8125" y="426.883789"/>
<use xlink:href="#glyph-0-5" x="248.8125" y="426.883789"/>
<use xlink:href="#glyph-0-7" x="253.8125" y="426.883789"/>
<use xlink:href="#glyph-0-15" x="258.8125" y="426.883789"/>
<use xlink:href="#glyph-0-5" x="260.8125" y="426.883789"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-23" x="283.738281" y="426.883789"/>
<use xlink:href="#glyph-0-1" x="289.738281" y="426.883789"/>
<use xlink:href="#glyph-0-7" x="294.738281" y="426.883789"/>
<use xlink:href="#glyph-0-13" x="299.738281" y="426.883789"/>
<use xlink:href="#glyph-0-14" x="304.738281" y="426.883789"/>
<use xlink:href="#glyph-0-5" x="306.738281" y="426.883789"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-23" x="325.667969" y="426.883789"/>
<use xlink:href="#glyph-0-24" x="331.667969" y="426.883789"/>
<use xlink:href="#glyph-0-7" x="335.667969" y="426.883789"/>
<use xlink:href="#glyph-0-25" x="340.667969" y="426.883789"/>
<use xlink:href="#glyph-0-2" x="347.667969" y="426.883789"/>
<use xlink:href="#glyph-0-1" x="349.667969" y="426.883789"/>
<use xlink:href="#glyph-0-5" x="354.667969" y="426.883789"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-23" x="370.09375" y="426.883789"/>
<use xlink:href="#glyph-0-24" x="376.09375" y="426.883789"/>
<use xlink:href="#glyph-0-10" x="380.09375" y="426.883789"/>
<use xlink:href="#glyph-0-4" x="385.09375" y="426.883789"/>
<use xlink:href="#glyph-0-1" x="390.09375" y="426.883789"/>
<use xlink:href="#glyph-0-26" x="395.09375" y="426.883789"/>
<use xlink:href="#glyph-0-5" x="400.09375" y="426.883789"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-27" x="422.019531" y="426.883789"/>
<use xlink:href="#glyph-0-5" x="427.019531" y="426.883789"/>
<use xlink:href="#glyph-0-15" x="432.019531" y="426.883789"/>
<use xlink:href="#glyph-0-18" x="434.019531" y="426.883789"/>
<use xlink:href="#glyph-0-17" x="438.019531" y="426.883789"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-28" x="470.445312" y="426.883789"/>
<use xlink:href="#glyph-0-11" x="472.445312" y="426.883789"/>
<use xlink:href="#glyph-0-14" x="477.445312" y="426.883789"/>
<use xlink:href="#glyph-0-5" x="479.445312" y="426.883789"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-29" x="515.871094" y="426.883789"/>
<use xlink:href="#glyph-0-2" x="520.871094" y="426.883789"/>
<use xlink:href="#glyph-0-21" x="522.871094" y="426.883789"/>
<use xlink:href="#glyph-0-15" x="526.871094" y="426.883789"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-29" x="556.300781" y="426.883789"/>
<use xlink:href="#glyph-0-20" x="561.300781" y="426.883789"/>
<use xlink:href="#glyph-0-26" x="566.300781" y="426.883789"/>
<use xlink:href="#glyph-0-2" x="571.300781" y="426.883789"/>
<use xlink:href="#glyph-0-1" x="573.300781" y="426.883789"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-29" x="598.726562" y="426.883789"/>
<use xlink:href="#glyph-0-20" x="603.726562" y="426.883789"/>
<use xlink:href="#glyph-0-26" x="608.726562" y="426.883789"/>
<use xlink:href="#glyph-0-20" x="613.726562" y="426.883789"/>
<use xlink:href="#glyph-0-4" x="618.726562" y="426.883789"/>
<use xlink:href="#glyph-0-15" x="623.726562" y="426.883789"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-29" x="647.652344" y="426.883789"/>
<use xlink:href="#glyph-0-21" x="652.652344" y="426.883789"/>
<use xlink:href="#glyph-0-4" x="656.652344" y="426.883789"/>
<use xlink:href="#glyph-0-13" x="661.652344" y="426.883789"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-30" x="691.578125" y="426.883789"/>
<use xlink:href="#glyph-0-20" x="698.578125" y="426.883789"/>
<use xlink:href="#glyph-0-31" x="703.578125" y="426.883789"/>
<use xlink:href="#glyph-0-5" x="707.578125" y="426.883789"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-32" x="736.503906" y="426.883789"/>
<use xlink:href="#glyph-0-20" x="742.503906" y="426.883789"/>
<use xlink:href="#glyph-0-20" x="747.503906" y="426.883789"/>
<use xlink:href="#glyph-0-10" x="752.503906" y="426.883789"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-33" x="777.933594" y="426.883789"/>
<use xlink:href="#glyph-0-5" x="783.933594" y="426.883789"/>
<use xlink:href="#glyph-0-7" x="788.933594" y="426.883789"/>
<use xlink:href="#glyph-0-22" x="793.933594" y="426.883789"/>
<use xlink:href="#glyph-0-18" x="796.933594" y="426.883789"/>
<use xlink:href="#glyph-0-17" x="800.933594" y="426.883789"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-33" x="824.859375" y="426.883789"/>
<use xlink:href="#glyph-0-5" x="830.859375" y="426.883789"/>
<use xlink:href="#glyph-0-14" x="835.859375" y="426.883789"/>
<use xlink:href="#glyph-0-5" x="837.859375" y="426.883789"/>
<use xlink:href="#glyph-0-18" x="842.859375" y="426.883789"/>
<use xlink:href="#glyph-0-15" x="846.859375" y="426.883789"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-33" x="869.785156" y="426.883789"/>
<use xlink:href="#glyph-0-15" x="875.785156" y="426.883789"/>
<use xlink:href="#glyph-0-7" x="877.785156" y="426.883789"/>
<use xlink:href="#glyph-0-15" x="882.785156" y="426.883789"/>
<use xlink:href="#glyph-0-4" x="884.785156" y="426.883789"/>
<use xlink:href="#glyph-0-21" x="889.785156" y="426.883789"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-33" x="916.210938" y="426.883789"/>
<use xlink:href="#glyph-0-15" x="922.210938" y="426.883789"/>
<use xlink:href="#glyph-0-20" x="924.210938" y="426.883789"/>
<use xlink:href="#glyph-0-22" x="929.210938" y="426.883789"/>
<use xlink:href="#glyph-0-5" x="932.210938" y="426.883789"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-33" x="952.136719" y="426.883789"/>
<use xlink:href="#glyph-0-4" x="958.136719" y="426.883789"/>
<use xlink:href="#glyph-0-13" x="963.136719" y="426.883789"/>
<use xlink:href="#glyph-0-21" x="968.136719" y="426.883789"/>
<use xlink:href="#glyph-0-18" x="972.136719" y="426.883789"/>
<use xlink:href="#glyph-0-22" x="976.136719" y="426.883789"/>
<use xlink:href="#glyph-0-2" x="979.136719" y="426.883789"/>
<use xlink:href="#glyph-0-13" x="981.136719" y="426.883789"/>
<use xlink:href="#glyph-0-5" x="986.136719" y="426.883789"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-0" x="1000.066406" y="426.883789"/>
<use xlink:href="#glyph-0-1" x="1006.066406" y="426.883789"/>
<use xlink:href="#glyph-0-21" x="1011.066406" y="426.883789"/>
<use xlink:href="#glyph-0-5" x="1015.066406" y="426.883789"/>
<use xlink:href="#glyph-0-14" x="1020.066406" y="426.883789"/>
<use xlink:href="#glyph-0-5" x="1022.066406" y="426.883789"/>
<use xlink:href="#glyph-0-18" x="1027.066406" y="426.883789"/>
<use xlink:href="#glyph-0-15" x="1031.066406" y="426.883789"/>
</g>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(0%, 0%, 0%)" stroke-opacity="1" stroke-miterlimit="10" d="M 46.152344 200.957031 L 1043.519531 200.957031 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 73.105469 203.699219 L 73.105469 200.957031 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 118.03125 203.699219 L 118.03125 200.957031 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 162.960938 203.699219 L 162.960938 200.957031 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 207.886719 203.699219 L 207.886719 200.957031 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 252.8125 203.699219 L 252.8125 200.957031 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 297.738281 203.699219 L 297.738281 200.957031 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 342.667969 203.699219 L 342.667969 200.957031 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 387.59375 203.699219 L 387.59375 200.957031 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 432.519531 203.699219 L 432.519531 200.957031 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 477.445312 203.699219 L 477.445312 200.957031 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 522.371094 203.699219 L 522.371094 200.957031 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 567.300781 203.699219 L 567.300781 200.957031 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 612.226562 203.699219 L 612.226562 200.957031 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 657.152344 203.699219 L 657.152344 200.957031 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 702.078125 203.699219 L 702.078125 200.957031 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 747.003906 203.699219 L 747.003906 200.957031 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 791.933594 203.699219 L 791.933594 200.957031 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 836.859375 203.699219 L 836.859375 200.957031 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 881.785156 203.699219 L 881.785156 200.957031 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 926.710938 203.699219 L 926.710938 200.957031 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 971.636719 203.699219 L 971.636719 200.957031 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 1016.566406 203.699219 L 1016.566406 200.957031 "/>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-9" x="57.605469" y="211.993164"/>
<use xlink:href="#glyph-0-10" x="63.605469" y="211.993164"/>
<use xlink:href="#glyph-0-10" x="68.605469" y="211.993164"/>
<use xlink:href="#glyph-0-5" x="73.605469" y="211.993164"/>
<use xlink:href="#glyph-0-1" x="78.605469" y="211.993164"/>
<use xlink:href="#glyph-0-11" x="83.605469" y="211.993164"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-12" x="99.03125" y="211.993164"/>
<use xlink:href="#glyph-0-7" x="105.03125" y="211.993164"/>
<use xlink:href="#glyph-0-10" x="110.03125" y="211.993164"/>
<use xlink:href="#glyph-0-7" x="115.03125" y="211.993164"/>
<use xlink:href="#glyph-0-13" x="120.03125" y="211.993164"/>
<use xlink:href="#glyph-0-2" x="125.03125" y="211.993164"/>
<use xlink:href="#glyph-0-14" x="127.03125" y="211.993164"/>
<use xlink:href="#glyph-0-2" x="129.03125" y="211.993164"/>
<use xlink:href="#glyph-0-15" x="131.03125" y="211.993164"/>
<use xlink:href="#glyph-0-16" x="133.03125" y="211.993164"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-12" x="150.960938" y="211.993164"/>
<use xlink:href="#glyph-0-17" x="156.960938" y="211.993164"/>
<use xlink:href="#glyph-0-5" x="161.960938" y="211.993164"/>
<use xlink:href="#glyph-0-18" x="166.960938" y="211.993164"/>
<use xlink:href="#glyph-0-19" x="170.960938" y="211.993164"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-12" x="196.886719" y="211.993164"/>
<use xlink:href="#glyph-0-14" x="202.886719" y="211.993164"/>
<use xlink:href="#glyph-0-20" x="204.886719" y="211.993164"/>
<use xlink:href="#glyph-0-21" x="209.886719" y="211.993164"/>
<use xlink:href="#glyph-0-5" x="213.886719" y="211.993164"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-12" x="239.8125" y="211.993164"/>
<use xlink:href="#glyph-0-22" x="245.8125" y="211.993164"/>
<use xlink:href="#glyph-0-5" x="248.8125" y="211.993164"/>
<use xlink:href="#glyph-0-7" x="253.8125" y="211.993164"/>
<use xlink:href="#glyph-0-15" x="258.8125" y="211.993164"/>
<use xlink:href="#glyph-0-5" x="260.8125" y="211.993164"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-23" x="283.738281" y="211.993164"/>
<use xlink:href="#glyph-0-1" x="289.738281" y="211.993164"/>
<use xlink:href="#glyph-0-7" x="294.738281" y="211.993164"/>
<use xlink:href="#glyph-0-13" x="299.738281" y="211.993164"/>
<use xlink:href="#glyph-0-14" x="304.738281" y="211.993164"/>
<use xlink:href="#glyph-0-5" x="306.738281" y="211.993164"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-23" x="325.667969" y="211.993164"/>
<use xlink:href="#glyph-0-24" x="331.667969" y="211.993164"/>
<use xlink:href="#glyph-0-7" x="335.667969" y="211.993164"/>
<use xlink:href="#glyph-0-25" x="340.667969" y="211.993164"/>
<use xlink:href="#glyph-0-2" x="347.667969" y="211.993164"/>
<use xlink:href="#glyph-0-1" x="349.667969" y="211.993164"/>
<use xlink:href="#glyph-0-5" x="354.667969" y="211.993164"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-23" x="370.09375" y="211.993164"/>
<use xlink:href="#glyph-0-24" x="376.09375" y="211.993164"/>
<use xlink:href="#glyph-0-10" x="380.09375" y="211.993164"/>
<use xlink:href="#glyph-0-4" x="385.09375" y="211.993164"/>
<use xlink:href="#glyph-0-1" x="390.09375" y="211.993164"/>
<use xlink:href="#glyph-0-26" x="395.09375" y="211.993164"/>
<use xlink:href="#glyph-0-5" x="400.09375" y="211.993164"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-27" x="422.019531" y="211.993164"/>
<use xlink:href="#glyph-0-5" x="427.019531" y="211.993164"/>
<use xlink:href="#glyph-0-15" x="432.019531" y="211.993164"/>
<use xlink:href="#glyph-0-18" x="434.019531" y="211.993164"/>
<use xlink:href="#glyph-0-17" x="438.019531" y="211.993164"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-28" x="470.445312" y="211.993164"/>
<use xlink:href="#glyph-0-11" x="472.445312" y="211.993164"/>
<use xlink:href="#glyph-0-14" x="477.445312" y="211.993164"/>
<use xlink:href="#glyph-0-5" x="479.445312" y="211.993164"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-29" x="515.871094" y="211.993164"/>
<use xlink:href="#glyph-0-2" x="520.871094" y="211.993164"/>
<use xlink:href="#glyph-0-21" x="522.871094" y="211.993164"/>
<use xlink:href="#glyph-0-15" x="526.871094" y="211.993164"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-29" x="556.300781" y="211.993164"/>
<use xlink:href="#glyph-0-20" x="561.300781" y="211.993164"/>
<use xlink:href="#glyph-0-26" x="566.300781" y="211.993164"/>
<use xlink:href="#glyph-0-2" x="571.300781" y="211.993164"/>
<use xlink:href="#glyph-0-1" x="573.300781" y="211.993164"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-29" x="598.726562" y="211.993164"/>
<use xlink:href="#glyph-0-20" x="603.726562" y="211.993164"/>
<use xlink:href="#glyph-0-26" x="608.726562" y="211.993164"/>
<use xlink:href="#glyph-0-20" x="613.726562" y="211.993164"/>
<use xlink:href="#glyph-0-4" x="618.726562" y="211.993164"/>
<use xlink:href="#glyph-0-15" x="623.726562" y="211.993164"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-29" x="647.652344" y="211.993164"/>
<use xlink:href="#glyph-0-21" x="652.652344" y="211.993164"/>
<use xlink:href="#glyph-0-4" x="656.652344" y="211.993164"/>
<use xlink:href="#glyph-0-13" x="661.652344" y="211.993164"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-30" x="691.578125" y="211.993164"/>
<use xlink:href="#glyph-0-20" x="698.578125" y="211.993164"/>
<use xlink:href="#glyph-0-31" x="703.578125" y="211.993164"/>
<use xlink:href="#glyph-0-5" x="707.578125" y="211.993164"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-32" x="736.503906" y="211.993164"/>
<use xlink:href="#glyph-0-20" x="742.503906" y="211.993164"/>
<use xlink:href="#glyph-0-20" x="747.503906" y="211.993164"/>
<use xlink:href="#glyph-0-10" x="752.503906" y="211.993164"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-33" x="777.933594" y="211.993164"/>
<use xlink:href="#glyph-0-5" x="783.933594" y="211.993164"/>
<use xlink:href="#glyph-0-7" x="788.933594" y="211.993164"/>
<use xlink:href="#glyph-0-22" x="793.933594" y="211.993164"/>
<use xlink:href="#glyph-0-18" x="796.933594" y="211.993164"/>
<use xlink:href="#glyph-0-17" x="800.933594" y="211.993164"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-33" x="824.859375" y="211.993164"/>
<use xlink:href="#glyph-0-5" x="830.859375" y="211.993164"/>
<use xlink:href="#glyph-0-14" x="835.859375" y="211.993164"/>
<use xlink:href="#glyph-0-5" x="837.859375" y="211.993164"/>
<use xlink:href="#glyph-0-18" x="842.859375" y="211.993164"/>
<use xlink:href="#glyph-0-15" x="846.859375" y="211.993164"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-33" x="869.785156" y="211.993164"/>
<use xlink:href="#glyph-0-15" x="875.785156" y="211.993164"/>
<use xlink:href="#glyph-0-7" x="877.785156" y="211.993164"/>
<use xlink:href="#glyph-0-15" x="882.785156" y="211.993164"/>
<use xlink:href="#glyph-0-4" x="884.785156" y="211.993164"/>
<use xlink:href="#glyph-0-21" x="889.785156" y="211.993164"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-33" x="916.210938" y="211.993164"/>
<use xlink:href="#glyph-0-15" x="922.210938" y="211.993164"/>
<use xlink:href="#glyph-0-20" x="924.210938" y="211.993164"/>
<use xlink:href="#glyph-0-22" x="929.210938" y="211.993164"/>
<use xlink:href="#glyph-0-5" x="932.210938" y="211.993164"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-33" x="952.136719" y="211.993164"/>
<use xlink:href="#glyph-0-4" x="958.136719" y="211.993164"/>
<use xlink:href="#glyph-0-13" x="963.136719" y="211.993164"/>
<use xlink:href="#glyph-0-21" x="968.136719" y="211.993164"/>
<use xlink:href="#glyph-0-18" x="972.136719" y="211.993164"/>
<use xlink:href="#glyph-0-22" x="976.136719" y="211.993164"/>
<use xlink:href="#glyph-0-2" x="979.136719" y="211.993164"/>
<use xlink:href="#glyph-0-13" x="981.136719" y="211.993164"/>
<use xlink:href="#glyph-0-5" x="986.136719" y="211.993164"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-0" x="1000.066406" y="211.993164"/>
<use xlink:href="#glyph-0-1" x="1006.066406" y="211.993164"/>
<use xlink:href="#glyph-0-21" x="1011.066406" y="211.993164"/>
<use xlink:href="#glyph-0-5" x="1015.066406" y="211.993164"/>
<use xlink:href="#glyph-0-14" x="1020.066406" y="211.993164"/>
<use xlink:href="#glyph-0-5" x="1022.066406" y="211.993164"/>
<use xlink:href="#glyph-0-18" x="1027.066406" y="211.993164"/>
<use xlink:href="#glyph-0-15" x="1031.066406" y="211.993164"/>
</g>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(0%, 0%, 0%)" stroke-opacity="1" stroke-miterlimit="10" d="M 46.152344 200.957031 L 46.152344 23.246094 "/>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-34" x="36.21875" y="195.485352"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-35" x="26.21875" y="161.458008"/>
<use xlink:href="#glyph-0-36" x="31.21875" y="161.458008"/>
<use xlink:href="#glyph-0-34" x="36.21875" y="161.458008"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-36" x="26.21875" y="127.430664"/>
<use xlink:href="#glyph-0-34" x="31.21875" y="127.430664"/>
<use xlink:href="#glyph-0-34" x="36.21875" y="127.430664"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-37" x="26.21875" y="93.40332"/>
<use xlink:href="#glyph-0-36" x="31.21875" y="93.40332"/>
<use xlink:href="#glyph-0-34" x="36.21875" y="93.40332"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-38" x="21.21875" y="59.379883"/>
<use xlink:href="#glyph-0-34" x="26.21875" y="59.379883"/>
<use xlink:href="#glyph-0-34" x="31.21875" y="59.379883"/>
<use xlink:href="#glyph-0-34" x="36.21875" y="59.379883"/>
</g>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 43.410156 192.882812 L 46.152344 192.882812 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 43.410156 158.855469 L 46.152344 158.855469 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 43.410156 124.828125 L 46.152344 124.828125 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 43.410156 90.800781 L 46.152344 90.800781 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 43.410156 56.777344 L 46.152344 56.777344 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(0%, 0%, 0%)" stroke-opacity="1" stroke-miterlimit="10" d="M 46.152344 415.847656 L 46.152344 238.136719 "/>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-34" x="36.21875" y="410.37207"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-36" x="31.21875" y="347.266602"/>
<use xlink:href="#glyph-0-34" x="36.21875" y="347.266602"/>
</g>
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
<use xlink:href="#glyph-0-38" x="26.21875" y="284.157227"/>
<use xlink:href="#glyph-0-34" x="31.21875" y="284.157227"/>
<use xlink:href="#glyph-0-34" x="36.21875" y="284.157227"/>
</g>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 43.410156 407.769531 L 46.152344 407.769531 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 43.410156 344.664062 L 46.152344 344.664062 "/>
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 43.410156 281.554688 L 46.152344 281.554688 "/>
<g fill="rgb(0%, 0%, 0%)" fill-opacity="1">
<use xlink:href="#glyph-1-0" x="520.835938" y="441.147461"/>
<use xlink:href="#glyph-1-1" x="526.835938" y="441.147461"/>
<use xlink:href="#glyph-1-2" x="532.835938" y="441.147461"/>
<use xlink:href="#glyph-1-2" x="541.835938" y="441.147461"/>
<use xlink:href="#glyph-1-3" x="550.835938" y="441.147461"/>
<use xlink:href="#glyph-1-4" x="556.835938" y="441.147461"/>
<use xlink:href="#glyph-1-5" x="562.835938" y="441.147461"/>
</g>
<g fill="rgb(0%, 0%, 0%)" fill-opacity="1">
<use xlink:href="#glyph-2-0" x="14.108398" y="233.046875"/>
<use xlink:href="#glyph-2-1" x="14.108398" y="227.046875"/>
<use xlink:href="#glyph-2-2" x="14.108398" y="221.046875"/>
<use xlink:href="#glyph-2-3" x="14.108398" y="215.046875"/>
<use xlink:href="#glyph-2-4" x="14.108398" y="209.046875"/>
</g>
</svg>

Before

Width:  |  Height:  |  Size: 68 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 32 KiB

File diff suppressed because one or more lines are too long

Before

Width:  |  Height:  |  Size: 127 KiB

View file

@ -1,417 +0,0 @@
+++
title="Does Aerogramme use too much memory?"
date=2024-02-15
+++
*"Will Aerogramme use too much RAM?" was the first question we asked ourselves
when designing email mailboxes as an encrypted event log, which is very different from
existing designs that are very optimized. This blog post
tries to evaluate our design assumptions to the real world implementation,
similarly to what we have done [on Garage](https://garagehq.deuxfleurs.fr/blog/2022-perf/).*
<!-- more -->
---
## Methodology
Brendan Gregg, a highly respected figure in the world of system performance, says that, for many reasons,
[~100% of benchmarks are wrong](https://www.brendangregg.com/Slides/Velocity2015_LinuxPerfTools.pdf).
This benchmark will be wrong too, in multiple ways:
1. It will not say anything about Aerogramme's performance in real-world deployments
2. It will not say anything about Aerogramme's performance compared to other email servers
However, I pursue a very specific goal with this benchmark: validating whether the assumptions we made
during the design phase, in terms of compute and memory complexity, hold in practice.
I will observe only two metrics: the CPU time used by the program (everything except idle and iowait, based on the [psutil](https://pypi.org/project/psutil/) code) for the compute complexity, and the [Resident Set Size](https://en.wikipedia.org/wiki/Resident_set_size) (the data held in RAM) for the memory complexity.
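For reference, here is a minimal sketch of how these two metrics can be sampled with psutil. It samples per process, while the post's description refers to the system-wide counters and the graphs below are produced by psrecord; the PID and sampling interval are placeholders.

```python
import time

import psutil


def sample(pid: int, interval: float = 1.0) -> None:
    """Periodically print the two metrics discussed above for one process:
    the CPU time it has consumed (user + system) and its Resident Set Size."""
    proc = psutil.Process(pid)
    while True:
        cpu = proc.cpu_times()                   # seconds of CPU consumed so far
        rss_mb = proc.memory_info().rss / 2**20  # resident set size, in MiB
        print(f"cpu={cpu.user + cpu.system:.1f}s rss={rss_mb:.1f}MiB")
        time.sleep(interval)


# sample(12345)  # hypothetical PID of the aerogramme process
```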
<!--My baseline will be the compute and space complexity of the code that I have in mind. For example,
I know we have a "3 layers" data model: an index stored in RAM, a summary of the emails stored in K2V, a database, and the full email stored in S3, an object store.
Commands that can be solved only with the index should use a very low amount of RAM compared to . In turn, commands that require the full email will require to fetch lots of data from S3.-->
## Testing environment
I ran all the tests on my personal computer, a Dell Inspiron 7775 with an AMD Ryzen 7 1700, 16GB of RAM and an encrypted SSD, running NixOS 23.11.
The setup consists of Aerogramme (compiled in release mode) connected to a local, single-node Garage server.
Observations and graphs are produced in one go thanks to the [psrecord](https://github.com/astrofrog/psrecord) tool.
I did not try to make the following values reproducible, as this is more an exploration than a definitive review.
## Mailbox dataset
I will use [a dataset of 100 emails](https://git.deuxfleurs.fr/Deuxfleurs/aerogramme/src/commit/0b20d726bbc75e0dfd2ba1900ca5ea697645a8f1/tests/emails/aero100.mbox.zstd) that I made specifically for the occasion.
It contains emails with various attachments, emails with lots of text, and emails generated by many different clients (Thunderbird, Geary, Sogo, Alps, Outlook iOS, GMail iOS, Windows Mail, Postbox, Mailbird, etc.).
The mbox file weighs 23MB uncompressed.
One question that arises is: how representative of a real mailbox is this dataset? While a definitive answer is not possible, I compared the email sizes of this dataset to the 2&nbsp;367 emails in my personal inbox.
Below I plotted the empirical distribution for both my dataset and my personal inbox (note that the x axis is not linear but logarithmic).
![ECDF mailbox](ecdf_mbx.svg)
*[Get the 100 emails dataset](https://git.deuxfleurs.fr/Deuxfleurs/aerogramme/src/commit/0b20d726bbc75e0dfd2ba1900ca5ea697645a8f1/tests/emails/aero100.mbox.zstd) - [Get the CSV used to plot this graph](https://git.deuxfleurs.fr/Deuxfleurs/aerogramme/src/branch/perf/cpu-ram-bottleneck/tests/emails/mailbox_email_sizes.csv)*
We see that the curves are close together and follow the same pattern: most emails are between 1kB and 100kB, and then we have a long tail (up to 20MB in my inbox, up to 6MB in the dataset).
It's not that surprising: in many places on the Internet, the email size limit is set to 25MB. Overall I am quite satisfied with this simple dataset, even if one or two bigger emails could make it even more representative of my real inbox...
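For the curious, here is a rough sketch of how such an ECDF can be computed and plotted with Python's standard mailbox module. The file names are placeholders (the real dataset is distributed as a zstd-compressed mbox and would need to be decompressed first); this is not the exact script used to produce the figure.

```python
import mailbox

import matplotlib.pyplot as plt
import numpy as np


def email_sizes(path: str) -> np.ndarray:
    """Size in bytes of every message of an mbox file."""
    return np.array([len(msg.as_bytes()) for msg in mailbox.mbox(path)])


# Placeholder file names for the two mailboxes being compared.
for label, path in [("100 email dataset", "aero100.mbox"), ("personal inbox", "inbox.mbox")]:
    sizes = np.sort(email_sizes(path))
    ecdf = np.arange(1, len(sizes) + 1) / len(sizes)
    plt.step(sizes, ecdf, where="post", label=label)

plt.xscale("log")                      # logarithmic x axis, as in the figure
plt.xlabel("email size (bytes)")
plt.ylabel("fraction of emails")
plt.legend()
plt.savefig("ecdf_mbx.svg")
```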
Mailboxes with only 100 emails are not that common (mine has 2k emails...), so to emulate bigger mailboxes, I simply inject the dataset multiple times (e.g. 20 times for 2k emails).
## Command dataset
Having a representative mailbox is one thing, but we also need to know which commands IMAP clients typically send.
As I have set up a test instance of Aerogramme (see [my FOSDEM talk](https://fosdem.org/2024/schedule/event/fosdem-2024-2642--servers-aerogramme-a-multi-region-imap-server/)),
I was able to extract 4&nbsp;619 IMAP commands sent by various clients. Many of them are identical; in the end, only 248 are truly unique.
The following bar plot depicts the command distribution per command name; the top panel is the raw count, the bottom one the unique count.
![Commands](command-run.svg)
*[Get the IMAP command log](https://git.deuxfleurs.fr/Deuxfleurs/aerogramme/src/branch/perf/cpu-ram-bottleneck/tests/emails/imap_commands_dataset.log) - [Get the CSV used to plot this graph](https://git.deuxfleurs.fr/Deuxfleurs/aerogramme/src/branch/perf/cpu-ram-bottleneck/tests/emails/imap_commands_summary.csv)*
First, we can set aside some commands: LOGIN, CAPABILITY, ENABLE, SELECT, EXAMINE, CLOSE, UNSELECT and LOGOUT are part of a **connection workflow**.
We do not plan on studying them directly, as they will be used in all the other tests anyway.
CHECK, NOOP, IDLE, and STATUS are different approaches to detecting a change in the current mailbox (or in other mailboxes in the case of STATUS);
I group these commands under a **notification** mechanism.
FETCH, SEARCH and LIST are **query** commands, the first two for emails, the last one for mailboxes.
FETCH is by far the most used command (1&nbsp;187 occurrences) and the one with the most variations (128 unique combinations of parameters).
SEARCH is also used a lot (658 occurrences, 14 unique).
APPEND, STORE, EXPUNGE, MOVE, COPY, LSUB, SUBSCRIBE, CREATE and DELETE are commands that **write** things: flags, emails or mailboxes.
They are not used a lot, but some writes are hidden in other commands (CLOSE, FETCH), and when mails arrive, they are delivered through a different protocol (LMTP) that does not appear here.
In the following, we will assume that APPEND behaves more or less like an LMTP delivery.
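To illustrate how the raw and unique counts behind the bar plot can be derived from such a log, here is a rough sketch; the log format (one untagged command per line) and the file name are assumptions, not the actual processing pipeline.

```python
from collections import Counter, defaultdict

raw_count = Counter()
unique_count = defaultdict(set)

# Assumed format: one command per line, tag already stripped, e.g.
#   UID FETCH 1:* (UID FLAGS) (CHANGEDSINCE 22)
with open("imap_commands_dataset.log") as log:
    for line in log:
        line = line.strip()
        if not line:
            continue
        words = line.split()
        # "UID FETCH ..." counts as FETCH, "UID STORE ..." as STORE, etc.
        if words[0].upper() == "UID" and len(words) > 1:
            name = words[1].upper()
        else:
            name = words[0].upper()
        raw_count[name] += 1
        unique_count[name].add(line)

for name, count in raw_count.most_common():
    print(f"{name:12s} raw={count:5d} unique={len(unique_count[name]):4d}")
```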
<!--
Focus on `FETCH` (128 unique commands), `SEARCH` (14 unique commands)
```
FETCH *:5 (UID ENVELOPE BODY.PEEK[HEADER.FIELDS("References")])
UID FETCH 1:* (UID FLAGS) (CHANGEDSINCE 22)
FETCH 1:1 (UID FLAGS INTERNALDATE RFC822.SIZE BODY.PEEK[HEADER.FIELDS("DATE" "FROM" "SENDER" "SUBJECT" "TO" "CC" "MESSAGE-ID" "REFERENCES" "CONTENT-TYPE" "CONTENT-DESCRIPTION" "IN-REPLY-TO" "REPLY-TO" "LINES" "LIST-POST" "X-LABEL" "CONTENT-CLASS" "IMPORTANCE" "PRIORITY" "X-PRIORITY" "THREAD-TOPIC" "REPLY-TO" "AUTO-SUBMITTED" "BOUNCES-TO" "LIST-ARCHIVE" "LIST-HELP" "LIST-ID" "LIST-OWNER" "LIST-POST" "LIST-SUBSCRIBE" "LIST-UNSUBSCRIBE" "PRECEDENCE" "RESENT-FROM" "RETURN-PATH" "Newsgroups" "Delivery-Date")])
UID FETCH 1:2,11:13,18:19,22:26,33:34,60:62 (FLAGS) (CHANGEDSINCE 165)
UID FETCH 1:7 (UID RFC822.SIZE BODY.PEEK[])
UID FETCH 12:13 (INTERNALDATE UID RFC822.SIZE FLAGS MODSEQ BODY.PEEK[HEADER])
UID FETCH 2 (RFC822.HEADER BODY.PEEK[2]<0.10240>)
```
Flags, date, headers
-->
In the following, I will keep these 3 categories (**write**, **notification**, and **query**) to evaluate Aerogramme's resource usage,
based on command patterns observed in real IMAP traffic and on the provided dataset.
---
## Write Commands
We start with the write commands, as they will let us fill the mailboxes for the following evaluations.
I inserted the full dataset (100 emails) into 16 accounts with APPEND (in other words, in the end, the server handles 1&nbsp;600 emails).
*[Get the Python script](https://git.deuxfleurs.fr/Deuxfleurs/aerogramme/src/branch/main/tests/instrumentation/mbox-to-imap.py)*
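The linked script is the reference; the snippet below is only a minimal sketch of the same idea using Python's imaplib, with made-up host, port and credentials.

```python
import imaplib
import mailbox


def fill_account(user: str, password: str, mbox_path: str, copies: int = 1) -> None:
    """APPEND every message of an mbox file `copies` times into the user's INBOX."""
    imap = imaplib.IMAP4("localhost", 1143)   # hypothetical host and port
    imap.login(user, password)
    messages = [msg.as_bytes() for msg in mailbox.mbox(mbox_path)]
    for _ in range(copies):                   # e.g. copies=20 to emulate a ~2k email mailbox
        for raw in messages:
            imap.append("INBOX", None, None, raw)
    imap.logout()


# One run per test account, sequentially:
# for i in range(16):
#     fill_account(f"user{i}", "password", "aero100.mbox")
```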
### Filling a mailbox
![Append Custom Build](01-append-tokio-console-musl.png)
First, I observed this *scary* linear memory increase. It seems we are not releasing some memory,
and that's an issue! I quickly suspected tokio-console of being the culprit.
A quick search led me to an issue entitled [Continuous memory leak with console_subscriber #184](https://github.com/tokio-rs/console/issues/184)
that confirmed my intuition.
Instead of waiting for an hour or trying to tweak the retention time, I rebuilt Aerogramme without tokio-console.
*So on this first attempt, we observed the impact of tokio-console instead of our own code! Still, we want
performance to be as predictable as possible.*
![Append Cargo Release](02-append-glibc.png)
This got us to a second pattern: a stable but high memory usage compared to the previous run.
It turns out I had built this binary with a plain `cargo build --release`, which produces a binary that dynamically links to the GNU libc.
The previous binary was built with our custom Nix toolchain, which statically links musl libc into the binary.
In the process, we changed the allocator: it seems the GNU libc allocator allocates bigger chunks at once.
*It would be wrong to conclude that the musl libc allocator is more efficient: allocating and deallocating
memory on the kernel side is costly, so it might be better for the allocator to keep some kernel-allocated memory
around for future allocations that will then not require system calls. This is another example of why this benchmark is wrong: we observe
the memory allocated by the allocator, not the memory used by the program itself.*
For the next graph, I removed tokio-console and built Aerogramme with a statically linked musl libc.
![Append Custom Build](03-append-musl.png)
The observed pattern matches my expectations much better.
We observe 16 memory allocation spikes of around 50MB, each followed by a ~25MB plateau.
In the end, we drop to ~18MB.
In this scenario, we can say that a user needs between 7MB and 32MB of RAM.
In the previous runs, we were doing the inserts sequentially. But in the real world, multiple users interact with the server
at the same time. In the next run, we run the same test, but in parallel.
![Append Parallel](04-append-parallel.png)
We see 2 spikes: a short one at the beginning, and a longer one at the end.
The first spike is probably due to the argon2 password verification, a key derivation function
that is deliberately built to be expensive in terms of RAM and CPU.
The second spike is due to the fact that big emails (multiple MB) are at the end of the dataset,
and they are stored fully in RAM before being sent. However, our biggest email weighs 6MB,
and we are running 16 threads, so we should expect a memory usage of around 100MB,
not 400MB. This difference would be a good starting point for an investigation: we might be
copying the same email multiple times in RAM.
It seems from this first test that Aerogramme is particularly sensitive to 1) login commands, due to argon2, and 2) large emails.
### Re-organizing your mailbox
You might need to organize your folders, copying or moving your emails across your mailboxes.
COPY is a standard IMAP command, while MOVE is an extension.
I will focus on a brutal test: copying 1k emails from the INBOX to Sent, then moving these 1k emails to Archive.
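At the protocol level, the test is roughly equivalent to the following sequence (tags, sequence ranges and mailbox names are illustrative):

```
a1 SELECT INBOX
a2 COPY 1:1000 Sent
a3 MOVE 1:1000 Archive
```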
Below is the graph depicting Aerogramme resource usage during this test.
![Copy and move](copy-move.png)
Memory usage remains stable and low (below 25MB), but the operations are CPU intensive (close to 100% for 40 seconds).
Both COPY and MOVE show the same pattern: indeed, as emails are considered immutable, Aerogramme only handles pointers in both cases
and does not really copy their content.
Real-world clients would probably not send such brutal commands; they would do it progressively, either one by one or in small batches,
to keep the UI responsive.
While CPU optimizations could probably be imagined, I find this behavior satisfying, especially as memory remains stable and low.
### Messing with flags
Setting flags (Seen, Deleted, Answered, NonJunk, etc.) is done through the STORE command.
Our run will be made in 3 parts: 1) putting one flag on one email, 2) putting 16 flags on one email, and 3) putting one flag on 1k emails.
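The three parts map to STORE commands of roughly this shape (the second command carries 16 flags in the real test; only a few are shown here, and the flag names are illustrative):

```
a1 STORE 1 +FLAGS.SILENT (\Seen)
a2 STORE 1 +FLAGS.SILENT (\Answered \Flagged \Draft $Forwarded $Junk $NotJunk)
a3 STORE 1:1000 +FLAGS.SILENT (\Seen)
```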
The result is depicted in the graph below.
![Store flags](store.png)
<!--
`STORE` (19 unique commands).
UID, not uid, silent, not silent, add not set, standard flags mainly.
```
UID STORE 60:62 +FLAGS (\Deleted \Seen)
STORE 2 +FLAGS.SILENT \Answered
```
-->
The first and last spikes are due respectively to the LOGIN/SELECT and CLOSE/LOGOUT commands.
In between, we have 3 CPU spikes, one for each command, while memory remains stable.
The last command is by far the most expensive, and indeed, it has to generate 1k events in our event log and rebuild many things in the index.
However, there is no reason for the 2nd command to be less expensive than the first one, except that it reuses some resources / cache entries
from the first request.
Interacting with the index is really efficient in terms of memory. Generating many changes
leads to high CPU usage (and possibly lots of IO), but in our dataset we observe that most changes are done on one or two emails,
and never on the whole mailbox.
Interacting with flags should not be an issue for Aerogramme in the near future.
## Notification Commands
Notification commands are expected to be run regularly in the background by clients.
They are particularly sensitive, as their cost is correlated to your number of users,
independently of the number of emails they receive. I split them into 2 parts:
the intermittent ones, which, like HTTP, close the connection after being run,
and the continuous ones, where the socket is kept open forever.
### The cost of a refresh
NOOP, CHECK and STATUS are commands that trigger a refresh of the IMAP
view, and are part of the "intermittent" commands. In some ways, the SELECT and/or EXAMINE
commands could also be interpreted as notification commands: a client that is configured
to poll a mailbox every 15 minutes will not use NOOP; running EXAMINE will be enough.
In our case, all these commands are similar in the sense that they load or refresh the in-memory index
of the targeted mailbox. To illustrate my point, I will run SELECT, NOOP, CHECK, and then STATUS on another mailbox, all in a row.
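Concretely, the sequence looks like this (the mailbox name passed to STATUS is illustrative):

```
a1 SELECT INBOX
a2 NOOP
a3 CHECK
a4 STATUS Archive (MESSAGES UNSEEN UIDNEXT)
```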
![Refresh plot](refresh.png)
The first CPU spike is LOGIN/SELECT, the second is NOOP, the third CHECK, the last one STATUS.
CPU spikes are short, memory usage is stable.
Refresh commands should not be an issue for Aerogramme in the near future.
### Continuously connected clients
IDLE (and NOTIFY, which is currently not implemented in Aerogramme) are commands
that keep a socket open. These commands are sensitive: while many protocols
are one-shot, so your users spread their requests over time, with these commands
all your users are continuously connected.
In the graph below, we plot the resource usage of 16 users, each with a 100-email mailbox, that log into the system,
select their inbox, switch to IDLE, and then, one by one, receive an email and are notified.
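A single client in this test behaves roughly like the following simplified trace; the untagged EXISTS line is the push notification sent by the server when the new email arrives:

```
C: a1 SELECT INBOX
C: a2 IDLE
S: + idling
S: * 101 EXISTS
C: DONE
S: a2 OK IDLE completed
```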
![Idle Parallel](05-idle-parallel.png)
Memory usage is linear with the number of users.
If we extrapolate this observation, it would imply that 1k users = 2GB of RAM.
That's not negligible, and it should be watched closely.
In the future, if it turns out to be an issue, we could consider optimizations like 1) unloading the mailbox index
and 2) sharing the notification/wake-up mechanism between users.
## Query Commands
Query commands are the most used commands in IMAP:
they are very expressive and allow clients to fetch only what they need,
such as a list of emails without their content, or an email body without
its attachments.
Of course, this expressiveness creates complexity!
### Fetching emails
Often, IMAP clients are at first only interested in email metadata.
For example, the ALL keyword fetches some metadata: flags, size, sender, recipient, etc.
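In RFC 3501, ALL is a macro for `(FLAGS INTERNALDATE RFC822.SIZE ENVELOPE)`, so the test boils down to a single command over the whole mailbox (the tag is illustrative):

```
a1 FETCH 1:* ALL
```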
Resource usage when fetching this information for 1k emails is depicted below.
![Fetch All 1k mail](06-fetch-all.png)
The CPU spike is short and memory usage is low: nothing alarming in terms of performance.
IMAP standardizes another keyword, FULL, that also returns the "shape" of a MIME email as an S-expression.
Indeed, a MIME email can be seen as a tree where each node/leaf is a "part".
In Aerogramme, this shape is - as of 2024-02-17 - not pre-computed and not saved in the database, and thus the full email must be fetched and parsed.
So, when I tried to fetch this shape on 1k emails, Garage crashed:
```
ERROR hyper::server::tcp: accept error: No file descriptors available (os error 24)
```
Indeed, the file descriptor `ulimit` is set to 1024 on my machine, and apparently I tried to open more than 1024 descriptors
for a single request... It's definitely an issue that must be fixed, but for this article,
I will simply increase the limit to make the request succeed.
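For reference, raising the limit for the current shell before launching the server can be done like this (the value 65536 is an arbitrary example, not the value used for the test):

```bash
# Raise the maximum number of open file descriptors for this shell session
ulimit -n 65536
```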
I get the following graph.
![Fetch Full 1k mail](07-fetch-full.png)
With a spike at 300MB, it's clear we are fetching the full mailbox before starting to process it.
While it's a performance issue, it's also a stability/predictability issue: any user could trigger huge allocations on the server.
### Searching
<!--
```
3 SEARCH HEADER "Message-ID" "<4a83801e-4848-fbe5-0afa-ef8592d99a52@saint-ex.deuxfleurs.org>" UNDELETED SINCE 1-Jan-2020
```
-->
<!--
```
SEARCH UNDELETED SINCE 2023-11-17
UID SEARCH HEADER "Message-ID" "<x@y.z>" UNDELETED
UID SEARCH 1:* UNSEEN
UID SEARCH BEFORE 2024-02-09
```
-->
First, we start with a SEARCH command inspired by what we have seen in the logs, run on the whole mailbox, and that can be answered
without fetching the full emails from the blob storage.
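Such a query only needs the metadata index; it looks like the commands seen earlier in the logs, for example (the exact criteria and date are illustrative):

```
a1 UID SEARCH UNDELETED SINCE 1-Jan-2024
```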
![Search meta](search-meta.png)
Spike order: 1) an artifact to ignore, 2) LOGIN+SELECT, 3) SEARCH, 4) LOGOUT.
We load ~10MB in memory to answer the request, which is quite fast.
But we also know that some SEARCH requests will require fetching some content
from the S3 object storage, and in this case, the profile is different.
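This is the case as soon as the query has to look inside the message itself, for example:

```
a2 UID SEARCH TEXT "something"
```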
![Search body](search-body.png)
We have the same profile as FETCH FULL: a huge memory allocation and a very CPU-intensive task.
The conclusion is similar to FETCH: while it's OK for these commands to be slow, it's not OK to allocate so much memory.
### Listing mailboxes
Another kind of object that can be queried in IMAP is the mailbox, through the LIST command.
The test consists of 1) LOGIN, 2) LIST, and 3) LOGOUT, done on a user account with 5 mailboxes.
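A typical form of this command, which enumerates all the mailboxes of the account, is:

```
a1 LIST "" "*"
```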
![List mailboxes](list.png)
There are only 2 spikes (LOGIN and LOGOUT), as the mailbox list is loaded eagerly when the user connects.
Because it's a small data structure, it's quick to parse, which explains why there is no CPU/memory spike
for the LIST command itself.
---
## Discussion
At this level of maturity, the main goal for Aerogramme is predictable and stable resource usage on the server side.
Indeed, there is nothing more annoying than a user, honest or malicious, breaking the server by running
a resource-intensive command.
**Querying bodies** - These commands may load the full mailbox in RAM. For some parameters (eg. FETCH BODY),
it might be possible to precompute data, but for others (eg. SEARCH TEXT "something") it's not possible, so precomputing
is not a definitive solution. Being slow is also acceptable here: we just want to avoid being resource intensive. The solution I envision is to "stream"
the fetched emails and process them one by one. However, for some commands like FETCH 1:* BODY[], which to the best of my knowledge are
never run by real clients, this will not be enough: Aerogramme will drop the fetched emails, but it will still keep an in-memory copy inside
the response object awaiting delivery. So we might want to implement response streaming too.
**argon2** - Login resource usage is due to argon2, but compared to many other protocols, authentications in IMAP occur way more often.
For example, if a user configures their email client to poll their mailbox every 5 minutes, this client will authenticate 288 times in a single day (24 * 60 / 5 = 288)!
argon2's resource usage is a tradeoff with brute-forcing difficulty; reducing its resource usage is thus a possible performance mitigation,
at the cost of reduced security. Other options might lie in the evolution of our authentication system: even if I don't
know the current state of OAUTH support in existing IMAP clients, it could be considered as an option.
Indeed, the user enters their password once, during the configuration of their account, and then a token is generated and stored in the client.
The credentials could be stored in the token (think of a JSON Web Token for example), avoiding the expensive KDF on each connection.
**IDLE** - IDLE RAM usage is the same as for other commands, but IDLE keeps the RAM allocated for way longer.
To illustrate my point, let's suppose an IMAP session uses 5MB of RAM,
and consider two populations of 1 000 users each. We suppose users monitor only one mailbox (which is not true in many cases).
The first population, aka *the poll population*, configures their clients to poll the server every 5 minutes; polling takes 2 seconds (LOGIN + SELECT + LOGOUT).
The second population, aka *the push population*, configures their clients to "receive push messages", i.e. to use IMAP IDLE.
For the *push population*, the RAM usage will
be a stable 5GB (5MB * 1 000 users): all users will always be connected.
With a 1MB session, we could reduce the RAM usage to 1GB: any improvement on the base RAM of a session
will be critical to IDLE with the current design.
For the *poll population*, we can split the time into 150 ticks (5 minutes / 2 seconds = 150 ticks).
It seems the problem can be mapped to a [Balls into bins problem](https://en.wikipedia.org/wiki/Balls_into_bins_problem)
with random allocation (yes, I know, assuming random allocation might not hold in many situations).
Based on the Wikipedia formula, if I am not wrong, we can suppose that, with high probability, at most 13 clients will be connected
at once, which means 65MB of RAM usage (5MB * 13 clients/tick = 65MB/tick).
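As a quick sanity check on this estimate (this simulation is mine, not part of the original analysis), we can randomly assign the 1 000 polling clients to the 150 ticks and look at the busiest tick:

```python
import random

USERS, TICKS, TRIALS = 1_000, 150, 1_000

busiest = []
for _ in range(TRIALS):
    # each client polls at a uniformly random tick of the 5-minute window
    bins = [0] * TICKS
    for _ in range(USERS):
        bins[random.randrange(TICKS)] += 1
    busiest.append(max(bins))

# average and worst "busiest tick" over all trials
print(sum(busiest) / TRIALS, max(busiest))
```

The busiest tick typically hosts a bit more than a dozen simultaneous clients, the same order of magnitude as the estimate above.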
With these *back-of-the-envelope calculations*, we understand how crucial the IDLE RAM consumption is compared
to other commands, and how the base RAM consumption of a user will impact the service.
**Large email streaming** - Finally, email streaming could really improve RAM usage for large emails, especially on APPEND and LMTP delivery,
or even on FETCH when the body is required. But implementing such a feature would require an email parser that
can work on streams, which in turn [is not trivial](https://github.com/rust-bakery/nom/issues/1160).
While it seems premature to act now, these spots are great candidates for closer monitoring in future performance evaluations.
Fixing these points, beyond the simple mitigations, will involve important design changes in Aerogramme, which means,
in the end, writing a lot of code! That's why I think Aerogramme can live with these "limitations" for now,
and we will decide on these points when it is *really* required.
## Conclusion
Back to the question "Does Aerogramme use too much RAM?":
it of course depends on the context.
We want to start with 1 000 users, a 1k-email INBOX each, and a server with 8GB of RAM,
and Aerogramme seems ready for a first deployment in this context.
So for now, the answer could be "No, it does not".
Of course, in the long term,
we expect better resource usage, and I am convinced that for many people, today the answer is still "Yes, it does".
Based on this benchmark, I identified 3 low-hanging fruits to improve performance that do not require major design changes: 1) in FETCH+SEARCH queries, handling emails one after another instead of loading the full mailbox in memory, 2) streaming FETCH responses instead of aggregating them in memory, and 3) reducing the RAM usage of a base user by tweaking its Garage connector configuration.
Collecting production data will then help prioritize other, more ambitious work on the authentication side,
on the idling side, and on the email streaming side.


View file

@ -1,6 +0,0 @@
+++
template = "documentation.html"
page_template = "documentation.html"
redirect_to = "documentation/quick-start/"
+++


View file

@ -1,6 +0,0 @@
+++
title = "CalDAV"
weight = 10
+++
*not yet implemented*

View file

@ -1,6 +0,0 @@
+++
title = "CardDAV"
weight = 20
+++
*not yet implemented*

View file

@ -1,6 +0,0 @@
+++
title = "Proxy"
weight = 40
+++
*Not yet written*

View file

@ -1,6 +0,0 @@
+++
title = "WASM"
weight = 50
+++
*Not yet implemented*

View file

@ -1,24 +0,0 @@
+++
title = "Connect"
weight = 20
sort_by = "weight"
template = "documentation.html"
+++
This section references all the clients that can be connected to Aerogramme,
what features they can use, and how well they behave.
## Standard protocols
- [IMAP](@/documentation/connect/imap.md) - Access to your mailbox
<!-- - [CalDAV](@/documentation/connect/caldav.md) - Access to your calendars
- [CardDAV](@/documentation/connect/carddav.md) - Access to your contacts
-->
## Client-side encryption
- **Companion Proxy** - Companion proxy is not yet documented as it will be refined in the near future.
<!--
- [Proxy](@/documentation/connect/proxy.md) - Proxy approach to access to your profile
- [WASM](@/documentation/connect/wasm.md) - Library approach to access to your profile
-->

View file

@ -1,23 +0,0 @@
+++
title = "IMAP"
weight = 5
+++
The following clients are known to work with Aerogramme:
| Client | Date | Aerogramme version | Status |
|----------------|------------|--------------------|--------|
| Mutt | 2023 | pre 0.1 | ✅ |
| Thunderbird | 2024-01-08 | pre 0.2 | ✅ |
| Geary | 2024-01-08 | pre 0.2 | ✅ |
| Evolution | 2024-01-08 | pre 0.2 | ✅ |
| K9 Mail | 2024-01-08 | pre 0.2 | ✅ |
| Fair Email | 2024-01-08 | pre 0.2 | ✅ |
| Apple Mail iOS | 2024-01-08 | pre 0.2 | ✅ |
| Outlook iOS | 2024-01-08 | pre 0.2 | ✅ |
| Gmail Android | 2024-01-08 | pre 0.2 | ✅ |
| Windows Mail | 2024-01-08 | pre 0.2 | ✅ |
If you find a regression, please report it.
If you use a well-known email client that works or does not work with Aerogramme,
please report it here, so we can track Aerogramme's IMAP compatibility with the rest of the ecosystem.

View file

@ -1,6 +0,0 @@
+++
title = "Auto-discovery"
weight = 90
+++
Todo

View file

@ -1,6 +0,0 @@
+++
title = "Backups"
weight = 120
+++
Todo

View file

@ -1,6 +0,0 @@
+++
title = "Distributed Storage"
weight = 70
+++
Todo

View file

@ -1,6 +0,0 @@
+++
title = "LDAP User Management"
weight = 60
+++
LDAP

View file

@ -1,6 +0,0 @@
+++
title = "Hardened configuration"
weight = 130
+++
Todo

View file

@ -1,6 +0,0 @@
+++
title = "Observability"
weight = 110
+++
Todo

View file

@ -1,6 +0,0 @@
+++
title = "Orchestrators"
weight = 80
+++
Todo

View file

@ -1,6 +0,0 @@
+++
title = "Updates"
weight = 100
+++
Todo

View file

@ -1,50 +0,0 @@
+++
title = "Cookbook"
weight = 10
sort_by = "weight"
template = "documentation.html"
+++
This cookbook will guide you from your first steps
to a real deployment of Aerogramme.
## Single-node, minimal deployment
A minimal deployment will allow you to deploy Aerogramme on a single server
in a way that works for a small production deployment.
However, as Aerogramme is a distributed-first server,
you will not benefit from all of its nice properties, and much manual
work will be required.
- [Local storage](@/documentation/cookbook/single-node-garage-storage.md)
- [Configuration file](@/documentation/cookbook/config.md)
- [User management](@/documentation/cookbook/static-user-management.md)
- [TLS encryption](@/documentation/cookbook/tls-encryption.md)
- [Integration with a service manager](@/documentation/cookbook/service-manager.md) (systemd or docker)
- [SMTP server integration](@/documentation/cookbook/smtp-server.md) (MTA)
*Multi-node deployments and lifecycle maintainance are not covered yet.*
<!--
## Multi-nodes, standard deployment
Aerogramme is intended for multi-nodes deployment. This guide
will cover the specificities introduced by a multi-node deployment.
- [LDAP user management](@/documentation/cookbook/ldap.md)
- [Distributed storage](@/documentation/cookbook/garage-cluster-storage.md)
- [Orchestrators](@/documentation/cookbook/orchestrators.md)
- [Auto-discovery](@/documentation/cookbook/autodiscovery.md)
## Lifecycle
Hopefully you will love Aerogramme, and thus you will have to do some maintainance on it.
- [Updates](@/documentation/cookbook/updates.md)
- [Observability](@/documentation/cookbook/observability.md)
- [Backups](@/documentation/cookbook/backups.md)
## Hardened flavor
- [Manual configuration](@/documentation/cookbook/manual-hardened.md)
-->

View file

@ -1,37 +0,0 @@
+++
title = "Configuration file"
weight = 10
+++
In the [quickstart](@/documentation/quick-start/_index.md), you launched Aerogramme in "development mode", which does not require a configuration file. But for real-world usage, you will need to specify many things: how your users are managed, which port to use, how and where data is stored, etc.
This page describes how to write your first configuration file.
If you want a complete reference, check the dedicated [Configuration Reference](@/documentation/reference/config.md) page.
```toml
role = "Provider"
pid = "aerogramme.pid"
[imap_unsecure]
bind_addr = "[::]:1143"
[auth]
bind_addr = "[::1]:12345"
[lmtp]
bind_addr = "[::1]:1025"
hostname = "example.tld"
[users]
user_driver = "Static"
user_list = "users.toml"
```
Copy this content into a file named `aerogramme.toml`.
Also create an empty file named `users.toml` (Aerogramme does not yet know how to create it automatically).
You can then start Aerogramme with this configuration file:
```bash
aerogramme -c aerogramme.toml provider daemon
```

View file

@ -1,97 +0,0 @@
+++
title = "Service Managers (eg. systemd)"
weight = 40
+++
You may want to start Aerogramme automatically on boot,
restart it if it crashes, etc. Such actions can be achieved through a service manager.
## systemd
We make some assumptions for this systemd deployment.
- Your aerogramme binary is located at `/usr/local/bin/aerogramme`.
- Your configuration file is located at `/etc/aerogramme/config.toml`.
- If you use Aerogramme's user management, the user list is set to `/etc/aerogramme/users.toml`.
Create a file named `/etc/systemd/system/aerogramme.service`:
```ini
[Unit]
Description=Aerogramme Email Server
After=network-online.target
Wants=network-online.target
[Service]
Environment='RUST_LOG=aerogramme=info' 'RUST_BACKTRACE=1'
ExecStart=/usr/local/bin/aerogramme -c /etc/aerogramme/config.toml provider daemon
DynamicUser=true
ProtectHome=true
NoNewPrivileges=true
[Install]
WantedBy=multi-user.target
```
**A note on hardening:** The Aerogramme daemon is not expected to write to the filesystem.
When you use the `aerogramme provider account` commands, the write is done by your current user/process,
not by the daemon process. That's why we don't define a `StateDirectory`.
To start the service and automatically enable it at boot:
```bash
sudo systemctl start aerogramme
sudo systemctl enable aerogramme
```
To see if the service is running and to browse its logs:
```bash
sudo systemctl status aerogramme
sudo journalctl -u aerogramme
```
To add a new user:
```bash
sudo aerogramme \
-c /etc/aerogramme/config.toml \
provider account add --login alice --setup #...
sudo systemctl reload aerogramme
```
## Other service managers
Other service managers exist: SMF (illumos/Solaris), OpenRC (Alpine & co), rc (FreeBSD, OpenBSD, NetBSD).
Feel free to open a PR to add some documentation.
You would not use System V initialization scripts...
<!--
## docker-compose
An example docker compose deployment with Garage included:
```yml
version: "3"
services:
aerogramme:
image: registry.deuxfleurs.org/aerogramme:0.2.0
restart: unless-stopped
ports:
- "1025:1025"
- "143:1143"
volumes:
- aerogramme.toml:/aerogramme.toml
- users.toml:/users.toml
garage:
image: docker.io/dxflrs/garage:v0.9.1
restart: unless-stopped
volumes:
- garage.toml:/etc/garage.toml
- garage-meta:/var/lib/garage/meta
- garage-data:/var/lib/garage/data
```
-->

View file

@ -1,52 +0,0 @@
+++
title = "Local storage"
weight = 5
+++
Currently, Aerogramme does not support local storage (i.e. storing emails
on your filesystem directly). It might support this feature in the future,
but no ETA has been announced yet. In the meantime, we will deploy a single-node
Garage cluster to store the data.
Start by following the [Garage Quickstart](https://garagehq.deuxfleurs.fr/documentation/quick-start/) up to "Creating a cluster layout" (included).
## Setup your storage
Create one key per user:
```bash
garage key create aerogramme
# ...
```
*Do not forget to note the key, it will be needed later.*
Create an Aerogramme bucket for each user:
```bash
garage bucket create aerogramme-alice
garage bucket create aerogramme-bob
garage bucket create aerogramme-charlie
# ...
```
*Having one bucket per user is an opinionated design choice made to support Aerogramme [Per-user storage, per-user encryption](@/documentation/design/per-user-encryption.md) goal.*
And then allow the aerogramme key on each bucket:
```bash
garage bucket allow --read --write --key aerogramme aerogramme-alice
garage bucket allow --read --write --key aerogramme aerogramme-bob
garage bucket allow --read --write --key aerogramme aerogramme-charlie
# ...
```
## Automating it
You can automate this by using the [Garage Admin API](https://garagehq.deuxfleurs.fr/documentation/reference-manual/admin-api/) and the [S3 API](https://garagehq.deuxfleurs.fr/documentation/reference-manual/s3-compatibility/).
## Quota
You can use Garage per-bucket quotas to limit the amount of space a user can
use, but this limit is not reported on the IMAP side.
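As an illustration only - the exact subcommand and flag names depend on your Garage version, so check `garage bucket --help` before copying this - setting a quota could look like:

```bash
# Assumption: recent Garage versions expose per-bucket quotas roughly like this
garage bucket set-quotas --max-size 10GiB aerogramme-alice
```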

View file

@ -1,143 +0,0 @@
+++
title = "SMTP servers"
weight = 50
+++
SMTP servers that are recommended for Aerogramme are the ones that support:
- TCP delivery over the LMTP protocol
- TCP authentication over the [Dovecot SASL Auth protocol](https://doc.dovecot.org/developer_manual/design/auth_protocol/)
Postfix supports these 2 features and is the only recommended choice *for now*.
## Postfix
Configuring [Postfix](https://www.postfix.org/) requires to add these 4 lines to `main.cf`:
```ini
smtpd_sasl_type = dovecot
smtpd_sasl_path = inet:localhost:12345
virtual_mailbox_domains = your-domain.tld
virtual_transport = lmtp:[::1]:1025
```
Aerogramme implements the Dovecot SASL protocol: by configuring Postfix with it,
Postfix delegates authentication to Aerogramme.
Make sure that `your-domain.tld` is not already configured in the `mydomain` variable,
or it might conflict with Postfix's local delivery logic.
*Indeed, Postfix internally has a default configuration for "local" mail delivery,
which maps to the old way of managing emails. LMTP delivery is more recent, and maps
to the "virtual" mail delivery mechanisms of Postfix. Your goal is thus to deactivate
as much as possible the "local" delivery capabilities of Postfix and only allow
the "virtual" ones.*
You can learn more about Postfix LMTP capabilities on this page: [lmtp(8)](https://www.postfix.org/lmtp.8.html).
## Maddy
[Maddy](https://maddy.email/) is a more recent email server written in Go.
However, it supports LMTP delivery only over a UNIX socket, not over TCP: without a specific adapter, it's not yet compatible with Aerogramme.
For LMTP delivery, read [SMTP & LMTP transparent forwarding](https://maddy.email/reference/targets/smtp/#smtp-lmtp-transparent-forwarding).
For the Dovecot Auth Protocol, read [Dovecot SASL](https://maddy.email/reference/auth/dovecot_sasl/).
## OpenSMTPD
Something like below might work (untested):
```bash
action "remote_mail" lmtp "/var/run/dovecot/lmtp" rcpt-to virtual <virtuals>
match from any for domain "your-domain.tld" action "remote_mail"
```
The syntax is described in their manpage [smtpd.conf(5)](https://man.openbsd.org/smtpd.conf#lmtp).
OpenSMTPD does not support Dovecot's SASL protocol; you can signal your interest [in their dedicated issue](https://github.com/OpenSMTPD/OpenSMTPD/issues/1085).
## Chasquid
[chasquid](https://blitiri.com.ar/p/chasquid/) supports [LMTP delivery](https://blitiri.com.ar/p/chasquid/howto/#configure-chasquid)
and the [Dovecot Auth Protocol](https://blitiri.com.ar/p/chasquid/docs/dovecot/) but only over UNIX sockets. Thus, it's not yet compatible with Aerogramme.
## Other servers
[Exim](https://www.exim.org/) has some support [for LMTP](https://www.exim.org/exim-html-current/doc/html/spec_html/ch-the_lmtp_transport.html) too.
[sendmail](https://www.proofpoint.com/us/products/email-protection/open-source-email-solution) might deliver to LMTP through a dedicated binary named [smtpc](https://www.sympa.community/manual/customize/lmtp-delivery.html).
<!--
Let start by creating a folder for Postfix, for example `/opt/aerogramme-postfix`:
```bash
mkdir /tmp/aerogramme-postfix
cd /opt/aerogramme-postfix
mkdir queue
```
To run Postfix, you need some users / groups setup (do it in a container if you don't want to mess up your system):
```bash
sudo useradd postfix
sudo groupadd postdrop
```
The considered `main.cf`:
```
# postfix files
queue_directory=/tmp/postfix-test/queue
data_directory=/tmp/postfix-test/data
maillog_file=/dev/stdout
# nuke postfix legacy as much as possible (an era of UNIX account and open relay on local networks...)
mynetworks=127.0.0.0/8
compatibility_level=3.6
alias_database=
alias_maps=
# add support for authentication
smtpd_sasl_auth_enable=yes
smtpd_tls_auth_only = yes
smtpd_relay_restrictions =
permit_sasl_authenticated
reject_unauth_destination
# add support for TLS (RSA only for now)
smtpd_tls_cert_file=/home/quentin/.lego/certificates/saint-ex.deuxfleurs.org.crt
smtpd_tls_key_file=/home/quentin/.lego/certificates/saint-ex.deuxfleurs.org.key
# aerogramme specific configuration
smtpd_sasl_type = dovecot
smtpd_sasl_path = inet:localhost:12345
virtual_mailbox_domains=saint-ex.deuxfleurs.org
virtual_transport=lmtp:[::1]:1025
```
The considered `master.cf`:
```
smtp inet n - n - - smtpd
smtp unix - - n - - smtp
smtps inet n - n - - smtpd
-o smtpd_tls_wrappermode=yes
-o smtpd_sasl_auth_enable=yes
-o smtpd_client_restrictions=permit_sasl_authenticated,reject
-o milter_macro_daemon_name=ORIGINATING
lmtp unix - - n - - lmtp
anvil unix - - n - 1 anvil
rewrite unix - - n - - trivial-rewrite
cleanup unix n - n - 0 cleanup
qmgr fifo n - n 300 1 qmgr
tlsmgr unix - - n 1000? 1 tlsmgr
bounce unix - - n - 0 bounce
defer unix - - n - 0 bounce
trace unix - - n - 0 bounce
error unix - - n - - error
retry unix - - n - - error
discard unix - - n - - discard
virtual unix - n n - - virtual
proxymap unix - - n - - proxymap
postlog unix-dgram n - n - 1 postlogd
```
-->

View file

@ -1,70 +0,0 @@
+++
title = "User Management"
weight = 20
+++
Aerogramme can externalize its user management through [LDAP](@/documentation/reference/config.md), but if you target a simple deployment, it also has an internal user management system that is covered here.
As a pre-requisite, you must have your `aerogramme.toml` file configured for "Static" user management as described in [Configuration file](@/documentation/cookbook/config.md). You also need a configured Garage instance, either [local](@/documentation/cookbook/single-node-garage-storage.md) or distributed.
## Adding users
Once all the previous prerequisites are in place, Aerogramme provides a command-line utility to add a user:
```bash
aerogramme provider account add --login alice --setup <(cat <<EOF
email_addresses = [ "alice@example.tld", "alice.smith@example.tld" ]
clear_password = "hunter2"
storage_driver = "Garage"
s3_endpoint = "http://localhost:3900"
k2v_endpoint = "http://localhost:3904"
aws_region = "garage"
aws_access_key_id = "GKa8..."
aws_secret_access_key = "7ba95..."
bucket = "aerogramme-alice"
EOF
)
aerogramme provider account add --login bob --setup # ...
# ...
aerogramme provider account add --login charlie --setup # ...
# ...
```
*You must run this command for all your users.*
If you don't set the `clear_password` field, it will be interactively asked.
This command will edit your `user_list` file. If your Aerogramme daemon is already running, you must reload it in order to load the newly added users. To reload Aerogramme, run:
```bash
aerogramme provider reload
```
## Change account password
You might need to change an account password, you can run:
```bash
aerogramme provider account change-password --login alice
```
You can pass the old and new password through environment variables:
```bash
AEROGRAMME_OLD_PASSWORD=x \
AEROGRAMME_NEW_PASSWORD=y \
aerogramme provider account change-password --login alice
```
*Do not forget to reload*
## Delete account
```bash
aerogramme provider account delete --login alice
```
*Do not forget to reload*

View file

@ -1,64 +0,0 @@
+++
title = "TLS"
weight = 30
+++
In the [Configuration File](@/documentation/cookbook/config.md) page of the cookbook, we configure a cleartext IMAP service
that is unsecure, as anyone spying on the network can intercept the user's password.
## Activate IMAP TLS
You must replace the `[imap_unsecure]` block of your configuration file with a new `[imap]` block:
```toml
[imap]
bind_addr = "[::]:993"
certs = "cert.pem"
key = "key.pem"
```
## Generate self-signed certificates
If you want to quickly try the TLS endpoint, you can generate a self-signed certificate with openssl:
```bash
openssl ecparam -out key.pem -name secp256r1 -genkey
openssl req -new -key key.pem -x509 -nodes -days 365 -out cert.pem
```
This configuration is not secure, as it is vulnerable to man-in-the-middle attacks.
It will also trigger a big red warning in many email clients, and sometimes it will even be impossible to configure an account.
## Generate valid certificates through Let's Encrypt
Automated certificate renewal has been popularized by Let's Encrypt through the [ACME protocol](https://en.wikipedia.org/wiki/Automatic_Certificate_Management_Environment).
Today, many certificate providers implement it, like ZeroSSL, Buypass Go SSL, or even Google Cloud.
Many clients that implement the ACME protocol exist (certbot, lego, etc.), [a very long list exist on LE website](https://letsencrypt.org/docs/client-options/).
Finally, certificates can be obtained in exchange of a validation, that can occur over HTTP (HTTP01 challenge) or DNS (DNS01 challenge).
This example will be given for Let's Encrypt with Lego for a DNS01 challenge with Gandi as the DNS provider.
```bash
GANDIV5_API_KEY=xxx \
GANDIV5_PERSONAL_ACCESS_TOKEN=xxx \
lego \
--email you@example.tld \
--dns gandiv5 \
--domain example.tld \
--domains imap.example.tld \
--domains smtp.example.tld \
run
```
*Note 1: theoretically only `GANDIV5_PERSONAL_ACCESS_TOKEN` should be required, but it did not work for me.*
*Note 2: we generate a certificate for the root domain and SMTP because it will simplify your testing while following the cookbook.
But if you already have a working email stack, it's not required.*
If the command ran successfully, you now have 2 files:
- `.lego/certificates/example.tld.crt`
- `.lego/certificates/example.tld.key`
You can directly use them in Aerogramme (the first one must be put on `certs` and the second one on `key`).
You must configure some way to automatically renew your certificates, the [lego documentation](https://go-acme.github.io/lego/usage/cli/renew-a-certificate/) explains how you can do it.

View file

@ -1,33 +0,0 @@
+++
title = "Concepts"
weight = 40
sort_by = "weight"
template = "documentation.html"
+++
## Goals
**Highly resilient** - Multiple instances of Aerogramme can be run in parallel without coordination.
Multi-region support, survives datacenter failures.
**Easy to operate** - Transparently replicates mailboxes and solves conflicts. Integrates with your LDAP server.
**Privacy friendly** - Per-user encryption of mailboxes.
Can be run as a local proxy to hide your mailbox content from the server.
## Main concepts
[Per-user encryption](@/documentation/design/per-user-encryption.md) - Aerogramme can't persist data in plain text;
instead, its whole data model is built upon the idea that a mailbox is a series of encrypted blobs. These blobs do not reveal
the mailbox name, the metadata of stored emails, or even the flags that have been put on them.
**Continuous Mailbox Merging** - As multiple instances of Aerogramme can be run simultaneously, and it is possible
for 2 instances to interact with the same mailbox (over Garage), each process monitors external writes on the mailboxes
it tracks and automatically does the merging [in a correct way](@/documentation/internals/imap_uid.md).
**Modular design** - Login and mailbox storage are abstracted behind interfaces: multiple
implementations are thus possible.
**Microservice** - Aerogramme is stateless and tries to adhere as much as possible to the [12 factor app](https://12factor.net/) principles, so it's easy to run in a cluster.

View file

@ -1,47 +0,0 @@
+++
title = "Per-user encryption"
weight = 10
+++
Aerogramme can't store plaintext data: all user data must be encrypted with a per-user key on the storage target.
Of course, cryptography is always a tradeoff with other properties (usability, compatibility, features, etc.),
so the way the key is derived and where the encryption/decryption takes place can be configured.
## Compared to PGP
PGP only encrypts the body of the email: it keeps the metadata of your email in cleartext (fields like From:, To:, or Subject: are readable by an attacker),
and it can't protect your flags, your mailbox names, etc. Conversely, all this data is encrypted in Aerogramme.
## Security flavors
These different configurations are identified as flavors:
**Transparent Flavor** - Users' data is decrypted in the server's RAM. The key is derived from the user's password upon IMAP login.
It's completely transparent to end users. While an attacker having full access to the server can still compromise users' data, it reduces the attack surface.
In terms of security, it's similar to [TREES](https://0xacab.org/liberate/trees) (from [RiseUp](https://riseup.net)) or [scrambler](https://github.com/posteo/scrambler-plugin) (from [Posteo](https://posteo.de/)).
**Hardened Flavor** - Users' data is decrypted on the user's device. The private key is only stored on the user's device.
The server knows only users' public keys. When an email is received by the server, it is directly encrypted and stored by the server.
The user must run Aerogramme locally as a proxy that manipulates the encrypted blobs stored on the server
and decrypts them locally, exposing an IMAP proxy interface. An attacker having full access to the server at a point in time
will not be able to compromise your already received data (but can intercept new emails). It's similar to [Proton Mail Bridge](https://proton.me/fr/mail/bridge),
but keep in mind that Aerogramme does not (yet) support end-to-end email encryption like Proton Mail or Tutanota, *so Aerogramme is less secure*.
<!--When run on server (both for the transparent and hardened flavor), Aerogramme must be started in the "provider mode", as in "email service provider".
When run on the end-user device (only the hardened flavor require that), Aerogramme must be started in the "companion mode", as in "a companion process of your email client".
These 2 words are materialized as 2 subcommands on the Aerogramme binary: `aerogramme provider` and `aerogramme companion`.-->
## Aerogramme roles
The transparent flavor only requires Aerogramme to be run on the service provider's server, while the hardened flavor requires the end user to run a local proxy.
More specifically:
**Provider** - Provider must be run by the service provider; it is used for both flavors. For the transparent flavor, it both receives emails through LMTP and exposes
the mailbox through IMAP. For the hardened flavor, it only receives emails through LMTP and encrypts them with the user's public key, but can't expose them through IMAP as the server
can't decrypt them. Provider commands are available through the `aerogramme provider` subcommand.
**Companion** - Companion must be run by the end user; it is used only for the hardened flavor. It fetches encrypted blobs from the server
of the email provider, decrypts them locally, and exposes the mailbox over the IMAP interface, acting as a local proxy.
Companion commands are available through the `aerogramme companion` subcommand.

View file

@ -1,10 +0,0 @@
+++
title = "Development"
weight = 60
sort_by = "weight"
template = "documentation.html"
+++
To help you in the development, you might need:
- [Help debugging the protocol with socat](@/documentation/development/netcat.md)
- [Help finding datasets of email](@/documentation/development/dataset.md)

View file

@ -1,20 +0,0 @@
+++
title = "Datasets"
weight = 20
+++
To debug / fuzz Aerogramme, we need some datasets.
## Emails datasets
- [stalwartlabs/mail-parser](https://github.com/stalwartlabs/mail-parser/tree/main/tests)
- [basecamp/mail](https://github.com/basecamp/mail/tree/master/spec/fixtures)
- [Enron dataset - 500k entries](https://www.cs.cmu.edu/~enron/)
- [Jeb Bush dataset - 290k entries](https://ab21www.s3.amazonaws.com/JebBushEmails-Text.7z)
- [spambase dataset](https://archive.ics.uci.edu/ml/datasets/spambase) (also contains legit emails)
- mailing lists
- [W3C](https://lists.w3.org/Archives/Public/)
- [Wikimedia](https://lists.wikimedia.org/hyperkitty/)
- [Apache](https://commons.apache.org/mail-lists.html) - [tomcat](https://lists.apache.org/list.html?dev@tomcat.apache.org), [kafka](https://lists.apache.org/list.html?dev@kafka.apache.org).
- [Linux](https://marc.info/?l=linux-kernel)
- your own inbox

View file

@ -1,120 +0,0 @@
+++
title = "Debug with socat"
weight = 10
+++
In this document, we collected some raw IMAP traces
that could help you quickly test/debug Aerogramme.
Start with:
```
socat - tcp:localhost:1143,crlf
```
## Login
```
S: * OK Hello
C: A1 LOGIN alice hunter2
S: A1 OK Completed
```
## Select mailbox
```
C: A2 SELECT INBOX
S: * 0 EXISTS
S: * 0 RECENT
S: * FLAGS (\Seen \Answered \Flagged \Deleted \Draft)
S: * OK [PERMANENTFLAGS (\Seen \Answered \Flagged \Deleted \Draft \*)] Flags permitted
S: * OK [UIDVALIDITY 1] UIDs valid
S: * OK [UIDNEXT 1] Predict next UID
S: A2 OK [READ-WRITE] Select completed
```
## Check for new mails
Here we simply use the `NOOP` command to trigger an interaction with the server.
```
C: A4 NOOP
S: * 1 EXISTS
S: A4 OK NOOP completed.
```
## See mail structure
```
C: A5 FETCH 1 FULL
S: * 1 FETCH (UID 1 FLAGS () INTERNALDATE "06-Jul-2022 14:46:42 +0000"
RFC822.SIZE 117 ENVELOPE (NIL "test" (("Alan Smith" NIL "alan" "smith.me"))
NIL NIL (("Alan Smith" NIL "alan" "aerogramme.tld")) NIL NIL NIL NIL)
BODY ("TEXT" "test" NIL "test" "test" "test" 1 1))
```
## See mail content
```
C: A6 FETCH 1 (RFC822)
S: * 1 FETCH (UID 1 RFC822 {117}
S: Subject: test
S: From: Alan Smith <alan@smith.me>
S: To: Alan Smith <alice@example.tld>
S:
S: Hello, world!
S: .
S: )
S: A6 OK FETCH completed
```
## Disconnect
```
C: A7 LOGOUT
S: * BYE Logging out
S: A7 OK Logout completed
```
## Full trace
An (old) IMAP trace extracted from Aerogramme:
```
S: * OK Hello
C: A1 LOGIN alice hunter2
S: A1 OK Completed
C: A2 SELECT INBOX
S: * 0 EXISTS
S: * 0 RECENT
S: * FLAGS (\Seen \Answered \Flagged \Deleted \Draft)
S: * OK [PERMANENTFLAGS (\Seen \Answered \Flagged \Deleted \Draft \*)] Flags permitted
S: * OK [UIDVALIDITY 1] UIDs valid
S: * OK [UIDNEXT 1] Predict next UID
S: A2 OK [READ-WRITE] Select completed
C: A3 NOOP
S: A3 OK NOOP completed.
<---- e-mail arrives through LMTP server ---->
C: A4 NOOP
S: * 1 EXISTS
S: A4 OK NOOP completed.
C: A5 FETCH 1 FULL
S: * 1 FETCH (UID 1 FLAGS () INTERNALDATE "06-Jul-2022 14:46:42 +0000"
RFC822.SIZE 117 ENVELOPE (NIL "test" (("Alan Smith" NIL "alan" "smith.me"))
NIL NIL (("Alan Smith" NIL "alan" "aerogramme.tld")) NIL NIL NIL NIL)
BODY ("TEXT" "test" NIL "test" "test" "test" 1 1))
S: A5 OK FETCH completed
C: A6 FETCH 1 (RFC822)
S: * 1 FETCH (UID 1 RFC822 {117}
S: Subject: test
S: From: Alan Smith <alan@smith.me>
S: To: Alan Smith <alan@aerogramme.tld>
S:
S: Hello, world!
S: .
S: )
S: A6 OK FETCH completed
C: A7 LOGOUT
S: * BYE Logging out
S: A7 OK Logout completed
```

View file

@ -1,10 +0,0 @@
+++
title = "Internals"
weight = 50
sort_by = "weight"
template = "documentation.html"
+++
Internals are documents that describe how Aerogramme works internally.
They are currently kept as a "knowledge base" without any proper structure.
Feel free to explore them.


View file

@ -1,102 +0,0 @@
+++
title = "Cryptography & key management"
weight = 20
+++
## Key types
Keys that are used:
- master secret key (for indexes)
- curve25519 public/private key pair (for incoming mail)
## Stored keys
Keys that are stored in K2V under PK `keys`:
- `public`: the public curve25519 key (plain text)
- `salt`: the 32-byte salt `S` used to calculate digests that index keys below
- if a password is used, `password:<truncated(128bit) argon2 digest of password using salt S>`:
- a 32-byte salt `Skey`
- followed by a secret box
- that is encrypted with a strong argon2 digest of the password (using the salt `Skey`) and a user secret (see below)
- that contains the master secret key and the curve25519 private key
## User secret
An additional secret that is added to the password when deriving the encryption key for the secret box.
This additional secret should not be stored in K2V/S3, so that just knowing a user's password isn't enough to be able
to decrypt their mailbox (supposing the attacker has a dump of their K2V/S3 bucket).
This user secret should typically be stored in the LDAP database or just in the configuration file when using
the static login provider.
## Operations pseudo-code
We summarize here the key cryptography logic for the various operations on the mailbox.
### Creating a user account
This logic is run when the mailbox is created.
Two modes are supported: password and certificate.
INITIALIZE(`user_secret`, `password`):
&nbsp;&nbsp; if `"salt"` or `"public"` already exist, BAIL
&nbsp;&nbsp; generate salt `S` (32 random bytes)
&nbsp;&nbsp; generate `public`, `private` (curve25519 keypair)
&nbsp;&nbsp; generate `master` (secretbox secret key)
&nbsp;&nbsp; calculate `digest = argon2_S(password)`
&nbsp;&nbsp; generate salt `Skey` (32 random bytes)
&nbsp;&nbsp; calculate `key = argon2_Skey(user_secret + password)`
&nbsp;&nbsp; serialize `box_contents = (private, master)`
&nbsp;&nbsp; seal box `blob = seal_key(box_contents)`
&nbsp;&nbsp; write `S` at `"salt"`
&nbsp;&nbsp; write `concat(Skey, blob)` at `"password:{hex(digest[..16])}"`
&nbsp;&nbsp; write `public` at `"public"`
INITIALIZE_WITHOUT_PASSWORD(`private`, `master`):
&nbsp;&nbsp; if `"salt"` or `"public"` already exist, BAIL
&nbsp;&nbsp; generate salt `S` (32 random bytes)
&nbsp;&nbsp; write `S` at `"salt"`
&nbsp;&nbsp; calculate `public` the public key associated with `private`
&nbsp;&nbsp; write `public` at `"public"`
### Opening the user's mailboxes (upon login)
OPEN(`user_secret`, `password`):
&nbsp;&nbsp; load `S = read("salt")`
&nbsp;&nbsp; calculate `digest = argon2_S(password)`
&nbsp;&nbsp; load `blob = read("password:{hex(digest[..16])}")`
&nbsp;&nbsp; set `Skey = blob[..32]`
&nbsp;&nbsp; calculate `key = argon2_Skey(user_secret + password)`
&nbsp;&nbsp; open secret box `box_contents = open_key(blob[32..])`
&nbsp;&nbsp; retrieve `master` and `private` from `box_contents`
&nbsp;&nbsp; retrieve `public = read("public")`
OPEN_WITHOUT_PASSWORD(`private`, `master`):
&nbsp;&nbsp; load `public = read("public")`
&nbsp;&nbsp; check that `public` is the correct public key associated with `private`
### Account maintenance
ADD_PASSWORD(`user_secret`, `existing_password`, `new_password`):
&nbsp;&nbsp; load `S = read("salt")`
&nbsp;&nbsp; calculate `digest = argon2_S(existing_password)`
&nbsp;&nbsp; load `blob = read("existing_password:{hex(digest[..16])}")`
&nbsp;&nbsp; set `Skey = blob[..32]`
&nbsp;&nbsp; calculate `key = argon2_Skey(user_secret + existing_password)`
&nbsp;&nbsp; open secret box `box_contents = open_key(blob[32..])`
&nbsp;&nbsp; retrieve `master` and `private` from `box_contents`
&nbsp;&nbsp; calculate `digest_new = argon2_S(new_password)`
&nbsp;&nbsp; generate salt `Skeynew` (32 random bytes)
&nbsp;&nbsp; calculate `key_new = argon2_Skeynew(user_secret + new_password)`
&nbsp;&nbsp; serialize `box_contents_new = (private, master)`
&nbsp;&nbsp; seal box `blob_new = seal_key_new(box_contents_new)`
&nbsp;&nbsp; write `concat(Skeynew, blob_new)` at `"new_password:{hex(digest_new[..16])}"`
REMOVE_PASSWORD(`password`):
&nbsp;&nbsp; load `S = read("salt")`
&nbsp;&nbsp; calculate `digest = argon2_S(existing_password)`
&nbsp;&nbsp; check that `"password:{hex(digest[..16])}"` exists
&nbsp;&nbsp; check that other passwords exist ?? (or not)
&nbsp;&nbsp; delete `"password:{hex(digest[..16])}"`

View file

@ -1,57 +0,0 @@
+++
title = "Data format"
weight = 10
+++
## Raw emails
Raw emails are stored in S3 encrypted.
## Bay(ou)
Checkpoints are stored in S3 at `<path>/checkpoint/<timestamp>`. Example:
```
348 TestMailbox/checkpoint/00000180d77400dc126b16aac546b769
369 TestMailbox/checkpoint/00000180d776e509b68fdc5c376d0abc
357 TestMailbox/checkpoint/00000180d77a7fe68f4f76e3b45aa751
```
Operations are stored in K2V at PK `<path>`, SK `<timestamp>`. Example:
```
TestMailbox 00000180d77400dc126b16aac546b769 RcIsESv7WrjMuHwyI/dvCnkIfy6op5Tiylf0WSnn94aMS2uagl7YeMBwdv09TiSXBpu5nJ5e/9QFSfuEI/NqKrdQkX54MOsnaIGhRb0oqUG3KNaar3BiVSvYvXuzYhk4ii+TUS2Eyd6fCCaNVNM5
TestMailbox 00000180d775f27f5542a13fc21c665e RrTSOup/zO1Ei+QrjBcDLt4vvFSY+WJPBodwY64wy2ftW+Oh3VSArvlO4SAEPmdsx1gt0HPBZYR/OkVWsZpmix1ZLFUmvdib+rjNkorHQW1p+oLVK8tolGrqk4SRwl88cqu466T4vBEpDu7tRbH0
TestMailbox 00000180d775f292b3c8da00718389b4 VAwd8SRycIwsipZW5AcSG+EIYZVWn/Uj/TADbWhb4x5LVMceiRBHWVquY08RgT/lJKdhIcUqBA15bVG3klIg8tLsWJVG784NbsZwdGRczWmngcA=
TestMailbox 00000180d775f29d24842cf375d679e0 /FbXtEwm/bijtvOdqM1XFvKUalQFAOPHp+vF9jZThZn/viY5a6W1PyHeI8kTusF6EsVPAwPHpQyjIv/ghskC0f+zUEsSUhDwQANdwLNqDLAvTA==
TestMailbox 00000180d7768ab1dc01ff504e887c62 W/fF0WitpxJ05yHeOv96BlpGymT1kVOjkIW00t9e6UE7mxkvNflu9cZSCd8PDJd2ymC0sC9bLVFAXKmNZsmCFEEHMQSyrX61qTYo4KFCZMp5zm6fXubaYuurrzjXzfUP/R7kBvICFZlF0daf0SwX
TestMailbox 00000180d7768aba629c7ad6adf25228 IPzYGNsSepCX2AEnee/1Eas9a3c5esPSmrNkvaj4XcFb6Ft2KC8N6ubUR3wB+K0oYCTQym6nhHG5dlAxf6NRu7Rk8YtBTBmSqtGqd6kMZ3bU5b8=
TestMailbox 00000180d7768ac1870cda61784114d4 aaLiaWxfx1mxh6aoKE3xUUfZWhivZ/K7ixabflFDW7FO/qbpvCaa+Y6w4lQemTy6m+leAhXGN+Dbyv2qP20yJ9O4oJF5d3Lz5Iv5uF18OxhVZzw=
TestMailbox 00000180d776e4fb294ccdab2612b406 EtUPrLgEeOyab2QRnSie4I3Me9dDh10UdwWnUKdGa/8ezMJDtiy7XlW+tUfJdqtu6Vj7nduT0emDOXbBZsNwlcmzgYNwuNu3I9AfhZTFWtwLgB+wnAgB/jim82DDrJfLia8kB2eA2ao5jfJ3uMSZ
TestMailbox 00000180d776e501528546d340490291 Lz4Z9wCTk1lZ86lL01urhAan4oHcr1NBqdRe+CDpA51D9IncA5+Fhc8I6knUIh2qQ5/woWgISLAVwzSS+0+TxrYoqxf5FumIQtUJfwDER5La3n0=
TestMailbox 00000180d776e509b68fdc5c376d0abc RUGE2xB3fFX/wRH/p2fHIUa+rMaXSRd7fY9zglw0pRfVPqJfpniOjAe4GHIwGlwbwjtFOwS5a+Q7yr0Wez6QwD+ohhqRFKpbjcFcN7VfMyVAf+k=
TestMailbox 00000180d7784b987a8ad8106dc400c9 K+0LVEtBbTnWNS67jy9DtTvQyd5arovduvu490tLOE2TzVhuVoF4pfvTMTN12bH3KwEAHeDfuwKkKJFqldOywouTYPzEjZFkJzyagHrkl6dfnE5CqmlDv+Vc5TOQRskxjW+wQiZdjU8wGiBiBGYh
TestMailbox 00000180d7784bede69ac3cff2c6b724 XMFY3+b1r1//uolVz80JSI3g/84XCk3Tm7/S0BFv+Qe/Xv3/poLrOvAKEe+GzD2s22j8p/T2RXR/JSZckzgjEZeO0wbPDXVQd94di2Pff7jxAH8=
TestMailbox 00000180d7784bffe2595abe7ed81858 QQZhF+7wSHfikoAp93a+UY/XDIX7TVnnVYOtmQ2XHnDKA2F6snRJCPbYBO4IRHCRfVrjDGi32c41it2C3Mu5PBepabxapsW1rfIV3rlX2lkKHtI=
TestMailbox 00000180d77a7fb3f01dbb147c20cf7f IHOlOa1JI11RUKVvQUq3HQPxiRr4UCeE+pHmL8DtNMkOh62V4spuP0VvvQTJCQcPQ1EQR/QcxZ3s7uHLkrZAHF30BkpUkGqsLBWpnyug/puhdiixWsMyLLb6G90zFjiComUwptnDc/CCXtGEHdSW
TestMailbox 00000180d77a7fbb54b100f521ceb347 Ze4KyyTCgrYbZlXlJSY5hNob8sMXvBAmwIx2cADbX5P0M1IHXwXfloEzvvd6WYOtatFC2GnDSrmQ6RdCfeZ3WV9TZilqa0Fv0XEg48sVyVCcguw=
TestMailbox 00000180d77a7fe68f4f76e3b45aa751 cJJVvvRzTVNKUaIHPCCDY2uY7/HlmkxGgo3ozWBlBSRDeBqU65zgZD3QIPCxa6xaqB/Gc0bQ9BGzfU0cvVmO5jgNeeDnbqqs3oeA2jml/Qv2YO9upApfNQtDT1GiwJ8vrgaIow==
TestMailbox 00000180d8e513d3ea58c679a13178ac Ce5su2YOxNmTzk2dK8SX8V/Uue5uAC7oklEjhesY9wCMqGphhOkdWjzCqq0xOzcb/ZzzZ58t+mTksNSYIU4kddHIHBFPgqIwKthVk2mlUdqYiN/Y2vEGqv+YmtKY+GST/7Ee87ZHpU/5sv0GoXxT
TestMailbox 00000180d8e5145a23f8faee86283900 sp3D8xFZcM9icNlDJXIUDJb3mo6VGD9f1aDHD+4RbPdx6mTYF+qNTsPHKCxHHxT/9NfNe8XPg2+8xYRtm7SXfgERZBDB8ye+Xt3fM1k+wbL6RsaJmDHVECeXeL5KHuITzpI22A==
TestMailbox 00000180d8e51465c38f0585f9bb760e FF0VId2O/bBNzYD5ABWReMs5hHoHwynOoJRKj9vyaUMZ3JykInFmvvRgtCbJBDjTQPwPU8apphKQfwuicO76H7GtZqH009Cbv5l8ZTRJKrmzOQmtjzBQc2eGEUMPfbml5t0GCg==
```
The timestamp of a checkpoint corresponds to the timestamp of the first operation NOT included in the checkpoint.
In other words, to reconstruct the final state (a minimal sketch in code follows this list):
- find timestamp `<ts>` of last checkpoint
- load checkpoint `<ts>`
- load and apply all operations starting from `<ts>`, included
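The sketch below illustrates this reconstruction loop; `decode_checkpoint` and `apply_operation` are hypothetical helper names, not Aerogramme's actual code:

```python
def rebuild_state(checkpoints, operations):
    """checkpoints and operations are dicts keyed by the sortable
    timestamp strings shown above; values are the (decrypted) blobs."""
    last_ts = max(checkpoints)                       # last checkpoint timestamp
    state = decode_checkpoint(checkpoints[last_ts])  # hypothetical decoder
    # apply every operation at or after the checkpoint timestamp, in order
    for ts in sorted(t for t in operations if t >= last_ts):
        state = apply_operation(state, operations[ts])  # hypothetical applier
    return state
```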
## UID index
The UID index is an application of the Bayou storage module
used to assign UID numbers to e-mails.
See the [IMAP UID proof](/documentation/design/imap-uid/).

View file

@ -1,236 +0,0 @@
+++
title = "IMAP UID proof"
weight = 40
+++
Aerogramme tolerates mailbox
desynchronization between active sessions. Desynchronization
means that different emails can be assigned the same IMAP UID.
Desynchronizations are handled by increasing the IMAP UIDVALIDITY field.
When a session learns of an operation in a mailbox that conflicts with its
current view, it rebuilds its view from the operation logs and computes a new UIDVALIDITY.
This document shows that, considering any mailbox state for a given session,
the UIDVALIDITY is increased if a conflict occurs. In other words, it proves that clients
will never have a corrupted view.
*Note that changing the IMAP UIDVALIDITY will trigger a full mailbox resynchronization for the user,
so it must be used as a last resort. It's a "correctness safety net" to ease operations; in normal conditions,
such desynchronizations should not occur. It will be discussed later in the cookbook.*
## Formal definition of a mailbox
### Notations
- $h$: the hash of a message, $\mathbb{H}$ is the set of hashes
- $i$: the UID of a message $(i \in \mathbb{N})$
- $f$: a flag attributed to a message (it's a string), we write
$\mathbb{F}$ the set of possible flags
- if $M$ is a map (aka a dictionary): if $x$ has no assigned value in
$M$ we write $M [x] = \bot$ or equivalently $x \not\in M$. If $x$ has a value
in the map we write $x \in M$ and $M [x] \neq \bot$
### State
- A map $I$ such that $I [h]$ is the UID of the message whose hash is
$h$ in the mailbox, or $\bot$ if there is no such message
- A map $F$ such that $F [h]$ is the set of flags attributed to the
message whose hash is $h$
- $v$: the UIDVALIDITY value
- $n$: the UIDNEXT value
- $s$: an internal sequence number that is mostly equal to UIDNEXT but
also grows when mails are deleted
### Operations
- MAIL\_ADD$(h, i)$: the value of $i$ that is put in this operation is
the value of $s$ in the state resulting from all already-known operations,
i.e. $s (O_{gen})$ in the notation below where $O_{gen}$ is
the set of all operations known at the time when the MAIL\_ADD is generated.
Moreover, such an operation can only be generated if $I (O_{gen}) [h]
= \bot$, i.e. for a mail $h$ that is not already in the state at
$O_{gen}$.
- MAIL\_DEL$(h)$
- FLAG\_ADD$(h, f)$
- FLAG\_DEL$(h, f)$
## Mailbox handling algorithms
### Add an email
**apply** MAIL\_ADD$(h, i)$:
&nbsp;&nbsp; *if* $i < s$:
&nbsp;&nbsp;&nbsp;&nbsp; $v \leftarrow v + s - i$
&nbsp;&nbsp; *if* $F [h] = \bot$:
&nbsp;&nbsp;&nbsp;&nbsp; $F [h] \leftarrow F_{initial}$
&nbsp;&nbsp;$I [h] \leftarrow s$
&nbsp;&nbsp;$s \leftarrow s + 1$
&nbsp;&nbsp;$n \leftarrow s$
### Delete an email
**apply** MAIL\_DEL$(h)$:
&nbsp;&nbsp; $I [h] \leftarrow \bot$
&nbsp;&nbsp;$F [h] \leftarrow \bot$
&nbsp;&nbsp;$s \leftarrow s + 1$
### Add a flag
**apply** FLAG\_ADD$(h, f)$:
&nbsp;&nbsp; *if* $h \in F$:
&nbsp;&nbsp;&nbsp;&nbsp; $F [h] \leftarrow F [h] \cup \{ f \}$
### Delete a flag
**apply** FLAG\_DEL$(h, f)$:
&nbsp;&nbsp; *if* $h \in F$:
&nbsp;&nbsp;&nbsp;&nbsp; $F [h] \leftarrow F [h] \backslash \{ f \}$
## Formal definition of the mailbox log
The mailbox log is a set of operations:
- $o$ is an operation such as MAIL\_ADD, MAIL\_DEL, etc. $O$ is a set of
operations. Operations embed a timestamp, so a set of operations $O$ can be
written as $O = [o_1, o_2, \ldots, o_n]$ by ordering them by timestamp.
- if $o \in O$, we write $O_{\leqslant o}$, $O_{< o}$, $O_{\geqslant
o}$, $O_{> o}$ the set of items of $O$ that are respectively earlier or
equal, strictly earlier, later or equal, or strictly later than $o$. In
other words, if we write $O = [o_1, \ldots, o_n]$, where $o$ is a certain
$o_i$ in this sequence, then:
$$
\begin{aligned}
O_{\leqslant o} &= \{ o_1, \ldots, o_i \}\\
O_{< o} &= \{ o_1, \ldots, o_{i - 1} \}\\
O_{\geqslant o} &= \{ o_i, \ldots, o_n \}\\
O_{> o} &= \{ o_{i + 1}, \ldots, o_n \}
\end{aligned}
$$
- If $O$ is a set of operations, we write $I (O)$, $F (O)$, $n (O), s
(O)$, and $v (O)$ the values of $I, F, n, s$ and $v$ in the state that
results from applying all of the operations in $O$ in their sorted order (we
thus write $I (O) [h]$ for the value of $I [h]$ in this state).
## Proving that UIDValidity is changed for each conflict
**Hypothesis:**
An operation $o$ can only be in a set $O$ if it was
generated after applying operations of a set $O_{gen}$ such that
$O_{gen} \subset O$ (because causality is respected in how we deliver
operations). Sets of operations that do not respect this property are excluded
from all of the properties, lemmas and proofs below.
**Simplification:** We will now exclude FLAG\_ADD and FLAG\_DEL
operations, as they do not manipulate $n$, $s$ and $v$, and adding them should
have no impact on the properties below.
**Small lemma:** If there are no FLAG\_ADD and FLAG\_DEL operations,
then $s (O) = | O |$. This is easy to see because the possible operations are
only MAIL\_ADD and MAIL\_DEL, and both increment the value of $s$ by 1.
**Definition:** If $o$ is a MAIL\_ADD$(h, i)$ operation, and $O$ is a
set of operations such that $o \in O$, then we define the following value:
$$
C (o, O) = s (O_{< o}) - i
$$
We say that $C (o, O)$ is the *number of conflicts of $o$ in $O$*: it
corresponds to the number of operations that were added before $o$ in $O$ that
were not in $O_{gen}$.
**Property:**
We have that:
$$
v (O) = \sum_{o \in O} C (o, O)
$$
Or in English: $v (O)$ is the sum of the number of conflicts of all of the
MAIL\_ADD operations in $O$. This is easy to see because indeed $v$ is
incremented by $C (o, O)$ for each operation $o \in O$ that is applied.
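To make the definition concrete, here is a small worked example (the specific operations are ours, chosen to satisfy the generation rule above, with $s (\emptyset) = 0$ as implied by the small lemma): let $o_1$ be MAIL\_ADD$(h_1, 0)$, generated from the empty set, and let $o_2 = $ MAIL\_ADD$(h_2, 1)$ and $o_3 = $ MAIL\_ADD$(h_3, 1)$ be two concurrent operations, both generated from $O_{gen} = \{ o_1 \}$. For $O = [o_1, o_2, o_3]$ ordered by timestamp:
$$
\begin{aligned}
C (o_1, O) &= s (\emptyset) - 0 = 0\\
C (o_2, O) &= s (\{ o_1 \}) - 1 = 1 - 1 = 0\\
C (o_3, O) &= s (\{ o_1, o_2 \}) - 1 = 2 - 1 = 1\\
v (O) &= 0 + 0 + 1 = 1
\end{aligned}
$$
The single UID collision between $o_2$ and $o_3$ is counted exactly once, so UIDVALIDITY is bumped by one.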
**Property:**
If $O$ and $O'$ are two sets of operations, and $O \subseteq O'$, then:
$$
\begin{aligned}
\forall o \in O, \qquad C (o, O) \leqslant C (o, O')
\end{aligned}
$$
This is easy to see because $O_{< o} \subseteq O'_{< o} $
and
$C (o, O') - C(o, O) = s (O'\_{< o}) - s (O\_{< o}) = | O'\_{< o} | - | O\_{< o} | \geqslant 0$
**Theorem:**
If $O$ and $O'$ are two sets of operations:
$$
\begin{aligned}
O \subseteq O' & \Rightarrow & v (O) \leqslant v (O')
\end{aligned}
$$
**Proof:**
$$
\begin{aligned}
v (O') &= \sum_{o \in O'} C (o, O')\\
& \geqslant \sum_{o \in O} C (o, O') \qquad \text{(because $O \subseteq
O'$)}\\
& \geqslant \sum_{o \in O} C (o, O) \qquad \text{(because $\forall o \in
O, C (o, O) \leqslant C (o, O')$)}\\
& \geqslant v (O)
\end{aligned}
$$
**Theorem:**
If $O$ and $O'$ are two sets of operations, such that $O \subset O'$,
and if there are two different mails $h$ and $h'$ $(h \neq h')$ such that $I
(O) [h] = I (O') [h']$
then:
$$v (O) < v (O')$$
**Proof:**
We already know that $v (O) \leqslant v (O')$ because of the previous theorem.
We will now look at the sum:
$$
v (O') = \sum_{o \in O'} C (o, O')
$$
and show that there is at least one term in this sum that is strictly larger
than the corresponding term in the other sum:
$$
v (O) = \sum_{o \in O} C (o, O)
$$
Let $o$ be the last MAIL\_ADD{{ katex(body="(h, \_)") }} operation in $O$, i.e. the operation
that gives its definitive UID to mail $h$ in $O$, and similarly $o'$ be the
last MAIL\_ADD$(h', \\_)$ operation in $O'$.
Let us write $I = I (O) [h] = I (O') [h']$
$o$ is the operation at position $I$ in $O$, and $o'$ is the operation at
position $I$ in $O'$. But $o \neq o'$, so $o$ cannot be the operation at
position $I$ in $O'$: since no operations are removed between $O$ and $O'$,
the only possibility is that some other operations (including $o'$) were added
before $o$, which pushes $o$ to a later position $I' > I$ in $O'$. Therefore
we have that $C (o, O') > C (o, O)$, i.e. at least one term in the first sum
is strictly larger than the corresponding term in the second one. Since all
other terms are greater or equal, we have $v (O') > v (O)$.

View file

@ -1,152 +0,0 @@
+++
title = "Mutation log"
weight = 30
+++
Back to our data structure, we note that one major challenge with this project is to *correctly* handle mutable data.
With our current design, multiple processes can interact with the same mutable data without coordination, and we need a way to detect and solve conflicts.
Directly storing the result in a single k2v key would not work as we have no transaction or lock mechanism, and our state would be always corrupted.
Instead, we choose to record an ordered log of operations, i.e. transitions, that each client can use locally to rebuild the state; each transition has its own immutable identifier.
This technique is sometimes referred to as event sourcing.
With this system, we can no longer have conflicts at the Garage level, but conflicts at the IMAP level can still occur, for example 2 processes assigning the same identifier to different emails.
We thus need a logic to handle these conflicts that is flexible enough to accommodate the application's specific logic.
Our solution is inspired by the work conducted by Terry et al. on [Bayou](https://dl.acm.org/doi/10.1145/224056.224070).
Clients regularly fetch the log from Garage; each entry is ordered by a timestamp and a unique identifier.
One of the 2 conflicting clients will be in a state where it has executed a log entry in the wrong order according to the specified ordering.
This client will need to roll back its changes and reapply the log in the same order as the others; on conflicts, the same logic is applied by all the clients so that, in the end, they all reach the same state.
## Command definitions
The log is made of a sequence of ordered commands that can be run to get a deterministic state in the end.
We define the following commands:
`FLAG_ADD <email_uuid> <flag>` - Add a flag to the target email
`FLAG_DEL <email_uuid> <flag>` - Remove a flag from a target email
`MAIL_DEL <email_uuid>` - Remove an email
`MAIL_ADD <email_uuid> <uid>` - Register an email in the mailbox with the given identifier
`REMOTE <s3 url>` - The command is not stored directly here; instead, it must be fetched from S3 (see batching below to understand why).
*Note: FLAG commands could be enhanced with a MODSEQ field similar to the uid field for the emails, in order to implement IMAP RFC4551. Adding this field would force us to handle conflicts on flags
the same way as on emails, as MODSEQ must be monotonically incremented but is reset by a uid-validity change. This is out of the scope of this document.*
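For illustration, the command set above could be encoded in Rust roughly as follows. This is a hedged sketch: the enum, its field names, and the use of the `uuid` crate are assumptions, not Aerogramme's actual types.

```rust
use uuid::Uuid;

type ImapUid = u32;

/// One entry of the mutation log, mirroring the commands listed above.
enum LogCommand {
    FlagAdd { email: Uuid, flag: String },
    FlagDel { email: Uuid, flag: String },
    MailDel { email: Uuid },
    MailAdd { email: Uuid, uid: ImapUid },
    /// The batch itself lives in S3; only its location is stored in the log.
    Remote { s3_url: String },
}
```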
### A note on UUID
When adding an email to the system, we associate it with a *universally unique identifier* or *UUID.*
We can then reference this email in the rest of the system without fearing a conflict or a race condition, as we are confident that this UUID is unique.
We could have used the email hash instead, but we identified some benefits in using UUID.
First, sometimes a mail must be duplicated, because the user received it from 2 different sources, so it is more correct to have 2 entries in the system.
Additionally, UUIDs are smaller and better compressible than a hash, which leads to better performance.
### Batching commands
Commands that are executed at the same time can be batched together.
Let's imagine a user is deleting their trash containing thousands of emails.
Instead of writing thousands of log lines, we can append them in a single entry.
If this entry becomes big (eg. > 100 commands), we can store it in S3 with the `REMOTE` command.
Batching is important as we want to keep the number of log entries small to be able to fetch them regularly and quickly.
## Fixing conflicts in the operation log
The log is applied in order from the last checkpoint.
To stay in sync, the client regularly asks the server for the last commands.
When the log is applied, our system must enforce the following invariants:
- For all emails e1 and e2 in the log, such that e2.order > e1.order, then e2.uid > e1.uid
- For all emails e1 and e2 in the log, such that e1.uuid == e2.uuid, then e1.order == e2.order
If an invariant is broken, the conflict is solved with the following algorithm and the `uidvalidity` value is increased.
```python
def apply_mail_add(uuid, imap_uid):
    # The operation carries a UID lower than our internal sequence:
    # this is a conflict, bump uidvalidity by the size of the gap.
    if imap_uid < internalseq:
        uidvalidity += internalseq - imap_uid
    mails.insert(uuid, internalseq, flags=["\Recent"])
    internalseq = internalseq + 1
    uidnext = internalseq

def apply_mail_del(uuid):
    mails.remove(uuid)
    internalseq = internalseq + 1
```
A mathematical demonstration in Appendix D. shows that this algorithm indeed guarantees that under the same `uidvalidity`, different e-mails cannot share the same IMAP UID.
To illustrate, let us imagine two processes that have a first operation A in common, and then diverged when one applied an operation B and the other applied an operation C. For process 1, we have:
```python
# state: uid-validity = 1, uid_next = 1, internalseq = 1
(A) MAIL_ADD x 1
# state: uid-validity = 1, x = 1, uid_next = 2, internalseq = 2
(B) MAIL_ADD y 2
# state: uid-validity = 1, x = 1, y = 2, uid_next = 3, internalseq = 3
```
And for process 2 we have:
```python
# state: uid-validity = 1, uid_next = 1, internalseq = 1
(A) MAIL_ADD x 1
# state: uid-validity = 1, x = 1, uid_next = 2, internalseq = 2
(C) MAIL_ADD z 2
# state: uid-validity = 1, x = 1, z = 2, uid_next = 3, internalseq = 3
```
Suppose that a new client connects to one of the two processes after the conflicting operations have been communicated between them. They may have previously connected either to process 1 or to process 2, so they might have observed either mail `y` or mail `z` with UID 2. The only way to make sure that the client will not be confused about mail UIDs is to bump the uidvalidity when the conflict is solved. This is indeed what happens with our algorithm: for both processes, once they have learned of the other's conflicting operation, they will execute the following set of operations and end in a deterministic state:
```python
# state: uid-validity = 1, uid_next = 1, internalseq = 1
(A) MAIL_ADD x 1
# state: uid-validity = 1, x = 1, uid_next = 2, internalseq = 2
(B) MAIL_ADD y 2
# state: uid-validity = 1, x = 1, y = 2, uid_next = 3, internalseq = 3
(C) MAIL_ADD z 2
# conflict detected !
# state: uid-validity = 2, x = 1, y = 2, z = 3, uid_next = 4, internalseq = 4
```
## A computed state for efficient requests
From a data structure perspective, a list of commands is very inefficient to get the current state of the mailbox.
Indeed, we don't want an `O(n)` complexity (where `n` is the number of commands in the log) each time we want to know how many emails are stored in the mailbox.
<!--To address this issue, we plan to maintain a locally computed (rollbackable) state of the mailbox.-->
To address this issue, and thus query the mailbox efficiently, the MDA keeps an in-memory computed version of the logs, i.e. the computed state.
**Mapping IMAP identifiers to email identifiers with B-Tree**
Core features of IMAP are synchronization and listing of emails.
Its associated command is `FETCH`; it has 2 parameters: a range of `uid` (or `seq`) and a filter.
For us, it means that we must be able to efficiently select a range of emails by their identifier, otherwise the user experience will be bad, and compute resources will be wasted.
We identified that by using an ordered map based on a B-Tree, we can satisfy this requirement in an optimal manner.
For example, Rust defines a [BTreeMap](https://doc.rust-lang.org/std/collections/struct.BTreeMap.html) object in its standard library.
We define the following structure for our mailbox:
```rust
struct mailbox {
    emails: BTreeMap<ImapUid, (EmailUuid, Flags)>,
    flags: BTreeMap<Flag, BTreeMap<ImapUid, EmailUuid>>,
    name: String,
    uid_next: u32,
    uid_validity: u32,
    /* other fields */
}
```
This data structure allows us to efficiently select a range of emails by their identifier by walking the tree, allowing the server to be responsive to synchronization requests from clients.
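For instance, answering a `FETCH` on a UID range boils down to a range scan over the B-Tree. A minimal sketch, with simplified stand-in types, could look like this:

```rust
use std::collections::BTreeMap;

type ImapUid = u32;
type EmailUuid = [u8; 16]; // simplified stand-in for the real UUID type
type Flags = Vec<String>;

fn fetch_range(
    emails: &BTreeMap<ImapUid, (EmailUuid, Flags)>,
    from: ImapUid,
    to: ImapUid,
) -> Vec<(ImapUid, EmailUuid)> {
    // `range` only visits the O(log n + k) entries whose UID lies in [from, to].
    emails
        .range(from..=to)
        .map(|(uid, (uuid, _flags))| (*uid, *uuid))
        .collect()
}
```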
### Checkpoints
Having an in-memory computed state does not solve all the problems of operating on a log only, as 1) bootstrapping a fresh client is expensive, since we may have to replay possibly thousands of log entries, and 2) logs would be kept indefinitely, wasting valuable storage resources.
As a solution to these limitations, the MDA regularly checkpoints the in-memory state. More specifically, it serializes it (eg. with MessagePack), compresses it (eg. with zstd), and then stores it on Garage through the S3 API.
A fresh client would then only have to download the latest checkpoint and the range of logs between the checkpoint and now, allowing swift bootstrapping while retaining all of the value of the log model.
Old logs and old checkpoints can be garbage collected, for example after a few days, as long as 1) the most recent checkpoint remains, 2) all the logs after this checkpoint remain, and 3) we are confident enough that no log entry older than this checkpoint will appear in the future.
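As a rough sketch of the checkpointing step, assuming the `rmp-serde` and `zstd` crates are used for MessagePack serialization and compression (the actual implementation and the S3 upload are not shown here):

```rust
use serde::Serialize;

#[derive(Serialize)]
struct Checkpoint {
    uid_validity: u32,
    uid_next: u32,
    // ... the rest of the computed mailbox state
}

fn encode_checkpoint(state: &Checkpoint) -> Result<Vec<u8>, Box<dyn std::error::Error>> {
    let packed = rmp_serde::to_vec(state)?;             // MessagePack bytes
    let compressed = zstd::encode_all(&packed[..], 3)?; // zstd, compression level 3
    Ok(compressed)                                      // then PUT to the bucket via the S3 API
}
```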

View file

@ -1,59 +0,0 @@
+++
title = "Mailboxes"
weight = 20
+++
IMAP servers, at their root, handle mailboxes.
In this document, we explain the domain logic of IMAP and how we map it to Garage data
with Aerogramme.
## IMAP Domain Logic
The main specification of IMAP is defined in [RFC3501](https://datatracker.ietf.org/doc/html/rfc3501).
It defines 3 main objects: Mailboxes, Emails, and Flags. The following figure depicts how they work together:
![An IMAP mailbox schema](/documentation/internals/mailbox.png)
Emails are stored ordered inside the mailbox, and for legacy reasons, the mailbox assigns 2 identifiers to each email, which we name `uid` and `seq`.
`seq` is the legacy identifier; it numbers messages in a sequence. Each time an email is deleted, the message numbering changes to keep a continuous sequence without holes.
While this numbering is convenient for interactive operations, it is not efficient to synchronize mail locally and quickly detect missing new emails.
To solve this problem, `uid` identifiers were introduced later. They are monotonically increasing integers that must remain stable across time and sessions: when an email is deleted, its identifier is never reused.
This is what Thunderbird uses for example when it synchronizes its mailboxes.
If this ordering cannot be kept, for example because two independent IMAP daemons were adding an email to the same mailbox at the same time, it is possible to change the ordering as long as we change a value named `uid-validity` to trigger a full resynchronization of all clients. As this operation is expensive, we want to minimize the probability of having to trigger a full resynchronization, but in practice, having this recovery mechanism simplifies the operation of an IMAP server by providing a rather simple solution to rare collision situations.
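A toy illustration of the difference between the two identifiers (plain Rust, unrelated to Aerogramme's actual data structures): deleting a message renumbers `seq` but leaves `uid` untouched.

```rust
// `seq` is just the 1-based position of a message in the mailbox,
// while `uid` is a stable attribute of the message.
fn seq_of(uids: &[u32], uid: u32) -> Option<usize> {
    uids.iter().position(|u| *u == uid).map(|i| i + 1)
}

fn main() {
    let mut uids = vec![1, 2, 3];          // three messages: seq 1,2,3 <-> uid 1,2,3
    uids.retain(|u| *u != 2);              // delete the message with uid 2
    assert_eq!(uids, vec![1, 3]);          // uids 1 and 3 are unchanged...
    assert_eq!(seq_of(&uids, 3), Some(2)); // ...but uid 3 is now seq 2: seq renumbers
}
```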
Flags are tags put on an email, some are defined at the protocol level, like `\Recent`, `\Deleted` or `\Seen`, which can be assigned or removed directly by the IMAP daemon.
Others can be defined arbitrarily by the client, for which the MUA will apply its own logic.
There is no mechanism in RFC3501 to synchronize flags between MUA besides listing the flags of all the emails.
IMAP has many extensions, such as [RFC5465](https://www.rfc-editor.org/rfc/rfc5465.html) or [RFC7162](https://datatracker.ietf.org/doc/html/rfc7162).
They are referred to as capabilities and are [referenced by the IANA](https://www.iana.org/assignments/imap-capabilities/imap-capabilities.xhtml).
For this project, we are aiming to implement only IMAP4rev1 and no extension at all.
## Aerogramme Implementation
From a high-level perspective, we will handle _immutable_ emails differently from _mutable_ mailboxes and flags.
Immutable data can be stored directly on Garage, as we do not fear reading an outdated value.
For mutable data, we cannot store them directly in Garage.
Instead, we choose to store a log of operations. Each client then applies this log of operations locally to rebuild its local state.
During this design phase, we noted that the S3 API semantics were too limited for us, so we introduced a second API, K2V, to have more flexibility.
K2V is designed to store and fetch small values in batches; it uses 2 different keys: one to spread the data on the cluster (`P`), and one to sort linked data on the same node (`S`).
Having data on the same node allows for more efficient queries among this data.
For performance reasons, we plan to introduce 2 optimizations.
First, we store an email summary in K2V that allows fetching multiple entries at once.
Second, we also store checkpoints of the logs in S3 to avoid keeping and replaying all the logs each time a client starts a session.
We have the following data handled by Garage:
![Aerogramme Datatypes](/documentation/internals/aero-states.png)
In Garage, it is important to carefully choose the key(s) used to store data in order to have fast queries; we propose the following model:
![Aerogramme Key Choice](/documentation/internals/aero-states2.png)

Binary file not shown (image, 11 KiB, removed)

View file

@ -1,64 +0,0 @@
+++
title = "Architecture"
weight = 10
+++
Aerogramme stands at the interface between the Garage storage server, and the user's e-mail client. It provides regular IMAP access on the client-side, and stores encrypted e-mail data on the server-side. Aerogramme also provides an LMTP server interface through which incoming mail can be forwarded by the MTA (e.g. Postfix).
<center>
<img src="/documentation/internals/aero-compo.png" alt="Aerogramme components"/>
<br>
<i>Figure 1: Aerogramme, our IMAP daemon, stores its data encrypted in Garage and provides regular IMAP access to mail clients</i></center>
## Overview of architecture
Figure 2 below shows an overview of Aerogramme's architecture. Each user has a personal Garage bucket in which to store their mailbox contents. We will document below the details of the components that make up Aerogramme, but let us first provide a high-level overview. The two main classes, `User` and `Mailbox`, define how data is stored in this bucket, and provide a high-level interface with primitives such as reading the message index, loading a mail's content, copying, moving, and deleting messages, etc. This mail storage system is supported by two important primitives: a cryptography management system that provides encryption keys for user's data, and a simple log-like database system inspired by Bayou [1] which we have called Bay, that we use to store the index of messages in each mailbox. The mail storage system is made accessible to the outside world by two subsystems: an LMTP server that allows for incoming mail to be received and stored in a user's bucket, in a staging area, and the IMAP server itself which allows full-fledged manipulation of mailbox data by users.
<center>
<img src="/documentation/internals/aero-schema.png" alt="Aerogramme internals"/>
<i>Figure 2: Overview of Aerogramme's architecture and internal data structures for a given user, Alice</i></center>
## Cryptography
Our cryptography module takes care of authenticating users against a data source (using their IMAP login and password), and returning a set of credentials that allow read/write access to a Garage bucket, as well as a set of secret encryption keys used to encrypt and decrypt data stored in the bucket.
The cryptography module makes use of the user's authentication password as a passphrase to decrypt the user's secret keys, which are stored in the user's bucket in a dedicated K2V section.
This module can use either of two data sources for user authentication:
- LDAP, in which case the password (which is also the passphrase for decrypting the user's secret keys) must match the LDAP password of the user.
- Static, in which case the users are statically declared in Aerogramme's configuration file, and can have any password.
The static authentication source can be used in a deployment scenario shown in Figure 3, where Aerogramme is not running on the side of the service provider, but on the user's device itself. In this case, the user can use any password to encrypt their data in the bucket; the only credentials they need for authentication against the service provider are the S3 and K2V API access keys.
<center>
<img src="/documentation/internals/aero-paranoid.png" alt="user side encryption" />
<br>
<i>Figure 3: alternative deployment of Aerogramme on the user's device: the service provider never gets access to the plaintext data.</i></center>
The cryptography module also has a "public authentication" method, which allows the LMTP module to retrieve only a public key for the user to write incoming messages to the user's bucket but without having access to all of the existing encrypted data.
The cryptography module of Aerogramme is based on standard cryptographic primitives from `libsodium` and follows best practices in the domain.
## Bay, a simplification of Bayou
In our last milestone report, we described how we intended to implement the message index for IMAP mailboxes, based on an eventually-consistent log-like data structure. The principles of this system have been established in Bayou in 1995 [1], allowing users to use a weakly-coordinated datastore to exchange data and solve write conflicts. Bayou is based on a sequential specification, which defines the action that operations in the log have on the shared object's state. To handle concurrent modification, Bayou allows for log entries to be appended in non-sequential order: in case a process reads a log entry that was written earlier by another process, it can rewind its execution of the sequential specification to the point where the newly acquired operation should have been executed, and then execute the log again starting from this point. The challenge then consists in defining a sequential specification that provides the desired semantics for the application. In our last milestone report (milestone 3.A), we described a sequential specification that solves the UID assignment problem in IMAP and proved it correct. We refer the reader to that document for more details.
For milestone 3B, we have implemented our customized version of Bayou, which we call Bay. Bay implements the log-like semantics and the rewind ability of Bayou, however, it makes use of a much simpler data system: Bay is not operating on a relational database that is stored on disk, but simply on a data structure in RAM, for which a full checkpoint is written regularly. We decided against using a complex database as we observed that the expected size of the data structures we would be handling (the message indexes for each mailbox) wouldn't be so big most of the time, and having a full copy in RAM was perfectly acceptable. This allows for a drastic simplification in comparison to the proposal of the original Bayou paper [1]. On the other side, we added encryption in Bay so that both log entries and checkpoints are stored encrypted in Garage using the user's secret key, meaning that a malicious Garage administrator cannot read the content of a user's mailbox index.
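A heavily simplified sketch of this rewind-and-replay behaviour is given below; the types and method names are hypothetical and only illustrate the idea, they are not the actual Bay code.

```rust
#[derive(Clone)]
struct State { /* the in-RAM mailbox index */ }

trait Op {
    fn sort_key(&self) -> (u64, u128); // (timestamp, unique identifier)
    fn apply(&self, state: &mut State);
}

struct Bay<O: Op> {
    checkpoint: State, // state at the beginning of the log
    log: Vec<O>,       // kept sorted by sort_key()
    current: State,    // checkpoint + all of `log` applied in order
}

impl<O: Op> Bay<O> {
    fn insert(&mut self, op: O) {
        // Find where the new operation belongs in the total order.
        let pos = self.log.partition_point(|o| o.sort_key() < op.sort_key());
        self.log.insert(pos, op);
        if pos == self.log.len() - 1 {
            // Fast path: the operation is the newest one, just apply it.
            self.log[pos].apply(&mut self.current);
        } else {
            // Rewind: the operation arrived late, replay the log from the checkpoint.
            self.current = self.checkpoint.clone();
            for o in &self.log {
                o.apply(&mut self.current);
            }
        }
    }
}
```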
## LMTP server and incoming mail handler
To handle incoming mail, we had to add a simple LMTP server to Aerogramme. This server uses the public authentication method of the cryptography module to retrieve a set of public credentials (in particular, a public key for asymmetric encryption) for storing incoming messages. The incoming messages are stored in their raw RFC822 form (encrypted) in a specific folder of the Garage bucket called `incoming/`. When a user logs in with their username and password, at which time Aerogramme can decrypt the user's secret keys, a special process is launched that watches the incoming folder and moves these messages to the `INBOX` folder. This task can only be done by a process that knows the user's secret keys, as it has to modify the mailbox index of the `INBOX` folder, which is encrypted using the user's secret keys. In later versions of Aerogramme, this process would be the perfect place to implement mail filtering logic using user-specified rules. These rules could be stored in a dedicated section of the bucket, again encrypted with the user's secret keys.
To implement the LMTP server, we chose to make use of the `smtp-server` crate from the [Kannader](https://github.com/Ekleog/kannader) project (an MTA written in Rust). The `smtp-server` crate had all of the necessary functionality for building SMTP servers, however, it did not handle LMTP. As LMTP is extremely close to SMTP, we were able to extend the `smtp-server` module to allow it to be used for the implementation of both SMTP and LMTP servers. Our work has been proposed as a [pull request](https://github.com/Ekleog/kannader/pull/178) to be merged back upstream in Kannader, which should be integrated soon.
## IMAP server
The last part that remains to build Aerogramme is to implement the logic behind the IMAP protocol and to link it with the mail storage primitives. We started by implementing a state machine that handled the transitions between the different states in the IMAP protocol: ANONYMOUS (before login), AUTHENTICATED (after login), and SELECTED (once a mailbox has been selected for reading/writing). In the SELECTED state, the IMAP session is linked to a given mailbox of the user. In addition, the IMAP server has to keep track of which updates to the mailbox it has sent (or not) to the client so that it can produce IMAP messages consistent with what the client believes to be in the mailbox. In particular, many IMAP commands make use of mail sequence numbers to identify messages, which are indices in the sorted array of all of the messages in the mailbox. However, if messages are added or removed concurrently, these sequence numbers change: hence we must keep a snapshot of the mailbox's index *as the client knows it*, which is not necessarily the same as what is _actually_ in the mailbox, to generate messages that the client will understand correctly. This snapshot is called a *mailbox view* and is synced regularly with the actual mailbox, at which time the corresponding IMAP updates are sent. This can be done only at specific moments when permitted by the IMAP protocol.
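A minimal sketch of such a mailbox view and of the sequence-number lookup it enables (hypothetical names, for illustration only):

```rust
// The "mailbox view" is a frozen snapshot of the UIDs as the client knows them;
// sequence numbers are simply 1-based positions in this snapshot, so they stay
// consistent with what the client believes even if the real mailbox has changed.
struct MailboxView {
    known_uids: Vec<u32>, // sorted; may lag behind the actual mailbox
}

impl MailboxView {
    fn seq_to_uid(&self, seq: usize) -> Option<u32> {
        self.known_uids.get(seq.checked_sub(1)?).copied()
    }
}
```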
The second part of this task consisted in implementing all of the IMAP protocol commands. Most are relatively straightforward, however, one command, in particular, needed special care: the FETCH command. The FETCH command in the IMAP protocol can return the contents of a message to the client. However, it must also understand precisely the semantics of the content of an e-mail message, as the client can specify very precisely how the message should be returned. For instance, in the case of a multipart message with attachments, the client can emit a FETCH command requesting only a certain attachment of the message to be returned, and not the whole message. To implement such semantics, we have based ourselves on the [`mail-parser`](https://docs.rs/mail-parser/latest/mail_parser/) crate, which can fully parse an RFC822-formatted e-mail message, and also supports some extensions such as MIME. To validate that we were correctly converting the parsed message structure to IMAP messages, we designed a test suite composed of several weirdly shaped e-mail messages, whose IMAP structure definition we extracted by taking Dovecot as a reference. We were then able to compare the output of Aerogramme on these messages with the reference consisting of what was returned by Dovecot.
## References
- [1] Terry, D. B., Theimer, M. M., Petersen, K., Demers, A. J., Spreitzer, M. J., & Hauser, C. H. (1995). Managing update conflicts in Bayou, a weakly connected replicated storage system. *ACM SIGOPS Operating Systems Review*, 29(5), 172-182. ([PDF](https://dl.acm.org/doi/pdf/10.1145/224057.224070))

View file

@ -1,34 +0,0 @@
+++
title = "Related work"
weight = 50
+++
## Managing update conflicts in Bayou, a weakly connected replicated storage system
by Terry et al.
*to be written*
## Designing a Planetary-Scale IMAP Service with Conflict-free Replicated Data Types
by Jungnickel et al.
*to be written*
## Technology for Resting Email Encrypted Storage (TREES)
used by Rise Up
*to be written*
## Dovecot obox
*to be written*
## Apache JAMES
*to be written*
## Stalwart IMAP
*to be written*

View file

@ -1,182 +0,0 @@
+++
title = "Quick Start"
weight = 0
sort_by = "weight"
template = "documentation.html"
+++
This quickstart guide will teach you how to run an in-memory instance of Aerogramme,
because it is easier to run than a distributed real-world deployment. The goal here
is that you get familiar with the software and understand what it does and does not do,
so you can evaluate it. Once you are ready for a real deployment, you will find all the information you need in the [cookbook](@/documentation/cookbook/_index.md).
*Enjoy your reading!*
## What is Aerogramme
Aerogramme is a [Message Delivery Agent (MDA)](https://en.wikipedia.org/wiki/Message_delivery_agent) that speaks the [IMAP protocol](https://en.wikipedia.org/wiki/Internet_Message_Access_Protocol).
It makes sure that your mailbox, and thus your emails, are stored safely and stay in sync across all your devices.
<!--It manages a bit more than just your emails, as contacts and calendars support is planned.-->
Aerogramme is not (yet) a [Message Transfer Agent (MTA)](https://en.wikipedia.org/wiki/Message_transfer_agent) as it does not speak the [SMTP protocol](https://fr.wikipedia.org/wiki/Simple_Mail_Transfer_Protocol).
This means you can't send email with Aerogramme alone, for example; you need to plug it into an MTA, like [Postfix](https://www.postfix.org/) or [opensmtpd](https://www.opensmtpd.org/).
The connection between Aerogramme and the MTA (Postfix, opensmtpd, etc.) is done through a protocol named [LMTP](https://en.wikipedia.org/wiki/Local_Mail_Transfer_Protocol).
## Get a binary
Aerogramme is shipped as a static binary for Linux only on `amd64`, `aarch64` and `armv6` architectures.
For convenience, a docker container is also published both on Docker Hub and on Deuxfleurs' registry.
Also, Aerogramme is an open-source software released under the [EUPL licence](https://commission.europa.eu/content/european-union-public-licence_en), and you can compile it yourself from the source.
To get the latest version of Aerogramme and the precise download information, both for binary and source releases, go to the dedicated download page:
<a
href="/download/"
title="Aerogramme releases"
class="group flex items-center justify-center space-x-1 font-semibold shadow hover:shadow hover:border-none border-none px-2 py-1.5 rounded text-white transition-all duration-500 bg-gradient-to-tl from-aerogramme-blue via-blue-500 to-blue-300 bg-size-200 bg-pos-0 hover:bg-pos-100">
<svg class="w-6 h-6 animate-pulse text-white" fill="none" stroke="currentColor" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg"><path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M7 16a4 4 0 01-.88-7.903A5 5 0 1115.9 6L16 6a5 5 0 011 9.9M9 19l3 3m0 0l3-3m-3 3V10"></path></svg>
<span class="hidden md:inline text-white">Download</span>
</a>
To check if the binary you got is working, just launch it without any parameter, and a help message should be displayed, like this one:
```text
aerogramme 0.2.0
Alex Auvolat <alex@adnab.me>, Quentin Dufour <quentin@dufour.io>
A robust email server
USAGE:
aerogramme [OPTIONS] <SUBCOMMAND>
...
```
If you use Docker, you can get the same result with:
```bash
docker run --network host dxflrs/aerogramme:0.2.0 aerogramme
```
*Using the host network is not required for this example, but it will be useful for what follows,
as, by default, Aerogramme often binds to localhost, and thus its sockets are not available
outside of the container even if you bind the correct ports.*
## Starting the daemon in development mode
To start an in-memory daemon for evaluation purposes, you don't need any configuration file. Just run:
```bash
aerogramme --dev provider daemon
```
If you use the docker container, you must launch:
```bash
docker run --network host dxflrs/aerogramme:0.2.0 aerogramme --dev provider daemon
```
*From now on, Docker commands will not be given anymore. We assume that you have understood
how to map the documented commands to Docker.*
Now Aerogramme is listening for IMAP on `[::1]:1143` and for LMTP on `[::1]:1025`. A test account has been created with username `alice` and password `hunter2`.
<!--
## Checking that the service is listening as intended
*If you don't have socat installed on your computer or are not comfortable with interacting
with low-level tools, you can simply skip this section.*
You can try to connect on both ports with `socat` and see a welcome message from the server (CTRL + D to exit):
```
$ socat - tcp:localhost:1143,crlf
* OK [CAPABILITY MOVE UIDPLUS LIST-STATUS ENABLE CONDSTORE LITERAL+ IDLE IMAP4REV1 UNSELECT] Aerogramme
$ socat - tcp:localhost:1025,crlf
220 example.tld Service ready
```
If you got these outputs, well done, your daemon is correctly started and is working!
-->
## Connecting Thunderbird
Now that we have Aerogramme acting as a Mail Delivery Agent, we want to connect our devices to it.
For this quick start guide, we will use Thunderbird on desktop. On the create account page,
start filling the fields.
**Your full name:** `Alice`
**Email address:** `alice@example.tld`
**Password:** `hunter2`
Then click on **Configure Manually**. New fields will appear; fill them as follows.
**Protocol:** `IMAP`
**Hostname:** `::1`
**Port:** `1143`
**Connection Security:** `None`
**Authentication Method:** `Normal password`
**Username:** `alice`
*Aerogramme is not an SMTP server, but Thunderbird will not let you configure an account without one.
You can lie to Thunderbird by putting in any existing SMTP server (like `smtp.gmail.com`) and choosing "No Authentication" for the Authentication method field.*
A screenshot for reference:
![Thunderbird Account Creation Page filled with Aerogramme Quickstart configuration](/images/thunderbird-quickstart.png)
Clic "Done" (or "Re-test" then "Done) and your account should be configured:
![Overview of the mailbox list in Thunderbird](/images/thunderbird-quickstart-mbx.png)
As you can see, Thunderbird detected multiple mailboxes but they are all empty.
You can copy emails from your other mailbox to fill your Aerogramme one, test search, add/remove flags, and so on.
Mail reception will be emulated in the following section.
## Emulate the reception of an email
Remember, Aerogramme is about managing your mailbox, not managing mail transfer.
As this guide is specific to Aerogramme, the goal here is not to deploy Postfix, opensmtpd or any other MTA software.
Instead we will emulate their action: forwarding the messages they received to Aerogramme through the LMTP protocol.
Python natively supports the LMTP protocol: the following command uses Python to deliver a fake email:
```bash
python - <<EOF
from smtplib import LMTP
with LMTP("::1", port=1025) as lmtp:
    lmtp.set_debuglevel(2)
    lmtp.sendmail(
        "bob@example.tld",
        "alice@example.tld",
        "From: bob@example.tld\r\nTo: alice@example.tld\r\nSubject: Hello\r\n\r\nWorld\r\n",
    )
EOF
```
After running this script three times and clicking on 2 of these 3 emails in Thunderbird,
your mailbox should look like this:
![Overview of Thunderbird inbox](/images/thunderbird-quickstart-emails.png)
## That's all folks!
That's (already) the end of the quickstart section of Aerogramme.
You have learnt:
- What Aerogramme does
- How to download Aerogramme
- How to run Aerogramme
- How to connect Thunderbird to Aerogramme
- How MTAs transfer emails to Aerogramme
If you are considering a real-world deployment now, you will
have many topics to consider, like integration with existing MTA, integration with a service manager, TLS encryption,
user management, auto-discovery, persistent storage, high-availability design, observability, backup, updates, etc.
All these points will be covered in the [Cookbook](@/documentation/cookbook/_index.md), it should be your next stop!
<!--
*Earlier, we started Aerogramme in the provider mode with a test account configured with the transparent flavor.*
-->

View file

@ -1,10 +0,0 @@
+++
title = "Reference"
weight = 30
sort_by = "weight"
template = "documentation.html"
+++
Aerogramme uses multiple TOML files, their syntax is described in the dedicated pages:
- [The main configuration](@/documentation/reference/config.md) that must be passed to the daemon to start it
- [The user list configuration](@/documentation/reference/user-list.md) that is used when you choose to use Aerogramme's integrated user management

View file

@ -1,168 +0,0 @@
+++
title = "Configuration"
weight = 10
+++
Aerogramme can be run in 2 modes: `provider` and `companion`.
`provider` is the "traditional" daemon that is run by the service provider on their servers.
`companion` is an optional, local proxy a user can run on their local computer to have better privacy.
This is an Aerogramme-specific concept that is explained in detail in the concept page entitled [Per-user encryption](@/documentation/design/per-user-encryption.md).
This page describes both configurations.
## Provider
As the provider is modular (both the user management and the storage can be configured),
the configuration file is also modular. Depending on the modules you choose,
the configuration will look different. The reference configuration file will thus
be presented module by module. If you want to see complete configuration files in context,
please refer to the [cookbook](@/documentation/cookbook/_index.md).
The Aerogramme daemon can be launched in provider mode as follows:
```bash
aerogramme -c aerogramme.toml provider daemon
```
The syntax of the `aerogramme.toml` is described below.
### Common
The common part of the provider daemon configuration:
```toml
role = "Provider"
pid = "/var/run/aerogramme.pid"
[auth]
bind_addr = "[::1]:12345"
[imap_unsecure]
bind_addr="[::1]:143"
[imap]
bind_addr="[::]:993"
certs = "my-certs.pem"
key = "my-key.pem"
[lmtp]
bind_addr="[::1]:1025"
hostname="example.tld"
[users]
user_driver = "<PLACEHOLDER>"
# ... depends on the user driver
```
🔑 `role` - *Required, Enum (String)* - Either `Provider` or `Companion`, here only `Provider` is valid. It's a [poka-yoke](https://en.wikipedia.org/wiki/Poka-yoke) to make
sure people run Aerogramme in the intended mode.
🔑 `pid` - *Optional, Path (String)* - The path to the file where the daemon PID will be stored. It's required to use the `aerogramme provider reload` command.
🗃️ `auth` - *Optional* - A socket implementing the [Dovecot SASL protocol](https://doc.dovecot.org/developer_manual/design/auth_protocol/), it can be used by some SMTP servers like Postfix, Chasquid or Maddy to delegate their authentication.
🔑 `bind_addr` - *Required, Socket (String)* - On which IP address and port the Dovecot SASL protocol should listen. Please note that this protocol is not encrypted, and thus you should only bind to localhost.
🗃️ `imap_unsecure` - *Optional* - The cleartext IMAP configuration block; if not set, the IMAP cleartext service is not started. Be careful, it is dangerous to run IMAP without transport encryption.
🔑 `bind_addr` - *Required, Socket (String)* - On which IP address and port the cleartext IMAP service must bind, can be IPv6 or IPv4 syntax. (Port 143 is reserved for this use).
🗃️ `imap` - *Optional* - The TLS IMAP configuration block; if not set, the IMAP TLS service is not started. This is the recommended way to expose your IMAP service.
🔑 `bind_addr` - *Required, Socket (String)* - On which IP address and port the IMAP service must bind, can be IPv6 or IPv4 syntax. (Port 993 is reserved for this use).
🔑 `certs` - *Required, Path (String)* - A path to the PEM encoded certificate list
🔑 `key` - *Required, Path (String)* - A path to the PEM encoded private key
🗃️ `lmtp` - *Optional* - The LMTP configuration block; if not set, the LMTP service is not started.
🔑 `bind_addr` - *Required, Socket (String)* - On which IP address and port the LMTP service must bind, can be IPv6 or IPv4 syntax.
🗃️ `users` - *Required* - How users must be handled
🔑 `user_driver` - *Required, Enum (String)* - Define which user driver must be used, the rest of the configuration depends on it. Valid values are: `Ldap` and `Static`.
### LDAP user_driver
```toml
# root part, see above...
[users]
user_driver = "Ldap"
ldap_server = "ldap://[::1]:389"
pre_bind_on_login = true
bind_dn = "cn=admin,dc=example,dc=tld"
bind_password = "hunter2"
search_base = "ou=users,dc=example,dc=tld"
username_attr = "uid"
mail_attr = "mail"
crypto_root_attr = "aeroCryptoRoot"
storage_driver = "Garage"
s3_endpoint = "s3.example.tld"
k2v_endpoint = "k2v.example.tld"
aws_region = "garage"
aws_access_key_id_attr = "garageS3AccessKey"
aws_secret_access_key_attr = "garageS3AccessKey"
bucket_attr = "aeroBucket"
default_bucket = "aerogramme"
```
🔑 `user_driver="Ldap"` - to configure user management through LDAP, the `user_driver` value introduced in the previous section must be set to `Ldap`.
🔑 `ldap_server` - *Required, LDAP URI* - The LDAP URI of your LDAP server.
🔑 `pre_bind_on_login` - *Optional, boolean, defaults to false* - Sometimes users can't log in directly to the LDAP server and require a service account that will bind to LDAP and be in charge of checking users' credentials. If this is your case, set it to true.
🔑 `bind_dn` - *Optional, LDAP distinguished name (String)* - The distinguished name of your service account (ignored if `pre_bind_on_login` is not set to true)
🔑 `bind_password` - *Optional, Password (String)* - The password of your service account (ignored if `pre_bind_on_login` is not set to true)
🔑 `search_base` - *Optional, LDAP distinguished name (String)* - Once logged with the service account, where the user entries must be searched. (ignored if `pre_bind_on_login` is not set to true)
🔑 `storage_driver` - *Required, Enum (String)* - For now, only `Garage` is supported as a value (`InMemory` can be used for test purposes only)
🔑 `s3_endpoint` - *Required, String* - The S3 endpoint of the Garage instance
🔑 `k2v_endpoint` - *Required, String* - The K2V endpoint of the Garage instance
🔑 `aws_region` - *Required, String* - Regions are an AWS thing, for example `us-east-1`. If you followed Garage's documentation, you probably configured `garage` as your region.
🔑 `aws_access_key_id_attr` - *Required, String* - The LDAP attribute that will contain the user's key id: each user has their own key, their own bucket, and so on.
🔑 `aws_secret_access_key_attr` - *Required, String* - The LDAP attribute that will contain the user's secret key.
🔑 `bucket_attr` - *Optional, String* - The LDAP attribute that will contain the user's bucket dedicated to Aerogramme. Again, each user has their own bucket.
🔑 `default_bucket` - *Optional, String* - If `bucket_attr` is not set, we assume that all users have a bucket named following this parameter (as Garage has a local bucket namespace in addition to the global namespace, it is possible for different users to have independent buckets with the same name).
### Static user_driver
```toml
# root part, see above...
[users]
user_driver = "Static"
user_list = "/etc/aerogramme/users.toml"
```
🔑 `user_driver="Static"` - to configure user management through LDAP, the `user_driver` value introduced in the previous section must be set to `Ldap`.
🔑 `user_list` - *Required, Path (String)* - the path to the file that will contain the list of your users, the hash of their password, and where their data is stored. The file must exist, but it can be blank.
The syntax of the user list file is described in the dedicated reference page [User List](@/documentation/reference/user-list.md).
## Companion
Compared to the provider configuration, the companion configuration is static, so it is presented
in a single block.
```toml
role = "Companion"
pid = "/var/run/user/1000/aerogramme.pid"
[imap]
bind_addr = "[::1]:1143"
[users]
user_list = "/home/user/.config/aerogramme-users.toml"
```
The parameters explanation is the same as above.
Accounts can be configured with `aerogramme companion account add --login alice --setup setup.toml`.
The content of the `setup.toml` file should be given by your email service provider.
More information about its content is available in [User List](@/documentation/reference/user-list.md).
## Environment variables
Aerogramme supports some environment variables. A non-exhaustive list that applies to both provider and companion modes is given here.
`AEROGRAMME_CONFIG=aerogramme.toml` - *Path* - The path to the configuration file.
`RUST_LOG=info,aerogramme=debug` - *Logger config* - Configure logger verbosity.

View file

@ -1,98 +0,0 @@
+++
title = "RFC Coverage"
weight = 20
+++
Status code :
- 🔴 FAIL - not supported,
- 🟠 PART - partial support,
- 🟢 OK - acceptable support.
## IMF/MIME
Internet Message Format / Multipurpose Internet Mail Extensions are the data representation of an email: it explains how to encode metadata, like the recipient and the sender, but also the content of the email (HTML, plain text, attachments, etc.).
*Aerogramme is built on top of its sister project [Deuxfleurs/eml-codec](https://git.deuxfleurs.fr/Deuxfleurs/eml-codec).*
| RFC | Title | Status | Notes |
|-----|--------|-------|-------|
| [822](https://www.rfc-editor.org/rfc/rfc822) | ARPA INTERNET TEXT MESSAGES | 🟢 OK | ... |
| [2822](https://datatracker.ietf.org/doc/html/rfc2822) | Internet Message Format (2001) | 🟢 OK | ... |
| [5322](https://www.rfc-editor.org/rfc/rfc5322.html) | **Internet Message Format (2008)** | 🟢 OK | ... |
| [2045](https://datatracker.ietf.org/doc/html/rfc2045) | ↳ Multipurpose Internet Mail Extensions (MIME) Part One: Format of Internet Message Bodies | 🟢 OK | ... |
| [2046](https://datatracker.ietf.org/doc/html/rfc2046) | ↳ Multipurpose Internet Mail Extensions (MIME) Part Two: Media Types | 🟢 OK | ... |
| [2047](https://datatracker.ietf.org/doc/html/rfc2047) | ↳ MIME (Multipurpose Internet Mail Extensions) Part Three: Message Header Extensions for Non-ASCII Text | 🟢 OK | ... |
| [2048](https://datatracker.ietf.org/doc/html/rfc2048) | ↳ Multipurpose Internet Mail Extensions (MIME) Part Four: Registration Procedures | 🟢 OK | ... |
| [2049](https://datatracker.ietf.org/doc/html/rfc2049) | ↳ Multipurpose Internet Mail Extensions (MIME) Part Five: Conformance Criteria and Examples | 🟢 OK | ... |
| [6532](https://datatracker.ietf.org/doc/html/rfc6532) | Internationalized Email Headers | 🔴 FAIL | ... |
## IMAP
IMAP is a protocol to remotely access a mailbox.
*Based on [duesee/imap-codec](https://github.com/duesee/imap-codec) and [duesee/imap-flow](https://github.com/duesee/imap-flow).*
| RFC | Title | Status | Notes |
|-----|-------|--------|-------|
| [3501](https://www.rfc-editor.org/rfc/rfc3501) | **INTERNET MESSAGE ACCESS PROTOCOL - VERSION 4rev1** | 🟢 OK | ... |
| [9051](https://www.rfc-editor.org/rfc/rfc9051.html) | **Internet Message Access Protocol (IMAP) - Version 4rev2** | 🟠 PART | ... |
| [2177](https://www.rfc-editor.org/rfc/rfc2177.html) | ↳ IMAP4 IDLE command | 🟢 OK | ... |
| [2342](https://www.rfc-editor.org/rfc/rfc2342.html) | ↳ IMAP4 Namespace | 🔴 FAIL | ... |
| [3516](https://www.rfc-editor.org/rfc/rfc3516.html) | ↳ IMAP4 Binary Content Extension | 🔴 FAIL | ... |
| [3691](https://www.rfc-editor.org/rfc/rfc3691.html) | ↳ Internet Message Access Protocol (IMAP) UNSELECT command | 🟢 OK | ... |
| [4315](https://www.rfc-editor.org/rfc/rfc4315.html) | ↳ Internet Message Access Protocol (IMAP) - UIDPLUS extension | 🟢 OK | ... |
| [4731](https://www.rfc-editor.org/rfc/rfc4731.html) | ↳ IMAP4 Extension to SEARCH Command for Controlling What Kind of Information Is Returned | 🔴 FAIL | ... |
| [4959](https://www.rfc-editor.org/rfc/rfc4959.html) | ↳ IMAP Extension for Simple Authentication and Security Layer (SASL) Initial Client Response | 🔴 FAIL | ... |
| [5161](https://www.rfc-editor.org/rfc/rfc5161.html) | ↳ The IMAP ENABLE Extension | 🟢 OK | ... |
| [5182](https://www.rfc-editor.org/rfc/rfc5182.html) | ↳ IMAP Extension for Referencing the Last SEARCH Result | 🔴 FAIL | ... |
| [5258](https://www.rfc-editor.org/rfc/rfc5258.html) | ↳ Internet Message Access Protocol version 4 - LIST Command Extensions | 🔴 FAIL | ... |
| [5530](https://www.rfc-editor.org/rfc/rfc5530.html) | ↳ IMAP Response Codes | 🟠 PART | ... |
| [5819](https://www.rfc-editor.org/rfc/rfc5819.html) | ↳ IMAP4 Extension for Returning STATUS Information in Extended LIST | 🟢 OK | ... |
| [6154](https://www.rfc-editor.org/rfc/rfc6154.html) | ↳ IMAP LIST Extension for Special-Use Mailboxes | 🟠 PART | ... |
| [6851](https://www.rfc-editor.org/rfc/rfc6851.html) | ↳ Internet Message Access Protocol (IMAP) - MOVE Extension | 🟢 OK | ... |
| [7888](https://www.rfc-editor.org/rfc/rfc7888.html) | ↳ IMAP4 Non-synchronizing Literals | 🟢 OK | ... |
| [5550](https://www.rfc-editor.org/rfc/rfc5550.html) | **Lemonade Message Stores** | 🟠 PART | ... |
| [4551](https://www.rfc-editor.org/rfc/rfc4551.html) | ↳ IMAP Extension for Conditional STORE Operation or Quick Flag Changes Resynchronization | 🟢 OK | ... |
| [5162](https://www.rfc-editor.org/rfc/rfc5162.html) | ↳ IMAP4 Extensions for Quick Mailbox Resynchronization | 🔴 FAIL | ... |
| [5465](https://www.rfc-editor.org/rfc/rfc5465.html) | ↳ The IMAP NOTIFY Extension | 🔴 FAIL | ... |
| [7162](https://www.rfc-editor.org/rfc/rfc7162.html) | ↳ IMAP Extensions: Quick Flag Changes Resynchronization (CONDSTORE) and Quick Mailbox Resynchronization (QRESYNC) | 🟠 PART | ... |
## LMTP
LMTP is a protocol used by a Mail Transfer Agent (MTA, like Postfix) to deliver a mail to a local mailbox handled by a Mail Delivery Agent (MDA, like Dovecot or Aerogramme).
*Based on [Ekleog/kannader](https://github.com/Ekleog/kannader).*
| RFC | Title | Status | Notes |
|-----|-------|--------|-------|
| [2033](https://www.rfc-editor.org/rfc/rfc2033) | Local Mail Transfer Protocol | 🟢 OK | ... |
## CalDAV
CalDAV is a protocol to remotely access a calendar and notes.
| RFC | Title | Status | Notes |
|-----|-------|--------|-------|
| [RFC2445](https://www.rfc-editor.org/rfc/rfc2445.html) | Internet Calendaring and Scheduling Core Object Specification (iCalendar) | 🔴 FAIL | ... |
| [RFC2518](https://www.rfc-editor.org/rfc/rfc2518.html) | HTTP Extensions for Distributed Authoring -- WEBDAV | 🔴 FAIL | ... |
| [RFC4791](https://www.rfc-editor.org/rfc/rfc4791) | Calendaring Extensions to WebDAV (CalDAV) | 🔴 FAIL | ... |
## CardDAV
CardDAV is a protocol to remotely access contact information.
| RFC | Title | Status | Notes |
|-----|-------|--------|-------|
| [6532](https://www.rfc-editor.org/rfc/rfc6352) | CardDAV: vCard Extensions to Web Distributed Authoring and Versioning (WebDAV) | 🔴 FAIL | ... |
## See also
- [Basecamp/Hey reference on email](https://github.com/basecamp/mail/tree/master/reference)
- [Microsoft reference on IMF](https://learn.microsoft.com/en-us/previous-versions/office/developer/exchange-server-2010/aa563098(v=exchg.140))
- [OCaml Mirage project, mrmime](https://github.com/mirage/mrmime)

View file

@ -1,79 +0,0 @@
+++
title = "User list"
weight = 11
+++
This file is used by Aerogramme to store its users
when you choose to use its internal user management feature.
Aerogramme also supports externalized user management (like [LDAP](@/documentation/reference/config.md)): in this case, this file is not needed.
Aerogramme provides utilities commands to generate the `user_list` file, but you can also generate it manually if you prefer. Both methods will be described here.
## CLI-assisted generation of the file
You can add a user to the `user_list` file with the command:
```bash
aerogramme -c aerogramme.toml provider account add --login alice --setup setup.toml
```
The `setup.toml` must be previously created as follow:
```toml
email_addresses = [ "alice@example.tld", "alice.smith@example.tld" ]
clear_password = "hunter2"
storage_driver = "Garage"
s3_endpoint = "s3.example.tld"
k2v_endpoint = "k2v.example.tld"
aws_region = "garage"
aws_access_key_id = "GK01dfa..."
aws_secret_access_key = "a32f..."
bucket = "aerogramme"
```
🔑 `email_addresses` - *Required, Array of emails (Array of String)* - The email addresses that will be associated to this account. Used by the LMTP service to know to which user an email must be delivered.
🔑 `clear_password` - *Optional, Password (String)* - The clear text password of the user. If not set, it will be interactively asked.
🔑 `storage_driver` - *Required, Enum (String)* - The only option is "Garage" for now (you can also use "InMemory" with nothing more for testing purposes).
🔑 `s3_endpoint` - *Required, String* - The S3 endpoint of the Garage instance
🔑 `k2v_endpoint` - *Required, String* - The K2V endpoint of the Garage instance
🔑 `aws_region` - *Required, String* - Regions are an AWS thing, for example `us-east-1`. If you followed Garage's documentation, you probably configured `garage` as your region.
🔑 `aws_access_key_id` - *Required, String* - The user's access key id
🔑 `aws_secret_access_key` - *Required, String* - The user's secret access key
🔑 `bucket` - *Required, String* - The user's bucket in which Aerogramme must store their data
If your Aerogramme daemon is already running, you must reload it to activate this account:
```bash
aerogramme -c aerogramme.toml provider reload
```
*Of course, restarting it also works.*
The previous `... account add ...` command, under the hood, parsed the existing
`users.toml` file, added the new account to it, then reserialized the file with the new information. The generated content is given as an example in the following section.
## Manual edit
Some people might want to generate their configuration from another source of truth (eg. Ansible or NixOS). This page will explain the different options available.
The following file has been generated by the `... account add ...` command.
```toml
[alice]
email_addresses = ["alice@example.tld", "alice.smith@example.tld"]
password = "$argon2id$v=19$m=19456,t=2,p=1$lW1IFw59vyZAgQvyPkCB6w$R4y9T+Zekx6tHpTInsXcOZ0H1/HIJoqckiagJq/292U"
crypto_root = "aero:cryptoroot:pass:t5tC2QiL+A543Lg59FmE4XxmS0cSdOWv3ZD1EeeC8CScgR5feMFJT+KyUpjRzplWTEArwTWZ0Ff0VA+HU+P7sbuKqshm5GnN2x7kqePmqRMfLf/q6XiucJmfcNiGVveyrzsRavbs6Vy2J/HyM/FytZ/4eLnZqH8pERpT5UWJdWQehDQnLpG6OEQRgqowun7m+CqF6A/vKydQUBRzMdvX6UGD2bIHLmhRBqIzOYDJQGxQ"
storage_driver = "Garage"
s3_endpoint = "s3.example.tld"
k2v_endpoint = "k2v.example.tld"
aws_region = "garage"
aws_access_key_id = "GK01dfa..."
aws_secret_access_key = "a32f..."
bucket = "aerogramme"
```
🔑 `email_addresses`, `storage_driver`, `s3_endpoint`, `k2v_endpoint`, `aws_region`, `aws_access_key_id`, `aws_secret_access_key`, `bucket` are the same as above.
🔑 `password` - *Required, String* - To generate a compatible hash, run `aerogramme tools password-hash`
🔑 `crypto_root` - *Required, String* - To generate a compatible string, run `aerogramme tools crypto-root new`

1
garage Submodule

@ -0,0 +1 @@
Subproject commit b17d59cfabbe92c509f4888cae83f6053a8cab1e

2227
package-lock.json generated Executable file

File diff suppressed because it is too large

View file

@ -4,16 +4,12 @@
"description": "",
"main": "index.js",
"scripts": {
"build": "NODE_ENV=production npx tailwindcss -i src/input.css -o public/style.css --minify",
"start": "npx tailwindcss -i ./src/input.css -o ./public/style.css --minify --watch"
"start": "npx tailwindcss -i ./src/input.css -o ./static/style.css --minify --watch"
},
"keywords": [],
"author": "",
"license": "ISC",
"devDependencies": {
"tailwindcss": "^3.0.7"
},
"dependencies": {
"html-validate": "^7.18.0"
}
}

Some files were not shown because too many files have changed in this diff