```
@@ -49,7 +49,7 @@ navbar_items = [
 {code = "en", nav_items = [
     { url = "$BASE_URL/", name = "Overview" },
     { url = "$BASE_URL/documentation/", name = "Docs" },
-    { url = "https://garagehq.deuxfleurs.fr/blog/", name = "Blog ↗" }
+    { url = "$BASE_URL/blog/", name = "Blog" }
 ]},
]
```

+++
title="Garage will be at FOSDEM'22"
date=2022-02-02
+++

*FOSDEM is an international meeting about Free Software, organized in Brussels. Next Sunday, February 6th, 2022, we will be there to present Garage.*

<!-- more -->

---

In 2000, a Belgian free software activist going by the name of Raphael Baudin set out to create a small event for free software developers in Brussels. This event quickly became the "Free and Open Source Developers' European Meeting", FOSDEM for short. 22 years later, FOSDEM is a major event for free software developers around the world. And this year, we have the immense pleasure of announcing that the Deuxfleurs association will be there to present Garage.

The event is usually hosted by the Université Libre de Bruxelles (ULB) and welcomes around 5000 people. But due to COVID, the event has been taking place online for the last few years. Nothing too unfamiliar to us, as the organization uses the same tools as we do: a combination of Jitsi and Matrix.

We are of course extremely honored that our presentation was accepted. If technical details are your thing, we invite you to come and share this event with us. In any case, the event will be recorded and available as a VOD (Video On Demand) afterward. Concerning the details of the organization:

**When?** On Sunday, February 6th, 2022, from 10:30 AM to 11:00 AM CET.

**What for?** Introducing the Garage storage platform.

**By whom?** The presentation will be given by Alex; other developers will be present to answer questions.

**For whom?** The presentation targets a technical audience that is knowledgeable in software development or systems administration.

**Price:** FOSDEM'22 is an entirely free event.

**Where?** Online, in the Software Defined Storage devroom.

- [Join the room interactively (video and chat)](https://chat.fosdem.org/#/room/%23sds-devroom:fosdem.org)
- [Join the room as a spectator (video only)](https://live.fosdem.org/watch/dsds)
- [Event details on the FOSDEM'22 website](https://fosdem.org/2022/schedule/event/sds_garage_introduction)

And if you are not so much of a technical person, but you're dreaming of a more ethical and emancipatory digital world, stay tuned for news from the Deuxfleurs association, as we will likely have other events very soon!

+++
title="Introducing Garage, our self-hosted distributed object storage solution"
date=2022-02-01
+++

*Deuxfleurs is a non-profit based in France that aims to defend and promote individual freedoms and rights on the Internet. In their quest to build a decentralized, resilient self-hosting infrastructure, they have found that existing software is often ill-suited to such a particular deployment scenario. In the context of data storage, Garage was built to provide a highly available data store that exploits redundancy over different geographical locations, and does its best not to be too impacted by network latencies.*

<!-- more -->

---

Hello! We are Deuxfleurs, a non-profit based in France working to promote self-hosting and small-scale hosting.

What does that mean? Well, we figured that big tech monopolies such as Google, Facebook, or Amazon today hold disproportionate power and are becoming quite dangerous to us, citizens of the Internet. They know everything we are doing, saying, and even thinking, and they are not making good use of that information. The interests of these companies are those of the capitalist elite: they are most interested in making huge profits by exploiting the Earth's precious resources, producing, advertising, and selling us massive amounts of stuff we don't need. They don't truly care about the needs of the people, nor do they care that planetary destruction is underway because of them.

Big tech monopolies are in a particularly strong position to influence our behaviors, consciously or not, because we rely on them for selecting the online content we read, watch, or listen to. Advertising is omnipresent, and because they know us so well, they can subvert us into thinking that a mindless consumer society is what we truly want, whereas we would most likely choose otherwise if we had the chance to think for ourselves.

We don't want that. That's not what the Internet is for. Freedom is freedom from influence: the ability to do things by oneself, for oneself, on one's own terms. Self-hosting is both the means by which we reclaim this freedom on the Internet (by not using the services of big tech monopolies, and thus removing ourselves from their influence) and the result of applying our critical thinking and our technical abilities to build the Internet that suits us.

Self-hosting means that we don't use cloud services. Instead, we store our personal data on computers that we own, which we run at home. We build local communities to share the services that we run with non-technical people. We communicate with other groups that do the same (or, sometimes, that don't) thanks to standard protocols such as HTTP, e-mail, or Matrix, which allow a global community to exist outside of big tech monopolies.

### Self-hosting is a hard problem

As I said, self-hosting means running our own hardware at home and providing 24/7 Internet services from there. We have many reasons for doing this. One is that it is the only way we can truly control who has access to our data. Another one is that it helps us be aware of the physical substrate of which the Internet is made: making the Internet run has an environmental cost that we want to evaluate and keep under control. The physical hardware also gives us a sense of community, calling to mind all of the people that could currently be connected and making use of our services, and reminding us of the purpose for which we are doing this.

If you have a home, you know that bad things can happen there too. The power grid is not infallible, and neither is your Internet connection. Fires and floods happen. And the computers we are running can themselves crash at any moment, for any number of reasons. Self-hosted solutions today are often not equipped to face such challenges and might suffer from unavailability or data loss as a consequence.

If we want to grow our communities and attract more people that might be sympathetic to our vision of the world, we need a baseline of quality for the services we provide. Users can tolerate some flaws or imperfections in the name of defending and promoting their ideals, but if the services fail catastrophically, becoming unavailable at critical times or losing users' precious data, the compromise is much harder to make, and people will be tempted to go back to the comfortable lifestyle offered by big tech companies.

Fixing availability, that is, making services reliable even when hosted at unreliable locations or on unreliable hardware, is one of the main objectives of Deuxfleurs, and in particular of Garage, the project we are building.

### Distributed systems to the rescue

Distributed systems, or distributed computing, is a set of techniques that can be applied to make computer services more reliable, by making them run on several computers at once. It so happens that a few of us have studied distributed systems, which helps a lot (some of us even have PhDs!).

The following concepts of distributed computing are particularly relevant to us:

- **Crash tolerance** is when a service that runs on several computers at once can continue operating normally even when one (or a small number) of the computers stops working.

- **Geo-distribution** is when the computers that make up a distributed system are not all located in the same facility. Ideally, they would even be spread over different cities, so that outages affecting one region do not prevent the rest of the system from working.

We set out to apply these concepts at Deuxfleurs to build our infrastructure, in order to provide services that are replicated over several machines in several geographical locations, so that we are able to provide good availability guarantees to our users. We try, as much as possible, to use software packages that already exist and are freely available, for example the Linux operating system and the HashiCorp suite (Nomad and Consul).

Unfortunately, in the domain of distributed data storage, the available options weren't entirely satisfactory in our case, which is why we launched the development of our own solution: Garage. We will talk more in other blog posts about why Garage is better suited to us than alternative options. In this post, I will simply try to give a high-level overview of what Garage is.

### What is Garage, exactly?

Garage is a distributed storage solution that automatically replicates your data on several servers. Garage takes into account the geographical location of servers, and ensures that copies of your data are located at different locations when possible, for maximal redundancy, a unique feature in the landscape of distributed storage systems.

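In practice, this geo-awareness is expressed in the cluster layout, where each node is tagged with the zone it lives in; Garage then places the copies of each piece of data in distinct zones when it can. As a rough sketch (the node identifier prefixes, zone names, capacities, and tags below are made up, and exact flags may vary between versions), configuring a three-zone cluster looks something like this:

```
garage layout assign 563e -z paris -c 10 -t mercury
garage layout assign 86f0 -z lyon -c 10 -t venus
garage layout assign 6814 -z brussels -c 10 -t mars
garage layout apply --version 1
```
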
Garage implements the Amazon S3 protocol, a de facto standard that makes it compatible with a large variety of existing software. For instance, it can be used as a storage backend for many self-hosted web applications such as Nextcloud, Matrix, Mastodon, Peertube, and many others, replacing the local file system of a server with a distributed storage layer. Garage can also be used to synchronize your files or store your backups with utilities such as Rclone or Restic. Last but not least, Garage can be used to host static websites, such as the one you are currently reading, which is served directly by the Garage cluster we host at Deuxfleurs.

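For example, pointing Rclone at a Garage cluster only requires declaring it as a generic S3 remote. This is a hedged sketch: the endpoint, region, and credentials below are placeholders that depend on your own deployment:

```
[garage]
type = s3
provider = Other
endpoint = https://s3.example.deuxfleurs.shop
region = garage
access_key_id = GK...
secret_access_key = ...
```

After which something like `rclone copy ./backups garage:my-bucket` synchronizes a local folder to a bucket.
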
Garage leverages the theory of distributed systems, and in particular *Conflict-free Replicated Data Types* (CRDTs for short), a set of mathematical tools that help us write distributed software that runs faster, by avoiding some kinds of unnecessary chit-chat between servers. In a future blog post, we will show how this allows us to significantly outperform Minio, our closest competitor (another self-hostable implementation of the S3 protocol).

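To give an intuition of what a CRDT is (this is an illustrative sketch, not code from Garage): a last-writer-wins register can be merged by any two servers without coordination, because the merge function is commutative, associative, and idempotent, so replicas converge to the same value no matter the order in which they exchange their states:

```
// A minimal last-writer-wins (LWW) register, one of the simplest CRDTs.
// Illustrative sketch only; Garage's actual CRDT types are more involved.
#[derive(Clone, Debug, PartialEq)]
struct LwwRegister<T> {
    timestamp: u64, // logical time of the last write
    value: T,
}

impl<T: Clone> LwwRegister<T> {
    fn set(&mut self, timestamp: u64, value: T) {
        if timestamp > self.timestamp {
            *self = LwwRegister { timestamp, value };
        }
    }

    // Merging keeps the value with the highest timestamp. This operation is
    // commutative, associative, and idempotent, so replicas can exchange
    // states in any order, as many times as they want, and still converge.
    fn merge(&mut self, other: &LwwRegister<T>) {
        if other.timestamp > self.timestamp {
            *self = other.clone();
        }
    }
}

fn main() {
    let mut a = LwwRegister { timestamp: 1, value: "v1" };
    let mut b = a.clone();
    a.set(3, "written on server A");
    b.set(2, "written on server B");
    let b_state = b.clone();
    b.merge(&a); // B learns about A's write
    a.merge(&b_state); // A learns about B's write
    assert_eq!(a, b); // both replicas converged, no coordination needed
}
```
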
On the software engineering side, we are committed to making Garage a tool that is reliable, lightweight, and easy to administrate. Garage is written in the Rust programming language, which helps us ensure the stability and safety of the software, and allows us to build software that is fast and uses little memory.

### Conclusion

The current version of Garage is version 0.6, which is a *beta* release. This means that it hasn't yet been tested by many people, and we might have overlooked some edge cases in which it would not perform as expected.

However, we are already actively using Garage at Deuxfleurs for many purposes, and it is working exceptionally well for us. We are currently using it to store backups of personal files, to store the media files that we send and receive over the Matrix network, as well as to host a small but increasing number of static websites. Our current deployment hosts about 200 000 files spread across 50 buckets, for a total size of slightly above 500 GB. These numbers may seem small compared to the datasets you could expect a typical cloud provider to handle; however, these sizes are fairly typical of the small-scale self-hosted deployments we are targeting, and our Garage cluster is in no way nearing its capacity limit.

Today, we are proudly releasing Garage's new website, with updated documentation pages. Poke around to try to understand how the software works, and try installing your own instance! Your feedback is precious to us, and we would be glad to hear back from you on our [issue tracker](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues), by [e-mail](mailto:garagehq@deuxfleurs.fr), or on our [Matrix channel](https://matrix.to/#/%23garage:deuxfleurs.fr) (`#garage:deuxfleurs.fr`).

+++
title="We tried IPFS over Garage"
date=2022-07-04
+++

*Once you have spawned your Garage cluster, you might be interested in finding ways to efficiently share your content with the rest of the world, such as by joining federated platforms. In this blog post, we experiment with interconnecting the InterPlanetary File System (IPFS) daemon with Garage. We discuss the different bottlenecks and limitations of the software stack in its current state.*

<!-- more -->

---

<!--Garage has been designed to be operated inside the same "administrative area", i.e. operated by a single organization made of members that fully trust each other. It is an intended design decision: trusting each other enables Garage to spread data over the machines instead of duplicating it. Still, you might want to share and collaborate with the rest of the world, and this can be done in 2 ways with Garage: through the integrated HTTP server that can serve your bucket as a static website, or by connecting it to an application that will act as a "proxy" between Garage and the rest of the world. We call "proxy" software that knows how to speak federated protocols (e.g. ActivityPub, Solid, RemoteStorage, etc.) or distributed/p2p protocols (e.g. BitTorrent, IPFS, etc.).-->

## Some context

People often struggle to see the difference between IPFS and Garage, so let's start by making clear that these projects are complementary and not interchangeable.

Personally, I see IPFS as the intersection of BitTorrent and a file system. BitTorrent remains to this day one of the most efficient ways to deliver a copy of a file or a folder to a very large number of destinations. However, it lacks some form of interactivity: once a torrent file has been generated, you can't simply add or remove files from it. By presenting itself more like a file system, IPFS is able to handle this use case out of the box.

<!--IPFS is a content-addressable network built in a peer-to-peer fashion. In simple words, it means that you query the content you want with its identifier, without having to know *where* it is hosted on the network, and especially on which machine. As a side effect, you can share content over the Internet without any configuration (no firewall, NAT, fixed IP, DNS, etc.).-->

<!--However, IPFS does not enforce any property on the durability and availability of your data: the collaboration mentioned earlier happens only on a spontaneous basis. So at first, if you want to be sure that your content remains alive, you must keep it on your node. And if nobody makes a copy of your content, you will lose it as soon as your node goes offline and/or crashes. Furthermore, if you need multiple nodes to store your content, IPFS is not able to automatically place content on your nodes, enforce a given replication amount, check the integrity of your content, and so on.-->

However, you would probably not rely on BitTorrent to durably store the encrypted holiday pictures you shared with your friends, as content on BitTorrent tends to vanish when no one in the network has a copy of it anymore. The same applies to IPFS. Even if at some point everyone has a copy of the pictures on their hard disk, people might delete these copies after a while without you knowing it. You also can't easily collaborate on storing this common treasure. For example, there is no automatic way to say that Alice and Bob are in charge of storing the first half of the archive while Charlie and Eve are in charge of the second half.

➡️ **IPFS is designed to deliver content.**

*Note: the IPFS project has another project named [IPFS Cluster](https://cluster.ipfs.io/) that allows servers to collaborate on hosting IPFS content. [Resilio](https://www.resilio.com/individuals/) and [Syncthing](https://syncthing.net/) both feature protocols inspired by BitTorrent to synchronize a tree of your file system between multiple computers. Reviewing these solutions is out of the scope of this article; feel free to try them by yourself!*

Garage, on the other hand, is designed to automatically spread your content over all your available nodes, in a manner that makes the best possible use of your storage space. At the same time, it ensures that your content is always replicated exactly 3 times across the cluster (or less, if you change a configuration parameter), on different geographical zones when possible.

<!--To access this content, you must have an API key, and have a correctly configured machine available over the network (including DNS/IP address/etc.). If the amount of traffic you receive is way larger than what your cluster can handle, your cluster will simply become unresponsive. Sharing content across people that do not trust each other, i.e. who operate independent clusters, is not a feature of Garage: you have to rely on external software.-->

However, this means that when content is requested from a Garage cluster, only 3 nodes are capable of returning it to the user. As a consequence, when some content becomes popular, this subset of nodes might become a bottleneck. Moreover, all resources (keys, files, buckets) are tightly coupled to the Garage cluster on which they exist; servers from different clusters can't collaborate to serve the same data (without additional software).

➡️ **Garage is designed to durably store content.**

In this blog post, we will explore whether we can combine efficient delivery and strong durability by connecting an IPFS node to a Garage cluster.

## Try #1: Vanilla IPFS over Garage

<!--If you are not familiar with IPFS, it is available both as a desktop app and a [CLI app](https://docs.ipfs.io/install/command-line/); in this post we will cover the CLI app, as it is often easier to understand how things work internally. You can quickly follow the official [quick start guide](https://docs.ipfs.io/how-to/command-line-quick-start/#initialize-the-repository) to have an up and running node.-->

IPFS is available as a pre-compiled binary, but to connect it with Garage, we need a plugin named [ipfs/go-ds-s3](https://github.com/ipfs/go-ds-s3). The Peergos project maintains a fork of this plugin, as the original is known for hitting Amazon's rate limits ([#105](https://github.com/ipfs/go-ds-s3/issues/105), [#205](https://github.com/ipfs/go-ds-s3/pull/205)). This fork is the one we will try in the following.

The easiest way to use this plugin is to bundle it in the main IPFS daemon and recompile IPFS from source. Following the instructions in the README file allowed me to spawn an IPFS daemon configured with S3 as the block store.

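For reference, the plugin is configured through the datastore section of the IPFS config file. The snippet below is a hedged sketch based on the plugin's README; the region, bucket, endpoint, and keys are placeholders for your own Garage deployment:

```
{
  "type": "s3ds",
  "region": "garage",
  "bucket": "ipfs-blocks",
  "rootDirectory": "",
  "regionEndpoint": "s3.example.deuxfleurs.shop",
  "accessKey": "GK...",
  "secretKey": "..."
}
```
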
I had a small issue when adding the plugin to the `plugin/loader/preload_list` file: the given command lacks a newline, so I had to edit the file manually after running it. The issue was directly visible and easy to fix.

After that, I just ran the daemon and accessed the web interface to upload a photo of my dog:

![A dog](./dog.jpg)

A content identifier (CID) was assigned to this picture:

```
QmNt7NSzyGkJ5K9QzyceDXd18PbLKrMAE93XuSC2487EFn
```

The photo is now accessible on the whole network. For example, you can inspect it [from the official gateway](https://explore.ipld.io/#/explore/QmNt7NSzyGkJ5K9QzyceDXd18PbLKrMAE93XuSC2487EFn):

![A screenshot of the IPFS explorer](./explorer.png)

At the same time, I was monitoring Garage (through [the OpenTelemetry stack we implemented earlier this year](/blog/2022-v0-7-released/)). Just after launching the daemon - and before doing anything - I was met by this surprisingly active Grafana plot:

![Grafana API request rate when IPFS is idle](./idle.png)
<center><i>Legend: y axis = requests per 10 seconds, x axis = time</i></center><p></p>

It shows that on average, we handle around 250 requests per second. Most of these requests are in fact the IPFS daemon checking whether a block exists in Garage. These requests are triggered by IPFS's DHT service: since my node is reachable over the Internet, it acts as a public DHT server and has to answer global block requests over the whole network. Each time it receives a request for a block, it sends a request to its storage back-end (in our case, to Garage) to check if a copy exists locally.

*We will try to tweak the IPFS configuration later; we know that we can deactivate the DHT server. For now, we will continue with the default parameters.*

When I started interacting with the IPFS node by sending a file or browsing the default proposed catalogs (i.e. the full XKCD archive), I quickly hit the limits of our monitoring stack, which, in its default configuration, is not able to ingest the large amount of tracing data produced by the high number of S3 requests originating from the IPFS daemon. We got the following error in Garage's logs:

```
OpenTelemetry trace error occurred. cannot send span to the batch span processor because the channel is full
```

At this point, I didn't feel that it would be very interesting to fix this issue just to know the exact number of requests made to the cluster. In my opinion, such a simple task as sharing a picture should not require so many requests to the storage server anyway. As a comparison, this whole webpage, with its pictures, triggers around 10 requests on Garage when loaded, not thousands.

I think we can conclude that this first try was a failure. The S3 storage plugin for IPFS makes far too many requests and would need substantial work to be optimized. However, the people behind Peergos are known to run their IPFS-based software in production with an S3 backend, so we should not give up too fast.

## Try #2: Peergos over Garage

[Peergos](https://peergos.org/) is designed as an end-to-end encrypted and federated alternative to Nextcloud. Internally, it is built on IPFS and is known to have a [deep integration with the S3 API](https://peergos.org/posts/direct-s3). One important point of this integration is that your browser is able to bypass both the Peergos daemon and the IPFS daemon to read and write IPFS blocks directly from the S3 API server.

*I don't know exactly whether Peergos is still considered alpha quality, or whether a beta version was released, but keep in mind that it might be more experimental than you'd like!*

<!--To give ourselves some courage in this adventure, let's start with a nice screenshot of their web UI:

![Peergos Web UI](./peergos.jpg)-->

Starting Peergos on top of Garage required some small patches on both sides, but in the end, I was able to get it working. I was able to upload my file, see it in the interface, create a link to share it, rename it, move it to a folder, and so on:

![A screenshot of the Peergos interface](./upload.png)

At the same time, the fans of my computer started to become a bit loud! A quick look at Grafana showed, again, a very active Garage:

![Screenshot of a grafana plot showing requests per second over time](./grafa.png)
<center><i>Legend: y axis = requests per 10 seconds on log(10) scale, x axis = time</i></center><p></p>

Again, the workload is dominated by S3 `HeadObject` requests. After taking a look at `~/.peergos/.ipfs/config`, it seems that the IPFS configuration used by the Peergos project is quite standard, which means that, as before, we are acting as a DHT server and have to answer thousands of block requests every second.

We also have some traffic on the `GetObject` and `OPTIONS` endpoints (with peaks up to ~45 req/sec). This traffic is all generated by Peergos. The `OPTIONS` HTTP verb shows up because we use the direct access feature of Peergos, meaning that our browser talks directly to Garage and has to use CORS preflight requests for security.

Internally, IPFS splits files into blocks of less than 256 kB. My picture is thus split into 2 blocks, requiring 2 requests to Garage to fetch it. But even knowing that IPFS splits files into small blocks, I can't explain why we see so many `GetObject` requests.

## Try #3: Optimizing IPFS

<!--
Routing = dhtclient
![](./grafa2.png)
-->

We have seen in our 2 previous tries that the main source of load was the federation, and in particular the DHT server. In this section, we'd like to artificially remove this problem from the equation by preventing our IPFS node from federating, and see what pressure Peergos alone puts on our local cluster.

To isolate IPFS, I set its routing type to `none`, cleared its bootstrap node list, and configured the swarm socket to listen only on `localhost`. Finally, I restarted Peergos and was able to observe this more peaceful graph:

![Screenshot of a grafana plot showing requests per second over time](./grafa3.png)
<center><i>Legend: y axis = requests per 10 seconds on log(10) scale, x axis = time</i></center><p></p>

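For reference, here is roughly how these isolation steps translate into commands on a stock IPFS daemon; treat this as a sketch, as flag names and config paths may vary between versions:

```
# Disable content routing entirely (no DHT participation)
ipfs config Routing.Type none
# Forget all bootstrap peers so the node cannot join the public swarm
ipfs bootstrap rm --all
# Listen for peers on the loopback interface only
ipfs config --json Addresses.Swarm '["/ip4/127.0.0.1/tcp/4001"]'
```
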
Now, for a given endpoint, we have peaks of around 10 req/sec, which is way more reasonable. Furthermore, we are no longer hammering our back-end with requests for objects that are not there.

After discussing with the developers, it appears that it is possible to go even further by running Peergos without IPFS: this is what they do for some of their tests. If, at the same time, we increased the size of data blocks, we might obtain a non-federated but quite efficient end-to-end encrypted "cloud storage" that works well over Garage, with our clients directly hitting the S3 API!

For setups where federation is a hard requirement, the next step would be to gradually allow our node to connect to the IPFS network, while ensuring that the traffic to the Garage cluster remains low. For example, configuring our IPFS node as a `dhtclient` instead of a `dhtserver` would exempt it from answering public DHT requests. Keeping an in-memory index (as a hash map and/or a Bloom filter) of the blocks stored on the current node could also drastically reduce the number of requests, as sketched below. It could also be interesting to explore ways to run, in one process, a full IPFS node with a DHT server on the regular file system, and reserve a second process configured with the S3 back-end to handle only our Peergos data.

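To illustrate the in-memory index idea (a speculative sketch, not something implemented in Garage or IPFS): before issuing an S3 `HeadObject` request, the daemon could consult a local Bloom filter. A negative answer guarantees the block is absent, so the vast majority of requests for blocks we don't store would never reach Garage:

```
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// A tiny Bloom filter over block CIDs. False positives are possible (we
// would then make one real HeadObject request for nothing), but false
// negatives are not: "absent" answers never touch the S3 back-end.
struct BlockIndex {
    bits: Vec<bool>,
    hashes: u64,
}

impl BlockIndex {
    fn new(size: usize, hashes: u64) -> Self {
        BlockIndex { bits: vec![false; size], hashes }
    }

    // Derive k bit positions by hashing (cid, i) for i in 0..k.
    fn positions<'a>(&'a self, cid: &'a str) -> impl Iterator<Item = usize> + 'a {
        (0..self.hashes).map(move |i| {
            let mut h = DefaultHasher::new();
            (cid, i).hash(&mut h);
            (h.finish() as usize) % self.bits.len()
        })
    }

    fn insert(&mut self, cid: &str) {
        for p in self.positions(cid).collect::<Vec<_>>() {
            self.bits[p] = true;
        }
    }

    // `false` means the block is definitely not stored locally:
    // no need to ask the S3 back-end at all.
    fn may_contain(&self, cid: &str) -> bool {
        self.positions(cid).all(|p| self.bits[p])
    }
}

fn main() {
    let mut index = BlockIndex::new(1 << 20, 4);
    index.insert("QmNt7NSzyGkJ5K9QzyceDXd18PbLKrMAE93XuSC2487EFn");
    // A DHT request for a block we store: worth one HeadObject request.
    assert!(index.may_contain("QmNt7NSzyGkJ5K9QzyceDXd18PbLKrMAE93XuSC2487EFn"));
    // A request for a random block from the network: answered from memory
    // (with overwhelming probability at this filter size).
    assert!(!index.may_contain("QmSomeOtherBlockWeNeverStoredAnywhereAtAll42"));
}
```
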
However, even with these optimizations, the best we can expect is the traffic shown on the previous plot. From a theoretical perspective, this is still higher than the optimal number of requests. On S3, storing a file, downloading a file, and listing available files are all actions that can be done in a single request. Even if all requests don't have the same cost on the cluster, processing a request has a non-negligible fixed cost.

## Are S3 and IPFS incompatible?

Tweaking IPFS to try to make it work on an S3 backend is all well and good, but in some sense, the assumptions made by IPFS are fundamentally incompatible with using S3 as block storage.

First, data on IPFS is split into relatively small chunks: all IPFS blocks must be less than 1 MB, with most being 256 kB or less. This means that large files or complex directory hierarchies require thousands of blocks to be stored, each of which is mapped to a single object in the S3 storage back-end. On the other side, S3 implementations such as Garage are made to handle very large objects efficiently, and they also provide their own primitives for rapidly listing all the objects present in a bucket or a directory. There is thus a huge loss in performance when data is stored in IPFS's block format, because this format does not take advantage of the optimizations provided by S3 back-ends in their standard usage scenarios. Instead, it requires storing and retrieving thousands of small S3 objects even for very simple operations such as retrieving a file or listing a directory, incurring a fixed overhead each time.

This problem is compounded by the design of the IPFS data exchange protocol, in which nodes may request any data block from any other node in the network in their quest to answer a user's request (like retrieving a file, etc.). When a node is missing a file or a directory it wants to read, it has to make as many requests to other nodes as there are IPFS blocks in the object to be read. On the receiving end, this means that any fully-fledged IPFS node has to answer large numbers of requests for blocks required by users everywhere on the network, which is what we observed in our experiment above. We were, however, surprised to observe that many requests coming from the IPFS network were for blocks our node didn't have a copy of: this means that somewhere in the IPFS protocol, an overly optimistic assumption is made on where data could be found in the network, and this ends up translating into many requests between nodes that return negative results. When IPFS blocks are stored on a local filesystem, answering these requests fast might be possible. However, when using an S3 server as a storage back-end, this becomes prohibitively costly.

If one wanted to design a distributed storage system for IPFS data blocks, they would probably need to start at a lower level. Garage itself makes use of a block storage mechanism that allows small-sized blocks to be stored on a cluster and accessed rapidly by the nodes that need them. However, passing through the entire abstraction that provides an S3 API is wasteful and redundant, as this API is designed to provide advanced functionality such as mutating objects, associating metadata with objects, listing objects, etc. Plugging the IPFS daemon directly into a lower-level distributed block storage like Garage's might yield much better results by bypassing all of this complexity.

## Conclusion

Running IPFS over an S3 storage backend does not quite work out of the box in terms of performance. Having identified that the main problem is linked to the DHT service, we proposed some improvements (disabling the DHT server, keeping an in-memory index of the blocks, and using the S3 back-end only for user data).

From an IPFS design perspective, it seems however that the numerous small blocks handled by the protocol do not map trivially to efficient use of the S3 API, and could thus be a limiting factor for any optimization work.

As part of my testing journey, I also stumbled upon some posts about performance issues on IPFS (e.g. [#6283](https://github.com/ipfs/go-ipfs/issues/6283)) that are not linked to the S3 connector. I might be negatively influenced by my failure to connect IPFS with S3, but at this point, I'm tempted to think that IPFS is intrinsically resource-intensive from a block activity perspective.

On our side at Deuxfleurs, we will continue our investigations towards more *minimalistic* software. This choice makes sense for us, as we want to reduce the ecological impact of our services by deploying fewer servers that use less energy and are renewed less frequently.

After discussing with the Peergos maintainers, we identified that it is possible to run Peergos without IPFS. With some optimizations on the block size, we envision great synergies between Garage and Peergos that could lead to an efficient and lightweight end-to-end encrypted "cloud storage" platform. *If you happen to be working on this, please let us know!*

*We are also aware of the existence of many other software projects for file sharing, such as Nextcloud, Owncloud, Owncloud Infinite Scale, Seafile, Filestash, Pydio, SOLID, Remote Storage, etc. Many of these could be connected to an S3 back-end such as Garage. We might even try some of them in future blog posts, so stay tuned!*

+++
title="Confronting theoretical design with observed performances"
date=2022-09-26
+++

*During the past years, we have thought a lot about possible design decisions and their theoretical trade-offs for Garage. In particular, we pondered the impact of data structures, networking methods, and scheduling algorithms. Garage worked well enough for our production cluster at Deuxfleurs, but we also knew that people had started to experience some unexpected behaviors, which motivated us to start a round of benchmarks and performance measurements to see how Garage behaves compared to our expectations. This post presents some of our first results, which cover 3 aspects of performance: efficient I/O, "myriads of objects", and resiliency, reflecting the high-level properties we are seeking.*

<!-- more -->

---

## ⚠️ Disclaimer

The results presented in this blog post must be taken with a (critical) grain of salt due to some limitations that are inherent to any benchmarking endeavor. We try to reference them as exhaustively as possible here, but other limitations might exist.

Most of our tests were made on _simulated_ networks, which by definition cannot represent all the diversity of _real_ networks (dynamic drop, jitter, latency, all of which could be correlated with throughput or any other external event). We also limited ourselves to very small workloads that are not representative of a production cluster. Furthermore, we only benchmarked some very specific aspects of Garage: our results are not an evaluation of the performance of Garage as a whole.

For some benchmarks, we used Minio as a reference. It must be noted that we did not try to optimize its configuration as we did for Garage, and, more generally, we have significantly less knowledge of Minio's internals than of Garage's, which could lead to underrated performance measurements for Minio. It must also be noted that Garage and Minio are systems with different feature sets. For instance, Minio supports erasure coding for higher data density while Garage doesn't, Minio implements way more S3 endpoints than Garage, etc. Such features necessarily have a cost that you must keep in mind when reading the plots we will present. You should consider Minio's results as a way to contextualize Garage's numbers, and to check that our improvements are not simply artificial in the light of existing object storage implementations.

The impact of the testing environment is also not evaluated (kernel patches, configuration, parameters, filesystem, hardware configuration, etc.). Some of these parameters could favor one configuration or software product over another. In particular, it must be noted that most of the tests were done on a consumer-grade PC with only an SSD, which is different from most production setups. Finally, our results are also provided without statistical tests to validate their significance, and might thus have insufficient grounding to be claimed as reliable.

When reading this post, please keep in mind that **we are not making any business or technical recommendations here, and this is not a scientific paper either**; we only share bits of our development process as honestly as possible. Make your own tests if you need to make a decision, remember to read [benchmarking crimes](https://gernot-heiser.org/benchmarking-crimes.html), and remain supportive and caring with your peers ;)

## About our testing environment

We made a first batch of tests on [Grid5000](https://www.grid5000.fr/w/Grid5000:Home), a large-scale and flexible testbed for experiment-driven research in all areas of computer science, which has an [open access program](https://www.grid5000.fr/w/Grid5000:Open-Access). During our tests, we used part of the following clusters: [nova](https://www.grid5000.fr/w/Lyon:Hardware#nova), [paravance](https://www.grid5000.fr/w/Rennes:Hardware#paravance), and [econome](https://www.grid5000.fr/w/Nantes:Hardware#econome), to build a geo-distributed topology. We used the Grid5000 testbed only during our preliminary tests, to identify issues when running Garage on many powerful servers. We then reproduced these issues in a controlled environment outside of Grid5000, so don't be surprised if Grid5000 is not always mentioned on our plots.

To reproduce some environments locally, we have a small set of Python scripts called [`mknet`](https://git.deuxfleurs.fr/Deuxfleurs/mknet) tailored to our needs[^ref1]. Most of the following tests were run locally with `mknet` on a single computer: a Dell Inspiron 27" 7775 AIO, with a Ryzen 5 1400, 16GB of RAM and a 512GB SSD. In terms of software, NixOS 22.05 with the 5.15.50 kernel is used with an ext4 encrypted filesystem. The `vm.dirty_background_ratio` and `vm.dirty_ratio` sysctls have been reduced to `2` and `1` respectively: with the default values, the system tends to freeze under heavy I/O load.

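For the record, this tuning boils down to two sysctl settings (shown here as one-off commands; on NixOS you would persist them in the system configuration instead):

```
sysctl -w vm.dirty_background_ratio=2
sysctl -w vm.dirty_ratio=1
```
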
## Efficient I/O

The main purpose of an object storage system is to store and retrieve objects across the network, and the faster these two functions can be accomplished, the more efficient the system as a whole will be. For this analysis, we focus on 2 aspects of performance. First, since many applications can start processing a file before receiving it completely, we will evaluate the time-to-first-byte (TTFB) of `GetObject` requests, i.e. the duration between the moment a request is sent and the moment the first bytes of the returned object are received by the client. Second, we will evaluate generic throughput, to understand how well Garage can leverage the underlying machine's performance.

**Time-to-First-Byte** - One specificity of Garage is that we implemented S3 web endpoints, with the idea of making it a platform of choice for publishing static websites. When publishing a website, TTFB can be directly observed by the end user, as it impacts the perceived reactivity of the page being loaded.

Up to version 0.7.3, time-to-first-byte on Garage used to be relatively high. This can be explained by the fact that Garage was not able to handle data internally at a smaller granularity than entire data blocks, which are up to 1MB chunks of a given object (a size which [can be configured](https://garagehq.deuxfleurs.fr/documentation/reference-manual/configuration/#block-size)). Let us take the example of a 4.5MB object, which Garage will split by default into four 1MB blocks and one 0.5MB block. With the old design, when you sent a `GET` request, the first block had to be _fully_ retrieved by the gateway node from the storage node before any data was sent to the client.

With Garage v0.8, we added data streaming logic that allows the gateway to send the beginning of a block without having to wait for the full block to be received from the storage node. We can visually represent the difference as follows:

<center>
<img src="schema-streaming.png" alt="A schema depicting how streaming improves the delivery of a block" />
</center>

As our default block size is only 1MB, the difference should be marginal on fast networks: it takes only 8ms to transfer 1MB on a 1Gbps network, adding at most 8ms of latency to a `GetObject` request (assuming no other data transfer is happening in parallel). However, on a very slow network, or on a very congested link with many parallel requests being handled, the impact can be much more important: on a 5Mbps network, it takes at least 1.6 seconds to transfer our 1MB block, and streaming will heavily improve user experience.

We wanted to see if this theory holds in practice: we simulated a low-latency but slow network using `mknet` and made some requests with block streaming (Garage v0.8 beta) and without it (Garage v0.7.3). We also added Minio as a reference. To benchmark this behavior, we wrote a small test named [s3ttfb](https://git.deuxfleurs.fr/Deuxfleurs/mknet/src/branch/main/benchmarks/s3ttfb), whose results are shown in the following figure:

![Plot showing the TTFB observed on Garage v0.8, v0.7 and Minio](ttfb.png)

Garage v0.7, which does not support block streaming, gives us a TTFB between 1.6s and 2s, which matches the time required to transfer the full block, as calculated above. On the other side of the plot, we can see Garage v0.8 with a very low TTFB thanks to the streaming feature (the lowest value is 43ms). Minio sits between the two Garage versions: we suppose that it does some form of batching, but smaller than our initial 1MB default.

**Throughput** - As soon as we publicly released Garage, people started benchmarking it, comparing its performance to writing directly to the filesystem, and observed that Garage was slower (e.g. [#288](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/288)). To improve the situation, we did some optimizations, such as moving costly processing like hashing onto a dedicated thread, and many others ([#342](https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/342), [#343](https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/343)), which led us to version 0.8 "Beta 1". We also noticed that some of the logic we wrote to better control resource usage and detect errors, including semaphores and timeouts, was artificially limiting performance. In another iteration, we made this logic less restrictive, at the cost of higher resource consumption under load ([#387](https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/387)), resulting in version 0.8 "Beta 2". Finally, we currently make multiple `fsync` calls each time we write a block. We know that this is expensive, so we made a test build without any `fsync` call ([see the commit](https://git.deuxfleurs.fr/Deuxfleurs/garage/commit/432131f5b8c2aad113df3b295072a00756da47e7)) that will not be merged, only to assess the impact of `fsync`. We refer to it as `no-fsync` in the following plot.

*A note about `fsync`: for performance reasons, operating systems often do not write directly to disk when a process creates or updates a file in your filesystem. Instead, the write is kept in memory and flushed later in a batch with other writes. If a power loss occurs before the OS has had time to flush data to disk, some writes will be lost. To ensure that a write is effectively written to disk, the [`fsync(2)`](https://man7.org/linux/man-pages/man2/fsync.2.html) system call must be used, which effectively blocks until the file or directory on which it is called has been flushed from volatile memory to the persistent storage device. Additionally, the exact semantics of `fsync` [differ from one OS to another](https://mjtsai.com/blog/2022/02/17/apple-ssd-benchmarks-and-f_fullsync/) and, even in battle-tested software like Postgres, it was ["done wrong for 20 years"](https://archive.fosdem.org/2019/schedule/event/postgresql_fsync/). Note that in Garage, we are still working on our `fsync` policy and thus, for now, you should expect limited data durability in case of power loss, as we are aware of some inconsistencies on this point (which we describe below and plan to solve).*

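As an illustration of what "writing a block durably" costs, here is a minimal Rust sketch (not Garage's actual code) of the difference between a plain write and a write followed by `fsync`:

```
use std::fs::File;
use std::io::Write;

fn write_block(path: &str, data: &[u8], durable: bool) -> std::io::Result<()> {
    let mut file = File::create(path)?;
    file.write_all(data)?;
    if durable {
        // fsync(2): block until the data has actually reached the storage
        // device. Without this, the write may sit in the OS page cache and
        // be lost on power failure, even though write_all() succeeded.
        file.sync_all()?;
        // Note: to survive a crash, the new directory entry itself may also
        // need an fsync on the parent directory, which we skip here.
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    write_block("/tmp/block.fast", b"hello", false)?; // fast, not durable
    write_block("/tmp/block.durable", b"hello", true) // slower, durable
}
```
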
To assess performance improvements, we used the benchmark tool [minio/warp](https://github.com/minio/warp) in a non-standard configuration, adapted for small-scale tests, and we kept only the aggregated result named "cluster total". The goal of this experiment is to get an idea of the cluster performance with a standardized and mixed workload.

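For reference, a `warp` mixed-workload run against an S3 endpoint looks roughly like the following; the endpoint, credentials, and sizing flags below are placeholders, not our exact benchmark configuration:

```
warp mixed --host=localhost:3900 \
     --access-key=GK... --secret-key=... \
     --obj.size=1MiB --duration=5m
```
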
![Plot showing IO performances of Garage configurations and Minio](io.png)

Minio, our reference point, gives us the best performance in this test. Looking at Garage, we observe that each improvement we made had a visible impact on performance. We also note that we still have a progress margin in terms of performance compared to Minio: additional benchmarks, tests, and monitoring could help us better understand the remaining gap.

## A myriad of objects

Object storage systems do not handle a single object but huge numbers of them: Amazon claims to handle trillions of objects on their platform, and Red Hat touts Ceph as being able to handle 10 billion objects. All these objects must be tracked efficiently in the system to be fetched, listed, removed, etc. In Garage, we use a "metadata engine" component to track them. For this analysis, we compare different metadata engines in Garage and see how well the best one scales to a million objects.

**Testing metadata engines** - With Garage, we chose not to store metadata directly on the filesystem, like Minio for example, but in a specialized on-disk B-Tree data structure; in other words, in an embedded database engine. Until now, the only supported option was [sled](https://sled.rs/), but we started having serious issues with it, and we were not alone ([#284](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/284)). With Garage v0.8, we introduce an abstraction layer over the features we expect from our database, allowing us to switch from one metadata back-end to another without touching the rest of our codebase. We added two back-ends: LMDB (through [heed](https://github.com/meilisearch/heed)) and SQLite (using [Rusqlite](https://github.com/rusqlite/rusqlite)). **Keep in mind that they are both experimental: contrary to sled, we have yet to run them in production for a significant amount of time.**

Similarly to the impact of `fsync` on block writing, each database engine we use has its own `fsync` policy. Sled flushes its writes every 2 seconds by default (this is [configurable](https://garagehq.deuxfleurs.fr/documentation/reference-manual/configuration/#sled-flush-every-ms)). LMDB defaults to an `fsync` on each write, which in early tests led to abysmal performance. We thus added 2 flags, [MDB\_NOSYNC](http://www.lmdb.tech/doc/group__mdb__env.html#ga5791dd1adb09123f82dd1f331209e12e) and [MDB\_NOMETASYNC](http://www.lmdb.tech/doc/group__mdb__env.html#ga5021c4e96ffe9f383f5b8ab2af8e4b16), to deactivate `fsync` entirely. On SQLite, it is also possible to deactivate `fsync` with `pragma synchronous = off`, but we have not started any optimization work on it yet: our SQLite implementation currently still calls `fsync` for all write operations. Additionally, we are using these engines through Rust bindings that do not support async Rust, with which Garage is built, which also has an impact on performance. **Our comparison will therefore not reflect the raw performance of these database engines, but rather our integration choices.**

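For the curious, disabling `fsync` on an SQLite-backed store could look like the sketch below (a hypothetical tuning example using Rusqlite, not what Garage currently ships; the path and schema are made up):

```
use rusqlite::Connection;

fn main() -> rusqlite::Result<()> {
    let conn = Connection::open("/tmp/metadata.sqlite")?;
    // With synchronous = OFF, SQLite stops calling fsync and leaves
    // durability to the OS, similar to opening LMDB with MDB_NOSYNC.
    conn.pragma_update(None, "synchronous", &"OFF")?;
    conn.execute_batch(
        "CREATE TABLE IF NOT EXISTS objects (key TEXT PRIMARY KEY, meta BLOB);",
    )?;
    Ok(())
}
```
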
Still, we think it makes sense to evaluate our implementations in their current state in Garage. We designed a benchmark that is intensive on the metadata part of the software, i.e. one that handles large numbers of tiny files. We chose again `minio/warp` as a benchmark tool, but we configured it with the smallest possible object size it supports, 256 bytes, to put pressure on the metadata engine. We evaluated sled twice: with its default configuration, and with a configuration where we set a flush interval of 10 minutes (longer than the test) to disable `fsync`.

*Note that S3 has not been designed for workloads that store huge numbers of small objects; a regular database, like Cassandra, would be more appropriate. This test has only been designed to stress our metadata engine and is not indicative of real-world performances.*

![Plot of our metadata engines comparison with Warp](db_engine.png)

Unsurprisingly, we observe abysmal performance with SQLite, as it is the engine we have not worked on yet and the one that still does an `fsync` for each write. Garage with the `fsync`-disabled LMDB backend performs twice as well as with sled in its default configuration, and 60% better than the "no `fsync`" sled version in our benchmark. Furthermore, and although it is not depicted on these plots, LMDB uses way less disk storage and RAM; we would like to quantify that in the future. As we are only at the very beginning of our work on metadata engines, it is hard to draw strong conclusions. Still, we can say that SQLite is not ready for production workloads, and that LMDB looks very promising both in terms of performance and resource usage. It is a very good candidate for becoming Garage's default metadata engine in future releases, once we figure out the proper `fsync` tuning. In the future, we will also need to define a data policy for Garage to help us arbitrate between performance and durability.

*To `fsync` or not to `fsync`? Performance is nothing without reliability, so we need to better assess the impact of possibly losing a write after it has been validated. Because Garage is a distributed system, even if a node loses its write due to a power loss, it will fetch it back from the 2 other nodes that store it. But rare situations can occur where 1 node is down and the 2 others validate the write and then lose power before having had time to flush it to disk. What is our policy in this case? For storage durability, we already assume that we never lose the storage of more than 2 nodes, so should we also make the hypothesis that we won't lose power on more than 2 nodes at the same time? What should we do about people hosting all of their nodes in the same place without an uninterruptible power supply (UPS)? Historically, it seems that Minio developers also accepted some compromises on this side ([#3536](https://github.com/minio/minio/issues/3536), [HN discussion](https://news.ycombinator.com/item?id=28135533)). Now, they seem to use a combination of `O_DSYNC` and `fdatasync(3p)` - a derivative of `fsync` that ensures only data, and not metadata, is persisted to disk - in combination with `O_DIRECT` for direct I/O ([discussion](https://github.com/minio/minio/discussions/14339#discussioncomment-2200274), [example in Minio source](https://github.com/minio/minio/blob/master/cmd/xl-storage.go#L1928-L1932)).*

**Storing a million objects** - Object storage systems are designed not only for data durability and availability but also for scalability, so naturally, some people asked us how scalable Garage is. While giving a definitive answer to this question is out of the scope of this study, we wanted to be sure that our metadata engine would be able to scale to a million objects. To put this target in context, it remains small compared to other industrial solutions: Ceph claims to scale up to [10 billion objects](https://www.redhat.com/en/resources/data-solutions-overview), which is 4 orders of magnitude more than our current target. Of course, their benchmarking setup has nothing in common with ours, and their tests are way more exhaustive.

We wrote our own benchmarking tool for this test, [s3billion](https://git.deuxfleurs.fr/Deuxfleurs/mknet/src/branch/main/benchmarks/s3billion)[^ref2]. The benchmark procedure consists in concurrently sending a defined number of tiny objects (8192 objects of 16 bytes by default) and measuring the wall-clock time until the last object upload completes. This step is then repeated a given number of times (128 by default) to effectively create a target number of objects on the cluster (1M by default). On our local setup with 3 nodes, both Minio and Garage with LMDB were able to achieve this target. In the following plot, we show how much time it took Garage and Minio to handle each batch.

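In spirit, each batch boils down to firing a burst of concurrent `PutObject` requests and timing it. The sketch below is a simplified stand-in for s3billion, not its actual code, written against the Rust AWS SDK; the bucket name is a placeholder and the endpoint and keys are assumed to come from the environment:

```
use aws_sdk_s3::Client;
use std::time::Instant;

async fn run_batch(client: &Client, batch: usize, n: usize) -> usize {
    // Fire n concurrent 16-byte uploads and wait for all of them.
    let uploads = (0..n).map(|i| {
        client
            .put_object()
            .bucket("s3billion") // placeholder bucket
            .key(format!("batch{batch}/obj{i}"))
            .body(b"0123456789abcdef".to_vec().into())
            .send()
    });
    futures::future::join_all(uploads)
        .await
        .into_iter()
        .filter(|r| r.is_err())
        .count()
}

#[tokio::main]
async fn main() {
    let config = aws_config::load_from_env().await; // endpoint + keys from env
    let client = Client::new(&config);
    for batch in 0..128 {
        let start = Instant::now();
        let errors = run_batch(&client, batch, 8192).await;
        println!("batch {batch}: {:?}, {errors} errors", start.elapsed());
    }
}
```
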
Before looking at the plot, **you must keep in mind some important points regarding the internals of both Minio and Garage**.

Minio has no metadata engine: it stores its objects directly on the filesystem. Sending 1 million objects to Minio results in creating one million inodes on the storage server in our current setup. So the performance of the filesystem probably has a substantial impact on the observed results. In our precise setup, we know that the filesystem we used is not adapted at all for Minio (encryption layer, fixed number of inodes, etc.). Additionally, we mentioned earlier that we deactivated `fsync` for our metadata engine in Garage, whereas Minio has some `fsync` logic here, slowing down the creation of objects. Finally, object storage is designed for big objects, for which the costs measured here are negligible. In the end, again, we use Minio only as a reference point to understand what performance budget we have for each part of our software.

Conversely, Garage has an optimization for small objects: below 3KB, no
separate file is created on the filesystem; instead, the object is stored
inline in the metadata engine. In the future, we plan to evaluate how Garage
behaves at scale with objects above 3KB, where we expect its performance to be
much closer to Minio's, as it will have to create at least one inode per
object. For now, we limit ourselves to evaluating our metadata engine and
focus only on 16-byte objects.

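In pseudocode, this optimization looks roughly like the following sketch. The
3KB threshold comes from the text above; the dict-based "engines" are
illustrative stand-ins, not Garage's internals:

```python
# Sketch of the small-object optimization: tiny objects live inside the
# metadata entry itself, larger ones get at least one file in the block store.
INLINE_THRESHOLD = 3 * 1024

metadata_engine: dict[str, dict] = {}  # stand-in for LMDB/SQLite
block_store: dict[str, bytes] = {}     # stand-in for on-disk data blocks

def put_object(key: str, data: bytes) -> None:
    if len(data) < INLINE_THRESHOLD:
        metadata_engine[key] = {"inline": data}   # no extra inode created
    else:
        block_store[key] = data                   # at least one file/inode
        metadata_engine[key] = {"blocks": [key]}  # metadata keeps a reference

put_object("tiny", b"x" * 16)      # stored inline
put_object("big", b"x" * 10_000)   # stored as a separate block
```
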
![Showing the time to send 128 batches of 8192 objects for Minio and Garage](1million-both.png)

It appears that the performance of our metadata engine is acceptable, as we
have a comfortable margin compared to Minio (Minio is between 3 and 4 times
slower per batch). We also note that, past the 200k-object mark, Minio's time
to complete a batch of inserts is constant, while Garage's still increases
over the observed range. It would be interesting to know whether Garage's
batch completion time would cross Minio's for a very large number of objects.
If we reason per object, both Minio's and Garage's performance remains very
good: it takes around 20ms and 5ms respectively to create an object. In a
real-world scenario, at 100 Mbps, the upload of a 10MB file takes 800ms and
goes up to 8 seconds for a 100MB file: in both cases, handling the object
metadata would be only a fraction of the upload time. The only cases where the
difference would be noticeable would be when uploading many very small files
at once, which again would be an unusual usage of the S3 API.

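As a back-of-the-envelope check of that claim (the 5ms and 20ms per-object
figures come from the measurements above; the 100 Mbps link is the text's
example):

```python
# Transfer time dominates per-object metadata handling for typical sizes.
LINK_MBPS = 100
METADATA_MS = (5, 20)  # measured per-object cost for Garage and Minio

for size_mb in (10, 100):
    transfer_s = size_mb * 8 / LINK_MBPS  # 0.8 s for 10MB, 8 s for 100MB
    print(f"{size_mb}MB upload: {transfer_s}s vs {METADATA_MS} ms of metadata work")
```
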
Let us now focus on Garage's metrics only to better see its specific behavior:

![Showing the time to send 128 batches of 8192 objects for Garage only](1million.png)

Two effects are now more visible: (1) batch completion time increases with the
number of objects in the bucket, and (2) measurements are scattered, at least
more than for Minio. We expected this batch completion time increase to be
logarithmic, but we don't have enough data points to confidently conclude that
it is: additional measurements are needed. Concerning the observed
instability, it could be a symptom of what we saw with some other experiments
on this setup, which sometimes freezes under heavy I/O load. Such freezes
could lead to request timeouts and failures. If this occurs on our testing
computer, it might occur on other servers as well: it would be interesting to
better understand this issue, document how to avoid it, and potentially change
how we handle I/O internally in Garage. Still, this was a very heavy test that
will probably not be encountered in many setups: we were adding 273 objects
per second for about an hour straight!

To conclude this part, Garage can ingest 1 million tiny objects while
remaining usable on our local setup. To put this result in perspective, our
production cluster at [deuxfleurs.fr](https://deuxfleurs.fr) smoothly manages
a bucket with 116k objects. This bucket contains real-world production data:
it is used by our Matrix instance to store people's media files (profile
pictures, shared pictures, videos, audio files, documents...). Thanks to this
benchmark, we have identified two points to watch: the increase of batch
insert time with the number of existing objects in the cluster over the
observed range, and the volatility of our measurements, which could be a
symptom of the system freezing under load. Despite these two points, we are
confident that Garage could scale way above 1M objects, although that remains
to be proven.

## In an unpredictable world, stay resilient

Supporting a variety of real-world networks and computers, especially ones
that were not designed for software-defined storage or even for server
purposes, is the core value proposition of Garage. For example, our production
cluster is hosted [on refurbished Lenovo Thinkcentre Tiny desktop computers](https://guide.deuxfleurs.fr/img/serv_neptune.jpg)
behind consumer-grade fiber links across France and Belgium (if you are
reading this, congratulations, you fetched this webpage from it!). That's why
we are very careful that our internal protocol (referred to as the "RPC
protocol" in our documentation) remains as lightweight as possible. For this
analysis, we quantify how network latency and the number of nodes in the
cluster impact the duration of the most important kinds of S3 requests.

**Latency amplification** - With the kind of networks we use (consumer-grade
fiber links across the EU), the observed latency between nodes is in the 50ms
range. When latency is not negligible, you will observe that request
completion time becomes a multiple of the observed latency. That's to be
expected: in many cases, the node of the cluster you are contacting cannot
directly answer your request, and has to reach other nodes of the cluster to
get the data. Each of these sequential remote procedure calls - or RPCs - adds
to the final S3 request duration, which can quickly become expensive. This
ratio between request duration and network latency is what we refer to as
*latency amplification*.

For example, on Garage, a `GetObject` request makes two sequential calls:
first, it fetches the descriptor of the requested object from the metadata
engine, which contains a reference to the first block of data, and only then,
in a second step, can it start retrieving data blocks from storage nodes. We
can therefore expect the duration of a small `GetObject` request to be close
to twice the network latency - around 100ms with a 50ms inter-node latency.

We tested the latency amplification theory with another benchmark of our own
named
[s3lat](https://git.deuxfleurs.fr/Deuxfleurs/mknet/src/branch/main/benchmarks/s3lat),
which sends a single request at a time to an endpoint and measures the
response time. As we are interested in latency rather than bandwidth, all our
requests involving objects are made on tiny files of around 16 bytes. Our
benchmark tests 5 standard endpoints of the S3 API: ListBuckets, ListObjects,
PutObject, GetObject, and RemoveObject.

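A minimal sketch of the measurement loop follows; the real s3lat tool is
separate, and the endpoint, bucket, and key names here are placeholders (the
"RemoveObject" endpoint corresponds to boto3's `delete_object` call):

```python
# One request at a time per endpoint, on tiny objects, timing each response.
import time
import boto3

s3 = boto3.client("s3", endpoint_url="http://localhost:3900")  # placeholder

def timed_ms(call, **kwargs):
    t0 = time.monotonic()
    call(**kwargs)
    return (time.monotonic() - t0) * 1000

payload = b"0123456789abcdef"  # ~16-byte object
print("PutObject:   ", timed_ms(s3.put_object, Bucket="bench", Key="tiny", Body=payload))
print("GetObject:   ", timed_ms(s3.get_object, Bucket="bench", Key="tiny"))
print("ListObjects: ", timed_ms(s3.list_objects_v2, Bucket="bench"))
print("ListBuckets: ", timed_ms(s3.list_buckets))
print("RemoveObject:", timed_ms(s3.delete_object, Bucket="bench", Key="tiny"))
```
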
Here are the results:

![Latency amplification](amplification.png)

As Garage has been optimized for this use case from the very beginning, we
don't see any significant evolution from one version to another (Garage v0.7.3
and Garage v0.8.0 Beta 1 here). Compared to Minio, these values are either
similar (for ListObjects and ListBuckets) or significantly better (for
GetObject, PutObject, and RemoveObject). This is easily explained by the fact
that Minio was not designed with high-latency environments in mind: it is
expected to run on clusters that are built within a single data center. In a
multi-DC setup, different clusters could then be interconnected with their
asynchronous
[bucket replication](https://min.io/docs/minio/linux/administration/bucket-replication.html?ref=docs-redirect)
feature.

*Minio also has a [multi-site active-active replication system](https://blog.min.io/minio-multi-site-active-active-replication/)
but it is even more sensitive to latency: "Multi-site replication has increased
latency sensitivity, as Minio does not consider an object as replicated until
it has synchronized to all configured remote targets. Replication latency is
therefore dictated by the slowest link in the replication mesh."*

**A cluster with many nodes** - Whether you already have many compute nodes
with unused storage, need to store a lot of data, or are experimenting with
unusual system architectures, you might be interested in deploying over a
hundred Garage nodes. However, in some distributed systems, the number of
nodes in the cluster has an impact on performance. Theoretically, our
protocol, which is inspired by distributed hash tables (DHT), should scale
fairly well, but until now we had never taken the time to test it with a
hundred nodes or more.

This test was run directly on Grid5000 with 6 physical servers spread across 3
locations in France: Lyon, Rennes, and Nantes. On each server, we ran up to 65
instances of Garage simultaneously, for a total of 390 nodes. The network
between physical servers is the dedicated network provided by the Grid5000
community. Nodes on the same physical machine communicate directly through the
Linux network stack without any limitation. We are aware that this is a
weakness of this test, but we still think it is relevant because, at each step
in the test, about 83% (5/6) of each Garage instance's connections are made
over a real network.

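The 83% (5/6) figure can be checked directly from the numbers above:

```python
# With 390 nodes and 65 per physical machine, a node's peers are mostly remote.
total, per_machine = 390, 65
remote_peers = total - per_machine  # peers on other physical machines
all_peers = total - 1               # everyone except the node itself
print(remote_peers / all_peers)     # ~0.835, i.e. about 5/6
```
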
To measure performance for each cluster size, we used
[s3lat](https://git.deuxfleurs.fr/Deuxfleurs/mknet/src/branch/main/benchmarks/s3lat)
again:

![Impact on response time with bigger clusters](complexity.png)

Up to 250 nodes, we observed response times that remained constant. Past this
threshold, results become very noisy. By looking at server resource usage, we
saw that the load on our machines started to become non-negligible: it seems
that we are not hitting a limit on the protocol side, but have simply
exhausted the resources of our testing nodes. In the future, we would like to
run this experiment again, but on many more physical nodes, to confirm our
hypothesis. For now, we are confident that a Garage cluster with 100+ nodes
should work.

## Conclusion and Future work

During this work, we identified some sensitive points in Garage on which we
will have to continue working: our data durability target and our interaction
with the filesystem (`O_DSYNC`, `fsync`, `O_DIRECT`, etc.) are not yet
homogeneous across our components; our new metadata engines (LMDB, SQLite)
still need some testing and tuning; and we know that raw I/O performance
(GetObject and PutObject for large objects) still has some room for
improvement.

At the same time, Garage has never been in better shape: its next version
(v0.8) will bring drastic improvements in terms of performance and
reliability. We are confident that Garage is already able to cover a wide
range of deployment needs, up to over a hundred nodes and millions of objects.

In the future, on the performance side, we would like to evaluate the impact
of introducing an SRPT scheduler
([#361](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/361)), define a
data durability policy and implement it, make a deeper and broader review of
the state of the art (Minio, Ceph, Swift, OpenIO, Riak CS, SeaweedFS, etc.) to
learn from them and, lastly, benchmark Garage at scale with possibly multiple
terabytes of data and billions of objects in long-lasting experiments.

In the meantime, stay tuned: we have released
[a first release candidate for Garage v0.8](https://git.deuxfleurs.fr/Deuxfleurs/garage/releases/tag/v0.8.0-rc1),
and we are already working on several features for the next version.
For instance, we are working on a new layout with enhanced optimality
properties, along with a theoretical proof of correctness
([#296](https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/296)). We are also
working on a Python SDK for Garage's administration API
([#379](https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/379)), and we will
soon officially introduce a new API (as a technical preview) named K2V
([see K2V in our documentation for a primer](https://garagehq.deuxfleurs.fr/documentation/reference-manual/k2v/)).

## Notes

[^ref1]: Yes, we are aware of [Jepsen](https://github.com/jepsen-io/jepsen)'s
existence. Jepsen is far more complex than our set of scripts, but it is also
way more versatile.

[^ref2]: The program name contains the word "billion", although we only tested
Garage up to 1 million objects: this is not a typo, we were just a little bit
too enthusiastic when we wrote it ;)

<style>
.footnote-definition p { display: inline; }
</style>

@ -1,143 +0,0 @@

+++
title="Garage v0.7: Kubernetes and OpenTelemetry"
date=2022-04-04
+++

*We just published Garage v0.7, our second public beta release. In this post, we do a quick tour of its 2 new features: Kubernetes integration and OpenTelemetry support.*

<!-- more -->

---

Two months ago, we were impressed by the success of our open beta launch at FOSDEM and on Hacker News: [our initial post](https://garagehq.deuxfleurs.fr/blog/2022-introducing-garage/) led to more than 40k views in 10 days, peaking at 100 views/minute, and all requests were served by Garage, without even using a caching frontend!
Since this event, we have continued to improve Garage, and, 2 months after the initial release, we are happy to announce version 0.7.0.

But first, we would like to thank the contributors that made this new release possible: Alex, Jill, Max Audron, Maximilien, Quentin, Rune Henrisken, Steam, and trinity-1686a.
This is also our first time welcoming contributors external to the core team, and as we wish for Garage to be a community-driven project, we encourage it!

You can get this release using our binaries or the package provided by your distribution.
We ship [statically compiled binaries](https://garagehq.deuxfleurs.fr/download/) for the most common Linux architectures (amd64, i386, aarch64 and armv6) and associated [Docker containers](https://hub.docker.com/u/dxflrs).
Garage is now also packaged by third parties on some OSes/distributions. We are currently aware of [FreeBSD](https://cgit.freebsd.org/ports/tree/www/garage/Makefile) and [AUR for Arch Linux](https://aur.archlinux.org/packages/garage).
Feel free to [reach out to us](mailto:garagehq@deuxfleurs.fr) if you are packaging (or planning to package) Garage; we welcome maintainers and will upstream specific patches if that can help. If you have already packaged Garage, please inform us and we'll add it to the documentation.

Speaking of the changes in this new version, it obviously includes many bug fixes.
We listed them in our [changelogs](https://git.deuxfleurs.fr/Deuxfleurs/garage/releases), so take a look, we might have fixed some issues you were having!
Besides bug fixes, there are two new major features in this release: better integration with Kubernetes, and support for observability via OpenTelemetry.

## Kubernetes integration

Before Garage v0.7.0, you had to deploy a Consul cluster or spawn a "coordinating" pod to deploy Garage on [Kubernetes](https://kubernetes.io) (K8S).
In this new version, Garage integrates a method to discover other peers by using Kubernetes [Custom Resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) (CR) to simplify cluster discovery.

CR discovery can be quickly enabled in Garage by configuring which namespace to look in (`kubernetes_namespace`) and the name of the desired service (`kubernetes_service_name`) in your Garage configuration file:

```toml
kubernetes_namespace = "default"
kubernetes_service_name = "garage-daemon"
```

Custom Resources must be defined *a priori* with a [Custom Resource Definition](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) (CRD).
If the CRD does not exist, Garage will create it for you. Automatic CRD creation is enabled by default, but it requires giving additional permissions to Garage to work.
If you prefer to strictly control access to your K8S cluster, you can create the resource manually and prevent Garage from automatically creating it:

```toml
kubernetes_skip_crd = true
```

If you want to try Garage on K8S, we currently only provide some basic [example files](https://git.deuxfleurs.fr/Deuxfleurs/garage/src/commit/7e1ac51b580afa8e900206e7cc49791ed0a00d94/script/k8s). These files register a [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/), a [ClusterRoleBinding](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding), and a [StatefulSet](https://kubernetes.io/fr/docs/concepts/workloads/controllers/statefulset/) with [Persistent Volumes](https://kubernetes.io/docs/concepts/storage/persistent-volumes/).

Once these files are deployed, you will be able to interact with Garage as follows:

```bash
kubectl exec -it garage-0 --container garage -- /garage status
# ==== HEALTHY NODES ====
# ID      Hostname  Address          Tags  Zone  Capacity
# e628..  garage-0  172.17.0.5:3901  NO ROLE ASSIGNED
# 570f..  garage-2  172.17.0.7:3901  NO ROLE ASSIGNED
# e199..  garage-1  172.17.0.6:3901  NO ROLE ASSIGNED
```

You can then follow the [regular documentation](https://garagehq.deuxfleurs.fr/documentation/cookbook/real-world/#creating-a-cluster-layout) to complete the configuration of your cluster.

If you are targeting a production deployment, you should avoid binding admin rights on your cluster just to create Garage's CRD. You will also need to expose some [Services](https://kubernetes.io/docs/concepts/services-networking/service/) to make your cluster reachable. Also keep in mind that Garage is a stateful service, so you must be very careful with how you handle your data in Kubernetes in order not to lose it. In the near future, we plan to release a proper Helm chart and write down "best practices" in our documentation.

If Kubernetes is not your thing, know that we run Garage on a Nomad+Consul cluster, which is also well supported.
We have not documented it yet, but you can take a look at [our Nomad service](https://git.deuxfleurs.fr/Deuxfleurs/infrastructure/src/commit/1e5e4af35c073d04698bb10dd4ad1330d6c62a0d/app/garage/deploy/garage.hcl).

## OpenTelemetry support

[OpenTelemetry](https://opentelemetry.io/) standardizes how software generates and collects system telemetry information, namely metrics, logs, and traces.
By implementing this standard in Garage, we hope that it will help you better monitor, manage, and tune your cluster.
Note that to fully leverage this feature, you must already be familiar with monitoring stacks like [Prometheus](https://prometheus.io/)+[Grafana](https://grafana.com/) or [ElasticSearch](https://www.elastic.co/elasticsearch/)+[Kibana](https://www.elastic.co/kibana/).

To activate OpenTelemetry on Garage, add the following entries to your configuration file (assuming that your collector also runs on localhost):

```toml
[admin]
api_bind_addr = "127.0.0.1:3903"
trace_sink = "http://localhost:4317"
```

The first line, `api_bind_addr`, instructs Garage to expose an HTTP endpoint from which metrics can be obtained in Prometheus' data format.
The second line, `trace_sink`, instructs Garage to export tracing information to an OpenTelemetry collector at the given address.
These two options work independently and you can use them separately, depending on whether you are interested only in metrics, only in traces, or in both.

We provide [some files](https://git.deuxfleurs.fr/Deuxfleurs/garage/src/branch/main/script/telemetry) to help you quickly bootstrap a testing monitoring stack.
They include a docker-compose file and a pre-configured Grafana dashboard.
You can use them if you want to reproduce the following examples.

Grafana is particularly well suited to understanding how your cluster is performing from a "bird's eye view".
For example, the following graph shows the S3 API calls sent to your node per time unit.
You can use it to better understand how your users are interacting with your cluster.

![A screenshot of a plot made by Grafana depicting the number of requests per time unit grouped by endpoint](api_rate.png)

Thanks to this graph, we know that a large upload started at 14:55.
This upload is made of many small files, as we see many PutObject calls, which are typically used for small files.
It also includes some large objects, as we observe some multipart upload requests.
Conversely, at this time, no reads are happening, as the corresponding read endpoints (ListBuckets, ListObjectsV2, etc.) receive 0 requests per time unit.

Garage also collects metrics from lower-level parts of the system.
You can use them to better understand how Garage is interacting with your OS and your hardware.

![A screenshot of a plot made by Grafana depicting the write speed (in MB/s) during test time.](writes.png)

This plot was captured at the same moment as the previous one.
Writes do not correlate with API requests over the whole upload, but only at its beginning.
More precisely, they map well to the multipart upload requests, and this is expected: the large files of the multipart uploads saturate disk writes, while the upload of small files (via the PutObject endpoint) is limited by other parts of the system.

This simple example covers only 2 of the 20+ metrics that we have already defined, but it still allowed us to precisely describe our cluster usage and identify where bottlenecks could be.
We are confident that cleverly using these metrics on a production cluster will give you many more valuable insights into your cluster.

While metrics are good for a broad, general overview of your system, they are not suited for digging into and pinpointing a specific performance issue on a specific code path.
Thankfully, we also have a solution for this problem: tracing.

Using [Application Performance Monitoring](https://www.elastic.co/observability/application-performance-monitoring) (APM) in conjunction with Kibana,
we can get, for instance, the following visualization of what happens during a PutObject call (click to enlarge):

[![A screenshot of APM depicting the trace of a PutObject call](apm.png)](apm.png)

At the top of the screenshot, we see the latency distribution of all PutObject requests.
We learn that the selected request took ~1ms to execute, while 95% of all requests took less than 80ms to run.
Some dispersion between requests is expected, as Garage does not run on a hard real-time system, but in this case you must also consider that a request's duration is impacted by the size of the object being sent (a 10B object will be quicker to process than a 10MB one).
Consequently, this request probably corresponds to a very small file.

Below this first histogram, you can select the request you want to inspect, and then see its trace in the bottom part.
The trace shown above can be broken down into 4 parts: fetching the API key to check authentication (`key get`), fetching the bucket identifier from its name (`bucket_alias get`), fetching the bucket configuration to check authorizations (`bucket_v2 get`), and finally inserting the object in the storage (`object insert`).

With this example, we demonstrated that we can inspect Garage's internals to find slow requests, see which code path a request has taken, and finally identify which part of the code took time.

Keep in mind that this is our first iteration on telemetry for Garage, so things are a bit rough around the edges (step-by-step documentation is missing, our Grafana dashboard is a work in progress, etc.).
In any case, your feedback is welcome on our Matrix channel.

## Conclusion

This is only the first iteration of the Kubernetes and OpenTelemetry integrations in Garage, so things are still a bit rough.
We plan to polish them in the coming months based on our experience and your feedback.

You may also be wondering what other work we plan to conduct: stay tuned, we will soon share our roadmap!
In the meantime, we hope you will enjoy using Garage v0.7.

BIN content/blog/2024-ram-usage-encryption-s3/02-append-glibc.png (new file)
BIN content/blog/2024-ram-usage-encryption-s3/03-append-musl.png (new file)
BIN content/blog/2024-ram-usage-encryption-s3/04-append-parallel.png (new file)
BIN content/blog/2024-ram-usage-encryption-s3/05-idle-parallel.png (new file)
BIN content/blog/2024-ram-usage-encryption-s3/06-fetch-all.png (new file)
BIN content/blog/2024-ram-usage-encryption-s3/07-fetch-full.png (new file)

content/blog/2024-ram-usage-encryption-s3/command-run.svg (new file, 693 lines)

@ -0,0 +1,693 @@

(Raw content of command-run.svg omitted: a 1049x450 SVG chart with two bar-graph panels, consisting mostly of glyph and path definitions.)
|
||||
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 412.300781 407.769531 L 452.734375 407.769531 L 452.734375 246.210938 L 412.300781 246.210938 Z M 412.300781 407.769531 "/>
|
||||
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 457.230469 407.769531 L 497.664062 407.769531 L 497.664062 406.507812 L 457.230469 406.507812 Z M 457.230469 407.769531 "/>
|
||||
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 502.15625 407.769531 L 542.589844 407.769531 L 542.589844 392.625 L 502.15625 392.625 Z M 502.15625 407.769531 "/>
|
||||
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 547.082031 407.769531 L 587.515625 407.769531 L 587.515625 396.410156 L 547.082031 396.410156 Z M 547.082031 407.769531 "/>
|
||||
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 592.007812 407.769531 L 632.441406 407.769531 L 632.441406 406.507812 L 592.007812 406.507812 Z M 592.007812 407.769531 "/>
|
||||
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 636.933594 407.769531 L 677.367188 407.769531 L 677.367188 406.507812 L 636.933594 406.507812 Z M 636.933594 407.769531 "/>
|
||||
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 681.863281 407.769531 L 722.296875 407.769531 L 722.296875 405.246094 L 681.863281 405.246094 Z M 681.863281 407.769531 "/>
|
||||
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 726.789062 407.769531 L 767.222656 407.769531 L 767.222656 406.507812 L 726.789062 406.507812 Z M 726.789062 407.769531 "/>
|
||||
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 771.714844 407.769531 L 812.148438 407.769531 L 812.148438 390.097656 L 771.714844 390.097656 Z M 771.714844 407.769531 "/>
|
||||
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 816.640625 407.769531 L 857.074219 407.769531 L 857.074219 385.050781 L 816.640625 385.050781 Z M 816.640625 407.769531 "/>
|
||||
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 861.566406 407.769531 L 902 407.769531 L 902 380.003906 L 861.566406 380.003906 Z M 861.566406 407.769531 "/>
|
||||
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 906.496094 407.769531 L 946.929688 407.769531 L 946.929688 383.789062 L 906.496094 383.789062 Z M 906.496094 407.769531 "/>
|
||||
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 951.421875 407.769531 L 991.855469 407.769531 L 991.855469 403.984375 L 951.421875 403.984375 Z M 951.421875 407.769531 "/>
|
||||
<path fill-rule="nonzero" fill="rgb(34.901961%, 34.901961%, 34.901961%)" fill-opacity="1" d="M 996.347656 407.769531 L 1036.78125 407.769531 L 1036.78125 406.507812 L 996.347656 406.507812 Z M 996.347656 407.769531 "/>
|
||||
<g clip-path="url(#clip-2)">
|
||||
<path fill-rule="nonzero" fill="rgb(100%, 100%, 100%)" fill-opacity="1" stroke-width="2.133957" stroke-linecap="round" stroke-linejoin="round" stroke="rgb(0%, 0%, 0%)" stroke-opacity="1" stroke-miterlimit="10" d="M 46.152344 238.136719 L 1043.523438 238.136719 L 1043.523438 220.371094 L 46.152344 220.371094 Z M 46.152344 238.136719 "/>
|
||||
</g>
|
||||
<g fill="rgb(10.196078%, 10.196078%, 10.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-0" x="530.835938" y="231.856445"/>
|
||||
<use xlink:href="#glyph-0-1" x="536.835938" y="231.856445"/>
|
||||
<use xlink:href="#glyph-0-2" x="541.835938" y="231.856445"/>
|
||||
<use xlink:href="#glyph-0-3" x="543.835938" y="231.856445"/>
|
||||
<use xlink:href="#glyph-0-4" x="548.835938" y="231.856445"/>
|
||||
<use xlink:href="#glyph-0-5" x="553.835938" y="231.856445"/>
|
||||
</g>
|
||||
<g clip-path="url(#clip-3)">
|
||||
<path fill-rule="nonzero" fill="rgb(100%, 100%, 100%)" fill-opacity="1" stroke-width="2.133957" stroke-linecap="round" stroke-linejoin="round" stroke="rgb(0%, 0%, 0%)" stroke-opacity="1" stroke-miterlimit="10" d="M 46.152344 23.246094 L 1043.523438 23.246094 L 1043.523438 5.480469 L 46.152344 5.480469 Z M 46.152344 23.246094 "/>
|
||||
</g>
|
||||
<g fill="rgb(10.196078%, 10.196078%, 10.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-6" x="536.335938" y="16.96582"/>
|
||||
<use xlink:href="#glyph-0-7" x="542.335938" y="16.96582"/>
|
||||
<use xlink:href="#glyph-0-8" x="547.335938" y="16.96582"/>
|
||||
</g>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(0%, 0%, 0%)" stroke-opacity="1" stroke-miterlimit="10" d="M 46.152344 415.847656 L 1043.519531 415.847656 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 73.105469 418.589844 L 73.105469 415.847656 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 118.03125 418.589844 L 118.03125 415.847656 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 162.960938 418.589844 L 162.960938 415.847656 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 207.886719 418.589844 L 207.886719 415.847656 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 252.8125 418.589844 L 252.8125 415.847656 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 297.738281 418.589844 L 297.738281 415.847656 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 342.667969 418.589844 L 342.667969 415.847656 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 387.59375 418.589844 L 387.59375 415.847656 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 432.519531 418.589844 L 432.519531 415.847656 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 477.445312 418.589844 L 477.445312 415.847656 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 522.371094 418.589844 L 522.371094 415.847656 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 567.300781 418.589844 L 567.300781 415.847656 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 612.226562 418.589844 L 612.226562 415.847656 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 657.152344 418.589844 L 657.152344 415.847656 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 702.078125 418.589844 L 702.078125 415.847656 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 747.003906 418.589844 L 747.003906 415.847656 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 791.933594 418.589844 L 791.933594 415.847656 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 836.859375 418.589844 L 836.859375 415.847656 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 881.785156 418.589844 L 881.785156 415.847656 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 926.710938 418.589844 L 926.710938 415.847656 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 971.636719 418.589844 L 971.636719 415.847656 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 1016.566406 418.589844 L 1016.566406 415.847656 "/>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-9" x="57.605469" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-10" x="63.605469" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-10" x="68.605469" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-5" x="73.605469" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-1" x="78.605469" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-11" x="83.605469" y="426.883789"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-12" x="99.03125" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-7" x="105.03125" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-10" x="110.03125" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-7" x="115.03125" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-13" x="120.03125" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-2" x="125.03125" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-14" x="127.03125" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-2" x="129.03125" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-15" x="131.03125" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-16" x="133.03125" y="426.883789"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-12" x="150.960938" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-17" x="156.960938" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-5" x="161.960938" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-18" x="166.960938" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-19" x="170.960938" y="426.883789"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-12" x="196.886719" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-14" x="202.886719" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-20" x="204.886719" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-21" x="209.886719" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-5" x="213.886719" y="426.883789"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-12" x="239.8125" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-22" x="245.8125" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-5" x="248.8125" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-7" x="253.8125" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-15" x="258.8125" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-5" x="260.8125" y="426.883789"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-23" x="283.738281" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-1" x="289.738281" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-7" x="294.738281" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-13" x="299.738281" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-14" x="304.738281" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-5" x="306.738281" y="426.883789"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-23" x="325.667969" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-24" x="331.667969" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-7" x="335.667969" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-25" x="340.667969" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-2" x="347.667969" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-1" x="349.667969" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-5" x="354.667969" y="426.883789"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-23" x="370.09375" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-24" x="376.09375" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-10" x="380.09375" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-4" x="385.09375" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-1" x="390.09375" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-26" x="395.09375" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-5" x="400.09375" y="426.883789"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-27" x="422.019531" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-5" x="427.019531" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-15" x="432.019531" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-18" x="434.019531" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-17" x="438.019531" y="426.883789"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-28" x="470.445312" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-11" x="472.445312" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-14" x="477.445312" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-5" x="479.445312" y="426.883789"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-29" x="515.871094" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-2" x="520.871094" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-21" x="522.871094" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-15" x="526.871094" y="426.883789"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-29" x="556.300781" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-20" x="561.300781" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-26" x="566.300781" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-2" x="571.300781" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-1" x="573.300781" y="426.883789"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-29" x="598.726562" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-20" x="603.726562" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-26" x="608.726562" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-20" x="613.726562" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-4" x="618.726562" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-15" x="623.726562" y="426.883789"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-29" x="647.652344" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-21" x="652.652344" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-4" x="656.652344" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-13" x="661.652344" y="426.883789"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-30" x="691.578125" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-20" x="698.578125" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-31" x="703.578125" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-5" x="707.578125" y="426.883789"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-32" x="736.503906" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-20" x="742.503906" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-20" x="747.503906" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-10" x="752.503906" y="426.883789"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-33" x="777.933594" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-5" x="783.933594" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-7" x="788.933594" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-22" x="793.933594" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-18" x="796.933594" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-17" x="800.933594" y="426.883789"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-33" x="824.859375" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-5" x="830.859375" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-14" x="835.859375" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-5" x="837.859375" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-18" x="842.859375" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-15" x="846.859375" y="426.883789"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-33" x="869.785156" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-15" x="875.785156" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-7" x="877.785156" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-15" x="882.785156" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-4" x="884.785156" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-21" x="889.785156" y="426.883789"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-33" x="916.210938" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-15" x="922.210938" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-20" x="924.210938" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-22" x="929.210938" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-5" x="932.210938" y="426.883789"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-33" x="952.136719" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-4" x="958.136719" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-13" x="963.136719" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-21" x="968.136719" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-18" x="972.136719" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-22" x="976.136719" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-2" x="979.136719" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-13" x="981.136719" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-5" x="986.136719" y="426.883789"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-0" x="1000.066406" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-1" x="1006.066406" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-21" x="1011.066406" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-5" x="1015.066406" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-14" x="1020.066406" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-5" x="1022.066406" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-18" x="1027.066406" y="426.883789"/>
|
||||
<use xlink:href="#glyph-0-15" x="1031.066406" y="426.883789"/>
|
||||
</g>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(0%, 0%, 0%)" stroke-opacity="1" stroke-miterlimit="10" d="M 46.152344 200.957031 L 1043.519531 200.957031 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 73.105469 203.699219 L 73.105469 200.957031 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 118.03125 203.699219 L 118.03125 200.957031 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 162.960938 203.699219 L 162.960938 200.957031 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 207.886719 203.699219 L 207.886719 200.957031 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 252.8125 203.699219 L 252.8125 200.957031 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 297.738281 203.699219 L 297.738281 200.957031 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 342.667969 203.699219 L 342.667969 200.957031 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 387.59375 203.699219 L 387.59375 200.957031 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 432.519531 203.699219 L 432.519531 200.957031 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 477.445312 203.699219 L 477.445312 200.957031 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 522.371094 203.699219 L 522.371094 200.957031 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 567.300781 203.699219 L 567.300781 200.957031 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 612.226562 203.699219 L 612.226562 200.957031 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 657.152344 203.699219 L 657.152344 200.957031 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 702.078125 203.699219 L 702.078125 200.957031 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 747.003906 203.699219 L 747.003906 200.957031 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 791.933594 203.699219 L 791.933594 200.957031 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 836.859375 203.699219 L 836.859375 200.957031 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 881.785156 203.699219 L 881.785156 200.957031 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 926.710938 203.699219 L 926.710938 200.957031 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 971.636719 203.699219 L 971.636719 200.957031 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 1016.566406 203.699219 L 1016.566406 200.957031 "/>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-9" x="57.605469" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-10" x="63.605469" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-10" x="68.605469" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-5" x="73.605469" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-1" x="78.605469" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-11" x="83.605469" y="211.993164"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-12" x="99.03125" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-7" x="105.03125" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-10" x="110.03125" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-7" x="115.03125" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-13" x="120.03125" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-2" x="125.03125" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-14" x="127.03125" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-2" x="129.03125" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-15" x="131.03125" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-16" x="133.03125" y="211.993164"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-12" x="150.960938" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-17" x="156.960938" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-5" x="161.960938" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-18" x="166.960938" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-19" x="170.960938" y="211.993164"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-12" x="196.886719" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-14" x="202.886719" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-20" x="204.886719" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-21" x="209.886719" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-5" x="213.886719" y="211.993164"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-12" x="239.8125" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-22" x="245.8125" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-5" x="248.8125" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-7" x="253.8125" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-15" x="258.8125" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-5" x="260.8125" y="211.993164"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-23" x="283.738281" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-1" x="289.738281" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-7" x="294.738281" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-13" x="299.738281" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-14" x="304.738281" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-5" x="306.738281" y="211.993164"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-23" x="325.667969" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-24" x="331.667969" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-7" x="335.667969" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-25" x="340.667969" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-2" x="347.667969" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-1" x="349.667969" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-5" x="354.667969" y="211.993164"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-23" x="370.09375" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-24" x="376.09375" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-10" x="380.09375" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-4" x="385.09375" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-1" x="390.09375" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-26" x="395.09375" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-5" x="400.09375" y="211.993164"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-27" x="422.019531" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-5" x="427.019531" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-15" x="432.019531" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-18" x="434.019531" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-17" x="438.019531" y="211.993164"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-28" x="470.445312" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-11" x="472.445312" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-14" x="477.445312" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-5" x="479.445312" y="211.993164"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-29" x="515.871094" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-2" x="520.871094" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-21" x="522.871094" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-15" x="526.871094" y="211.993164"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-29" x="556.300781" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-20" x="561.300781" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-26" x="566.300781" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-2" x="571.300781" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-1" x="573.300781" y="211.993164"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-29" x="598.726562" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-20" x="603.726562" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-26" x="608.726562" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-20" x="613.726562" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-4" x="618.726562" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-15" x="623.726562" y="211.993164"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-29" x="647.652344" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-21" x="652.652344" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-4" x="656.652344" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-13" x="661.652344" y="211.993164"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-30" x="691.578125" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-20" x="698.578125" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-31" x="703.578125" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-5" x="707.578125" y="211.993164"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-32" x="736.503906" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-20" x="742.503906" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-20" x="747.503906" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-10" x="752.503906" y="211.993164"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-33" x="777.933594" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-5" x="783.933594" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-7" x="788.933594" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-22" x="793.933594" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-18" x="796.933594" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-17" x="800.933594" y="211.993164"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-33" x="824.859375" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-5" x="830.859375" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-14" x="835.859375" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-5" x="837.859375" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-18" x="842.859375" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-15" x="846.859375" y="211.993164"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-33" x="869.785156" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-15" x="875.785156" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-7" x="877.785156" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-15" x="882.785156" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-4" x="884.785156" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-21" x="889.785156" y="211.993164"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-33" x="916.210938" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-15" x="922.210938" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-20" x="924.210938" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-22" x="929.210938" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-5" x="932.210938" y="211.993164"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-33" x="952.136719" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-4" x="958.136719" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-13" x="963.136719" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-21" x="968.136719" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-18" x="972.136719" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-22" x="976.136719" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-2" x="979.136719" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-13" x="981.136719" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-5" x="986.136719" y="211.993164"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-0" x="1000.066406" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-1" x="1006.066406" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-21" x="1011.066406" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-5" x="1015.066406" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-14" x="1020.066406" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-5" x="1022.066406" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-18" x="1027.066406" y="211.993164"/>
|
||||
<use xlink:href="#glyph-0-15" x="1031.066406" y="211.993164"/>
|
||||
</g>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(0%, 0%, 0%)" stroke-opacity="1" stroke-miterlimit="10" d="M 46.152344 200.957031 L 46.152344 23.246094 "/>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-34" x="36.21875" y="195.485352"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-35" x="26.21875" y="161.458008"/>
|
||||
<use xlink:href="#glyph-0-36" x="31.21875" y="161.458008"/>
|
||||
<use xlink:href="#glyph-0-34" x="36.21875" y="161.458008"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-36" x="26.21875" y="127.430664"/>
|
||||
<use xlink:href="#glyph-0-34" x="31.21875" y="127.430664"/>
|
||||
<use xlink:href="#glyph-0-34" x="36.21875" y="127.430664"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-37" x="26.21875" y="93.40332"/>
|
||||
<use xlink:href="#glyph-0-36" x="31.21875" y="93.40332"/>
|
||||
<use xlink:href="#glyph-0-34" x="36.21875" y="93.40332"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-38" x="21.21875" y="59.379883"/>
|
||||
<use xlink:href="#glyph-0-34" x="26.21875" y="59.379883"/>
|
||||
<use xlink:href="#glyph-0-34" x="31.21875" y="59.379883"/>
|
||||
<use xlink:href="#glyph-0-34" x="36.21875" y="59.379883"/>
|
||||
</g>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 43.410156 192.882812 L 46.152344 192.882812 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 43.410156 158.855469 L 46.152344 158.855469 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 43.410156 124.828125 L 46.152344 124.828125 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 43.410156 90.800781 L 46.152344 90.800781 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 43.410156 56.777344 L 46.152344 56.777344 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(0%, 0%, 0%)" stroke-opacity="1" stroke-miterlimit="10" d="M 46.152344 415.847656 L 46.152344 238.136719 "/>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-34" x="36.21875" y="410.37207"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-36" x="31.21875" y="347.266602"/>
|
||||
<use xlink:href="#glyph-0-34" x="36.21875" y="347.266602"/>
|
||||
</g>
|
||||
<g fill="rgb(30.196078%, 30.196078%, 30.196078%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-0-38" x="26.21875" y="284.157227"/>
|
||||
<use xlink:href="#glyph-0-34" x="31.21875" y="284.157227"/>
|
||||
<use xlink:href="#glyph-0-34" x="36.21875" y="284.157227"/>
|
||||
</g>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 43.410156 407.769531 L 46.152344 407.769531 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 43.410156 344.664062 L 46.152344 344.664062 "/>
|
||||
<path fill="none" stroke-width="1.066978" stroke-linecap="butt" stroke-linejoin="round" stroke="rgb(20%, 20%, 20%)" stroke-opacity="1" stroke-miterlimit="10" d="M 43.410156 281.554688 L 46.152344 281.554688 "/>
|
||||
<g fill="rgb(0%, 0%, 0%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-1-0" x="520.835938" y="441.147461"/>
|
||||
<use xlink:href="#glyph-1-1" x="526.835938" y="441.147461"/>
|
||||
<use xlink:href="#glyph-1-2" x="532.835938" y="441.147461"/>
|
||||
<use xlink:href="#glyph-1-2" x="541.835938" y="441.147461"/>
|
||||
<use xlink:href="#glyph-1-3" x="550.835938" y="441.147461"/>
|
||||
<use xlink:href="#glyph-1-4" x="556.835938" y="441.147461"/>
|
||||
<use xlink:href="#glyph-1-5" x="562.835938" y="441.147461"/>
|
||||
</g>
|
||||
<g fill="rgb(0%, 0%, 0%)" fill-opacity="1">
|
||||
<use xlink:href="#glyph-2-0" x="14.108398" y="233.046875"/>
|
||||
<use xlink:href="#glyph-2-1" x="14.108398" y="227.046875"/>
|
||||
<use xlink:href="#glyph-2-2" x="14.108398" y="221.046875"/>
|
||||
<use xlink:href="#glyph-2-3" x="14.108398" y="215.046875"/>
|
||||
<use xlink:href="#glyph-2-4" x="14.108398" y="209.046875"/>
|
||||
</g>
|
||||
</svg>
|
After Width: | Height: | Size: 68 KiB |
content/blog/2024-ram-usage-encryption-s3/ecdf_mbx.svg (new file, 240 lines, 127 KiB; SVG markup omitted)

content/blog/2024-ram-usage-encryption-s3/index.md (new file, 218 lines)

@ -0,0 +1,218 @@
+++
title="Does Aerogramme use a lot of RAM?"
date=2024-02-15
+++

*"Will Aerogramme use a lot of RAM?" was the first question we asked ourselves
when designing email mailboxes as an encrypted event log. This blog post
evaluates our design assumptions against the real-world implementation,
similarly to what we did [on Garage](https://garagehq.deuxfleurs.fr/blog/2022-perf/).*

<!-- more -->

---

## Methodology

Brendan Gregg, a highly respected figure in the world of systems performance, says that, for many reasons,
[~100% of benchmarks are wrong](https://www.brendangregg.com/Slides/Velocity2015_LinuxPerfTools.pdf).
This benchmark will be wrong too, in multiple ways:

1. It will not say anything about Aerogramme's performance in real-world deployments
2. It will not say anything about Aerogramme's performance compared to other email servers

However, I pursue a very specific goal with this benchmark: validating whether the assumptions we made
during the design phase, in terms of compute and memory complexity, hold in the real world.

I will observe only two metrics: the CPU time used by the program (everything except idle and iowait, based on the [psutil](https://pypi.org/project/psutil/) code) - for the compute complexity - and the [Resident Set Size](https://en.wikipedia.org/wiki/Resident_set_size) (the data held in RAM) - for the memory complexity.
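
To make these two metrics concrete, here is a minimal sketch of how they can be sampled with psutil; the PID is hypothetical, and psrecord (used below) essentially does this in a loop:

```
import time
import psutil

def sample(pid: int, seconds: int = 60) -> None:
    """Print CPU time and RSS of a process once per second."""
    proc = psutil.Process(pid)
    for _ in range(seconds):
        cpu = proc.cpu_times()        # user + system CPU time, in seconds
        rss = proc.memory_info().rss  # resident set size, in bytes
        print(f"cpu={cpu.user + cpu.system:.2f}s rss={rss / 2**20:.1f}MiB")
        time.sleep(1)

sample(1234)  # 1234 is a hypothetical PID of the aerogramme process
```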

<!--My baseline will be the compute and space complexity of the code that I have in mind. For example,
I know we have a "3 layers" data model: an index stored in RAM, a summary of the emails stored in K2V, a database, and the full email stored in S3, an object store.
Commands that can be solved only with the index should use a very low amount of RAM compared to . In turn, commands that require the full email will require to fetch lots of data from S3.-->

## Testing environment

I ran all the tests on my personal computer, a Dell Inspiron 7775 with an AMD Ryzen 7 1700, 16GB of RAM, and an encrypted SSD, running NixOS 23.11.
The setup is made of Aerogramme (compiled in release mode) connected to a local, single-node Garage server.

Observations and graphs are produced all at once thanks to the [psrecord](https://github.com/astrofrog/psrecord) tool.
I did not try to make the following values reproducible, as this is more an exploration than a definitive review.
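
For reference, a recording session looks roughly like this sketch (the process name and output file names are hypothetical; psrecord can also launch the command itself instead of attaching to a PID):

```
import subprocess
import psutil

# Find the server process by name ("aerogramme" is an assumption),
# then record its CPU and RSS every second and plot the result.
pid = next(p.pid for p in psutil.process_iter(["name"])
           if p.info["name"] == "aerogramme")
subprocess.run(["psrecord", str(pid),
                "--interval", "1",    # sampling period, in seconds
                "--plot", "run.png",  # graphs like the ones below
                "--log", "run.txt"])  # raw samples
```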

## Mailbox dataset

I will use [a dataset of 100 emails](https://git.deuxfleurs.fr/Deuxfleurs/aerogramme/src/commit/0b20d726bbc75e0dfd2ba1900ca5ea697645a8f1/tests/emails/aero100.mbox.zstd) that I made specifically for the occasion.
It contains emails with various attachments, emails with lots of text, and emails generated by many different clients (Thunderbird, Geary, Sogo, Alps, Outlook iOS, GMail iOS, Windows Mail, Postbox, Mailbird, etc.).
The mbox file weighs 23MB uncompressed.

One question that arises is: how representative of a real mailbox is this dataset? While a definitive answer is not possible, I compared the email sizes of this dataset to the 2 367 emails in my personal inbox.
Below, I plotted the empirical distribution for both my dataset and my personal inbox (note that the x-axis is not linear but logarithmic).
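
A minimal sketch of how such an empirical CDF can be computed and plotted (the mbox file names are hypothetical):

```
import mailbox
import matplotlib.pyplot as plt

def ecdf(path):
    """Return sorted email sizes and their cumulative fractions."""
    sizes = sorted(len(msg.as_bytes()) for msg in mailbox.mbox(path))
    return sizes, [(i + 1) / len(sizes) for i in range(len(sizes))]

for path, label in [("aero100.mbox", "dataset"), ("inbox.mbox", "personal inbox")]:
    xs, ys = ecdf(path)
    plt.semilogx(xs, ys, label=label)  # logarithmic x-axis, as below
plt.xlabel("email size (bytes)")
plt.ylabel("fraction of emails")
plt.legend()
plt.savefig("ecdf_mbx.svg")
```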

![ECDF mailbox](ecdf_mbx.svg)

We see that the curves are close together and follow the same pattern: most emails are between 1kB and 100kB, and then there is a long tail (up to 20MB in my inbox, up to 6MB in the dataset).
This is not that surprising: in many places on the Internet, the limit on email size is set to 25MB. Overall, I am quite satisfied with this simple dataset, even if one or two bigger emails could make it even more representative of my real inbox...

Mailboxes with only 100 emails are not that common (mine has 2k emails...), so to emulate bigger mailboxes, I simply inject the dataset multiple times (e.g. 20 times for 2k emails).

## Command dataset

Having a representative mailbox is one thing, but we also need to know which commands are typically sent by IMAP clients.
As I have set up a test instance of Aerogramme (see [my FOSDEM talk](https://fosdem.org/2024/schedule/event/fosdem-2024-2642--servers-aerogramme-a-multi-region-imap-server/)),
I was able to extract 4 619 IMAP commands sent by various clients. Many of them are identical, and in the end, only 248 are truly unique.
The following bar plot depicts the command distribution per command name; top is the raw count, bottom is the unique count.
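
As a sketch, the raw and unique counts shown below can be derived along these lines, assuming one command per line in a text log extracted from the server (the log format and file name are hypothetical):

```
from collections import Counter

raw = Counter()
variants = {}

with open("imap-commands.log") as f:
    for line in f:
        cmd = line.strip()
        if not cmd:
            continue
        words = cmd.split()
        name = words[0].upper()
        if name == "UID" and len(words) > 1:
            name = words[1].upper()  # fold "UID FETCH" into "FETCH"
        raw[name] += 1
        variants.setdefault(name, set()).add(cmd)

for name, count in raw.most_common():
    print(f"{name}: {count} raw, {len(variants[name])} unique")
```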

![Commands](command-run.svg)

First, we can handle some commands separately: LOGIN, CAPABILITY, ENABLE, SELECT, EXAMINE, CLOSE, UNSELECT, and LOGOUT are part of a **connection workflow**.
We do not plan on studying them directly, as they will be used in all other tests.
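
Every test below therefore runs inside the same session skeleton, sketched here with Python's standard imaplib (host, port, and credentials are hypothetical):

```
import imaplib

M = imaplib.IMAP4("localhost", 1143)  # connect, then LOGIN
M.login("alice", "hunter2")
M.capability()                        # CAPABILITY
M.select("INBOX")                     # SELECT (readonly=True for EXAMINE)
# ... the commands under test run here ...
M.close()                             # CLOSE
M.logout()                            # LOGOUT
```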

CHECK, NOOP, IDLE, and STATUS are different approaches to detect a change in the current mailbox (or in other mailboxes, in the case of STATUS);
I treat these commands as a **notification** mechanism.

FETCH, SEARCH, and LIST are **query** commands, the first two for emails, the last one for mailboxes.
FETCH is by far the most used command (1 187 occurrences) with the most variations (128 unique combinations of parameters).
SEARCH is also used a lot (658 occurrences, 14 unique).

APPEND, STORE, EXPUNGE, MOVE, COPY, LSUB, SUBSCRIBE, CREATE, and DELETE are commands that **write** things: flags, emails, or mailboxes.
They are not used a lot, but some writes are hidden in other commands (CLOSE, FETCH), and when mails arrive, they are delivered through a different protocol (LMTP) that does not appear here.
In the following, we will assume that APPEND behaves more or less like an LMTP delivery.

<!--
Focus on `FETCH` (128 unique commands), `SEARCH` (14 unique commands)

```
FETCH *:5 (UID ENVELOPE BODY.PEEK[HEADER.FIELDS("References")])
UID FETCH 1:* (UID FLAGS) (CHANGEDSINCE 22)
FETCH 1:1 (UID FLAGS INTERNALDATE RFC822.SIZE BODY.PEEK[HEADER.FIELDS("DATE" "FROM" "SENDER" "SUBJECT" "TO" "CC" "MESSAGE-ID" "REFERENCES" "CONTENT-TYPE" "CONTENT-DESCRIPTION" "IN-REPLY-TO" "REPLY-TO" "LINES" "LIST-POST" "X-LABEL" "CONTENT-CLASS" "IMPORTANCE" "PRIORITY" "X-PRIORITY" "THREAD-TOPIC" "REPLY-TO" "AUTO-SUBMITTED" "BOUNCES-TO" "LIST-ARCHIVE" "LIST-HELP" "LIST-ID" "LIST-OWNER" "LIST-POST" "LIST-SUBSCRIBE" "LIST-UNSUBSCRIBE" "PRECEDENCE" "RESENT-FROM" "RETURN-PATH" "Newsgroups" "Delivery-Date")])
UID FETCH 1:2,11:13,18:19,22:26,33:34,60:62 (FLAGS) (CHANGEDSINCE 165)
UID FETCH 1:7 (UID RFC822.SIZE BODY.PEEK[])
UID FETCH 12:13 (INTERNALDATE UID RFC822.SIZE FLAGS MODSEQ BODY.PEEK[HEADER])
UID FETCH 2 (RFC822.HEADER BODY.PEEK[2]<0.10240>)
```

Flags, date, headers

```
SEARCH UNDELETED SINCE 2023-11-17
UID SEARCH HEADER "Message-ID" "<x@y.z>" UNDELETED
UID SEARCH 1:* UNSEEN
UID SEARCH BEFORE 2024-02-09
```
-->

<!--
`STORE` (19 unique commands).
UID, not uid, silent, not silent, add not set, standard flags mainly.

```
UID STORE 60:62 +FLAGS (\Deleted \Seen)
STORE 2 +FLAGS.SILENT \Answered
```
-->

In the following, I will keep these 3 categories - **write**, **notification**, and **query** - to evaluate Aerogramme's resource usage,
based on command patterns observed in real IMAP commands.

---

## Write Commands

I inserted the full dataset (100 emails) into 16 accounts (the server then handles 1 600 emails).
*[See the script](https://git.deuxfleurs.fr/Deuxfleurs/aerogramme/src/branch/main/tests/instrumentation/mbox-to-imap.py)*
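
The gist of that script, as a simplified sketch (host, port, and account names are hypothetical): open one IMAP session per account and APPEND every message of the dataset.

```
import imaplib
import mailbox

DATASET = mailbox.mbox("aero100.mbox")  # the 100-email dataset

for i in range(16):                     # one run per account
    M = imaplib.IMAP4("localhost", 1143)
    M.login(f"user{i}", "hunter2")
    for msg in DATASET:
        # APPEND the raw message to the INBOX; flags and internal
        # date are left to the server's defaults.
        M.append("INBOX", None, None, msg.as_bytes())
    M.logout()
```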

`APPEND`

![Append Custom Build](01-append-tokio-console-musl.png)

First, I observed this *scary* linear memory increase. It seems we are not releasing some memory,
and that's an issue! I quickly suspected tokio-console of being the culprit.
A quick search led me to an issue entitled [Continuous memory leak with console_subscriber #184](https://github.com/tokio-rs/console/issues/184)
that confirmed my intuition.
Instead of waiting for an hour or trying to tweak the retention time, I tried a build without tokio-console.

*So in this first approach, we observed the impact of tokio-console instead of our code! Still, we want
performances that are as predictable as possible.*

![Append Cargo Release](02-append-glibc.png)

This brings us to a second pattern: a stable but high memory usage compared to the previous run.
It appears I built the binary with `cargo release`, which creates a binary that dynamically links to the GNU libc,
while the previous binary was made with our custom Nix toolchain that statically compiles against the musl libc.
In the process, we changed the allocator: it seems the GNU libc allocator allocates bigger chunks at once.

*It would be wrong to conclude that the musl libc allocator is more efficient: allocating and deallocating
memory on the kernel side is costly, and thus it might be better for the allocator to keep some kernel-allocated memory
for future allocations that will not require system calls. This is another example of why this benchmark is wrong: we observe
the memory allocated by the allocator, not the memory used by the program itself.*

For the next graph, I removed tokio-console but built Aerogramme with the static musl libc.

![Append Custom Build](03-append-musl.png)

We observe 16 spikes of memory allocation, of around 50MB each, followed by a 25MB memory usage. In the end,
we drop to ~18MB. We will not try to analyze the spikes for now. However, we can assume the 25MB memory usage accounts for the base memory consumption
plus the index of the user's mailbox: once the last user has logged out, memory drops to 18MB.
In this scenario, a user thus accounts for around 7MB of RAM.

*We will see later that some other use cases lead to a lower per-user RAM consumption.
One hypothesis: we are making requests to S3 with the aws-sdk library, which is intended to be configured once
per process and handles the threading logic internally. In our case, we instantiate it once per user;
tweaking its configuration might help. Again, we are not observing only our code!*

In the previous runs, we were doing the inserts sequentially. But in the real world, multiple users interact with the server
at the same time. In the next run, we run the same test, but in parallel.
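
Concretely, the per-account loop from the previous sketch is simply fanned out over a thread pool (same hypothetical host and credentials):

```
import imaplib
import mailbox
from concurrent.futures import ThreadPoolExecutor

def inject(account: int) -> None:
    """Per-account APPEND loop, one IMAP session per user."""
    M = imaplib.IMAP4("localhost", 1143)
    M.login(f"user{account}", "hunter2")
    for msg in mailbox.mbox("aero100.mbox"):
        M.append("INBOX", None, None, msg.as_bytes())
    M.logout()

# All 16 accounts now APPEND their dataset concurrently.
with ThreadPoolExecutor(max_workers=16) as pool:
    list(pool.map(inject, range(16)))
```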

![Append Parallel](04-append-parallel.png)

We see 2 spikes: a short one at the beginning, and a longer one at the end.

## Notification Commands

`NOOP` & `CHECK`

*TODO*

`STATUS`

*TODO*

`IDLE`

![Idle Parallel](05-idle-parallel.png)

## Query Commands

`FETCH 1:* ALL`

![Fetch All 1k mail](06-fetch-all.png)

`FETCH 1:* FULL`

![Fetch Full 1k mail](07-fetch-full.png)

This one crashed the Garage server by exhausting its file descriptors:

```
ERROR hyper::server::tcp: accept error: No file descriptors available (os error 24)
```
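
For context, RFC 3501 defines `ALL` as a macro for `(FLAGS INTERNALDATE RFC822.SIZE ENVELOPE)`, while `FULL` additionally requests `BODY`. Answering `ALL` only needs metadata, but computing `BODY` requires each message's MIME structure, which presumably translates into one S3 object fetch per email and would explain the file descriptor exhaustion above. A sketch of both queries with imaplib, macros expanded (connection setup as in the earlier skeleton):

```
import imaplib

M = imaplib.IMAP4("localhost", 1143)  # hypothetical local test server
M.login("alice", "hunter2")
M.select("INBOX")

# ALL = (FLAGS INTERNALDATE RFC822.SIZE ENVELOPE), per RFC 3501
typ, data = M.fetch("1:*", "(FLAGS INTERNALDATE RFC822.SIZE ENVELOPE)")

# FULL = ALL + BODY; BODY needs each message's MIME structure
typ, data = M.fetch("1:*", "(FLAGS INTERNALDATE RFC822.SIZE ENVELOPE BODY)")

M.logout()
```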

`SEARCH`

*TODO*

`LIST`

*TODO*

---

## Conclusion

*TBD*