Compare commits

No commits in common. "master" and "fixfixfix" have entirely different histories.

@@ -10,7 +10,6 @@ steps:
   commands:
     - git submodule update --init --recursive
     - cp -rv garage/doc/book content/documentation
-    - cp -rv garage/doc/api static/api

 - name: build-css
   image: node
.gitignore (vendored)

@@ -2,4 +2,3 @@ node_modules
 public
 content/documentation
 static/style.css
-static/api
config.toml (changed from normal file to executable file)

@@ -1,6 +1,6 @@
 base_url = "https://garagehq.deuxfleurs.fr"
-title = "Garage"
-description = "An open-source distributed storage service you can self-host to fulfill many needs"
+title = "Garage HQ"
+description = "An open-source distributed object storage service tailored for self-hosting"
 default_language = "en"
 output_dir = "public"
 compile_sass = true

@@ -61,7 +61,7 @@ webmanifest = "/icons/site.webmanifest"

 [extra.organization]
 name = "Garage"
-description = "An open-source distributed storage service you can self-host to fulfill many needs"
+description = "An open-source distributed object storage service tailored for self-hosting"
 logo = "/images/garage-logo.svg"
 logo_simple = "/images/garage-logo-simple.svg"
 logo_horizontal = "/images/garage-logo-horizontal.svg"
@@ -4,16 +4,16 @@ date=2022-02-02
+++

*FOSDEM is an international meeting about Free Software, organized from Brussels.
Next Sunday, February 6th, 2022, we will be there to present Garage.*

<!-- more -->

---

In 2000, a Belgian free software activist going by the name of Raphael Baudin
set out to create a small event for free software developers in Brussels.
This event quickly became the "Free and Open Source Developers' European Meeting",
FOSDEM for short. 22 years later, FOSDEM is a major event for free software developers
around the world. And this year, we have the immense pleasure of announcing
that the Deuxfleurs association will be there to present Garage.

@@ -23,18 +23,18 @@ in the last few years. Nothing too unfamiliar to us, as the organization is using
the same tools as we are: a combination of Jitsi and Matrix.

We are of course extremely honored that our presentation was accepted.
If technical details are your thing, we invite you to come and share this event with us.
In any case, the event will be recorded and available as a VOD (Video On Demand)
afterward. Concerning the details of the organization:

**When?** On Sunday, February 6th, 2022, from 10:30 AM to 11:00 AM CET.

**What for?** Introducing the Garage storage platform.

**By whom?** The presentation will be given by Alex;
other developers will be present to answer questions.

**For whom?** The presentation is targeted at a technical audience that is knowledgeable in software development or systems administration.

**Price:** FOSDEM'22 is an entirely free event.

@@ -46,7 +46,7 @@ afterward. Concerning the details of the organization:

And if you are not so much of a technical person, but you're dreaming of
a more ethical and emancipatory digital world,
stay tuned for news from the Deuxfleurs association,
as we will likely have other events very soon!
@@ -6,7 +6,7 @@ date=2022-02-01
*Deuxfleurs is a non-profit based in France that aims to defend and promote
individual freedom and rights on the Internet. In their quest to build a
decentralized, resilient self-hosting infrastructure, they have found that
currently, existing software is often ill-suited to such a particular deployment
scenario. In the context of data storage, Garage was built to provide a highly
available data store that exploits redundancy over different geographical
locations, and does its best to not be too impacted by network latencies.*

@@ -23,8 +23,8 @@ Facebook or Amazon today hold disproportionate power and are becoming quite
dangerous to us, citizens of the Internet. They know everything we are doing,
saying, and even thinking, and they are not making good use of that
information. The interests of these companies are those of the capitalist
elite: they are mostly interested in making huge profits by exploiting the
Earth's precious resources, producing, advertising, and selling us massive
amounts of stuff we don't need. They don't truly care about the needs of the
people, nor do they care that planetary destruction is under way because of
them.

@@ -56,17 +56,17 @@ As I said, self-hosting means running our own hardware at home, and providing
24/7 Internet services from there. We have many reasons for doing this. One is
that this is the only way we can truly control who has access to our data.
Another is that it helps us be aware of the physical substrate of which the
Internet is made: making the Internet run has an environmental cost that we
want to evaluate and keep under control. The physical hardware also gives us a
sense of community, calling to mind all of the people that could currently be
connected and making use of our services, and reminding us of the purpose for
which we are doing this.

If you have a home, you know that bad things can happen there too. The power
grid is not infallible, and neither is your Internet connection. Fires and floods
happen. And the computers we are running can themselves crash at any moment,
for any number of reasons. Self-hosted solutions today are often not equipped
to face such challenges and might suffer from unavailability or data loss
as a consequence.

If we want to grow our communities, and attract more people that might be

@@ -78,7 +78,7 @@ data, the compromise is much harder to make and people will be tempted to go
back to a comfortable lifestyle bestowed by big tech companies.

Fixing availability, i.e. making services reliable even when hosted at unreliable
locations or on unreliable hardware, is one of the main objectives of
Deuxfleurs, and in particular of the Garage project which we are building.

### Distributed systems to the rescue

@@ -123,9 +123,9 @@ landscape of distributed storage systems.

Garage implements the Amazon S3 protocol, a de-facto standard that makes it
compatible with a large variety of existing software. For instance, it can be
used as a storage backend for many self-hosted web applications such as
NextCloud, Matrix, Mastodon, Peertube, and many others, replacing the local
file system of a server with a distributed storage layer. Garage can also be
used to synchronize your files or store your backups with utilities such as
Rclone or Restic. Last but not least, Garage can be used to host static
websites, such as the one you are currently reading, which is served directly

@@ -135,7 +135,7 @@ Garage leverages the theory of distributed systems, and in particular
*Conflict-free Replicated Data Types* (CRDTs for short), a set of mathematical
tools that help us write distributed software that runs faster, by avoiding
some kinds of unnecessary chit-chat between servers. In a future blog post,
we will show how this allows us to significantly outperform Minio, our closest
competitor (another self-hostable implementation of the S3 protocol).

On the side of software engineering, we are committed to making Garage

@@ -155,7 +155,7 @@ it is working exceptionally well for us. We are currently using it to store
backups of personal files, to store the media files that we send and receive
over the Matrix network, as well as to host a small but increasing number of
static websites. Our current deployment hosts about 200 000 files spread across 50
buckets, for a total size of slightly above 500 GB. These numbers can seem small
when compared to the datasets you could expect your typical cloud provider to
be handling; however, these sizes are fairly typical of the small-scale
self-hosted deployments we are targeting, and our Garage cluster is in no way
[6 image files deleted]
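Since everything that follows hinges on the S3 compatibility mentioned in the post above, here is a minimal sketch of talking to a Garage cluster with a standard S3 client, Python's boto3. The endpoint, credentials, and bucket name are placeholders to adapt to your deployment; port 3900 and the "garage" region are the defaults given in Garage's documentation:

```python
import boto3
from botocore.client import Config

# Placeholder endpoint and credentials; any S3-compatible client can be
# pointed at Garage the same way, which is what makes it usable as a
# storage backend for existing applications.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:3900",  # Garage's S3 API (default port)
    region_name="garage",                  # Garage's default region name
    aws_access_key_id="GK...",
    aws_secret_access_key="...",
    config=Config(signature_version="s3v4"),
)

s3.put_object(Bucket="my-website", Key="index.html",
              Body=b"<h1>Served by Garage</h1>")
print(s3.get_object(Bucket="my-website", Key="index.html")["Body"].read())
```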
@@ -1,267 +0,0 @@ (file deleted)
+++
title="We tried IPFS over Garage"
date=2022-07-04
+++

*Once you have spawned your Garage cluster, you might be interested in finding ways to share your content efficiently with the rest of the world,
such as by joining federated platforms.
In this blog post, we experiment with interconnecting the InterPlanetary File System (IPFS) daemon with Garage.
We discuss the different bottlenecks and limitations of the software stack in its current state.*

<!-- more -->

---

<!--Garage has been designed to be operated inside the same "administrative area", i.e. operated by a single organization made of members that fully trust each other.
It is an intended design decision: trusting each other enables Garage to spread data over the machines instead of duplicating it.
Still, you might want to share and collaborate with the rest of the world, and it can be done in 2 ways with Garage: through the integrated HTTP server that can serve your bucket as a static website,
or by connecting it to an application that will act as a "proxy" between Garage and the rest of the world.
We refer as proxy software that knows how to speak federated protocols (e.g. Activity Pub, Solid, RemoteStorage, etc.) or distributed/p2p protocols (e.g. BitTorrent, IPFS, etc.).-->

## Some context

People often struggle to see the difference between IPFS and Garage, so let's start by making clear that these projects are complementary and not interchangeable.

Personally, I see IPFS as the intersection between BitTorrent and a file system. BitTorrent remains to this day one of the most efficient ways to deliver
a copy of a file or a folder to a very large number of destinations. It however lacks some form of interactivity: once a torrent file has been generated, you can't simply
add or remove files from it. By presenting itself more like a file system, IPFS is able to handle this use case out of the box.

<!--IPFS is a content-addressable network built in a peer-to-peer fashion.
In simple words, it means that you query the content you want with its identifier without having to know *where* it is hosted on the network, and especially on which machine.
As a side effect, you can share content over the Internet without any configuration (no firewall, NAT, fixed IP, DNS, etc.).-->

<!--However, IPFS does not enforce any property on the durability and availability of your data: the collaboration mentioned earlier is
done only on a spontaneous approach. So at first, if you want to be sure that your content remains alive, you must keep it on your node.
And if nobody makes a copy of your content, you will lose it as soon as your node goes offline and/or crashes.
Furthermore, if you need multiple nodes to store your content, IPFS is not able to automatically place content on your nodes,
enforce a given replication amount, check the integrity of your content, and so on.-->

However, you would probably not rely on BitTorrent to durably store the encrypted holiday pictures you shared with your friends,
as content on BitTorrent tends to vanish when no one in the network has a copy of it anymore. The same applies to IPFS.
Even if at some point everyone has a copy of the pictures on their hard disk, people might delete these copies after a while without you knowing it.
You also can't easily collaborate on storing this common treasure. For example, there is no automatic way to say that Alice and Bob
are in charge of storing the first half of the archive while Charlie and Eve are in charge of the second half.

➡️ **IPFS is designed to deliver content.**

*Note: the IPFS project has another project named [IPFS Cluster](https://cluster.ipfs.io/) that allows servers to collaborate on hosting IPFS content.
[Resilio](https://www.resilio.com/individuals/) and [Syncthing](https://syncthing.net/) both feature protocols inspired by BitTorrent to synchronize a tree of your file system between multiple computers.
Reviewing these solutions is out of the scope of this article; feel free to try them by yourself!*

Garage, on the other hand, is designed to automatically spread your content over all your available nodes, in a manner that makes the best possible use of your storage space.
At the same time, it ensures that your content is always replicated exactly 3 times across the cluster (or less if you change a configuration parameter),
in different geographical zones when possible.
<!--To access this content, you must have an API key, and have a correctly configured machine available over the network (including DNS/IP address/etc.). If the amount of traffic you receive is way larger than what your cluster can handle, your cluster will become simply unresponsive. Sharing content across people that do not trust each other, i.e. who operate independent clusters, is not a feature of Garage: you have to rely on external software.-->
However, this means that when content is requested from a Garage cluster, there are only 3 nodes capable of returning it to the user.
As a consequence, when content becomes popular, this subset of nodes might become a bottleneck.
Moreover, all resources (keys, files, buckets) are tightly coupled to the Garage cluster on which they exist;
servers from different clusters can't collaborate to serve the same data together (without additional software).

➡️ **Garage is designed to durably store content.**

In this blog post, we will explore whether we can combine efficient delivery and strong durability by connecting an IPFS node to a Garage cluster.

## Try #1: Vanilla IPFS over Garage

<!--If you are not familiar with IPFS, it is available both as a desktop app and a [CLI app](https://docs.ipfs.io/install/command-line/); in this post we will cover the CLI app as it is often easier to understand how things are working internally.
You can quickly follow the official [quick start guide](https://docs.ipfs.io/how-to/command-line-quick-start/#initialize-the-repository) to have an up and running node.-->

IPFS is available as a pre-compiled binary, but to connect it with Garage, we need a plugin named [ipfs/go-ds-s3](https://github.com/ipfs/go-ds-s3).
The Peergos project maintains a fork of it, as the plugin is known for hitting Amazon's rate limits
([#105](https://github.com/ipfs/go-ds-s3/issues/105), [#205](https://github.com/ipfs/go-ds-s3/pull/205)).
This is the one we will try in the following.

The easiest way to use this plugin is to bundle it in the main IPFS daemon and recompile IPFS from source.
Following the instructions in the README file allowed me to spawn an IPFS daemon configured with S3 as the block store.

I had a small issue when adding the plugin to the `plugin/loader/preload_list` file: the given command lacks a newline.
I had to edit the file manually after running it; the issue was directly visible and easy to fix.

After that, I just ran the daemon and accessed the web interface to upload a photo of my dog:

![A dog](./dog.jpg)

A content identifier (CID) was assigned to this picture:

```
QmNt7NSzyGkJ5K9QzyceDXd18PbLKrMAE93XuSC2487EFn
```

The photo is now accessible on the whole network.
For example, you can inspect it [from the official gateway](https://explore.ipld.io/#/explore/QmNt7NSzyGkJ5K9QzyceDXd18PbLKrMAE93XuSC2487EFn):

![A screenshot of the IPFS explorer](./explorer.png)

At the same time, I was monitoring Garage (through [the OpenTelemetry stack we implemented earlier this year](/blog/2022-v0-7-released/)).
Just after launching the daemon - and before doing anything - I was met by this surprisingly active Grafana plot:

![Grafana API request rate when IPFS is idle](./idle.png)
<center><i>Legend: y axis = requests per 10 seconds, x axis = time</i></center><p></p>

It shows that on average, we handle around 250 requests per second. Most of these requests are in fact the IPFS daemon checking whether a block exists in Garage.
These requests are triggered by IPFS's DHT service: since my node is reachable over the Internet, it acts as a public DHT server and has to answer global
block requests over the whole network. Each time it receives a request for a block, it sends a request to its storage back-end (in our case, to Garage) to see if a copy exists locally.

*We will try to tweak the IPFS configuration later - we know that we can deactivate the DHT server. For now, we will continue with the default parameters.*
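To make the cost of this behavior concrete, here is a rough Python/boto3 equivalent of what each of these existence checks amounts to on the S3 side (the real plugin is written in Go, and the bucket and key layout here are made up; the client is the one from the earlier sketch):

```python
from botocore.exceptions import ClientError

# Every "do you have this block?" DHT query translates into one
# HeadObject round-trip against Garage.
def block_exists(s3, cid: str) -> bool:
    try:
        s3.head_object(Bucket="ipfs-blocks", Key=cid)
        return True
    except ClientError as e:
        if e.response["Error"]["Code"] in ("404", "NoSuchKey", "NotFound"):
            return False
        raise
```

At the ~250 requests per second observed above, that is 250 such round-trips hitting Garage every second, before any actual data is served.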
When I started interacting with the IPFS node by sending a file or browsing the default proposed catalogs (i.e. the full XKCD archive),
I quickly hit the limits of our monitoring stack, which, in its default configuration, is not able to ingest the large amount of tracing data produced by the high number of S3 requests originating from the IPFS daemon.
We got the following error in Garage's logs:

```
OpenTelemetry trace error occurred. cannot send span to the batch span processor because the channel is full
```

At this point, I didn't feel that it would be very interesting to fix this issue just to find out the exact number of requests made to the cluster.
In my opinion, such a simple task as sharing a picture should not require so many requests to the storage server anyway.
As a comparison, this whole webpage, with its pictures, triggers around 10 requests on Garage when loaded, not thousands.

I think we can conclude that this first try was a failure.
The S3 storage plugin for IPFS makes far too many requests and would need significant work to be optimized.
However, we are aware that the people behind Peergos are known to run their IPFS-based software in production with an S3 backend,
so we should not give up too fast.

## Try #2: Peergos over Garage

[Peergos](https://peergos.org/) is designed as an end-to-end encrypted and federated alternative to Nextcloud.
Internally, it is built on IPFS and is known to have a [deep integration with the S3 API](https://peergos.org/posts/direct-s3).
One important point of this integration is that your browser is able to bypass both the Peergos daemon and the IPFS daemon
to write and read IPFS blocks directly from the S3 API server.

*I don't know exactly if Peergos is still considered alpha quality, or if a beta version was released,
but keep in mind that it might be more experimental than you'd like!*

<!--To give ourselves some courage in this adventure, let's start with a nice screenshot of their web UI:

![Peergos Web UI](./peergos.jpg)-->

Starting Peergos on top of Garage required some small patches on both sides, but in the end, I was able to get it working.
I was able to upload my file, see it in the interface, create a link to share it, rename it, move it to a folder, and so on:

![A screenshot of the Peergos interface](./upload.png)

At the same time, the fans of my computer started to become a bit loud!
A quick look at Grafana showed, again, a very active Garage:

![Screenshot of a grafana plot showing requests per second over time](./grafa.png)
<center><i>Legend: y axis = requests per 10 seconds on log(10) scale, x axis = time</i></center><p></p>

Again, the workload is dominated by S3 `HeadObject` requests.
After taking a look at `~/.peergos/.ipfs/config`, it seems that the IPFS configuration used by the Peergos project is quite standard,
which means that, as before, we are acting as a DHT server and have to answer thousands of block requests every second.

We also have some traffic on the `GetObject` and `OPTIONS` endpoints (with peaks up to ~45 req/sec).
This traffic is all generated by Peergos.
The `OPTIONS` HTTP verb appears because we use the direct access feature of Peergos,
meaning that our browser is talking directly to Garage and has to use CORS to validate requests for security.
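As a side note, this is the kind of CORS rule a bucket needs for such direct browser access. A sketch using the standard S3 `put_bucket_cors` call via boto3; the origin and bucket name are made up, and you should check Garage's documentation for its support of this call:

```python
# Hypothetical CORS rule letting a web UI served from peergos.example
# read and write the bucket directly; the browser's OPTIONS preflights
# are validated against it.
s3.put_bucket_cors(
    Bucket="peergos-blocks",
    CORSConfiguration={
        "CORSRules": [{
            "AllowedOrigins": ["https://peergos.example"],
            "AllowedMethods": ["GET", "PUT", "HEAD"],
            "AllowedHeaders": ["*"],
            "MaxAgeSeconds": 3600,  # lets the browser cache the preflight
        }]
    },
)
```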
Internally, IPFS splits files into blocks of less than 256 kB. My picture is thus split into 2 blocks, requiring 2 requests to Garage to fetch it.
But even knowing that IPFS splits files into small blocks, I can't explain why we have so many `GetObject` requests.
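A quick back-of-the-envelope sketch of what this chunking implies, under the 256 kB figure quoted above (the picture's exact size is not given; ~400 kB is assumed here, consistent with its 2 blocks):

```python
import math

BLOCK = 256 * 1024  # upper bound on IPFS block size discussed above

def s3_requests_for(size_bytes: int) -> int:
    """Lower bound: one S3 request per data block, ignoring manifest blocks."""
    return max(1, math.ceil(size_bytes / BLOCK))

print(s3_requests_for(400 * 1024))          # the ~400 kB picture -> 2
print(s3_requests_for(1024 * 1024 * 1024))  # a 1 GiB file -> 4096
```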
## Try #3: Optimizing IPFS

<!--
Routing = dhtclient
![](./grafa2.png)
-->

We have seen in our 2 previous tries that the main source of load was the federation, and in particular the DHT server.
In this section, we'd like to artificially remove this problem from the equation by preventing our IPFS node from federating,
and see what pressure Peergos alone puts on our local cluster.

To isolate IPFS, I set its routing type to `none`, cleared its bootstrap node list,
and configured the swarm socket to listen only on `localhost`.
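For reference, this isolation can be expressed as edits to the IPFS configuration file. A sketch in Python, with field names taken from go-ipfs's JSON config schema (double-check them against your version):

```python
import json, os

# Apply the isolation settings described above to the IPFS config file
# (default location): no routing, no bootstrap peers, swarm on localhost.
path = os.path.expanduser("~/.ipfs/config")
with open(path) as f:
    cfg = json.load(f)

cfg["Routing"]["Type"] = "none"                           # stay out of the DHT
cfg["Bootstrap"] = []                                     # never dial default peers
cfg["Addresses"]["Swarm"] = ["/ip4/127.0.0.1/tcp/4001"]   # localhost only

with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
```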
Finally, I restarted Peergos and was able to observe this more peaceful graph:

![Screenshot of a grafana plot showing requests per second over time](./grafa3.png)
<center><i>Legend: y axis = requests per 10 seconds on log(10) scale, x axis = time</i></center><p></p>

Now, for a given endpoint, we have peaks of around 10 req/sec, which is way more reasonable.
Furthermore, we are no longer hammering our back-end with requests for objects that are not there.

After discussing with the developers, it turns out it is possible to go even further by running Peergos without IPFS:
this is what they do for some of their tests. If at the same time we increased the size of data blocks,
we might get a non-federated but quite efficient end-to-end encrypted "cloud storage" that works well over Garage,
with our clients directly hitting the S3 API!

For setups where federation is a hard requirement,
the next step would be to gradually allow our node to connect to the IPFS network,
while ensuring that the traffic to the Garage cluster remains low.
For example, configuring our IPFS node as a `dhtclient` instead of a `dhtserver` would exempt it from answering public DHT requests.
Keeping an in-memory index (as a hash map and/or a Bloom filter) of the blocks stored on the current node
could also drastically reduce the number of requests (see the sketch below).
It could also be interesting to explore ways to run, in one process, a full IPFS node with a DHT
server on the regular file system, and reserve a second process configured with the S3 back-end to handle only our Peergos data.
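Here is a sketch of that in-memory index idea, using a plain Python set as a stand-in for the hash map or Bloom filter. A real implementation would also have to keep the index in sync with writes and deletes; this is only meant to illustrate the request savings:

```python
class BlockIndex:
    def __init__(self, s3, bucket: str):
        self.s3 = s3
        self.bucket = bucket
        self.known = set()  # stand-in for a Bloom filter / hash map
        # Warm up the index with one paginated listing instead of
        # answering thousands of HeadObject calls later on.
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=bucket):
            for obj in page.get("Contents", []):
                self.known.add(obj["Key"])

    def block_exists(self, cid: str) -> bool:
        return cid in self.known  # no S3 request, for hits or misses
```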
However, even with these optimizations, the best we can expect is the traffic shown on the previous plot.
From a theoretical perspective, it is still higher than the optimal number of requests.
On S3, storing a file, downloading a file, and listing available files are all actions that can be done in a single request.
Even if all requests don't have the same cost on the cluster, processing a request has a non-negligible fixed cost.
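For comparison, with the boto3 client from the earlier sketches, each of these three operations is indeed a single round-trip (bucket and key names are made up):

```python
# One request each: store, fetch, list.
s3.put_object(Bucket="demo", Key="photo.jpg",
              Body=open("photo.jpg", "rb"))
body = s3.get_object(Bucket="demo", Key="photo.jpg")["Body"].read()
listing = s3.list_objects_v2(Bucket="demo")  # up to 1000 keys per call
```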
## Are S3 and IPFS incompatible?

Tweaking IPFS to try to make it work on an S3 backend is all well and good,
but in some sense, the assumptions made by IPFS are fundamentally incompatible with using S3 as block storage.

First, data on IPFS is split into relatively small chunks: all IPFS blocks must be less than 1 MB, with most being 256 KB or less.
This means that large files or complex directory hierarchies will need thousands of blocks to be stored,
each of which is mapped to a single object in the S3 storage back-end.
On the other side, S3 implementations such as Garage are made to handle very large objects efficiently,
and they also provide their own primitives for rapidly listing all the objects present in a bucket or a directory.
There is thus a huge loss in performance when data is stored in IPFS's block format, because this format does not
take advantage of the optimizations provided by S3 back-ends in their standard usage scenarios. Instead, it
requires storing and retrieving thousands of small S3 objects even for very simple operations such
as retrieving a file or listing a directory, incurring a fixed overhead each time.

This problem is compounded by the design of the IPFS data exchange protocol,
in which nodes may request any data block from any other node in the network
in their quest to answer a user's request (like retrieving a file, etc.).
When a node is missing a file or a directory it wants to read, it has to make as many requests to other nodes
as there are IPFS blocks in the object to be read.
On the receiving end, this means that any fully-fledged IPFS node has to answer large numbers
of requests for blocks required by users everywhere on the network, which is what we observed in our experiment above.
We were however surprised to observe that many requests coming from the IPFS network were for blocks
our node didn't have a copy of: this means that somewhere in the IPFS protocol, an overly optimistic
assumption is made on where data could be found in the network, and this ends up translating into many requests
between nodes that return negative results.
When IPFS blocks are stored on a local filesystem, answering these requests fast might be possible.
However, when using an S3 server as a storage back-end, this becomes prohibitively costly.

If one wanted to design a distributed storage system for IPFS data blocks, they would probably need to start at a lower level.
Garage itself makes use of a block storage mechanism that allows small-sized blocks to be stored on a cluster and accessed
rapidly by the nodes that need them.
However, passing through the entire abstraction that provides an S3 API is wasteful and redundant, as this API is
designed to provide advanced functionality such as mutating objects, associating metadata with objects, listing objects, etc.
Plugging the IPFS daemon directly into a lower-level distributed block storage like
Garage's might yield way better results by bypassing all of this complexity.

## Conclusion

Running IPFS over an S3 storage backend does not quite work out of the box in terms of performance.
Having identified that the main problem is linked to the DHT service,
we proposed some improvements (disabling the DHT server, keeping an in-memory index of the blocks, and using the S3 back-end only for user data).

From an IPFS design perspective, however, it seems that the numerous small blocks handled by the protocol
do not map trivially to efficient use of the S3 API, and could thus be a limiting factor for any optimization work.

As part of my testing journey, I also stumbled upon some posts about performance issues on IPFS (e.g. [#6283](https://github.com/ipfs/go-ipfs/issues/6283))
that are not linked to the S3 connector. I might be negatively influenced by my failure to connect IPFS with S3,
but at this point, I'm tempted to think that IPFS is intrinsically resource-intensive from a block activity perspective.

On our side at Deuxfleurs, we will continue our investigations towards more *minimalistic* software.
This choice makes sense for us, as we want to reduce the ecological impact of our services
by deploying fewer servers that use less energy and are renewed less frequently.

After discussing with the Peergos maintainers, we identified that it is possible to run Peergos without IPFS.
With some optimizations of the block size, we envision great synergies between Garage and Peergos that could lead to
an efficient and lightweight end-to-end encrypted "cloud storage" platform.
*If you happen to be working on this, please inform us!*

*We are also aware of the existence of many other software projects for file sharing,
such as Nextcloud, Owncloud, Owncloud Infinite Scale, Seafile, Filestash, Pydio, SOLID, Remote Storage, etc.
Many of these could be connected to an S3 back-end such as Garage.
We might even try some of them in future blog posts, so stay tuned!*
[7 image files deleted]
@@ -1,513 +0,0 @@ (file deleted)
+++
title="Confronting theoretical design with observed performances"
date=2022-09-26
+++

*During the past years, we have thought a lot about possible design decisions and
their theoretical trade-offs for Garage. In particular, we pondered the impacts
of data structures, networking methods, and scheduling algorithms.
Garage worked well enough for our production
cluster at Deuxfleurs, but we also knew that people had started to experience some
unexpected behaviors, which motivated us to start a round of benchmarks and performance
measurements to see how Garage behaves compared to our expectations.
This post presents some of our first results, which cover
3 aspects of performance: efficient I/O, "myriads of objects", and resiliency,
reflecting the high-level properties we are seeking.*

<!-- more -->

---

## ⚠️ Disclaimer

The results presented in this blog post must be taken with a (critical) grain of salt due to some
limitations that are inherent to any benchmarking endeavor. We try to reference them as
exhaustively as possible here, but other limitations might exist.

Most of our tests were made on _simulated_ networks, which by definition cannot represent all the
diversity of _real_ networks (dynamic drop, jitter, latency, all of which could be
correlated with throughput or any other external event). We also limited
ourselves to very small workloads that are not representative of a production
cluster. Furthermore, we only benchmarked some very specific aspects of Garage:
our results are not an evaluation of the performance of Garage as a whole.

For some benchmarks, we used Minio as a reference. It must be noted that we did
not try to optimize its configuration as we have done for Garage, and, more
generally, we have significantly less knowledge of Minio's internals compared to Garage, which could lead
to underrated performance measurements for Minio. It must also be noted that
Garage and Minio are systems with different feature sets. For instance, Minio supports
erasure coding for higher data density while Garage doesn't, Minio implements
many more S3 endpoints than Garage, etc. Such features necessarily have a cost
that you must keep in mind when reading the plots we will present. You should consider
Minio's results as a way to contextualize Garage's numbers, to justify that our improvements
are not simply artificial in light of existing object storage implementations.

The impact of the testing environment is also not evaluated (kernel patches,
configuration, parameters, filesystem, hardware configuration, etc.). Some of
these parameters could favor one configuration or software product over another.
In particular, it must be noted that most of the tests were done on a
consumer-grade PC with only an SSD, which is different from most
production setups. Finally, our results are also provided without statistical
tests to validate their significance, and might have insufficient ground
to be claimed as reliable.

When reading this post, please keep in mind that **we are not making any
business or technical recommendations here, and this is not a scientific paper
either**; we only share bits of our development process as honestly as
possible.
Run your own tests if you need to make a decision,
remember to read [benchmarking crimes](https://gernot-heiser.org/benchmarking-crimes.html),
and remain supportive and caring with your peers ;)

## About our testing environment

We made a first batch of tests on
[Grid5000](https://www.grid5000.fr/w/Grid5000:Home), a large-scale and flexible
testbed for experiment-driven research in all areas of computer science,
which has an
[open access program](https://www.grid5000.fr/w/Grid5000:Open-Access).
During our tests, we used part of the following clusters:
[nova](https://www.grid5000.fr/w/Lyon:Hardware#nova),
[paravance](https://www.grid5000.fr/w/Rennes:Hardware#paravance), and
[econome](https://www.grid5000.fr/w/Nantes:Hardware#econome), to make a
geo-distributed topology. We used the Grid5000 testbed only during our
preliminary tests, to identify issues when running Garage on many powerful
servers. We then reproduced these issues in a controlled environment
outside of Grid5000, so don't be
surprised if Grid5000 is not always mentioned on our plots.

To reproduce some environments locally, we have a small set of Python scripts
called [`mknet`](https://git.deuxfleurs.fr/Deuxfleurs/mknet) tailored to our
needs[^ref1]. Most of the following tests were run locally with `mknet` on a
single computer: a Dell Inspiron 27" 7775 AIO, with a Ryzen 5 1400, 16GB of
RAM and a 512GB SSD. In terms of software, NixOS 22.05 with the 5.15.50 kernel is
used with an encrypted ext4 filesystem. The `vm.dirty_background_ratio` and
`vm.dirty_ratio` have been reduced to `2` and `1` respectively: with the default
values, the system tends to freeze under heavy I/O load.

## Efficient I/O

The main purpose of an object storage system is to store and retrieve objects
across the network, and the faster these two functions can be accomplished,
the more efficient the system as a whole will be. For this analysis, we focus on
2 aspects of performance. First, since many applications can start processing a file
before receiving it completely, we will evaluate the time-to-first-byte (TTFB)
on `GetObject` requests, i.e. the duration between the moment a request is sent
and the moment the first bytes of the returned object are received by the client.
Second, we will evaluate generic throughput, to understand how well
Garage can leverage the underlying machine's performance.
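As a reference for what this TTFB definition means in practice, a minimal probe might look like the sketch below. This is not the s3ttfb tool used later in this post, just an illustration; it assumes a boto3 `s3` client like the one sketched earlier:

```python
import time

def ttfb(s3, bucket: str, key: str) -> float:
    """Seconds between sending GetObject and receiving the first body byte."""
    start = time.monotonic()
    resp = s3.get_object(Bucket=bucket, Key=key)
    resp["Body"].read(1)  # block until the first byte actually arrives
    return time.monotonic() - start
```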
**Time-to-First-Byte** - One specificity of Garage is that we implemented S3
web endpoints, with the idea of making it a platform of choice for publishing
static websites. When publishing a website, TTFB can be directly observed
by the end user, as it impacts the perceived reactivity of the page being loaded.

Up to version 0.7.3, the time-to-first-byte on Garage used to be relatively high.
This can be explained by the fact that Garage was not able to handle data internally
at a smaller granularity than entire data blocks, which are up to 1MB chunks of a given object
(a size which [can be configured](https://garagehq.deuxfleurs.fr/documentation/reference-manual/configuration/#block-size)).
Let us take the example of a 4.5MB object, which Garage will split by default into four 1MB blocks and one 0.5MB block.
With the old design, when you sent a `GET`
request, the first block had to be _fully_ retrieved by the gateway node from the
storage node before it started to send any data to the client.

With Garage v0.8, we added data streaming logic that allows the gateway
to send the beginning of a block without having to wait for the full block to be received from
the storage node. We can visually represent the difference as follows:

<center>
<img src="schema-streaming.png" alt="A schema depicting how streaming improves the delivery of a block" />
</center>

As our default block size is only 1MB, the difference should be marginal on
fast networks: it takes only 8ms to transfer 1MB on a 1Gbps network,
adding at most 8ms of latency to a `GetObject` request (assuming no other
data transfer is happening in parallel). However,
on a very slow network, or on a very congested link with many parallel requests
being handled, the impact can be much more important: on a 5Mbps network, it takes at least 1.6 seconds
to transfer our 1MB block, and streaming will heavily improve user experience.
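The arithmetic above, spelled out (using 1 MB = 10^6 bytes):

```python
bits = 1_000_000 * 8          # 1 MB expressed in bits
print(bits / 1_000_000_000)   # 0.008 s, i.e. 8 ms on a 1 Gbps link
print(bits / 5_000_000)       # 1.6 s on a 5 Mbps link
```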
We wanted to see if this theory holds in practice: we simulated a low-latency
but slow network using `mknet` and did some requests with block streaming (Garage v0.8 beta) and
without it (Garage v0.7.3). We also added Minio as a reference. To
benchmark this behavior, we wrote a small test named
[s3ttfb](https://git.deuxfleurs.fr/Deuxfleurs/mknet/src/branch/main/benchmarks/s3ttfb),
whose results are shown on the following figure:

![Plot showing the TTFB observed on Garage v0.8, v0.7 and Minio](ttfb.png)

Garage v0.7, which does not support block streaming, gives us a TTFB between 1.6s
and 2s, which matches the time required to transfer the full block, as we calculated above.
On the other side of the plot, we can see Garage v0.8 with a very low TTFB thanks to the
streaming feature (the lowest value is 43ms). Minio sits between the two
Garage versions: we suppose that it does some form of batching, but smaller
than our initial 1MB default.

**Throughput** - As soon as we publicly released Garage, people started
benchmarking it, comparing its performance to writing directly to the
filesystem, and observed that Garage was slower (e.g.
[#288](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/288)). To improve the
situation, we did some optimizations, such as putting costly processing like hashing on a dedicated thread,
and many others
([#342](https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/342),
[#343](https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/343)), which led us to
version 0.8 "Beta 1". We also noticed that some of the logic we wrote
to better control resource usage
and detect errors, including semaphores and timeouts, was artificially limiting
performance. In another iteration, we made this logic less restrictive at the
cost of higher resource consumption under load
([#387](https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/387)), resulting in
version 0.8 "Beta 2". Finally, we currently do multiple `fsync` calls each time we
write a block. We know that this is expensive and did a test build without any
`fsync` call ([see the
commit](https://git.deuxfleurs.fr/Deuxfleurs/garage/commit/432131f5b8c2aad113df3b295072a00756da47e7))
that will not be merged, only to assess the impact of `fsync`. We refer to it
as `no-fsync` in the following plot.

*A note about `fsync`: for performance reasons, operating systems often do not
write directly to the disk when a process creates or updates a file in your
filesystem. Instead, the write is kept in memory and flushed later in a batch
with other writes. If a power loss occurs before the OS has time to flush
data to disk, some writes will be lost. To ensure that a write is effectively
written to disk, the
[`fsync(2)`](https://man7.org/linux/man-pages/man2/fsync.2.html) system call must be used,
which effectively blocks until the file or directory on which it is called has been flushed from volatile
memory to the persistent storage device. Additionally, the exact semantics of
`fsync` [differ from one OS to another](https://mjtsai.com/blog/2022/02/17/apple-ssd-benchmarks-and-f_fullsync/)
and, even on battle-tested software like Postgres, it was
["done wrong for 20 years"](https://archive.fosdem.org/2019/schedule/event/postgresql_fsync/).
Note that on Garage, we are still working on our `fsync` policy and thus, for
now, you should expect limited data durability in case of power loss, as we are
aware of some inconsistencies on this point (which we describe in the following
and plan to solve).*
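To illustrate what `fsync` buys (and costs), here is the classic durable-write pattern, sketched in Python; any storage system that promises durability has to do something equivalent for each write:

```python
import os

def durable_write(path: str, data: bytes) -> None:
    """Write data so that it survives a power loss right after return."""
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())       # block until the bytes reach the disk
    os.rename(tmp, path)           # atomic replacement on POSIX filesystems
    dirfd = os.open(os.path.dirname(path) or ".", os.O_DIRECTORY)
    try:
        os.fsync(dirfd)            # persist the directory entry as well
    finally:
        os.close(dirfd)
```

Each `fsync` is a full round-trip to the storage device, which is exactly why the `no-fsync` build above is so much faster, and so much less safe.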
To assess performance improvements, we used the benchmark tool
[minio/warp](https://github.com/minio/warp) in a non-standard configuration,
adapted for small-scale tests, and we kept only the aggregated result named
"cluster total". The goal of this experiment is to get an idea of the cluster's
performance with a standardized and mixed workload.

![Plot showing IO performances of Garage configurations and Minio](io.png)

Minio, our reference point, gives us the best performance in this test.
Looking at Garage, we observe that each improvement we made had a visible
impact on performance. We also note that we still have a margin for progress
compared to Minio: additional benchmarks, tests, and
monitoring could help us better understand the remaining gap.

## A myriad of objects

Object storage systems do not handle a single object but huge numbers of them:
Amazon claims to handle trillions of objects on their platform, and Red Hat
touts Ceph as being able to handle 10 billion objects. All these
objects must be tracked efficiently in the system to be fetched, listed,
removed, etc. In Garage, we use a "metadata engine" component to track them.
For this analysis, we compare different metadata engines in Garage and see how
well the best one scales to a million objects.

**Testing metadata engines** - With Garage, we chose not to store metadata
directly on the filesystem, like Minio for example, but in a specialized on-disk
B-tree data structure; in other words, in an embedded database engine. Until now,
the only supported option was [sled](https://sled.rs/), but we started having
serious issues with it - and we were not alone
([#284](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/284)). With Garage
v0.8, we introduced an abstraction layer over the features we expect from our
database, allowing us to switch from one metadata back-end to another without touching
the rest of our codebase. We added two additional back-ends: LMDB
(through [heed](https://github.com/meilisearch/heed)) and SQLite
(using [Rusqlite](https://github.com/rusqlite/rusqlite)). **Keep in mind that they
are both experimental: contrary to sled, we have yet to run them in production
for a significant time.**

Similarly to the impact of `fsync` on block writing, each database engine we use
has its own `fsync` policy. Sled flushes its writes every 2 seconds by
default (this is
[configurable](https://garagehq.deuxfleurs.fr/documentation/reference-manual/configuration/#sled-flush-every-ms)).
LMDB defaults to an `fsync` on each write, which in early tests led to
abysmal performance. We thus added 2 flags,
[MDB\_NOSYNC](http://www.lmdb.tech/doc/group__mdb__env.html#ga5791dd1adb09123f82dd1f331209e12e)
and
[MDB\_NOMETASYNC](http://www.lmdb.tech/doc/group__mdb__env.html#ga5021c4e96ffe9f383f5b8ab2af8e4b16),
to deactivate `fsync` entirely. On SQLite, it is also possible to deactivate `fsync` with
`pragma synchronous = off`, but we have not started any optimization work on it yet:
our SQLite implementation currently still calls `fsync` for all write operations. Additionally, we are
using these engines through Rust bindings that do not support async Rust,
with which Garage is built, which has an impact on performance as well.
**Our comparison will therefore not reflect the raw performances of
these database engines, but instead our integration choices.**
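For illustration, the pragma mentioned above can be observed with Python's built-in `sqlite3` module; the schema here is made up and has nothing to do with Garage's actual tables:

```python
import sqlite3

# With synchronous=OFF, SQLite stops syncing to disk on each transaction,
# trading durability for write throughput. (Our Garage integration does
# NOT do this yet, which is part of why it performs poorly below.)
db = sqlite3.connect("metadata.db")
db.execute("PRAGMA synchronous = OFF")
db.execute("CREATE TABLE IF NOT EXISTS objects (key TEXT PRIMARY KEY, meta BLOB)")
with db:
    db.execute("INSERT OR REPLACE INTO objects VALUES (?, ?)", ("k", b"v"))
```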
|
|
||||||
Still, we think it makes sense to evaluate our implementations in their current
|
|
||||||
state in Garage. We designed a benchmark that is intensive on the metadata part
|
|
||||||
of the software, i.e. handling large numbers of tiny files. We chose again
|
|
||||||
`minio/warp` as a benchmark tool, but we
|
|
||||||
configured it with the smallest possible object size it supported, 256
|
|
||||||
bytes, to put pressure on the metadata engine. We evaluated sled twice:
|
|
||||||
with its default configuration, and with a configuration where we set a flush
|
|
||||||
interval of 10 minutes (longer than the test) to disable `fsync`.
|
|
||||||
|
|
||||||
*Note that S3 has not been designed for workloads that store huge numbers of small objects;
|
|
||||||
a regular database, like Cassandra, would be more appropriate. This test has
|
|
||||||
only been designed to stress our metadata engine and is not indicative of
|
|
||||||
real-world performances.*
|
|
||||||
|
|
||||||
![Plot of our metadata engines comparison with Warp](db_engine.png)
|
|
||||||
|
|
||||||
Unsurprisingly, we observe abysmal performances with SQLite, as it is the engine we have not worked on yet and that still does an `fsync` for each write. Garage with the `fsync`-disabled LMDB back-end performs twice as fast as with sled in its default version, and 60% better than the "no `fsync`" sled version in our benchmark. Furthermore, and not depicted on these plots, LMDB uses way less disk storage and RAM; we would like to quantify that in the future. As we are only at the very beginning of our work on metadata engines, it is hard to draw strong conclusions. Still, we can say that SQLite is not ready for production workloads, and that LMDB looks very promising both in terms of performance and resource usage, making it a very good candidate for Garage's default metadata engine in future releases, once we figure out the proper `fsync` tuning. In the future, we will also need to define a data policy for Garage to help us arbitrate between performance and durability.

*To `fsync` or not to `fsync`? Performance is nothing without reliability, so we need to better assess the impact of possibly losing a write after it has been validated. Because Garage is a distributed system, even if a node loses its write due to a power loss, it will fetch it back from the 2 other nodes that store it. But rare situations can occur where 1 node is down and the 2 others validate the write and then lose power before having time to flush to disk. What is our policy in this case? For storage durability, we already assume that we never lose the storage of more than 2 nodes, so should we also make the hypothesis that we won't lose power on more than 2 nodes at the same time? What should we do about people hosting all of their nodes in the same place without an uninterruptible power supply (UPS)? Historically, it seems that Minio developers also accepted some compromises on this side ([#3536](https://github.com/minio/minio/issues/3536), [HN discussion](https://news.ycombinator.com/item?id=28135533)). Now, they seem to use a combination of `O_DSYNC` and `fdatasync(3p)` - a variant of `fsync` that ensures only data, and not metadata, is persisted on disk - in combination with `O_DIRECT` for direct I/O ([discussion](https://github.com/minio/minio/discussions/14339#discussioncomment-2200274), [example in Minio source](https://github.com/minio/minio/blob/master/cmd/xl-storage.go#L1928-L1932)).*

**Storing a million objects** - Object storage systems are designed not only for data durability and availability but also for scalability, so naturally, some people asked us how scalable Garage is. While giving a definitive answer to this question is out of the scope of this study, we wanted to be sure that our metadata engine would be able to scale to a million objects. To put this target in context, it remains small compared to other industrial solutions: Ceph claims to scale up to [10 billion objects](https://www.redhat.com/en/resources/data-solutions-overview), which is 4 orders of magnitude more than our current target. Of course, their benchmarking setup has nothing in common with ours, and their tests are way more exhaustive.

We wrote our own benchmarking tool for this test, [s3billion](https://git.deuxfleurs.fr/Deuxfleurs/mknet/src/branch/main/benchmarks/s3billion)[^ref2]. The benchmark procedure consists of concurrently sending a defined number of tiny objects (8192 objects of 16 bytes by default) and measuring the wall clock time until the last object upload completes. This step is then repeated a given number of times (128 by default) to effectively create a target number of objects on the cluster (1M by default). On our local setup with 3 nodes, both Minio and Garage with LMDB were able to achieve this target. In the following plot, we show how much time it took Garage and Minio to handle each batch.

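The core of the procedure can be summarized by the following sketch (a simplified illustration, not the actual `s3billion` code; the endpoint, credentials, and `test` bucket are assumptions to adapt to your own cluster):

```python
import time
from concurrent.futures import ThreadPoolExecutor

import boto3

# Hypothetical endpoint and credentials: adapt to your own cluster.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:3900",
    aws_access_key_id="GK...",
    aws_secret_access_key="...",
)

BATCHES, BATCH_SIZE, BODY = 128, 8192, b"0123456789abcdef"  # 16 bytes

with ThreadPoolExecutor(max_workers=256) as pool:
    for batch in range(BATCHES):
        start = time.monotonic()
        # Send one batch of tiny objects concurrently...
        futures = [
            pool.submit(s3.put_object, Bucket="test",
                        Key=f"obj-{batch}-{i}", Body=BODY)
            for i in range(BATCH_SIZE)
        ]
        for f in futures:
            f.result()  # ...and wait until the last upload completes.
        print(f"batch {batch}: {time.monotonic() - start:.2f}s")
```
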
Before looking at the plot, **you must keep in mind some important points regarding the internals of both Minio and Garage**.

Minio has no metadata engine: it stores its objects directly on the filesystem. Sending 1 million objects to Minio results in creating one million inodes on the storage server in our current setup, so the performance of the filesystem probably has a substantial impact on the observed results. In our particular setup, we know that the filesystem we used is not adapted at all to Minio (encryption layer, fixed number of inodes, etc.). Additionally, we mentioned earlier that we deactivated `fsync` for our metadata engine in Garage, whereas Minio has some `fsync` logic here that slows down the creation of objects. Finally, object storage is designed for big objects, for which the costs measured here are negligible. In the end, again, we use Minio only as a reference point to understand what performance budget we have for each part of our software.

Conversely, Garage has an optimization for small objects: below 3KB, a separate file is not created on the filesystem, and the object is instead stored inline in the metadata engine. In the future, we plan to evaluate how Garage behaves at scale with objects above 3KB, where we expect performance to be way closer to Minio's, as Garage will then have to create at least one inode per object. For now, we limit ourselves to evaluating our metadata engine and focus only on 16-byte objects.

![Showing the time to send 128 batches of 8192 objects for Minio and Garage](1million-both.png)

It appears that the performances of our metadata engine are acceptable, as we have a comfortable margin compared to Minio (Minio is between 3 and 4 times slower per batch). We also note that, past the 200k-object mark, Minio's time to complete a batch of inserts is constant, while on Garage it still increases over the observed range. It could be interesting to know whether Garage's batch completion time would cross Minio's for a very large number of objects. If we reason per object, both Minio's and Garage's performances remain very good: it takes around 20ms and 5ms respectively to create an object. In a real-world scenario, at 100 Mbps, the upload of a 10MB file takes 800ms, and goes up to 8 seconds for a 100MB file: in both cases, handling the object metadata would represent only a fraction of the upload time. The only case where the difference would be noticeable would be when uploading a lot of very small files at once, which again would be an unusual usage of the S3 API.

Let us now focus on Garage's metrics only, to better see its specific behavior:

![Showing the time to send 128 batches of 8192 objects for Garage only](1million.png)

Two effects are now more visible: first, batch completion time increases with the number of objects in the bucket, and second, measurements are scattered, at least more than for Minio. We expected this batch completion time increase to be logarithmic, but we don't have enough data points to confidently conclude that it is: additional measurements are needed. Concerning the observed instability, it could be a symptom of what we saw in some other experiments on this setup, which sometimes freezes under heavy I/O load. Such freezes could lead to request timeouts and failures. If this occurs on our testing computer, it might occur on other servers as well: it would be interesting to better understand this issue, document how to avoid it, and potentially change how we handle I/O internally in Garage. But still, this was a very heavy test that will probably not be encountered in many setups: we were adding 273 objects per second for 30 minutes straight!

To conclude this part, Garage can ingest 1 million tiny objects while remaining usable on our local setup. To put this result in perspective, our production cluster at [deuxfleurs.fr](https://deuxfleurs.fr) smoothly manages a bucket with 116k objects. This bucket contains real-world production data: it is used by our Matrix instance to store people's media files (profile pictures, shared pictures, videos, audio files, documents...). Thanks to this benchmark, we have identified two points of vigilance: the increase of batch insert time with the number of existing objects in the cluster over the observed range, and the volatility in our measured data that could be a symptom of our system freezing under the load. Despite these two points, we are confident that Garage could scale way above 1M objects, although that remains to be proven.

## In an unpredictable world, stay resilient

Supporting a variety of real-world networks and computers, especially ones that were not designed for software-defined storage or even for server purposes, is the core value proposition of Garage. For example, our production cluster is hosted [on refurbished Lenovo Thinkcentre Tiny desktop computers](https://guide.deuxfleurs.fr/img/serv_neptune.jpg) behind consumer-grade fiber links across France and Belgium (if you are reading this, congratulations, you fetched this webpage from it!). That's why we are very careful that our internal protocol (referred to as the "RPC protocol" in our documentation) remains as lightweight as possible. For this analysis, we quantify how network latency and the number of nodes in the cluster impact the duration of the most important kinds of S3 requests.

**Latency amplification** - With the kind of networks we use (consumer-grade fiber links across the EU), the observed latency between nodes is in the 50ms range. When latency is not negligible, you will observe that request completion time becomes a multiple of the observed latency. That's to be expected: in many cases, the node of the cluster you are contacting cannot directly answer your request, and has to reach other nodes of the cluster to get the data. Each of these sequential remote procedure calls - or RPCs - adds to the final S3 request duration, which can quickly become expensive. This ratio between request duration and network latency is what we refer to as *latency amplification*.

For example, on Garage, a `GetObject` request does two sequential calls: first, it fetches the descriptor of the requested object from the metadata engine, which contains a reference to the first block of data, and only then, in a second step, can it start retrieving data blocks from storage nodes. We can therefore expect that the duration of a small `GetObject` request will be close to twice the network latency.

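To make this concrete: with a 50ms inter-node latency, a small `GetObject` should take roughly 100ms. This is also essentially the kind of measurement that our `s3lat` tool (introduced below) performs; a minimal sketch of such a single-request timing loop, assuming boto3, a hypothetical local endpoint, and a pre-existing tiny object, could look like:

```python
import time

import boto3

# Hypothetical endpoint and credentials: adapt to your own cluster.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:3900",
    aws_access_key_id="GK...",
    aws_secret_access_key="...",
)

# One request at a time, on a tiny object: we measure latency, not bandwidth.
samples = []
for _ in range(100):
    start = time.monotonic()
    s3.get_object(Bucket="test", Key="tiny-object")["Body"].read()
    samples.append(time.monotonic() - start)

median = sorted(samples)[len(samples) // 2]
print(f"median GetObject latency: {median * 1000:.1f} ms")
```
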
We tested the latency amplification theory with another benchmark of our own, named [s3lat](https://git.deuxfleurs.fr/Deuxfleurs/mknet/src/branch/main/benchmarks/s3lat), which does a single request at a time on an endpoint and measures the response time. As we are interested in latency and not bandwidth, all our requests involving objects are made on tiny files of around 16 bytes. Our benchmark tests 5 standard endpoints of the S3 API: ListBuckets, ListObjects, PutObject, GetObject and RemoveObject. Here are the results:

![Latency amplification](amplification.png)

As Garage has been optimized for this use case from the very beginning, we don't see any significant evolution from one version to another (Garage v0.7.3 and Garage v0.8.0 Beta 1 here). Compared to Minio, these values are either similar (for ListObjects and ListBuckets) or significantly better (for GetObject, PutObject, and RemoveObject). This is easily explained by the fact that Minio has not been designed with high-latency environments in mind; instead, it is expected to run on clusters that are built in a single data center. In a multi-DC setup, different clusters could then possibly be interconnected with their asynchronous [bucket replication](https://min.io/docs/minio/linux/administration/bucket-replication.html?ref=docs-redirect) feature.

*Minio also has a [multi-site active-active replication system](https://blog.min.io/minio-multi-site-active-active-replication/) but it is even more sensitive to latency: "Multi-site replication has increased latency sensitivity, as Minio does not consider an object as replicated until it has synchronized to all configured remote targets. Replication latency is therefore dictated by the slowest link in the replication mesh."*

**A cluster with many nodes** - Whether you already have many compute nodes with unused storage, need to store a lot of data, or are experimenting with unusual system architectures, you might be interested in deploying over a hundred Garage nodes. However, in some distributed systems, the number of nodes in the cluster will have an impact on performance. Theoretically, our protocol, which is inspired by distributed hash tables (DHT), should scale fairly well, but until now, we had never taken the time to test it with a hundred nodes or more.

This test was run directly on Grid5000, with 6 physical servers spread across 3 locations in France: Lyon, Rennes, and Nantes. On each server, we ran up to 65 instances of Garage simultaneously, for a total of 390 nodes. The network between physical servers is the dedicated network provided by the Grid5000 community. Nodes on the same physical machine communicate directly through the Linux network stack without any limitation. We are aware that this is a weakness of this test, but we still think it is relevant because, at each step of the test, each instance of Garage has 83% (5/6) of its connections established over a real network. To measure performances for each cluster size, we used [s3lat](https://git.deuxfleurs.fr/Deuxfleurs/mknet/src/branch/main/benchmarks/s3lat) again:

![Impact of response time with bigger clusters](complexity.png)

Up to 250 nodes, we observed response times that remain constant. After this threshold, results become very noisy. By looking at the servers' resource usage, we saw that their load started to become non-negligible: it seems that we are not hitting a limit on the protocol side, but have simply exhausted the resources of our testing nodes. In the future, we would like to run this experiment again, but on many more physical nodes, to confirm our hypothesis. For now, we are confident that a Garage cluster with 100+ nodes should work.

## Conclusion and Future work

During this work, we identified some sensitive points in Garage on which we will have to keep working: our data durability target and our interaction with the filesystem (`O_DSYNC`, `fsync`, `O_DIRECT`, etc.) are not yet homogeneous across our components; our new metadata engines (LMDB, SQLite) still need some testing and tuning; and we know that raw I/O performance (GetObject and PutObject for large objects) still has a small margin for improvement.

At the same time, Garage has never been in better shape: its next version (version 0.8) will see drastic improvements in terms of performance and reliability. We are confident that Garage is already able to cover a wide range of deployment needs, up to over a hundred nodes and millions of objects.

In the future, on the performance side, we would like to evaluate the impact of introducing an SRPT scheduler ([#361](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/361)), define a data durability policy and implement it, make a deeper and broader review of the state of the art (Minio, Ceph, Swift, OpenIO, Riak CS, SeaweedFS, etc.) to learn from them and, lastly, benchmark Garage at scale with possibly multiple terabytes of data and billions of objects on long-lasting experiments.

In the meantime, stay tuned: we have released [a first release candidate for Garage v0.8](https://git.deuxfleurs.fr/Deuxfleurs/garage/releases/tag/v0.8.0-rc1), and are already working on several features for the next version. For instance, we are working on a new layout that will have enhanced optimality properties, as well as a theoretical proof of correctness ([#296](https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/296)). We are also working on a Python SDK for Garage's administration API ([#379](https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/379)), and we will soon officially introduce a new API (as a technical preview) named K2V ([see the K2V page in our documentation for a primer](https://garagehq.deuxfleurs.fr/documentation/reference-manual/k2v/)).

## Notes

[^ref1]: Yes, we are aware of [Jepsen](https://github.com/jepsen-io/jepsen)'s existence. Jepsen is far more complex than our set of scripts, but it is also way more versatile.

[^ref2]: The program name contains the word "billion", although we only tested Garage up to 1 million objects: this is not a typo, we were just a little bit too enthusiastic when we wrote it ;)

<style>
.footnote-definition p { display: inline; }
</style>

@ -1,143 +0,0 @@

+++
title="Garage v0.7: Kubernetes and OpenTelemetry"
date=2022-04-04
+++

*We just published Garage v0.7, our second public beta release. In this post, we do a quick tour of its 2 new features: Kubernetes integration and OpenTelemetry support.*

<!-- more -->

---

Two months ago, we were impressed by the success of our open beta launch at FOSDEM and on Hacker News: [our initial post](https://garagehq.deuxfleurs.fr/blog/2022-introducing-garage/) led to more than 40k views in 10 days, peaking at 100 views/minute, and all requests were served by Garage, without even using a caching frontend! Since this event, we have continued to improve Garage, and, 2 months after the initial release, we are happy to announce version 0.7.0.

But first, we would like to thank the contributors that made this new release possible: Alex, Jill, Max Audron, Maximilien, Quentin, Rune Henrisken, Steam, and trinity-1686a. This is also the first time we welcome contributors external to the core team, and as we wish for Garage to be a community-driven project, we encourage this!

You can get this release using our binaries or the package provided by your distribution. We ship [statically compiled binaries](https://garagehq.deuxfleurs.fr/download/) for most common Linux architectures (amd64, i386, aarch64 and armv6) and associated [Docker containers](https://hub.docker.com/u/dxflrs). Garage is now also packaged by third parties on some OS/distributions. We are currently aware of [FreeBSD](https://cgit.freebsd.org/ports/tree/www/garage/Makefile) and [AUR for Arch Linux](https://aur.archlinux.org/packages/garage). Feel free to [reach out to us](mailto:garagehq@deuxfleurs.fr) if you are packaging (or planning to package) Garage; we welcome maintainers and will upstream specific patches if that can help. If you have already packaged Garage, please inform us and we'll add it to the documentation.

Speaking of the changes in this new version, it obviously includes many bug fixes. We listed them in our [changelogs](https://git.deuxfleurs.fr/Deuxfleurs/garage/releases), so take a look, we might have fixed some issues you were having! Besides bug fixes, there are two new major features in this release: better integration with Kubernetes, and support for observability via OpenTelemetry.

## Kubernetes integration

Before Garage v0.7.0, you had to deploy a Consul cluster or spawn a "coordinating" pod to deploy Garage on [Kubernetes](https://kubernetes.io) (K8S). In this new version, Garage integrates a method to discover other peers by using Kubernetes [Custom Resources](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/) (CR) to simplify cluster discovery.

CR discovery can be quickly enabled in Garage by configuring which namespace to look in (`kubernetes_namespace`) and the name of the desired service (`kubernetes_service_name`) in your Garage configuration file:

```toml
kubernetes_namespace = "default"
kubernetes_service_name = "garage-daemon"
```

Custom Resources must be defined *a priori* with a [Custom Resource Definition](https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/) (CRD). If the CRD does not exist, Garage will create it for you. Automatic CRD creation is enabled by default, but it requires giving additional permissions to Garage to work. If you prefer to strictly control access to your K8S cluster, you can create the resource manually and prevent Garage from automatically creating it:

```toml
kubernetes_skip_crd = true
```

If you want to try Garage on K8S, we currently only provide some basic [example files](https://git.deuxfleurs.fr/Deuxfleurs/garage/src/commit/7e1ac51b580afa8e900206e7cc49791ed0a00d94/script/k8s). These files register a [ConfigMap](https://kubernetes.io/docs/concepts/configuration/configmap/), a [ClusterRoleBinding](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#rolebinding-and-clusterrolebinding), and a [StatefulSet](https://kubernetes.io/fr/docs/concepts/workloads/controllers/statefulset/) with a [Persistent Volume](https://kubernetes.io/docs/concepts/storage/persistent-volumes/).

Once these files are deployed, you will be able to interact with Garage as follows:

```bash
kubectl exec -it garage-0 --container garage -- /garage status
# ==== HEALTHY NODES ====
# ID      Hostname  Address          Tags  Zone  Capacity
# e628..  garage-0  172.17.0.5:3901  NO ROLE ASSIGNED
# 570f..  garage-2  172.17.0.7:3901  NO ROLE ASSIGNED
# e199..  garage-1  172.17.0.6:3901  NO ROLE ASSIGNED
```

You can then follow the [regular documentation](https://garagehq.deuxfleurs.fr/documentation/cookbook/real-world/#creating-a-cluster-layout) to complete the configuration of your cluster.

If you target a production deployment, you should avoid binding admin rights to your cluster to create Garage's CRD. You will also need to expose some [Services](https://kubernetes.io/docs/concepts/services-networking/service/) to make your cluster reachable. Also keep in mind that Garage is a stateful service, so you must be very careful with how you handle your data in Kubernetes in order not to lose it. In the near future, we plan to release a proper Helm chart and write "best practices" in our documentation.

If Kubernetes is not your thing, know that we are running Garage on a Nomad+Consul cluster, which is also well supported. We have not documented it yet, but you can take a look at [our Nomad service](https://git.deuxfleurs.fr/Deuxfleurs/infrastructure/src/commit/1e5e4af35c073d04698bb10dd4ad1330d6c62a0d/app/garage/deploy/garage.hcl).

## OpenTelemetry support

[OpenTelemetry](https://opentelemetry.io/) standardizes how software generates and collects system telemetry information, namely metrics, logs, and traces. By implementing this standard in Garage, we hope that it will help you better monitor, manage and tune your cluster. Note that to fully leverage this feature, you must already be familiar with monitoring stacks like [Prometheus](https://prometheus.io/)+[Grafana](https://grafana.com/) or [ElasticSearch](https://www.elastic.co/elasticsearch/)+[Kibana](https://www.elastic.co/kibana/).

To activate OpenTelemetry on Garage, add the following entries to your configuration file (supposing that your collector also runs on localhost):

```toml
[admin]
api_bind_addr = "127.0.0.1:3903"
trace_sink = "http://localhost:4317"
```

The first line, `api_bind_addr`, instructs Garage to expose an HTTP endpoint from which metrics can be obtained in Prometheus' data format. The second line, `trace_sink`, instructs Garage to export tracing information to an OpenTelemetry collector at the given address. These two options work independently and you can use them separately, depending on whether you are interested only in metrics, only in traces, or in both.

We provide [some files](https://git.deuxfleurs.fr/Deuxfleurs/garage/src/branch/main/script/telemetry) to help you quickly bootstrap a testing monitoring stack. It includes a docker-compose file and a pre-configured Grafana dashboard. You can use them if you want to reproduce the following examples.

Grafana is particularly adapted to understanding how your cluster is performing from a "bird's eye view". For example, the following graph shows S3 API calls sent to your node per time unit. You can use it to better understand how your users are interacting with your cluster.

![A screenshot of a plot made by Grafana depicting the number of requests per time units grouped by endpoints](api_rate.png)

Thanks to this graph, we know that starting at 14:55, a large upload was started. This upload consists of many small files, as we see many PutObject calls, which are often used for small files. It also contains some large objects, as we observe some multipart upload requests. Conversely, at this time, no reads are done, as the corresponding read endpoints (ListBuckets, ListObjectsV2, etc.) receive 0 requests per time unit.

Garage also collects metrics from lower-level parts of the system. You can use them to better understand how Garage is interacting with your OS and your hardware.

![A screenshot of a plot made by Grafana depicting the write speed (in MB/s) during test time.](writes.png)

This plot was captured at the same moment as the previous one. We do not see a correlation between the writes and the API requests for the full duration of the upload, but only for its beginning. More precisely, it maps well to the multipart upload requests, and this is expected: large files (from the multipart uploads) saturate your disk's writes, while the upload of small files (via the PutObject endpoint) is limited by other parts of the system.

This simple example covers only 2 metrics out of the 20+ that we have already defined, but it still allowed us to precisely describe our cluster usage and identify where bottlenecks could be. We are confident that cleverly using these metrics on a production cluster will give you many more valuable insights into your cluster.

While metrics are good for having a broad, general overview of your system, they are not adapted for digging into and pinpointing a specific performance issue on a specific code path. Thankfully, we also have a solution for this problem: tracing.

Using [Application Performance Monitoring](https://www.elastic.co/observability/application-performance-monitoring) (APM) in conjunction with Kibana, we can get, for instance, the following visualization of what happens during a PutObject call (click to enlarge):

[![A screenshot of APM depicting the trace of a PutObject call](apm.png)](apm.png)

At the top of the screenshot, we see the latency distribution of all PutObject requests. We learn that the selected request took ~1ms to execute, while 95% of all requests took less than 80ms to run. Some dispersion between requests is expected, as Garage does not run on a hard real-time system, but in this case, you must also consider that a request's duration is impacted by the size of the object being sent (a 10B object will be quicker to process than a 10MB one). Consequently, this request probably corresponds to a very small file.

Below this first histogram, you can select the request you want to inspect, and then see its trace in the bottom part. The trace shown above can be broken down into 4 parts: fetching the API key to check authentication (`key get`), fetching the bucket identifier from its name (`bucket_alias get`), fetching the bucket configuration to check authorizations (`bucket_v2 get`), and finally inserting the object in the storage (`object insert`).

With this example, we demonstrated that we can inspect Garage internals to find slow requests, see which code path a request has taken, and finally identify which part of the code took time.

Keep in mind that this is our first iteration on telemetry for Garage, so things are a bit rough around the edges (step-by-step documentation is missing, our Grafana dashboard is a work in progress, etc.). In all cases, your feedback is welcome on our Matrix channel.

## Conclusion

This is only the first iteration of the Kubernetes and OpenTelemetry integrations in Garage, so things are still a bit rough. We plan to polish them in the coming months based on our experience and your feedback.

You may also wonder what other work we plan to conduct: stay tuned, we will soon release information on our roadmap! In the meantime, we hope you will enjoy using Garage v0.7.

@ -1,192 +0,0 @@

+++
title='Thoughts on "Leaderless Consensus"'
date=2023-11-30
+++

*Consensus algorithms such as Raft and Paxos, which are used in many distributed databases, have notoriously unpredictable performance in low-quality networks that suffer from latency, jitter, packet loss and/or unavailable nodes, which is why Garage does not use them and relies only on CRDTs. A new paper by Antoniadis et al., [*Leaderless Consensus*](https://www.sciencedirect.com/science/article/abs/pii/S0743731523000151), introduces a new category of algorithms that better tolerate the frequent unavailability of a subset of nodes. However, additional research and practical work are required before these results can be put into practice. Read on for more details.*

<!-- more -->

---

As I have said many times when presenting Garage, we have made a point of not using any consensus algorithm in Garage and using only CRDTs, for several reasons. The first, and most important, reason is that all of the consensus algorithms that we know of[^1] (in particular Raft, which is very popular in distributed databases) suffer from unpredictable performance when nodes or the network are unreliable. Even in relatively stable conditions, Raft-like algorithms can still be much slower than CRDTs (as we have shown in some [benchmarks](https://garagehq.deuxfleurs.fr/documentation/design/benchmarks/#on-a-complex-simulated-network)) because they elect a leader node and require all operations to pass through the leader, which can become a bottleneck. Beyond performance issues, Raft is a complex algorithm, and implementing it correctly is a challenging software engineering endeavor that we did not wish to undertake, preferring instead simplicity as a foundational principle to help us write correct software.

However, writing a distributed system such as Garage can be challenging when consensus is not available, as we can only use CRDTs (conflict-free replicated data types) in the code, and we cannot rely on state machine replication. This means that the specific semantics of CRDTs have to be taken into account everywhere in the code, which is often not a problem but sometimes adds some complexity. More importantly, it means that a whole class of features cannot be implemented in Garage, like those that would require some form of locking or exclusive access. In practice, this has been causing us issues with the CreateBucket endpoint, which by definition is meant to exclusively associate a bucket name with a newly created bucket. In current Garage versions, concurrent calls to CreateBucket with the same name may create several buckets and leave Garage in an inconsistent state.

This leads naturally to the following question: is it possible to implement a consensus algorithm that eschews the shortcomings of Raft-like algorithms in unreliable systems? And in particular, is it possible to implement a consensus algorithm that does not elect a leader, and is therefore not sensitive to temporary slowdowns or unavailabilities of individual nodes? A new paper by Antoniadis et al., [*Leaderless Consensus*](https://www.sciencedirect.com/science/article/abs/pii/S0743731523000151) [[PDF](/blog/2023-11-thoughts-on-leaderless-consensus/2023-Leaderless_consensus_JPDC.pdf)], suggests that the answer is *yes*. However, as with all new research, putting it into practice will take some time and a lot of work. In this article, I will discuss practical questions posed by the *Leaderless Consensus* paper, and further steps that could be taken to advance on these issues.

Please note that the entire content of this article is **purely speculative** and does not include any *positive results*. Note also that we are not discussing Byzantine-tolerant systems, which seem to be the main focus of *Leaderless Consensus*, even though the authors also propose an algorithm for non-Byzantine systems (the one we are interested in).

## Main takeaways of *Leaderless Consensus*

To be able to meaningfully say that an algorithm is *leaderless*, one first has to determine what *leaderless* precisely means. The paper starts by offering such a definition, using a network model the authors call *synchronous-k* ("synchronous minus *k*"), where *n* nodes run in synchronous steps and, at each step, at most *k* nodes might be offline, paused, or otherwise unavailable. The *synchronous-k* model has a variant called *eventually synchronous-k* which seems to better model the behaviour of WAN links on the Internet, although I am not sure of the precise difference between the two. Once the *synchronous-k* network model is defined, a leaderless consensus algorithm is simply defined as a consensus algorithm that still works (i.e. it terminates, giving a decision) in a *synchronous-1* system. Concretely, this means that at any given time, a random node in the network may be disconnected (not always the same one), and the consensus algorithm will be impacted only minimally. In other words, we can say that a leaderless consensus algorithm degrades gracefully in the presence of transient node failures. This "graceful degradation" property, which Raft does not have, seems to be exactly what we are looking for in a potential consensus algorithm that could be added to Garage.

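My informal reading of the model (written in my own notation, not the paper's) is the following:

```latex
% Processes p_1, ..., p_n advance in synchronous rounds r = 1, 2, ...
% Let F(r) be the set of processes that fail to take a step in round r.
% The synchronous-k model assumes:
\forall r \in \mathbb{N}, \quad |F(r)| \le k
% Leaderless consensus = the algorithm still terminates and decides when
% k = 1, i.e. one (possibly different) node may be missing at every round.
```
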
Having given this definition, the paper continues by offering concrete algorithms to implement leaderless consensus. Of particular interest to us, the paper presents in Section 5 a leaderless consensus algorithm, which the authors call OFT-Archipelago, which works in message-passing systems without Byzantine nodes, where the only faults that can occur are message omissions (like messages being dropped by the network, or temporary node crashes). This is exactly the premise made by Garage, so this algorithm could be a good candidate for us. Interestingly, while leaderless consensus is formally defined as a consensus algorithm that works in a *synchronous-1* system (i.e. tolerating only one failed node at each step), Archipelago works with up to *f < n/2* unavailable nodes at each time step.

According to the benchmarks in the leaderless consensus paper, while Archipelago has very good throughput (around 50 kops/s), the latency of individual operations is generally between 1 and 2 seconds. This seems acceptable for use in Garage if it is applied only to administrative operations on buckets and access keys, which are relatively rare. From a theoretical point of view, OFT-Archipelago can terminate in 3 RTT in the optimal scenario; however, it is not clear to me whether there is an upper bound on the termination time, or whether a probabilistic analysis of the termination delays could be made. The link between this algorithm and the FLP impossibility theorem is also not very clear to me: since Archipelago seems to do things that are forbidden by FLP, the premise of a *synchronous-k* system is probably in fact much stronger than the network asynchrony assumed by FLP.

Among the other advantages of OFT-Archipelago is the fact that the algorithm seems to be very simple, much more so than Raft, as it is described in the paper in only 42 lines of very understandable pseudocode. There is also a BFT variant of Archipelago, which is not of interest to us in the context of Garage, as we are making the hypothesis that all nodes are trusted.

## Where to go from now?

Before an algorithm such as OFT-Archipelago can be added to Garage, a few fundamental questions need to be answered, among which:

- How should Archipelago interact with Garage's use of CRDT data types? Do we have to create a fully separate subsystem for things that are managed under consensus, or can we hopefully share some logic? More precisely, can we use a consensus algorithm simply as a total order broadcast primitive that becomes a mandatory passing point for all modification requests on a set of metadata tables, with those tables still being based on the CRDT table replication and synchronisation library currently in use in Garage? In this situation, nodes that come back from a crash can simply catch up on old changes using the Merkle tree synchronisation algorithm that we already have. Or must we use the consensus algorithm as the only way to broadcast operations and data for the tables that are managed by it? This would mean that we must add specific logic to handle the case of a node coming back from a crash, where it must either download the whole log of operations since it was last up, or an entire snapshot of the metadata tables in question. I think this mostly depends on the reason why we want to add consensus, and on the exact consistency guarantees we expect it to provide.

- Can Archipelago be made correct under cluster reconfiguration scenarios? This is linked to the work done for task 3 of the 2023 NLnet project ([#495](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/495), [#667](https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/667)), which focuses on making the quorum-based algorithm for CRDT updates reliable even when the cluster layout is updated. I will be writing more about this topic in a future blog post, but in a nutshell, the NLnet task is mainly focused on maintaining read-after-write consistency in Garage at all times, which has led us to develop a relatively general framework for modeling algorithms based on quorums. Since Archipelago also guarantees its correctness using a non-empty-intersection-of-quorums property, it could benefit from the work that was originally done on quorums for the CRDT algorithms.

If we obtain satisfactory answers to these questions, the remaining work will be the technical implementation of Archipelago in Garage and its validation:

- Determine more precisely how the pipelined version of Archipelago is built, as its complete description is not given in the leaderless consensus paper, only a few basic pointers (Section 8.1 of the JPDC version).

- Implement Archipelago in Rust, ideally in the form of a generic reusable crate that could be used outside of the context of Garage.

- Benchmark Archipelago against existing Raft implementations (for instance the async-raft crate). We should benchmark the algorithms in the following scenarios: stable networking, high latency and jitter, and an evolving situation with different phases. My hypothesis is that Archipelago could be slower (in terms of latency, not necessarily in throughput) than Raft in the stable networking scenario, but the other two scenarios would force Raft to reconfigure often (i.e. change leaders), which could be the source of huge performance penalties that Archipelago would not suffer from.

- Integrate Archipelago with Garage to solve the CreateBucket issue.

- To validate our implementation, we would want to test it using automated testing frameworks such as Jepsen. I've been using Jepsen for the NLnet task 3 and I'm starting to understand quite well how it works, so this could be relatively easy.

- If we want to go further, there is always the possibility of formalizing a proof of our implementation; however, I don't know what the good tools for this are, and in any case it would be an extreme amount of work.

Please send your comments and feedback to [garagehq@deuxfleurs.fr](mailto:garagehq@deuxfleurs.fr) if you have any.

---

<sup id="1">1</sup>: We are concerned only with consensus algorithms in the context of closed, trusted systems such as distributed databases, and not in large trustless networks such as blockchains.

Written by [Alex Auvolat](https://adnab.me).

@ -1,281 +0,0 @@

+++
title="Maintaining read-after-write consistency in all circumstances"
date=2023-12-06
+++

*Garage is a data storage system that is based on CRDTs internally. It does not use a consensus algorithm such as Raft, therefore maintaining consistency in a cluster has to be done by other means. Since its inception, Garage has made use of read and write quorums to guarantee read-after-write consistency, the only consistency guarantee it provides. However, as of Garage v0.9.0, this guarantee is not maintained when the composition of a cluster is updated and data is moved between storage nodes. As part of our current NLnet-funded project, we are developing a solution to this problem. This blog post proposes a high-level overview of the proposed solution.*

<!-- more -->

---

Garage provides mainly one consistency guarantee, read-after-write for objects, which can be described as follows:

**Read-after-write consistency.** *If a client A writes an object x (e.g. using PutObject) and receives a `HTTP 200 OK` response, and later a client B tries to read object x (e.g. using GetObject), then B will read the version written by A, or a more recent version.*

The consistency guarantee offered by Garage is slightly more general than this simplistic formulation, as it also applies to other S3 endpoints such as ListObjects, which are always guaranteed to reflect the latest version of objects inserted in a bucket. Note that Amazon calls this guarantee [*strong* read-after-write consistency](https://aws.amazon.com/s3/consistency/) (they also have it on AWS), to differentiate it from [another definition of read-after-write consistency](https://avikdas.com/2020/04/13/scalability-concepts-read-after-write-consistency.html) that only applies to data that is read by the same client that wrote it. Since that weaker form is also called [read-your-writes](https://jepsen.io/consistency/models/read-your-writes), I will always be referring to the strong version when using the term "read-after-write consistency".

In Garage, this consistency guarantee at the level of objects in the S3 API is in fact a reflection of read-after-write consistency in the internal metadata engine (which is a distributed key/value store with CRDT values). Reads and writes to metadata tables use quorums of 2 out of 3 nodes for each operation, ensuring that if operation B starts after operation A has completed, then there is at least one node that handles both operations A and B. In the case where A is a write (an update) and B is a read, that node will have the opportunity to return the value written in A to the reading client B. A visual depiction of this process can be found in [this presentation](https://git.deuxfleurs.fr/Deuxfleurs/garage/src/commit/a8b0e01f88b947bc34c05d818d51860b4d171967/doc/talks/2023-09-20-ocp/talk.pdf) on slide 32 (pages 57-64), and the algorithm is written down on slide 33 (page 54).

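The underlying argument is the standard quorum-intersection one (written here in my own notation, not taken from the presentation):

```latex
% With n replicas, a write quorum of w acknowledgements and a read
% quorum of r responses, any read quorum R intersects any write
% quorum W as soon as r + w > n. Garage uses n = 3, r = w = 2, so:
|R \cap W| \ge r + w - n = 2 + 2 - 3 = 1
% hence at least one node in every read quorum has seen the last
% completed write.
```
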
Note that read-after-write guarantees [are broken and have always been](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/147) for metadata related to buckets and access keys, which might not be something we can fix due to different requirements on the quorums for the related metadata tables.

## Current issues with read-after-write consistency

Maintaining read-after-write consistency depends crucially on the intersection of the quorums being non-empty. There is however a scenario where these quorums may be empty: when the set of nodes assigned to storing some entries changes, for instance when nodes are added or removed and data is being rebalanced between nodes.

### A concrete example

Take the case of a partition (a subset of the data stored by Garage) which is stored on nodes A, B and C. At some point, a layout change occurs in the cluster, and after the change, nodes A, D and E are responsible for storing the partition. All read and write operations that were initiated before the layout change, or by nodes that were not yet aware of the new layout version, will be directed to nodes A, B and C, and will be handled by a quorum of two nodes among those three. However, once the new layout is introduced in the cluster, read and write operations will start being directed to nodes A, D and E, expecting a quorum of two nodes among this new set of three nodes.

Crucially, coordinating when operations start being directed to the new layout is a hard problem, and in all cases we must assume that due to some network asynchrony, some nodes may keep sending requests to nodes A, B and C for a long time even after everyone else is aware of the new layout. Moreover, data will be progressively moved from nodes B and C to nodes D and E, which can take a long time depending on the quantity of data. This creates a period of uncertainty as to where exactly the data is stored in the cluster. Overall, this basically means that this simplistic scheme gives us no way to guarantee the intersection-of-quorums property, which is necessary for read-after-write consistency.

Concretely, here is a very simple scenario in which read-after-write is broken:

1. A write operation is directed to nodes A, B and C (the old layout), and receives OK responses from nodes B and C, forming a quorum, so the write completes successfully. The written data then arrives at node A as well.

2. The new layout version is introduced in the cluster.

3. Before nodes D and E have had the chance to retrieve the data that was stored on nodes B and C, a read operation for the same key is directed to nodes A, D and E. D and E both return an OK response with no data (a null value), because they are not yet up-to-date. An answer from node A is not received in time. The two responses from nodes D and E, which contain no data, still form a quorum, so the read returns a null value instead of the value that was written before, even though the write operation reported a success.

### Evidencing the issue with Jepsen testing

The first thing that I had to do for the NLnet project was to develop a testing framework to show that read-after-write consistency issues could in fact arise in Garage when the cluster layout was updated. To make such tests, I chose to use the [Jepsen](https://jepsen.io/) testing framework, which helps us put distributed software in complex adverse scenarios and verify whether it respects some claimed consistency guarantees or not.

I will not go into too much detail on the testing procedure, but suffice to say that issues were found. More precisely, I was able to show that Garage *did* guarantee read-after-write in a variety of adverse scenarios such as network partitions, node crashes and clock scrambling, but that it was unable to do so as soon as regular layout updates were introduced.

The progress of the Jepsen testing work is tracked in [PR #544](https://git.deuxfleurs.fr/Deuxfleurs/garage/pulls/544).

## Fixing read-after-write consistency when layouts change

To solve this issue, we will have to keep track of several pieces of information in the cluster. We will also have to adapt our read/write quorums and our data transfer strategy during rebalancing to make sure that data can be found when it is requested.

First of all, we adapted Garage's code to be able to handle *several versions of the cluster layout* that can be live in the cluster at the same time, in order to keep track of multiple possible locations for data that is currently being transferred between nodes. When multiple cluster layout versions are live, write operations are directed to all of the nodes responsible for storing the data in all the live versions. This ensures that the nodes in the oldest live layout version always have an up-to-date view of the data, and that a read quorum among those nodes is always a safe way to ensure read-after-write consistency.

Nodes will progressively synchronize data so that the nodes in the newest live layout version catch up with data stored by nodes in the older live layout version. Once nodes in the newer layout versions also have an up-to-date view of the data, read operations will progressively start using a quorum of nodes in the new layout version instead of the old one.

Once all nodes are reading from newer layout versions, the oldest live versions can be pruned. This means that writes will stop being directed to those nodes, and the nodes will delete the data they were storing. Obviously, in the (very common) case where some nodes are both in the old and new layout versions, those nodes will not delete their data and will continue to receive writes.

### Performance impacts

When multiple layout versions are live, writes are sent to all nodes responsible for the partition of the requested key in all live layout versions, and will return OK only when they receive a quorum of OK responses for each of the live layout versions. This means that writes could be a bit slower while a layout change is being synchronized in the cluster. Typically, if only one node changes between the old and the new layout version, the write operation will wait for 3 responses out of 4 requests, instead of the classical 2 responses out of 3 requests.

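In other words (again in my own notation), the write quorum condition becomes a conjunction over live layout versions:

```latex
% Let V be the set of live layout versions, N_v the nodes storing the
% partition in version v, and A the set of nodes that acknowledged the
% write. The write completes when every live version has its own quorum:
\forall v \in V, \quad |A \cap N_v| \ge \lfloor |N_v| / 2 \rfloor + 1
% Example from the text, with one node changed: N_old = {A, B, C} and
% N_new = {A, B, D}. Any 3 acks among the 4 distinct nodes satisfy the
% condition, whereas 2 acks might not (e.g. A = {C, D}).
```
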

Reads, on the other hand, are still sent to only three nodes: the nodes of the
newest live layout version for which all nodes have completed a sync to catch
up on existing data. They only expect a quorum of 2 responses among these three
nodes, so reads always stay as performant as when no layout change is being
processed.
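
The read path can be sketched in the same spirit. Again, this is an invented
illustration rather than Garage's actual code: it simply selects the newest
live layout version whose sync has completed, whose three nodes are then the
only ones contacted.

```rust
// Illustrative sketch (not Garage's code): reads use the newest layout
// version that is fully synced, with a classical 2-out-of-3 quorum.

struct LayoutVersion {
    version: u64,
    nodes: Vec<&'static str>,
    /// True once all nodes have completed the metadata sync for this version.
    sync_done: bool,
}

fn read_version(live: &[LayoutVersion]) -> &LayoutVersion {
    live.iter()
        .filter(|v| v.sync_done)
        .max_by_key(|v| v.version)
        .expect("the oldest live version is always fully synced")
}

fn main() {
    let live = vec![
        LayoutVersion { version: 1, nodes: vec!["A", "B", "C"], sync_done: true },
        LayoutVersion { version: 2, nodes: vec!["A", "B", "D"], sync_done: false },
    ];
    // Version 2 is still syncing: reads keep using the three nodes of
    // version 1 and expect 2 OK responses, exactly as without a layout change.
    let v = read_version(&live);
    assert_eq!(v.version, 1);
    assert_eq!(v.nodes.len(), 3);
}
```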

### Ensuring that new nodes are up-to-date

An additional coordination mechanism is necessary for the data synchronization
procedure, to ensure that it is not started too early and that, once it
completes, the nodes in the new layout indeed contain an up-to-date view of
the data.

Indeed, imagine the following adverse scenario, which we want to avoid: a new
layout version is introduced in the cluster, and nodes immediately start
copying the data to the new nodes. However, some write operations that were
initiated before the new layout was introduced (or that were handled by a node
not yet aware of the new layout) could be delayed, so that the written data had
not yet reached the old nodes when they sent their copy of everything. When the
sync reports completion and read operations start being directed to nodes of
the new layout, the written data might be missing from the nodes handling the
read, and read-after-write consistency could be violated.

To avoid this situation, the synchronization operation is not initiated until
all cluster nodes have reported an "acknowledge" of the new layout version,
indicating that they have received it and that they are no longer processing
write operations addressed only to nodes of the previous layout versions. This
makes sure that no data will be missed by the sync: once the sync has started,
no more data can be written only to old layout versions, as all writes are also
directed to the new nodes. More precisely: all data that the source nodes of
the sync do not yet contain when the sync starts is written by a write
operation that is also directed at a quorum of nodes among the new ones. This
means that at the end of the sync, a read quorum among the new nodes will
necessarily return an up-to-date copy of all of the data.
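
As a minimal sketch of this gating condition (again with invented names, not
Garage's code): the sync towards a new layout version may only begin once the
version acknowledged by *every* node is at least the new one.

```rust
// Illustrative sketch: gate the metadata sync on all nodes having
// acknowledged the new layout version.
use std::collections::HashMap;

/// `ack` maps each node to the most recent layout version it has acknowledged,
/// i.e. the version whose nodes already receive all of its writes.
fn can_start_sync(ack: &HashMap<&str, u64>, new_version: u64) -> bool {
    // If one node still writes only to the old layout, starting the sync now
    // could miss its in-flight writes and break read-after-write consistency.
    ack.values().all(|&v| v >= new_version)
}

fn main() {
    let mut ack = HashMap::from([("A", 2), ("B", 2), ("C", 1), ("D", 2)]);
    assert!(!can_start_sync(&ack, 2)); // C has not acknowledged version 2 yet
    ack.insert("C", 2);
    assert!(can_start_sync(&ack, 2)); // now the sync may safely begin
}
```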

### Details on update trackers

As you can see, the previous algorithm needs to keep track of a lot of
information in the cluster. This information is kept in three "layout update
trackers" (a condensed sketch in Rust is given after the list):

- The `ack` layout tracker keeps track of nodes receiving the latest layout
  versions and indicating that they are no longer processing writes addressed
  only to older layout versions. Once all nodes have acknowledged a new
  version, we know that all in-progress and future write operations made in the
  cluster are also directed to the nodes that were added in this layout
  version.

- The `sync` layout tracker keeps track of nodes finishing a full metadata
  table sync that was started after all nodes `ack`'ed the new layout version.

- The `sync_ack` layout tracker keeps track of nodes receiving the `sync`
  tracker update for all cluster nodes, and thus starting to direct reads to
  the newly synchronized layout version. This makes it possible to know when no
  more nodes are reading from an old version, at which point the corresponding
  data can be deleted.
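
Here is the condensed sketch announced above. The field names and the three
helper functions are invented for this post; Garage's real data structures
differ, but the idea of taking the minimum tracker value over all nodes to
drive each transition is the same.

```rust
// Illustrative sketch: the three layout update trackers, each mapping a node
// to a layout version number. The cluster-wide value of a tracker is the
// minimum over all nodes, and it drives the three state transitions.
use std::collections::HashMap;

struct UpdateTrackers {
    ack: HashMap<&'static str, u64>,      // versions acknowledged by each node
    sync: HashMap<&'static str, u64>,     // versions each node finished syncing
    sync_ack: HashMap<&'static str, u64>, // sync completions each node has seen
}

fn min_version(tracker: &HashMap<&'static str, u64>) -> u64 {
    tracker.values().copied().min().unwrap_or(0)
}

impl UpdateTrackers {
    /// The sync towards version `v` may start once all nodes acknowledged it.
    fn sync_can_start(&self, v: u64) -> bool {
        min_version(&self.ack) >= v
    }
    /// Reads may switch to version `v` once all nodes finished its sync.
    fn reads_can_use(&self, v: u64) -> bool {
        min_version(&self.sync) >= v
    }
    /// Versions older than `v` may be pruned once every node has seen the
    /// sync of `v` complete, i.e. nobody reads from older versions anymore.
    fn can_prune_below(&self, v: u64) -> bool {
        min_version(&self.sync_ack) >= v
    }
}

fn main() {
    let t = UpdateTrackers {
        ack: HashMap::from([("A", 2), ("B", 2), ("C", 2)]),
        sync: HashMap::from([("A", 2), ("B", 2), ("C", 1)]),
        sync_ack: HashMap::from([("A", 1), ("B", 1), ("C", 1)]),
    };
    assert!(t.sync_can_start(2));   // everyone acknowledged version 2
    assert!(!t.reads_can_use(2));   // C has not finished its sync yet
    assert!(!t.can_prune_below(2)); // so version 1 must be kept around
}
```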

In the simplest scenario, only two layout versions are live, and these trackers
can therefore only take the values `n` (the new layout version) and `n-1` (the
old one). However, this mechanism handles the general case where several
successive layout updates are being processed and more than two layout versions
are live simultaneously: the layout update trackers can take as values the
version numbers of any currently live layout version.

### What about dead nodes?

In this post I have used the phrases "once all nodes have acknowledged a new
layout version" and "once all nodes have completed a sync" many times. This
obviously means that if some nodes are dead or unresponsive, the processing of
the layout update can be delayed indefinitely, and nodes in the old layout
versions will keep receiving writes and storing unnecessary data. This is an
unfortunate limitation of the method proposed here. To cover these situations,
the following workarounds can be used:

- A layout change is generally a supervised operation, meaning that a system
  administrator may manually intervene to inform the cluster that certain nodes
  are dead and that their layout tracker values should not be taken into
  account.

- For the `sync` update tracker, we don't actually need to wait for all of the
  synchronizations to terminate: quorums can be used instead, as they should be
  sufficient to ensure that the copied data is up-to-date (these first two
  workarounds are sketched after this list).

- For the `ack` and `sync_ack` update trackers, we can automatically increase
  them for all nodes (even dead ones) after a certain time delay, as there is
  no reason for the changes to take more than, e.g., 10 minutes to propagate
  under regular conditions. We might not enable this behaviour by default,
  though, due to its possible impacts on consistency.
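
As a rough sketch of the first two workarounds (invented names once more, and
a deliberate simplification of what a real implementation would have to do):

```rust
// Illustrative sketch: tolerating dead nodes in the tracker conditions.
use std::collections::{HashMap, HashSet};

/// Minimum tracker value, ignoring nodes an administrator marked as dead.
fn min_version_alive(tracker: &HashMap<&str, u64>, dead: &HashSet<&str>) -> u64 {
    tracker
        .iter()
        .filter(|(node, _)| !dead.contains(*node))
        .map(|(_, &v)| v)
        .min()
        .unwrap_or(0)
}

/// For `sync`, a quorum of synced source nodes can be enough, as a majority
/// of sources should suffice for the copied data to be up-to-date.
fn sync_quorum_reached(tracker: &HashMap<&str, u64>, v: u64) -> bool {
    let synced = tracker.values().filter(|&&x| x >= v).count();
    synced >= tracker.len() / 2 + 1
}

fn main() {
    let ack = HashMap::from([("A", 2), ("B", 2), ("C", 1)]);
    let dead = HashSet::from(["C"]); // C is known to be down
    assert_eq!(min_version_alive(&ack, &dead), 2); // C no longer blocks the update

    let sync = HashMap::from([("A", 2), ("B", 2), ("C", 1)]);
    assert!(sync_quorum_reached(&sync, 2)); // 2 out of 3 sources have synced
}
```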

## Current status and future work

The work described in this blog post is currently almost complete, but it still
needs to be ironed out. I have made a first run of Jepsen testing on the new
code, which showed that the changes seem to fix the issue. I will be running
longer and more intensive runs of Jepsen testing once the code is finished, to
make sure everything is fine. The changes will require a major update of
Garage: this will be the v0.10.0 release, which will probably be finished in
January or February of 2024. It will be a very safe and transparent update, as
only the layout data structure is changed and nothing related to object storage
itself is touched.

If I had the time to do so, I would write the algorithm described in this post
in a formal way, in the form of a scientific paper. I believe such a paper
would be worthy of presentation at a scientific conference or in a journal,
especially since it is motivated by a very concrete use case and has been
validated quite thoroughly (with Jepsen). Unfortunately, this is not my
highest priority at the moment.

---

Written by [Alex Auvolat](https://adnab.me).

@ -1,49 +0,0 @@
+++
title="PhD offering to work on Garage and Distributed Systems"
date=2024-01-10
+++

*Deuxfleurs and IMT Atlantique are partnering to fund a PhD student to work on
Garage and distributed systems theory during three years. The recruitment
process is open and we are currently looking for candidates. Applications are
accepted until Jan 31, 2024. Read on for details.*

<!-- more -->

---

Deuxfleurs and IMT Atlantique are partnering to fund a PhD student to work on
Garage and distributed systems theory, as part of the SEED PhD program, and we
are looking for a candidate. This is a French PhD, so the program lasts 3
years, starting in September 2024, and the student is expected to already have
a master's degree or to obtain one before September 2024. The PhD will take
place mostly at IMT Atlantique in Nantes (France) within the STACK team, with a
three-month stay at Deuxfleurs and a three-month stay abroad (probably in the
US). Ideally, we are looking for a candidate who already has solid Rust coding
skills and a good understanding of distributed systems theory; however, both
skills can be learnt during the program. Dr. Alex Auvolat from Deuxfleurs will
be supervising the student along with Dr. Daniel Balouek from IMT Atlantique.
This is a great opportunity to improve your Rust coding skills, learn
distributed systems theory, travel to France, meet the great people behind
Deuxfleurs and, incidentally, obtain a diploma. Feel free to apply or pass the
information on to anyone you know who might be interested.

- Read the [PhD topic proposal](https://www.imt-atlantique.fr/sites/default/files/recherche/doctorat/seed/research-topics/6-consensus-algorithms.html)

- For more context, also read the following blog posts:

  - [Maintaining read-after-write consistency in all circumstances](@/blog/2023-12-preserving-read-after-write-consistency/index.md)
  - [Thoughts on "Leaderless Consensus"](@/blog/2023-11-thoughts-on-leaderless-consensus/index.md)

- [Apply here](https://www.imt-atlantique.fr/en/research-innovation/phd/seed/application)

- Read about [the SEED PhD program](https://www.imt-atlantique.fr/en/research-innovation/phd/seed)

- Read about [the STACK team at IMT Atlantique](https://stack-research-group.gitlabpages.inria.fr/web/)

A webinar will be held on Friday, Jan 10, 2024, at 11:00 CET, to introduce
the PhD subject and the context. See details at [this
address](https://www.imt-atlantique.fr/en/research-innovation/phd/seed/events#webinars) (we are subject 6-consensus-algorithms).

The PhD topic is open for applications until Jan 31, 2024.

@ -1,81 +0,0 @@
+++
title="Open letter to the European Commission"
date=2024-07-14
+++

*Deuxfleurs has benefitted multiple times from European grants via the NGI
project, for the development of Garage and Aerogramme, two pieces of software
that we have developed for the needs of our association. Today, these grants
are in peril, as the European Commission wishes to finance AI projects instead.
We relay and sign an open letter from our friends at petites singularités,
asking that the NGI project be maintained, as it provides great assistance
for the development of free software and commons on the Internet.*

<!-- more -->

---

Since 2020, the Next Generation Internet (NGI) programmes, part of the European
Commission's Horizon programme, have funded free software in Europe using a
cascade funding mechanism (see, for example, NLnet's calls). This year,
according to the Horizon Europe working draft detailing funding programmes for
2025, we notice that Next Generation Internet is no longer mentioned as part of
Cluster 4.

NGI programmes have shown their strength and importance in supporting the
European software infrastructure, as a generic funding instrument to fund
digital commons and ensure their long-term sustainability. We find this
transformation incomprehensible, all the more so when NGI has proven efficient
and economical in supporting free software as a whole, from the smallest to the
most established initiatives. This ecosystem diversity backs the strength of
European technological innovation, and maintaining the NGI initiative to
provide structural support to software projects at the heart of worldwide
innovation is key to enforcing the sovereignty of a European infrastructure.

Contrary to common perception, technical innovations often originate from
European rather than North American programming communities, and are mostly
initiated by small-scale organizations.

The previous Cluster 4 allocated 27 million euros to:

- "Human centric Internet aligned with values and principles commonly shared in Europe";

- "A flourishing internet, based on common building blocks created within NGI, that enables better control of our digital life";

- "A structured eco-system of talented contributors driving the creation of new internet commons and the evolution of existing internet commons".

In the name of these challenges, more than 500 projects received NGI funding in
the first 5 years, backed by 18 organisations managing these European funding
consortia.

NGI contributes to a vast ecosystem, as most of its budget is allocated to
funding third parties by means of open calls, to structure commons that cover
the whole Internet scope: from hardware to applications, operating systems,
digital identities or data traffic supervision. This third-party funding is
not renewed in the current programme, leaving many projects short on resources
for research and innovation in Europe.

Moreover, NGI enables exchanges and collaborations across all Eurozone
countries as well as "widening countries"[^1], which are currently both a
success and a work in progress, much like the Erasmus programme before it. NGI
also contributes to opening up and supporting longer-term relationships than
strict project funding does. It encourages the implementation of funded
projects as pilots, backing collaboration, identification and reuse of common
elements across projects, interoperability in identification systems and
beyond, and the setting up of development models that mix diverse scales and
types of European funding schemes.

While the USA, China and Russia deploy huge public and private resources to
develop software and infrastructure that massively capture private consumer
data, the EU cannot afford this renunciation.

Free and open source software, as supported by NGI since 2020, is by design the
opposite of a potential vector for foreign interference. It lets us keep our
data local and favours a community-wide economy and know-how, while allowing
international collaboration. This is all the more essential in the current
geopolitical context: the challenge of technological sovereignty is central,
and free software allows us to address it while acting for peace and
sovereignty in the digital world as a whole.

*The list of all the other collectives that have also signed the letter is
available at the following address: <https://pad.public.cat/lettre-NCP-NGI>.*

---

[^1]: As defined by Horizon Europe, widening Member States are Bulgaria, Croatia, Cyprus, the Czech Republic, Estonia, Greece, Hungary, Latvia, Lithuania, Malta, Poland, Portugal, Romania, Slovakia and Slovenia. Widening associated countries (under condition of an association agreement) include Albania, Armenia, Bosnia, the Faroe Islands, Georgia, Kosovo, Moldova, Montenegro, Morocco, North Macedonia, Serbia, Tunisia, Turkey and Ukraine. Widening overseas regions are: Guadeloupe, French Guiana, Martinique, Réunion, Mayotte, Saint-Martin, the Azores, Madeira and the Canary Islands.

2
content/blog/_index.md
Normal file → Executable file

@ -1,8 +1,6 @@
+++
title = "Blog"
description = "This is our developer journal"
template = "blog_index.html"
page_template = "blog_article.html"
sort_by = "date"
paginate_by = 5
+++

2
garage

@ -1 +1 @@
Subproject commit 070a8ad110cb75dd2df7ddc9ecbb5c814291ac89
Subproject commit 3e1373fafcf1789efa876fc9c66fb85cd74d3a31

0
package-lock.json
generated
Normal file → Executable file

0
package.json
Normal file → Executable file

20
shell.nix

@ -1,20 +0,0 @@
with import <nixpkgs> {};

stdenv.mkDerivation {
  name = "node";
  buildInputs = [
    nodejs
    zola
  ];
  shellHook = ''
    export PATH="$PWD/node_modules/.bin/:$PATH"
    function build {
      rm -r content/documentation static/api
      cp -rv garage/doc/book content/documentation
      cp -rv garage/doc/api static/api
      npm install
      npx tailwindcss -i ./src/input.css -o ./static/style.css --minify
      zola build -u https://garagehq.deuxfleurs.fr
    }
  '';
}

0
src/input.css
Normal file → Executable file

0
static/icons/browserconfig.xml
Normal file → Executable file

0
static/icons/cpu.svg
Normal file → Executable file

0
static/icons/disk.svg
Normal file → Executable file

0
static/icons/hardware.svg
Normal file → Executable file

0
static/icons/mstile-150x150.png
Normal file → Executable file

0
static/icons/network.svg
Normal file → Executable file

0
static/icons/ram.svg
Normal file → Executable file

6
static/icons/site.webmanifest
Normal file → Executable file

@ -1,6 +1,6 @@
{
"name": "Garage",
"name": "",
"short_name": "Garage",
"short_name": "",
"icons": [
{
"src": "/android-chrome-192x192.png",

@ -15,5 +15,5 @@
],
"theme_color": "#ffffff",
"background_color": "#ffffff",
"display": "browser"
"display": "standalone"
}

@ -1,121 +0,0 @@
[deleted file: an SVG image (the "NGI Zero Entrust" logo), 121 lines of XML markup]

0
static/images/backup.png
Normal file → Executable file

0
static/images/cyberduck-logo.png
Normal file → Executable file

0
static/images/host.png
Normal file → Executable file

0
static/images/mastodon-logo.svg
Normal file → Executable file

0
static/images/matrix-logo.svg
Normal file → Executable file

0
static/images/nextcloud-logo.svg
Normal file → Executable file

@ -1,34 +0,0 @@
[deleted file: an SVG logo image in green and black, 34 lines of XML markup]

0
static/images/peertube-logo.svg
Normal file → Executable file

0
static/images/rclone-logo.svg
Normal file → Executable file

0
static/images/store.png
Normal file → Executable file

0
static/js/site.js
Normal file → Executable file

0
tailwind.config.js
Normal file → Executable file

0
templates/404.html
Normal file → Executable file

3
templates/base.html
Normal file → Executable file

@ -9,9 +9,6 @@
<meta name="description" content="An open-source distributed storage service you can self-host to fullfill many needs.">
<meta name="application-name" content="{{ config.title }}">
{% include "partials/shared/head.html" %}
<title>
{% block title %}{% endblock %}
</title>
</head>

<body class="has-background-white">

@ -1,7 +1,7 @@
{% extends 'base.html' %}

{% block title %}
{% if page %}{{ page.title }}{% else %}{{ section.title }}{% endif %} | {{ config.title }}
{{ config.title }} | {% if page %}{{ page.title }}{% else %}{{ section.title }}{% endif %}
{% endblock %}

{% block content %}
@@ -1,7 +1,7 @@
 {% extends 'base.html' %}

 {% block title %}
-Downloads | {{ config.title }}
+{{ config.title }} | {{ page.title }}
 {% endblock %}

 {% block content %}
@@ -12,23 +12,9 @@ Downloads | {{ config.title }}
 <div class="h-8 w-8 bg-gradient-to-bl from-gray-50 via-gray-50 to-gray-100 -rotate-45 transform origin-top-left shadow"></div>
 </div>
 </div>
-
 <div class="mx-auto max-w-7xl px-4">
 <div id="releases-container" class="py-24 space-y-20">
-<div id="docker-images" class="space-y-4">
-<h2 class="text-garage-gray text-xl font-semibold">Deploy with Docker</h2>
-<p>All of the builds listed in the sections below can be downloaded as Docker images
-available
-<a href="https://hub.docker.com/r/dxflrs/garage" class="text-garage-orange font-bold hover:underline">on the Docker hub</a>.
-</p>
-</div>
-<div id="docker-images" class="space-y-4">
-<h2 class="text-garage-gray text-xl font-semibold">Release notes (changelogs)</h2>
-<p>Release notes for each Garage release can be read
-<a href="https://git.deuxfleurs.fr/Deuxfleurs/garage/releases" class="text-garage-orange font-bold hover:underline">on our Gitea instance</a>.
-</p>
-</div>
-<div id="release-builds" class="space-y-4">
+<div id="release-builds">
 <h2 class="text-garage-gray text-xl font-semibold">Release Builds</h2>
 <div id="release-builds-container" class="space-y-12"></div>
 </div>
@@ -48,12 +34,6 @@ Downloads | {{ config.title }}
 <div id="development-builds-container" class="space-y-12"></div>
 </details>
 </div>
-<div class="space-y-4">
-<p>
-If this page is not loading correctly,
-<a class="font-bold text-garage-orange hover:underline" href="https://garagehq.deuxfleurs.fr/_releases.html">click here</a>.
-</p>
-</div>
 </div>
 <noscript>
 <style type="text/css">
@@ -95,6 +75,8 @@ Downloads | {{ config.title }}
 let extraBuilds = data[1].builds;
 let developmentBuilds = data[2].builds;

+console.log(extraBuilds)
+
 /** Release Builds */
 for (i = 0; i < releaseBuilds.length; i++) {
 window['build' + i] =
@@ -108,7 +90,7 @@ Downloads | {{ config.title }}
 <div id="release-builds-detail-${i}" class="flex flex-col md:flex-row items-start md:items-center space-x-0 md:space-x-2 space-y-2 md:space-y-0"></div>
 <span class="inline-block mt-4 text-sm mb-1 uppercase text-gray-600">Sources</span>
 <div id="release-builds-source-${i}" class="flex items-center space-x-2">
-<a href="https://git.deuxfleurs.fr/Deuxfleurs/garage/src/tag/${releaseBuilds[i]['version']}" download class="inline-block p-1.5 text-garage-gray font-bold bg-gray-300 hover:bg-orange-300 rounded border-b-2 border-gray-400 hover:border-orange-400 transition-all duration-300">
+<a href="https://git.deuxfleurs.fr/Deuxfleurs/garage/src/tag/${releaseBuilds[i]['version']}" class="inline-block p-1.5 text-garage-gray font-bold bg-gray-300 hover:bg-orange-300 rounded border-b-2 border-gray-400 hover:border-orange-400 transition-all duration-300">
 <span>Gitea</span>
 </a>
 <a href="https://git.deuxfleurs.fr/Deuxfleurs/garage/archive/${releaseBuilds[i]['version']}.zip" class="inline-block p-1.5 text-garage-gray font-bold bg-gray-300 hover:bg-orange-300 rounded border-b-2 border-gray-400 hover:border-orange-400 transition-all duration-300">
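
Note on the attribute that this and the following hunks toggle: on an <a> element, the download attribute asks the browser to save the linked resource instead of navigating to it, optionally suggesting a filename, and browsers ignore it for cross-origin targets. A minimal sketch, with a made-up placeholder href rather than a real Garage URL:

<!-- with `download`, the browser offers to save the target, here as "garage" -->
<a href="/builds/garage-x86_64-unknown-linux-musl" download="garage">Download binary</a>
<!-- without it, the same link navigates to the resource instead -->
<a href="/builds/garage-x86_64-unknown-linux-musl">Open binary</a>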
@@ -126,7 +108,7 @@ Downloads | {{ config.title }}
 for (j = 0; j < releaseBuilds[i]['builds'].length; j++) {
 window['buildDetail' + i] =
 `
-<a href="${releaseBuilds[i]['builds'][j]['url']}" download class="inline-block p-1.5 text-garage-gray font-bold bg-gray-300 hover:bg-orange-300 rounded border-b-2 border-gray-400 hover:border-orange-400 transition-all duration-300">
+<a href="${releaseBuilds[i]['builds'][j]['url']}" class="inline-block p-1.5 text-garage-gray font-bold bg-gray-300 hover:bg-orange-300 rounded border-b-2 border-gray-400 hover:border-orange-400 transition-all duration-300">
 <span>
 ${releaseBuilds[i]['builds'][j]['platform']
 .replace('aarch64-unknown-linux-musl', 'linux/arm64')
@@ -153,7 +135,7 @@ Downloads | {{ config.title }}
 <div id="extra-builds-detail-${i}" class="flex flex-col md:flex-row items-start md:items-center space-x-0 md:space-x-2 space-y-2 md:space-y-0"></div>
 <span class="inline-block mt-4 text-sm mb-1 uppercase text-gray-600">Sources</span>
 <div id="extra-builds-source-${i}" class="flex items-center pt-4 space-x-2">
-<a href="https://git.deuxfleurs.fr/Deuxfleurs/garage/src/tag/${extraBuilds[i]['version']}" download class="inline-block p-1.5 text-garage-gray font-bold bg-gray-300 hover:bg-orange-300 rounded border-b-2 border-gray-400 hover:border-orange-400 transition-all duration-300">
+<a href="https://git.deuxfleurs.fr/Deuxfleurs/garage/src/tag/${extraBuilds[i]['version']}" class="inline-block p-1.5 text-garage-gray font-bold bg-gray-300 hover:bg-orange-300 rounded border-b-2 border-gray-400 hover:border-orange-400 transition-all duration-300">
 <span>Gitea</span>
 </a>
 <a href="https://git.deuxfleurs.fr/Deuxfleurs/garage/archive/${extraBuilds[i]['version']}.zip" class="inline-block p-1.5 text-garage-gray font-bold bg-gray-300 hover:bg-orange-300 rounded border-b-2 border-gray-400 hover:border-orange-400 transition-all duration-300">
@@ -171,7 +153,7 @@ Downloads | {{ config.title }}
 for (j = 0; j < extraBuilds[i]['builds'].length; j++) {
 window['buildDetail' + i] =
 `
-<a href="${extraBuilds[i]['builds'][j]['url']}" download class="inline-block p-1.5 text-garage-gray font-bold bg-gray-300 hover:bg-orange-300 rounded border-b-2 border-gray-400 hover:border-orange-400 transition-all duration-300">
+<a href="${extraBuilds[i]['builds'][j]['url']}" class="inline-block p-1.5 text-garage-gray font-bold bg-gray-300 hover:bg-orange-300 rounded border-b-2 border-gray-400 hover:border-orange-400 transition-all duration-300">
 <span>
 ${extraBuilds[i]['builds'][j]['platform']
 .replace('aarch64-unknown-linux-musl', 'linux/arm64')
@@ -198,7 +180,7 @@ Downloads | {{ config.title }}
 <div id="development-builds-detail-${i}" class="flex flex-col md:flex-row items-start md:items-center space-x-0 md:space-x-2 space-y-2 md:space-y-0"></div>
 <span class="inline-block mt-4 text-sm mb-1 uppercase text-gray-600">Sources</span>
 <div id="development-builds-source-${i}" class="flex items-center pt-4 space-x-2">
-<a href="https://git.deuxfleurs.fr/Deuxfleurs/garage/src/tag/${developmentBuilds[i]['version']}" download class="inline-block p-1.5 text-garage-gray font-bold bg-gray-300 hover:bg-orange-300 rounded border-b-2 border-gray-400 hover:border-orange-400 transition-all duration-300">
+<a href="https://git.deuxfleurs.fr/Deuxfleurs/garage/src/tag/${developmentBuilds[i]['version']}" class="inline-block p-1.5 text-garage-gray font-bold bg-gray-300 hover:bg-orange-300 rounded border-b-2 border-gray-400 hover:border-orange-400 transition-all duration-300">
 <span>Gitea</span>
 </a>
 <a href="https://git.deuxfleurs.fr/Deuxfleurs/garage/archive/${developmentBuilds[i]['version']}.zip" class="inline-block p-1.5 text-garage-gray font-bold bg-gray-300 hover:bg-orange-300 rounded border-b-2 border-gray-400 hover:border-orange-400 transition-all duration-300">
@@ -216,7 +198,7 @@ Downloads | {{ config.title }}
 for (j = 0; j < developmentBuilds[i]['builds'].length; j++) {
 window['buildDetail' + i] =
 `
-<a href="${developmentBuilds[i]['builds'][j]['url']}" download class="inline-block p-1.5 text-garage-gray font-bold bg-gray-300 hover:bg-orange-300 rounded border-b-2 border-gray-400 hover:border-orange-400 transition-all duration-300">
+<a href="${developmentBuilds[i]['builds'][j]['url']}" class="inline-block p-1.5 text-garage-gray font-bold bg-gray-300 hover:bg-orange-300 rounded border-b-2 border-gray-400 hover:border-orange-400 transition-all duration-300">
 <span>
 ${developmentBuilds[i]['builds'][j]['platform']
 .replace('aarch64-unknown-linux-musl', 'linux/arm64')
templates/index.html: 50 changes, Normal file → Executable file
@@ -1,16 +1,12 @@
 {% extends "base.html" %}

-{% block title %}
-Garage - An open-source distributed object storage service
-{% endblock title %}
-
 {% block content %}
 <section class="section" id="home-section">
 <div>

 <div class="flex flex-col items-center justify-center py-12 px-8 md:px-12 xl:px-0">
 <h1 class="hidden">{{config.extra.organization.name}}</h1>
-<img src="{{ config.extra.organization.logo }}" width="220" alt="{{config.extra.organization.name}}"/>
+<img src="{{ config.extra.organization.logo }}" width="220px" alt="{{config.extra.organization.name}}"/>
 <p class="text-gray-500 leading-10 pt-4 text-xl text-center">{{ config.extra.organization.description }}</p>
 <div class="flex items-center justify-center space-x-2 md:space-x-4 py-4">
 <a
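
Note on the width="220" vs width="220px" change above, which recurs throughout this compare: the HTML width attribute takes a unitless integer interpreted as CSS pixels, so width="220" is the valid form, while width="220px" is invalid markup that browsers generally still render by parsing the leading digits. A minimal illustration (logo.svg is a placeholder path):

<!-- valid: the width attribute is a unitless pixel count -->
<img src="logo.svg" width="220" alt="Garage">
<!-- invalid but commonly tolerated: units do not belong in the attribute -->
<img src="logo.svg" width="220px" alt="Garage">
<!-- if a unit is wanted, it goes in CSS instead -->
<img src="logo.svg" style="width: 220px" alt="Garage">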
@@ -28,7 +24,6 @@
 <span class="inline text-sm md:text-base">Get Started</span>
 </a>
 </div>
-
 <div class="max-w-7xl mx-auto grid grid-cols-1 md:grid-cols-3 gap-x-32 py-12">
 <a href="{{config.base_url}}/documentation/connect/websites/" class="group flex flex-col items-center justify-center p-2">
 <img src="{{ get_url(path='images/host.png') }}" class="transform group-hover:translate-y-2 transition duration-500">
@@ -59,11 +54,11 @@
 <p class="text-base text-gray-600">Each chunk of data is replicated in 3 zones</p>
 </div>
 <div class="flex items-center space-x-2">
-<img class="select-none" src="{{ get_url(path='icons/servers.svg') }}" width="48">
+<img class="select-none" src="{{ get_url(path='icons/servers.svg') }}" width="48px">
 <span>Zone (multiple servers)</span>
 </div>
 <div class="flex items-center space-x-2">
-<img class="select-none" src="{{ get_url(path='icons/datachunks.svg') }}" width="48">
+<img class="select-none" src="{{ get_url(path='icons/datachunks.svg') }}" width="48px">
 <span>Chunks of data</span>
 </div>
 </div>
|
||||||
<div class="w-2 h-2 rounded-full bg-garage-orange"></div>
|
<div class="w-2 h-2 rounded-full bg-garage-orange"></div>
|
||||||
<li class="py-1.5 flex flex-col items-center justify-center">
|
<li class="py-1.5 flex flex-col items-center justify-center">
|
||||||
<span>Fast to deploy, safe to operate</span>
|
<span>Fast to deploy, safe to operate</span>
|
||||||
<p class="font-normal text-center">We are sysadmins, we know the value of operator-friendly software</p>
|
<p class="font-normal text-center">We are sysadmin, we know the value of operator friendly software</p>
|
||||||
</li>
|
</li>
|
||||||
<div class="w-2 h-2 rounded-full bg-garage-orange"></div>
|
<div class="w-2 h-2 rounded-full bg-garage-orange"></div>
|
||||||
<li class="py-1.5 flex flex-col items-center justify-center">
|
<li class="py-1.5 flex flex-col items-center justify-center">
|
||||||
<span>Deploy everywhere on every machine</span>
|
<span>Deploy everywhere on every machine</span>
|
||||||
<p class="font-normal text-center">We do not have a dedicated backbone, and neither do you,<br>
|
<p class="font-normal text-center">We do not have a dedicated backbone, neither do you,<br>
|
||||||
so we made software that run over the Internet across multiple datacenters</p>
|
so we made a software that run over the Internet across multiple datacenter</p>
|
||||||
</li>
|
</li>
|
||||||
<div class="w-2 h-2 rounded-full bg-garage-orange"></div>
|
<div class="w-2 h-2 rounded-full bg-garage-orange"></div>
|
||||||
<li class="py-1.5 flex flex-col items-center justify-center text-center">
|
<li class="py-1.5 flex flex-col items-center justify-center text-center">
|
||||||
|
@ -112,35 +107,35 @@
|
||||||
<ul class="text-center list-style-none flex flex-col space-y-2 justify-start py-4">
|
<ul class="text-center list-style-none flex flex-col space-y-2 justify-start py-4">
|
||||||
<li class="flex flex-col md:flex-row items-center justify-start">
|
<li class="flex flex-col md:flex-row items-center justify-start">
|
||||||
<div class="flex items-center space-x-2 w-max whitespace-nowrap bg-gray-200 shadow-inner py-0.5 px-1.5 rounded-md">
|
<div class="flex items-center space-x-2 w-max whitespace-nowrap bg-gray-200 shadow-inner py-0.5 px-1.5 rounded-md">
|
||||||
<img src="{{ get_url(path='icons/cpu.svg') }}" width="24">
|
<img src="{{ get_url(path='icons/cpu.svg') }}" width="24px">
|
||||||
<span class="font-normal">CPU</span>
|
<span class="font-normal">CPU</span>
|
||||||
</div>
|
</div>
|
||||||
<span class="px-2">Any x86_64 CPU from the last 10 years, ARMv7 or ARMv8</span>
|
<span class="px-2">Any x86_64 CPU from the last 10 years, ARMv7 or ARMv8</span>
|
||||||
</li>
|
</li>
|
||||||
<li class="flex flex-col md:flex-row items-center justify-start">
|
<li class="flex flex-col md:flex-row items-center justify-start">
|
||||||
<div class="flex items-center space-x-2 w-max whitespace-nowrap bg-gray-200 shadow-inner py-0.5 px-1.5 rounded-md">
|
<div class="flex items-center space-x-2 w-max whitespace-nowrap bg-gray-200 shadow-inner py-0.5 px-1.5 rounded-md">
|
||||||
<img src="{{ get_url(path='icons/ram.svg') }}" width="24">
|
<img src="{{ get_url(path='icons/ram.svg') }}" width="24px">
|
||||||
<span class="font-normal">RAM</span>
|
<span class="font-normal">RAM</span>
|
||||||
</div>
|
</div>
|
||||||
<span class="px-2">1 GB</span>
|
<span class="px-2">1 GB</span>
|
||||||
</li>
|
</li>
|
||||||
<li class="flex flex-col md:flex-row items-center justify-start">
|
<li class="flex flex-col md:flex-row items-center justify-start">
|
||||||
<div class="flex items-center space-x-2 w-max whitespace-nowrap bg-gray-200 shadow-inner py-0.5 px-1.5 rounded-md">
|
<div class="flex items-center space-x-2 w-max whitespace-nowrap bg-gray-200 shadow-inner py-0.5 px-1.5 rounded-md">
|
||||||
<img src="{{ get_url(path='icons/disk.svg') }}" width="24">
|
<img src="{{ get_url(path='icons/disk.svg') }}" width="24px">
|
||||||
<span class="font-normal">Disk space</span>
|
<span class="font-normal">Disk space</span>
|
||||||
</div>
|
</div>
|
||||||
<span class="px-2">At least 16 GB</span>
|
<span class="px-2">At least 16 GB</span>
|
||||||
</li>
|
</li>
|
||||||
<li class="flex flex-col md:flex-row items-center justify-start">
|
<li class="flex flex-col md:flex-row items-center justify-start">
|
||||||
<div class="flex items-center space-x-2 w-max whitespace-nowrap bg-gray-200 shadow-inner py-0.5 px-1.5 rounded-md">
|
<div class="flex items-center space-x-2 w-max whitespace-nowrap bg-gray-200 shadow-inner py-0.5 px-1.5 rounded-md">
|
||||||
<img src="{{ get_url(path='icons/network.svg') }}" width="24">
|
<img src="{{ get_url(path='icons/network.svg') }}" width="24px">
|
||||||
<span class="font-normal">Network</span>
|
<span class="font-normal">Network</span>
|
||||||
</div>
|
</div>
|
||||||
<span class="px-2">200 ms or less, 50 Mbps or more</span>
|
<span class="px-2">200 ms or less, 50 Mbps or more</span>
|
||||||
</li>
|
</li>
|
||||||
<li class="flex flex-col items-center md:items-start justify-center">
|
<li class="flex flex-col items-center md:items-start justify-center">
|
||||||
<div class="flex items-center space-x-2 w-max whitespace-nowrap bg-gray-200 shadow-inner py-0.5 px-1.5 rounded-md">
|
<div class="flex items-center space-x-2 w-max whitespace-nowrap bg-gray-200 shadow-inner py-0.5 px-1.5 rounded-md">
|
||||||
<img src="{{ get_url(path='icons/hardware.svg') }}" width="24">
|
<img src="{{ get_url(path='icons/hardware.svg') }}" width="24px">
|
||||||
<span class="font-normal">Heterogeneous hardware</span>
|
<span class="font-normal">Heterogeneous hardware</span>
|
||||||
</div>
|
</div>
|
||||||
<span class="px-2">Build a cluster with whatever second-hand machines are available</span>
|
<span class="px-2">Build a cluster with whatever second-hand machines are available</span>
|
||||||
|
@@ -217,28 +212,15 @@
 <div class="w-full flex flex-col items-center justify-center shadow-inner">
 <div class="px-8 py-24 space-y-8 text-garage-gray max-w-4xl mx-auto">
 <h2 class="text-2xl text-garage-orange font-semibold">Sponsors and funding</h2>
-<p>Garage has received funding from <a class="text-garage-orange underline" href="https://pointer.ngi.eu/" target="_blank">NGI POINTER</a> (3 full-time employees for one year, in 2021-2022),
-and from <a class="text-garage-orange underline" href="https://nlnet.nl/entrust/" target="_blank">NLnet / NGI0 Entrust</a> (1 full-time employee for one year, in 2023-2024).
-</p>
-<p>If you want to participate in funding Garage development,
+<p>The <a class="text-garage-orange underline" href="https://deuxfleurs.fr/" target="_blank">Deuxfleurs association</a>
+has received a grant from <a class="text-garage-orange underline" href="https://pointer.ngi.eu/" target="_blank">NGI POINTER</a>,
+to fund 3 people working on Garage full-time for a year : from October 2021 to September 2022.</p>
+<p>If you want to fund Garage development past its initial grant,
 either through donation or support contract,
-please <a class="text-garage-orange underline" href="mailto:{{config.extra.social.email}}">get in touch with us</a>.
-</p>
-<p>
+please <a class="text-garage-orange underline" href="mailto:{{config.extra.social.email}}">get in touch with us</a></p>
 <img src="{{ get_url(path='images/ngi-pointer-eu.png') }}" class="w-2/3 mx-auto" alt="NGI Pointers">
-</p>
-<p class="flex flex-row justify-around">
-<img src="{{ get_url(path='images/nlnet.svg') }}" class="w-1/3" alt="NLnet logo">
-<img src="{{ get_url(path='images/NGI0Entrust_tag.svg') }}" class="w-1/3" alt="NGI0 Entrust logo">
-</p>
 <p class="italic">This project has received funding from the European Union's Horizon 2021 research and innovation programme
 within the framework of the NGI-POINTER Project funded under grant agreement N° 871528.</p>
-<p class="italic">This project has received funding from the NGI0
-Entrust Fund, a fund established by NLnet with financial support from the
-European Commission's Next Generation Internet programme, under the aegis of DG
-Communications Networks, Content and Technology under grant agreement No
-101069594.
-</p>
 </div>
 </div>

templates/macros.html: 6 changes, Normal file → Executable file
@@ -3,7 +3,7 @@
 {% if social_config.git %}
 <a href="{{ social_config.git }}" target="_blank">
 <span class="h-10 w-10 bg-white hover:shadow-xl rounded-full shadow flex items-center justify-center" title="Git">
-<img src="{{get_url(path='icons/git.svg')}}" width="24" alt="">
+<img src="{{get_url(path='icons/git.svg')}}" width="24px" alt="">
 </span>
 </a>
 {% endif %}
@@ -11,7 +11,7 @@
 {% if social_config.email %}
 <a href="mailto:{{ social_config.email }}" target="_blank">
 <span class="h-10 w-10 bg-white hover:shadow-xl rounded-full shadow flex items-center justify-center" title="Contact">
-<img src="{{get_url(path='icons/contact.svg')}}" width="24" alt="">
+<img src="{{get_url(path='icons/contact.svg')}}" width="24px" alt="">
 </span>
 </a>
 {% endif %}
@@ -19,7 +19,7 @@
 {% if config.generate_feed %}
 <a href="{{ config.base_url }}/{{ config.feed_filename }}" target="_blank">
 <span class="h-10 w-10 bg-white hover:shadow-xl rounded-full shadow flex items-center justify-center" title="RSS Feed">
-<img src="{{get_url(path='icons/rss.svg')}}" width="24" alt="">
+<img src="{{get_url(path='icons/rss.svg')}}" width="24px" alt="">
 </span>
 </a>
 {% endif %}
templates/blog_article.html → templates/page.html: 2 changes, Normal file → Executable file
@@ -1,7 +1,7 @@
 {% extends 'base.html' %}

 {% block title %}
-{{ page.title }} | Garage blog
+{{ config.title }} | {{ page.title }}
 {% endblock %}

 {% block content %}
@@ -1,14 +0,0 @@
-<div class="max-w-4xl mx-auto">
-<div class="bg-teal-100 border-t-4 border-teal-500 rounded-b text-teal-900 px-4 py-3 shadow-md" role="alert">
-<div class="flex">
-<div class="py-1"><svg class="fill-current h-6 w-6 text-teal-500 mr-4" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 20 20"><path d="M2.93 17.07A10 10 0 1 1 17.07 2.93 10 10 0 0 1 2.93 17.07zm12.73-1.41A8 8 0 1 0 4.34 4.34a8 8 0 0 0 11.32 11.32zM9 11V9h2v6H9v-4zm0-6h2v2H9V5z"/></svg></div>
-<div>
-<p class="font-bold">Garage pre-1.0 community survey</p>
-<p class="text-sm"> As part of our plans for the release of Garage v1.0, we are launching a survey to gather feedback from Garage users and potential users on all fronts, in order to improve Garage's reliability, user experience, and suitability for various application domains.</p>
-<p>
-<a href="https://pad.deuxfleurs.fr/form/#/2/form/view/bGZkUeZ5wxOuTSlP3nRJeTbCQlwdqUpF3ggN6vGqRds/" class="text-garage-orange font-bold hover:underline">Answer the survey here</a>
-</p>
-</div>
-</div>
-</div>
-</div>
@@ -15,6 +15,12 @@

 {% block user_custom_stylesheet %}{% endblock %}

+<title>
+{% block title %}
+{{ config.title }} - An open-source distributed storage service
+{% endblock title %}
+</title>
+
 {% if config.extra.katex.enabled %}
 <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/katex@0.15.1/dist/katex.min.css"
 integrity="sha384-R4558gYOUz8mP9YWpZJjofhk+zx0AS11p36HnD2ZKj/6JR5z27gSSULCNHIRReVs" crossorigin="anonymous">
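
Note on what this hunk restructures: the two branches disagree on where the <title> block lives (base.html on master, the shared head partial with a default on fixfixfix), but the mechanism either way is Tera's block inheritance, where a base template declares a block with a fallback and each child overrides only that block. A minimal sketch of that pattern, with illustrative file names:

{# base template: declare the block with a default value #}
<title>{% block title %}{{ config.title }}{% endblock title %}</title>

{# child template: override just the block #}
{% extends "base.html" %}
{% block title %}Downloads | {{ config.title }}{% endblock title %}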
@@ -3,7 +3,7 @@
 <div class="navbar-brand">
 <a class="hover:rounded-full hover:bg-white" href="{{config.base_url}}">
 <img class="px-2 transform duration-150 focus:bg-white hover:bg-white hover:shadow rounded-lg"
-src="{{ config.extra.organization.logo_horizontal }}" width="120">
+src="{{ config.extra.organization.logo_horizontal }}" width="120px">
 </a>
 </div>
 <input type="checkbox" id="navMenuToggleBtn" value="0"/>
templates/robots.txt: 0 changes, Normal file → Executable file
templates/blog_index.html → templates/section.html: 4 changes, Normal file → Executable file
@@ -1,7 +1,7 @@
 {% extends 'base.html' %}

 {% block title %}
-{{ section.title }} | {{ config.title }}
+{{ config.title }} | {{ section.title }}
 {% endblock title %}

 {% block content %}
|
@ -42,7 +42,7 @@
|
||||||
</div>
|
</div>
|
||||||
<div class="content mt-2">
|
<div class="content mt-2">
|
||||||
<div class="text-gray-700 text-lg not-italic">
|
<div class="text-gray-700 text-lg not-italic">
|
||||||
{{ page.summary | striptags | safe }}
|
{{ page.summary | safe | striptags }}
|
||||||
</div>
|
</div>
|
||||||
<a class="group font-semibold p-4 flex items-center space-x-1 text-garage-orange" href='{{ page.permalink }}'>
|
<a class="group font-semibold p-4 flex items-center space-x-1 text-garage-orange" href='{{ page.permalink }}'>
|
||||||
<div class="h-0.5 mt-0.5 w-4 group-hover:w-8 group-hover:bg-garage-gray transition-all bg-garage-orange"></div>
|
<div class="h-0.5 mt-0.5 w-4 group-hover:w-8 group-hover:bg-garage-gray transition-all bg-garage-orange"></div>
|
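
Note on the filter swap in this final hunk: Tera applies filters left to right, and its safe filter only disables auto-escaping when it is the last filter in the chain, otherwise it is ignored. The two orders therefore behave differently; a sketch, assuming page.summary contains HTML:

{# strips tags, then emits the remaining text unescaped: safe is last, so it applies #}
{{ page.summary | striptags | safe }}

{# safe is not last, so it is ignored and the stripped text is HTML-escaped on output #}
{{ page.summary | safe | striptags }}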