diff --git a/content/blog/2022-ipfs/index.md b/content/blog/2022-ipfs/index.md
index 26dae83..87e9753 100644
--- a/content/blog/2022-ipfs/index.md
+++ b/content/blog/2022-ipfs/index.md
@@ -50,18 +50,18 @@ are in charge of storing the first half of the archive while Charlie and Eve are
 [Resilio](https://www.resilio.com/individuals/) and [Syncthing](https://syncthing.net/) both feature protocols inspired by BitTorrent to synchronize a tree of your file system between multiple computers.
 Reviewing these solutions is out of the scope of this article, feel free to try them by yourself!*
 
-Garage, on the contrary, is designed to automatically spread your content over all your available nodes, in a manner that makes the best possible use of your storage space.
+Garage, on the other hand, is designed to automatically spread your content over all your available nodes, in a manner that makes the best possible use of your storage space.
 At the same time, it ensures that your content is always replicated exactly 3 times across the cluster (or less if you change a configuration parameter), on different geographical zones when possible.
-However, this means that when content is requested from a Garage cluster, there are only 3 nodes that are capable of returning it to the user.
-As a consequence, when content becomes popular, these nodes might become a bottleneck.
-Moreover, all resources created (keys, files, buckets) are tightly coupled to the Garage cluster on which they exist;
+However, this means that when content is requested from a Garage cluster, there are only 3 nodes capable of returning it to the user.
+As a consequence, when content becomes popular, this subset of nodes might become a bottleneck.
+Moreover, all resources (keys, files, buckets) are tightly coupled to the Garage cluster on which they exist;
 servers from different clusters can't collaborate to serve together the same data (without additional software).
 
 ➡️ **Garage is designed to durably store content.**
 
-In this blog post, we will explore whether we can combine both properties by connecting an IPFS node to a Garage cluster.
+In this blog post, we will explore whether we can combine delivery and durability by connecting an IPFS node to a Garage cluster.
 
 ## Try #1: Vanilla IPFS over Garage
 
@@ -73,10 +73,10 @@ The Peergos project has a fork because it seems that the plugin is known for hit
 ([#105](https://github.com/ipfs/go-ds-s3/issues/105), [#205](https://github.com/ipfs/go-ds-s3/pull/205)).
 This is the one we will try in the following.
 
-The easiest solution to use this plugin in IPFS is to bundle it in the main IPFS daemon, and thus recompile IPFS from sources.
+The easiest solution to use this plugin in IPFS is to bundle it in the main IPFS daemon, and recompile IPFS from sources.
 Following the instructions on the README file allowed me to spawn an IPFS daemon configured with S3 as the block store.
 
-I had a small issue when adding the plugin to the `plugin/loader/preload_list` file: the given command lacks a newline.
+I had a small issue when adding the plugin to the `plugin/loader/preload_list` file: the given command lacks a newline.
 I had to edit the file manually after running it, the issue was directly visible and easy to fix.
 
 After that, I just ran the daemon and accessed the web interface to upload a photo of my dog:
@@ -95,20 +95,19 @@ For example, you can inspect it [from the official gateway](https://explore.ipld
 
 ![A screenshot of the IPFS explorer](./explorer.png)
 
 At the same time, I was monitoring Garage (through [the OpenTelemetry stack we implemented earlier this year](/blog/2022-v0-7-released/)).
-Just after launching the daemon and before doing anything, we had this surprisingly active Grafana plot:
+Just after launching the daemon - and before doing anything - I was met by this surprisingly active Grafana plot:
 
 ![Grafana API request rate when IPFS is idle](./idle.png)
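For readers who want to reproduce the setup discussed in this change, the step of "spawning an IPFS daemon configured with S3 as the block store" boils down to pointing the go-ds-s3 plugin at Garage's S3 endpoint in the IPFS `Datastore.Spec` configuration. The sketch below is assembled from the plugin's README rather than from this post's actual deployment: the endpoint, region, bucket name, and credentials are placeholder values for a local Garage cluster, and field names should be double-checked against the plugin version you build.

```json
{
  "type": "mount",
  "mounts": [
    {
      "mountpoint": "/blocks",
      "prefix": "s3.datastore",
      "type": "measure",
      "child": {
        "type": "s3ds",
        "region": "garage",
        "regionEndpoint": "http://localhost:3900",
        "bucket": "ipfs-blocks",
        "rootDirectory": "",
        "accessKey": "<your Garage access key>",
        "secretKey": "<your Garage secret key>"
      }
    }
  ]
}
```

Note that, per the plugin's README, an existing IPFS repository also keeps a copy of this spec in its `datastore_spec` file, which has to stay consistent with the `config` file after editing.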