Proofreading after-the-fact #10
4 changed files with 7 additions and 7 deletions
@@ -23,7 +23,7 @@ in the last few years. Nothing too unfamiliar to us, as the organization is usin
 the same tools as we are: a combination of Jitsi and Matrix.
 
 We are of course extremely honored that our presentation was accepted.
-If technical details are your thing, we invite you to come to share this event with us.
+If technical details are your thing, we invite you to come and share this event with us.
 In all cases, the event will be recorded and available as a VOD (Video On Demand)
 afterward. Concerning the details of the organization:
 
@@ -18,7 +18,7 @@ locations, and does its best to not be too impacted by network latencies.*
 Hello! We are Deuxfleurs, a non-profit based in France working to promote
 self-hosting and small-scale hosting.
 
-What does that mean? Well, we figured that big tech monopoly such as Google,
+What does that mean? Well, we figured that big tech monopolies such as Google,
 Facebook or Amazon today hold disproportionate power and are becoming quite
 dangerous to us, citizens of the Internet. They know everything we are doing,
 saying, and even thinking, and they are not making good use of that
@@ -93,7 +93,7 @@ us:
 
 - **Crash tolerance** is when a service that runs on several computers at once
 can continue operating normally even when one (or a small number) of the
-computers stop working.
+computers stops working.
 
 - **Geo-distribution** is when the computers that make up a distributed system
 are not all located in the same facility. Ideally, they would even be spread
@@ -61,7 +61,7 @@ servers from different clusters can't collaborate to serve together the same dat
 
 ➡️ **Garage is designed to durably store content.**
 
-In this blog post, we will explore whether we can combine delivary and durability by connecting an IPFS node to a Garage cluster.
+In this blog post, we will explore whether we can combine efficient delivery and strong durability by connecting an IPFS node to a Garage cluster.
 
 ## Try #1: Vanilla IPFS over Garage
 
@@ -223,7 +223,7 @@ as there are IPFS blocks in the object to be read.
 On the receiving end, this means that any fully-fledged IPFS node has to answer large numbers
 of requests for blocks required by users everywhere on the network, which is what we observed in our experiment above.
 We were however surprised to observe that many requests coming from the IPFS network were for blocks
-which our node didn't had a copy for: this means that somewhere in the IPFS protocol, an overly optimistic
+which our node didn't have a copy of: this means that somewhere in the IPFS protocol, an overly optimistic
 assumption is made on where data could be found in the network, and this ends up translating into many requests
 between nodes that return negative results.
 When IPFS blocks are stored on a local filesystem, answering these requests fast might be possible.
@@ -101,7 +101,7 @@ Conversely, at this time, no reads are done as the corresponding read endpoints
 Garage also collects metrics from lower-level parts of the system.
 You can use them to better understand how Garage is interacting with your OS and your hardware.
 
-![A screenshot of a plot made by Grafana depicting the write speed (in MB/s) during the test time.](writes.png)
+![A screenshot of a plot made by Grafana depicting the write speed (in MB/s) during test time.](writes.png)
 
 This plot has been captured at the same moment as the previous one.
 We do not see a correlation between the writes and the API requests for the full upload but only for its beginning.