Add a note about Peergos without IPFS
parent 0e2689efdb
commit 7c951f4376
1 changed file with 11 additions and 2 deletions
@@ -176,7 +176,12 @@ Finally, we restart Peergos and observe this more peaceful graph:
Now, for a given endpoint, we have peaks of around 10 req/sec, which is way more reasonable.
Furthermore, we are no longer hammering our backend with requests for objects that are not there.

-The next step would be to gradually allowing back our node to connect to the IPFS network,
+After discussing with the developers, it is possible to go even further by running Peergos without IPFS:
+this is what they do for some of their tests. At the same time, if you increase the size
+of a block, you might have a non-federated but efficient end-to-end encrypted "cloud storage" that works over Garage,
+with your clients directly hitting the S3 API!
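To make that last idea concrete, here is a minimal sketch, in Go, of what a client talking directly to Garage's S3 API could look like: encrypt a block locally, then store the ciphertext under its hash. This is not Peergos code; the endpoint, region, bucket and credentials are placeholders, and the AWS SDK for Go is just one convenient S3 client among others.

```go
package main

import (
	"bytes"
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
	"encoding/hex"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	plaintext := []byte("one large block instead of many small ones")

	// Client-side key: it never leaves the client, so Garage only ever sees ciphertext.
	key := make([]byte, 32)
	if _, err := rand.Read(key); err != nil {
		log.Fatal(err)
	}

	// Encrypt the block locally with AES-GCM.
	block, err := aes.NewCipher(key)
	if err != nil {
		log.Fatal(err)
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		log.Fatal(err)
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		log.Fatal(err)
	}
	ciphertext := gcm.Seal(nonce, nonce, plaintext, nil)

	// Address the object by the hash of its ciphertext, content-addressed style.
	sum := sha256.Sum256(ciphertext)
	objectKey := hex.EncodeToString(sum[:])

	// Talk to Garage's S3 endpoint directly (endpoint, region, bucket and keys are placeholders).
	sess := session.Must(session.NewSession(&aws.Config{
		Region:           aws.String("garage"),
		Endpoint:         aws.String("http://localhost:3900"),
		S3ForcePathStyle: aws.Bool(true),
		Credentials:      credentials.NewStaticCredentials("GK...", "SECRET...", ""),
	}))
	svc := s3.New(sess)

	if _, err := svc.PutObject(&s3.PutObjectInput{
		Bucket: aws.String("peergos-blocks"), // hypothetical bucket
		Key:    aws.String(objectKey),
		Body:   bytes.NewReader(ciphertext),
	}); err != nil {
		log.Fatal(err)
	}
	log.Printf("stored encrypted block as %s", objectKey)
}
```

The larger the blocks, the fewer S3 requests per file: the access pattern then looks much more like what an object store expects than IPFS's default chunks of a few hundred kilobytes.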
+
+If federation is a hard requirement for you, the next step would be to gradually allow our node to connect back to the IPFS network,
while ensuring that the traffic to the S3 cluster remains low. For example, configuring our IPFS
node as a `dhtclient` instead of `dhtserver` would exempt it from answering public DHT requests.
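On go-ipfs this should be a single configuration switch (check the exact option name against the version you run):

```bash
# Query the DHT as a client only, without serving it to the rest of the network.
ipfs config Routing.Type dhtclient
```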
Keeping an in-memory index (as a hashmap and/or a Bloom filter) of the blocks stored on the current node
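The in-memory index mentioned just above could look roughly like the following sketch. It assumes a Bloom filter library such as `github.com/bits-and-blooms/bloom` (any equivalent works) and a hypothetical `fetchFromS3` callback standing in for the real S3 datastore call; it illustrates the idea and is not code from go-ipfs or Peergos.

```go
package blockindex

import (
	"errors"
	"sync"

	// Assumed Bloom filter library; any equivalent implementation works.
	"github.com/bits-and-blooms/bloom/v3"
)

// ErrNotLocal signals that a block is not stored on this node,
// so no S3 request needs to be made for it.
var ErrNotLocal = errors.New("block not stored locally")

// blockIndex keeps an in-memory view of which blocks this node stores.
// A plain map[string]struct{} would give exact answers at a higher memory
// cost; the Bloom filter trades a small false-positive rate for compactness
// and, crucially, never produces false negatives.
type blockIndex struct {
	mu     sync.RWMutex
	filter *bloom.BloomFilter
}

func newBlockIndex(expectedBlocks uint) *blockIndex {
	return &blockIndex{
		// Sized for expectedBlocks entries with ~1% false positives.
		filter: bloom.NewWithEstimates(expectedBlocks, 0.01),
	}
}

// Add records a block as locally stored (called whenever we write one to S3).
func (ix *blockIndex) Add(cid string) {
	ix.mu.Lock()
	defer ix.mu.Unlock()
	ix.filter.Add([]byte(cid))
}

// Get only touches the S3 backend when the filter says the block may be here;
// requests for blocks we certainly do not store are rejected in memory.
// fetchFromS3 is a stand-in for the real datastore/S3 call.
func (ix *blockIndex) Get(cid string, fetchFromS3 func(string) ([]byte, error)) ([]byte, error) {
	ix.mu.RLock()
	mayExist := ix.filter.Test([]byte(cid))
	ix.mu.RUnlock()
	if !mayExist {
		return nil, ErrNotLocal
	}
	return fetchFromS3(cid)
}
```

Such an index has to be rebuilt at startup (for instance by listing the bucket once) and kept in sync with writes, but it answers "we do not have this block" entirely in memory, without any request to the S3 cluster.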
@@ -201,7 +206,11 @@ Running IPFS over an S3 backend does not quite work out of the box in terms of performance.
We have identified that the main problem is linked to the DHT service,
and proposed some improvements (disabling the DHT server, keeping an in-memory index of the blocks, using the S3 backend only for your data).

-From a design perspective, it seems however that the numerous small blocks created by IPFS
+It is possible to modify Peergos to make it work without IPFS. With some optimization of the block size,
+you might have a great proof of concept of an end-to-end encrypted "cloud storage" over Garage.
+*Ping us if you make a prototype!*
+
+From an IPFS design perspective, it seems, however, that the numerous small blocks handled by the protocol
do not map trivially to efficient S3 requests, and thus could be a limiting factor to any optimization work.

As part of our test journey, we also read some posts about performance issues on IPFS (e.g. [#6283](https://github.com/ipfs/go-ipfs/issues/6283)) that are not