Rework conclusion

parent cf672377f3
commit 0e2689efdb
1 changed file with 6 additions and 7 deletions
@@ -198,20 +198,19 @@ Even if all requests have not the same cost on the cluster, processing a request
 ## Conclusion
 
 Running IPFS over an S3 backend does not quite work out of the box in terms of performance yet.
-We have identified some possible measures for improvement (disabling the DHT server, keeping an in-memory index of the blocks, using the S3 backend only for your data)
-that might allow you to still run an IPFS node over Garage.
+We have identified that the main problem is linked to the DHT service,
+and proposed some improvements (disabling the DHT server, keeping an in-memory index of the blocks, using the S3 backend only for your data).
 
 From a design perspective, however, it seems that the numerous small blocks created by IPFS
 do not map trivially to efficient S3 requests, and thus could be a limiting factor for any optimization work.
 
-As part of our test journey, we read some posts about performance issues on IPFS (eg. [#6283 - Reduce the impact of the DHT](https://github.com/ipfs/go-ipfs/issues/6283)) that are not
+As part of our test journey, we also read some posts about performance issues on IPFS (e.g. [#6283](https://github.com/ipfs/go-ipfs/issues/6283)) that are not
 linked with the S3 connector. We might be negatively influenced by our failure to connect IPFS with S3,
-but we are tempted to think that in any case, IPFS will be ressource intensive for your hardware.
+but we are tempted to think that IPFS is intrinsically resource intensive.
 
-On our side, we will continue our investigations towards more *minimalist* software that tends to limit the
-number of requests they send.
+On our side, we will continue our investigations towards more *minimalist* software.
 This choice makes sense for us as we want to reduce the ecological impact of our services
-by deploying optimized software on a limited number of second-hand servers.
+by deploying fewer servers that use less energy and are renewed less frequently.
 
 *Yes, we are aware of the existence of Nextcloud, Owncloud, Owncloud Infinite Scale, Seafile, Filestash, Pydio, SOLID, Remote Storage, etc.
 We might even try one of them in a future post, so stay tuned!*
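
For readers who want to experiment with the first improvement mentioned in the new conclusion, here is a minimal sketch of how the DHT server could be disabled on a go-ipfs node. It assumes go-ipfs and its `Routing.Type` configuration key; check the documentation of your version, as the key name and accepted values may differ:

```
# Minimal sketch, assuming go-ipfs and its Routing.Type setting:
# switch the DHT to client-only mode, so the node no longer answers
# DHT queries on behalf of the rest of the network.
ipfs config Routing.Type dhtclient

# Restart the daemon so the new routing mode is taken into account.
ipfs daemon
```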