IPFS article #8
1 changed file with 7 additions and 6 deletions
@@ -208,11 +208,6 @@ Running IPFS over an S3 storage back-end does not quite work out of the box in t
We have identified that the main problem is linked with the DHT service,
and proposed some improvements (disabling the DHT server, keeping an in-memory index of the blocks, using the S3 back-end only for user data).

-It is possible to modify Peergos to make it work without IPFS.
-With some optimizations on the block size,
-we might have a great proof of concept of an end-to-end encrypted "cloud storage" over Garage.
-*If you happen to be working on this, please inform us!*

From an IPFS design perspective, it seems however that the numerous small blocks handled by the protocol
do not map trivially to efficient use of the S3 API, and thus could be a limiting factor to any optimization work.
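
To give an order of magnitude for this mismatch, here is a quick back-of-the-envelope calculation (ours, not a measurement from the article), assuming IPFS's default chunk size of 256 KiB: every large file turns into thousands of small blocks, each of which becomes its own S3 request.

```bash
# Rough estimate only, assuming the default 256 KiB chunk size:
# a 1 GiB file is split into about 4096 blocks, i.e. roughly 4096
# individual S3 requests for the raw data alone, before any
# DAG/metadata or pinning-related traffic.
echo $(( (1024 * 1024) / 256 ))   # 256 KiB blocks per 1 GiB -> 4096
```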
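For readers who want to experiment with the first improvement mentioned above (not acting as a DHT server), Kubo (go-ipfs) exposes this through its `Routing.Type` setting; the snippet below is an illustrative sketch of that idea, not the exact setup used in our tests, and the S3-backed block store itself would additionally rely on the go-ds-s3 datastore plugin.

```bash
# Illustration only: run the node as a DHT client rather than a DHT server,
# so it stops answering DHT queries on behalf of other peers.
# (Valid values and defaults for Routing.Type depend on your Kubo version.)
ipfs config Routing.Type dhtclient

# Verify the resulting configuration value.
ipfs config Routing.Type
```
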
@@ -224,7 +219,13 @@ On our side at Deuxfleurs, we will continue our investigations towards more *min
This choice makes sense for us as we want to reduce the ecological impact of our services
by deploying fewer servers that use less energy and are replaced less frequently.

-*We are aware of the existence of many other software projects for file sharing
+After discussing with the Peergos maintainers, we identified that it is possible to run Peergos without IPFS.
+With some optimizations on the block size, we envision great synergies between Garage and Peergos that could lead to
+an efficient and lightweight end-to-end encrypted "cloud storage" platform.
+*If you happen to be working on this, please inform us!*

+*We are also aware of the existence of many other software projects for file sharing
such as Nextcloud, Owncloud, Owncloud Infinite Scale, Seafile, Filestash, Pydio, SOLID, Remote Storage, etc.
Many of these could be connected to an S3 back-end such as Garage.
We might even try some of them in future blog posts, so stay tuned!*