Better explain myself

parent 4dcdfc3dc3
commit 0ac2f00940

1 changed file with 4 additions and 3 deletions

@@ -34,7 +34,7 @@ We also see [in the code](
 https://github.com/matrix-org/synapse/blob/b996782df51eaa5dd30635a7c59c93994d3a735e/synapse/rest/media/v1/media_storage.py#L202-L211) that the *media provider* can be referred to as the local cache and that some parts of the code may require that a file is in the local cache.

 As a conclusion, the best we can do is to keep the *media provider* as a local cache.
-But even in this case, it is our responsibility to garbage collect the cache.
+The concept of cache is very artificial as there is no integrated tool for cache eviction: it is our responsibility to garbage collect the cache.

 ## Migration
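
Since evicting this cache is on us, here is a minimal sketch of what that garbage collection could look like, using the `s3_media_upload` tool introduced in the section below; the schedule, paths, bucket name, and one-month threshold are all placeholder assumptions, not something this post prescribes:

```
#!/bin/sh
# Hypothetical nightly job (run from cron): push media not accessed for one
# month to S3 and delete the local copies, i.e. evict them from the cache.
cd /etc/synapse-s3   # assumed directory holding the tool's database.yml
s3_media_upload update-db 1m
s3_media_upload check-deleted /var/lib/matrix-synapse/media
s3_media_upload upload /var/lib/matrix-synapse/media my-bucket --delete
```
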
@@ -54,9 +54,10 @@ media_storage_providers:
 secret_access_key: XXXXXXXXXXX
 ```

-But registering it like that will only be useful for our new media (because we activated `store_local` and `store_remote` for local and remote content that must automatically be pushed to our S3 backend).
+Registering the module like that will only be useful for our new media: `store_local: True` and `store_remote: True` mean that new media will be uploaded to our S3 target, and we want to check that the upload succeeded before notifying the user (`store_synchronous: True`). The rationale for these store options is to let administrators handle the upload with a *pull approach* rather than with our *push approach*. In practice, with the *pull approach*, administrators have to run a script regularly (with a cron job, for example) to copy the files onto the target. Such a script, named `s3_media_upload`, is provided by the extension developers.

-Old media must be migrated with a script named `s3_media_upload`. First, we need some setup to use this tool:
+This script is also the sole way to migrate old media (which cannot be *pushed*), so we will still have to use it.
+First, we need some setup to use this tool:
 - postgres credentials + endpoint must be stored in a `database.yml` file
 - s3 credentials must be configured as per the [boto convention](https://boto3.amazonaws.com/v1/documentation/api/1.9.46/guide/configuration.html) and the endpoint can be specified on the command line
 - the path to the local cache/media repository is also passed through the command line