forked from Deuxfleurs/garage

Compare commits

No commits in common. "33b3cf8e227048525918b7f232fc250e44dc2d47" and "39c3738a079f2a18ee1ef378c8f67050eb2f442b" have entirely different histories.

33b3cf8e22 ... 39c3738a07

6 changed files with 25 additions and 108 deletions
@@ -11,7 +11,6 @@ In this section, we cover the following web applications:
 | [Peertube](#peertube) | ✅ | Supported with the website endpoint, proxifying private videos unsupported |
 | [Mastodon](#mastodon) | ✅ | Natively supported |
 | [Matrix](#matrix) | ✅ | Tested with `synapse-s3-storage-provider` |
-| [ejabberd](#ejabberd) | ✅ | `mod_s3_upload` |
 | [Pixelfed](#pixelfed) | ❓ | Not yet tested |
 | [Pleroma](#pleroma) | ❓ | Not yet tested |
 | [Lemmy](#lemmy) | ✅ | Supported with pict-rs |
@@ -475,52 +474,6 @@ And add a new line. For example, to run it every 10 minutes:
 
 *External link:* [matrix-media-repo Documentation > S3](https://docs.t2bot.io/matrix-media-repo/configuration/s3-datastore.html)
 
-## ejabberd
-
-ejabberd is an XMPP server implementation which, with the `mod_s3_upload`
-module in the [ejabberd-contrib](https://github.com/processone/ejabberd-contrib)
-repository, can be integrated to store chat media files in Garage.
-
-For uploads, this module leverages presigned URLs - this allows XMPP clients to
-directly send media to Garage. Receiving clients then retrieve this media
-through the [static website](@/documentation/cookbook/exposing-websites.md)
-functionality.
-
-As the data itself is publicly accessible to someone with knowledge of the
-object URL - users are recommended to use
-[E2EE](@/documentation/cookbook/encryption.md) to protect this data-at-rest
-from unauthorized access.
-
-Install the module with:
-
-```bash
-ejabberdctl module_install mod_s3_upload
-```
-
-Create the required key and bucket with:
-
-```bash
-garage key new --name ejabberd
-garage bucket create objects.xmpp-server.fr
-garage bucket allow objects.xmpp-server.fr --read --write --key ejabberd
-garage bucket website --allow objects.xmpp-server.fr
-```
-
-The module can then be configured with:
-
-```
-mod_s3_upload:
-  #bucket_url: https://objects.xmpp-server.fr.my-garage-instance.mydomain.tld
-  bucket_url: https://my-garage-instance.mydomain.tld/objects.xmpp-server.fr
-  access_key_id: GK...
-  access_key_secret: ...
-  region: garage
-  download_url: https://objects.xmpp-server.fr
-```
-
-Other configuration options can be found in the
-[configuration YAML file](https://github.com/processone/ejabberd-contrib/blob/master/mod_s3_upload/conf/mod_s3_upload.yml).
-
 ## Pixelfed
 
 [Pixelfed Technical Documentation > Configuration](https://docs.pixelfed.org/technical-documentation/env.html#filesystem)
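The removed ejabberd section hinges on presigned URLs: the server signs a short-lived upload URL so the XMPP client can PUT media straight into Garage. As a rough illustration of that mechanism (not part of `mod_s3_upload` itself), here is a minimal sketch of generating such a presigned PUT URL with a recent version of the `aws-sdk-s3` Rust crate; the client setup, bucket and object key below are assumptions:

```rust
use std::time::Duration;

use aws_sdk_s3::presigning::PresigningConfig;

// Sketch of the presigned-upload idea used by mod_s3_upload: sign a
// short-lived PUT URL, hand it to the XMPP client, which uploads directly.
// Assumes `client` is an aws_sdk_s3::Client already configured with the
// Garage endpoint, region "garage" and the `ejabberd` key created above.
async fn presign_media_upload(
    client: &aws_sdk_s3::Client,
    object_key: &str, // e.g. "media/abc123/picture.jpg" (illustrative)
) -> Result<String, Box<dyn std::error::Error>> {
    let presigned = client
        .put_object()
        .bucket("objects.xmpp-server.fr")
        .key(object_key)
        .presigned(PresigningConfig::expires_in(Duration::from_secs(600))?)
        .await?;
    // The uploading client PUTs the file to this URL; receivers later fetch
    // it through the public website endpoint (`download_url` above).
    Ok(presigned.uri().to_string())
}
```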
@@ -586,7 +539,7 @@ secret_key = 'abcdef0123456789...'
 
 ```
 PICTRS__STORE__TYPE=object_storage
-PICTRS__STORE__ENDPOINT=http://my-garage-instance.mydomain.tld:3900
+PICTRS__STORE__ENDPOINT=http:/my-garage-instance.mydomain.tld:3900
 PICTRS__STORE__BUCKET_NAME=pictrs-data
 PICTRS__STORE__REGION=garage
 PICTRS__STORE__ACCESS_KEY=GK...
@@ -49,9 +49,14 @@ implements a protocol that has been clearly reviewed, Secure ScuttleButt's
 Secret Handshake protocol. This is why setting a `rpc_secret` is mandatory,
 and that's also why your nodes have super long identifiers.
 
-## HTTP API endpoints provided by Garage are in clear text
+## Encrypting traffic between a Garage node and your client
 
-Adding TLS support built into Garage is not currently planned.
+HTTP API endpoints provided by Garage are in clear text.
+You have multiple options to have encryption between your client and a node:
+
+- Setup a reverse proxy with TLS / ACME / Let's encrypt
+- Setup a Garage gateway locally, and only contact the garage daemon on `localhost`
+- Only contact your Garage daemon over a secure, encrypted overlay network such as Wireguard
 
 ## Garage stores data in plain text on the filesystem
 
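All three options listed in this hunk terminate TLS outside of Garage, so from the client's point of view only the endpoint URL changes. As a hedged sketch of the first option (a TLS reverse proxy in front of the S3 API), assuming a recent `aws-sdk-s3` crate and placeholder hostname and credentials:

```rust
use aws_sdk_s3::config::{BehaviorVersion, Credentials, Region};

// Sketch: with a TLS reverse proxy (nginx, caddy, ...) in front of the S3
// API, the client just targets an https:// endpoint; Garage itself can stay
// plain HTTP on localhost. Endpoint, keys and region name are illustrative.
fn s3_client_behind_tls_proxy() -> aws_sdk_s3::Client {
    let credentials = Credentials::new("GK...", "secret...", None, None, "static");
    let config = aws_sdk_s3::config::Builder::new()
        .behavior_version(BehaviorVersion::latest())
        .region(Region::new("garage")) // Garage's default region name
        .endpoint_url("https://s3.my-garage-instance.mydomain.tld") // TLS ends at the proxy
        .credentials_provider(credentials)
        .force_path_style(true) // bucket-in-path addressing, common with Garage
        .build();
    aws_sdk_s3::Client::from_conf(config)
}
```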
@@ -71,14 +76,6 @@ system such as Hashicorp Vault?
 
-# Adding data encryption using external tools
-
-## Encrypting traffic between a Garage node and your client
-
-You have multiple options to have encryption between your client and a node:
-- Setup a reverse proxy with TLS / ACME / Let's encrypt
-- Setup a Garage gateway locally, and only contact the garage daemon on `localhost`
-- Only contact your Garage daemon over a secure, encrypted overlay network such as Wireguard
 
 ## Encrypting data at rest
 
 Protects against the following threats:
 
@@ -104,13 +101,5 @@ Implementations are very specific to the various applications. Examples:
 in Matrix are probably encrypted using symmetric encryption, with a key that is
 distributed in the end-to-end encrypted message that contains the link to the object.
 
-- XMPP: clients normally support either OMEMO / OpenPGP for the E2EE of user
-  messages. Media files are encrypted per
-  [XEP-0454](https://xmpp.org/extensions/xep-0454.html).
-
-- Aerogramme: use the user's password as a key to decrypt data in the user's bucket
-
-- Cyberduck: comes with support for
-  [Cryptomator](https://docs.cyberduck.io/cryptomator/) which allows users to
-  create client-side vaults to encrypt files in before they are uploaded to a
-  cloud storage endpoint.
@@ -33,20 +33,7 @@ NoNewPrivileges=true
 WantedBy=multi-user.target
 ```
 
-**A note on hardening:** Garage will be run as a non privileged user, its user
-id is dynamically allocated by systemd (set with `DynamicUser=true`). It cannot
-access (read or write) home folders (`/home`, `/root` and `/run/user`), the
-rest of the filesystem can only be read but not written, only the path seen as
-`/var/lib/garage` is writable as seen by the service. Additionnaly, the process
-can not gain new privileges over time.
-
-For this to work correctly, your `garage.toml` must be set with
-`metadata_dir=/var/lib/garage/meta` and `data_dir=/var/lib/garage/data`. This
-is mandatory to use the DynamicUser hardening feature of systemd, which
-autocreates these directories as virtual mapping. If the directory
-`/var/lib/garage` already exists before starting the server for the first time,
-the systemd service might not start correctly. Note that in your host
-filesystem, Garage data will be held in `/var/lib/private/garage`.
+*A note on hardening: garage will be run as a non privileged user, its user id is dynamically allocated by systemd. It cannot access (read or write) home folders (/home, /root and /run/user), the rest of the filesystem can only be read but not written, only the path seen as /var/lib/garage is writable as seen by the service (mapped to /var/lib/private/garage on your host). Additionnaly, the process can not gain new privileges over time.*
 
 To start the service then automatically enable it at boot:
 
@@ -26,11 +26,8 @@ their content is correct, by verifying their hash. Any block found to be corrupt
 (e.g. by bitrot or by an accidental manipulation of the datastore) will be
 restored from another node that holds a valid copy.
 
-Scrubs are automatically scheduled by Garage to run every 25-35 days (the
-actual time is randomized to spread load across nodes). The next scheduled run
-can be viewed with `garage worker get`.
-
-A scrub can also be launched manually using `garage repair scrub start`.
+A scrub is run automatically by Garage every 30 days. It can also be launched
+manually using `garage repair scrub start`.
 
 To view the status of an ongoing scrub, first find the task ID of the scrub worker
 using `garage worker list`. Then, run `garage worker info <scrub_task_id>` to
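The removed lines describe a randomized 25-35 day scrub interval. For illustration only (this is not Garage's actual scheduler, just the idea behind the jitter), a sketch with the `rand` crate:

```rust
use std::time::Duration;

use rand::Rng;

// Illustrative only: pick the next scrub 25 to 35 days out, so that nodes
// started at the same time do not all begin scrubbing (and loading their
// disks) at the same moment.
fn next_scrub_delay() -> Duration {
    const SECS_PER_DAY: u64 = 24 * 60 * 60;
    let days: u64 = rand::thread_rng().gen_range(25..=35);
    Duration::from_secs(days * SECS_PER_DAY)
}
```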
@@ -82,7 +79,7 @@ To help make the difference between cases 1 and cases 2 and 3, you may use the
 `garage block info` command to see which objects hold a reference to each block.
 
 In the second case (transient errors), Garage will try to fetch the block again
-after a certain time, so the error should disappear naturally. You can also
+after a certain time, so the error should disappear natuarlly. You can also
 request Garage to try to fetch the block immediately using `garage block retry-now`
 if you have fixed the transient issue.
 
@@ -311,19 +311,23 @@ impl BatchOutputKind {
             .collect::<Vec<_>>()
     }
 
-    fn display_poll_range_output(&self, poll_range: PollRangeResult) -> ! {
+    fn display_poll_range_output(
+        &self,
+        seen_marker: String,
+        values: BTreeMap<String, CausalValue>,
+    ) -> ! {
         if self.json {
             let json = serde_json::json!({
-                "values": self.values_json(poll_range.items),
-                "seen_marker": poll_range.seen_marker,
+                "values": self.values_json(values),
+                "seen_marker": seen_marker,
             });
 
             let stdout = std::io::stdout();
             serde_json::to_writer_pretty(stdout, &json).unwrap();
             exit(0)
         } else {
-            println!("seen marker: {}", poll_range.seen_marker);
-            self.display_human_output(poll_range.items)
+            println!("seen marker: {}", seen_marker);
+            self.display_human_output(values)
         }
     }
 
@@ -497,8 +501,8 @@ async fn main() -> Result<(), Error> {
             )
             .await?;
             match res {
-                Some(poll_range_output) => {
-                    output_kind.display_poll_range_output(poll_range_output);
+                Some((items, seen_marker)) => {
+                    output_kind.display_poll_range_output(seen_marker, items);
                 }
                 None => {
                     if output_kind.json {
@@ -182,7 +182,7 @@ impl K2vClient {
         filter: Option<PollRangeFilter<'_>>,
         seen_marker: Option<&str>,
         timeout: Option<Duration>,
-    ) -> Result<Option<PollRangeResult>, Error> {
+    ) -> Result<Option<(BTreeMap<String, CausalValue>, String)>, Error> {
         let timeout = timeout.unwrap_or(DEFAULT_POLL_TIMEOUT);
 
         let request = PollRangeRequest {
@@ -217,10 +217,7 @@ impl K2vClient {
             })
             .collect::<BTreeMap<_, _>>();
 
-        Ok(Some(PollRangeResult {
-            items,
-            seen_marker: resp.seen_marker,
-        }))
+        Ok(Some((items, resp.seen_marker)))
     }
 
     /// Perform an InsertItem request, inserting a value for a single pk+sk.
@@ -573,7 +570,6 @@ pub struct Filter<'a> {
     pub reverse: bool,
 }
 
-/// Filter for a poll range operations.
 #[derive(Debug, Default, Clone, Serialize)]
 pub struct PollRangeFilter<'a> {
     pub start: Option<&'a str>,
@@ -581,15 +577,6 @@ pub struct PollRangeFilter<'a> {
     pub prefix: Option<&'a str>,
 }
 
-/// Response to a poll_range query
-#[derive(Debug, Default, Clone, Serialize)]
-pub struct PollRangeResult {
-    /// List of items that have changed since last PollRange call.
-    pub items: BTreeMap<String, CausalValue>,
-    /// opaque string representing items already seen for future PollRange calls.
-    pub seen_marker: String,
-}
-
 #[derive(Debug, Clone, Serialize)]
 #[serde(rename_all = "camelCase")]
 struct PollRangeRequest<'a> {
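On the base side of this diff, `poll_range` returns the `PollRangeResult` struct removed above rather than a bare tuple. A sketch of a polling loop against that struct-based API; the partition key, timeout, and the `CausalValue` field access are assumptions inferred from the hunks, not taken verbatim from the crate:

```rust
use std::time::Duration;

use k2v_client::{Error, K2vClient};

// Sketch of a PollRange loop against the struct-based API from the base side
// of this diff: feed each seen_marker back in so only new changes arrive.
// The partition key "alerts" and the 60 s timeout are illustrative.
async fn watch_partition(client: &K2vClient) -> Result<(), Error> {
    let mut seen_marker: Option<String> = None;
    loop {
        let res = client
            .poll_range(
                "alerts",
                None, // no PollRangeFilter: watch the whole partition
                seen_marker.as_deref(),
                Some(Duration::from_secs(60)),
            )
            .await?;
        if let Some(poll_range) = res {
            for (sort_key, causal_value) in &poll_range.items {
                println!("{}: {} concurrent value(s)", sort_key, causal_value.value.len());
            }
            seen_marker = Some(poll_range.seen_marker);
        }
        // On timeout (None), poll again with the same marker.
    }
}
```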