Compare commits: optimal-la ... main
423 commits
SHA1
----
3a0e074047
95ae09917b
a7ababb5db
be1a16b42b
91e764a2bf
aa79810596
143a349f55
9cfe55ab60
2548a247f2
d5bb50d738
fc635f7072
f8b3883611
51b9731a08
5f86b48f97
51eac97260
e78566591b
32e5686ad8
06369c8f4a
cece1be1bb
769b6fe054
e66c78d6ea
51011e68b1
a54a1f5616
9b4ce4a8ad
2bbe2da5ad
29353adbe5
c5cafa0000
74478443ec
d66d81ae2d
7d8296ec59
f607ac6792
96d1d81ab7
5185701aa8
d539a56d3a
bd50333ade
170c6a2eac
7f7d85654d
245a0882e1
63da1d2443
24e533f262
67b1457c77
59bfc68f2e
a98855157b
4d7bbf7878
18eb73d52e
79ca8e76a4
1bbf604224
6ba611361e
c855284760
b1ca1784a1
f0b7a0af3d
194549ca46
202d3f0e3c
7605d0cb11
031804171a
aee0d97f22
098c388f1b
e716320b0a
e466edbaec
76355453dd
ee494f5aa2
f31d98097a
a6da7e588f
e5835704b7
7f8bf2d801
4297233d3e
b94ba47f29
33b3cf8e22
736083063f
a5ae566e0b
185f9e78f3
fb971a5f01
6af2cde23f
97eb389274
5e291c64b3
9092c71a01
120f8b3bfb
39c3738a07
7169ee6ee6
dd7533a260
9233661967
3aadba724d
5a186be363
01346143ca
eb9cecf05c
802ed75721
fc29548933
1ea4937c8b
6aec73b641
8a945ee996
180992d0f1
44548a9114
32ad4538ee
ef8a7add08
2d46d24d06
b770504126
6b69404f1a
011f473048
fd7dbea5b8
bd6485565e
4d6e6fc155
02ba9016ab
9d833bb7ef
c3d3b837eb
130e01505b
e2ce5970c6
644e872264
03efc191c1
4420db7310
746b0090e4
c26a4308b4
217d429937
a1cec2cd60
b66f247580
16f2a32bb7
472444ed8e
bb03805b58
e4f955d672
ea9b15f669
2e6bb3f766
375270afd1
c783194e8b
fdcd7dee5a
0f0795103d
c9d26e8c50
b925f53dc3
2f495575d8
9e0a9c1c15
9c788059e2
5684e1990c
14c50f2f84
0fab9c3b8c
75759a163c
d2deee0b8b
8499cd5c21
4ea7983093
d5e39d11eb
06caa12d49
6d3ace1ea9
833cf082da
1ecd88c01f
5efcdc0de3
a16eb7e4b8
6742070517
6894878146
02b0ba5f44
fb3bd11dce
c168383113
04a0063df9
a2a35ac7a8
f167310f42
66ed0bdd91
11b154b33b
703ac43f1c
000006d689
0a1ddcf630
d6ffa57f40
7fcc153e7c
f37ec584b6
dc6be39833
70b5424b99
2687fb7fa8
24e43f1aa0
8ad6efb338
3b498c7c47
40fa1242f0
9ea154ae9c
4421378023
25f2a46fc3
3325928c13
d218f475cb
7b65dd24e2
b70cc0a940
9e061d5a70
db69267a56
2dc80abbb1
148b66b843
53d09eb00f
00dcfc97a5
4e0fc3d6c9
e4e5196066
0d0906b066
b8123fb6cd
3d37be33a8
ff70e09aa0
f056ad569d
a5f7a79250
3b22da251d
f0717dd169
e818e39321
a15eb115c8
ae0934e018
6b8d634cc2
ee88ccf2b2
4c143776bf
8b4d0adc75
c2a9f00a58
d14678e0ac
179fda9fb6
80e2326998
94d70bec69
656b8d42de
fba8224cf0
1b6ec74748
30f1636a00
8013a5cd58
2ba9463a8a
7f715ba94f
44f8b1d71a
56384677fa
4cff37397f
5f412abd4e
c753a9dfb6
ae9c7a2900
7ab27f84b8
55c369137d
a1005c26b6
f9573b6912
4d3a5f29e0
e2173d00a9
9e0567dce4
e85a200189
9c354f0a8f
004bb5b4f1
0c618f8a89
df30f3df4b
50bce43f25
ac6751f509
b999bb36af
d20e8c9256
fd03b184b3
da6f7b0dda
e17970773a
88b66c69a5
f2c256cac4
a08e01f17a
d6af95d205
c56794655e
8e93d69974
246f7468cd
3113f6b5f2
1dff62564f
590a0a8450
611792ddcf
94d559ae00
5fb383fe4c
0da054194b
c7d0ad0aa0
efb6b6e868
f251b4721f
3dc655095f
20c1cdf662
f952e37ba7
fbafa76284
63e22e71f2
f6eaf3661c
d3b2a68988
b4a1a6a32f
bcac889f9a
9e08a05e69
69497be5c6
36944f1839
1311742fe0
f2492107d7
93c3f8fc8c
1c435fce09
dead123892
5c3075fe01
9adf5ca76d
18bf45061a
aff9c264c8
3250be7c48
fcc5033466
97bb110219
0010f705ef
065d6e1e06
d44e8366e7
cbb522e179
f5746a46f9
4962b88f8b
100b01e859
9bf94faaa1
1f5e3aaf8e
f5a7bc3736
fe850f62c9
7416ba97ef
dac254a6e7
94d723f27c
be6b8f419d
638c5a3ce0
399f137fd0
5b5ca63cf6
cbfae673e8
bba13f40fc
ba384e61c0
09a3dad0f2
32aab06929
de1111076b
b83517d521
57eabe7879
43fd6c1526
789540ca37
4cfb469d2b
df1d9a9873
aac348fe93
9f5419f465
a48e2e0cb2
d6ea0cbefa
7b62fe3f0b
f2106c2733
02e8eb167e
329c0e64f9
29dbcb8278
f3f27293df
13c5549886
936b6cb563
0650a43cf1
4eb8ca3a52
1fc220886a
73ed9c7403
1d5bdc17a4
c106304b9c
33f25d26c7
d6d571d512
a54b67740d
8d5505514f
426d8784da
a81200d345
cdb2a591e9
582b076179
939a6d67e8
76230f2028
6775569525
6b857a9b8c
1649002e2b
822e344845
7f7d53cfa9
fd10200bec
0c7ed0b0af
559e924cc2
e852c91d18
e9b0068079
49a138b670
e94d6f78d7
1af4a5ed56
1fcd0b371b
13c8662126
e6f14ab5cf
510b620108
dfc131850a
d4af27f920
0d6b05bb6c
a19bfef508
d56c472712
2183518edc
83c8467e23
f8e528c15d
d1279e04f3
041b60ed1d
f8d5409894
d6040e32a6
d7f90cabb0
687660b27f
9d82196945
a51e8d94c6
de9d6cddf7
f7c65e830e
0e61e3b6fb
a0abf41762
2ac75018a1
980572a887
7a0014b6f7
edb0b9c1ee
f58a813a36
defd7d9e63
533afcf4e1
5ea5fd2130
35f8e8e2fb
d5a2502b09
d7868c48a4
280d1be7b1
2065f011ca
243b7c9a1c
a3afc761b6
19bdd1c799
448dcc5cf4
26121bb619
280330ac72
4d7b4d9d20
fc450ec13a
379b2049f5
293139a94a
54e800ef8d
1e40c93fd0
0cfb56d33e
c1fb65194c
67941000ee
60c26fbc62
e76dba9561
7fafd14a25
555a54ec40
fc8f795bba
a7af0c8af9
bcc9772470
c4e4cc1156
05547f2ba6
39ac295eb7
cf23aee183
74ea449f4b
eabb37b53f
e7824faa17
8dfc909759
485109ea60
ebe8a41f2d
dc50fa3b34
a976c9190c
72a0f90070
d814deb806
6a09f16da7
23207d18a0
3024405a65
5f0928f89c
0a01b34e81
1  .envrc  (new file)
@@ -0,0 +1 @@
use flake

1  .gitignore  (vendored)
@@ -3,3 +3,4 @@
/pki
**/*.rs.bk
*.swp
/.direnv

2480  Cargo.lock  (generated)

13  Cargo.toml
@@ -11,10 +11,23 @@ members = [
	"src/web",
	"src/garage",
	"src/k2v-client",
	"src/format-table",
]

default-members = ["src/garage"]

[workspace.dependencies]
format_table = { version = "0.1.1", path = "src/format-table" }
garage_api = { version = "0.8.4", path = "src/api" }
garage_block = { version = "0.8.4", path = "src/block" }
garage_db = { version = "0.8.4", path = "src/db", default-features = false }
garage_model = { version = "0.8.4", path = "src/model", default-features = false }
garage_rpc = { version = "0.8.4", path = "src/rpc" }
garage_table = { version = "0.8.4", path = "src/table" }
garage_util = { version = "0.8.4", path = "src/util" }
garage_web = { version = "0.8.4", path = "src/web" }
k2v-client = { version = "0.0.4", path = "src/k2v-client" }

[profile.dev]
lto = "off"

2  Makefile
@@ -4,7 +4,7 @@ all:
	clear; cargo build

release:
	nix-build --arg release true
	nix-build --attr pkgs.amd64.release --no-build-output

shell:
	nix-shell

32  default.nix
@@ -1,7 +1,4 @@
{
system ? builtins.currentSystem,
git_version ? null,
}:
{ system ? builtins.currentSystem, git_version ? null, }:

with import ./nix/common.nix;

@@ -11,23 +8,22 @@ let

build_debug_and_release = (target: {
debug = (compile {
inherit target git_version;
inherit system target git_version pkgsSrc cargo2nixOverlay;
release = false;
}).workspace.garage {
compileMode = "build";
};
}).workspace.garage { compileMode = "build"; };

release = (compile {
inherit target git_version;
inherit system target git_version pkgsSrc cargo2nixOverlay;
release = true;
}).workspace.garage {
compileMode = "build";
};
}).workspace.garage { compileMode = "build"; };
});

test = (rustPkgs: pkgs.symlinkJoin {
test = (rustPkgs:
pkgs.symlinkJoin {
name = "garage-tests";
paths = builtins.map (key: rustPkgs.workspace.${key} { compileMode = "test"; }) (builtins.attrNames rustPkgs.workspace);
paths =
builtins.map (key: rustPkgs.workspace.${key} { compileMode = "test"; })
(builtins.attrNames rustPkgs.workspace);
});

in {

@@ -39,7 +35,7 @@ in {
};
test = {
amd64 = test (compile {
inherit git_version;
inherit system git_version pkgsSrc cargo2nixOverlay;
target = "x86_64-unknown-linux-musl";
features = [
"garage/bundled-libs"

@@ -52,11 +48,9 @@ in {
};
clippy = {
amd64 = (compile {
inherit git_version;
inherit system git_version pkgsSrc cargo2nixOverlay;
target = "x86_64-unknown-linux-musl";
compiler = "clippy";
}).workspace.garage {
compileMode = "build";
};
}).workspace.garage { compileMode = "build"; };
};
}

17  doc/api/README.md  (new file)
@@ -0,0 +1,17 @@
# Browse doc

Run in this directory:

```
python3 -m http.server
```

And open in your browser:
- http://localhost:8000/garage-admin-v0.html

# Validate doc

```
wget https://repo1.maven.org/maven2/org/openapitools/openapi-generator-cli/6.1.0/openapi-generator-cli-6.1.0.jar -O openapi-generator-cli.jar
java -jar openapi-generator-cli.jar validate -i garage-admin-v0.yml
```

59  doc/api/css/redoc.css  (new file)
@@ -0,0 +1,59 @@
/* montserrat-300 - latin */
@font-face {
  font-family: 'Montserrat';
  font-style: normal;
  font-weight: 300;
  src: local(''),
       url('../fonts/montserrat-v25-latin-300.woff2') format('woff2'), /* Chrome 26+, Opera 23+, Firefox 39+ */
       url('../fonts/montserrat-v25-latin-300.woff') format('woff'); /* Chrome 6+, Firefox 3.6+, IE 9+, Safari 5.1+ */
}

/* montserrat-regular - latin */
@font-face {
  font-family: 'Montserrat';
  font-style: normal;
  font-weight: 400;
  src: local(''),
       url('../fonts/montserrat-v25-latin-regular.woff2') format('woff2'), /* Chrome 26+, Opera 23+, Firefox 39+ */
       url('../fonts/montserrat-v25-latin-regular.woff') format('woff'); /* Chrome 6+, Firefox 3.6+, IE 9+, Safari 5.1+ */
}

/* montserrat-700 - latin */
@font-face {
  font-family: 'Montserrat';
  font-style: normal;
  font-weight: 700;
  src: local(''),
       url('../fonts/montserrat-v25-latin-700.woff2') format('woff2'), /* Chrome 26+, Opera 23+, Firefox 39+ */
       url('../fonts/montserrat-v25-latin-700.woff') format('woff'); /* Chrome 6+, Firefox 3.6+, IE 9+, Safari 5.1+ */
}
/* roboto-300 - latin */
@font-face {
  font-family: 'Roboto';
  font-style: normal;
  font-weight: 300;
  src: local(''),
       url('../fonts/roboto-v30-latin-300.woff2') format('woff2'), /* Chrome 26+, Opera 23+, Firefox 39+ */
       url('../fonts/roboto-v30-latin-300.woff') format('woff'); /* Chrome 6+, Firefox 3.6+, IE 9+, Safari 5.1+ */
}

/* roboto-regular - latin */
@font-face {
  font-family: 'Roboto';
  font-style: normal;
  font-weight: 400;
  src: local(''),
       url('../fonts/roboto-v30-latin-regular.woff2') format('woff2'), /* Chrome 26+, Opera 23+, Firefox 39+ */
       url('../fonts/roboto-v30-latin-regular.woff') format('woff'); /* Chrome 6+, Firefox 3.6+, IE 9+, Safari 5.1+ */
}

/* roboto-700 - latin */
@font-face {
  font-family: 'Roboto';
  font-style: normal;
  font-weight: 700;
  src: local(''),
       url('../fonts/roboto-v30-latin-700.woff2') format('woff2'), /* Chrome 26+, Opera 23+, Firefox 39+ */
       url('../fonts/roboto-v30-latin-700.woff') format('woff'); /* Chrome 6+, Firefox 3.6+, IE 9+, Safari 5.1+ */
}

BIN  doc/api/fonts/montserrat-v25-latin-300.woff  (new file)
BIN  doc/api/fonts/montserrat-v25-latin-300.woff2  (new file)
BIN  doc/api/fonts/montserrat-v25-latin-700.woff  (new file)
BIN  doc/api/fonts/montserrat-v25-latin-700.woff2  (new file)
BIN  doc/api/fonts/montserrat-v25-latin-regular.woff  (new file)
BIN  doc/api/fonts/montserrat-v25-latin-regular.woff2  (new file)
BIN  doc/api/fonts/roboto-v30-latin-300.woff  (new file)
BIN  doc/api/fonts/roboto-v30-latin-300.woff2  (new file)
BIN  doc/api/fonts/roboto-v30-latin-700.woff  (new file)
BIN  doc/api/fonts/roboto-v30-latin-700.woff2  (new file)
BIN  doc/api/fonts/roboto-v30-latin-regular.woff  (new file)
BIN  doc/api/fonts/roboto-v30-latin-regular.woff2  (new file)

24  doc/api/garage-admin-v0.html  (new file)
@@ -0,0 +1,24 @@
<!DOCTYPE html>
<html>
  <head>
    <title>Garage Administration API v0</title>
    <!-- needed for adaptive design -->
    <meta charset="utf-8"/>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link href="./css/redoc.css" rel="stylesheet">

    <!--
    Redoc doesn't change outer page styles
    -->
    <style>
      body {
        margin: 0;
        padding: 0;
      }
    </style>
  </head>
  <body>
    <redoc spec-url='./garage-admin-v0.yml'></redoc>
    <script src="./redoc.standalone.js"> </script>
  </body>
</html>

1218  doc/api/garage-admin-v0.yml  (new file)
1806  doc/api/redoc.standalone.js  (new file)

54  doc/book/build/_index.md  (new file)
@@ -0,0 +1,54 @@
+++
title = "Build your own app"
weight = 40
sort_by = "weight"
template = "documentation.html"
+++

Garage has many APIs that you can rely on to build complex applications.
In this section, we reference the existing SDKs and give some code examples.


## ⚠️ DISCLAIMER

**K2V AND ADMIN SDK ARE TECHNICAL PREVIEWS**. The following limitations apply:
- The API is not complete; some actions are possible only through the `garage` binary
- The underlying admin API is not yet stable nor complete; it can break at any time
- The generator configuration is still being tweaked; the library might break at any time due to a generator change
- Because the API and the library are not stable, none of them are published in a package manager (npm, pypi, etc.)
- This code has not been extensively tested; some things might not work (please report!)

To have the best experience possible, please consider:
- Make sure that the version of the library you are using is pinned (`go.sum`, `package-lock.json`, `requirements.txt`).
- Before upgrading your Garage cluster, make sure that you can find a version of this SDK that works with your targeted version and that you are able to update your own code to work with this new version of the library.
- Join our Matrix channel at `#garage:deuxfleurs.fr`, say that you are interested in this SDK, and report any friction.
- If stability is critical, mirror this repository on your own infrastructure, regenerate the SDKs and upgrade them at your own pace.


## About the APIs

Code can interact with Garage through 3 different APIs: S3, K2V, and Admin.
Each of them has a specific scope.

### S3

De-facto standard, introduced by Amazon, designed to store blobs of data.

### K2V

A simple database API similar to RiakKV or DynamoDB.
Think of a key-value store with some additional operations.
Its design is inspired by Distributed Hash Tables (DHT).

More information:
- [In the reference manual](@/documentation/reference-manual/k2v.md)


### Administration

Garage operations can also be automated through a REST API.
We are currently building this SDK for [Python](@/documentation/build/python.md#admin-api), [Javascript](@/documentation/build/javascript.md#administration) and [Golang](@/documentation/build/golang.md#administration).

More information:
- [In the reference manual](@/documentation/reference-manual/admin-api.md)
- [Full specification](https://garagehq.deuxfleurs.fr/api/garage-admin-v0.html)
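As a first contact with this API, the cluster status can be queried directly with `curl`. This is a minimal sketch: the endpoint address and the `s3cr3t` token are example values, and the `/v0/status` path follows the v0 specification linked above.

```bash
# Query the cluster status endpoint of the admin API (v0).
# Host and bearer token below are example values.
curl -s -H "Authorization: Bearer s3cr3t" http://localhost:3903/v0/status
```
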
69  doc/book/build/golang.md  (new file)
@@ -0,0 +1,69 @@
+++
title = "Golang"
weight = 30
+++

## S3

*Coming soon*

Some refs:
- Minio minio-go-sdk
  - [Reference](https://docs.min.io/docs/golang-client-api-reference.html)

- Amazon aws-sdk-go-v2
  - [Installation](https://aws.github.io/aws-sdk-go-v2/docs/getting-started/)
  - [Reference](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/s3)
  - [Example](https://aws.github.io/aws-sdk-go-v2/docs/code-examples/s3/putobject/)

## K2V

*Coming soon*

## Administration

Install the SDK with:

```bash
go get git.deuxfleurs.fr/garage-sdk/garage-admin-sdk-golang
```

A short example:

```go
package main

import (
	"context"
	"fmt"
	"os"
	garage "git.deuxfleurs.fr/garage-sdk/garage-admin-sdk-golang"
)

func main() {
	// Set Host and other parameters
	configuration := garage.NewConfiguration()
	configuration.Host = "127.0.0.1:3903"

	// We can now generate a client
	client := garage.NewAPIClient(configuration)

	// Authentication is handled through the context pattern
	ctx := context.WithValue(context.Background(), garage.ContextAccessToken, "s3cr3t")

	// Send a request
	resp, r, err := client.NodesApi.GetNodes(ctx).Execute()
	if err != nil {
		fmt.Fprintf(os.Stderr, "Error when calling `NodesApi.GetNodes`: %v\n", err)
		fmt.Fprintf(os.Stderr, "Full HTTP response: %v\n", r)
	}

	// Process the response
	fmt.Fprintf(os.Stdout, "Target hostname: %v\n", resp.KnownNodes[resp.Node].Hostname)
}
```

See also:
- [generated doc](https://git.deuxfleurs.fr/garage-sdk/garage-admin-sdk-golang)
- [examples](https://git.deuxfleurs.fr/garage-sdk/garage-admin-sdk-generator/src/branch/main/example/golang)

55  doc/book/build/javascript.md  (new file)
@@ -0,0 +1,55 @@
+++
title = "Javascript"
weight = 10
+++

## S3

*Coming soon*.

Some refs:
- Minio SDK
  - [Reference](https://docs.min.io/docs/javascript-client-api-reference.html)

- Amazon aws-sdk-js
  - [Installation](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/getting-started.html)
  - [Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html)
  - [Example](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/s3-example-creating-buckets.html)

## K2V

*Coming soon*

## Administration

Install the SDK with:

```bash
npm install --save git+https://git.deuxfleurs.fr/garage-sdk/garage-admin-sdk-js.git
```

A short example:

```javascript
const garage = require('garage_administration_api_v0garage_v0_8_0');

const api = new garage.ApiClient("http://127.0.0.1:3903/v0");
api.authentications['bearerAuth'].accessToken = "s3cr3t";

const [node, layout, key, bucket] = [
	new garage.NodesApi(api),
	new garage.LayoutApi(api),
	new garage.KeyApi(api),
	new garage.BucketApi(api),
];

node.getNodes().then((data) => {
	console.log(`nodes: ${Object.values(data.knownNodes).map(n => n.hostname)}`)
}, (error) => {
	console.error(error);
});
```

See also:
- [sdk repository](https://git.deuxfleurs.fr/garage-sdk/garage-admin-sdk-js)
- [examples](https://git.deuxfleurs.fr/garage-sdk/garage-admin-sdk-generator/src/branch/main/example/javascript)

@@ -1,8 +1,10 @@
+++
title = "Your code (PHP, JS, Go...)"
weight = 30
title = "Others"
weight = 99
+++

## S3

If you are developing a new application, you may want to use Garage to store your users' media.

The S3 API that Garage uses is a standard REST API, so as long as you can make HTTP requests,

@@ -13,44 +15,14 @@ Instead, there are some libraries already avalaible.

Some of them are maintained by Amazon, some by Minio, others by the community.

## PHP
### PHP

- Amazon aws-sdk-php
  - [Installation](https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/getting-started_installation.html)
  - [Reference](https://docs.aws.amazon.com/aws-sdk-php/v3/api/api-s3-2006-03-01.html)
  - [Example](https://docs.aws.amazon.com/sdk-for-php/v3/developer-guide/s3-examples-creating-buckets.html)

## Javascript

- Minio SDK
  - [Reference](https://docs.min.io/docs/javascript-client-api-reference.html)

- Amazon aws-sdk-js
  - [Installation](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/getting-started.html)
  - [Reference](https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html)
  - [Example](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/s3-example-creating-buckets.html)

## Golang

- Minio minio-go-sdk
  - [Reference](https://docs.min.io/docs/golang-client-api-reference.html)

- Amazon aws-sdk-go-v2
  - [Installation](https://aws.github.io/aws-sdk-go-v2/docs/getting-started/)
  - [Reference](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/s3)
  - [Example](https://aws.github.io/aws-sdk-go-v2/docs/code-examples/s3/putobject/)

## Python

- Minio SDK
  - [Reference](https://docs.min.io/docs/python-client-api-reference.html)

- Amazon boto3
  - [Installation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html)
  - [Reference](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html)
  - [Example](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-uploading-files.html)

## Java
### Java

- Minio SDK
  - [Reference](https://docs.min.io/docs/java-client-api-reference.html)

@@ -60,23 +32,18 @@ Some of them are maintained by Amazon, some by Minio, others by the community.
  - [Reference](https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/services/s3/S3Client.html)
  - [Example](https://docs.aws.amazon.com/sdk-for-java/latest/developer-guide/examples-s3-objects.html)

## Rust

- Amazon aws-rust-sdk
  - [Github](https://github.com/awslabs/aws-sdk-rust)

## .NET
### .NET

- Minio SDK
  - [Reference](https://docs.min.io/docs/dotnet-client-api-reference.html)

- Amazon aws-dotnet-sdk

## C++
### C++

- Amazon aws-cpp-sdk

## Haskell
### Haskell

- Minio SDK
  - [Reference](https://docs.min.io/docs/haskell-client-api-reference.html)

138  doc/book/build/python.md  (new file)
@@ -0,0 +1,138 @@
+++
title = "Python"
weight = 20
+++

## S3

### Using Minio SDK

First install the SDK:

```bash
pip3 install minio
```

Then instantiate a client object using your Garage root domain, API key and secret:

```python
import io
import minio

client = minio.Minio(
    "your.domain.tld",
    "GKyourapikey",
    "abcd[...]1234",
    # Force the region, this is specific to garage
    region="region",
)
```

Then use all the standard S3 endpoints as implemented by the Minio SDK:

```python
# List buckets
print(client.list_buckets())

# Put an object containing 'content' to /path in bucket named 'bucket':
content = b"content"
client.put_object(
    "bucket",
    "path",
    io.BytesIO(content),
    len(content),
)

# Read the object back and check contents
data = client.get_object("bucket", "path").read()
assert data == content
```

For further documentation, see the Minio SDK
[Reference](https://docs.min.io/docs/python-client-api-reference.html)

### Using Amazon boto3

*Coming soon*

See the official documentation:
- [Installation](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/quickstart.html)
- [Reference](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html)
- [Example](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/s3-uploading-files.html)
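In the meantime, boto3 can already talk to Garage by overriding its endpoint URL. A minimal sketch, assuming a Garage S3 endpoint at `http://localhost:3900`, the `garage` region, and example credentials:

```python
import boto3

# Endpoint, region and credentials below are example values;
# substitute those of your own Garage cluster.
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:3900",
    region_name="garage",
    aws_access_key_id="GKyourapikey",
    aws_secret_access_key="abcd[...]1234",
)

# List the buckets accessible to this key
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```
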
## K2V

*Coming soon*

## Admin API

You need at least Python 3.6, pip, and setuptools.
Because the python package is in a subfolder, the command is a bit more complicated than usual:

```bash
pip3 install --user 'git+https://git.deuxfleurs.fr/garage-sdk/garage-admin-sdk-python'
```

Now, let's imagine you have a fresh Garage instance running on localhost, with the admin API configured on port 3903 with the bearer token `s3cr3t`:

```python
import garage_admin_sdk
from garage_admin_sdk.apis import *
from garage_admin_sdk.models import *

configuration = garage_admin_sdk.Configuration(
    host = "http://localhost:3903/v0",
    access_token = "s3cr3t"
)

# Init APIs
api = garage_admin_sdk.ApiClient(configuration)
nodes, layout, keys, buckets = NodesApi(api), LayoutApi(api), KeyApi(api), BucketApi(api)

# Display some info on the node
status = nodes.get_nodes()
print(f"running garage {status.garage_version}, node_id {status.node}")

# Change layout of this node
current = layout.get_layout()
layout.add_layout({
    status.node: NodeClusterInfo(
        zone = "dc1",
        capacity = 1,
        tags = [ "dev" ],
    )
})
layout.apply_layout(LayoutVersion(
    version = current.version + 1
))

# Create key, allow it to create buckets
kinfo = keys.add_key(AddKeyRequest(name="openapi"))

allow_create = UpdateKeyRequestAllow(create_bucket=True)
keys.update_key(kinfo.access_key_id, UpdateKeyRequest(allow=allow_create))

# Create a bucket, allow key, set quotas
binfo = buckets.create_bucket(CreateBucketRequest(global_alias="documentation"))
binfo = buckets.allow_bucket_key(AllowBucketKeyRequest(
    bucket_id=binfo.id,
    access_key_id=kinfo.access_key_id,
    permissions=AllowBucketKeyRequestPermissions(read=True, write=True, owner=True),
))
binfo = buckets.update_bucket(binfo.id, UpdateBucketRequest(
    quotas=UpdateBucketRequestQuotas(max_size=19029801, max_objects=1500)))

# Display key
print(f"""
cluster ready
key id is {kinfo.access_key_id}
secret key is {kinfo.secret_access_key}
bucket {binfo.global_aliases[0]} contains {binfo.objects}/{binfo.quotas.max_objects} objects
""")
```

*This example is named `short.py` in the example folder. Other python examples are also available.*

See also:
- [sdk repo](https://git.deuxfleurs.fr/garage-sdk/garage-admin-sdk-python)
- [examples](https://git.deuxfleurs.fr/garage-sdk/garage-admin-sdk-generator/src/branch/main/example/python)

47  doc/book/build/rust.md  (new file)
@@ -0,0 +1,47 @@
+++
title = "Rust"
weight = 40
+++

## S3

*Coming soon*

Some refs:
- Amazon aws-rust-sdk
  - [Github](https://github.com/awslabs/aws-sdk-rust)

## K2V

*Coming soon*

Some refs: https://git.deuxfleurs.fr/Deuxfleurs/garage/src/branch/main/src/k2v-client

```bash
# all these values can be provided on the cli instead
export AWS_ACCESS_KEY_ID=GK123456
export AWS_SECRET_ACCESS_KEY=0123..789
export AWS_REGION=garage
export K2V_ENDPOINT=http://172.30.2.1:3903
export K2V_BUCKET=my-bucket

cargo run --features=cli -- read-range my-partition-key --all

cargo run --features=cli -- insert my-partition-key my-sort-key --text "my string1"
cargo run --features=cli -- insert my-partition-key my-sort-key --text "my string2"
cargo run --features=cli -- insert my-partition-key my-sort-key2 --text "my string"

cargo run --features=cli -- read-range my-partition-key --all

causality=$(cargo run --features=cli -- read my-partition-key my-sort-key2 -b | head -n1)
cargo run --features=cli -- delete my-partition-key my-sort-key2 -c $causality

causality=$(cargo run --features=cli -- read my-partition-key my-sort-key -b | head -n1)
cargo run --features=cli -- insert my-partition-key my-sort-key --text "my string3" -c $causality

cargo run --features=cli -- read-range my-partition-key --all
```

## Admin API

*Coming soon*

@@ -1,6 +1,6 @@
+++
title = "Integrations"
weight = 3
title = "Existing integrations"
weight = 30
sort_by = "weight"
template = "documentation.html"
+++

@@ -10,12 +10,12 @@ Garage implements the Amazon S3 protocol, which makes it compatible with many ex

In particular, you will find here instructions to connect it with:

- [Browsing tools](@/documentation/connect/cli.md)
- [Applications](@/documentation/connect/apps/index.md)
- [Website hosting](@/documentation/connect/websites.md)
- [Software repositories](@/documentation/connect/repositories.md)
- [Your own code](@/documentation/connect/code.md)
- [Browsing tools](@/documentation/connect/cli.md)
- [FUSE](@/documentation/connect/fs.md)
- [Observability](@/documentation/connect/observability.md)
- [Software repositories](@/documentation/connect/repositories.md)
- [Website hosting](@/documentation/connect/websites.md)

### Generic instructions

@@ -8,12 +8,13 @@ In this section, we cover the following web applications:

| Name | Status | Note |
|------|--------|------|
| [Nextcloud](#nextcloud) | ✅ | Both Primary Storage and External Storage are supported |
| [Peertube](#peertube) | ✅ | Must be configured with the website endpoint |
| [Peertube](#peertube) | ✅ | Supported with the website endpoint, proxifying private videos unsupported |
| [Mastodon](#mastodon) | ✅ | Natively supported |
| [Matrix](#matrix) | ✅ | Tested with `synapse-s3-storage-provider` |
| [ejabberd](#ejabberd) | ✅ | `mod_s3_upload` |
| [Pixelfed](#pixelfed) | ❓ | Not yet tested |
| [Pleroma](#pleroma) | ❓ | Not yet tested |
| [Lemmy](#lemmy) | ❓ | Not yet tested |
| [Lemmy](#lemmy) | ✅ | Supported with pict-rs |
| [Funkwhale](#funkwhale) | ❓ | Not yet tested |
| [Misskey](#misskey) | ❓ | Not yet tested |
| [Prismo](#prismo) | ❓ | Not yet tested |

@@ -128,6 +129,10 @@ In other words, Peertube is only responsible of the "control plane" and offload

In return, this system is a bit harder to configure.
We show how it is still possible to configure Garage with Peertube, allowing you to spread the load and the bandwidth usage on the Garage cluster.

Starting from version 5.0, Peertube also supports improving the security for private videos by not exposing them directly
but relying on a single control point in the Peertube instance. This is based on S3 per-object and prefix ACLs, which are not currently supported
in Garage, so this feature is unsupported. While this technically impedes security for private videos, it is not a blocking issue and could be
a reasonable trade-off for some instances.

### Create resources in Garage

@@ -195,6 +200,11 @@ object_storage:

  max_upload_part: 2GB

  proxy:
    # You may enable this feature, yet it will not provide any security benefit, so
    # you should rather use the Garage public endpoint for all videos
    proxify_private_files: false

  streaming_playlists:
    bucket_name: 'peertube-playlist'

@@ -465,6 +475,52 @@ And add a new line. For example, to run it every 10 minutes:

*External link:* [matrix-media-repo Documentation > S3](https://docs.t2bot.io/matrix-media-repo/configuration/s3-datastore.html)

## ejabberd

ejabberd is an XMPP server implementation which, with the `mod_s3_upload`
module in the [ejabberd-contrib](https://github.com/processone/ejabberd-contrib)
repository, can be integrated to store chat media files in Garage.

For uploads, this module leverages presigned URLs: this allows XMPP clients to
directly send media to Garage. Receiving clients then retrieve this media
through the [static website](@/documentation/cookbook/exposing-websites.md)
functionality.

As the data itself is publicly accessible to someone with knowledge of the
object URL, users are recommended to use
[E2EE](@/documentation/cookbook/encryption.md) to protect this data-at-rest
from unauthorized access.

Install the module with:

```bash
ejabberdctl module_install mod_s3_upload
```

Create the required key and bucket with:

```bash
garage key new --name ejabberd
garage bucket create objects.xmpp-server.fr
garage bucket allow objects.xmpp-server.fr --read --write --key ejabberd
garage bucket website --allow objects.xmpp-server.fr
```

The module can then be configured with:

```
mod_s3_upload:
  #bucket_url: https://objects.xmpp-server.fr.my-garage-instance.mydomain.tld
  bucket_url: https://my-garage-instance.mydomain.tld/objects.xmpp-server.fr
  access_key_id: GK...
  access_key_secret: ...
  region: garage
  download_url: https://objects.xmpp-server.fr
```

Other configuration options can be found in the
[configuration YAML file](https://github.com/processone/ejabberd-contrib/blob/master/mod_s3_upload/conf/mod_s3_upload.yml).

## Pixelfed

[Pixelfed Technical Documentation > Configuration](https://docs.pixelfed.org/technical-documentation/env.html#filesystem)

@@ -475,7 +531,68 @@ And add a new line. For example, to run it every 10 minutes:

## Lemmy

Lemmy uses pict-rs that [supports S3 backends](https://git.asonix.dog/asonix/pict-rs/commit/f9f4fc63d670f357c93f24147c2ee3e1278e2d97)
Lemmy uses pict-rs that [supports S3 backends](https://git.asonix.dog/asonix/pict-rs/commit/f9f4fc63d670f357c93f24147c2ee3e1278e2d97).
This feature requires `pict-rs >= 4.0.0`.

### Creating your bucket

This is the usual Garage setup:

```bash
garage key new --name pictrs-key
garage bucket create pictrs-data
garage bucket allow pictrs-data --read --write --key pictrs-key
```

Note the Key ID and Secret Key.

### Migrating your data

If your pict-rs instance holds existing data, you first need to migrate to the S3 bucket.

Stop pict-rs, then run the migration utility from local filesystem to the bucket:

```
pict-rs \
  filesystem -p /path/to/existing/files \
  object-store \
  -e my-garage-instance.mydomain.tld:3900 \
  -b pictrs-data \
  -r garage \
  -a GK... \
  -s abcdef0123456789...
```

This is pretty slow, so hold on while migrating.

### Running pict-rs with an S3 backend

Pict-rs supports both a configuration file and environment variables.

Either set the following section in your `pict-rs.toml`:

```
[store]
type = 'object_storage'
endpoint = 'http://my-garage-instance.mydomain.tld:3900'
bucket_name = 'pictrs-data'
region = 'garage'
access_key = 'GK...'
secret_key = 'abcdef0123456789...'
```

... or set these environment variables:

```
PICTRS__STORE__TYPE=object_storage
PICTRS__STORE__ENDPOINT=http://my-garage-instance.mydomain.tld:3900
PICTRS__STORE__BUCKET_NAME=pictrs-data
PICTRS__STORE__REGION=garage
PICTRS__STORE__ACCESS_KEY=GK...
PICTRS__STORE__SECRET_KEY=abcdef0123456789...
```


## Funkwhale

@@ -13,7 +13,41 @@ Borg Backup is very popular among the backup tools but it is not yet compatible

We recommend using any other tool listed in this guide because they are all compatible with the S3 API.
If you still want to use Borg, you can use it with `rclone mount`.

## git-annex

[git-annex](https://git-annex.branchable.com/) supports synchronizing files
with its [S3 special remote](https://git-annex.branchable.com/special_remotes/S3/).

Note that `git-annex` must be compiled with Haskell package version
`aws-0.24` to work with Garage.

```bash
garage key new --name my-key
garage bucket create my-git-annex
garage bucket allow my-git-annex --read --write --key my-key
```

Register your Key ID and Secret key in your environment:

```bash
export AWS_ACCESS_KEY_ID=GKxxx
export AWS_SECRET_ACCESS_KEY=xxxx
```

Within a git-annex enabled repository, configure your Garage S3 endpoint with
the following command:

```bash
git annex initremote garage type=S3 encryption=none host=my-garage-instance.mydomain.tld protocol=https bucket=my-git-annex requeststyle=path region=garage signature=v4
```

Files can now be synchronized using the usual `git-annex` `copy` or `get`
commands.

Note that for simplicity, this example does not enable encryption for the files
sent to Garage; please refer to the
[git-annex encryption page](https://git-annex.branchable.com/encryption/) for
how to configure this.

## Restic

@@ -71,6 +105,7 @@ restic restore 79766175 --target /var/lib/postgresql

Restic has way more features than the ones presented here.
You can discover all of them by accessing its documentation from the link below.

Files on Android devices can also be backed up with [restic-android](https://github.com/lhns/restic-android).

*External links:* [Restic Documentation > Amazon S3](https://restic.readthedocs.io/en/stable/030_preparing_a_new_repo.html#amazon-s3)

@@ -12,6 +12,7 @@ These tools are particularly suitable for debug, backups, website deployments or

| [AWS CLI](#aws-cli) | ✅ | Recommended |
| [rclone](#rclone) | ✅ | |
| [s3cmd](#s3cmd) | ✅ | |
| [s5cmd](#s5cmd) | ✅ | |
| [(Cyber)duck](#cyberduck) | ✅ | |
| [WinSCP (libs3)](#winscp) | ✅ | CLI instructions only |
| [sftpgo](#sftpgo) | ✅ | |

@@ -178,59 +179,34 @@ s3cmd put /tmp/hello.txt s3://my-bucket/

s3cmd get s3://my-bucket/hello.txt hello.txt
```

## `s5cmd`

Set the following environment variables:

```bash
export AWS_ACCESS_KEY_ID=GK...
export AWS_SECRET_ACCESS_KEY=
export AWS_DEFAULT_REGION='garage'
export AWS_ENDPOINT='http://localhost:3900'
```

After adding these environment variables in your shell, `s5cmd` can be used
with:

```bash
s5cmd --endpoint-url=$AWS_ENDPOINT ls
```

See its usage output for other commands available.
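For instance, copying objects to and from a bucket could look as follows (a sketch; the bucket name reuses the `my-bucket` example from the `s3cmd` section above):

```bash
# Upload a file, then download it again (example bucket name)
s5cmd --endpoint-url=$AWS_ENDPOINT cp /tmp/hello.txt s3://my-bucket/
s5cmd --endpoint-url=$AWS_ENDPOINT cp s3://my-bucket/hello.txt hello.txt
```
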
## Cyberduck & duck {#cyberduck}

Both Cyberduck (the GUI) and duck (the CLI) have a concept of "Connection Profiles" that contain some presets for a specific provider.
We wrote the following connection profile for Garage:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>Protocol</key>
    <string>s3</string>
    <key>Vendor</key>
    <string>garage</string>
    <key>Scheme</key>
    <string>https</string>
    <key>Description</key>
    <string>GarageS3</string>
    <key>Default Hostname</key>
    <string>127.0.0.1</string>
    <key>Default Port</key>
    <string>4443</string>
    <key>Hostname Configurable</key>
    <false/>
    <key>Port Configurable</key>
    <false/>
    <key>Username Configurable</key>
    <true/>
    <key>Username Placeholder</key>
    <string>Access Key ID (GK...)</string>
    <key>Password Placeholder</key>
    <string>Secret Key</string>
    <key>Properties</key>
    <array>
      <string>s3service.disable-dns-buckets=true</string>
    </array>
    <key>Region</key>
    <string>garage</string>
    <key>Regions</key>
    <array>
      <string>garage</string>
    </array>
  </dict>
</plist>
```

*Note: If your garage instance is configured with vhost access style, you can remove `s3service.disable-dns-buckets=true`.*

### Instructions for the GUI

Copy the connection profile, and save it anywhere as `garage.cyberduckprofile`.
Then find this file with your file explorer and double click on it: Cyberduck will open a connection wizard for this profile.
Simply follow the wizard and you should be done!
Within Cyberduck, a
[Garage connection profile](https://docs.cyberduck.io/protocols/s3/garage/) is
available within the `Preferences -> Profiles` section. This can be enabled, and
connections to Garage may then be configured.

### Instructions for the CLI

57  doc/book/connect/observability.md  (new file)
@@ -0,0 +1,57 @@
+++
title = "Observability"
weight = 25
+++

An object store can be used as a storage location for metrics and logs, which
can then be leveraged for systems observability.

## Metrics

### Prometheus

Prometheus itself has no object store capabilities, however two projects exist
which support storing metrics in an object store:

- [Cortex](https://cortexmetrics.io/)
- [Thanos](https://thanos.io/)

## System logs

### Vector

[Vector](https://vector.dev/) natively supports S3 as a
[data sink](https://vector.dev/docs/reference/configuration/sinks/aws_s3/)
(and [source](https://vector.dev/docs/reference/configuration/sources/aws_s3/)).

This can be configured with Garage as follows:

```bash
garage key new --name vector-system-logs
garage bucket create system-logs
garage bucket allow system-logs --read --write --key vector-system-logs
```

The `vector.toml` can then be configured as follows:

```toml
[sources.journald]
type = "journald"
current_boot_only = true

[sinks.out]
encoding.codec = "json"
type = "aws_s3"
inputs = [ "journald" ]
bucket = "system-logs"
key_prefix = "%F/"
compression = "none"
region = "garage"
endpoint = "https://my-garage-instance.mydomain.tld"
auth.access_key_id = ""
auth.secret_access_key = ""
```

This is an example configuration; please refer to the Vector documentation for
all configuration and transformation possibilities. Also note that Garage
performs its own compression, so this should be disabled in Vector.

@@ -1,12 +1,12 @@
+++
title="Cookbook"
template = "documentation.html"
weight = 2
weight = 20
sort_by = "weight"
+++

A cookbook, when you cook, is a collection of recipes.
Similarly, Garage's cookbook contains a collection of recipes that are known to works well!
Similarly, Garage's cookbook contains a collection of recipes that are known to work well!
This chapter could also be referred to as "Tutorials" or "Best practices".

- **[Multi-node deployment](@/documentation/cookbook/real-world.md):** This page will walk you through all of the necessary

@@ -16,6 +16,10 @@ This chapter could also be referred as "Tutorials" or "Best practices".

source in case a binary is not provided for your architecture, or if you want to
hack with us!

- **[Binary packages](@/documentation/cookbook/binary-packages.md):** This page
  lists the different platforms that provide ready-built software packages for
  Garage.

- **[Integration with Systemd](@/documentation/cookbook/systemd.md):** This page explains how to run Garage
  as a Systemd service (instead of as a Docker container).

@@ -26,6 +30,10 @@ This chapter could also be referred as "Tutorials" or "Best practices".

- **[Configuring a reverse-proxy](@/documentation/cookbook/reverse-proxy.md):** This page explains how to configure a reverse-proxy to add TLS support to your S3 api endpoint.

- **[Recovering from failures](@/documentation/cookbook/recovering.md):** Garage's first selling point is resilience
  to hardware failures. This section explains how to recover from such a failure in the
  best possible way.
- **[Deploying on Kubernetes](@/documentation/cookbook/kubernetes.md):** This page explains how to deploy Garage on Kubernetes using our Helm chart.

- **[Deploying with Ansible](@/documentation/cookbook/ansible.md):** This page lists available Ansible roles developed by the community to deploy Garage.

- **[Monitoring Garage](@/documentation/cookbook/monitoring.md)** This page
  explains the Prometheus metrics available for monitoring the Garage
  cluster/nodes.

51  doc/book/cookbook/ansible.md  (new file)
@@ -0,0 +1,51 @@
+++
title = "Deploying with Ansible"
weight = 35
+++

While deploying Garage with Ansible is not officially supported, several community members
have published Ansible roles. We list them and compare them below.

## Comparison of Ansible roles

| Feature | [ansible-role-garage](#zorun-ansible-role-garage) | [garage-docker-ansible-deploy](#moan0s-garage-docker-ansible-deploy) |
|------------------------------------|---------------------------------------------|---------------------------------------------------------------|
| **Runtime** | Systemd | Docker |
| **Target OS** | Any Linux | Any Linux |
| **Architecture** | amd64, arm64, i686 | amd64, arm64 |
| **Additional software** | None | Traefik |
| **Automatic node connection** | ❌ | ✅ |
| **Layout management** | ❌ | ✅ |
| **Manage buckets & keys** | ❌ | ✅ (basic) |
| **Allow custom Garage config** | ✅ | ❌ |
| **Facilitate Garage upgrades** | ✅ | ❌ |
| **Multiple instances on one host** | ✅ | ✅ |


## zorun/ansible-role-garage

[Source code](https://github.com/zorun/ansible-role-garage), [Ansible galaxy](https://galaxy.ansible.com/zorun/garage)

This role is voluntarily simple: it relies on the official Garage static
binaries and only requires Systemd. As such, it should work on any
Linux-based OS.

To make things more flexible, the user has to provide a Garage
configuration template. This makes it possible to customize the Garage configuration in
any way.

Some more features might be added, such as a way to automatically connect
nodes to each other or to define a layout.

## moan0s/garage-docker-ansible-deploy

[Source code](https://github.com/moan0s/garage-docker-ansible-deploy), [Blog post](https://hyteck.de/post/garage/)

This role is based on the Docker image for Garage, and comes with
"batteries included": it will additionally install Docker and Traefik. In
addition, it is "opinionated" in the sense that it expects a particular
deployment structure (one instance per disk, one gateway per host,
structured DNS names, etc).

As a result, this role makes it easier to start with Garage on Ansible,
but is less flexible.

41  doc/book/cookbook/binary-packages.md  (new file)
@@ -0,0 +1,41 @@
+++
title = "Binary packages"
weight = 11
+++

Garage is also available in binary packages on:

## Alpine Linux

If you use Alpine Linux, you can simply install the
[garage](https://pkgs.alpinelinux.org/packages?name=garage) package from the
Alpine Linux repositories (available since v3.17):

```bash
apk add garage
```

The default configuration file is installed to `/etc/garage.toml`. You can run
Garage using: `rc-service garage start`. If you don't specify `rpc_secret`, it
will be automatically replaced with a random string on the first start.

Please note that this package is built without Consul discovery, Kubernetes
discovery, OpenTelemetry exporter, and K2V features (K2V will be enabled once
it's stable).


## Arch Linux

Garage is available in the [AUR](https://aur.archlinux.org/packages/garage).

## FreeBSD

```bash
pkg install garage
```

## NixOS

```bash
nix-shell -p garage
```

116
doc/book/cookbook/encryption.md
Normal file
|
@ -0,0 +1,116 @@
|
|||
+++
|
||||
title = "Encryption"
|
||||
weight = 50
|
||||
+++
|
||||
|
||||
Encryption is a recurring subject when discussing Garage.
|
||||
Garage does not handle data encryption by itself, but many things can
|
||||
already be done with Garage's current feature set and the existing ecosystem.
This page takes a high level approach to security in general and data encryption
in particular.


# Examining your need for encryption

- Why do you want encryption in Garage?

- What is your threat model? What are you fearing?
  - A stolen HDD?
  - A curious administrator?
  - A malicious administrator?
  - A remote attacker?
  - etc.

- What services do you want to protect with encryption?
  - An existing application? Which one? (e.g. Nextcloud)
  - An application that you are writing

- Any expertise you may have on the subject

This page explains what Garage provides, and how you can improve the situation by yourself
by adding encryption at different levels.

We would be very curious to know your needs and thoughts about ideas such as
encryption practices and things like key management, as we want Garage to be a
serious base platform for the development of secure, encrypted applications.
Do not hesitate to come talk to us if you have any thoughts or questions on the
subject.


# Capabilities provided by Garage

## Traffic is encrypted between Garage nodes

RPCs between Garage nodes are encrypted. More specifically, unlike many
distributed systems, it is impossible in Garage to have clear-text RPC. We
use the [kuska handshake](https://github.com/Kuska-ssb/handshake) library, which
implements Secure ScuttleButt's Secret Handshake protocol, a protocol that has
been clearly reviewed. This is why setting a `rpc_secret` is mandatory,
and that's also why your nodes have super long identifiers.

## HTTP API endpoints provided by Garage are in clear text

Adding TLS support built into Garage is not currently planned.

## Garage stores data in plain text on the filesystem

Garage does not handle data encryption at rest by itself, and instead delegates
to the user to add encryption, either at the storage layer (LUKS, etc) or on
the client side (or both). There are no current plans to add data encryption
directly in Garage.

Implementing data encryption directly in Garage might make things simpler for
end users, but also raises many more questions, especially around key
management: for encryption of data, where could Garage get the encryption keys
from? If we encrypt data but keep the keys in a plaintext file next to them,
it's useless. We probably don't want to have to manage secrets in Garage as it
would be very hard to do in a secure way. Maybe integrate with an external
system such as Hashicorp Vault?


# Adding data encryption using external tools

## Encrypting traffic between a Garage node and your client

You have multiple options to have encryption between your client and a node:

- Setup a reverse proxy with TLS / ACME / Let's Encrypt
- Setup a Garage gateway locally, and only contact the garage daemon on `localhost`
- Only contact your Garage daemon over a secure, encrypted overlay network such as Wireguard

## Encrypting data at rest

Protects against the following threats:

- Stolen HDD

Crucially, does not protect against malicious sysadmins or remote attackers that
might gain access to your servers.

Methods include full-disk encryption with tools such as LUKS.
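
As an illustration, here is a minimal sketch of preparing a LUKS-encrypted
partition for Garage's data directory; the device name and mapper label are
hypothetical and should be adapted to your own hardware:

```bash
# Create and open a LUKS container on a dedicated drive (hypothetical /dev/sdb1).
cryptsetup luksFormat /dev/sdb1
cryptsetup open /dev/sdb1 garage-data

# Create a filesystem on the decrypted mapping and mount it where Garage
# expects its data directory.
mkfs.xfs /dev/mapper/garage-data
mount /dev/mapper/garage-data /var/lib/garage/data
```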

## Encrypting data on the client side

Protects against the following threats:

- An honest-but-curious administrator
- A malicious administrator that tries to corrupt your data
- A remote attacker that can read your server's data

Implementations are very specific to the various applications. Examples:

- Matrix: uses the OLM protocol for E2EE of user messages. Media files stored
  in Matrix are probably encrypted using symmetric encryption, with a key that is
  distributed in the end-to-end encrypted message that contains the link to the object.

- XMPP: clients normally support either OMEMO / OpenPGP for the E2EE of user
  messages. Media files are encrypted per
  [XEP-0454](https://xmpp.org/extensions/xep-0454.html).

- Aerogramme: uses the user's password as a key to decrypt data in the user's bucket

- Cyberduck: comes with support for
  [Cryptomator](https://docs.cyberduck.io/cryptomator/) which allows users to
  create client-side vaults to encrypt files in before they are uploaded to a
  cloud storage endpoint.
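
For generic command-line use, client-side encryption can also be layered on top
of any Garage bucket with `rclone`'s `crypt` backend, which encrypts file
contents before upload. The following is only a sketch: the remote names,
bucket, credentials and passphrase are hypothetical placeholders.

```bash
# Append a plain S3 remote and an encrypting wrapper to rclone's config.
cat >> ~/.config/rclone/rclone.conf <<EOF
[garage]
type = s3
provider = Other
endpoint = http://localhost:3900
region = garage
access_key_id = <access key>
secret_access_key = <secret key>

[garage-crypt]
type = crypt
remote = garage:my-encrypted-bucket
password = $(rclone obscure 'a strong passphrase')
EOF

# Files are encrypted locally before being uploaded to Garage.
rclone copy ./documents garage-crypt:
```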


@@ -21,7 +21,7 @@ You can configure Garage as a gateway on all nodes that will consume your S3 API

The instructions are similar to those for a regular node; the only difference is that, when configuring the node, you must set the `--gateway` parameter:

```bash
-garage layout assign --gateway --tag gw1 <node_id>
+garage layout assign --gateway --tag gw1 -z dc1 <node_id>
garage layout show # review the changes you are making
garage layout apply # once satisfied, apply the changes
```

@@ -48,6 +48,7 @@ garage:
  replicationMode: "2"

# Start 4 instances (StatefulSets) of garage
deployment:
  replicaCount: 4

# Override default storage class and size

doc/book/cookbook/monitoring.md (new file, 53 lines)

@@ -0,0 +1,53 @@
+++
title = "Monitoring Garage"
weight = 40
+++

Garage exposes some internal metrics in the Prometheus data format.
This page explains how to exploit these metrics.

## Setting up monitoring

### Enabling the Admin API endpoint

If you have not already enabled the [administration API endpoint](@/documentation/reference-manual/admin-api.md), do so by adding the following lines to your configuration file:

```toml
[admin]
api_bind_addr = "0.0.0.0:3903"
```

This will allow anyone to scrape Prometheus metrics by fetching
`http://localhost:3903/metrics`. If you want to restrict access
to the exported metrics, set the `metrics_token` configuration value
to a bearer token to be used when fetching the metrics endpoint.
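
For instance, assuming the admin API is reachable on `localhost:3903`, a
token-protected metrics endpoint can be queried as follows (the token value is
of course a placeholder):

```bash
curl -H "Authorization: Bearer <your metrics token>" http://localhost:3903/metrics
```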

### Setting up Prometheus and Grafana

Add a scrape config to your Prometheus daemon to scrape metrics from
all of your nodes:

```yaml
scrape_configs:
  - job_name: 'garage'
    static_configs:
      - targets:
        - 'node1.mycluster:3903'
        - 'node2.mycluster:3903'
        - 'node3.mycluster:3903'
```

If you have set a metrics token in your Garage configuration file,
add the following lines in your Prometheus scrape config:

```yaml
    authorization:
      type: Bearer
      credentials: 'your metrics token'
```

To visualize the scraped data in Grafana,
you can either import our [Grafana dashboard for Garage](https://git.deuxfleurs.fr/Deuxfleurs/garage/raw/branch/main/script/telemetry/grafana-garage-dashboard-prometheus.json)
or make your own.

The list of exported metrics is available on our [dedicated page](@/documentation/reference-manual/monitoring.md) in the Reference manual section.

@@ -11,19 +11,20 @@ We recommend first following the [quick start guide](@/documentation/quick-start
to get familiar with Garage's command line and usage patterns.


+## Preparing your environment

-## Prerequisites
+### Prerequisites

To run a real-world deployment, make sure the following conditions are met:

- You have at least three machines with sufficient storage space available.

-- Each machine has a public IP address which is reachable by other machines.
-  Running behind a NAT is likely to be possible but hasn't been tested for the latest version (TODO).

- Ideally, each machine should have an SSD available in addition to the HDD you are dedicating
  to Garage. This will allow for faster access to metadata and has the potential
  to significantly reduce Garage's response times.

+- Each machine has a public IP address which is reachable by other machines. It
+  is highly recommended that you use IPv6 for this end-to-end connectivity. If
+  IPv6 is not available, then using a mesh VPN such as
+  [Nebula](https://github.com/slackhq/nebula) or
+  [Yggdrasil](https://yggdrasil-network.github.io/) is an approach to consider
+  in addition to building out your own VPN tunneling.

- This guide will assume you are using Docker containers to deploy Garage on each node.
  Garage can also be run independently, for instance as a [Systemd service](@/documentation/cookbook/systemd.md).

@@ -49,6 +50,42 @@ available in the different locations of your cluster is roughly the same.
For instance, here, the Mercury node could be moved to Brussels; this would allow the cluster
to store 2 TB of data in total.

### Best practices

- If you have fast dedicated networking between all your nodes, and are planning to store
  very large files, bump the `block_size` configuration parameter to 10 MB
  (`block_size = 10485760`).

- Garage stores its files in two locations: it uses a metadata directory to store frequently-accessed
  small metadata items, and a data directory to store data blocks of uploaded objects.
  Ideally, the metadata directory would be stored on an SSD (smaller but faster),
  and the data directory would be stored on an HDD (larger but slower).

- For the data directory, Garage already does checksumming and integrity verification,
  so there is no need to use a filesystem such as BTRFS or ZFS that does it.
  We recommend using XFS for the data partition, as it has the best performance.
  EXT4 is not recommended as it has stricter limitations on the number of inodes,
  which might cause issues with Garage when large numbers of objects are stored.

- If you only have an HDD and no SSD, it's fine to put your metadata alongside the data
  on the same drive. Having lots of RAM for your kernel to cache the metadata will
  help a lot with performance. Make sure to use the LMDB database engine,
  instead of Sled, which suffers from quite bad performance degradation on HDDs.
  Sled is still the default for legacy reasons, but is not recommended anymore.

- For the metadata storage, Garage does not do checksumming and integrity
  verification on its own. If you are afraid of bitrot/data corruption,
  put your metadata directory on a BTRFS partition. Otherwise, just use regular
  EXT4 or XFS.

- Having a single server with several storage drives is currently not very well
  supported in Garage ([#218](https://git.deuxfleurs.fr/Deuxfleurs/garage/issues/218)).
  For an easy setup, just put all your drives in a RAID0 or a ZFS RAIDZ array.
  If you're adventurous, you can try to format each of your disks as
  a separate XFS partition, and then run one `garage` daemon per disk drive,
  or use something like [`mergerfs`](https://github.com/trapexit/mergerfs) to merge
  all your disks in a single union filesystem that spreads load over them (see the sketch below).
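
As an illustration of the `mergerfs` option, the following sketch merges two
drives into one union filesystem for Garage's data directory; the device names
and mount points are hypothetical:

```bash
# Format and mount each data drive separately (hypothetical devices).
mkfs.xfs /dev/sdb1 && mkdir -p /mnt/disk1 && mount /dev/sdb1 /mnt/disk1
mkfs.xfs /dev/sdc1 && mkdir -p /mnt/disk2 && mount /dev/sdc1 /mnt/disk2

# Merge both drives into a single union filesystem that spreads load over them.
mergerfs -o defaults,allow_other /mnt/disk1:/mnt/disk2 /var/lib/garage/data
```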

## Get a Docker image

Our docker image is currently named `dxflrs/garage` and is stored on the [Docker Hub](https://hub.docker.com/r/dxflrs/garage/tags?page=1&ordering=last_updated).

@@ -76,11 +113,12 @@ especially you must consider the following folders/files:
this folder will be your main data storage and must be on a large storage (e.g. large HDD)


-A valid `/etc/garage/garage.toml` for our cluster would look as follows:
+A valid `/etc/garage.toml` for our cluster would look as follows:

```toml
metadata_dir = "/var/lib/garage/meta"
data_dir = "/var/lib/garage/data"
db_engine = "lmdb"

replication_mode = "3"
```

@@ -90,8 +128,6 @@ rpc_bind_addr = "[::]:3901"

```toml
rpc_public_addr = "<this node's public IP>:3901"
rpc_secret = "<RPC secret>"

bootstrap_peers = []

[s3_api]
s3_region = "garage"
api_bind_addr = "[::]:3900"
```

@@ -132,6 +168,21 @@ It should be restarted automatically at each reboot.
Please note that we use host networking, as otherwise Docker containers
cannot communicate over IPv6.

If you want to use `docker-compose`, you may use the following `docker-compose.yml` file as a reference:

```yaml
version: "3"
services:
  garage:
    image: dxflrs/garage:v0.8.0
    network_mode: "host"
    restart: unless-stopped
    volumes:
      - /etc/garage.toml:/etc/garage.toml
      - /var/lib/garage/meta:/var/lib/garage/meta
      - /var/lib/garage/data:/var/lib/garage/data
```

Upgrading between Garage versions should be supported transparently,
but please check the release notes before doing so!
To upgrade, simply stop and remove this container and

@@ -146,6 +197,12 @@ The `garage` binary has two purposes:

Ensure an appropriate `garage` binary (the same version as your Docker image) is available in your path.
If your configuration file is at `/etc/garage.toml`, the `garage` binary should work with no further change.

You can also use an alias as follows to use the Garage binary inside your docker container:

```bash
alias garage="docker exec -ti <container name> /garage"
```

You can test your `garage` CLI utility by running a simple command such as:

```bash
@@ -288,7 +345,7 @@ garage layout apply
```

**WARNING:** if you want to use the layout modification commands in a script,
-make sure to read [this page](@/documentation/reference-manual/layout.md) first.
+make sure to read [this page](@/documentation/operations/layout.md) first.


## Using your Garage cluster

@@ -298,5 +355,5 @@ and is covered in the [quick start guide](@/documentation/quick-start/_index.md)
Remember also that the CLI is self-documented thanks to the `--help` flag and
the `help` subcommand (e.g. `garage help`, `garage key --help`).

-Configuring S3-compatible applicatiosn to interact with Garage
+Configuring S3-compatible applications to interact with Garage
is covered in the [Integrations](@/documentation/connect/_index.md) section.

@@ -70,14 +70,16 @@ A possible configuration:

```nginx
upstream s3_backend {
-  # if you have a garage instance locally
+  # If you have a garage instance locally.
  server 127.0.0.1:3900;
-  # you can also put your other instances
+  # You can also put your other instances.
  server 192.168.1.3:3900;
-  # domain names also work
+  # Domain names also work.
  server garage1.example.com:3900;
-  # you can assign weights if you have some servers
-  # that are more powerful than others
+  # A "backup" server is only used if all others have failed.
+  server garage-remote.example.com:3900 backup;
+  # You can assign weights if you have some servers
+  # that can serve more requests than others.
  server garage2.example.com:3900 weight=2;
}
```

@@ -96,6 +98,8 @@ server {
    proxy_pass http://s3_backend;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
+    # Disable buffering to a temporary file.
+    proxy_max_temp_file_size 0;
  }
}
```

@@ -164,40 +168,65 @@ Here is [a basic configuration file](https://doc.traefik.io/traefik/https/acme/#

### Add Garage service

-To add Garage on Traefik you should declare a new service using its IP address (or hostname) and port:
+To add Garage on Traefik you should declare two new services using its IP
+address (or hostname) and port; these are used for the S3 and web components
+of Garage:

```toml
[http.services]
-  [http.services.my_garage_service.loadBalancer]
-    [[http.services.my_garage_service.loadBalancer.servers]]
+  [http.services.garage-s3-service.loadBalancer]
+    [[http.services.garage-s3-service.loadBalancer.servers]]
      url = "http://xxx.xxx.xxx.xxx"
      port = 3900

+  [http.services.garage-web-service.loadBalancer]
+    [[http.services.garage-web-service.loadBalancer.servers]]
+      url = "http://xxx.xxx.xxx.xxx"
+      port = 3902
```

It's possible to declare multiple Garage servers as back-ends:

```toml
[http.services]
-    [[http.services.my_garage_service.loadBalancer.servers]]
+    [[http.services.garage-s3-service.loadBalancer.servers]]
      url = "http://xxx.xxx.xxx.xxx"
      port = 3900
-    [[http.services.my_garage_service.loadBalancer.servers]]
+    [[http.services.garage-s3-service.loadBalancer.servers]]
      url = "http://yyy.yyy.yyy.yyy"
      port = 3900
-    [[http.services.my_garage_service.loadBalancer.servers]]
+    [[http.services.garage-s3-service.loadBalancer.servers]]
      url = "http://zzz.zzz.zzz.zzz"
      port = 3900

+    [[http.services.garage-web-service.loadBalancer.servers]]
+      url = "http://xxx.xxx.xxx.xxx"
+      port = 3902
+    [[http.services.garage-web-service.loadBalancer.servers]]
+      url = "http://yyy.yyy.yyy.yyy"
+      port = 3902
+    [[http.services.garage-web-service.loadBalancer.servers]]
+      url = "http://zzz.zzz.zzz.zzz"
+      port = 3902
```

Traefik can remove unhealthy servers automatically with [a health check configuration](https://doc.traefik.io/traefik/routing/services/#health-check):

```
[http.services]
-  [http.services.my_garage_service.loadBalancer]
-    [http.services.my_garage_service.loadBalancer.healthCheck]
-      path = "/"
-      interval = "60s"
-      timeout = "5s"
+  [http.services.garage-s3-service.loadBalancer]
+    [http.services.garage-s3-service.loadBalancer.healthCheck]
+      path = "/health"
+      port = "3903"
+      #interval = "15s"
+      #timeout = "2s"

+  [http.services.garage-web-service.loadBalancer]
+    [http.services.garage-web-service.loadBalancer.healthCheck]
+      path = "/health"
+      port = "3903"
+      #interval = "15s"
+      #timeout = "2s"
```

### Adding a website

@@ -206,10 +235,15 @@ To add a new website, add the following declaration to your Traefik configuratio

```toml
[http.routers]
+  [http.routers.garage-s3]
+    rule = "Host(`s3.example.org`)"
+    service = "garage-s3-service"
+    entryPoints = ["websecure"]

  [http.routers.my_website]
    rule = "Host(`yoururl.example.org`)"
-    service = "my_garage_service"
-    entryPoints = ["web"]
+    service = "garage-web-service"
+    entryPoints = ["websecure"]
```

Enable HTTPS access to your website with the following configuration section ([documentation](https://doc.traefik.io/traefik/https/overview/)):

@@ -222,7 +256,7 @@ Enable HTTPS access to your website with the following configuration section ([d
...
```

-### Adding gzip compression
+### Adding compression

Add the following configuration section [to compress responses](https://doc.traefik.io/traefik/middlewares/http/compress/) using [gzip](https://developer.mozilla.org/en-US/docs/Glossary/GZip_compression) before sending them to the client:

@@ -230,10 +264,10 @@ Add the following configuration section [to compress response](https://doc.traef
[http.routers]
  [http.routers.my_website]
    ...
-    middlewares = ["gzip_compress"]
+    middlewares = ["compression"]
    ...
[http.middlewares]
-  [http.middlewares.gzip_compress.compress]
+  [http.middlewares.compression.compress]
```

### Add caching response

@@ -258,27 +292,54 @@ Traefik's caching middleware is only available on [entreprise version](https://d
    entryPoint = "web"

[http.routers]
+  [http.routers.garage-s3]
+    rule = "Host(`s3.example.org`)"
+    service = "garage-s3-service"
+    entryPoints = ["websecure"]

  [http.routers.my_website]
    rule = "Host(`yoururl.example.org`)"
-    service = "my_garage_service"
-    middlewares = ["gzip_compress"]
+    service = "garage-web-service"
+    middlewares = ["compression"]
    entryPoints = ["websecure"]

[http.services]
-  [http.services.my_garage_service.loadBalancer]
-    [http.services.my_garage_service.loadBalancer.healthCheck]
-      path = "/"
-      interval = "60s"
-      timeout = "5s"
+  [http.services.garage-s3-service.loadBalancer]
+    [http.services.garage-s3-service.loadBalancer.healthCheck]
+      path = "/health"
+      port = "3903"
+      #interval = "15s"
+      #timeout = "2s"

+  [http.services.garage-web-service.loadBalancer]
+    [http.services.garage-web-service.loadBalancer.healthCheck]
+      path = "/health"
+      port = "3903"
+      #interval = "15s"
+      #timeout = "2s"

+    [[http.services.garage-s3-service.loadBalancer.servers]]
      url = "http://xxx.xxx.xxx.xxx"
-    [[http.services.my_garage_service.loadBalancer.servers]]
+      port = 3900
+    [[http.services.garage-s3-service.loadBalancer.servers]]
      url = "http://yyy.yyy.yyy.yyy"
-    [[http.services.my_garage_service.loadBalancer.servers]]
+      port = 3900
+    [[http.services.garage-s3-service.loadBalancer.servers]]
      url = "http://zzz.zzz.zzz.zzz"
+      port = 3900

+    [[http.services.garage-web-service.loadBalancer.servers]]
+      url = "http://xxx.xxx.xxx.xxx"
+      port = 3902
+    [[http.services.garage-web-service.loadBalancer.servers]]
+      url = "http://yyy.yyy.yyy.yyy"
+      port = 3902
+    [[http.services.garage-web-service.loadBalancer.servers]]
+      url = "http://zzz.zzz.zzz.zzz"
+      port = 3902

[http.middlewares]
-  [http.middlewares.gzip_compress.compress]
+  [http.middlewares.compression.compress]
```

## Caddy

@@ -287,18 +348,127 @@ Your Caddy configuration can be as simple as:

```caddy
s3.garage.tld, *.s3.garage.tld {
-  reverse_proxy localhost:3900 192.168.1.2:3900 example.tld:3900
+  reverse_proxy localhost:3900 192.168.1.2:3900 example.tld:3900 {
+    health_uri  /health
+    health_port 3903
+    #health_interval 15s
+    #health_timeout  5s
+  }
}

*.web.garage.tld {
-  reverse_proxy localhost:3902 192.168.1.2:3900 example.tld:3900
+  reverse_proxy localhost:3902 192.168.1.2:3902 example.tld:3902 {
+    health_uri  /health
+    health_port 3903
+    #health_interval 15s
+    #health_timeout  5s
+  }
}

admin.garage.tld {
-  reverse_proxy localhost:3903
+  reverse_proxy localhost:3903 {
+    health_uri  /health
+    health_port 3903
+    #health_interval 15s
+    #health_timeout  5s
+  }
}
```

But at the same time, the `reverse_proxy` directive is very flexible.
For a production deployment, you should [read its documentation](https://caddyserver.com/docs/caddyfile/directives/reverse_proxy) as it supports features like DNS discovery of upstreams, load balancing with checks, streaming parameters, etc.

### Caching

Caddy can be compiled with a
[cache plugin](https://github.com/caddyserver/cache-handler) which can be used
to provide a hot cache at the webserver level for static websites hosted by
Garage.

This can be configured as follows:

```caddy
# Caddy global configuration section
{
    # Bare minimum configuration to enable cache.
    order cache before rewrite

    cache

    #cache {
    #    allowed_http_verbs GET
    #    default_cache_control public
    #    ttl 8h
    #}
}

# Site specific section
https:// {
    cache

    #cache {
    #    timeout {
    #        backend 30s
    #    }
    #}

    reverse_proxy ...
}
```

Caching is a complicated subject, and the reader is encouraged to study the
available options provided by the plugin.

### On-demand TLS

Caddy supports a technique called
[on-demand TLS](https://caddyserver.com/docs/automatic-https#on-demand-tls), by
which one can configure the webserver to provision TLS certificates when a
client first connects to it.

In order to prevent an attack vector whereby domains are simply pointed at your
webserver and certificates are requested for them, Caddy can be configured to
ask Garage whether a domain is authorized for web hosting before it requests
a TLS certificate.

This 'check' endpoint, which is on the admin port (3903 by default), can be
configured in Caddy's global section as follows:

```caddy
{
    ...
    on_demand_tls {
        ask      http://localhost:3903/check
        interval 2m
        burst    5
    }
    ...
}
```
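
To see what Caddy's `ask` request will receive, you can query the check
endpoint yourself; the domain below is a placeholder, and we assume the admin
API listens on `localhost:3903`:

```bash
# Returns 200 OK if Garage is willing to serve a website for this domain.
curl -i "http://localhost:3903/check?domain=mysite.web.garage.tld"
```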

The host section can then be configured with (note that this uses the web
endpoint instead):

```caddy
# For a specific set of subdomains
*.web.garage.tld {
    tls {
        on_demand
    }

    reverse_proxy localhost:3902 192.168.1.2:3902 example.tld:3902
}

# Accept all domains on HTTPS
# Never configure this without the global section above
https:// {
    tls {
        on_demand
    }

    reverse_proxy localhost:3902 192.168.1.2:3902 example.tld:3902
}
```

More information on how this endpoint is implemented in Garage is available
in the [Admin API Reference](@/documentation/reference-manual/admin-api.md) page.

@@ -33,7 +33,20 @@ NoNewPrivileges=true
WantedBy=multi-user.target
```

-*A note on hardening: garage will be run as a non privileged user, its user id is dynamically allocated by systemd. It cannot access (read or write) home folders (/home, /root and /run/user), the rest of the filesystem can only be read but not written, only the path seen as /var/lib/garage is writable as seen by the service (mapped to /var/lib/private/garage on your host). Additionnaly, the process can not gain new privileges over time.*
+**A note on hardening:** Garage will be run as a non-privileged user whose user
+id is dynamically allocated by systemd (set with `DynamicUser=true`). It cannot
+access (read or write) home folders (`/home`, `/root` and `/run/user`); the
+rest of the filesystem can only be read but not written; only the path seen as
+`/var/lib/garage` is writable as seen by the service. Additionally, the process
+cannot gain new privileges over time.
+
+For this to work correctly, your `garage.toml` must be set with
+`metadata_dir=/var/lib/garage/meta` and `data_dir=/var/lib/garage/data`. This
+is mandatory to use the `DynamicUser` hardening feature of systemd, which
+autocreates these directories as a virtual mapping. If the directory
+`/var/lib/garage` already exists before starting the server for the first time,
+the systemd service might not start correctly. Note that in your host
+filesystem, Garage data will be held in `/var/lib/private/garage`.

To start the service then automatically enable it at boot:

@@ -1,50 +0,0 @@
+++
title = "Upgrading Garage"
weight = 40
+++

Garage is a stateful clustered application, where all nodes are communicating together and share data structures.
This makes upgrades more difficult than for stateless applications, so you must be more careful when upgrading.
On a new version release, there are two possibilities:
- protocols and data structures remained the same ➡️ this is a **straightforward upgrade**
- protocols or data structures changed ➡️ this is an **advanced upgrade**

You can quickly know what type of update you will have to operate by looking at the version identifier.
Following the [SemVer](https://semver.org/) terminology, if only the *patch* number changed, it will only need a straightforward upgrade.
Example: an upgrade from v0.6.0 to v0.6.1 is a straightforward upgrade.
If the *minor* or *major* number changed however, you will have to do an advanced upgrade. Example: from v0.6.1 to v0.7.0.

Migrations are designed to be run only between contiguous versions (from a *major*.*minor* perspective, *patches* can be skipped).
Example: migrations from v0.6.1 to v0.7.0 and from v0.6.0 to v0.7.0 are supported but migrations from v0.5.0 to v0.7.0 are not supported.

## Straightforward upgrades

Straightforward upgrades do not imply cluster downtime.
Before upgrading, you should still read [the changelog](https://git.deuxfleurs.fr/Deuxfleurs/garage/releases) and ideally test your deployment on a staging cluster first.

When you are ready, start by checking the health of your cluster.
You can force some checks with `garage repair`; we recommend at least running `garage repair --all-nodes --yes`, which is very quick to run (less than a minute).
You will see that the command correctly terminated in the logs of your daemon.

Finally, you can simply upgrade nodes one by one.
For each node: stop it, install the new binary, edit the configuration if needed, restart it.

## Advanced upgrades

Advanced upgrades will imply cluster downtime.
Before upgrading, you must read [the changelog](https://git.deuxfleurs.fr/Deuxfleurs/garage/releases) and you must test your deployment on a staging cluster first.

From a high level perspective, an advanced upgrade looks like this:
1. Make sure the health of your cluster is good (see `garage repair`)
2. Disable API access (comment the configuration in your reverse proxy)
3. Check that your cluster is idle
4. Stop the whole cluster
5. Back up the metadata folder of all your nodes, so that you will be able to restore it quickly if the upgrade fails (blocks being immutable, they should not be impacted)
6. Install the new binary, update the configuration
7. Start the whole cluster
8. If needed, run the corresponding migration from `garage migrate`
9. Make sure the health of your cluster is good
10. Enable API access (uncomment the configuration in your reverse proxy)
11. Monitor your cluster while load comes back, check that all your applications are happy with this new version

We write guides for each advanced upgrade; they are stored under the "Working Documents" section of this documentation.

@@ -1,6 +1,6 @@
+++
title = "Design"
-weight = 5
+weight = 70
sort_by = "weight"
template = "documentation.html"
+++

@@ -20,12 +20,16 @@ and could not do, etc.

We love to talk and hear about Garage, that's why we keep a log here:

+- [(en, 2023-01-18) Presentation of Garage with some details on CRDTs and data partitioning among nodes](https://git.deuxfleurs.fr/Deuxfleurs/garage/src/commit/4cff37397f626ef063dad29e5b5e97ab1206015d/doc/talks/2023-01-18-tocatta/talk.pdf)

+- [(fr, 2022-11-19) De l'auto-hébergement à l'entre-hébergement : Garage, pour conserver ses données ensemble](https://git.deuxfleurs.fr/Deuxfleurs/garage/src/commit/4cff37397f626ef063dad29e5b5e97ab1206015d/doc/talks/2022-11-19-Capitole-du-Libre/pr%C3%A9sentation.pdf)

+- [(en, 2022-06-23) General presentation of Garage](https://git.deuxfleurs.fr/Deuxfleurs/garage/src/commit/4cff37397f626ef063dad29e5b5e97ab1206015d/doc/talks/2022-06-23-stack/talk.pdf)

- [(fr, 2021-11-13, video) Garage : Mille et une façons de stocker vos données](https://video.tedomum.net/w/moYKcv198dyMrT8hCS5jz9) and [slides (html)](https://rfid.deuxfleurs.fr/presentations/2021-11-13/garage/) - during [RFID#1](https://rfid.deuxfleurs.fr/programme/2021-11-13/) event

-- [(en, 2021-04-28) Distributed object storage is centralised](https://git.deuxfleurs.fr/Deuxfleurs/garage/raw/commit/b1f60579a13d3c5eba7f74b1775c84639ea9b51a/doc/talks/2021-04-28_spirals-team/talk.pdf)
+- [(en, 2021-04-28) Distributed object storage is centralised](https://git.deuxfleurs.fr/Deuxfleurs/garage/src/commit/b1f60579a13d3c5eba7f74b1775c84639ea9b51a/doc/talks/2021-04-28_spirals-team/talk.pdf)

-- [(fr, 2020-12-02) Garage : jouer dans la cour des grands quand on est un hébergeur associatif](https://git.deuxfleurs.fr/Deuxfleurs/garage/raw/commit/b1f60579a13d3c5eba7f74b1775c84639ea9b51a/doc/talks/2020-12-02_wide-team/talk.pdf)

*Did you write or talk about Garage? [Open a pull request](https://git.deuxfleurs.fr/Deuxfleurs/garage/) to add a link here!*

+- [(fr, 2020-12-02) Garage : jouer dans la cour des grands quand on est un hébergeur associatif](https://git.deuxfleurs.fr/Deuxfleurs/garage/src/commit/b1f60579a13d3c5eba7f74b1775c84639ea9b51a/doc/talks/2020-12-02_wide-team/talk.pdf)

@@ -12,7 +12,7 @@ as pictures, video, images, documents, etc., in a redundant multi-node
setting. S3 is versatile enough to also be used to publish a static
website.

-Garage is an opinionated object storage solutoin, we focus on the following **desirable properties**:
+Garage is an opinionated object storage solution; we focus on the following **desirable properties**:

- **Internet enabled**: made for multi-sites (e.g. datacenters, offices, households, etc.) interconnected through regular Internet connections.
- **Self-contained & lightweight**: works everywhere and integrates well in existing environments to target [hyperconverged infrastructures](https://en.wikipedia.org/wiki/Hyper-converged_infrastructure).

@@ -42,15 +42,13 @@ locations. They use Garage themselves for the following tasks:

- As a [Matrix media backend](https://github.com/matrix-org/synapse-s3-storage-provider)

-- To store personal data and shared documents through [Bagage](https://git.deuxfleurs.fr/Deuxfleurs/bagage), a homegrown WebDav-to-S3 proxy
+- As a Nix binary cache

+- To store personal data and shared documents through [Bagage](https://git.deuxfleurs.fr/Deuxfleurs/bagage), a homegrown WebDav-to-S3 and SFTP-to-S3 proxy

+- As a backup target using `rclone` and `restic`

- In the Drone continuous integration platform to store task logs

-- As a Nix binary cache

-- As a backup target using `rclone`

The Deuxfleurs Garage cluster is a multi-site cluster currently composed of
-4 nodes in 2 physical locations. In the future it will be expanded to at
-least 3 physical locations to fully exploit Garage's potential for high
-availability.
+9 nodes in 3 physical locations.

@@ -61,7 +61,7 @@ Garage prioritizes which nodes to query according to a few criteria:


For further reading on the cluster structure look at the [gateway](@/documentation/cookbook/gateways.md)
-and [cluster layout management](@/documentation/reference-manual/layout.md) pages.
+and [cluster layout management](@/documentation/operations/layout.md) pages.

## Garbage collection

@@ -72,8 +72,7 @@ We considered their v2's design but concluded that it does not fit both our *Sel
**[Riak CS](https://docs.riak.com/riak/cs/2.1.1/index.html):**
*Not written yet*

-**[IPFS](https://ipfs.io/):**
-*Not written yet*
+**[IPFS](https://ipfs.io/):** IPFS has design goals radically different from Garage's; we have [a blog post](@/blog/2022-ipfs/index.md) talking about it.

## Specific research papers

@@ -1,6 +1,6 @@
+++
title = "Development"
-weight = 6
+weight = 80
sort_by = "weight"
template = "documentation.html"
+++

@@ -25,7 +25,7 @@ git clone https://git.deuxfleurs.fr/Deuxfleurs/garage
cd garage
```

-*Optionnaly, you can use our nix.conf file to speed up compilations:*
+*Optionally, you can use our nix.conf file to speed up compilations:*

```bash
sudo mkdir -p /etc/nix

@@ -39,7 +39,7 @@ Now you can enter our nix-shell, all the required packages will be downloaded bu
nix-shell
```

-You can use the traditionnal Rust development workflow:
+You can use the traditional Rust development workflow:

```bash
cargo build # compile the project

|
@ -11,7 +11,7 @@ We define them as our release process.
|
|||
While we run some tests on every commits, we do not make a release for all of them.
|
||||
|
||||
A release can be triggered manually by "promoting" a successful build.
|
||||
Otherwise, every weeks, a release build is triggered on the `main` branch.
|
||||
Otherwise, every night, a release build is triggered on the `main` branch.
|
||||
|
||||
If the build is from a tag following the regex: `v[0-9]+\.[0-9]+\.[0-9]+`, it will be listed as stable.
|
||||
If it is a tag but with a different format, it will be listed as Extra.
|
||||
|
|
doc/book/operations/_index.md (new file, 23 lines)

@@ -0,0 +1,23 @@
+++
title = "Operations & Maintenance"
weight = 50
sort_by = "weight"
template = "documentation.html"
+++

This section contains important information on how to best operate a Garage cluster,
to ensure the integrity and availability of your data:

- **[Upgrading Garage](@/documentation/operations/upgrading.md):** General instructions on how to
  upgrade your cluster from one version to the next. Instructions specific to each version upgrade
  can be found in the [working documents](@/documentation/working-documents/_index.md) section.

- **[Layout management](@/documentation/operations/layout.md):** Best practices for using the `garage layout`
  commands when adding or removing nodes from your cluster.

- **[Durability and repairs](@/documentation/operations/durability-repairs.md):** How to check for small things
  that might be going wrong, and how to recover from such failures.

- **[Recovering from failures](@/documentation/operations/recovering.md):** Garage's first selling point is resilience
  to hardware failures. This section explains how to recover from such a failure in the
  best possible way.
doc/book/operations/durability-repairs.md (new file, 117 lines)

@@ -0,0 +1,117 @@
+++
title = "Durability & Repairs"
weight = 30
+++

To ensure the best durability of your data and to fix any inconsistencies that may
pop up in a distributed system, Garage provides a series of repair operations.
This guide will explain the meaning of each of them and when they should be applied.


# General syntax of repair operations

Repair operations described below are of the form `garage repair <repair_name>`.
These repairs will not launch without the `--yes` flag, which should
be added as follows: `garage repair --yes <repair_name>`.
By default these repair procedures will only run on the Garage node your CLI is
connecting to. To run on all nodes, add the `-a` flag as follows:
`garage repair -a --yes <repair_name>`.

# Data block operations

## Data store scrub

Scrubbing the data store means examining each individual data block to check that
its content is correct, by verifying its hash. Any block found to be corrupted
(e.g. by bitrot or by an accidental manipulation of the datastore) will be
restored from another node that holds a valid copy.

Scrubs are automatically scheduled by Garage to run every 25-35 days (the
actual time is randomized to spread load across nodes). The next scheduled run
can be viewed with `garage worker get`.

A scrub can also be launched manually using `garage repair scrub start`.

To view the status of an ongoing scrub, first find the task ID of the scrub worker
using `garage worker list`. Then, run `garage worker info <scrub_task_id>` to
view detailed runtime statistics of the scrub. To gather cluster-wide information,
this command has to be run on each individual node.

A scrub is a very disk-intensive operation that might slow down your cluster.
You may pause an ongoing scrub using `garage repair scrub pause`, but note that
the scrub will resume automatically 24 hours later as Garage will not let your
cluster run without a regular scrub. If the scrub procedure is too intensive
for your servers and is slowing down your workload, the recommended solution
is to increase the "scrub tranquility" using `garage repair scrub set-tranquility`.
A higher tranquility value will make Garage take longer pauses between two block
verifications. Of course, scrubbing the entire data store will also take longer.
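
For instance, the pausing and tranquility commands mentioned above can be
combined as follows (the tranquility value of 4 is only an illustrative
choice):

```bash
# Temporarily pause the scrub (it resumes automatically after 24 hours).
garage repair scrub pause

# Make future scrubs gentler on the disks; higher values mean longer pauses
# between two block verifications.
garage repair scrub set-tranquility 4
```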

## Block check and resync

In some cases, nodes hold a reference to a block but do not actually have the block
stored on disk. Conversely, they may also have on-disk blocks that are not referenced
any more. To fix both cases, a block repair may be run with `garage repair blocks`.
This will scan the entire block reference counter table to check that the blocks
exist on disk, and will scan the entire disk store to check that stored blocks
are referenced.

It is recommended to run this procedure when changing your cluster layout,
after the metadata tables have finished synchronizing between nodes
(usually a few hours after `garage layout apply`).

## Inspecting lost blocks

In extremely rare situations, data blocks may be unavailable from the entire cluster.
This means that even using `garage repair blocks`, some nodes may be unable
to fetch data blocks for which they hold a reference.

These errors are stored on each node in a list of "block resync errors", i.e.
blocks for which the last resync operation failed.
This list can be inspected using `garage block list-errors`.
These errors usually fall into one of the following categories:

1. a block is still referenced but the object was deleted; this is a case
   of metadata reference inconsistency (see below for the fix)
2. a block is referenced by a non-deleted object, but could not be fetched due
   to a transient error such as a network failure
3. a block is referenced by a non-deleted object, but could not be fetched due
   to a permanent error such as there not being any valid copy of the block on the
   entire cluster

To help distinguish case 1 from cases 2 and 3, you may use the
`garage block info` command to see which objects hold a reference to each block.

In the second case (transient errors), Garage will try to fetch the block again
after a certain time, so the error should disappear naturally. You can also
request Garage to try to fetch the block immediately using `garage block retry-now`
if you have fixed the transient issue.

If you are confident that you are in the third scenario and that your data block
is definitely lost, then there is no other choice than to declare your S3 objects
as unrecoverable, and to delete them properly from the data store. This can be done
using the `garage block purge` command.
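
Putting these commands together, a typical inspection session might look like
the following sketch (the block hash is a placeholder):

```bash
# List blocks whose last resync attempt failed.
garage block list-errors

# See which objects hold a reference to a given failing block.
garage block info <block_hash>

# After fixing a transient issue (e.g. network), retry fetching immediately.
garage block retry-now <block_hash>

# Last resort for permanently lost blocks: delete the objects referencing them.
garage block purge --yes <block_hash>
```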


# Metadata operations

## Metadata table resync

Garage automatically resyncs all entries stored in the metadata tables every hour,
to ensure that all nodes have the most up-to-date version of all the information
they should be holding.
The resync procedure is based on a Merkle tree, which allows differences between
nodes to be found efficiently.

In some special cases, e.g. before an upgrade, you might want to run a table
resync manually. This can be done using `garage repair tables`.

## Metadata table reference fixes

In some very rare cases where nodes are unavailable, some references between objects
are broken. For instance, if an object is deleted, the underlying versions or data
blocks may still be held by Garage. If you suspect that such corruption has occurred
in your cluster, you can run one of the following repair procedures:

- `garage repair versions`: checks that all versions belong to a non-deleted object, and purges any orphan version
- `garage repair block_refs`: checks that all block references belong to a non-deleted object version, and purges any orphan block reference (this will then allow the blocks to be garbage-collected)

@@ -1,6 +1,6 @@
+++
title = "Cluster layout management"
-weight = 50
+weight = 20
+++

The cluster layout in Garage is a table that assigns to each node a role in

@@ -1,6 +1,6 @@
+++
title = "Recovering from failures"
-weight = 35
+weight = 40
+++

Garage is meant to work on old, second-hand hardware.

doc/book/operations/upgrading.md (new file, 85 lines)

@@ -0,0 +1,85 @@
+++
title = "Upgrading Garage"
weight = 10
+++

Garage is a stateful clustered application, where all nodes are communicating together and share data structures.
This makes upgrades more difficult than for stateless applications, so you must be more careful when upgrading.
On a new version release, there are two possibilities:
- protocols and data structures remained the same ➡️ this is a **minor upgrade**
- protocols or data structures changed ➡️ this is a **major upgrade**

You can quickly know what type of update you will have to operate by looking at the version identifier:
when we require our users to do a major upgrade, we will always bump the first nonzero component of the version identifier
(e.g. from v0.7.2 to v0.8.0).
Conversely, for versions that only require a minor upgrade, the first nonzero component will always stay the same (e.g. from v0.8.0 to v0.8.1).

Major upgrades are designed to be run only between contiguous versions.
Example: migrations from v0.7.1 to v0.8.0 and from v0.7.0 to v0.8.2 are supported but migrations from v0.6.0 to v0.8.0 are not supported.

The `garage_build_info`
[Prometheus metric](@/documentation/reference-manual/monitoring.md) provides
an overview of which Garage versions are currently in use within a cluster.

## Minor upgrades

Minor upgrades do not imply cluster downtime.
Before upgrading, you should still read [the changelog](https://git.deuxfleurs.fr/Deuxfleurs/garage/releases) and ideally test your deployment on a staging cluster first.

When you are ready, start by checking the health of your cluster.
You can force some checks with `garage repair`; we recommend at least running `garage repair --all-nodes --yes tables`, which is very quick to run (less than a minute).
You will see that the command correctly terminated in the logs of your daemon, or using `garage worker list` (the repair workers should be in the `Done` state).
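
As a sketch, the pre-upgrade check described above could look like this:

```bash
# Check that all nodes are up and healthy.
garage status

# Force a quick metadata table sync on all nodes.
garage repair --all-nodes --yes tables

# The repair workers should eventually reach the Done state.
garage worker list
```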

Finally, you can simply upgrade nodes one by one.
For each node: stop it, install the new binary, edit the configuration if needed, restart it.

## Major upgrades

Major upgrades can be done with minimal downtime with a bit of preparation, but the simplest way is usually to put the cluster offline for the duration of the migration.
Before upgrading, you must read [the changelog](https://git.deuxfleurs.fr/Deuxfleurs/garage/releases) and you must test your deployment on a staging cluster first.

We write guides for each major upgrade; they are stored under the "Working Documents" section of this documentation.

### Major upgrades with full downtime

From a high level perspective, a major upgrade looks like this:

1. Disable API access (for instance in your reverse proxy, or by commenting the corresponding section in your Garage configuration file and restarting Garage)
2. Check that your cluster is idle
3. Make sure the health of your cluster is good (see `garage repair`)
4. Stop the whole cluster
5. Back up the metadata folder of all your nodes, so that you will be able to restore it if the upgrade fails (data blocks being immutable, they should not be impacted)
6. Install the new binary, update the configuration
7. Start the whole cluster
8. If needed, run the corresponding migration from `garage migrate`
9. Make sure the health of your cluster is good
10. Enable API access (reverse step 1)
11. Monitor your cluster while load comes back, check that all your applications are happy with this new version

### Major upgrades with minimal downtime

There is only one operation that has to be coordinated cluster-wide: the switch from one version of the internal RPC protocol to the next.
This means that an upgrade with very limited downtime can simply be performed from one major version to the next by restarting all nodes
simultaneously in the new version.
The downtime will simply be the time required for all nodes to stop and start again, which should be less than a minute.
If all nodes fail to stop and restart simultaneously, some nodes might be temporarily shut out from the cluster, as nodes using different RPC protocol
versions are prevented from talking to one another.

The entire procedure would look something like this:

1. Make sure the health of your cluster is good (see `garage repair`)

2. Take each node offline individually to back up its metadata folder, and bring it back online once the backup is done.
   You can do all of the nodes in a single zone at once as that won't impact global cluster availability.
   Do not try to make a backup of the metadata folder of a running node.

3. Prepare your binaries and configuration files for the new Garage version

4. Restart all nodes simultaneously in the new version (see the sketch after this list)

5. If any specific migration procedure is required, it is usually in one of the two cases:

   - It can be run on online nodes after the new version has started, during regular cluster operation.
   - It has to be run offline

For this last step, please refer to the specific documentation pertaining to the version upgrade you are doing.
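
Step 4 can be scripted; the following is only a rough sketch, assuming each
node runs Garage as a systemd service and that the hostnames are placeholders
for your own nodes:

```bash
# Restart all nodes as close to simultaneously as possible.
for host in node1 node2 node3; do
  ssh "$host" 'sudo systemctl restart garage' &
done
wait
```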

@@ -1,6 +1,6 @@
+++
title = "Quick Start"
-weight = 0
+weight = 10
sort_by = "weight"
template = "documentation.html"
+++

@@ -35,6 +35,9 @@ Place this binary somewhere in your `$PATH` so that you can invoke the `garage`
command directly (for instance you can copy the binary in `/usr/local/bin`
or in `~/.local/bin`).

+You may also check whether your distribution already includes a
+[binary package for Garage](@/documentation/cookbook/binary-packages.md).

If a binary of the latest version is not available for your architecture,
or if you want a build customized for your system,
you can [build Garage from source](@/documentation/cookbook/from-source.md).

@@ -42,25 +45,25 @@ you can [build Garage from source](@/documentation/cookbook/from-source.md).

## Configuring and starting Garage

-### Writing a first configuration file
+### Generating a first configuration file

This first configuration file should allow you to get started easily with the simplest
possible Garage deployment.
-**Save it as `/etc/garage.toml`.**
-You can also store it somewhere else, but you will have to specify `-c path/to/garage.toml`
-at each invocation of the `garage` binary (for example: `garage -c ./garage.toml server`, `garage -c ./garage.toml status`).

-```toml
+We will create it with the following command line,
+which generates unique and private secrets for security reasons:

+```bash
+cat > garage.toml <<EOF
metadata_dir = "/tmp/meta"
data_dir = "/tmp/data"
db_engine = "lmdb"

replication_mode = "none"

rpc_bind_addr = "[::]:3901"
rpc_public_addr = "127.0.0.1:3901"
-rpc_secret = "1799bccfd7411eddcf9ebd316bc1f5287ad12a68094e1c6ac6abde7e6feae1ec"
-
-bootstrap_peers = []
+rpc_secret = "$(openssl rand -hex 32)"

[s3_api]
s3_region = "garage"

@@ -71,12 +74,26 @@ root_domain = ".s3.garage.localhost"
bind_addr = "[::]:3902"
root_domain = ".web.garage.localhost"
index = "index.html"

+[k2v_api]
+api_bind_addr = "[::]:3904"

[admin]
api_bind_addr = "0.0.0.0:3903"
+admin_token = "$(openssl rand -base64 32)"
+EOF
```

-The `rpc_secret` value provided above is just an example. It will work, but in
-order to secure your cluster you will need to use another one. You can generate
-such a value with `openssl rand -hex 32`.
+Now that your configuration file has been created, you can put
+it in the right place. By default, garage looks at **`/etc/garage.toml`.**

+You can also store it somewhere else, but you will have to specify `-c path/to/garage.toml`
+at each invocation of the `garage` binary (for example: `garage -c ./garage.toml server`, `garage -c ./garage.toml status`).

+As you can see, the `rpc_secret` is a 32-byte hexadecimal string.
+You can regenerate it with `openssl rand -hex 32`.
+If you target a cluster deployment with multiple nodes, make sure that
+you use the same value for all nodes.

As you can see in the `metadata_dir` and `data_dir` parameters, we are saving Garage's data
in `/tmp` which gets erased when your system reboots. This means that data stored on this

@@ -219,6 +236,7 @@ Now that we have a bucket and a key, we need to give permissions to the key on t

```bash
garage bucket allow \
  --read \
  --write \
+  --owner \
  nextcloud-bucket \
  --key nextcloud-app-key
```

|
|||
|
||||
## Uploading and downlading from Garage
|
||||
|
||||
We recommend the use of MinIO Client to interact with Garage files (`mc`).
|
||||
Instructions to install it and use it are provided on the
|
||||
[MinIO website](https://docs.min.io/docs/minio-client-quickstart-guide.html).
|
||||
Before reading the following, you need a working `mc` command on your path.
|
||||
To download and upload files on garage, we can use a third-party tool named `awscli`.
|
||||
|
||||
Note that on certain Linux distributions such as Arch Linux, the Minio client binary
|
||||
is called `mcli` instead of `mc` (to avoid name clashes with the Midnight Commander).
|
||||
|
||||
### Configure `mc`
|
||||
### Install and configure `awscli`
|
||||
|
||||
You need your access key and secret key created above.
|
||||
We will assume you are invoking `mc` on the same machine as the Garage server,
|
||||
your S3 API endpoint is therefore `http://127.0.0.1:3900`.
|
||||
For this whole configuration, you must set an alias name: we chose `my-garage`, that you will used for all commands.
|
||||
|
||||
Adapt the following command accordingly and run it:
|
||||
If you have python on your system, you can install it with:
|
||||
|
||||
```bash
|
||||
mc alias set \
|
||||
my-garage \
|
||||
http://127.0.0.1:3900 \
|
||||
<access key> \
|
||||
<secret key> \
|
||||
--api S3v4
|
||||
python -m pip install --user awscli
|
||||
```
|
||||
|
||||
### Use `mc`
|
||||
|
||||
You can not list buckets from `mc` currently.
|
||||
|
||||
But the following commands and many more should work:
|
||||
Now that `awscli` is installed, you must configure it to talk to your Garage instance,
|
||||
with your key. There are multiple ways to do that, the simplest one is to create a file
|
||||
named `~/.awsrc` with this content:
|
||||
|
||||
```bash
|
||||
mc cp image.png my-garage/nextcloud-bucket
|
||||
mc cp my-garage/nextcloud-bucket/image.png .
|
||||
mc ls my-garage/nextcloud-bucket
|
||||
mc mirror localdir/ my-garage/another-bucket
|
||||
export AWS_ACCESS_KEY_ID=xxxx # put your Key ID here
|
||||
export AWS_SECRET_ACCESS_KEY=xxxx # put your Secret key here
|
||||
export AWS_DEFAULT_REGION='garage'
|
||||
export AWS_ENDPOINT='http://localhost:3900'
|
||||
|
||||
function aws { command aws --endpoint-url $AWS_ENDPOINT $@ ; }
|
||||
aws --version
|
||||
```
|
||||
|
||||
Now, each time you want to use `awscli` on this target, run:
|
||||
|
||||
```bash
|
||||
source ~/.awsrc
|
||||
```
|
||||
|
||||
*You can create multiple files with different names if you
|
||||
have multiple Garage clusters or different keys.
|
||||
Switching from one cluster to another is as simple as
|
||||
sourcing the right file.*
|
||||
|
||||
### Example usage of `awscli`
|
||||
|
||||
```bash
|
||||
# list buckets
|
||||
aws s3 ls
|
||||
|
||||
# list objects of a bucket
|
||||
aws s3 ls s3://nextcloud-bucket
|
||||
|
||||
# copy from your filesystem to garage
|
||||
aws s3 cp /proc/cpuinfo s3://nextcloud-bucket/cpuinfo.txt
|
||||
|
||||
# copy from garage to your filesystem
|
||||
aws s3 cp s3://nextcloud-bucket/cpuinfo.txt /tmp/cpuinfo.txt
|
||||
```
|
||||
|
||||
Note that you can use `awscli` for more advanced operations like
|
||||
creating a bucket, pre-signing a request or managing your website.
|
||||
[Read the full documentation to know more](https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3/index.html).
|
||||
|
||||
Some features are however not implemented like ACL or policy.
|
||||
Check [our s3 compatibility list](@/documentation/reference-manual/s3-compatibility.md).
|
||||
|
||||
### Other tools for interacting with Garage
|
||||
|
||||
The following tools can also be used to send and recieve files from/to Garage:
|
||||
|
||||
- the [AWS CLI](https://aws.amazon.com/cli/)
|
||||
- [`rclone`](https://rclone.org/)
|
||||
- [Cyberduck](https://cyberduck.io/)
|
||||
- [`s3cmd`](https://s3tools.org/s3cmd)
|
||||
- [minio-client](@/documentation/connect/cli.md#minio-client)
|
||||
- [s3cmd](@/documentation/connect/cli.md#s3cmd)
|
||||
- [rclone](@/documentation/connect/cli.md#rclone)
|
||||
- [Cyberduck](@/documentation/connect/cli.md#cyberduck)
|
||||
- [WinSCP](@/documentation/connect/cli.md#winscp)
|
||||
|
||||
Refer to the ["Integrations" section](@/documentation/connect/_index.md) to learn how to
|
||||
configure application and command line utilities to integrate with Garage.
|
||||
An exhaustive list is maintained in the ["Integrations" > "Browsing tools" section](@/documentation/connect/_index.md).
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
+++
|
||||
title = "Reference Manual"
|
||||
weight = 4
|
||||
weight = 60
|
||||
sort_by = "weight"
|
||||
template = "documentation.html"
|
||||
+++
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
+++
|
||||
title = "Administration API"
|
||||
weight = 60
|
||||
weight = 40
|
||||
+++
|
||||
|
||||
The Garage administration API is accessible through a dedicated server whose
|
||||
|
@ -39,606 +39,105 @@ Authorization: Bearer <token>
|
|||
|
||||
## Administration API endpoints
|
||||
|
||||
### Metrics-related endpoints
|
||||
|
||||
#### Metrics `GET /metrics`
|
||||
### Metrics `GET /metrics`
|
||||
|
||||
Returns internal Garage metrics in Prometheus format.
|
||||
The metrics are directly documented when returned by the API.
|
||||
|
||||
**Example:**
|
||||
|
||||
```
|
||||
$ curl -i http://localhost:3903/metrics
|
||||
HTTP/1.1 200 OK
|
||||
content-type: text/plain; version=0.0.4
|
||||
content-length: 12145
|
||||
date: Tue, 08 Aug 2023 07:25:05 GMT
|
||||
|
||||
# HELP api_admin_error_counter Number of API calls to the various Admin API endpoints that resulted in errors
|
||||
# TYPE api_admin_error_counter counter
|
||||
api_admin_error_counter{api_endpoint="CheckWebsiteEnabled",status_code="400"} 1
|
||||
api_admin_error_counter{api_endpoint="CheckWebsiteEnabled",status_code="404"} 3
|
||||
# HELP api_admin_request_counter Number of API calls to the various Admin API endpoints
|
||||
# TYPE api_admin_request_counter counter
|
||||
api_admin_request_counter{api_endpoint="CheckWebsiteEnabled"} 7
|
||||
api_admin_request_counter{api_endpoint="Health"} 3
|
||||
# HELP api_admin_request_duration Duration of API calls to the various Admin API endpoints
|
||||
...
|
||||
```
|
||||
|
||||
### Health `GET /health`
|
||||
|
||||
Returns `200 OK` if enough nodes are up to have a quorum (i.e. can serve requests),
otherwise returns `503 Service Unavailable`.
|
||||
|
||||
**Example:**
|
||||
|
||||
```
|
||||
$ curl -i http://localhost:3903/health
|
||||
HTTP/1.1 200 OK
|
||||
content-type: text/plain
|
||||
content-length: 102
|
||||
date: Tue, 08 Aug 2023 07:22:38 GMT
|
||||
|
||||
Garage is fully operational
|
||||
Consult the full health check API endpoint at /v0/health for more details
|
||||
```
|
||||
|
||||
### On-demand TLS `GET /check`
|
||||
|
||||
To prevent abuse of on-demand TLS, Caddy developers have specified an endpoint that can be queried by the reverse proxy
to know if a given domain is allowed to get a certificate. Garage implements this endpoint to tell if a given domain is handled by Garage or is garbage.
|
||||
|
||||
Garage responds with the following logic:
|
||||
- If the domain matches the pattern `<bucket-name>.<s3_api.root_domain>`, returns 200 OK
|
||||
- If the domain matches the pattern `<bucket-name>.<s3_web.root_domain>` and website is configured for `<bucket>`, returns 200 OK
|
||||
- If the domain matches the pattern `<bucket-name>` and website is configured for `<bucket>`, returns 200 OK
|
||||
- Otherwise, returns 404 Not Found, 400 Bad Request or a 5xx error.
|
||||
|
||||
*Note 1: in path-style URL mode, there is only a single domain, which is not known by Garage, so it is not supported by this API endpoint.
You must manually declare that domain in your reverse proxy. The same applies to K2V.*

*Note 2: buckets in a user's namespace are not yet supported by this endpoint; this is a current limitation.*
|
||||
|
||||
**Example:** Suppose a Garage instance configured with `s3_api.root_domain = .s3.garage.localhost` and `s3_web.root_domain = .web.garage.localhost`.
|
||||
|
||||
With a private `media` bucket (name in the global namespace, website is disabled), the endpoint will feature the following behavior:
|
||||
|
||||
```
|
||||
$ curl -so /dev/null -w "%{http_code}" http://localhost:3903/check?domain=media.s3.garage.localhost
|
||||
200
|
||||
$ curl -so /dev/null -w "%{http_code}" http://localhost:3903/check?domain=media
|
||||
400
|
||||
$ curl -so /dev/null -w "%{http_code}" http://localhost:3903/check?domain=media.web.garage.localhost
|
||||
400
|
||||
```
|
||||
|
||||
With a public `example.com` bucket (name in the global namespace, website is activated), the endpoint will feature the following behavior:
|
||||
|
||||
```
|
||||
$ curl -so /dev/null -w "%{http_code}" http://localhost:3903/check?domain=example.com.s3.garage.localhost
|
||||
200
|
||||
$ curl -so /dev/null -w "%{http_code}" http://localhost:3903/check?domain=example.com
|
||||
200
|
||||
$ curl -so /dev/null -w "%{http_code}" http://localhost:3903/check?domain=example.com.web.garage.localhost
|
||||
200
|
||||
```
|
||||
|
||||
|
||||
**References:**
|
||||
- [Using On-Demand TLS](https://caddyserver.com/docs/automatic-https#using-on-demand-tls)
|
||||
- [Add option for a backend check to approve use of on-demand TLS](https://github.com/caddyserver/caddy/pull/1939)
|
||||
- [Serving tens of thousands of domains over HTTPS with Caddy](https://caddy.community/t/serving-tens-of-thousands-of-domains-over-https-with-caddy/11179)
|
||||
|
||||
### Cluster operations
|
||||
|
||||
#### GetClusterStatus `GET /v0/status`
|
||||
These endpoints are defined on a dedicated [Redocly page](https://garagehq.deuxfleurs.fr/api/garage-admin-v0.html). You can also download its [OpenAPI specification](https://garagehq.deuxfleurs.fr/api/garage-admin-v0.yml).
|
||||
|
||||
Returns the cluster's current status in JSON, including:
|
||||
Requesting the API from the command line can be as simple as running:
|
||||
|
||||
- ID of the node being queried and its version of the Garage daemon
|
||||
- Live nodes
|
||||
- Currently configured cluster layout
|
||||
- Staged changes to the cluster layout
|
||||
|
||||
Example response body:
|
||||
|
||||
```json
|
||||
{
|
||||
"node": "ec79480e0ce52ae26fd00c9da684e4fa56658d9c64cdcecb094e936de0bfe71f",
|
||||
"garage_version": "git:v0.8.0",
|
||||
"knownNodes": {
|
||||
"ec79480e0ce52ae26fd00c9da684e4fa56658d9c64cdcecb094e936de0bfe71f": {
|
||||
"addr": "10.0.0.11:3901",
|
||||
"is_up": true,
|
||||
"last_seen_secs_ago": 9,
|
||||
"hostname": "node1"
|
||||
},
|
||||
"4a6ae5a1d0d33bf895f5bb4f0a418b7dc94c47c0dd2eb108d1158f3c8f60b0ff": {
|
||||
"addr": "10.0.0.12:3901",
|
||||
"is_up": true,
|
||||
"last_seen_secs_ago": 1,
|
||||
"hostname": "node2"
|
||||
},
|
||||
"23ffd0cdd375ebff573b20cc5cef38996b51c1a7d6dbcf2c6e619876e507cf27": {
|
||||
"addr": "10.0.0.21:3901",
|
||||
"is_up": true,
|
||||
"last_seen_secs_ago": 7,
|
||||
"hostname": "node3"
|
||||
},
|
||||
"e2ee7984ee65b260682086ec70026165903c86e601a4a5a501c1900afe28d84b": {
|
||||
"addr": "10.0.0.22:3901",
|
||||
"is_up": true,
|
||||
"last_seen_secs_ago": 1,
|
||||
"hostname": "node4"
|
||||
}
|
||||
},
|
||||
"layout": {
|
||||
"version": 12,
|
||||
"roles": {
|
||||
"ec79480e0ce52ae26fd00c9da684e4fa56658d9c64cdcecb094e936de0bfe71f": {
|
||||
"zone": "dc1",
|
||||
"capacity": 4,
|
||||
"tags": [
|
||||
"node1"
|
||||
]
|
||||
},
|
||||
"4a6ae5a1d0d33bf895f5bb4f0a418b7dc94c47c0dd2eb108d1158f3c8f60b0ff": {
|
||||
"zone": "dc1",
|
||||
"capacity": 6,
|
||||
"tags": [
|
||||
"node2"
|
||||
]
|
||||
},
|
||||
"23ffd0cdd375ebff573b20cc5cef38996b51c1a7d6dbcf2c6e619876e507cf27": {
|
||||
"zone": "dc2",
|
||||
"capacity": 10,
|
||||
"tags": [
|
||||
"node3"
|
||||
]
|
||||
}
|
||||
},
|
||||
"stagedRoleChanges": {
|
||||
"e2ee7984ee65b260682086ec70026165903c86e601a4a5a501c1900afe28d84b": {
|
||||
"zone": "dc2",
|
||||
"capacity": 5,
|
||||
"tags": [
|
||||
"node4"
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```bash
|
||||
curl -H 'Authorization: Bearer s3cr3t' http://localhost:3903/v0/status | jq
|
||||
```
|
||||
|
||||
#### ConnectClusterNodes `POST /v0/connect`
|
||||
|
||||
Instructs this Garage node to connect to other Garage nodes at specified addresses.
|
||||
|
||||
Example request body:
|
||||
|
||||
```json
|
||||
[
|
||||
"ec79480e0ce52ae26fd00c9da684e4fa56658d9c64cdcecb094e936de0bfe71f@10.0.0.11:3901",
|
||||
"4a6ae5a1d0d33bf895f5bb4f0a418b7dc94c47c0dd2eb108d1158f3c8f60b0ff@10.0.0.12:3901"
|
||||
]
|
||||
```
|
||||
|
||||
The format of the string for a node to connect to is: `<node ID>@<ip address>:<port>`, same as in the `garage node connect` CLI call.
|
||||
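As a sketch, such a request could be issued with `curl` like this (reusing the `localhost:3903` endpoint and `s3cr3t` token from the example above; the node ID is illustrative):

```bash
# Ask this node to connect to one peer; the response reports success per address
curl -X POST -H 'Authorization: Bearer s3cr3t' \
  -d '["ec79480e0ce52ae26fd00c9da684e4fa56658d9c64cdcecb094e936de0bfe71f@10.0.0.11:3901"]' \
  http://localhost:3903/v0/connect
```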
|
||||
Example response:
|
||||
|
||||
```json
|
||||
[
|
||||
{
|
||||
"success": true,
|
||||
"error": null
|
||||
},
|
||||
{
|
||||
"success": false,
|
||||
"error": "Handshake error"
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
#### GetClusterLayout `GET /v0/layout`
|
||||
|
||||
Returns the cluster's current layout in JSON, including:
|
||||
|
||||
- Currently configured cluster layout
|
||||
- Staged changes to the cluster layout
|
||||
|
||||
(the info returned by this endpoint is a subset of the info returned by GetClusterStatus)
|
||||
|
||||
Example response body:
|
||||
|
||||
```json
|
||||
{
|
||||
"version": 12,
|
||||
"roles": {
|
||||
"ec79480e0ce52ae26fd00c9da684e4fa56658d9c64cdcecb094e936de0bfe71f": {
|
||||
"zone": "dc1",
|
||||
"capacity": 4,
|
||||
"tags": [
|
||||
"node1"
|
||||
]
|
||||
},
|
||||
"4a6ae5a1d0d33bf895f5bb4f0a418b7dc94c47c0dd2eb108d1158f3c8f60b0ff": {
|
||||
"zone": "dc1",
|
||||
"capacity": 6,
|
||||
"tags": [
|
||||
"node2"
|
||||
]
|
||||
},
|
||||
"23ffd0cdd375ebff573b20cc5cef38996b51c1a7d6dbcf2c6e619876e507cf27": {
|
||||
"zone": "dc2",
|
||||
"capacity": 10,
|
||||
"tags": [
|
||||
"node3"
|
||||
]
|
||||
}
|
||||
},
|
||||
"stagedRoleChanges": {
|
||||
"e2ee7984ee65b260682086ec70026165903c86e601a4a5a501c1900afe28d84b": {
|
||||
"zone": "dc2",
|
||||
"capacity": 5,
|
||||
"tags": [
|
||||
"node4"
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
#### UpdateClusterLayout `POST /v0/layout`
|
||||
|
||||
Send modifications to the cluster layout. These modifications will
|
||||
be included in the staged role changes, visible in subsequent calls
|
||||
of `GetClusterLayout`. Once the set of staged changes is satisfactory,
|
||||
the user may call `ApplyClusterLayout` to apply the staged changes,
or `RevertClusterLayout` to clear all of the staged changes in
the layout.
|
||||
|
||||
Request body format:
|
||||
|
||||
```json
|
||||
{
|
||||
<node_id>: {
|
||||
"capacity": <new_capacity>,
|
||||
"zone": <new_zone>,
|
||||
"tags": [
|
||||
<new_tag>,
|
||||
...
|
||||
]
|
||||
},
|
||||
<node_id_to_remove>: null,
|
||||
...
|
||||
}
|
||||
```
|
||||
|
||||
Contrary to the CLI that may update only a subset of the fields
|
||||
`capacity`, `zone` and `tags`, when calling this API all of these
|
||||
values must be specified.
|
||||
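For instance, a staged change could be submitted like this (a sketch; the node ID and values are taken from the example layout above):

```bash
# Stage a role for node4: zone, capacity and tags must all be given
curl -X POST -H 'Authorization: Bearer s3cr3t' \
  -d '{
        "e2ee7984ee65b260682086ec70026165903c86e601a4a5a501c1900afe28d84b":
          { "zone": "dc2", "capacity": 5, "tags": ["node4"] }
      }' \
  http://localhost:3903/v0/layout
```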
|
||||
|
||||
#### ApplyClusterLayout `POST /v0/layout/apply`
|
||||
|
||||
Applies to the cluster the layout changes currently registered as
|
||||
staged layout changes.
|
||||
|
||||
Request body format:
|
||||
|
||||
```json
|
||||
{
|
||||
"version": 13
|
||||
}
|
||||
```
|
||||
|
||||
Similarly to the CLI, the body must include the version of the new layout
|
||||
that will be created, which MUST be 1 + the value of the currently
|
||||
existing layout in the cluster.
|
||||
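Continuing the example above (current layout version 12), applying the staged changes could look like this:

```bash
# Apply staged changes, creating layout version 13 = 12 + 1
curl -X POST -H 'Authorization: Bearer s3cr3t' \
  -d '{ "version": 13 }' \
  http://localhost:3903/v0/layout/apply
```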
|
||||
#### RevertClusterLayout `POST /v0/layout/revert`
|
||||
|
||||
Clears all of the staged layout changes.
|
||||
|
||||
Request body format:
|
||||
|
||||
```json
|
||||
{
|
||||
"version": 13
|
||||
}
|
||||
```
|
||||
|
||||
Reverting the staged changes is done by incrementing the version number
|
||||
and clearing the contents of the staged change list.
|
||||
Similarly to the CLI, the body must include the incremented
|
||||
version number, which MUST be 1 + the value of the currently
|
||||
existing layout in the cluster.
|
||||
|
||||
|
||||
### Access key operations
|
||||
|
||||
#### ListKeys `GET /v0/key`
|
||||
|
||||
Returns all API access keys in the cluster.
|
||||
|
||||
Example response:
|
||||
|
||||
```json
|
||||
[
|
||||
{
|
||||
"id": "GK31c2f218a2e44f485b94239e",
|
||||
"name": "test"
|
||||
},
|
||||
{
|
||||
"id": "GKe10061ac9c2921f09e4c5540",
|
||||
"name": "test2"
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
#### CreateKey `POST /v0/key`
|
||||
|
||||
Creates a new API access key.
|
||||
|
||||
Request body format:
|
||||
|
||||
```json
|
||||
{
|
||||
"name": "NameOfMyKey"
|
||||
}
|
||||
```
|
||||
|
||||
#### ImportKey `POST /v0/key/import`
|
||||
|
||||
Imports an existing API key.
|
||||
|
||||
Request body format:
|
||||
|
||||
```json
|
||||
{
|
||||
"accessKeyId": "GK31c2f218a2e44f485b94239e",
|
||||
"secretAccessKey": "b892c0665f0ada8a4755dae98baa3b133590e11dae3bcc1f9d769d67f16c3835",
|
||||
"name": "NameOfMyKey"
|
||||
}
|
||||
```
|
||||
|
||||
#### GetKeyInfo `GET /v0/key?id=<access key id>`
|
||||
#### GetKeyInfo `GET /v0/key?search=<pattern>`
|
||||
|
||||
Returns information about the requested API access key.
|
||||
|
||||
If `id` is set, the key is looked up using its exact identifier (faster).
|
||||
If `search` is set, the key is looked up using its name or prefix
|
||||
of identifier (slower, all keys are enumerated to do this).
|
||||
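Both lookup modes can be sketched with `curl` (key ID and token reused from earlier examples):

```bash
# Exact lookup by key identifier (fast)
curl -H 'Authorization: Bearer s3cr3t' \
  'http://localhost:3903/v0/key?id=GK31c2f218a2e44f485b94239e'

# Lookup by name or identifier prefix (slower: enumerates all keys)
curl -H 'Authorization: Bearer s3cr3t' \
  'http://localhost:3903/v0/key?search=test'
```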
|
||||
Example response:
|
||||
|
||||
```json
|
||||
{
|
||||
"name": "test",
|
||||
"accessKeyId": "GK31c2f218a2e44f485b94239e",
|
||||
"secretAccessKey": "b892c0665f0ada8a4755dae98baa3b133590e11dae3bcc1f9d769d67f16c3835",
|
||||
"permissions": {
|
||||
"createBucket": false
|
||||
},
|
||||
"buckets": [
|
||||
{
|
||||
"id": "70dc3bed7fe83a75e46b66e7ddef7d56e65f3c02f9f80b6749fb97eccb5e1033",
|
||||
"globalAliases": [
|
||||
"test2"
|
||||
],
|
||||
"localAliases": [],
|
||||
"permissions": {
|
||||
"read": true,
|
||||
"write": true,
|
||||
"owner": false
|
||||
}
|
||||
},
|
||||
{
|
||||
"id": "d7452a935e663fc1914f3a5515163a6d3724010ce8dfd9e4743ca8be5974f995",
|
||||
"globalAliases": [
|
||||
"test3"
|
||||
],
|
||||
"localAliases": [],
|
||||
"permissions": {
|
||||
"read": true,
|
||||
"write": true,
|
||||
"owner": false
|
||||
}
|
||||
},
|
||||
{
|
||||
"id": "e6a14cd6a27f48684579ec6b381c078ab11697e6bc8513b72b2f5307e25fff9b",
|
||||
"globalAliases": [],
|
||||
"localAliases": [
|
||||
"test"
|
||||
],
|
||||
"permissions": {
|
||||
"read": true,
|
||||
"write": true,
|
||||
"owner": true
|
||||
}
|
||||
},
|
||||
{
|
||||
"id": "96470e0df00ec28807138daf01915cfda2bee8eccc91dea9558c0b4855b5bf95",
|
||||
"globalAliases": [
|
||||
"alex"
|
||||
],
|
||||
"localAliases": [],
|
||||
"permissions": {
|
||||
"read": true,
|
||||
"write": true,
|
||||
"owner": true
|
||||
}
|
||||
}
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
#### DeleteKey `DELETE /v0/key?id=<access key id>`
|
||||
|
||||
Deletes an API access key.
|
||||
|
||||
#### UpdateKey `POST /v0/key?id=<access key id>`
|
||||
|
||||
Updates information about the specified API access key.
|
||||
|
||||
Request body format:
|
||||
|
||||
```json
|
||||
{
|
||||
"name": "NameOfMyKey",
|
||||
"allow": {
|
||||
"createBucket": true,
|
||||
},
|
||||
"deny": {}
|
||||
}
|
||||
```
|
||||
|
||||
All fields (`name`, `allow` and `deny`) are optional.
|
||||
If they are present, the corresponding modifications are applied to the key, otherwise nothing is changed.
|
||||
The possible flags in `allow` and `deny` are: `createBucket`.
|
||||
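For example, the following sketch renames a key and grants it the right to create buckets (identifiers reused from earlier examples):

```bash
# Rename the key and activate the createBucket flag; absent fields stay unchanged
curl -X POST -H 'Authorization: Bearer s3cr3t' \
  -d '{ "name": "NameOfMyKey", "allow": { "createBucket": true } }' \
  'http://localhost:3903/v0/key?id=GK31c2f218a2e44f485b94239e'
```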
|
||||
|
||||
### Bucket operations
|
||||
|
||||
#### ListBuckets `GET /v0/bucket`
|
||||
|
||||
Returns all storage buckets in the cluster.
|
||||
|
||||
Example response:
|
||||
|
||||
```json
|
||||
[
|
||||
{
|
||||
"id": "70dc3bed7fe83a75e46b66e7ddef7d56e65f3c02f9f80b6749fb97eccb5e1033",
|
||||
"globalAliases": [
|
||||
"test2"
|
||||
],
|
||||
"localAliases": []
|
||||
},
|
||||
{
|
||||
"id": "96470e0df00ec28807138daf01915cfda2bee8eccc91dea9558c0b4855b5bf95",
|
||||
"globalAliases": [
|
||||
"alex"
|
||||
],
|
||||
"localAliases": []
|
||||
},
|
||||
{
|
||||
"id": "d7452a935e663fc1914f3a5515163a6d3724010ce8dfd9e4743ca8be5974f995",
|
||||
"globalAliases": [
|
||||
"test3"
|
||||
],
|
||||
"localAliases": []
|
||||
},
|
||||
{
|
||||
"id": "e6a14cd6a27f48684579ec6b381c078ab11697e6bc8513b72b2f5307e25fff9b",
|
||||
"globalAliases": [],
|
||||
"localAliases": [
|
||||
{
|
||||
"accessKeyId": "GK31c2f218a2e44f485b94239e",
|
||||
"alias": "test"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
```
|
||||
|
||||
#### GetBucketInfo `GET /v0/bucket?id=<bucket id>`
|
||||
#### GetBucketInfo `GET /v0/bucket?globalAlias=<alias>`
|
||||
|
||||
Returns information about the requested storage bucket.
|
||||
|
||||
If `id` is set, the bucket is looked up using its exact identifier.
|
||||
If `globalAlias` is set, the bucket is looked up using its global alias.
|
||||
(both are fast)
|
||||
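Both lookup modes, sketched with `curl` (the token is reused from earlier examples and the alias comes from the bucket listing above):

```bash
# Lookup by exact bucket identifier
curl -H 'Authorization: Bearer s3cr3t' \
  'http://localhost:3903/v0/bucket?id=<bucket id>'

# Lookup by global alias
curl -H 'Authorization: Bearer s3cr3t' \
  'http://localhost:3903/v0/bucket?globalAlias=test2'
```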
|
||||
Example response:
|
||||
|
||||
```json
|
||||
{
|
||||
"id": "afa8f0a22b40b1247ccd0affb869b0af5cff980924a20e4b5e0720a44deb8d39",
|
||||
"globalAliases": [],
|
||||
"websiteAccess": false,
|
||||
"websiteConfig": null,
|
||||
"keys": [
|
||||
{
|
||||
"accessKeyId": "GK31c2f218a2e44f485b94239e",
|
||||
"name": "Imported key",
|
||||
"permissions": {
|
||||
"read": true,
|
||||
"write": true,
|
||||
"owner": true
|
||||
},
|
||||
"bucketLocalAliases": [
|
||||
"debug"
|
||||
]
|
||||
}
|
||||
],
|
||||
"objects": 14827,
|
||||
"bytes": 13189855625,
|
||||
"unfinshedUploads": 0,
|
||||
"quotas": {
|
||||
"maxSize": null,
|
||||
"maxObjects": null
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
#### CreateBucket `POST /v0/bucket`
|
||||
|
||||
Creates a new storage bucket.
|
||||
|
||||
Request body format:
|
||||
|
||||
```json
|
||||
{
|
||||
"globalAlias": "NameOfMyBucket"
|
||||
}
|
||||
```
|
||||
|
||||
OR
|
||||
|
||||
```json
|
||||
{
|
||||
"localAlias": {
|
||||
"accessKeyId": "GK31c2f218a2e44f485b94239e",
|
||||
"alias": "NameOfMyBucket",
|
||||
"allow": {
|
||||
"read": true,
|
||||
"write": true,
|
||||
"owner": false
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
OR
|
||||
|
||||
```json
|
||||
{}
|
||||
```
|
||||
|
||||
Creates a new bucket, either with a global alias, a local one,
|
||||
or no alias at all.
|
||||
|
||||
Technically, you can also specify both `globalAlias` and `localAlias` and that would create
|
||||
two aliases, but I don't see why you would want to do that.
|
||||
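A minimal sketch of the first variant with `curl`:

```bash
# Create a bucket carrying a single global alias
curl -X POST -H 'Authorization: Bearer s3cr3t' \
  -d '{ "globalAlias": "NameOfMyBucket" }' \
  http://localhost:3903/v0/bucket
```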
|
||||
#### DeleteBucket `DELETE /v0/bucket?id=<bucket id>`
|
||||
|
||||
Deletes a storage bucket. A bucket cannot be deleted if it is not empty.
|
||||
|
||||
Warning: this will delete all aliases associated with the bucket!
|
||||
|
||||
#### UpdateBucket `PUT /v0/bucket?id=<bucket id>`
|
||||
|
||||
Updates configuration of the given bucket.
|
||||
|
||||
Request body format:
|
||||
|
||||
```json
|
||||
{
|
||||
"websiteAccess": {
|
||||
"enabled": true,
|
||||
"indexDocument": "index.html",
|
||||
"errorDocument": "404.html"
|
||||
},
|
||||
"quotas": {
|
||||
"maxSize": 19029801,
|
||||
"maxObjects": null,
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
All fields (`websiteAccess` and `quotas`) are optional.
|
||||
If they are present, the corresponding modifications are applied to the bucket, otherwise nothing is changed.
|
||||
|
||||
In `websiteAccess`: if `enabled` is `true`, `indexDocument` must be specified.
|
||||
The field `errorDocument` is optional; if no error document is set, a generic
error message is displayed when errors happen. Conversely, if `enabled` is
`false`, neither `indexDocument` nor `errorDocument` may be specified.
|
||||
|
||||
In `quotas`: new values of `maxSize` and `maxObjects` must both be specified, or set to `null`
|
||||
to remove the quotas. An absent value will be considered the same as a `null`. It is not possible
|
||||
to change only one of the two quotas.
|
||||
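Putting it together, a sketch of a full update request (values from the example body above; the bucket ID placeholder must be filled in):

```bash
# Enable website access and set a size quota in a single call
curl -X PUT -H 'Authorization: Bearer s3cr3t' \
  -d '{
        "websiteAccess": { "enabled": true, "indexDocument": "index.html", "errorDocument": "404.html" },
        "quotas": { "maxSize": 19029801, "maxObjects": null }
      }' \
  'http://localhost:3903/v0/bucket?id=<bucket id>'
```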
|
||||
### Operations on permissions for keys on buckets
|
||||
|
||||
#### BucketAllowKey `POST /v0/bucket/allow`
|
||||
|
||||
Allows a key to do read/write/owner operations on a bucket.
|
||||
|
||||
Request body format:
|
||||
|
||||
```json
|
||||
{
|
||||
"bucketId": "e6a14cd6a27f48684579ec6b381c078ab11697e6bc8513b72b2f5307e25fff9b",
|
||||
"accessKeyId": "GK31c2f218a2e44f485b94239e",
|
||||
"permissions": {
|
||||
"read": true,
|
||||
"write": true,
|
||||
"owner": true
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Flags in `permissions` which have the value `true` will be activated.
|
||||
Other flags will remain unchanged.
|
||||
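As a sketch (IDs reused from the example body above):

```bash
# Activate read and write for this key; "owner": false leaves that flag untouched
curl -X POST -H 'Authorization: Bearer s3cr3t' \
  -d '{
        "bucketId": "e6a14cd6a27f48684579ec6b381c078ab11697e6bc8513b72b2f5307e25fff9b",
        "accessKeyId": "GK31c2f218a2e44f485b94239e",
        "permissions": { "read": true, "write": true, "owner": false }
      }' \
  http://localhost:3903/v0/bucket/allow
```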
|
||||
#### BucketDenyKey `POST /v0/bucket/deny`
|
||||
|
||||
Denies a key from doing read/write/owner operations on a bucket.
|
||||
|
||||
Request body format:
|
||||
|
||||
```json
|
||||
{
|
||||
"bucketId": "e6a14cd6a27f48684579ec6b381c078ab11697e6bc8513b72b2f5307e25fff9b",
|
||||
"accessKeyId": "GK31c2f218a2e44f485b94239e",
|
||||
"permissions": {
|
||||
"read": false,
|
||||
"write": false,
|
||||
"owner": true
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
Flags in `permissions` which have the value `true` will be deactivated.
|
||||
Other flags will remain unchanged.
|
||||
|
||||
|
||||
### Operations on bucket aliases
|
||||
|
||||
#### GlobalAliasBucket `PUT /v0/bucket/alias/global?id=<bucket id>&alias=<global alias>`
|
||||
|
||||
Empty body. Creates a global alias for a bucket.
|
||||
|
||||
#### GlobalUnaliasBucket `DELETE /v0/bucket/alias/global?id=<bucket id>&alias=<global alias>`
|
||||
|
||||
Removes a global alias for a bucket.
|
||||
|
||||
#### LocalAliasBucket `PUT /v0/bucket/alias/local?id=<bucket id>&accessKeyId=<access key ID>&alias=<local alias>`
|
||||
|
||||
Empty body. Creates a local alias for a bucket in the namespace of a specific access key.
|
||||
|
||||
#### LocalUnaliasBucket `DELETE /v0/bucket/alias/local?id=<bucket id>&accessKeyId=<access key ID>&alias=<local alias>`
|
||||
|
||||
Removes a local alias for a bucket in the namespace of a specific access key.
|
||||
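A sketch of adding and then removing a global alias (placeholders to be filled in):

```bash
# Create, then delete, a global alias on a bucket
curl -X PUT -H 'Authorization: Bearer s3cr3t' \
  'http://localhost:3903/v0/bucket/alias/global?id=<bucket id>&alias=my-alias'
curl -X DELETE -H 'Authorization: Bearer s3cr3t' \
  'http://localhost:3903/v0/bucket/alias/global?id=<bucket id>&alias=my-alias'
```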
|
||||
For more advanced use cases, we recommend using an SDK.
|
||||
[Go to the "Build your own app" section to know how to use our SDKs](@/documentation/build/_index.md)
|
||||
|
|
|
@ -3,6 +3,8 @@ title = "Configuration file format"
|
|||
weight = 20
|
||||
+++
|
||||
|
||||
## Full example
|
||||
|
||||
Here is an example `garage.toml` configuration file that illustrates all of the possible options:
|
||||
|
||||
```toml
|
||||
|
@ -13,8 +15,9 @@ db_engine = "lmdb"
|
|||
|
||||
block_size = 1048576
|
||||
|
||||
sled_cache_capacity = 134217728
|
||||
sled_cache_capacity = "128MiB"
|
||||
sled_flush_every_ms = 2000
|
||||
lmdb_map_size = "1T"
|
||||
|
||||
replication_mode = "3"
|
||||
|
||||
|
@ -33,12 +36,18 @@ bootstrap_peers = [
|
|||
|
||||
|
||||
[consul_discovery]
|
||||
api = "catalog"
|
||||
consul_http_addr = "http://127.0.0.1:8500"
|
||||
service_name = "garage-daemon"
|
||||
ca_cert = "/etc/consul/consul-ca.crt"
|
||||
client_cert = "/etc/consul/consul-client.crt"
|
||||
client_key = "/etc/consul/consul-key.crt"
|
||||
# for `agent` API mode, unset client_cert and client_key, and optionally enable `token`
|
||||
# token = "abcdef-01234-56789"
|
||||
tls_skip_verify = false
|
||||
tags = [ "dns-enabled" ]
|
||||
meta = { dns-acl = "allow trusted" }
|
||||
|
||||
|
||||
[kubernetes_discovery]
|
||||
namespace = "garage"
|
||||
|
@ -96,7 +105,7 @@ Performance characteristics of the different DB engines are as follows:
|
|||
|
||||
- Sled: the default database engine, which tends to produce
|
||||
large data files and also has performance issues, especially when the metadata folder
|
||||
is on a traditionnal HDD and not on SSD.
|
||||
is on a traditional HDD and not on SSD.
|
||||
- LMDB: the recommended alternative on 64-bit systems,
|
||||
much more space-efficient and slightly faster. Note that the data format of LMDB is not portable
|
||||
between architectures, so for instance the Garage database of an x86-64
|
||||
|
@ -125,8 +134,8 @@ and not just the path to the metadata directory.
|
|||
### `block_size`
|
||||
|
||||
Garage splits stored objects in consecutive chunks of size `block_size`
|
||||
(except the last one which might be smaller). The default size is 1MB and
|
||||
should work in most cases. We recommend increasing it to e.g. 10MB if
|
||||
(except the last one which might be smaller). The default size is 1MiB and
|
||||
should work in most cases. We recommend increasing it to e.g. 10MiB if
|
||||
you are using Garage to store large files and have fast network connections
|
||||
between all nodes (e.g. 1gbps).
|
||||
|
||||
|
@ -152,6 +161,14 @@ Increase this if sled is thrashing your SSD, at the risk of losing more data in
|
|||
of a power outage (though this should not matter much as data is replicated on other
|
||||
nodes). The default value, 2000ms, should be appropriate for most use cases.
|
||||
|
||||
### `lmdb_map_size`
|
||||
|
||||
This parameter can be used to set the map size used by LMDB,
|
||||
which is the size of the virtual memory region used for mapping the database file.
|
||||
The value of this parameter is the maximum size the metadata database can take.
|
||||
This value is not bound by the physical RAM size of the machine running Garage.
|
||||
If not specified, it defaults to 1GiB on 32-bit machines and 1TiB on 64-bit machines.
|
||||
|
||||
### `replication_mode`
|
||||
|
||||
Garage supports the following replication modes:
|
||||
|
@ -259,13 +276,17 @@ Compression is done synchronously, setting a value too high will add latency to
|
|||
This value can be different between nodes; compression is done by the node which receives the
API call.
|
||||
|
||||
### `rpc_secret`
|
||||
### `rpc_secret`, `rpc_secret_file` or `GARAGE_RPC_SECRET` (env)
|
||||
|
||||
Garage uses a secret key that is shared between all nodes of the cluster
|
||||
in order to identify these nodes and allow them to communicate together.
|
||||
This key should be specified here in the form of a 32-byte hex-encoded
|
||||
random string. Such a string can be generated with a command
|
||||
such as `openssl rand -hex 32`.
|
||||
Garage uses a secret key, called an RPC secret, that is shared between all
|
||||
nodes of the cluster in order to identify these nodes and allow them to
|
||||
communicate together. The RPC secret is a 32-byte hex-encoded random string,
|
||||
which can be generated with a command such as `openssl rand -hex 32`.
|
||||
|
||||
The RPC secret should be specified in the `rpc_secret` configuration variable.
|
||||
Since Garage `v0.8.2`, the RPC secret can also be stored in a file whose path is
|
||||
given in the configuration variable `rpc_secret_file`, or specified as an
|
||||
environment variable `GARAGE_RPC_SECRET`.
|
||||
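For example, one possible way to provision the secret as a file referenced by `rpc_secret_file` (a sketch; paths are illustrative):

```bash
# Generate a 32-byte hex-encoded secret and restrict its permissions
openssl rand -hex 32 > /etc/garage/rpc_secret
chmod 600 /etc/garage/rpc_secret  # keep it readable by the Garage daemon only
```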
|
||||
### `rpc_bind_addr`
|
||||
|
||||
|
@ -310,6 +331,12 @@ reached by other nodes of the cluster, which should be set in `rpc_public_addr`.
|
|||
|
||||
The `consul_http_addr` parameter should be set to the full HTTP(S) address of the Consul server.
|
||||
|
||||
### `api`
|
||||
|
||||
Two APIs for service registration are supported: `catalog` and `agent`. `catalog`, the default, will register a service using
|
||||
the `/v1/catalog` endpoints, enabling mTLS if `client_cert` and `client_key` are provided. The `agent` API uses the
|
||||
`/v1/agent` endpoints instead, where an optional `token` may be provided.
|
||||
|
||||
### `service_name`
|
||||
|
||||
`service_name` should be set to the service name under which Garage's
|
||||
|
@ -318,6 +345,7 @@ RPC ports are announced.
|
|||
### `client_cert`, `client_key`
|
||||
|
||||
TLS client certificate and client key to use when communicating with Consul over TLS. Both are mandatory when doing so.
|
||||
Only available when `api = "catalog"`.
|
||||
|
||||
### `ca_cert`
|
||||
|
||||
|
@ -328,6 +356,29 @@ TLS CA certificate to use when communicating with Consul over TLS.
|
|||
Skip server hostname verification in TLS handshake.
|
||||
`ca_cert` is ignored when this is set.
|
||||
|
||||
### `token`
|
||||
|
||||
Uses the provided token for communication with Consul. Only available when `api = "agent"`.
|
||||
The policy assigned to this token should at least have these rules:
|
||||
|
||||
```hcl
|
||||
// the `service_name` specified above
|
||||
service "garage" {
|
||||
policy = "write"
|
||||
}
|
||||
|
||||
service_prefix "" {
|
||||
policy = "read"
|
||||
}
|
||||
|
||||
node_prefix "" {
|
||||
policy = "read"
|
||||
}
|
||||
```
|
||||
|
||||
### `tags` and `meta`
|
||||
|
||||
Additional list of tags and map of service meta to add during service registration.
|
||||
|
||||
## The `[kubernetes_discovery]` section
|
||||
|
||||
|
@ -367,7 +418,7 @@ message that redirects the client to the correct region.
|
|||
|
||||
### `root_domain` {#root_domain}
|
||||
|
||||
The optionnal suffix to access bucket using vhost-style in addition to path-style request.
|
||||
The optional suffix to access bucket using vhost-style in addition to path-style request.
|
||||
Note path-style requests are always enabled, whether or not vhost-style is configured.
|
||||
Configuring vhost-style S3 requires a wildcard DNS entry, and possibly a wildcard TLS certificate,
but might be required by software that does not support path-style requests.
|
||||
|
@ -390,7 +441,7 @@ This endpoint does not support TLS: a reverse proxy should be used to provide it.
|
|||
|
||||
### `root_domain`
|
||||
|
||||
The optionnal suffix appended to bucket names for the corresponding HTTP Host.
|
||||
The optional suffix appended to bucket names for the corresponding HTTP Host.
|
||||
|
||||
For instance, if `root_domain` is `web.garage.eu`, a bucket called `deuxfleurs.fr`
|
||||
will be accessible either with hostname `deuxfleurs.fr.web.garage.eu`
|
||||
|
@ -407,24 +458,30 @@ If specified, Garage will bind an HTTP server to this port and address, on
|
|||
which it will listen to requests for administration features.
|
||||
See [administration API reference](@/documentation/reference-manual/admin-api.md) to learn more about these features.
|
||||
|
||||
### `metrics_token` (since version 0.7.2)
|
||||
### `metrics_token`, `metrics_token_file` or `GARAGE_METRICS_TOKEN` (env)
|
||||
|
||||
The token for accessing the Metrics endpoint. If this token is not set in
|
||||
the config file, the Metrics endpoint can be accessed without access
|
||||
control.
|
||||
The token for accessing the Metrics endpoint. If this token is not set, the
|
||||
Metrics endpoint can be accessed without access control.
|
||||
|
||||
You can use any random string for this value. We recommend generating a random token with `openssl rand -hex 32`.
|
||||
|
||||
### `admin_token` (since version 0.7.2)
|
||||
`metrics_token` was introduced in Garage `v0.7.2`.
|
||||
`metrics_token_file` and the `GARAGE_METRICS_TOKEN` environment variable are supported since Garage `v0.8.2`.
|
||||
|
||||
|
||||
### `admin_token`, `admin_token_file` or `GARAGE_ADMIN_TOKEN` (env)
|
||||
|
||||
The token for accessing all of the other administration endpoints. If this
|
||||
token is not set in the config file, access to these endpoints is disabled
|
||||
entirely.
|
||||
token is not set, access to these endpoints is disabled entirely.
|
||||
|
||||
You can use any random string for this value. We recommend generating a random token with `openssl rand -hex 32`.
|
||||
|
||||
`admin_token` was introduced in Garage `v0.7.2`.
|
||||
`admin_token_file` and the `GARAGE_ADMIN_TOKEN` environment variable are supported since Garage `v0.8.2`.
|
||||
|
||||
|
||||
### `trace_sink`
|
||||
|
||||
Optionnally, the address of an Opentelemetry collector. If specified,
|
||||
Garage will send traces in the Opentelemetry format to this endpoint. These
|
||||
Optionally, the address of an OpenTelemetry collector. If specified,
|
||||
Garage will send traces in the OpenTelemetry format to this endpoint. These
|
||||
traces allow you to inspect Garage's operation when it handles S3 API requests.
|
||||
|
|
|
@ -35,7 +35,7 @@ This makes setting up and administering storage clusters, we hope, as easy as it
|
|||
|
||||
A Garage cluster can very easily evolve over time, as storage nodes are added or removed.
|
||||
Garage will automatically rebalance data between nodes as needed to ensure the desired number of copies.
|
||||
Read about cluster layout management [here](@/documentation/reference-manual/layout.md).
|
||||
Read about cluster layout management [here](@/documentation/operations/layout.md).
|
||||
|
||||
### No RAFT slowing you down
|
||||
|
||||
|
@ -83,7 +83,7 @@ This feature is totally invisible to S3 clients and does not break compatibility
|
|||
### Cluster administration API
|
||||
|
||||
Garage provides a fully-fledged REST API to administer your cluster programmatically.
|
||||
Functionnality included in the admin API include: setting up and monitoring
|
||||
Functionality included in the admin API includes: setting up and monitoring
|
||||
cluster nodes, managing access credentials, and managing storage buckets and bucket aliases.
|
||||
A full reference of the administration API is available [here](@/documentation/reference-manual/admin-api.md).
|
||||
|
||||
|
|
|
@ -1,9 +1,9 @@
|
|||
+++
|
||||
title = "K2V"
|
||||
weight = 70
|
||||
weight = 100
|
||||
+++
|
||||
|
||||
Starting with version 0.7.2, Garage introduces an optionnal feature, K2V,
|
||||
Starting with version 0.7.2, Garage introduces an optional feature, K2V,
|
||||
which is an alternative storage API designed to help efficiently store
|
||||
many small values in buckets (in opposition to S3 which is more designed
|
||||
to store large blobs).
|
||||
|
@ -16,7 +16,7 @@ the `k2v` feature flag enabled can be obtained from our download page under
|
|||
with `-k2v` (example: `v0.7.2-k2v`).
|
||||
|
||||
The specification of the K2V API can be found
|
||||
[here](https://git.deuxfleurs.fr/Deuxfleurs/garage/src/branch/k2v/doc/drafts/k2v-spec.md).
|
||||
[here](https://git.deuxfleurs.fr/Deuxfleurs/garage/src/branch/main/doc/drafts/k2v-spec.md).
|
||||
This document also includes a high-level overview of K2V's design.
|
||||
|
||||
The K2V API uses AWSv4 signatures for authentification, same as the S3 API.
|
||||
|
|
285
doc/book/reference-manual/monitoring.md
Normal file
|
@ -0,0 +1,285 @@
|
|||
|
||||
+++
|
||||
title = "Monitoring"
|
||||
weight = 60
|
||||
+++
|
||||
|
||||
|
||||
For information on setting up monitoring, see our [dedicated page](@/documentation/cookbook/monitoring.md) in the Cookbook section.
|
||||
|
||||
## List of exported metrics
|
||||
|
||||
### Garage system metrics
|
||||
|
||||
#### `garage_build_info` (counter)
|
||||
|
||||
Exposes the Garage version number running on a node.
|
||||
|
||||
```
|
||||
garage_build_info{version="1.0"} 1
|
||||
```
|
||||
|
||||
#### `garage_replication_factor` (counter)
|
||||
|
||||
Exposes the Garage replication factor configured on the node
|
||||
|
||||
```
|
||||
garage_replication_factor 3
|
||||
```
|
||||
|
||||
### Metrics of the API endpoints
|
||||
|
||||
#### `api_admin_request_counter` (counter)
|
||||
|
||||
Counts the number of requests to a given endpoint of the administration API. Example:
|
||||
|
||||
```
|
||||
api_admin_request_counter{api_endpoint="Metrics"} 127041
|
||||
```
|
||||
|
||||
#### `api_admin_request_duration` (histogram)
|
||||
|
||||
Evaluates the duration of API calls to the various administration API endpoints. Example:
|
||||
|
||||
```
|
||||
api_admin_request_duration_bucket{api_endpoint="Metrics",le="0.5"} 127041
|
||||
api_admin_request_duration_sum{api_endpoint="Metrics"} 605.250344830999
|
||||
api_admin_request_duration_count{api_endpoint="Metrics"} 127041
|
||||
```
|
||||
|
||||
#### `api_s3_request_counter` (counter)
|
||||
|
||||
Counts the number of requests to a given endpoint of the S3 API. Example:
|
||||
|
||||
```
|
||||
api_s3_request_counter{api_endpoint="CreateMultipartUpload"} 1
|
||||
```
|
||||
|
||||
#### `api_s3_error_counter` (counter)
|
||||
|
||||
Counts the number of requests to a given endpoint of the S3 API that returned an error. Example:
|
||||
|
||||
```
|
||||
api_s3_error_counter{api_endpoint="GetObject",status_code="404"} 39
|
||||
```
|
||||
|
||||
#### `api_s3_request_duration` (histogram)
|
||||
|
||||
Evaluates the duration of API calls to the various S3 API endpoints. Example:
|
||||
|
||||
```
|
||||
api_s3_request_duration_bucket{api_endpoint="CreateMultipartUpload",le="0.5"} 1
|
||||
api_s3_request_duration_sum{api_endpoint="CreateMultipartUpload"} 0.046340762
|
||||
api_s3_request_duration_count{api_endpoint="CreateMultipartUpload"} 1
|
||||
```
|
||||
|
||||
#### `api_k2v_request_counter` (counter), `api_k2v_error_counter` (counter), `api_k2v_error_duration` (histogram)
|
||||
|
||||
Same as for S3, for the K2V API.
|
||||
|
||||
|
||||
### Metrics of the Web endpoint
|
||||
|
||||
|
||||
#### `web_request_counter` (counter)
|
||||
|
||||
Number of requests to the web endpoint
|
||||
|
||||
```
|
||||
web_request_counter{method="GET"} 80
|
||||
```
|
||||
|
||||
#### `web_request_duration` (histogram)
|
||||
|
||||
Duration of requests to the web endpoint
|
||||
|
||||
```
|
||||
web_request_duration_bucket{method="GET",le="0.5"} 80
|
||||
web_request_duration_sum{method="GET"} 1.0528433229999998
|
||||
web_request_duration_count{method="GET"} 80
|
||||
```
|
||||
|
||||
#### `web_error_counter` (counter)
|
||||
|
||||
Number of requests to the web endpoint resulting in errors
|
||||
|
||||
```
|
||||
web_error_counter{method="GET",status_code="404 Not Found"} 64
|
||||
```
|
||||
|
||||
|
||||
### Metrics of the data block manager
|
||||
|
||||
#### `block_bytes_read`, `block_bytes_written` (counter)
|
||||
|
||||
Number of bytes read/written to/from disk in the data storage directory.
|
||||
|
||||
```
|
||||
block_bytes_read 120586322022
|
||||
block_bytes_written 3386618077
|
||||
```
|
||||
|
||||
#### `block_compression_level` (counter)
|
||||
|
||||
Exposes the block compression level configured for the Garage node.
|
||||
|
||||
```
|
||||
block_compression_level 3
|
||||
```
|
||||
|
||||
#### `block_read_duration`, `block_write_duration` (histograms)
|
||||
|
||||
Evaluates the duration of the reading/writing of individual data blocks in the data storage directory.
|
||||
|
||||
```
|
||||
block_read_duration_bucket{le="0.5"} 169229
|
||||
block_read_duration_sum 2761.6902550310056
|
||||
block_read_duration_count 169240
|
||||
block_write_duration_bucket{le="0.5"} 3559
|
||||
block_write_duration_sum 195.59170078500006
|
||||
block_write_duration_count 3571
|
||||
```
|
||||
|
||||
#### `block_delete_counter` (counter)
|
||||
|
||||
Counts the number of data blocks that have been deleted from storage.
|
||||
|
||||
```
|
||||
block_delete_counter 122
|
||||
```
|
||||
|
||||
#### `block_resync_counter` (counter), `block_resync_duration` (histogram)
|
||||
|
||||
Counts the number of resync operations the node has executed, and evaluates their duration.
|
||||
|
||||
```
|
||||
block_resync_counter 308897
|
||||
block_resync_duration_bucket{le="0.5"} 308892
|
||||
block_resync_duration_sum 139.64204196100016
|
||||
block_resync_duration_count 308897
|
||||
```
|
||||
|
||||
#### `block_resync_queue_length` (gauge)
|
||||
|
||||
The number of block hashes currently queued for a resync.
|
||||
It is normal for this to be nonzero for long periods of time.
|
||||
|
||||
```
|
||||
block_resync_queue_length 0
|
||||
```
|
||||
|
||||
#### `block_resync_errored_blocks` (gauge)
|
||||
|
||||
The number of block hashes that we were unable to resync last time we tried.
|
||||
**THIS SHOULD BE ZERO, OR FALL BACK TO ZERO RAPIDLY, IN A HEALTHY CLUSTER.**
|
||||
Persistent nonzero values indicate that some data is likely to be lost.
|
||||
|
||||
```
|
||||
block_resync_errored_blocks 0
|
||||
```
|
||||
|
||||
|
||||
### Metrics related to RPCs (remote procedure calls) between nodes
|
||||
|
||||
#### `rpc_netapp_request_counter` (counter)
|
||||
|
||||
Number of RPC requests emitted
|
||||
|
||||
```
|
||||
rpc_request_counter{from="<this node>",rpc_endpoint="garage_block/manager.rs/Rpc",to="<remote node>"} 176
|
||||
```
|
||||
|
||||
#### `rpc_netapp_error_counter` (counter)
|
||||
|
||||
Number of communication errors (errors in the Netapp library, generally due to disconnected nodes)
|
||||
|
||||
```
|
||||
rpc_netapp_error_counter{from="<this node>",rpc_endpoint="garage_block/manager.rs/Rpc",to="<remote node>"} 354
|
||||
```
|
||||
|
||||
#### `rpc_timeout_counter` (counter)
|
||||
|
||||
Number of RPC timeouts, should be close to zero in a healthy cluster.
|
||||
|
||||
```
|
||||
rpc_timeout_counter{from="<this node>",rpc_endpoint="garage_rpc/membership.rs/SystemRpc",to="<remote node>"} 1
|
||||
```
|
||||
|
||||
#### `rpc_duration` (histogram)
|
||||
|
||||
The duration of internal RPC calls between Garage nodes.
|
||||
|
||||
```
|
||||
rpc_duration_bucket{from="<this node>",rpc_endpoint="garage_block/manager.rs/Rpc",to="<remote node>",le="0.5"} 166
|
||||
rpc_duration_sum{from="<this node>",rpc_endpoint="garage_block/manager.rs/Rpc",to="<remote node>"} 35.172253716
|
||||
rpc_duration_count{from="<this node>",rpc_endpoint="garage_block/manager.rs/Rpc",to="<remote node>"} 174
|
||||
```
|
||||
|
||||
|
||||
### Metrics of the metadata table manager
|
||||
|
||||
#### `table_gc_todo_queue_length` (gauge)
|
||||
|
||||
Table garbage collector TODO queue length
|
||||
|
||||
```
|
||||
table_gc_todo_queue_length{table_name="block_ref"} 0
|
||||
```
|
||||
|
||||
#### `table_get_request_counter` (counter), `table_get_request_duration` (histogram)
|
||||
|
||||
Number of get/get_range requests internally made on each table, and their duration.
|
||||
|
||||
```
|
||||
table_get_request_counter{table_name="bucket_alias"} 315
|
||||
table_get_request_duration_bucket{table_name="bucket_alias",le="0.5"} 315
|
||||
table_get_request_duration_sum{table_name="bucket_alias"} 0.048509778000000024
|
||||
table_get_request_duration_count{table_name="bucket_alias"} 315
|
||||
```
|
||||
|
||||
|
||||
#### `table_put_request_counter` (counter), `table_put_request_duration` (histogram)
|
||||
|
||||
Number of insert/insert_many requests internally made on this table, and their duration
|
||||
|
||||
```
|
||||
table_put_request_counter{table_name="block_ref"} 677
|
||||
table_put_request_duration_bucket{table_name="block_ref",le="0.5"} 677
|
||||
table_put_request_duration_sum{table_name="block_ref"} 61.617528636
|
||||
table_put_request_duration_count{table_name="block_ref"} 677
|
||||
```
|
||||
|
||||
#### `table_internal_delete_counter` (counter)
|
||||
|
||||
Number of value deletions in the tree (due to GC or repartitioning)
|
||||
|
||||
```
|
||||
table_internal_delete_counter{table_name="block_ref"} 2296
|
||||
```
|
||||
|
||||
#### `table_internal_update_counter` (counter)
|
||||
|
||||
Number of value updates where the value actually changes (includes creation of new key and update of existing key)
|
||||
|
||||
```
|
||||
table_internal_update_counter{table_name="block_ref"} 5996
|
||||
```
|
||||
|
||||
#### `table_merkle_updater_todo_queue_length` (gauge)
|
||||
|
||||
Merkle tree updater TODO queue length (should fall to zero rapidly)
|
||||
|
||||
```
|
||||
table_merkle_updater_todo_queue_length{table_name="block_ref"} 0
|
||||
```
|
||||
|
||||
#### `table_sync_items_received`, `table_sync_items_sent` (counters)
|
||||
|
||||
Number of data items sent to/received from other nodes during resync procedures
|
||||
|
||||
```
|
||||
table_sync_items_received{from="<remote node>",table_name="bucket_v2"} 3
|
||||
table_sync_items_sent{table_name="block_ref",to="<remote node>"} 2
|
||||
```
|
||||
|
||||
|
|
@ -1,6 +1,6 @@
|
|||
+++
|
||||
title = "S3 Compatibility status"
|
||||
weight = 40
|
||||
weight = 70
|
||||
+++
|
||||
|
||||
## DISCLAIMER
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
+++
|
||||
title = "Working Documents"
|
||||
weight = 7
|
||||
weight = 90
|
||||
sort_by = "weight"
|
||||
template = "documentation.html"
|
||||
+++
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
+++
|
||||
title = "Design draft (obsolete)"
|
||||
weight = 50
|
||||
weight = 900
|
||||
+++
|
||||
|
||||
**WARNING: this documentation is a design draft which was written before Garage's actual implementation.
|
||||
|
|
|
@ -1,6 +1,6 @@
|
|||
+++
|
||||
title = "Load balancing data (obsolete)"
|
||||
weight = 60
|
||||
weight = 910
|
||||
+++
|
||||
|
||||
**This is still being improved in release 0.5. The working document has not been updated yet; it still only applies to Garage 0.2 through 0.4.**
|
||||
|
|
|
@ -12,13 +12,15 @@ back up all your data before attempting it!**
|
|||
Garage v0.8 introduces new data tables that allow the counting of objects in buckets in order to implement bucket quotas.
|
||||
A manual migration step is required to first count objects in Garage buckets and populate these tables with accurate data.
|
||||
|
||||
## Simple migration procedure (takes cluster offline for a while)
|
||||
|
||||
The migration steps are as follows:
|
||||
|
||||
1. Disable API and web access. Garage v0.7 does not support disabling
|
||||
these endpoints but you can change the port number or stop your reverse proxy for instance.
|
||||
2. Do `garage repair --all-nodes --yes tables` and `garage repair --all-nodes --yes blocks`,
|
||||
check the logs and check that all data seems to be synced correctly between
|
||||
nodes. If you have time, do additional checks (`scrub`, `block_refs`, etc.)
|
||||
nodes. If you have time, do additional checks (`versions`, `block_refs`, etc.)
|
||||
3. Check that queues are empty: run `garage stats` to query them or inspect metrics in the Grafana dashboard.
|
||||
4. Turn off Garage v0.7
|
||||
5. **Backup the metadata folder of all your nodes!** For instance, use the following command
|
||||
|
@ -32,3 +34,24 @@ The migration steps are as follows:
|
|||
10. Your upgraded cluster should be in a working state. Re-enable API and Web
|
||||
access and check that everything went well.
|
||||
11. Monitor your cluster in the next hours to see if it works well under your production load, report any issue.
|
||||
|
||||
## Minimal downtime migration procedure
|
||||
|
||||
The migration to Garage v0.8 can be done with almost no downtime,
|
||||
by restarting all nodes at once in the new version. The only limitation with this
|
||||
method is that bucket sizes and item counts will not be estimated correctly
|
||||
until all nodes have had a chance to run their offline migration procedure.
|
||||
|
||||
The migration steps are as follows:
|
||||
|
||||
1. Do `garage repair --all-nodes --yes tables` and `garage repair --all-nodes --yes blocks`,
|
||||
check the logs and check that all data seems to be synced correctly between
|
||||
nodes. If you have time, do additional checks (`versions`, `block_refs`, etc.)
|
||||
|
||||
2. Turn off each node individually; back up its metadata folder (see above); turn it back on again. This will allow you to take a backup of all nodes without impacting global cluster availability. You can do all nodes of a single zone at once as this does not impact the availability of Garage.
|
||||
|
||||
3. Prepare your binaries and configuration files for Garage v0.8
|
||||
|
||||
4. Shut down all v0.7 nodes simultaneously, and restart them all simultaneously in v0.8. Use your favorite deployment tool (Ansible, Kubernetes, Nomad) to achieve this as fast as possible.
|
||||
|
||||
5. At this point, Garage will indicate invalid values for the size and number of objects in each bucket (most likely, it will indicate zero). To fix this, take each node offline individually to do the offline migration step: `garage offline-repair --yes object_counters`. Again you can do all nodes of a single zone at once.
|
||||
|
|
75
doc/book/working-documents/testing-strategy.md
Normal file
|
@ -0,0 +1,75 @@
|
|||
+++
|
||||
title = "Testing strategy"
|
||||
weight = 30
|
||||
+++
|
||||
|
||||
|
||||
## Testing Garage
|
||||
|
||||
Currently, we have the following tests:
|
||||
|
||||
- some unit tests spread around the codebase
|
||||
- integration tests written in Rust (`src/garage/test`) to check that Garage operations perform correctly
|
||||
- integration test for compatibility with external tools (`script/test-smoke.sh`)
|
||||
|
||||
We have also tried `minio/mint` but it fails a lot and for now we haven't gotten a lot from it.
|
||||
|
||||
In the future:
|
||||
|
||||
1. We'd like to have a systematic way of testing with `minio/mint`,
|
||||
it would add value to Garage by providing a compatibility score and reference that can be trusted.
|
||||
2. We'd also like to do testing with Jepsen in some way.
|
||||
|
||||
## How to instrument Garage
|
||||
|
||||
We should try to test in the least invasive ways, i.e. minimize the impact of the testing framework on Garage's source code. This means for example:
|
||||
|
||||
- Not abstracting IO/nondeterminism in the source code
|
||||
- Not making `garage` a shared library (launch using `execve`, it's perfectly fine)
|
||||
|
||||
Instead, we should focus on building a clean outer interface for the `garage` binary,
|
||||
for example loading configuration using environment variables instead of the configuration file if that's helpful for writing the tests.
|
||||
|
||||
There are two reasons for this:
|
||||
|
||||
- Keep the source code clean and focused
- Test something that is as close as possible to the true garage binary that will actually be running
|
||||
|
||||
Reminder: rules of simplicity concerning changes to Garage's source code.
Always question what we are doing.
Never do anything just because it looks nice or because we "think" it might be useful at some later point but without knowing precisely why/when.
|
||||
Only do things that make perfect sense in the context of what we currently know.
|
||||
|
||||
## References
|
||||
|
||||
Testing is a research field on its own.
|
||||
About testing distributed systems:
|
||||
|
||||
- [Jepsen](https://jepsen.io/) is a testing framework designed to test distributed systems. It can mock some part of the system like the time and the network.
|
||||
- [FoundationDB Testing Approach](https://www.micahlerner.com/2021/06/12/foundationdb-a-distributed-unbundled-transactional-key-value-store.html#what-is-unique-about-foundationdbs-testing-framework). They chose to abstract "all sources of nondeterminism and communication are abstracted, including network, disk, time, and pseudo random number generator" to be able to run tests by simulating faults.
|
||||
- [Testing Distributed Systems](https://asatarin.github.io/testing-distributed-systems/) - Curated list of resources on testing distributed systems
|
||||
|
||||
About S3 compatibility:
|
||||
- [ceph/s3-tests](https://github.com/ceph/s3-tests)
|
||||
- (deprecated) [minio/s3verify](https://blog.min.io/s3verify-a-simple-tool-to-verify-aws-s3-api-compatibility/)
|
||||
- [minio/mint](https://github.com/minio/mint)
|
||||
|
||||
About benchmarking S3 (I think it is not necessarily very relevant for this iteration):
|
||||
- [minio/warp](https://github.com/minio/warp)
|
||||
- [wasabi-tech/s3-benchmark](https://github.com/wasabi-tech/s3-benchmark)
|
||||
- [dvassallo/s3-benchmark](https://github.com/dvassallo/s3-benchmark)
|
||||
- [intel-cloud/cosbench](https://github.com/intel-cloud/cosbench) - used by Ceph
|
||||
|
||||
Engineering blog posts:
|
||||
- [Quincy @ Scale: A Tale of Three Large-Scale Clusters](https://ceph.io/en/news/blog/2022/three-large-scale-clusters/)
|
||||
|
||||
Interesting blog posts on the blog of the Sled database:
|
||||
|
||||
- <https://sled.rs/simulation.html>
|
||||
- <https://sled.rs/perf.html>
|
||||
|
||||
Misc:
|
||||
- [mutagen](https://github.com/llogiq/mutagen) - mutation testing is a way to assert our test quality by mutating the code and see if the mutation makes the tests fail
|
||||
- [fuzzing](https://rust-fuzz.github.io/book/) - cargo supports fuzzing, it could be a way to test our software reliability in presence of garbage data.
|
||||
|
||||
|
686
doc/drafts/admin-api.md
Normal file
|
@ -0,0 +1,686 @@
|
|||
+++
|
||||
title = "Administration API"
|
||||
weight = 60
|
||||
+++
|
||||
|
||||
The Garage administration API is accessible through a dedicated server whose
|
||||
listen address is specified in the `[admin]` section of the configuration
|
||||
file (see [configuration file
|
||||
reference](@/documentation/reference-manual/configuration.md))
|
||||
|
||||
**WARNING.** At this point, there is no commitment to stability of the APIs described in this document.
We will bump the version numbers prefixed to each API endpoint each time the syntax
or semantics change, meaning that code that relies on these endpoints will break
|
||||
when changes are introduced.
|
||||
|
||||
The Garage administration API was introduced in version 0.7.2, this document
|
||||
does not apply to older versions of Garage.
|
||||
|
||||
|
||||
## Access control
|
||||
|
||||
The admin API uses two different tokens for access control, which are specified in the config file's `[admin]` section:
|
||||
|
||||
- `metrics_token`: the token for accessing the Metrics endpoint (if this token
|
||||
is not set in the config file, the Metrics endpoint can be accessed without
|
||||
access control);
|
||||
|
||||
- `admin_token`: the token for accessing all of the other administration
|
||||
endpoints (if this token is not set in the config file, access to these
|
||||
endpoints is disabled entirely).
|
||||
|
||||
These tokens are used as simple HTTP bearer tokens. In other words, to
|
||||
authenticate access to an admin API endpoint, add the following HTTP header
|
||||
to your request:
|
||||
|
||||
```
|
||||
Authorization: Bearer <token>
|
||||
```
|
||||
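For example (a sketch, assuming the admin API listens on `localhost:3903` as elsewhere in this documentation):

```bash
curl -H "Authorization: Bearer <token>" http://localhost:3903/v0/status
```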
|
||||
## Administration API endpoints
|
||||
|
||||
### Metrics-related endpoints
|
||||
|
||||
#### Metrics `GET /metrics`
|
||||
|
||||
Returns internal Garage metrics in Prometheus format.
|
||||
|
||||
#### Health `GET /health`
|
||||
|
||||
Used for simple health checks in a cluster setting with an orchestrator.
|
||||
Returns an HTTP status 200 if the node is ready to answer users' requests,
|
||||
and an HTTP status 503 (Service Unavailable) if there are some partitions
|
||||
for which a quorum of nodes is not available.
|
||||
A simple textual message is also returned in a body with content-type `text/plain`.
|
||||
See `/v0/health` for an API that also returns JSON output.
|
||||
|
||||
### Cluster operations
|
||||
|
||||
#### GetClusterStatus `GET /v0/status`
|
||||
|
||||
Returns the cluster's current status in JSON, including:
|
||||
|
||||
- ID of the node being queried and its version of the Garage daemon
|
||||
- Live nodes
|
||||
- Currently configured cluster layout
|
||||
- Staged changes to the cluster layout
|
||||
|
||||
Example response body:
|
||||
|
||||
```json
|
||||
{
|
||||
"node": "ec79480e0ce52ae26fd00c9da684e4fa56658d9c64cdcecb094e936de0bfe71f",
|
||||
"garage_version": "git:v0.8.0",
|
||||
"knownNodes": {
|
||||
"ec79480e0ce52ae26fd00c9da684e4fa56658d9c64cdcecb094e936de0bfe71f": {
|
||||
"addr": "10.0.0.11:3901",
|
||||
"is_up": true,
|
||||
"last_seen_secs_ago": 9,
|
||||
"hostname": "node1"
|
||||
},
|
||||
"4a6ae5a1d0d33bf895f5bb4f0a418b7dc94c47c0dd2eb108d1158f3c8f60b0ff": {
|
||||
"addr": "10.0.0.12:3901",
|
||||
"is_up": true,
|
||||
"last_seen_secs_ago": 1,
|
||||
"hostname": "node2"
|
||||
},
|
||||
"23ffd0cdd375ebff573b20cc5cef38996b51c1a7d6dbcf2c6e619876e507cf27": {
|
||||
"addr": "10.0.0.21:3901",
|
||||
"is_up": true,
|
||||
"last_seen_secs_ago": 7,
|
||||
"hostname": "node3"
|
||||
},
|
||||
"e2ee7984ee65b260682086ec70026165903c86e601a4a5a501c1900afe28d84b": {
|
||||
"addr": "10.0.0.22:3901",
|
||||
"is_up": true,
|
||||
"last_seen_secs_ago": 1,
|
||||
"hostname": "node4"
|
||||
}
|
||||
},
|
||||
"layout": {
|
||||
"version": 12,
|
||||
"roles": {
|
||||
"ec79480e0ce52ae26fd00c9da684e4fa56658d9c64cdcecb094e936de0bfe71f": {
|
||||
"zone": "dc1",
|
||||
"capacity": 4,
|
||||
"tags": [
|
||||
"node1"
|
||||
]
|
||||
},
|
||||
"4a6ae5a1d0d33bf895f5bb4f0a418b7dc94c47c0dd2eb108d1158f3c8f60b0ff": {
|
||||
"zone": "dc1",
|
||||
"capacity": 6,
|
||||
"tags": [
|
||||
"node2"
|
||||
]
|
||||
},
|
||||
"23ffd0cdd375ebff573b20cc5cef38996b51c1a7d6dbcf2c6e619876e507cf27": {
|
||||
"zone": "dc2",
|
||||
"capacity": 10,
|
||||
"tags": [
|
||||
"node3"
|
||||
]
|
||||
}
|
||||
},
|
||||
"stagedRoleChanges": {
|
||||
"e2ee7984ee65b260682086ec70026165903c86e601a4a5a501c1900afe28d84b": {
|
||||
"zone": "dc2",
|
||||
"capacity": 5,
|
||||
"tags": [
|
||||
"node4"
|
||||
]
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
#### GetClusterHealth `GET /v0/health`
|
||||
|
||||
Returns the cluster's current health in JSON format, with the following variables:
|
||||
|
||||
- `status`: one of `Healthy`, `Degraded` or `Unavailable`:
|
||||
- Healthy: Garage node is connected to all storage nodes
|
||||
- Degraded: Garage node is not connected to all storage nodes, but a quorum of write nodes is available for all partitions
|
||||
- Unavailable: a quorum of write nodes is not available for some partitions
|
||||
- `known_nodes`: the number of nodes this Garage node has had a TCP connection to since the daemon started
|
||||
- `connected_nodes`: the nubmer of nodes this Garage node currently has an open connection to
|
||||
- `storage_nodes`: the number of storage nodes currently registered in the cluster layout
|
||||
- `storage_nodes_ok`: the number of storage nodes to which a connection is currently open
|
||||
- `partitions`: the total number of partitions of the data (currently always 256)
|
||||
- `partitions_quorum`: the number of partitions for which a quorum of write nodes is available
|
||||
- `partitions_all_ok`: the number of partitions for which we are connected to all storage nodes responsible of storing it
|
||||
|
||||
Contrarily to `GET /health`, this endpoint always returns a 200 OK HTTP response code.
|
||||
|
||||
Example response body:
|
||||
|
||||
```json
|
||||
{
|
||||
"status": "Degraded",
|
||||
"known_nodes": 3,
|
||||
"connected_nodes": 2,
|
||||
"storage_nodes": 3,
|
||||
"storage_nodes_ok": 2,
|
||||
"partitions": 256,
|
||||
"partitions_quorum": 256,
|
||||
"partitions_all_ok": 0
|
||||
}
|
||||
```
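
As a hedged sketch of a monitoring probe built on this endpoint (same assumed
address and token as in the example above), one could alert whenever the
cluster leaves the `Healthy` state:

```python
import requests

ADMIN_API = "http://localhost:3903"           # assumed admin API address
HEADERS = {"Authorization": "Bearer s3cr3t"}  # assumed admin token

health = requests.get(f"{ADMIN_API}/v0/health", headers=HEADERS).json()

# GetClusterHealth always answers 200 OK, so the status must be read
# from the JSON body, not from the HTTP response code.
if health["status"] != "Healthy":
    missing = health["storage_nodes"] - health["storage_nodes_ok"]
    print(f"cluster is {health['status']}: {missing} storage node(s) unreachable, "
          f"{health['partitions_quorum']}/{health['partitions']} partitions have quorum")
```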

#### ConnectClusterNodes `POST /v0/connect`

Instructs this Garage node to connect to other Garage nodes at specified addresses.

Example request body:

```json
[
  "ec79480e0ce52ae26fd00c9da684e4fa56658d9c64cdcecb094e936de0bfe71f@10.0.0.11:3901",
  "4a6ae5a1d0d33bf895f5bb4f0a418b7dc94c47c0dd2eb108d1158f3c8f60b0ff@10.0.0.12:3901"
]
```

The format of the string for a node to connect to is: `<node ID>@<ip address>:<port>`, same as in the `garage node connect` CLI call.

Example response:

```json
[
  {
    "success": true,
    "error": null
  },
  {
    "success": false,
    "error": "Handshake error"
  }
]
```
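
A small sketch of how one could drive this endpoint (same assumed address and
token as above; pairing each result with the peer at the same index in the
request list is also an assumption here):

```python
import requests

ADMIN_API = "http://localhost:3903"           # assumed admin API address
HEADERS = {"Authorization": "Bearer s3cr3t"}  # assumed admin token

peers = [
    "ec79480e0ce52ae26fd00c9da684e4fa56658d9c64cdcecb094e936de0bfe71f@10.0.0.11:3901",
    "4a6ae5a1d0d33bf895f5bb4f0a418b7dc94c47c0dd2eb108d1158f3c8f60b0ff@10.0.0.12:3901",
]

results = requests.post(f"{ADMIN_API}/v0/connect", headers=HEADERS, json=peers).json()

# One result per requested peer (assumed to be in request order).
for peer, result in zip(peers, results):
    if not result["success"]:
        print(f"could not connect to {peer}: {result['error']}")
```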

#### GetClusterLayout `GET /v0/layout`

Returns the cluster's current layout in JSON, including:

- Currently configured cluster layout
- Staged changes to the cluster layout

(the info returned by this endpoint is a subset of the info returned by GetClusterStatus)

Example response body:

```json
{
  "version": 12,
  "roles": {
    "ec79480e0ce52ae26fd00c9da684e4fa56658d9c64cdcecb094e936de0bfe71f": {
      "zone": "dc1",
      "capacity": 4,
      "tags": [
        "node1"
      ]
    },
    "4a6ae5a1d0d33bf895f5bb4f0a418b7dc94c47c0dd2eb108d1158f3c8f60b0ff": {
      "zone": "dc1",
      "capacity": 6,
      "tags": [
        "node2"
      ]
    },
    "23ffd0cdd375ebff573b20cc5cef38996b51c1a7d6dbcf2c6e619876e507cf27": {
      "zone": "dc2",
      "capacity": 10,
      "tags": [
        "node3"
      ]
    }
  },
  "stagedRoleChanges": {
    "e2ee7984ee65b260682086ec70026165903c86e601a4a5a501c1900afe28d84b": {
      "zone": "dc2",
      "capacity": 5,
      "tags": [
        "node4"
      ]
    }
  }
}
```

#### UpdateClusterLayout `POST /v0/layout`

Send modifications to the cluster layout. These modifications will
be included in the staged role changes, visible in subsequent calls
of `GetClusterLayout`. Once the set of staged changes is satisfactory,
the user may call `ApplyClusterLayout` to apply the staged changes,
or `RevertClusterLayout` to clear all of the staged changes in
the layout.

Request body format:

```json
{
  <node_id>: {
    "capacity": <new_capacity>,
    "zone": <new_zone>,
    "tags": [
      <new_tag>,
      ...
    ]
  },
  <node_id_to_remove>: null,
  ...
}
```

Contrary to the CLI, which may update only a subset of the fields
`capacity`, `zone` and `tags`, all of these values must be specified
when calling this API.

#### ApplyClusterLayout `POST /v0/layout/apply`

Applies the layout changes currently registered as staged changes
to the cluster.

Request body format:

```json
{
  "version": 13
}
```

Similarly to the CLI, the body must include the version of the new layout
that will be created, which MUST be 1 + the version of the layout
currently existing in the cluster.
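
To illustrate the full stage-then-apply cycle (a sketch under the same assumed
address and token as the examples above; the node ID is a placeholder taken
from the earlier examples), one could fetch the current layout to learn its
version, stage a change, then apply version + 1:

```python
import requests

ADMIN_API = "http://localhost:3903"           # assumed admin API address
HEADERS = {"Authorization": "Bearer s3cr3t"}  # assumed admin token

# 1. Read the current layout to learn its version number.
layout = requests.get(f"{ADMIN_API}/v0/layout", headers=HEADERS).json()

# 2. Stage a role change for a node (placeholder node ID).
requests.post(
    f"{ADMIN_API}/v0/layout",
    headers=HEADERS,
    json={
        "e2ee7984ee65b260682086ec70026165903c86e601a4a5a501c1900afe28d84b": {
            "zone": "dc2",
            "capacity": 5,
            "tags": ["node4"],
        }
    },
).raise_for_status()

# 3. Apply the staged changes: the new version MUST be current + 1.
requests.post(
    f"{ADMIN_API}/v0/layout/apply",
    headers=HEADERS,
    json={"version": layout["version"] + 1},
).raise_for_status()
```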

#### RevertClusterLayout `POST /v0/layout/revert`

Clears all of the staged layout changes.

Request body format:

```json
{
  "version": 13
}
```

Reverting the staged changes is done by incrementing the version number
and clearing the contents of the staged change list.
Similarly to the CLI, the body must include the incremented
version number, which MUST be 1 + the version of the layout
currently existing in the cluster.

### Access key operations

#### ListKeys `GET /v0/key`

Returns all API access keys in the cluster.

Example response:

```json
[
  {
    "id": "GK31c2f218a2e44f485b94239e",
    "name": "test"
  },
  {
    "id": "GKe10061ac9c2921f09e4c5540",
    "name": "test2"
  }
]
```

#### CreateKey `POST /v0/key`

Creates a new API access key.

Request body format:

```json
{
  "name": "NameOfMyKey"
}
```

#### ImportKey `POST /v0/key/import`

Imports an existing API key.

Request body format:

```json
{
  "accessKeyId": "GK31c2f218a2e44f485b94239e",
  "secretAccessKey": "b892c0665f0ada8a4755dae98baa3b133590e11dae3bcc1f9d769d67f16c3835",
  "name": "NameOfMyKey"
}
```

#### GetKeyInfo `GET /v0/key?id=<access key id>`
#### GetKeyInfo `GET /v0/key?search=<pattern>`

Returns information about the requested API access key.

If `id` is set, the key is looked up using its exact identifier (faster).
If `search` is set, the key is looked up using its name or a prefix
of its identifier (slower, as all keys must be enumerated to do this).

Example response:

```json
{
  "name": "test",
  "accessKeyId": "GK31c2f218a2e44f485b94239e",
  "secretAccessKey": "b892c0665f0ada8a4755dae98baa3b133590e11dae3bcc1f9d769d67f16c3835",
  "permissions": {
    "createBucket": false
  },
  "buckets": [
    {
      "id": "70dc3bed7fe83a75e46b66e7ddef7d56e65f3c02f9f80b6749fb97eccb5e1033",
      "globalAliases": [
        "test2"
      ],
      "localAliases": [],
      "permissions": {
        "read": true,
        "write": true,
        "owner": false
      }
    },
    {
      "id": "d7452a935e663fc1914f3a5515163a6d3724010ce8dfd9e4743ca8be5974f995",
      "globalAliases": [
        "test3"
      ],
      "localAliases": [],
      "permissions": {
        "read": true,
        "write": true,
        "owner": false
      }
    },
    {
      "id": "e6a14cd6a27f48684579ec6b381c078ab11697e6bc8513b72b2f5307e25fff9b",
      "globalAliases": [],
      "localAliases": [
        "test"
      ],
      "permissions": {
        "read": true,
        "write": true,
        "owner": true
      }
    },
    {
      "id": "96470e0df00ec28807138daf01915cfda2bee8eccc91dea9558c0b4855b5bf95",
      "globalAliases": [
        "alex"
      ],
      "localAliases": [],
      "permissions": {
        "read": true,
        "write": true,
        "owner": true
      }
    }
  ]
}
```

#### DeleteKey `DELETE /v0/key?id=<access key id>`

Deletes an API access key.

#### UpdateKey `POST /v0/key?id=<access key id>`

Updates information about the specified API access key.

Request body format:

```json
{
  "name": "NameOfMyKey",
  "allow": {
    "createBucket": true
  },
  "deny": {}
}
```

All fields (`name`, `allow` and `deny`) are optional.
If they are present, the corresponding modifications are applied to the key, otherwise nothing is changed.
The possible flags in `allow` and `deny` are: `createBucket`.

### Bucket operations

#### ListBuckets `GET /v0/bucket`

Returns all storage buckets in the cluster.

Example response:

```json
[
  {
    "id": "70dc3bed7fe83a75e46b66e7ddef7d56e65f3c02f9f80b6749fb97eccb5e1033",
    "globalAliases": [
      "test2"
    ],
    "localAliases": []
  },
  {
    "id": "96470e0df00ec28807138daf01915cfda2bee8eccc91dea9558c0b4855b5bf95",
    "globalAliases": [
      "alex"
    ],
    "localAliases": []
  },
  {
    "id": "d7452a935e663fc1914f3a5515163a6d3724010ce8dfd9e4743ca8be5974f995",
    "globalAliases": [
      "test3"
    ],
    "localAliases": []
  },
  {
    "id": "e6a14cd6a27f48684579ec6b381c078ab11697e6bc8513b72b2f5307e25fff9b",
    "globalAliases": [],
    "localAliases": [
      {
        "accessKeyId": "GK31c2f218a2e44f485b94239e",
        "alias": "test"
      }
    ]
  }
]
```

#### GetBucketInfo `GET /v0/bucket?id=<bucket id>`
#### GetBucketInfo `GET /v0/bucket?globalAlias=<alias>`

Returns information about the requested storage bucket.

If `id` is set, the bucket is looked up using its exact identifier.
If `globalAlias` is set, the bucket is looked up using its global alias.
(both lookups are fast)

Example response:

```json
{
  "id": "afa8f0a22b40b1247ccd0affb869b0af5cff980924a20e4b5e0720a44deb8d39",
  "globalAliases": [],
  "websiteAccess": false,
  "websiteConfig": null,
  "keys": [
    {
      "accessKeyId": "GK31c2f218a2e44f485b94239e",
      "name": "Imported key",
      "permissions": {
        "read": true,
        "write": true,
        "owner": true
      },
      "bucketLocalAliases": [
        "debug"
      ]
    }
  ],
  "objects": 14827,
  "bytes": 13189855625,
  "unfinshedUploads": 0,
  "quotas": {
    "maxSize": null,
    "maxObjects": null
  }
}
```

#### CreateBucket `POST /v0/bucket`

Creates a new storage bucket.

Request body format:

```json
{
  "globalAlias": "NameOfMyBucket"
}
```

OR

```json
{
  "localAlias": {
    "accessKeyId": "GK31c2f218a2e44f485b94239e",
    "alias": "NameOfMyBucket",
    "allow": {
      "read": true,
      "write": true,
      "owner": false
    }
  }
}
```

OR

```json
{}
```

Creates a new bucket, either with a global alias, a local one,
or no alias at all.

Technically, you can also specify both `globalAlias` and `localAlias`,
which creates both aliases at once, but there is usually no reason to do so.

#### DeleteBucket `DELETE /v0/bucket?id=<bucket id>`

Deletes a storage bucket. A bucket cannot be deleted if it is not empty.

Warning: this will delete all aliases associated with the bucket!

#### UpdateBucket `PUT /v0/bucket?id=<bucket id>`

Updates configuration of the given bucket.

Request body format:

```json
{
  "websiteAccess": {
    "enabled": true,
    "indexDocument": "index.html",
    "errorDocument": "404.html"
  },
  "quotas": {
    "maxSize": 19029801,
    "maxObjects": null
  }
}
```

All fields (`websiteAccess` and `quotas`) are optional.
If they are present, the corresponding modifications are applied to the bucket, otherwise nothing is changed.

In `websiteAccess`: if `enabled` is `true`, `indexDocument` must be specified.
The field `errorDocument` is optional; if no error document is set, a generic
error message is displayed when an error happens. Conversely, if `enabled` is
`false`, neither `indexDocument` nor `errorDocument` may be specified.

In `quotas`: new values of `maxSize` and `maxObjects` must both be specified, or set to `null`
to remove the quotas. An absent value will be considered the same as a `null`. It is not possible
to change only one of the two quotas.
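
As a small sketch of this both-or-null rule (same assumed address and token as
above; the helper name and bucket ID are illustrative), a quota updater should
always send both fields and pass `None` to clear one:

```python
import requests

ADMIN_API = "http://localhost:3903"           # assumed admin API address
HEADERS = {"Authorization": "Bearer s3cr3t"}  # assumed admin token

def set_bucket_quotas(bucket_id, max_size=None, max_objects=None):
    """Update both quota values at once; None removes that quota."""
    resp = requests.put(
        f"{ADMIN_API}/v0/bucket",
        params={"id": bucket_id},
        headers=HEADERS,
        # Both fields are always sent, since an absent field counts as null.
        json={"quotas": {"maxSize": max_size, "maxObjects": max_objects}},
    )
    resp.raise_for_status()

# e.g. cap the bucket at 30 GB and drop any object-count quota:
set_bucket_quotas(
    "e6a14cd6a27f48684579ec6b381c078ab11697e6bc8513b72b2f5307e25fff9b",
    max_size=30_000_000_000,
)
```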

### Operations on permissions for keys on buckets

#### BucketAllowKey `POST /v0/bucket/allow`

Allows a key to do read/write/owner operations on a bucket.

Request body format:

```json
{
  "bucketId": "e6a14cd6a27f48684579ec6b381c078ab11697e6bc8513b72b2f5307e25fff9b",
  "accessKeyId": "GK31c2f218a2e44f485b94239e",
  "permissions": {
    "read": true,
    "write": true,
    "owner": true
  }
}
```

Flags in `permissions` which have the value `true` will be activated.
Other flags will remain unchanged.

#### BucketDenyKey `POST /v0/bucket/deny`

Denies a key from doing read/write/owner operations on a bucket.

Request body format:

```json
{
  "bucketId": "e6a14cd6a27f48684579ec6b381c078ab11697e6bc8513b72b2f5307e25fff9b",
  "accessKeyId": "GK31c2f218a2e44f485b94239e",
  "permissions": {
    "read": false,
    "write": false,
    "owner": true
  }
}
```

Flags in `permissions` which have the value `true` will be deactivated.
Other flags will remain unchanged.

### Operations on bucket aliases

#### GlobalAliasBucket `PUT /v0/bucket/alias/global?id=<bucket id>&alias=<global alias>`

Empty body. Creates a global alias for a bucket.

#### GlobalUnaliasBucket `DELETE /v0/bucket/alias/global?id=<bucket id>&alias=<global alias>`

Removes a global alias for a bucket.

#### LocalAliasBucket `PUT /v0/bucket/alias/local?id=<bucket id>&accessKeyId=<access key ID>&alias=<local alias>`

Empty body. Creates a local alias for a bucket in the namespace of a specific access key.

#### LocalUnaliasBucket `DELETE /v0/bucket/alias/local?id=<bucket id>&accessKeyId=<access key ID>&alias=<local alias>`

Removes a local alias for a bucket in the namespace of a specific access key.

@ -706,6 +706,73 @@ HTTP/1.1 200 OK
]
```

**PollRange: `POST /<bucket>/<partition key>?poll_range`**, or alternatively<br/>
**PollRange: `SEARCH /<bucket>/<partition key>?poll_range`**

Polls a range of items for changes.

The query body is a JSON object consisting of the following fields:

| name         | default value | meaning                                                                                     |
|--------------|---------------|---------------------------------------------------------------------------------------------|
| `prefix`     | `null`        | Restrict items to poll to those whose sort keys start with this prefix                      |
| `start`      | `null`        | The sort key of the first item to poll                                                      |
| `end`        | `null`        | The sort key of the last item to poll (excluded)                                            |
| `timeout`    | 300           | The timeout before 304 NOT MODIFIED is returned if no value in the range is updated         |
| `seenMarker` | `null`        | An opaque string returned by a previous PollRange call, that represents items already seen  |

The timeout can be set to any number of seconds, with a maximum of 600 seconds (10 minutes).

The response is either:

- An HTTP 304 NOT MODIFIED response with an empty body, if the timeout expired and no changes occurred

- An HTTP 200 response, indicating that some changes have occurred since the last PollRange call, in which case a JSON object is returned in the body with the following fields:

| name         | meaning                                                                                          |
|--------------|--------------------------------------------------------------------------------------------------|
| `seenMarker` | An opaque string that represents items already seen for future PollRange calls                   |
| `items`      | The list of items that have changed since the last PollRange call, in the same format as ReadBatch |

If no seen marker is known by the caller, it can do a PollRange call
without specifying `seenMarker`. In this case, the PollRange call will
complete immediately, and return the current content of the range (which
can be empty) and a seen marker to be used in further PollRange calls. This
is the only case in which PollRange might return an HTTP 200 with an empty
set of items.

A seen marker returned as a response to a PollRange query can be used for further PollRange
queries on the same range, or for PollRange queries in a subrange of the initial range.
It may not be used for PollRange queries on ranges larger or outside of the initial range.
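
A sketch of the intended polling loop (assumptions: a hypothetical plain-HTTP
`K2V_ENDPOINT`, request signing/authentication omitted for clarity, and the
query-string handling simplified):

```python
import requests

K2V_ENDPOINT = "http://localhost:3904"  # hypothetical K2V endpoint, auth omitted

def watch_range(bucket, partition_key, prefix=None):
    seen_marker = None  # first call returns immediately with the current content
    while True:
        body = {"prefix": prefix}
        if seen_marker is not None:
            body["seenMarker"] = seen_marker
        resp = requests.post(
            f"{K2V_ENDPOINT}/{bucket}/{partition_key}",
            params={"poll_range": ""},
            json=body,
            timeout=660,  # a bit above the server-side maximum of 600 s
        )
        if resp.status_code == 304:
            continue  # timeout expired with no change: poll again
        resp.raise_for_status()
        data = resp.json()
        seen_marker = data["seenMarker"]  # reuse on the same (or a sub-) range
        for item in data["items"]:
            print("changed:", item["sk"])
```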

Example query:

```json
SEARCH /my_bucket?poll_range HTTP/1.1

{
  "prefix": "0391.",
  "start": "0391.000001973107",
  "seenMarker": "opaquestring123"
}
```

Example response:

```json
HTTP/1.1 200 OK
Content-Type: application/json

{
  "seenMarker": "opaquestring456",
  "items": [
    { "sk": "0391.000001973221", "ct": "opaquetoken123", "v": ["b64cryptoblob123", "b64cryptoblob'123"] },
    { "sk": "0391.000001974191", "ct": "opaquetoken456", "v": ["b64cryptoblob456", "b64cryptoblob'456"] }
  ]
}
```

## Internals: causality tokens

BIN doc/logo/garage_hires_crop.png (new file, 41 KiB)

doc/optimal_layout_report/.gitignore (vendored, 13 lines deleted)

@ -1,13 +0,0 @@
optimal_layout.aux
optimal_layout.log
optimal_layout.synctex.gz
optimal_layout.bbl
optimal_layout.blg

geodistrib.aux
geodistrib.bbl
geodistrib.blg
geodistrib.log
geodistrib.out
geodistrib.synctex.gz

BIN: 5 image files deleted (161 KiB, 560 KiB, 287 KiB, 112 KiB, 270 KiB)

@ -1,317 +0,0 @@
\documentclass[]{article}

\usepackage{amsmath,amssymb}
\usepackage{amsthm}

\usepackage{stmaryrd}

\usepackage{graphicx,xcolor}
\usepackage{hyperref}

\usepackage{algorithm,algpseudocode,float}

\renewcommand\thesubsubsection{\Alph{subsubsection})}

\newtheorem{proposition}{Proposition}

%opening
\title{An algorithm for geo-distributed and redundant storage in Garage}
\author{Mendes Oulamara \\ \emph{mendes@deuxfleurs.fr}}
\date{}

\begin{document}

\maketitle

\begin{abstract}
Garage
\end{abstract}

\section{Introduction}

Garage\footnote{\url{https://garagehq.deuxfleurs.fr/}} is an open-source distributed object storage service tailored for self-hosting. It was designed by the Deuxfleurs association\footnote{\url{https://deuxfleurs.fr/}} to enable small structures (associations, collectives, small companies) to share storage resources to reliably self-host their data, possibly with old and non-reliable machines.

To achieve these reliability and availability goals, the data is broken into \emph{partitions} and every partition is replicated over 3 different machines (that we call \emph{nodes}). When the data is queried, a consensus algorithm allows it to be fetched from one of the nodes. A \emph{replication factor} of 3 ensures the best guarantees in the consensus algorithm \cite{ADD RREF}, but this parameter can be different.

Moreover, if the nodes are spread over different \emph{zones} (different houses, offices, cities\dots), we can require the data to be replicated over nodes belonging to different zones, to improve the storage robustness against zone failures (such as power outages). To do so, we set a \emph{redundancy parameter}, which is at most the replication factor, and we require that any partition be replicated over at least this number of zones.

In this work, we propose a repartition algorithm that, given the node specifications and the replication and redundancy parameters, computes an optimal assignation of partitions to nodes. We say that the assignation is optimal in the sense that it maximizes the size of the partitions, and hence the effective storage capacity of the system.

Moreover, when a former assignation exists that is no longer optimal due to node or zone updates, our algorithm computes a new optimal assignation that minimizes the amount of data to be transferred during the assignation update (the \emph{transfer load}).

We call the set of nodes cooperating to store the data a \emph{cluster}, and a description of the nodes, zones and the assignation of partitions to nodes a \emph{cluster layout}.

\subsection{Notations}

Let $k$ be some fixed parameter value, typically 8, that we call the ``partition bits''.
Every object to be stored in the system is split into data blocks of fixed size. We compute a hash $h(\mathbf{b})$ of every such block $\mathbf{b}$, and we define the $k$ last bits of this hash to be the partition number $p(\mathbf{b})$ of the block. This label can take $P=2^k$ different values, and hence there are $P$ different partitions. We denote by $\mathbf{P}$ the set of partition labels (i.e. $\mathbf{P}=\llbracket1,P\rrbracket$).

We are given a set $\mathbf{N}$ of $N$ nodes and a set $\mathbf{Z}$ of $Z$ zones. Every node $n$ has a non-negative storage capacity $c_n\ge 0$ and belongs to a zone $z_n\in \mathbf{Z}$. We are also given a replication parameter $\rho_\mathbf{N}$ and a redundancy parameter $\rho_\mathbf{Z}$ such that $1\le \rho_\mathbf{Z} \le \rho_\mathbf{N}$ (typical values would be $\rho_\mathbf{N}=3$ and $\rho_\mathbf{Z}=2$).

Our goal is to compute an assignment $\alpha = (\alpha_p^1, \ldots, \alpha_p^{\rho_\mathbf{N}})_{p\in \mathbf{P}}$ such that every partition $p$ is associated to $\rho_\mathbf{N}$ distinct nodes $\alpha_p^1, \ldots, \alpha_p^{\rho_\mathbf{N}} \in \mathbf{N}$ and these nodes belong to at least $\rho_\mathbf{Z}$ distinct zones. Among the possible assignations, we choose one that \emph{maximizes} the effective storage capacity of the cluster. If the layout contained a previous assignment $\alpha'$, we \emph{minimize} the amount of data to transfer during the layout update by making $\alpha$ as close as possible to $\alpha'$. This maximization and minimization are described more formally in the following section.

\subsection{Optimization parameters}

To link the effective storage capacity of the cluster to the partition assignment, we make the following assumption:
\begin{equation}
	\tag{H1}
	\text{\emph{All partitions have the same size $s$.}}
\end{equation}
This assumption is justified by the dispersion of the hashing function, when the number of partitions is small relative to the number of stored blocks.

Every node $n$ will store some number $p_n$ of partitions (it is the number of partitions $p$ such that $n$ appears in $\alpha_p$). Hence the partitions stored by $n$ (and hence all partitions, by our assumption) have their size bounded by $c_n/p_n$. This remark leads us to define the optimal size that we will want to maximize:

\begin{equation}
	\label{eq:optimal}
	\tag{OPT}
	s^* = \min_{n \in \mathbf{N}} \frac{c_n}{p_n}.
\end{equation}

When the capacities of the nodes are updated (this includes adding or removing a node), we want to update the assignment as well. However, transferring data between nodes has a cost and we would like to limit the number of changes in the assignment. We make the following assumption:
\begin{equation}
	\tag{H2}
	\text{\emph{Node updates happen rarely relative to block operations.}}
\end{equation}
This assumption justifies that, when we compute the new assignment $\alpha$, it is worth optimizing the partition size \eqref{eq:optimal} first, and then, among the possible optimal solutions, trying to minimize the number of partition transfers. More formally, we minimize the distance between two assignments, defined by
\begin{equation}
	d(\alpha, \alpha') := \#\{ (n,p) \in \mathbf{N}\times\mathbf{P} ~|~ n\in \alpha_p \triangle \alpha'_p \}
\end{equation}
where the symmetric difference $\alpha_p \triangle \alpha'_p$ denotes the nodes appearing in one of the assignations but not in both.

\section{Computation of an optimal assignment}

The algorithm that we propose takes as inputs the cluster layout parameters $\mathbf{N}$, $\mathbf{Z}$, $\mathbf{P}$, $(c_n)_{n\in \mathbf{N}}$, $\rho_\mathbf{N}$, $\rho_\mathbf{Z}$, that we defined in the introduction, together with the former assignation $\alpha'$ (if any). The computation of the new optimal assignation $\alpha^*$ is done in three successive steps that will be detailed in the following sections. The first step computes the largest partition size $s^*$ that an assignation can achieve. The second step computes an optimal candidate assignment $\alpha$ that achieves $s^*$; a heuristic is used in this computation to make it hopefully close to $\alpha'$. The third step modifies $\alpha$ iteratively to reduce $d(\alpha, \alpha')$, and yields an assignation $\alpha^*$ achieving $s^*$ and minimizing $d(\cdot, \alpha')$ among such assignations.

We will explain in the next section how to represent an assignment $\alpha$ by a flow $f$ on a weighted graph $G$ to enable the use of flow and graph algorithms. The main function of the algorithm can be written as follows.

\subsubsection*{Algorithm}

\begin{algorithmic}[1]
	\Function{Compute Layout}{$\mathbf{N}$, $\mathbf{Z}$, $\mathbf{P}$, $(c_n)_{n\in \mathbf{N}}$, $\rho_\mathbf{N}$, $\rho_\mathbf{Z}$, $\alpha'$}
	\State $s^* \leftarrow$ \Call{Compute Partition Size}{$\mathbf{N}$, $\mathbf{Z}$, $\mathbf{P}$, $(c_n)_{n\in \mathbf{N}}$, $\rho_\mathbf{N}$, $\rho_\mathbf{Z}$}
	\State $G \leftarrow G(s^*)$
	\State $f \leftarrow$ \Call{Compute Candidate Assignment}{$G$, $\alpha'$}
	\State $f^* \leftarrow$ \Call{Minimize transfer load}{$G$, $f$, $\alpha'$}
	\State Build $\alpha^*$ from $f^*$
	\State \Return $\alpha^*$
	\EndFunction
\end{algorithmic}

\subsubsection*{Complexity}
As we will see in the next sections, the worst case complexity of this algorithm is $O(P^2 N^2)$. The minimization of transfer load is the most expensive step, and it can run with a timeout since it is only an optimization step. Without this step (or with a smart timeout), the worst case complexity can be $O((PN)^{3/2}\log C)$ where $C$ is the total storage capacity of the cluster.

\subsection{Determination of the partition size $s^*$}

We will represent an assignment $\alpha$ as a flow in a specific graph $G$. We will not compute the optimal partition size $s^*$ a priori, but we will determine it by dichotomy, as the largest size $s$ such that the maximal flow achievable on $G=G(s)$ has value $\rho_\mathbf{N}P$. We will assume that the capacities are given in a small enough unit (say, Megabytes), and we will determine $s^*$ at the precision of the given unit.

Given some candidate size value $s$, we describe the oriented weighted graph $G=(V,E)$ with vertex set $V$ and arc set $E$ (see Figure \ref{fig:flowgraph}).

The set of vertices $V$ contains the source $\mathbf{s}$, the sink $\mathbf{t}$, vertices
$\mathbf{p^+, p^-}$ for every partition $p$, vertices $\mathbf{x}_{p,z}$ for every partition $p$ and zone $z$, and vertices $\mathbf{n}$ for every node $n$.

The set of arcs $E$ contains:
\begin{itemize}
	\item ($\mathbf{s}$,$\mathbf{p}^+$, $\rho_\mathbf{Z}$) for every partition $p$;
	\item ($\mathbf{s}$,$\mathbf{p}^-$, $\rho_\mathbf{N}-\rho_\mathbf{Z}$) for every partition $p$;
	\item ($\mathbf{p}^+$,$\mathbf{x}_{p,z}$, 1) for every partition $p$ and zone $z$;
	\item ($\mathbf{p}^-$,$\mathbf{x}_{p,z}$, $\rho_\mathbf{N}-\rho_\mathbf{Z}$) for every partition $p$ and zone $z$;
	\item ($\mathbf{x}_{p,z}$,$\mathbf{n}$, 1) for every partition $p$, zone $z$ and node $n\in z$;
	\item ($\mathbf{n}$, $\mathbf{t}$, $\lfloor c_n/s \rfloor$) for every node $n$.
\end{itemize}

\begin{figure}
	\centering
	\includegraphics[width=\linewidth]{figures/flow_graph_param}
	\caption{An example of graph $G(s)$. Arcs are oriented from left to right, and unlabeled arcs have capacity 1. In this example, nodes $n_1,n_2,n_3$ belong to zone $z_1$, and nodes $n_4,n_5$ belong to zone $z_2$.}
	\label{fig:flowgraph}
\end{figure}

In the following complexity calculations, we will use the number of vertices and edges of $G$. Remark from now that $\# V = O(PZ)$ and $\# E = O(PN)$.
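As a worked instantiation of these bounds (assuming the typical values $P=256$, $N=10$ and $Z=3$, with every node belonging to exactly one zone), counting the vertices and arcs listed above gives
\begin{align*}
	\#V &= 2 + 2P + PZ + N = 2 + 512 + 768 + 10 = 1292, \\
	\#E &= 2P + 2PZ + PN + N = 512 + 1536 + 2560 + 10 = 4618.
\end{align*}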

\begin{proposition}
	An assignment $\alpha$ is realizable with partition size $s$ and the redundancy constraints $(\rho_\mathbf{N},\rho_\mathbf{Z})$ if and only if there exists a maximal flow function $f$ in $G$ with total flow $\rho_\mathbf{N}P$, such that the arcs ($\mathbf{x}_{p,z}$,$\mathbf{n}$, 1) used are exactly those for which $p$ is associated to $n$ in $\alpha$.
\end{proposition}
\begin{proof}
	Given such a flow $f$, we can reconstruct a candidate $\alpha$. In $f$, the flow passing through $\mathbf{p^+}$ and $\mathbf{p^-}$ is $\rho_\mathbf{N}$, and since the outgoing capacity of every $\mathbf{x}_{p,z}$ is 1, every partition is associated to $\rho_\mathbf{N}$ distinct nodes. The fraction $\rho_\mathbf{Z}$ of the flow passing through every $\mathbf{p^+}$ must be spread over as many distinct zones, as every arc outgoing from $\mathbf{p^+}$ has capacity 1. So the reconstructed $\alpha$ verifies the redundancy constraints. For every node $n$, the flow between $\mathbf{n}$ and $\mathbf{t}$ corresponds to the number of partitions associated to $n$. By construction of $f$, this does not exceed $\lfloor c_n/s \rfloor$. We assumed that the partition size is $s$, hence this association does not exceed the storage capacity of the nodes.

	In the other direction, given an assignment $\alpha$, one can similarly check that the facts that $\alpha$ respects the redundancy constraints and the storage capacities of the nodes are necessary conditions to construct a maximal flow function $f$.
\end{proof}

\textbf{Implementation remark:} In the flow algorithm, while exploring the graph, we explore the neighbours of every vertex in a random order to heuristically spread the associations between nodes and partitions.

\subsubsection*{Algorithm}
With this result in mind, we can describe the first step of our algorithm. All divisions are supposed to be integer divisions.
\begin{algorithmic}[1]
	\Function{Compute Partition Size}{$\mathbf{N}$, $\mathbf{Z}$, $\mathbf{P}$, $(c_n)_{n\in \mathbf{N}}$, $\rho_\mathbf{N}$, $\rho_\mathbf{Z}$}

	\State Build the graph $G=G(s=1)$
	\State $ f \leftarrow$ \Call{Maximal flow}{$G$}
	\If{$f.\mathrm{total flow} < \rho_\mathbf{N}P$}

	\State \Return Error: capacities too small or constraints too strong.
	\EndIf

	\State $s^- \leftarrow 1$
	\State $s^+ \leftarrow 1+\frac{1}{\rho_\mathbf{N}}\sum_{n \in \mathbf{N}} c_n$

	\While{$s^-+1 < s^+$}
	\State Build the graph $G=G(s=(s^-+s^+)/2)$
	\State $ f \leftarrow$ \Call{Maximal flow}{$G$}
	\If{$f.\mathrm{total flow} < \rho_\mathbf{N}P$}
	\State $s^+ \leftarrow (s^- + s^+)/2$
	\Else
	\State $s^- \leftarrow (s^- + s^+)/2$
	\EndIf
	\EndWhile

	\State \Return $s^-$
	\EndFunction
\end{algorithmic}

\subsubsection*{Complexity}

To compute the maximal flow, we use Dinic's algorithm. Its complexity on general graphs is $O(\#V^2 \#E)$, but on graphs with edge capacity bounded by a constant, it turns out to be $O(\#E^{3/2})$. The graph $G$ does not fall in this case since the capacities of the arcs incoming to $\mathbf{t}$ are far from bounded. However, the proof of this complexity bound works readily for graphs where we only ask the edges \emph{not} incoming to the sink $\mathbf{t}$ to have their capacities bounded by a constant. One can find the proof of this claim in \cite[Section 2]{even1975network}.
The dichotomy adds a logarithmic factor $\log (C)$ where $C=\sum_{n \in \mathbf{N}} c_n$ is the total capacity of the cluster. The total complexity of this first function is hence
$O(\#E^{3/2}\log C ) = O\big((PN)^{3/2} \log C\big)$.

\subsubsection*{Metrics}
We can display the discrepancy between the computed $s^*$ and the best size we could have hoped for given the total capacity, that is $C/\rho_\mathbf{N}$.

\subsection{Computation of a candidate assignment}

Now that we have the optimal partition size $s^*$, to compute a candidate assignment it would be enough to compute a maximal flow function $f$ on $G(s^*)$. This is what we do if there is no former assignation $\alpha'$.

If there is some $\alpha'$, we add a step that will heuristically help to obtain a candidate $\alpha$ closer to $\alpha'$. We first compute a flow function $\tilde{f}$ that uses only the partition-to-node associations appearing in $\alpha'$. Most likely, $\tilde{f}$ will not be a maximal flow of $G(s^*)$. In Dinic's algorithm, we can start from a non-maximal flow function and then discover improving paths. This is what we do by starting from $\tilde{f}$. The hope\footnote{This is only a hope, because one can find examples where the construction of $f$ from $\tilde{f}$ produces an assignment $\alpha$ that is not as close as possible to $\alpha'$.} is that the final flow function $f$ will tend to keep the associations appearing in $\tilde{f}$.

More formally, we construct the graph $G_{|\alpha'}$ from $G$ by removing all the arcs $(\mathbf{x}_{p,z},\mathbf{n}, 1)$ where $p$ is not associated to $n$ in $\alpha'$. We compute a maximal flow function $\tilde{f}$ in $G_{|\alpha'}$. The flow $\tilde{f}$ is also a valid (most likely non-maximal) flow function on $G$. We compute a maximal flow function $f$ on $G$ by starting Dinic's algorithm on $\tilde{f}$.

\subsubsection*{Algorithm}
\begin{algorithmic}[1]
	\Function{Compute Candidate Assignment}{$G$, $\alpha'$}
	\State Build the graph $G_{|\alpha'}$
	\State $ \tilde{f} \leftarrow$ \Call{Maximal flow}{$G_{|\alpha'}$}
	\State $ f \leftarrow$ \Call{Maximal flow from flow}{$G$, $\tilde{f}$}
	\State \Return $f$
	\EndFunction
\end{algorithmic}

~

\textbf{Remark:} The function ``Maximal flow'' can just be seen as the function ``Maximal flow from flow'' called with the zero flow function as starting flow.

\subsubsection*{Complexity}
With the considerations of the last section, we have the complexity of Dinic's algorithm $O(\#E^{3/2}) = O((PN)^{3/2})$.

\subsubsection*{Metrics}

We can display the flow value of $\tilde{f}$, which is an upper bound of the distance between $\alpha$ and $\alpha'$. It might be more of a Debug level display than Info.

\subsection{Minimization of the transfer load}

Now that we have a candidate flow function $f$, we want to modify it to make its corresponding assignation $\alpha$ as close as possible to $\alpha'$. Denote by $f'$ the maximal flow corresponding to $\alpha'$, and let $d(f, \alpha')=d(f, f'):=d(\alpha,\alpha')$\footnote{It is the number of arcs of type $(\mathbf{x}_{p,z},\mathbf{n})$ saturated in one flow and not in the other.}.
We want to build a sequence $f=f_0, f_1, f_2 \dots$ of maximal flows such that $d(f_i, \alpha')$ decreases as $i$ increases. The distance being a non-negative integer, this sequence of flow functions must be finite. We now explain how to find some improving $f_{i+1}$ from $f_i$.

For any maximal flow $f$ in $G$, we define the oriented weighted graph $G_f=(V, E_f)$ as follows. The vertices of $G_f$ are the same as the vertices of $G$. $E_f$ contains the arc $(v_1,v_2, w)$ between vertices $v_1,v_2\in V$ with weight $w$ if and only if the arc $(v_1,v_2)$ is not saturated in $f$ (i.e. $c(v_1,v_2)-f(v_1,v_2) \ge 1$; we also consider reversed arcs). The weight $w$ is:
\begin{itemize}
	\item $-1$ if $(v_1,v_2)$ is of type $(\mathbf{x}_{p,z},\mathbf{n})$ or $(\mathbf{n},\mathbf{x}_{p,z})$ and is saturated in only one of the two flows $f,f'$;
	\item $+1$ if $(v_1,v_2)$ is of type $(\mathbf{x}_{p,z},\mathbf{n})$ or $(\mathbf{n},\mathbf{x}_{p,z})$ and is saturated in either both or none of the two flows $f,f'$;
	\item $0$ otherwise.
\end{itemize}

If $\gamma$ is a simple cycle of arcs in $G_f$, we define its weight $w(\gamma)$ as the sum of the weights of its arcs. We can add $+1$ to the value of $f$ on the arcs of $\gamma$; by construction of $G_f$ and the fact that $\gamma$ is a cycle, the function that we get is still a valid flow function on $G$, and it is maximal as it has the same flow value as $f$. We denote this new function $f+\gamma$.

\begin{proposition}
	Given a maximal flow $f$ and a simple cycle $\gamma$ in $G_f$, we have $d(f+\gamma, f') - d(f,f') = w(\gamma)$.
\end{proposition}
\begin{proof}
	Let $X$ be the set of arcs of type $(\mathbf{x}_{p,z},\mathbf{n})$. Then we can express $d(f,f')$ as
	\begin{align*}
		d(f,f') & = \#\{e\in X ~|~ f(e)\neq f'(e)\}
		= \sum_{e\in X} 1_{f(e)\neq f'(e)} \\
		& = \frac{1}{2}\big( \#X + \sum_{e\in X} 1_{f(e)\neq f'(e)} - 1_{f(e)= f'(e)} \big).
	\end{align*}
	We can express the cycle weight as
	\begin{align*}
		w(\gamma) & = \sum_{e\in X, e\in \gamma} - 1_{f(e)\neq f'(e)} + 1_{f(e)= f'(e)}.
	\end{align*}
	Remark that since we pushed one unit of flow along $\gamma$ to construct $f+\gamma$, we have, for any $e\in X\cap\gamma$, $f(e)=f'(e)$ if and only if $(f+\gamma)(e) \neq f'(e)$.
	Hence
	\begin{align*}
		w(\gamma) & = \frac{1}{2}(w(\gamma) + w(\gamma)) \\
		&= \frac{1}{2} \Big(
		\sum_{e\in X, e\in \gamma} - 1_{f(e)\neq f'(e)} + 1_{f(e)= f'(e)} \\
		& \qquad +
		\sum_{e\in X, e\in \gamma} 1_{(f+\gamma)(e)\neq f'(e)} - 1_{(f+\gamma)(e)= f'(e)}
		\Big).
	\end{align*}
	Plugging this into the previous equation, we find that
	$$d(f,f')+w(\gamma) = d(f+\gamma, f').$$
\end{proof}

This result suggests that given some flow $f_i$, we just need to find a negative cycle $\gamma$ in $G_{f_i}$ to construct $f_{i+1}$ as $f_i+\gamma$. The following proposition ensures that this greedy strategy reaches an optimal flow.

\begin{proposition}
	For any maximal flow $f$, $G_f$ contains a negative cycle if and only if there exists a maximal flow $f^*$ in $G$ such that $d(f^*, f') < d(f, f')$.
\end{proposition}
\begin{proof}
	Suppose that there is such a flow $f^*$. Define the oriented multigraph $M_{f,f^*}=(V,E_M)$ with the same vertex set $V$ as in $G$, and, for every $v_1,v_2 \in V$, $E_M$ contains $(f^*(v_1,v_2) - f(v_1,v_2))_+$ copies of the arc $(v_1,v_2)$. For every vertex $v$, its total degree (meaning its outer degree minus its inner degree) is equal to
	\begin{align*}
		\deg v & = \sum_{u\in V} (f^*(v,u) - f(v,u))_+ - \sum_{u\in V} (f^*(u,v) - f(u,v))_+ \\
		& = \sum_{u\in V} f^*(v,u) - f(v,u) = \sum_{u\in V} f^*(v,u) - \sum_{u\in V} f(v,u).
	\end{align*}
	The last two sums are zero for any inner vertex since $f,f^*$ are flows, and they are equal on the source and sink since the two flows are both maximal and hence have the same value. Thus, $\deg v = 0$ for every vertex $v$.

	This implies that the multigraph $M_{f,f^*}$ is the union of disjoint simple cycles. $f$ can be transformed into $f^*$ by pushing a mass 1 along all these cycles in any order. Since $d(f^*, f')<d(f,f')$, there must exist one of these simple cycles $\gamma$ with $d(f+\gamma, f') < d(f, f')$. Finally, since we can push a mass in $f$ along $\gamma$, it must appear in $G_f$. Hence $\gamma$ is a cycle of $G_f$ with negative weight.
\end{proof}

In the next section we describe the corresponding algorithm. Instead of discovering only one cycle, we are allowed to discover a set $\Gamma$ of disjoint negative cycles.

\subsubsection*{Algorithm}
\begin{algorithmic}[1]
	\Function{Minimize transfer load}{$G$, $f$, $\alpha'$}
	\State Build the graph $G_f$
	\State $\Gamma \leftarrow$ \Call{Detect Negative Cycles}{$G_f$}
	\While{$\Gamma \neq \emptyset$}
	\ForAll{$\gamma \in \Gamma$}
	\State $f \leftarrow f+\gamma$
	\EndFor
	\State Update $G_f$
	\State $\Gamma \leftarrow$ \Call{Detect Negative Cycles}{$G_f$}
	\EndWhile
	\State \Return $f$
	\EndFunction
\end{algorithmic}

\subsubsection*{Complexity}
The distance $d(f,f')$ is bounded by the maximal number of differences in the associated assignments. If these assignments are totally disjoint, this distance is $2\rho_\mathbf{N} P$. At every iteration of the While loop, the distance decreases, so there are at most $O(\rho_\mathbf{N} P) = O(P)$ iterations.

The detection of negative cycles is done with the Bellman-Ford algorithm, whose complexity would normally be $O(\#E\#V)$. In our case, it amounts to $O(P^2ZN)$. Multiplied by the complexity of the outer loop, this amounts to $O(P^3ZN)$, which is a lot when the number of partitions and nodes starts to be large. To avoid that, we adapt the Bellman-Ford algorithm.

The Bellman-Ford algorithm runs $\#V$ iterations of an outer loop, and an inner loop over $E$. The idea is to compute the shortest paths from a source vertex $v$ to all other vertices. After $k$ iterations of the outer loop, the algorithm has computed all shortest paths of length at most $k$. All simple paths have length at most $\#V-1$, so if there is an update in the last iteration of the loop, it means that there is a negative cycle in the graph. The observation that will enable us to improve the complexity is the following:

\begin{proposition}
	In the graph $G_f$ (and $G$), all simple paths have a length of at most $4N$.
\end{proposition}
\begin{proof}
	Since $f$ is a maximal flow, there is no outgoing edge from $\mathbf{s}$ in $G_f$. One can thus check that any simple path of length 4 must contain at least two nodes of type $\mathbf{n}$. Hence on a path, at most 4 arcs separate two successive nodes of type $\mathbf{n}$.
\end{proof}

Thus, in the absence of negative cycles, shortest paths in $G_f$ have length at most $4N$. So we can do only $4N+1$ iterations of the outer loop in the Bellman-Ford algorithm. This makes the complexity of the detection of one set of cycles $O(N\#E) = O(N^2 P)$.

With this improvement, the complexity of the whole algorithm is, in the worst case, $O(N^2P^2)$. However, since we detect several cycles at once and we start with a flow that might be close to the previous one, the number of iterations of the outer loop might be smaller in practice.

\subsubsection*{Metrics}
We can display the node and zone utilization ratios, obtained by dividing the flow passing through them by their outgoing capacity. In particular, we can pinpoint saturated nodes and zones (i.e. used at their full potential).

We can display the distance to the previous assignment, and the number of partition transfers.

\bibliography{optimal_layout}
\bibliographystyle{ieeetr}

\end{document}

@ -1,11 +0,0 @@
@article{even1975network,
  title={Network flow and testing graph connectivity},
  author={Even, Shimon and Tarjan, R Endre},
  journal={SIAM Journal on Computing},
  volume={4},
  number={4},
  pages={507--518},
  year={1975},
  publisher={SIAM}
}
@ -1,709 +0,0 @@
|
|||
\documentclass[]{article}
|
||||
|
||||
\usepackage{amsmath,amssymb}
|
||||
\usepackage{amsthm}
|
||||
|
||||
\usepackage{graphicx,xcolor}
|
||||
|
||||
\usepackage{algorithm,algpseudocode,float}
|
||||
|
||||
\renewcommand\thesubsubsection{\Alph{subsubsection})}
|
||||
|
||||
\newtheorem{proposition}{Proposition}
|
||||
|
||||
%opening
|
||||
\title{Optimal partition assignment in Garage}
|
||||
\author{Mendes}
|
||||
|
||||
\begin{document}
|
||||
|
||||
\maketitle
|
||||
|
||||
\section{Introduction}
|
||||
|
||||
\subsection{Context}
|
||||
|
||||
Garage is an open-source distributed storage service blablabla$\dots$
|
||||
|
||||
Every object to be stored in the system falls in a partition given by the last $k$ bits of its hash. There are $P=2^k$ partitions. Every partition will be stored on distinct nodes of the system. The goal of the assignment of partitions to nodes is to ensure (nodes and zone) redundancy and to be as efficient as possible.
|
||||
|
||||
\subsection{Formal description of the problem}
|
||||
|
||||
We are given a set of nodes $\mathbf{N}$ and a set of zones $\mathbf{Z}$. Every node $n$ has a non-negative storage capacity $c_n\ge 0$ and belongs to a zone $z\in \mathbf{Z}$. We are also given a number of partition $P>0$ (typically $P=256$).
|
||||
|
||||
We would like to compute an assignment of nodes to partitions. We will impose some redundancy constraints to this assignment, and under these constraints, we want our system to have the largest storage capacity possible. To link storage capacity to partition assignment, we make the following assumption:
|
||||
\begin{equation}
|
||||
\tag{H1}
|
||||
\text{\emph{All partitions have the same size $s$.}}
|
||||
\end{equation}
|
||||
This assumption is justified by the dispersion of the hashing function, when the number of partitions is small relative to the number of stored large objects.
|
||||
|
||||
Every node $n$ wille store some number $k_n$ of partitions. Hence the partitions stored by $n$ (and hence all partitions by our assumption) have there size bounded by $c_n/k_n$. This remark leads us to define the optimal size that we will want to maximize:
|
||||
|
||||
\begin{equation}
|
||||
\label{eq:optimal}
|
||||
\tag{OPT}
|
||||
s^* = \min_{n \in N} \frac{c_n}{k_n}.
|
||||
\end{equation}
|
||||
|
||||
When the capacities of the nodes are updated (this includes adding or removing a node), we want to update the assignment as well. However, transferring the data between nodes has a cost and we would like to limit the number of changes in the assignment. We make the following assumption:
|
||||
\begin{equation}
|
||||
\tag{H2}
|
||||
\text{\emph{Updates of capacity happens rarely relatively to object storing.}}
|
||||
\end{equation}
|
||||
This assumption justifies that when we compute the new assignment, it is worth to optimize the partition size \eqref{eq:optimal} first, and then, among the possible optimal solution, to try to minimize the number of partition transfers.
|
||||
|
||||
For now, in the following, we ask the following redundancy constraint:
|
||||
|
||||
\textbf{Parametric node and zone redundancy:} Given two integer parameters $1\le \rho_\mathbf{Z} \le \rho_\mathbf{N}$, we ask every partition to be stored on $\rho_\mathbf{N}$ distinct nodes, and these nodes must belong to at least $\rho_\mathbf{Z}$ distinct zones.
|
||||
|
||||
|
||||
\textbf{Mode 3-strict:} every partition needs to be assignated to three nodes belonging to three different zones.
|
||||
|
||||
\textbf{Mode 3:} every partition needs to be assignated to three nodes. We try to spread the three nodes over different zones as much as possible.
|
||||
|
||||
\textbf{Warning:} This is a working document written incrementaly. The last version of the algorithm is the \textbf{parametric assignment} described in the next section.
|
||||
|
||||
|
||||
\section{Computation of a parametric assignment}
|
||||
\textbf{Attention : }We change notations in this section.
|
||||
|
||||
Notations : let $P$ be the number of partitions, $N$ the number of nodes, $Z$ the number of zones. Let $\mathbf{P,N,Z}$ be the label sets of, respectively, partitions, nodes and zones.
|
||||
Let $s^*$ be the largest partition size achievable with the redundancy constraints. Let $(c_n)_{n\in \mathbf{N}}$ be the storage capacity of every node.
|
||||
|
||||
In this section, we propose a third specification of the problem. The user inputs two redundancy parameters $1\le \rho_\mathbf{Z} \le \rho_\mathbf{N}$. We compute an assignment $\alpha = (\alpha_p^1, \ldots, \alpha_p^{\rho_\mathbf{N}})_{p\in \mathbf{P}}$ such that every partition $p$ is associated to $\rho_\mathbf{N}$ distinct nodes $\alpha_p^1, \ldots, \alpha_p^{\rho_\mathbf{N}}$ and these nodes belong to at least $\rho_\mathbf{Z}$ distinct zones.
|
||||
|
||||
If the layout contained a previous assignment $\alpha'$, we try to minimize the amount of data to transfer during the layout update by making $\alpha$ as close as possible to $\alpha'$.
|
||||
|
||||
In the following subsections, we describe the successive steps of the algorithm we propose to compute $\alpha$.
|
||||
|
||||
\subsubsection*{Algorithm}
|
||||
|
||||
\begin{algorithmic}[1]
|
||||
\Function{Compute Layout}{$\mathbf{N}$, $\mathbf{Z}$, $\mathbf{P}$, $(c_n)_{n\in \mathbf{N}}$, $\rho_\mathbf{N}$, $\rho_\mathbf{Z}$, $\alpha'$}
|
||||
\State $s^* \leftarrow$ \Call{Compute Partition Size}{$\mathbf{N}$, $\mathbf{Z}$, $\mathbf{P}$, $(c_n)_{n\in \mathbf{N}}$, $\rho_\mathbf{N}$, $\rho_\mathbf{Z}$}
|
||||
\State $G \leftarrow G(s^*)$
|
||||
\State $f \leftarrow$ \Call{Compute Candidate Assignment}{$G$, $\alpha'$}
|
||||
\State $f^* \leftarrow$ \Call{Minimize transfer load}{$G$, $f$, $\alpha'$}
|
||||
\State Build $\alpha^*$ from $f^*$
|
||||
\State \Return $\alpha^*$
|
||||
\EndFunction
|
||||
\end{algorithmic}
|
||||
|
||||
\subsubsection*{Complexity}
|
||||
As we will see in the next sections, the worst case complexity of this algorithm is $O(P^2 N^2)$. The minimization of transfer load is the most expensive step, and it can run with a timeout since it is only an optimization step. Without this step (or with a smart timeout), the worst cas complexity can be $O((PN)^{3/2}\log C)$ where $C$ is the total storage capacity of the cluster.
|
||||
|
||||
\subsection{Determination of the partition size $s^*$}
|
||||
|
||||
Again, we will represent an assignment $\alpha$ as a flow in a specific graph $G$. We will not compute the optimal partition size $s^*$ a priori, but we will determine it by dichotomy, as the largest size $s$ such that the maximal flow achievable on $G=G(s)$ has value $\rho_\mathbf{N}P$. We will assume that the capacities are given in a small enough unit (say, Megabytes), and we will determine $s^*$ at the precision of the given unit.
|
||||
|
||||
Given some candidate size value $s$, we describe the oriented weighted graph $G=(V,E)$ with vertex set $V$ arc set $E$.
|
||||
|
||||
The set of vertices $V$ contains the source $\mathbf{s}$, the sink $\mathbf{t}$, vertices
|
||||
$\mathbf{p^+, p^-}$ for every partition $p$, vertices $\mathbf{x}_{p,z}$ for every partition $p$ and zone $z$, and vertices $\mathbf{n}$ for every node $n$.
|
||||
|
||||
The set of arcs $E$ contains:
|
||||
\begin{itemize}
|
||||
\item ($\mathbf{s}$,$\mathbf{p}^+$, $\rho_\mathbf{Z}$) for every partition $p$;
|
||||
\item ($\mathbf{s}$,$\mathbf{p}^-$, $\rho_\mathbf{N}-\rho_\mathbf{Z}$) for every partition $p$;
|
||||
\item ($\mathbf{p}^+$,$\mathbf{x}_{p,z}$, 1) for every partition $p$ and zone $z$;
|
||||
\item ($\mathbf{p}^-$,$\mathbf{x}_{p,z}$, $\rho_\mathbf{N}-\rho_\mathbf{Z}$) for every partition $p$ and zone $z$;
|
||||
\item ($\mathbf{x}_{p,z}$,$\mathbf{n}$, 1) for every partition $p$, zone $z$ and node $n\in z$;
|
||||
\item ($\mathbf{n}$, $\mathbf{t}$, $\lfloor c_n/s \rfloor$) for every node $n$.
|
||||
\end{itemize}
|
||||
|
||||
In the following complexity calculations, we will use the number of vertices and edges of $G$. Remark from now that $\# V = O(PZ)$ and $\# E = O(PN)$.
|
||||
|
||||
\begin{proposition}
|
||||
An assignment $\alpha$ is realizable with partition size $s$ and the redundancy constraints $(\rho_\mathbf{N},\rho_\mathbf{Z})$ if and only if there exists a maximal flow function $f$ in $G$ with total flow $\rho_\mathbf{N}P$, such that the arcs ($\mathbf{x}_{p,z}$,$\mathbf{n}$, 1) used are exactly those for which $p$ is associated to $n$ in $\alpha$.
|
||||
\end{proposition}
|
||||
\begin{proof}
|
||||
Given such flow $f$, we can reconstruct a candidate $\alpha$. In $f$, the flow passing through $\mathbf{p^+}$ and $\mathbf{p^-}$ is $\rho_\mathbf{N}$, and since the outgoing capacity of every $\mathbf{x}_{p,z}$ is 1, every partition is associated to $\rho_\mathbf{N}$ distinct nodes. The fraction $\rho_\mathbf{Z}$ of the flow passing through every $\mathbf{p^+}$ must be spread over as many distinct zones as every arc outgoing from $\mathbf{p^+}$ has capacity 1. So the reconstructed $\alpha$ verifies the redundancy constraints. For every node $n$, the flow between $\mathbf{n}$ and $\mathbf{t}$ corresponds to the number of partitions associated to $n$. By construction of $f$, this does not exceed $\lfloor c_n/s \rfloor$. We assumed that the partition size is $s$, hence this association does not exceed the storage capacity of the nodes.
|
||||
|
||||
In the other direction, given an assignment $\alpha$, one can similarly check that the facts that $\alpha$ respects the redundancy constraints, and the storage capacities of the nodes, are necessary condition to construct a maximal flow function $f$.
|
||||
\end{proof}
|
||||
|
||||
\textbf{Implementation remark:} In the flow algorithm, while exploring the graph, we explore the neighbours of every vertex in a random order to heuristically spread the association between nodes and partitions.

\subsubsection*{Algorithm}

With this result in mind, we can describe the first step of our algorithm. All divisions are assumed to be integer divisions.
\begin{algorithmic}[1]
	\Function{Compute Partition Size}{$\mathbf{N}$, $\mathbf{Z}$, $\mathbf{P}$, $(c_n)_{n\in \mathbf{N}}$, $\rho_\mathbf{N}$, $\rho_\mathbf{Z}$}

	\State Build the graph $G=G(s=1)$
	\State $ f \leftarrow$ \Call{Maximal flow}{$G$}
	\If{$f.\mathrm{total flow} < \rho_\mathbf{N}P$}
		\State \Return Error: capacities too small or constraints too strong.
	\EndIf

	\State $s^- \leftarrow 1$
	\State $s^+ \leftarrow 1+\frac{1}{\rho_\mathbf{N}}\sum_{n \in \mathbf{N}} c_n$

	\While{$s^-+1 < s^+$}
		\State Build the graph $G=G(s=(s^-+s^+)/2)$
		\State $ f \leftarrow$ \Call{Maximal flow}{$G$}
		\If{$f.\mathrm{total flow} < \rho_\mathbf{N}P$}
			\State $s^+ \leftarrow (s^- + s^+)/2$
		\Else
			\State $s^- \leftarrow (s^- + s^+)/2$
		\EndIf
	\EndWhile

	\State \Return $s^-$
	\EndFunction
\end{algorithmic}
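
The same dichotomy can be sketched in a few lines of Python on top of the \texttt{build\_graph} sketch above; \texttt{max\_flow\_value} is an assumed helper returning the value of a maximal $\mathbf{s}\to\mathbf{t}$ flow (e.g.\ Dinic's algorithm).

\begin{verbatim}
def compute_partition_size(node_capacity, node_zone, P, rho_n, rho_z):
    def feasible(s):
        g = build_graph(s, node_capacity, node_zone, P, rho_n, rho_z)
        return max_flow_value(g) >= rho_n * P  # assumed helper

    if not feasible(1):
        raise ValueError("capacities too small or constraints too strong")
    s_lo = 1
    s_hi = 1 + sum(node_capacity.values()) // rho_n
    while s_lo + 1 < s_hi:
        s_mid = (s_lo + s_hi) // 2
        if feasible(s_mid):
            s_lo = s_mid  # s_mid is realizable, try larger
        else:
            s_hi = s_mid  # s_mid is too large
    return s_lo
\end{verbatim}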

\subsubsection*{Complexity}

To compute the maximal flow, we use Dinic's algorithm. Its complexity on general graphs is $O(\#V^2 \#E)$, but on graphs with edge capacities bounded by a constant, it turns out to be $O(\#E^{3/2})$. The graph $G$ does not fall into this case, since the capacities of the arcs incoming to $\mathbf{t}$ are far from bounded. However, the proof of this complexity works readily for graphs where we only require the edges \emph{not} incoming to the sink $\mathbf{t}$ to have their capacities bounded by a constant. One can find the proof of this claim in \cite[Section 2]{even1975network}.
The dichotomy adds a logarithmic factor $\log (C)$ where $C=\sum_{n \in \mathbf{N}} c_n$ is the total capacity of the cluster. The total complexity of this first function is hence
$O(\#E^{3/2}\log C ) = O\big((PN)^{3/2} \log C\big)$.

\subsubsection*{Metrics}

We can display the discrepancy between the computed $s^*$ and the best size we could hope for given the total capacity, that is $C/(\rho_\mathbf{N}P)$.

\subsection{Computation of a candidate assignment}

Now that we have the optimal partition size $s^*$, to compute a candidate assignment it would be enough to compute a maximal flow function $f$ on $G(s^*)$. This is what we do when there is no previous assignment $\alpha'$.

If there is some $\alpha'$, we add a step that heuristically helps to obtain a candidate $\alpha$ closer to $\alpha'$. To do so, we first compute a flow function $\tilde{f}$ that uses only the partition-to-node associations appearing in $\alpha'$. Most likely, $\tilde{f}$ will not be a maximal flow of $G(s^*)$. In Dinic's algorithm, we can start from a non-maximal flow function and then discover improving paths. This is what we do, starting from $\tilde{f}$. The hope\footnote{This is only a hope, because one can find examples where the construction of $f$ from $\tilde{f}$ produces an assignment $\alpha$ that is not as close as possible to $\alpha'$.} is that the final flow function $f$ will tend to keep the associations appearing in $\tilde{f}$.

More formally, we construct the graph $G_{|\alpha'}$ from $G$ by removing all the arcs $(\mathbf{x}_{p,z},\mathbf{n}, 1)$ where $p$ is not associated to $n$ in $\alpha'$. We compute a maximal flow function $\tilde{f}$ in $G_{|\alpha'}$. $\tilde{f}$ is also a valid (most likely non-maximal) flow function in $G$. We compute a maximal flow function $f$ on $G$ by starting Dinic's algorithm on $\tilde{f}$.

\subsubsection*{Algorithm}
\begin{algorithmic}[1]
	\Function{Compute Candidate Assignment}{$G$, $\alpha'$}
	\State Build the graph $G_{|\alpha'}$
	\State $ \tilde{f} \leftarrow$ \Call{Maximal flow}{$G_{|\alpha'}$}
	\State $ f \leftarrow$ \Call{Maximal flow from flow}{$G$, $\tilde{f}$}
	\State \Return $f$
	\EndFunction
\end{algorithmic}
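
A sketch of the restriction step, under the same assumptions as the previous snippets: $\alpha'$ is given as a set of \texttt{(p, n)} pairs, \texttt{max\_flow} is an assumed helper returning a maximal flow as a dict of arcs to flow values, and \texttt{max\_flow\_from\_flow} is the assumed warm-started variant of Dinic's algorithm.

\begin{verbatim}
def restrict_graph(cap, alpha_prime):
    """Keep only the x_{p,z} -> n arcs whose (p, n) pair is in alpha'."""
    def keep(u, v):
        if isinstance(u, tuple) and u[0] == "x" \
           and isinstance(v, tuple) and v[0] == "n":
            return (u[1], v[1]) in alpha_prime  # (p, n) association
        return True
    return {(u, v): c for (u, v), c in cap.items() if keep(u, v)}

def compute_candidate_assignment(cap, alpha_prime):
    f_tilde = max_flow(restrict_graph(cap, alpha_prime))
    return max_flow_from_flow(cap, f_tilde)  # warm-started Dinic
\end{verbatim}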

\textbf{Remark:} The function ``Maximal flow'' can just be seen as the function ``Maximal flow from flow'' called with the zero flow function as starting flow.

\subsubsection*{Complexity}

From the considerations of the previous section, the complexity of Dinic's algorithm is $O(\#E^{3/2}) = O((PN)^{3/2})$.

\subsubsection*{Metrics}

We can display the flow value of $\tilde{f}$, which is an upper bound on the distance between $\alpha$ and $\alpha'$. It might be more of a Debug-level display than an Info one.

\subsection{Minimization of the transfer load}

Now that we have a candidate flow function $f$, we want to modify it to make its associated assignment as close as possible to $\alpha'$. Denote by $f'$ the maximal flow associated to $\alpha'$, and let $d(f, f')$ be the distance between the associated assignments\footnote{It is the number of arcs of type $(\mathbf{x}_{p,z},\mathbf{n})$ saturated in one flow and not in the other.}.
We want to build a sequence $f=f_0, f_1, f_2, \dots$ of maximal flows such that $d(f_i, f')$ decreases as $i$ increases. The distance being a non-negative integer, this sequence of flow functions must be finite. We now explain how to find some improving $f_{i+1}$ from $f_i$.

For any maximal flow $f$ in $G$, we define the oriented weighted graph $G_f=(V, E_f)$ as follows. The vertices of $G_f$ are the same as the vertices of $G$. $E_f$ contains the arc $(v_1,v_2, w)$ between vertices $v_1,v_2\in V$ with weight $w$ if and only if the arc $(v_1,v_2)$ is not saturated in $f$ (i.e. $c(v_1,v_2)-f(v_1,v_2) \ge 1$; we also consider reversed arcs). The weight $w$ is:
\begin{itemize}
	\item $-1$ if $(v_1,v_2)$ is of type $(\mathbf{x}_{p,z},\mathbf{n})$ or $(\mathbf{n},\mathbf{x}_{p,z})$ and the corresponding arc of $G$ is saturated in only one of the two flows $f,f'$;
	\item $+1$ if $(v_1,v_2)$ is of type $(\mathbf{x}_{p,z},\mathbf{n})$ or $(\mathbf{n},\mathbf{x}_{p,z})$ and the corresponding arc of $G$ is saturated in either both or none of the two flows $f,f'$;
	\item $0$ otherwise.
\end{itemize}
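
A minimal sketch of this construction, assuming as in $G$ that the $(\mathbf{x}_{p,z},\mathbf{n})$ arcs all have capacity 1, and that the set of such arcs saturated in $f'$ is given:

\begin{verbatim}
def build_gf(cap, f, sat_f_prime):
    """Residual graph of f with the +/-1 weights; sat_f_prime is the
    set of (x_{p,z}, n) arcs saturated in f'."""
    def is_xn(u, v):
        return isinstance(u, tuple) and u[0] == "x" \
           and isinstance(v, tuple) and v[0] == "n"

    arcs = []
    for (u, v), c in cap.items():
        fe = f.get((u, v), 0)
        if is_xn(u, v):
            differs = (fe >= c) != ((u, v) in sat_f_prime)
            w = -1 if differs else 1
        else:
            w = 0
        if fe < c:      # forward residual arc
            arcs.append((u, v, w))
        if fe > 0:      # reversed residual arc
            arcs.append((v, u, w))
    return arcs
\end{verbatim}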

If $\gamma$ is a simple cycle of arcs in $G_f$, we define its weight $w(\gamma)$ as the sum of the weights of its arcs. We can add $+1$ to the value of $f$ on the arcs of $\gamma$ (subtracting $1$ on the corresponding arcs of $G$ where $\gamma$ uses reversed arcs), and by construction of $G_f$ and the fact that $\gamma$ is a cycle, the function that we get is still a valid flow function on $G$; it is maximal as it has the same flow value as $f$. We denote this new function $f+\gamma$.

\begin{proposition}
	Given a maximal flow $f$ and a simple cycle $\gamma$ in $G_f$, we have $d(f+\gamma, f') - d(f,f') = w(\gamma)$.
\end{proposition}
\begin{proof}
	Let $X$ be the set of arcs of type $(\mathbf{x}_{p,z},\mathbf{n})$. Then we can express $d(f,f')$ as
	\begin{align*}
		d(f,f') & = \#\{e\in X ~|~ f(e)\neq f'(e)\}
		 = \sum_{e\in X} 1_{f(e)\neq f'(e)} \\
		& = \frac{1}{2}\big( \#X + \sum_{e\in X} 1_{f(e)\neq f'(e)} - 1_{f(e)= f'(e)} \big).
	\end{align*}
	We can express the cycle weight as
	\begin{align*}
		w(\gamma) & = \sum_{e\in X, e\in \gamma} - 1_{f(e)\neq f'(e)} + 1_{f(e)= f'(e)}.
	\end{align*}
	Remark that since we passed one unit of flow along $\gamma$ to construct $f+\gamma$, we have, for any $e\in X$ on the cycle $\gamma$, $f(e)=f'(e)$ if and only if $(f+\gamma)(e) \neq f'(e)$.
	Hence
	\begin{align*}
		w(\gamma) & = \frac{1}{2}(w(\gamma) + w(\gamma)) \\
		&= \frac{1}{2} \Big(
		\sum_{e\in X, e\in \gamma} - 1_{f(e)\neq f'(e)} + 1_{f(e)= f'(e)} \\
		& \qquad +
		\sum_{e\in X, e\in \gamma} 1_{(f+\gamma)(e)\neq f'(e)} - 1_{(f+\gamma)(e)= f'(e)}
		\Big).
	\end{align*}
	Plugging this into the previous equation, and using the fact that $(f+\gamma)(e) = f(e)$ for every $e\in X$ outside of $\gamma$, we find that
	$$d(f,f')+w(\gamma) = d(f+\gamma, f').$$
\end{proof}

This result suggests that given some flow $f_i$, we just need to find a negative cycle $\gamma$ in $G_{f_i}$ to construct $f_{i+1}$ as $f_i+\gamma$. The following proposition ensures that this greedy strategy reaches an optimal flow.

\begin{proposition}
	For any maximal flow $f$, $G_f$ contains a negative cycle if and only if there exists a maximal flow $f^*$ in $G$ such that $d(f^*, f') < d(f, f')$.
\end{proposition}
\begin{proof}
	Suppose that there is such a flow $f^*$. Define the oriented multigraph $M_{f,f^*}=(V,E_M)$ with the same vertex set $V$ as in $G$, and for every $v_1,v_2 \in V$, $E_M$ contains $(f^*(v_1,v_2) - f(v_1,v_2))_+$ copies of the arc $(v_1,v_2)$. For every vertex $v$, its total degree (meaning its outer degree minus its inner degree) is equal to
	\begin{align*}
		\deg v & = \sum_{u\in V} (f^*(v,u) - f(v,u))_+ - \sum_{u\in V} (f^*(u,v) - f(u,v))_+ \\
		& = \sum_{u\in V} f^*(v,u) - f(v,u) = \sum_{u\in V} f^*(v,u) - \sum_{u\in V} f(v,u).
	\end{align*}
	The last two sums are zero for any inner vertex since $f,f^*$ are flows, and they are equal on the source and sink since the two flows are both maximal and hence have the same value. Thus, $\deg v = 0$ for every vertex $v$.

	This implies that the multigraph $M_{f,f^*}$ is the union of disjoint simple cycles. $f$ can be transformed into $f^*$ by pushing a mass 1 along all these cycles in any order. Since $d(f^*, f')<d(f,f')$, there must exist one of these simple cycles $\gamma$ with $d(f+\gamma, f') < d(f, f')$. Finally, since we can push a mass in $f$ along $\gamma$, it must appear in $G_f$. Hence $\gamma$ is a cycle of $G_f$ with negative weight.
\end{proof}

In the next section we describe the corresponding algorithm. Instead of discovering only one cycle, we are allowed to discover a set $\Gamma$ of disjoint negative cycles.

\subsubsection*{Algorithm}
\begin{algorithmic}[1]
	\Function{Minimize transfer load}{$G$, $f$, $\alpha'$}
	\State Build the graph $G_f$
	\State $\Gamma \leftarrow$ \Call{Detect Negative Cycles}{$G_f$}
	\While{$\Gamma \neq \emptyset$}
		\ForAll{$\gamma \in \Gamma$}
			\State $f \leftarrow f+\gamma$
		\EndFor
		\State Update $G_f$
		\State $\Gamma \leftarrow$ \Call{Detect Negative Cycles}{$G_f$}
	\EndWhile
	\State \Return $f$
	\EndFunction
\end{algorithmic}

\subsubsection*{Complexity}

The distance $d(f,f')$ is bounded by the maximal number of differences in the associated assignments. If these assignments are totally disjoint, this distance is $2\rho_\mathbf{N} P$. At every iteration of the While loop, the distance decreases, so there are at most $O(\rho_\mathbf{N} P) = O(P)$ iterations.

The detection of negative cycles is done with the Bellman-Ford algorithm, whose complexity would normally be $O(\#E\#V)$. In our case, it amounts to $O(P^2ZN)$. Multiplied by the complexity of the outer loop, this gives $O(P^3ZN)$, which is a lot when the number of partitions and nodes starts to be large. To avoid that, we adapt the Bellman-Ford algorithm.

The Bellman-Ford algorithm runs $\#V$ iterations of an outer loop, and an inner loop over $E$. The idea is to compute the shortest paths from a source vertex $v$ to all other vertices. After $k$ iterations of the outer loop, the algorithm has computed all shortest paths of length at most $k$. All simple paths have length at most $\#V-1$, so if there is an update in the last iteration of the loop, it means that there is a negative cycle in the graph. The observation that will enable us to improve the complexity is the following:

\begin{proposition}
	In the graph $G_f$ (and $G$), all simple paths have length at most $4N$.
\end{proposition}
\begin{proof}
	Since $f$ is a maximal flow, there is no outgoing edge from $\mathbf{s}$ in $G_f$. One can thus check that any simple path of length 4 must contain at least two nodes of type $\mathbf{n}$. Hence on a path, at most 4 arcs separate two successive nodes of type $\mathbf{n}$.
\end{proof}

Thus, in the absence of negative cycles, shortest paths in $G_f$ have length at most $4N$. So we can do only $4N+1$ iterations of the outer loop in the Bellman-Ford algorithm. This makes the complexity of the detection of one set of cycles $O(N\#E) = O(N^2 P)$.

With this improvement, the complexity of the whole algorithm is, in the worst case, $O(N^2P^2)$. However, since we detect several cycles at once and we start with a flow that might be close to the previous one, the number of iterations of the outer loop might be smaller in practice.
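
A sketch of the capped negative-cycle detection, in the same hypothetical Python style as above; starting with distance $0$ on all vertices plays the role of a virtual super-source.

\begin{verbatim}
def find_negative_cycle(vertices, arcs, n_nodes):
    """arcs: list of (u, v, w). Returns a negative cycle as a vertex
    list, or None; the outer loop is capped at 4 * n_nodes + 1."""
    dist = {v: 0 for v in vertices}
    pred = {v: None for v in vertices}
    for _ in range(4 * n_nodes + 1):
        last = None
        for (u, v, w) in arcs:
            if dist[u] + w < dist[v]:
                dist[v], pred[v], last = dist[u] + w, u, v
        if last is None:
            return None  # no update: no negative cycle
    # an update in the last iteration: walk back far enough to be
    # inside the cycle of predecessor pointers, then extract it
    v = last
    for _ in range(len(vertices)):
        v = pred[v]
    cycle, u = [v], pred[v]
    while u != v:
        cycle.append(u)
        u = pred[u]
    return cycle
\end{verbatim}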

\subsubsection*{Metrics}

We can display the node and zone utilization ratios, obtained by dividing the flow passing through them by their outgoing capacity. In particular, we can pinpoint saturated nodes and zones (i.e. used at their full potential).

We can display the distance to the previous assignment, and the number of partition transfers.

\section{Properties of an optimal 3-strict assignment}

\subsection{Optimal assignment}
\label{sec:opt_assign}

For every zone $z\in Z$, define the zone capacity $c_z = \sum_{v, z_v=z} c_v$ and define $C = \sum_v c_v = \sum_z c_z$.

One can check that the best we could do to maximize $s^*$ would be to use the nodes proportionally to their capacity. This would yield $s^*=C/(3N)$. This is not possible because of (i) the redundancy constraints and (ii) integer rounding, but it gives an upper bound.

\subsubsection*{Optimal utilization}

We call a \emph{utilization} a collection of non-negative integers $(n_v)_{v\in V}$ such that $\sum_v n_v = 3N$ and for every zone $z$, $\sum_{v\in z} n_v \le N$. We call such a utilization \emph{optimal} if it maximizes $s^*$.

We start by computing a node sub-utilization $(\hat{n}_v)_{v\in V}$ such that for every zone $z$, $\sum_{v\in z} \hat{n}_v \le N$, and we show that there is an optimal utilization respecting the constraints and such that $\hat{n}_v \le n_v$ for every node.

Assume that there is a zone $z_0$ such that $c_{z_0}/C \ge 1/3$. Then for any $v\in z_0$, we define
$$\hat{n}_v = \left\lfloor\frac{c_v}{c_{z_0}}N\right\rfloor.$$
This choice ensures for any such $v$ that
$$
\frac{c_v}{\hat{n}_v} \ge \frac{c_{z_0}}{N} \ge \frac{C}{3N},
$$
which is the universal upper bound on $s^*$. Hence any optimal utilization $(n_v)$ can be modified into another optimal utilization such that $n_v\ge \hat{n}_v$.

Because $z_0$ cannot store more than $N$ partition occurrences, in any assignment, at least $2N$ partitions must be assigned to the zones $Z\setminus\{z_0\}$. Let $C_0 = C-c_{z_0}$. Suppose that there exists a zone $z_1\neq z_0$ such that $c_{z_1}/C_0 \ge 1/2$. Then, with the same argument as for $z_0$, we can define
$$\hat{n}_v = \left\lfloor\frac{c_v}{c_{z_1}}N\right\rfloor$$
for every $v\in z_1$.

Now we can assign the remaining partitions. Let $(\hat{N}, \hat{C})$ be
\begin{itemize}
	\item $(3N,C)$ if we did not find any $z_0$;
	\item $(2N,C-c_{z_0})$ if there was a $z_0$ but no $z_1$;
	\item $(N,C-c_{z_0}-c_{z_1})$ if there was a $z_0$ and a $z_1$.
\end{itemize}
Then at least $\hat{N}$ partitions must be spread among the remaining zones. Hence $s^*$ is upper bounded by $\hat{C}/\hat{N}$ and, without loss of generality, we can define, for every node that is neither in $z_0$ nor in $z_1$,
$$\hat{n}_v = \left\lfloor\frac{c_v}{\hat{C}}\hat{N}\right\rfloor.$$

We constructed a sub-utilization $\hat{n}_v$. Now notice that $3N-\sum_v \hat{n}_v \le \# V$ where $\# V$ denotes the number of nodes. We can iteratively pick a node $v^*$ such that
\begin{itemize}
	\item $\sum_{v\in z_{v^*}} \hat{n}_v < N$ where $z_{v^*}$ is the zone of $v^*$;
	\item $v^*$ maximizes the quantity $c_v/(\hat{n}_v+1)$ among the vertices satisfying the first condition (i.e. not in a saturated zone).
\end{itemize}
We iterate these instructions until $\sum_v \hat{n}_v= 3N$, and at this stage we define $(n_v) = (\hat{n}_v)$. It is easy to prove by induction that at every step, there is an optimal utilization that is pointwise larger than $(\hat{n}_v)$; in particular, $(n_v)$ is optimal.

\subsubsection*{Existence of an optimal assignment}

For now, the \emph{optimal utilization} that we obtained is just a vector of numbers, and it is not clear that it can be realized as the utilization of some concrete assignment. Here is a way to get a concrete assignment.

Define $3N$ tokens $t_1,\ldots, t_{3N}\in V$ as follows:
\begin{itemize}
	\item enumerate the zones $z$ of $Z$ in any order;
	\item enumerate the nodes $v$ of $z$ in any order;
	\item repeat the token $v$ $n_v$ times.
\end{itemize}
Then for $1\le i \le N$, define the triplet $T_i$ to be
$(t_i, t_{i+N}, t_{i+2N})$. Since the same nodes of a zone appear contiguously, the three nodes of a triplet must belong to three distinct zones.
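
As an illustration, here is a minimal Python sketch of this token construction; the zone constraint $n_z \le N$ is what guarantees that the three tokens of a triplet land in distinct zones.

\begin{verbatim}
def tokens_to_triplets(zones_nodes, n, N):
    """zones_nodes: zone -> list of its nodes; n: node -> n_v."""
    tokens = []
    for z, nodes in zones_nodes.items():  # zones in any order
        for v in nodes:                   # nodes of z in any order
            tokens += [v] * n[v]          # repeat the token n_v times
    assert len(tokens) == 3 * N
    # nodes of a zone are contiguous and each zone holds at most N
    # tokens, so t_i, t_{i+N}, t_{i+2N} lie in three distinct zones
    return [(tokens[i], tokens[i + N], tokens[i + 2 * N])
            for i in range(N)]
\end{verbatim}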

However simple, this way to go from a utilization to an assignment has the drawback of not spreading the triplets: a node will tend to be associated with the same two other nodes for many partitions. Hence, during data transfer, it will tend to use only two links, instead of spreading the bandwidth use over many links to other nodes. To achieve this spreading, we will reframe the search for an assignment as a flow problem, and in the flow algorithm we will introduce randomness in the order of exploration. This will be sufficient to obtain a good dispersion of the triplets.

\begin{figure}
	\centering
	\includegraphics[width=0.9\linewidth]{figures/naive}
	\caption{On the left, the creation of a concrete assignment with the naive approach of repeating tokens. On the right, the zones containing the nodes.}
\end{figure}

\subsubsection*{Assignment as a maximum flow problem}

We describe the flow problem via its graph $(X,E)$ where $X$ is a set of vertices, and $E$ are directed weighted edges between the vertices. For every zone $z$, define $n_z=\sum_{v\in z} n_v$.

The set of vertices $X$ contains the source $\mathbf{s}$ and the sink $\mathbf{t}$; a vertex $\mathbf{x}_z$ for every zone $z\in Z$, and a vertex $\mathbf{y}_i$ for every partition index $1\le i\le N$.

The set of edges $E$ contains
\begin{itemize}
	\item the edge $(\mathbf{s}, \mathbf{x}_z, n_z)$ for every zone $z\in Z$;
	\item the edge $(\mathbf{x}_z, \mathbf{y}_i, 1)$ for every zone $z\in Z$ and partition $1\le i\le N$;
	\item the edge $(\mathbf{y}_i, \mathbf{t}, 3)$ for every partition $1\le i\le N$.
\end{itemize}

\begin{figure}[b]
	\centering
	\includegraphics[width=0.6\linewidth]{figures/flow}
	\caption{Flow problem to compute an optimal assignment.}
\end{figure}

We first show the equivalence between this problem and the construction of an assignment. Given some optimal assignment $(n_v)$, define the flow $f:E\to \mathbb{N}$ that saturates every edge from $\mathbf{s}$ or to $\mathbf{t}$, takes value $1$ on the edge between $\mathbf{x}_z$ and $\mathbf{y}_i$ if partition $i$ is stored in some node of the zone $z$, and $0$ otherwise. One can easily check that $f$ thus defined is indeed a flow and is maximum.

Reciprocally, by the existence of maximum flows constructed from optimal assignments, any maximum flow must saturate the edges linked to the source or the sink. It can only take value 0 or 1 on the other edges, and every partition vertex is associated to exactly three distinct zone vertices. Every zone is associated to exactly $n_z$ partitions.

A maximum flow can be constructed using, for instance, Dinic's algorithm. This algorithm works by discovering augmenting paths to iteratively increase the flow. During the exploration of the graph to find augmenting paths, we can shuffle the order of enumeration of the neighbours to spread the associations between zones and partitions.

Once we have such an association, we can randomly distribute the $n_z$ edges picked for every zone $z$ to its nodes $v\in z$ such that every such $v$ gets $n_v$ edges. This defines an optimal assignment of partitions to nodes.
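
A hypothetical sketch of this final distribution step (the names are ours; \texttt{zone\_partitions} maps each zone to the $n_z$ partitions the flow assigned to it):

\begin{verbatim}
import random

def distribute_to_nodes(zone_partitions, zones_nodes, n):
    """Hand each zone's n_z selected partitions to its nodes, giving
    exactly n_v of them to every node v, in a random order."""
    assignment = {}  # partition index -> list of nodes
    for z, parts in zone_partitions.items():
        slots = [v for v in zones_nodes[z] for _ in range(n[v])]
        random.shuffle(slots)  # n_z slots, matching the n_z partitions
        for i, v in zip(parts, slots):
            assignment.setdefault(i, []).append(v)
    return assignment
\end{verbatim}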

\subsection{Minimal transfer}

Assume that there was a previous assignment $(T'_i)_{1\le i\le N}$ corresponding to utilizations $(n'_v)_{v\in V}$. We would like the new assignment $(T_i)_{1\le i\le N}$, computed from some utilization $(n_v)_{v\in V}$, to minimize the number of partitions that need to be transferred. We can imagine two different objectives corresponding to different hypotheses:

\begin{equation}
	\tag{H3A}
	\label{hyp:A}
	\text{\emph{Transfers between different zones cost much more than inside a zone.}}
\end{equation}
\begin{equation}
	\tag{H3B}
	\label{hyp:B}
	\text{\emph{Changing zone is not the largest cost when transferring a partition.}}
\end{equation}

In case \eqref{hyp:A}, our goal will be to minimize the number of changes of zone in the assignment of partitions to zones. More formally, we will maximize the quantity
$$
Q_Z :=
\sum_{1\le i\le N}
\#\{z\in Z ~|~ z\cap T_i \neq \emptyset, z\cap T'_i \neq \emptyset \}
.$$

In case \eqref{hyp:B}, our goal will be to minimize the number of changes of nodes in the assignment of partitions to nodes. We will maximize the quantity
$$
Q_V :=
\sum_{1\le i\le N} \#(T_i \cap T'_i).
$$

It is tempting to hope that there is a way to maximize both quantities, that having the least discrepancy in terms of nodes will lead to the least discrepancy in terms of zones. But this is actually wrong! We propose the following counter-example to convince the reader:

We consider eight nodes $a, a', b, c, d, d', e, e'$ belonging to five different zones $\{a,a'\}, \{b\}, \{c\}, \{d,d'\}, \{e, e'\}$. We take three partitions ($N=3$), that are originally assigned with some utilization $(n'_v)_{v\in V}$ as follows:
$$
T'_1=(a,b,c) \qquad
T'_2=(a',b,d) \qquad
T'_3=(b,c,e).
$$
This assignment, with updated utilizations $(n_v)_{v\in V}$, minimizes the number of zone changes:
$$
T_1=(d,b,c) \qquad
T_2=(a,b,d) \qquad
T_3=(b,c,e').
$$
This one, with the same utilization, minimizes the number of node changes:
$$
T_1=(a,b,c) \qquad
T_2=(e',b,d) \qquad
T_3=(b,c,d').
$$
One can check that in this case, it is impossible to minimize both the number of zone changes and the number of node changes.

Because of the redundancy constraint, we cannot use a greedy algorithm that just replaces nodes in the triplets to reach the new utilization: this could lead to a blocking situation, where there is still a hole to fill in a triplet but no available node satisfies the zone separation constraint. To circumvent this issue, we propose an algorithm based on finding cycles in a graph encoding of the assignment. As in Section \ref{sec:opt_assign}, we can explore the neighbours in a random order in the graph algorithms, to spread the triplet distribution.

\subsubsection{Minimizing the zone discrepancy}

First, notice that, given an assignment of partitions to \emph{zones}, it is easy to deduce an assignment to \emph{nodes} that minimizes the number of transfers for this zone assignment: for every zone $z$ and every node $v\in z$, pick in any way a set $P_v$ of partitions that were assigned to $v$ in $T'$ and to $z_v$ in $T$, with the cardinality of $P_v$ at most $n_v$. Once all these sets are chosen, complement the assignment to reach the right utilization for every node. If $\#P_v < n_v$, it means that all the partitions that could stay in $v$ (i.e. that were already in $v$ and are still assigned to its zone) do stay in $v$. If $\#P_v = n_v$, then $n_v$ partitions stay in $v$, which is the number of partitions that need to be in $v$ in the end. In both cases, we could not hope for better given the partition-to-zone assignment.

Our goal now is to find an assignment of partitions to zones that minimizes the number of zone transfers. To do so, we are going to represent an assignment as a graph.

Let $G_T=(X,E_T)$ be the directed weighted graph with vertices $(\mathbf{x}_i)_{1\le i\le N}$ and $(\mathbf{y}_z)_{z\in Z}$. For any $1\le i\le N$ and $z\in Z$, $E_T$ contains the arc:
\begin{itemize}
	\item $(\mathbf{x}_i, \mathbf{y}_z, +1)$, if $z$ appears in $T_i'$ and $T_i$;
	\item $(\mathbf{x}_i, \mathbf{y}_z, -1)$, if $z$ appears in $T_i$ but not in $T'_i$;
	\item $(\mathbf{y}_z, \mathbf{x}_i, -1)$, if $z$ appears in $T'_i$ but not in $T_i$;
	\item $(\mathbf{y}_z, \mathbf{x}_i, +1)$, if $z$ appears neither in $T'_i$ nor in $T_i$.
\end{itemize}
In other words, the orientation of the arc encodes whether partition $i$ is stored in zone $z$ in the assignment $T$, and the weight $\pm 1$ encodes whether this corresponds to what happens in the assignment $T'$.

\begin{figure}[t]
	\centering
	\begin{minipage}{.40\linewidth}
		\centering
		\includegraphics[width=.8\linewidth]{figures/mini_zone}
	\end{minipage}
	\begin{minipage}{.55\linewidth}
		\centering
		\includegraphics[width=.8\linewidth]{figures/mini_node}
	\end{minipage}
	\caption{On the left: the graph $G_T$ encoding an assignment to minimize the zone discrepancy. On the right: the graph $G_T$ encoding an assignment to minimize the node discrepancy.}
\end{figure}

Notice that at every partition, there are three outgoing arcs, and at every zone, there are $n_z$ incoming arcs. Moreover, if $w(e)$ is the weight of an arc $e$, define the weight of $G_T$ by
\begin{align*}
	w(G_T) := \sum_{e\in E} w(e) &= \#Z \times N - 4 \sum_{1\le i\le N} \#\{z\in Z ~|~ z\cap T_i = \emptyset, z\cap T'_i \neq \emptyset\} \\
	&=\#Z \times N - 4 \sum_{1\le i\le N} \big( 3- \#\{z\in Z ~|~ z\cap T_i \neq \emptyset, z\cap T'_i \neq \emptyset\} \big) \\
	&= (\#Z-12)N + 4 Q_Z.
\end{align*}
Hence maximizing $Q_Z$ is equivalent to maximizing $w(G_T)$.

Assume that there exists some assignment $T^*$ with the same utilization $(n_v)_{v\in V}$. Define $G_{T^*}$ similarly and consider the set $E_\mathrm{Diff} = E_T \setminus E_{T^*}$ of arcs that appear only in $G_T$. Since all vertices have the same number of incoming arcs in $G_T$ and $G_{T^*}$, the vertices of the graph $(X, E_\mathrm{Diff})$ must all have the same number of incoming and outgoing arcs. So $E_\mathrm{Diff}$ can be expressed as a union of disjoint cycles. Moreover, the arcs of $E_\mathrm{Diff}$ must appear in $E_{T^*}$ with reversed orientation and opposite weight. Hence, we have
$$
w(G_T) - w(G_{T^*}) = 2 \sum_{e\in E_\mathrm{Diff}} w(e).
$$
Hence, if $T$ is not optimal, there exists some $T^*$ with $w(G_T) < w(G_{T^*})$, and by the considerations above, there must exist a cycle in $E_\mathrm{Diff}$, and hence in $G_T$, with negative weight. If we reverse the edges and weights along this cycle, we obtain some graph. Since we did not change the incoming degree of any vertex, this is the graph encoding of some valid assignment $T^+$ such that $w(G_{T^+}) > w(G_T)$. We can iterate this operation until there is no other assignment $T^*$ with larger weight, that is until we obtain an optimal assignment.
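
The elementary operation is thus the reversal of a cycle in the encoding, which can be sketched as follows (arcs stored as a set of \texttt{(u, v, w)} triples; the helper name is ours):

\begin{verbatim}
def reverse_cycle(arcs, gamma):
    """Flip the orientation and the weight of every arc of the
    (negatively weighted) cycle gamma, in place."""
    for (u, v, w) in gamma:
        arcs.remove((u, v, w))
        arcs.add((v, u, -w))
\end{verbatim}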

\subsubsection{Minimizing the node discrepancy}

We will follow an approach similar to the one where we minimize the zone discrepancy. Here we will directly obtain a node assignment from a graph encoding.

Let $G_T=(X,E_T)$ be the directed weighted graph with vertices $(\mathbf{x}_i)_{1\le i\le N}$, $(\mathbf{y}_{z,i})_{z\in Z, 1\le i\le N}$ and $(\mathbf{u}_v)_{v\in V}$. For any $1\le i\le N$ and $z\in Z$, $E_T$ contains the arc:
\begin{itemize}
	\item $(\mathbf{x}_i, \mathbf{y}_{z,i}, 0)$, if $z$ appears in $T_i$;
	\item $(\mathbf{y}_{z,i}, \mathbf{x}_i, 0)$, if $z$ does not appear in $T_i$.
\end{itemize}
For any $1\le i\le N$ and $v\in V$, $E_T$ contains the arc:
\begin{itemize}
	\item $(\mathbf{y}_{z_v,i}, \mathbf{u}_v, +1)$, if $v$ appears in $T_i'$ and $T_i$;
	\item $(\mathbf{y}_{z_v,i}, \mathbf{u}_v, -1)$, if $v$ appears in $T_i$ but not in $T'_i$;
	\item $(\mathbf{u}_v, \mathbf{y}_{z_v,i}, -1)$, if $v$ appears in $T'_i$ but not in $T_i$;
	\item $(\mathbf{u}_v, \mathbf{y}_{z_v,i}, +1)$, if $v$ appears neither in $T'_i$ nor in $T_i$.
\end{itemize}
Every vertex $\mathbf{x}_i$ has outgoing degree 3, every vertex $\mathbf{y}_{z,i}$ has outgoing degree 1, and every vertex $\mathbf{u}_v$ has incoming degree $n_v$.
Remark that any graph respecting these degree constraints is the encoding of a valid assignment with utilizations $(n_v)_{v\in V}$; in particular, no partition is stored in two nodes of the same zone.

We define $w(G_T)$ similarly:
\begin{align*}
	w(G_T) := \sum_{e\in E_T} w(e) &= \#V \times N - 4\sum_{1\le i\le N} \big( 3-\#(T_i\cap T'_i) \big) \\
	&= (\#V-12)N + 4Q_V.
\end{align*}

Exactly like in the previous section, the existence of an assignment with larger weight implies the existence of a negatively weighted cycle in $G_T$. Reversing this cycle gives us the encoding of a valid assignment with a larger weight. Iterating this operation yields an optimal assignment.

\subsubsection{Linear combination of both criteria}

In the graph $G_T$ defined in the previous section, instead of having weights $0$ and $\pm 1$, we could have weights $\pm\alpha$ between $\mathbf{x}$ and $\mathbf{y}$ vertices, and weights $\pm\beta$ between $\mathbf{y}$ and $\mathbf{u}$ vertices, for some $\alpha,\beta>0$ (the weight is positive if the assignment corresponds to $T'$ and negative otherwise). Then
\begin{align*}
	w(G_T) &= \sum_{e\in E_T} w(e) =
	\alpha \big( (\#Z-12)N + 4 Q_Z\big) +
	\beta \big( (\#V-12)N + 4 Q_V\big) \\
	&= \mathrm{const}+ 4(\alpha Q_Z + \beta Q_V).
\end{align*}
So maximizing the weight of such a graph encoding would be equivalent to maximizing a linear combination of $Q_Z$ and $Q_V$.

\subsection{Algorithm}
We give a high-level description of the algorithm to compute an optimal 3-strict assignment. The operations appearing at lines 1, 2 and 4 are respectively described by Algorithms \ref{alg:util}, \ref{alg:opt} and \ref{alg:mini}.

\begin{algorithm}[H]
	\caption{Optimal 3-strict assignment}
	\label{alg:total}
	\begin{algorithmic}[1]
		\Function{Optimal 3-strict assignment}{$N$, $(c_v)_{v\in V}$, $T'$}
		\State $(n_v)_{v\in V} \leftarrow$ \Call{Compute optimal utilization}{$N$, $(c_v)_{v\in V}$}
		\State $(T_i)_{1\le i\le N} \leftarrow$ \Call{Compute candidate assignment}{$N$, $(n_v)_{v\in V}$}
		\If {there was a previous assignment $T'$}
			\State $(T_i)_{1\le i\le N} \leftarrow$ \Call{Minimization of transfers}{$(T_i)_{1\le i\le N}$, $(T'_i)_{1\le i\le N}$}
		\EndIf
		\State \Return $(T_i)_{1\le i\le N}$.
		\EndFunction
	\end{algorithmic}
\end{algorithm}

We give some considerations of worst-case complexity for these algorithms. In the following, we assume $N>\#V>\#Z$. The complexity of Algorithm \ref{alg:total} is $O(N^3\# Z)$ if we assume \eqref{hyp:A} and $O(N^3 \#Z \#V)$ if we assume \eqref{hyp:B}.

Algorithm \ref{alg:util} can be implemented with complexity $O(\#V^2)$. The complexity of the function call at line \ref{lin:subutil} is $O(\#V)$. The difference between the sum of the subutilizations and $3N$ is at most the sum of the rounding errors when computing the $\hat{n}_v$. Hence it is bounded by $\#V$ and the loop at line \ref{lin:loopsub} is iterated at most $\#V$ times. Finding the maximizing $v$ at line \ref{lin:findmin} takes $O(\#V)$ operations (naively; we could also use a heap).

Algorithm \ref{alg:opt} can be implemented with complexity $O(N^3\times \#Z)$. The flow graph has $O(N+\#Z)$ vertices and $O(N\times \#Z)$ edges. Dinic's algorithm has complexity $O(\#\mathrm{Vertices}^2\#\mathrm{Edges})$, hence in our case it is $O(N^3\times \#Z)$.

Algorithm \ref{alg:mini} can be implemented with complexity $O(N^3\# Z)$ under \eqref{hyp:A} and $O(N^3 \#Z \#V)$ under \eqref{hyp:B}.
The graph $G_T$ has $O(N)$ vertices and $O(N\times \#Z)$ edges under assumption \eqref{hyp:A}, and respectively $O(N\times \#Z)$ vertices and $O(N\times \#V)$ edges under assumption \eqref{hyp:B}. The loop at line \ref{lin:repeat} is iterated at most $N$ times since the distance between $T$ and $T'$ decreases at every iteration. The Bellman-Ford algorithm has complexity $O(\#\mathrm{Vertices}\#\mathrm{Edges})$, which in our case amounts to $O(N^2\# Z)$ under \eqref{hyp:A} and $O(N^2 \#Z \#V)$ under \eqref{hyp:B}.

\begin{algorithm}
	\caption{Computation of the optimal utilization}
	\label{alg:util}
	\begin{algorithmic}[1]
		\Function{Compute optimal utilization}{$N$, $(c_v)_{v\in V}$}
		\State $(\hat{n}_v)_{v\in V} \leftarrow $ \Call{Compute subutilization}{$N$, $(c_v)_{v\in V}$} \label{lin:subutil}
		\While{$\sum_{v\in V} \hat{n}_v < 3N$} \label{lin:loopsub}
			\State Pick $v\in V$ maximizing $\frac{c_v}{\hat{n}_v+1}$ and such that
			$\sum_{v'\in z_v} \hat{n}_{v'} < N$ \label{lin:findmin}
			\State $\hat{n}_v \leftarrow \hat{n}_v+1$
		\EndWhile
		\State \Return $(\hat{n}_v)_{v\in V}$
		\EndFunction
		\State

		\Function{Compute subutilization}{$N$, $(c_v)_{v\in V}$}
		\State $R \leftarrow 3$
		\For{$v\in V$}
			\State $\hat{n}_v \leftarrow \mathrm{unset}$
		\EndFor
		\For{$z\in Z$}
			\State $c_z \leftarrow \sum_{v\in z} c_v$
		\EndFor
		\State $C \leftarrow \sum_{z\in Z} c_z$
		\While{$\exists z \in Z$ such that $R\times c_{z} > C$}
			\For{$v\in z$}
				\State $\hat{n}_v \leftarrow \left\lfloor \frac{c_v}{c_z} N \right\rfloor$
			\EndFor
			\State $C \leftarrow C-c_z$
			\State $R\leftarrow R-1$
		\EndWhile
		\For{$v\in V$}
			\If{$\hat{n}_v = \mathrm{unset}$}
				\State $\hat{n}_v \leftarrow \left\lfloor \frac{Rc_v}{C} N \right\rfloor$
			\EndIf
		\EndFor
		\State \Return $(\hat{n}_v)_{v\in V}$
		\EndFunction
	\end{algorithmic}
\end{algorithm}
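
For concreteness, here is a hypothetical Python transcription of Algorithm \ref{alg:util}; it applies the capping rule to the largest zones first, which is one valid way to implement the existential test of the While loop.

\begin{verbatim}
def compute_subutilization(N, c, zones_nodes):
    """c: node -> capacity; zones_nodes: zone -> list of its nodes."""
    n_hat = {}
    c_z = {z: sum(c[v] for v in vs) for z, vs in zones_nodes.items()}
    C, R = sum(c_z.values()), 3
    for z in sorted(c_z, key=c_z.get, reverse=True):
        if R * c_z[z] > C:  # zone too large: cap it at N occurrences
            for v in zones_nodes[z]:
                n_hat[v] = c[v] * N // c_z[z]
            C -= c_z[z]
            R -= 1
    for vs in zones_nodes.values():
        for v in vs:
            if v not in n_hat:  # remaining nodes share R*N occurrences
                n_hat[v] = R * c[v] * N // C
    return n_hat

def compute_optimal_utilization(N, c, zones_nodes):
    n_hat = compute_subutilization(N, c, zones_nodes)
    zone_of = {v: z for z, vs in zones_nodes.items() for v in vs}
    while sum(n_hat.values()) < 3 * N:
        # among nodes of non-saturated zones, pick the node
        # maximizing c_v / (n_hat_v + 1)
        cands = [v for v in c
                 if sum(n_hat[u] for u in zones_nodes[zone_of[v]]) < N]
        v_star = max(cands, key=lambda v: c[v] / (n_hat[v] + 1))
        n_hat[v_star] += 1
    return n_hat
\end{verbatim}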

\begin{algorithm}
	\caption{Computation of a candidate assignment}
	\label{alg:opt}
	\begin{algorithmic}[1]
		\Function{Compute candidate assignment}{$N$, $(n_v)_{v\in V}$}
		\State Compute the flow graph $G$
		\State Compute the maximal flow $f$ using Dinic's algorithm with randomized neighbours enumeration
		\State Construct the assignment $(T_i)_{1\le i\le N}$ from $f$
		\State \Return $(T_i)_{1\le i\le N}$
		\EndFunction
	\end{algorithmic}
\end{algorithm}

\begin{algorithm}
	\caption{Minimization of the number of transfers}
	\label{alg:mini}
	\begin{algorithmic}[1]
		\Function{Minimization of transfers}{$(T_i)_{1\le i\le N}$, $(T'_i)_{1\le i\le N}$}
		\State Construct the graph encoding $G_T$
		\Repeat \label{lin:repeat}
			\State Find a negative cycle $\gamma$ using the Bellman-Ford algorithm on $G_T$
			\State Reverse the orientations and weights of the edges in $\gamma$
		\Until{no negative cycle is found}
		\State Update $(T_i)_{1\le i\le N}$ from $G_T$
		\State \Return $(T_i)_{1\le i\le N}$
		\EndFunction
	\end{algorithmic}
\end{algorithm}

\newpage

\section{Computation of a 3-non-strict assignment}

\subsection{Choices of optimality}

In this mode, we primarily want to store every partition on three nodes, and only secondarily try to spread the nodes among different zones. So we make the choice of not taking the zone repartition into account in the criterion of optimality.

We try to maximize $s^*$ defined in \eqref{eq:optimal}. So we can compute the optimal utilizations $(n_v)_{v\in V}$ with the only constraint that $n_v \le N$ for every node $v$. As in the previous section, we start with a sub-utilization proportional to $c_v$ (and capped at $N$), and we iteratively increase the $\hat{n}_v$ that is less than $N$ and maximizes the quantity $c_v/(\hat{n}_v+1)$, until the total sum is $3N$.

\subsection{Computation of a candidate assignment}

To compute a candidate assignment (that does not yet optimize zone spreading nor distance to a previous assignment), we can use the following flow problem.

Define the oriented weighted graph $(X,E)$. The set of vertices $X$ contains the source $\mathbf{s}$, the sink $\mathbf{t}$, vertices
$\mathbf{x}_p, \mathbf{u}^+_p, \mathbf{u}^-_p$ for every partition $p$, vertices $\mathbf{y}_{p,z}$ for every partition $p$ and zone $z$, and vertices $\mathbf{z}_v$ for every node $v$.

The set of edges is composed of the following arcs:
\begin{itemize}
	\item ($\mathbf{s}$,$\mathbf{x}_p$, 3) for every partition $p$;
	\item ($\mathbf{x}_p$,$\mathbf{u}^+_p$, 3) for every partition $p$;
	\item ($\mathbf{x}_p$,$\mathbf{u}^-_p$, 2) for every partition $p$;
	\item ($\mathbf{u}^+_p$,$\mathbf{y}_{p,z}$, 1) for every partition $p$ and zone $z$;
	\item ($\mathbf{u}^-_p$,$\mathbf{y}_{p,z}$, 2) for every partition $p$ and zone $z$;
	\item ($\mathbf{y}_{p,z}$,$\mathbf{z}_v$, 1) for every partition $p$, zone $z$ and node $v\in z$;
	\item ($\mathbf{z}_v$, $\mathbf{t}$, $n_v$) for every node $v$.
\end{itemize}
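
In the same hypothetical style as the earlier \texttt{build\_graph} sketch, this construction reads:

\begin{verbatim}
def build_graph_nonstrict(n, node_zone, P):
    """n: node -> n_v from the optimal utilization; P partitions."""
    zones = set(node_zone.values())
    cap = {}  # (u, v) -> capacity of the arc u -> v
    for p in range(P):
        cap[("s", ("x", p))] = 3
        cap[(("x", p), ("u+", p))] = 3
        cap[(("x", p), ("u-", p))] = 2
        for z in zones:
            cap[(("u+", p), ("y", p, z))] = 1
            cap[(("u-", p), ("y", p, z))] = 2
    for v, z in node_zone.items():
        for p in range(P):
            cap[(("y", p, z), ("z", v))] = 1
        cap[(("z", v), "t")] = n[v]
    return cap
\end{verbatim}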

One can check that any maximal flow in this graph corresponds to an assignment of partitions to nodes. In such a flow, all the arcs from $\mathbf{s}$ and to $\mathbf{t}$ are saturated. The arc from $\mathbf{y}_{p,z}$ to $\mathbf{z}_v$ is saturated if and only if $p$ is associated to~$v$.
Finally the flow from $\mathbf{x}_p$ to $\mathbf{y}_{p,z}$ can go either through $\mathbf{u}^+_p$ or $\mathbf{u}^-_p$.
\subsection{Maximal spread and minimal transfers}
|
||||
Notice that if the arc $\mathbf{u}_p^+\mathbf{y}_{p,z}$ is not saturated but there is some flow in $\mathbf{u}_p^-\mathbf{y}_{p,z}$, then it is possible to transfer a unit of flow from the path $\mathbf{x}_p\mathbf{u}_p^-\mathbf{y}_{p,z}$ to the path $\mathbf{x}_p\mathbf{u}_p^+\mathbf{y}_{p,z}$. So we can always find an equivalent maximal flow $f^*$ that uses the path through $\mathbf{u}_p^-$ only if the path through $\mathbf{u}_p^+$ is saturated.
|
||||
|
||||
We will use this fact to consider the amount of flow going through the vertices $\mathbf{u}^+$ as a measure of how well the partitions are spread over nodes belonging to different zones. If the partition $p$ is associated to 3 different zones, then a flow of 3 will cross $\mathbf{u}_p^+$ in $f^*$ (i.e. a flow of 0 will cross $\mathbf{u}_p^+$). If $p$ is associated to two zones, a flow of $2$ will cross $\mathbf{u}_p^+$. If $p$ is associated to a single zone, a flow of $1$ will cross $\mathbf{u}_p^+$.
|
||||
|
||||
Let $N_1, N_2, N_3$ be the number of partitions associated to respectively 1,2 and 3 distinct zones. We will optimize a linear combination of these variables using the discovery of positively weighted circuits in a graph.

Let $N_1, N_2, N_3$ be the numbers of partitions associated to respectively 1, 2 and 3 distinct zones. We will optimize a linear combination of these variables using the discovery of positively weighted circuits in a graph.

At the same step, we will also optimize the distance to a previous assignment $T'$. Let $\alpha> \beta> \gamma \ge 0$ be three parameters.

Given the flow $f$, let $G_f=(X',E_f)$ be the multi-graph where $X' = X\setminus\{\mathbf{s},\mathbf{t}\}$. The set $E_f$ is composed of the arcs:
\begin{itemize}
	\item as many arcs from $(\mathbf{x}_p, \mathbf{u}^+_p,\alpha), (\mathbf{x}_p, \mathbf{u}^+_p,\beta), (\mathbf{x}_p, \mathbf{u}^+_p,\gamma)$ (selected in this order) as there is flow crossing $\mathbf{u}^+_p$ in $f$;
	\item as many arcs from $(\mathbf{u}^+_p, \mathbf{x}_p,-\gamma), (\mathbf{u}^+_p, \mathbf{x}_p,-\beta), (\mathbf{u}^+_p, \mathbf{x}_p,-\alpha)$ (selected in this order) as there is flow crossing $\mathbf{u}^-_p$ in $f$;
	\item as many copies of $(\mathbf{x}_p, \mathbf{u}^-_p,0)$ as there is flow through $\mathbf{u}^-_p$;
	\item as many copies of $(\mathbf{u}^-_p,\mathbf{x}_p,0)$ so that the total number of arcs between these two vertices is 2;
	\item $(\mathbf{u}^+_p,\mathbf{y}_{p,z}, 0)$ if the flow between these vertices is 1, and the opposite arc otherwise;
	\item as many copies of $(\mathbf{u}^-_p,\mathbf{y}_{p,z}, 0)$ as the flow between these vertices, and as many copies of the opposite arc as 2~$-$~the flow;
	\item $(\mathbf{y}_{p,z},\mathbf{z}_v, \pm1)$ if it is saturated in $f$, with $+1$ if $v\in T'_p$ and $-1$ otherwise;
	\item $(\mathbf{z}_v,\mathbf{y}_{p,z}, \pm1)$ if it is not saturated in $f$, with $+1$ if $v\notin T'_p$ and $-1$ otherwise.
\end{itemize}
To summarize, arcs are oriented left to right if they correspond to a presence of flow in $f$, and right to left if they correspond to an absence of flow. They are positively weighted if we want them to stay in their current state, and negatively if we want them to switch. Let us compute the weight of such a graph.

\begin{multline*}
	w(G_f) = \sum_{e\in E_f} w(e) \\
	=
	(\alpha - \beta -\gamma) N_1 + (\alpha +\beta - \gamma) N_2 + (\alpha+\beta+\gamma) N_3
	\\ +
	\#V\times N - 4 \sum_p \big( 3-\#(T_p\cap T'_p) \big) \\
	=(\#V-12+\alpha-\beta-\gamma)\times N + 4Q_V + 2\beta N_2 + 2(\beta+\gamma) N_3
\end{multline*}

As for the 3-strict mode, one can check that the difference of two such graphs corresponding to the same $(n_v)$ is always Eulerian. Hence we can navigate in this class with the same greedy algorithm that discovers positive cycles and flips them.

The function that we optimize is
$$
2Q_V + \beta N_2 + (\beta+\gamma) N_3.
$$
The choice of the parameters $\beta$ and $\gamma$ should be led by the following questions: for $\beta$, where to put the tradeoff between zone dispersion and distance to the previous configuration? For $\gamma$, do we prefer to have more partitions spread between 2 zones, or fewer partitions spread between at least 2 zones but more between 3 zones?

The quantity $Q_V$ varies between $0$ and $3N$, so it should be of order $N$. The quantity $N_2+N_3$ should also be of order $N$ (it is exactly $N$ in the strict mode). So the two terms of the function are comparable.

\bibliography{optimal_layout}
\bibliographystyle{ieeetr}

\end{document}
BIN doc/sticker/Garage_NGI.pdf (new file)
BIN doc/sticker/Garage_NGI.png (new file)
doc/sticker/Garage_NGI.svg (new file)

doc/talks/.envrc (new file)
@@ -0,0 +1 @@
use_nix

doc/talks/.gitignore (new file, vendored)
@@ -0,0 +1 @@
.direnv/

doc/talks/2022-11-19-Capitole-du-Libre/.gitignore (new file, vendored)
@@ -0,0 +1,10 @@
*.aux
*.bbl
*.blg
*.log
*.nav
*.out
*.snm
*.synctex.gz
*.toc
*.dvi

doc/talks/2022-11-19-Capitole-du-Libre/Makefile (new file)
@@ -0,0 +1,8 @@
all:
	pdflatex présentation.tex

clean:
	rm -f *.aux *.bbl *.blg *.log *.nav *.out *.snm *.synctex.gz *.toc *.dvi présentation.pdf

clean_sauf_pdf:
	rm -f *.aux *.bbl *.blg *.log *.nav *.out *.snm *.synctex.gz *.toc *.dvi