Merge branch 'main' into pnet_datalink-0.33.0

teutat3s 2023-03-13 13:59:42 +01:00
commit 8ad6efb338
Signed by untrusted user who does not match committer: teutat3s
GPG key ID: 18DAE600A6BBE705
14 changed files with 444 additions and 33 deletions


@@ -6,7 +6,7 @@ sort_by = "weight"
 +++

 A cookbook, when you cook, is a collection of recipes.
-Similarly, Garage's cookbook contains a collection of recipes that are known to works well!
+Similarly, Garage's cookbook contains a collection of recipes that are known to work well!
 This chapter could also be referred as "Tutorials" or "Best practices".

 - **[Multi-node deployment](@/documentation/cookbook/real-world.md):** This page will walk you through all of the necessary
@@ -16,6 +16,10 @@ This chapter could also be referred as "Tutorials" or "Best practices".
   source in case a binary is not provided for your architecture, or if you want to
   hack with us!

+- **[Binary packages](@/documentation/cookbook/binary-packages.md):** This page
+  lists the different platforms that provide ready-built software packages for
+  Garage.
+
 - **[Integration with Systemd](@/documentation/cookbook/systemd.md):** This page explains how to run Garage
   as a Systemd service (instead of as a Docker container).
@@ -26,6 +30,10 @@ This chapter could also be referred as "Tutorials" or "Best practices".
 - **[Configuring a reverse-proxy](@/documentation/cookbook/reverse-proxy.md):** This page explains how to configure a reverse-proxy to add TLS support to your S3 api endpoint.

+- **[Deploying on Kubernetes](@/documentation/cookbook/kubernetes.md):** This page explains how to deploy Garage on Kubernetes using our Helm chart.
+
+- **[Deploying with Ansible](@/documentation/cookbook/ansible.md):** This page lists available Ansible roles developed by the community to deploy Garage.
+
 - **[Monitoring Garage](@/documentation/cookbook/monitoring.md)** This page
   explains the Prometheus metrics available for monitoring the Garage
   cluster/nodes.


@@ -0,0 +1,51 @@
+++
title = "Deploying with Ansible"
weight = 35
+++

While Ansible is not officially supported as a way to deploy Garage, several community members
have published Ansible roles. We list and compare them below.

## Comparison of Ansible roles

| Feature | [ansible-role-garage](#zorun-ansible-role-garage) | [garage-docker-ansible-deploy](#moan0s-garage-docker-ansible-deploy) |
|------------------------------------|---------------------------------------------|---------------------------------------------------------------|
| **Runtime** | Systemd | Docker |
| **Target OS** | Any Linux | Any Linux |
| **Architecture** | amd64, arm64, i686 | amd64, arm64 |
| **Additional software** | None | Traefik |
| **Automatic node connection** | ❌ | ✅ |
| **Layout management** | ❌ | ✅ |
| **Manage buckets & keys** | ❌ | ✅ (basic) |
| **Allow custom Garage config** | ✅ | ❌ |
| **Facilitate Garage upgrades** | ✅ | ❌ |
| **Multiple instances on one host** | ✅ | ✅ |

## zorun/ansible-role-garage

[Source code](https://github.com/zorun/ansible-role-garage), [Ansible galaxy](https://galaxy.ansible.com/zorun/garage)

This role is deliberately simple: it relies on the official Garage static
binaries and only requires Systemd. As such, it should work on any
Linux-based OS.

To keep things flexible, the user has to provide a Garage configuration
template. This makes it possible to customize the Garage configuration in
any way.

Some more features might be added, such as a way to automatically connect
nodes to each other or to define a layout.

## moan0s/garage-docker-ansible-deploy

[Source code](https://github.com/moan0s/garage-docker-ansible-deploy), [Blog post](https://hyteck.de/post/garage/)

This role is based on the Docker image for Garage and comes with
"batteries included": it also installs Docker and Traefik. In addition,
it is "opinionated" in the sense that it expects a particular
deployment structure (one instance per disk, one gateway per host,
structured DNS names, etc.).

As a result, this role makes it easier to get started with Garage on Ansible,
but it is less flexible.


@@ -0,0 +1,28 @@
+++
title = "Binary packages"
weight = 11
+++

Garage is also available as binary packages on the following platforms:

## Alpine Linux

```bash
apk add garage
```

## Arch Linux

Garage is available in the [AUR](https://aur.archlinux.org/packages/garage).

## FreeBSD

```bash
pkg install garage
```

## NixOS

```bash
nix-shell -p garage
```

[Image file changed; text diff suppressed because one or more lines are too long. Size: 74 KiB before, 15 KiB after.]


[Image file changed. Size: 16 KiB before and after.]

[New image file added; text diff suppressed because one or more lines are too long. Size: 74 KiB.]


@@ -19,6 +19,7 @@ use opentelemetry::{
 };

 use garage_util::error::Error as GarageError;
+use garage_util::forwarded_headers;
 use garage_util::metrics::{gen_trace_id, RecordDuration};

 pub(crate) trait ApiEndpoint: Send + Sync + 'static {
@@ -126,15 +127,9 @@ impl<A: ApiHandler> ApiServer<A> {
     ) -> Result<Response<Body>, GarageError> {
         let uri = req.uri().clone();

-        let has_forwarded_for_header = req.headers().contains_key("x-forwarded-for");
-        if has_forwarded_for_header {
-            let forwarded_for_ip_addr = &req
-                .headers()
-                .get("x-forwarded-for")
-                .expect("Could not parse X-Forwarded-For header")
-                .to_str()
-                .unwrap_or_default();
+        if let Ok(forwarded_for_ip_addr) =
+            forwarded_headers::handle_forwarded_for_headers(&req.headers())
+        {
             info!(
                 "{} (via {}) {} {}",
                 forwarded_for_ip_addr,


@@ -152,6 +152,7 @@ impl BlockManager {
             tx_scrub_command: ArcSwapOption::new(None),
         });
         block_manager.endpoint.set_handler(block_manager.clone());
+        block_manager.scrub_persister.set_with(|_| ()).unwrap();

         block_manager
     }
@@ -185,6 +186,9 @@ impl BlockManager {
         vars.register_ro(&self.scrub_persister, "scrub-last-completed", |p| {
             p.get_with(|x| msec_to_rfc3339(x.time_last_complete_scrub))
         });
+        vars.register_ro(&self.scrub_persister, "scrub-next-run", |p| {
+            p.get_with(|x| msec_to_rfc3339(x.time_next_run_scrub))
+        });
         vars.register_ro(&self.scrub_persister, "scrub-corruptions_detected", |p| {
             p.get_with(|x| x.corruptions_detected)
         });


@@ -4,7 +4,7 @@ use std::sync::Arc;
 use std::time::Duration;

 use async_trait::async_trait;
-use serde::{Deserialize, Serialize};
+use rand::Rng;
 use tokio::fs;
 use tokio::select;
 use tokio::sync::mpsc;
@@ -19,8 +19,8 @@ use garage_util::tranquilizer::Tranquilizer;

 use crate::manager::*;

-// Full scrub every 30 days
-const SCRUB_INTERVAL: Duration = Duration::from_secs(3600 * 24 * 30);
+// Full scrub every 25 days with a random element of 10 days mixed in below
+const SCRUB_INTERVAL: Duration = Duration::from_secs(3600 * 24 * 25);
 // Scrub tranquility is initially set to 4, but can be changed in the CLI
 // and the updated version is persisted over Garage restarts
 const INITIAL_SCRUB_TRANQUILITY: u32 = 4;
@@ -161,6 +161,50 @@ impl Worker for RepairWorker {
 // and whose parameter (esp. speed) can be controlled at runtime.
 // ---- ---- ----

+mod v081 {
+    use serde::{Deserialize, Serialize};
+
+    #[derive(Serialize, Deserialize)]
+    pub struct ScrubWorkerPersisted {
+        pub tranquility: u32,
+        pub(crate) time_last_complete_scrub: u64,
+        pub(crate) corruptions_detected: u64,
+    }
+
+    impl garage_util::migrate::InitialFormat for ScrubWorkerPersisted {}
+}
+
+mod v082 {
+    use serde::{Deserialize, Serialize};
+
+    use super::v081;
+
+    #[derive(Serialize, Deserialize)]
+    pub struct ScrubWorkerPersisted {
+        pub tranquility: u32,
+        pub(crate) time_last_complete_scrub: u64,
+        pub(crate) time_next_run_scrub: u64,
+        pub(crate) corruptions_detected: u64,
+    }
+
+    impl garage_util::migrate::Migrate for ScrubWorkerPersisted {
+        type Previous = v081::ScrubWorkerPersisted;
+
+        fn migrate(old: v081::ScrubWorkerPersisted) -> ScrubWorkerPersisted {
+            use crate::repair::randomize_next_scrub_run_time;
+
+            ScrubWorkerPersisted {
+                tranquility: old.tranquility,
+                time_last_complete_scrub: old.time_last_complete_scrub,
+                time_next_run_scrub: randomize_next_scrub_run_time(old.time_last_complete_scrub),
+                corruptions_detected: old.corruptions_detected,
+            }
+        }
+    }
+}
+
+pub use v082::*;
+
 pub struct ScrubWorker {
     manager: Arc<BlockManager>,
     rx_cmd: mpsc::Receiver<ScrubWorkerCommand>,
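The `v081`/`v082` modules above follow Garage's versioned-persistence pattern: each on-disk format lives in its own module, the oldest implements `InitialFormat`, and each later version implements `Migrate` by naming its `Previous` type and providing a `migrate()` conversion. As a purely hypothetical illustration of how a further format change would chain onto this (there is no `v083` module or `blocks_scanned` field upstream; only the trait usage shown above is assumed):

```rust
mod v083 {
    use serde::{Deserialize, Serialize};

    use super::v082;

    #[derive(Serialize, Deserialize)]
    pub struct ScrubWorkerPersisted {
        pub tranquility: u32,
        pub(crate) time_last_complete_scrub: u64,
        pub(crate) time_next_run_scrub: u64,
        pub(crate) corruptions_detected: u64,
        // Hypothetical field introduced by this imaginary version.
        pub(crate) blocks_scanned: u64,
    }

    impl garage_util::migrate::Migrate for ScrubWorkerPersisted {
        // Each version only names the version directly before it.
        type Previous = v082::ScrubWorkerPersisted;

        fn migrate(old: v082::ScrubWorkerPersisted) -> ScrubWorkerPersisted {
            ScrubWorkerPersisted {
                tranquility: old.tranquility,
                time_last_complete_scrub: old.time_last_complete_scrub,
                time_next_run_scrub: old.time_next_run_scrub,
                corruptions_detected: old.corruptions_detected,
                blocks_scanned: 0,
            }
        }
    }
}
```

The `pub use v082::*;` re-export would then move to the newest module, so the rest of the worker keeps referring to a single `ScrubWorkerPersisted` type.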
@@ -171,17 +215,25 @@ pub struct ScrubWorker {
     persister: PersisterShared<ScrubWorkerPersisted>,
 }

-#[derive(Serialize, Deserialize)]
-pub struct ScrubWorkerPersisted {
-    pub tranquility: u32,
-    pub(crate) time_last_complete_scrub: u64,
-    pub(crate) corruptions_detected: u64,
+fn randomize_next_scrub_run_time(timestamp: u64) -> u64 {
+    // Take SCRUB_INTERVAL and mix in a random interval of 10 days to attempt to
+    // balance scrub load across different cluster nodes.
+    let next_run_timestamp = timestamp
+        + SCRUB_INTERVAL
+            .saturating_add(Duration::from_secs(
+                rand::thread_rng().gen_range(0..3600 * 24 * 10),
+            ))
+            .as_millis() as u64;
+
+    next_run_timestamp
 }

-impl garage_util::migrate::InitialFormat for ScrubWorkerPersisted {}
-
 impl Default for ScrubWorkerPersisted {
     fn default() -> Self {
         ScrubWorkerPersisted {
             time_last_complete_scrub: 0,
+            time_next_run_scrub: randomize_next_scrub_run_time(now_msec()),
             tranquility: INITIAL_SCRUB_TRANQUILITY,
             corruptions_detected: 0,
         }
@@ -279,12 +331,13 @@ impl Worker for ScrubWorker {
     }

     fn status(&self) -> WorkerStatus {
-        let (corruptions_detected, tranquility, time_last_complete_scrub) =
+        let (corruptions_detected, tranquility, time_last_complete_scrub, time_next_run_scrub) =
             self.persister.get_with(|p| {
                 (
                     p.corruptions_detected,
                     p.tranquility,
                     p.time_last_complete_scrub,
+                    p.time_next_run_scrub,
                 )
             });
@@ -302,10 +355,16 @@
                 s.freeform = vec![format!("Scrub paused, resumes at {}", msec_to_rfc3339(*rt))];
             }
             ScrubWorkerState::Finished => {
-                s.freeform = vec![format!(
-                    "Last scrub completed at {}",
-                    msec_to_rfc3339(time_last_complete_scrub)
-                )];
+                s.freeform = vec![
+                    format!(
+                        "Last scrub completed at {}",
+                        msec_to_rfc3339(time_last_complete_scrub),
+                    ),
+                    format!(
+                        "Next scrub scheduled for {}",
+                        msec_to_rfc3339(time_next_run_scrub)
+                    )
+                ];
             }
         }
         s
@@ -334,8 +393,10 @@
                     .tranquilizer
                     .tranquilize_worker(self.persister.get_with(|p| p.tranquility)))
             } else {
-                self.persister
-                    .set_with(|p| p.time_last_complete_scrub = now_msec())?;
+                self.persister.set_with(|p| {
+                    p.time_last_complete_scrub = now_msec();
+                    p.time_next_run_scrub = randomize_next_scrub_run_time(now_msec());
+                })?;
                 self.work = ScrubWorkerState::Finished;
                 self.tranquilizer.clear();
                 Ok(WorkerState::Idle)
@@ -350,8 +411,7 @@
             ScrubWorkerState::Running(_) => return WorkerState::Busy,
             ScrubWorkerState::Paused(_, resume_time) => (*resume_time, ScrubWorkerCommand::Resume),
             ScrubWorkerState::Finished => (
-                self.persister.get_with(|p| p.time_last_complete_scrub)
-                    + SCRUB_INTERVAL.as_millis() as u64,
+                self.persister.get_with(|p| p.time_next_run_scrub),
                 ScrubWorkerCommand::Start,
             ),
         };
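The `randomize_next_scrub_run_time` function added above keeps the expected scrub period at roughly 30 days (25 days plus, on average, 5 days of uniform jitter) while desynchronizing scrubs across cluster nodes. A quick sanity check of the window it produces could look like the sketch below; this test is not part of the commit and assumes it sits in the same module (`crate::repair`) so it can reach the private function:

```rust
#[cfg(test)]
mod scrub_schedule_test {
    use super::*;

    #[test]
    fn next_scrub_run_time_stays_in_window() {
        // Arbitrary reference timestamp, in milliseconds like now_msec().
        let t: u64 = 1_600_000_000_000;
        let next = randomize_next_scrub_run_time(t);

        // SCRUB_INTERVAL is 25 days and the random offset is strictly less
        // than 10 days, so the next run must fall in [t + 25 days, t + 35 days).
        let day_ms: u64 = 24 * 3600 * 1000;
        assert!(next >= t + 25 * day_ms);
        assert!(next < t + 35 * day_ms);
    }
}
```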


@@ -136,9 +136,16 @@ impl Garage {
                 env_builder.flag(heed::flags::Flags::MdbNoSync);
                 env_builder.flag(heed::flags::Flags::MdbNoMetaSync);
             }
-            let db = env_builder
-                .open(&db_path)
-                .ok_or_message("Unable to open LMDB DB")?;
+            let db = match env_builder.open(&db_path) {
+                Err(heed::Error::Io(e)) if e.kind() == std::io::ErrorKind::OutOfMemory => {
+                    return Err(Error::Message(
+                        "OutOfMemory error while trying to open LMDB database. This can happen \
+                        if your operating system is not allowing you to use sufficient virtual \
+                        memory address space. Please check that no limit is set (ulimit -v). \
+                        On 32-bit machines, you should probably switch to another database engine.".into()))
+                }
+                x => x.ok_or_message("Unable to open LMDB DB")?,
+            };
             db::lmdb_adapter::LmdbDb::init(db)
         }

         #[cfg(not(feature = "lmdb"))]


@@ -0,0 +1,63 @@
use http::{HeaderMap, HeaderValue};
use std::net::IpAddr;
use std::str::FromStr;

use crate::error::{Error, OkOrMessage};

pub fn handle_forwarded_for_headers(headers: &HeaderMap<HeaderValue>) -> Result<String, Error> {
    let forwarded_for_header = headers
        .get("x-forwarded-for")
        .ok_or_message("X-Forwarded-For header not provided")?;

    let forwarded_for_ip_str = forwarded_for_header
        .to_str()
        .ok_or_message("Error parsing X-Forwarded-For header")?;

    let client_ip = IpAddr::from_str(&forwarded_for_ip_str)
        .ok_or_message("Valid IP address not found in X-Forwarded-For header")?;

    Ok(client_ip.to_string())
}

#[cfg(test)]
mod test {
    use super::*;

    #[test]
    fn test_handle_forwarded_for_headers_ipv4_client() {
        let mut test_headers = HeaderMap::new();
        test_headers.insert("X-Forwarded-For", "192.0.2.100".parse().unwrap());

        if let Ok(forwarded_ip) = handle_forwarded_for_headers(&test_headers) {
            assert_eq!(forwarded_ip, "192.0.2.100");
        }
    }

    #[test]
    fn test_handle_forwarded_for_headers_ipv6_client() {
        let mut test_headers = HeaderMap::new();
        test_headers.insert("X-Forwarded-For", "2001:db8::f00d:cafe".parse().unwrap());

        if let Ok(forwarded_ip) = handle_forwarded_for_headers(&test_headers) {
            assert_eq!(forwarded_ip, "2001:db8::f00d:cafe");
        }
    }

    #[test]
    fn test_handle_forwarded_for_headers_invalid_ip() {
        let mut test_headers = HeaderMap::new();
        test_headers.insert("X-Forwarded-For", "www.example.com".parse().unwrap());

        let result = handle_forwarded_for_headers(&test_headers);
        assert!(result.is_err());
    }

    #[test]
    fn test_handle_forwarded_for_headers_missing() {
        let mut test_headers = HeaderMap::new();
        test_headers.insert("Host", "www.deuxfleurs.fr".parse().unwrap());

        let result = handle_forwarded_for_headers(&test_headers);
        assert!(result.is_err());
    }
}


@@ -11,6 +11,7 @@ pub mod data;
 pub mod encode;
 pub mod error;
 pub mod formater;
+pub mod forwarded_headers;
 pub mod metrics;
 pub mod migrate;
 pub mod persister;


@@ -29,6 +29,7 @@ use garage_model::garage::Garage;
 use garage_table::*;

 use garage_util::error::Error as GarageError;
+use garage_util::forwarded_headers;
 use garage_util::metrics::{gen_trace_id, RecordDuration};

 struct WebMetrics {
@@ -104,7 +105,19 @@ impl WebServer {
         req: Request<Body>,
         addr: SocketAddr,
     ) -> Result<Response<Body>, Infallible> {
-        info!("{} {} {}", addr, req.method(), req.uri());
+        if let Ok(forwarded_for_ip_addr) =
+            forwarded_headers::handle_forwarded_for_headers(&req.headers())
+        {
+            info!(
+                "{} (via {}) {} {}",
+                forwarded_for_ip_addr,
+                addr,
+                req.method(),
+                req.uri()
+            );
+        } else {
+            info!("{} {} {}", addr, req.method(), req.uri());
+        }

         // Lots of instrumentation
         let tracer = opentelemetry::global::tracer("garage");