Commit 2eb9fcae20 (parent 255b01b626): 60 changed files with 116 additions and 116 deletions
@@ -349,7 +349,7 @@ Check [our s3 compatibility list](@/documentation/reference-manual/s3-compatibil
 ### Other tools for interacting with Garage
-The following tools can also be used to send and recieve files from/to Garage:
+The following tools can also be used to send and receive files from/to Garage:
 - [minio-client](@/documentation/connect/cli.md#minio-client)
 - [s3cmd](@/documentation/connect/cli.md#s3cmd)

@@ -61,7 +61,7 @@ directed to a Garage cluster can be handled independently of one another instead
 of going through a central bottleneck (the leader node).
 As a consequence, requests can be handled much faster, even in cases where latency
 between cluster nodes is important (see our [benchmarks](@/documentation/design/benchmarks/index.md) for data on this).
-This is particularly usefull when nodes are far from one another and talk to one other through standard Internet connections.
+This is particularly useful when nodes are far from one another and talk to one other through standard Internet connections.
 ### Web server for static websites

@@ -392,7 +392,7 @@ table_merkle_updater_todo_queue_length{table_name="block_ref"} 0
 #### `table_sync_items_received`, `table_sync_items_sent` (counters)
-Number of data items sent to/recieved from other nodes during resync procedures
+Number of data items sent to/received from other nodes during resync procedures
 ```
 table_sync_items_received{from="<remote node>",table_name="bucket_v2"} 3

@@ -42,7 +42,7 @@ The general principle are similar, but details have not been updated.**
 A version is defined by the existence of at least one entry in the blocks table for a certain version UUID.
 We must keep the following invariant: if a version exists in the blocks table, it has to be referenced in the objects table.
 We explicitly manage concurrent versions of an object: the version timestamp and version UUID columns are index columns, thus we may have several concurrent versions of an object.
-Important: before deleting an older version from the objects table, we must make sure that we did a successfull delete of the blocks of that version from the blocks table.
+Important: before deleting an older version from the objects table, we must make sure that we did a successful delete of the blocks of that version from the blocks table.
 Thus, the workflow for reading an object is as follows:

@@ -95,7 +95,7 @@ Known issue: if someone is reading from a version that we want to delete and the
 Usefull metadata:
 - list of versions that reference this block in the Casandra table, so that we can do GC by checking in Cassandra that the lines still exist
-- list of other nodes that we know have acknowledged a write of this block, usefull in the rebalancing algorithm
+- list of other nodes that we know have acknowledged a write of this block, useful in the rebalancing algorithm
 Write strategy: have a single thread that does all write IO so that it is serialized (or have several threads that manage independent parts of the hash space). When writing a blob, write it to a temporary file, close, then rename so that a concurrent read gets a consistent result (either not found or found with whole content).
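The write strategy quoted in the last hunk (write to a temporary file, then rename) is the classic way to get atomic visibility on a POSIX filesystem. A minimal sketch of that idea, with a hypothetical helper name rather than Garage's actual block-writing code:

```rust
use std::fs::{self, File};
use std::io::Write;
use std::path::Path;

// Hypothetical helper illustrating the temp-file-then-rename strategy described above.
// A concurrent reader either finds no file at all or the fully written content,
// because rename() within a single filesystem is atomic.
fn write_blob_atomically(final_path: &Path, data: &[u8]) -> std::io::Result<()> {
    let tmp_path = final_path.with_extension("tmp");
    {
        let mut tmp = File::create(&tmp_path)?;
        tmp.write_all(data)?;
        tmp.sync_all()?; // flush to disk before the file becomes visible under its final name
    }
    fs::rename(&tmp_path, final_path)?;
    Ok(())
}
```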
@@ -68,7 +68,7 @@ The migration steps are as follows:
 5. Turn off Garage 0.3
 6. Backup metadata folders if you can (i.e. if you have space to do it
-somewhere). Backuping data folders could also be usefull but that's much
+somewhere). Backuping data folders could also be useful but that's much
 harder to do. If your filesystem supports snapshots, this could be a good
 time to use them.

@@ -37,7 +37,7 @@ There are two reasons for this:
 Reminder: rules of simplicity, concerning changes to Garage's source code.
 Always question what we are doing.
-Never do anything just because it looks nice or because we "think" it might be usefull at some later point but without knowing precisely why/when.
+Never do anything just because it looks nice or because we "think" it might be useful at some later point but without knowing precisely why/when.
 Only do things that make perfect sense in the context of what we currently know.
 ## References

@@ -562,7 +562,7 @@ token>", v: ["<value1>", ...] }`, with the following fields:
 - in case of concurrent update and deletion, a `null` is added to the list of concurrent values
 - if the `tombstones` query parameter is set to `true`, tombstones are returned
-for items that have been deleted (this can be usefull for inserting after an
+for items that have been deleted (this can be useful for inserting after an
 item that has been deleted, so that the insert is not considered
 concurrent with the delete). Tombstones are returned as tuples in the
 same format with only `null` values

@@ -77,7 +77,7 @@ impl ApiHandler for K2VApiServer {
 } = endpoint;
 let garage = self.garage.clone();
-// The OPTIONS method is procesed early, before we even check for an API key
+// The OPTIONS method is processed early, before we even check for an API key
 if let Endpoint::Options = endpoint {
 let options_res = handle_options_api(garage, &req, Some(bucket_name))
 .await

@@ -204,7 +204,7 @@ macro_rules! generateQueryParameters {
 }
 /// Get an error message in case not all parameters where used when extracting them to
-/// build an Enpoint variant
+/// build an Endpoint variant
 fn nonempty_message(&self) -> Option<&str> {
 if self.keyword.is_some() {
 Some("Keyword not used")

@@ -340,8 +340,8 @@ pub(crate) fn request_checksum_value(
 Ok(ret.pop())
 }
-/// Checks for the presense of x-amz-checksum-algorithm
-/// if so extract the corrseponding x-amz-checksum-* value
+/// Checks for the presence of x-amz-checksum-algorithm
+/// if so extract the corresponding x-amz-checksum-* value
 pub(crate) fn request_checksum_algorithm_value(
 headers: &HeaderMap<HeaderValue>,
 ) -> Result<Option<ChecksumValue>, Error> {

@@ -63,7 +63,7 @@ pub async fn handle_copy(
 let source_checksum_algorithm = source_checksum.map(|x| x.algorithm());
 // If source object has a checksum, the destination object must as well.
-// The x-amz-checksum-algorihtm header allows to change that algorithm,
+// The x-amz-checksum-algorithm header allows to change that algorithm,
 // but if it is absent, we must use the same as before
 let checksum_algorithm = checksum_algorithm.or(source_checksum_algorithm);
@@ -398,7 +398,7 @@ enum ExtractionResult {
 key: String,
 },
 // Fallback key is used for legacy APIs that only support
-// exlusive pagination (and not inclusive one).
+// exclusive pagination (and not inclusive one).
 SkipTo {
 key: String,
 fallback_key: Option<String>,

@@ -408,7 +408,7 @@ enum ExtractionResult {
 #[derive(PartialEq, Clone, Debug)]
 enum RangeBegin {
 // Fallback key is used for legacy APIs that only support
-// exlusive pagination (and not inclusive one).
+// exclusive pagination (and not inclusive one).
 IncludingKey {
 key: String,
 fallback_key: Option<String>,

@@ -213,7 +213,7 @@ pub async fn handle_post_object(
 }
 // if we ever start supporting ACLs, we likely want to map "acl" to x-amz-acl" somewhere
-// arround here to make sure the rest of the machinery takes our acl into account.
+// around here to make sure the rest of the machinery takes our acl into account.
 let headers = get_headers(&params)?;
 let expected_checksums = ExpectedChecksums {

@@ -276,7 +276,7 @@ impl Redirect {
 return Err(Error::bad_request("Bad XML: invalid protocol"));
 }
 }
-// TODO there are probably more invalide cases, but which ones?
+// TODO there are probably more invalid cases, but which ones?
 Ok(())
 }
 }

@@ -47,8 +47,8 @@ pub async fn check_payload_signature(
 let query = parse_query_map(request.uri())?;
 if query.contains_key(&X_AMZ_ALGORITHM) {
-// We check for presigned-URL-style authentification first, because
-// the browser or someting else could inject an Authorization header
+// We check for presigned-URL-style authentication first, because
+// the browser or something else could inject an Authorization header
 // that is totally unrelated to AWS signatures.
 check_presigned_signature(garage, service, request, query).await
 } else if request.headers().contains_key(AUTHORIZATION) {

@@ -132,7 +132,7 @@ async fn check_presigned_signature(
 let authorization = Authorization::parse_presigned(&algorithm.value, &query)?;
 // Verify that all necessary request headers are included in signed_headers
-// For AWSv4 pre-signed URLs, the following must be incldued:
+// For AWSv4 pre-signed URLs, the following must be included:
 // - the Host header (mandatory)
 // - all x-amz-* headers used in the request
 let signed_headers = split_signed_headers(&authorization)?;

@@ -306,7 +306,7 @@ pub fn canonical_request(
 // Note that there is also the issue of path normalization, which I hope is unrelated to the
 // one of URI-encoding. At least in aws-sigv4 both parameters can be set independently,
 // and rusoto_signature does not seem to do any effective path normalization, even though
-// it mentions it in the comments (same link to the souce code as above).
+// it mentions it in the comments (same link to the source code as above).
 // We make the explicit choice of NOT normalizing paths in the K2V API because doing so
 // would make non-normalized paths invalid K2V partition keys, and we don't want that.
 let canonical_uri: std::borrow::Cow<str> = if service != "s3" {
@@ -105,7 +105,7 @@ impl BlockResyncManager {
 }
 }
-/// Get lenght of resync queue
+/// Get length of resync queue
 pub fn queue_len(&self) -> Result<usize, Error> {
 Ok(self.queue.len()?)
 }

@@ -185,10 +185,10 @@ impl BlockResyncManager {
 //
 // - resync.errors: a tree that indicates for each block
 // if the last resync resulted in an error, and if so,
-// the following two informations (see the ErrorCounter struct):
+// the following two information (see the ErrorCounter struct):
 // - how many consecutive resync errors for this block?
 // - when was the last try?
-// These two informations are used to implement an
+// These two information are used to implement an
 // exponential backoff retry strategy.
 // The key in this tree is the 32-byte hash of the block,
 // and the value is the encoded ErrorCounter value.
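The ErrorCounter mentioned in this hunk (consecutive error count plus time of the last attempt) is enough to drive an exponential backoff. A rough illustration of how such a retry decision could be computed; the constants and field names here are made up and are not those of the actual ErrorCounter:

```rust
use std::time::Duration;

/// Illustrative stand-in for the ErrorCounter idea described above:
/// how many consecutive errors occurred, and when the last attempt was made (unix ms).
struct ErrorCounter {
    errors: u64,
    last_try_msec: u64,
}

impl ErrorCounter {
    /// Next allowed retry time: wait 60s * 2^(errors-1), capped at 24h (made-up constants).
    fn next_try_msec(&self) -> u64 {
        let base = Duration::from_secs(60).as_millis() as u64;
        let shift = self.errors.saturating_sub(1).min(16);
        let backoff = base.saturating_mul(1u64 << shift);
        let max = Duration::from_secs(24 * 3600).as_millis() as u64;
        self.last_try_msec + backoff.min(max)
    }

    /// True once enough time has elapsed since the last failed attempt.
    fn can_retry_at(&self, now_msec: u64) -> bool {
        now_msec >= self.next_try_msec()
    }
}
```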
@@ -122,7 +122,7 @@ impl Db {
 _ => unreachable!(),
 },
 Err(TxError::Db(e2)) => match ret {
-// Ok was stored -> the error occured when finalizing
+// Ok was stored -> the error occurred when finalizing
 // transaction
 Ok(_) => Err(TxError::Db(e2)),
 // An error was already stored: that's the one we want to

@@ -233,7 +233,7 @@ impl<'a> LmdbTx<'a> {
 fn get_tree(&self, i: usize) -> TxOpResult<&Database> {
 self.trees.get(i).ok_or_else(|| {
 TxOpError(Error(
-"invalid tree id (it might have been openned after the transaction started)".into(),
+"invalid tree id (it might have been opened after the transaction started)".into(),
 ))
 })
 }

@@ -142,7 +142,7 @@ impl IDb for SqliteDb {
 fn snapshot(&self, to: &PathBuf) -> Result<()> {
 fn progress(p: rusqlite::backup::Progress) {
 let percent = (p.pagecount - p.remaining) * 100 / p.pagecount;
-info!("Sqlite snapshot progres: {}%", percent);
+info!("Sqlite snapshot progress: {}%", percent);
 }
 self.db
 .get()?

@@ -304,7 +304,7 @@ impl<'a> SqliteTx<'a> {
 fn get_tree(&self, i: usize) -> TxOpResult<&'_ str> {
 self.trees.get(i).map(Arc::as_ref).ok_or_else(|| {
 TxOpError(Error(
-"invalid tree id (it might have been openned after the transaction started)".into(),
+"invalid tree id (it might have been opened after the transaction started)".into(),
 ))
 })
 }

@@ -129,7 +129,7 @@ pub async fn cmd_assign_role(
 zone: args
 .zone
 .clone()
-.ok_or("Please specifiy a zone with the -z flag")?,
+.ok_or("Please specify a zone with the -z flag")?,
 capacity,
 tags: args.tags.clone(),
 }

@@ -145,7 +145,7 @@ pub async fn cmd_assign_role(
 send_layout(rpc_cli, rpc_host, layout).await?;
-println!("Role changes are staged but not yet commited.");
+println!("Role changes are staged but not yet committed.");
 println!("Use `garage layout show` to view staged role changes,");
 println!("and `garage layout apply` to enact staged changes.");
 Ok(())

@@ -172,7 +172,7 @@ pub async fn cmd_remove_role(
 send_layout(rpc_cli, rpc_host, layout).await?;
-println!("Role removal is staged but not yet commited.");
+println!("Role removal is staged but not yet committed.");
 println!("Use `garage layout show` to view staged role changes,");
 println!("and `garage layout apply` to enact staged changes.");
 Ok(())

@@ -184,7 +184,7 @@ pub struct SkipDeadNodesOpt {
 /// This will generally be the current layout version.
 #[structopt(long = "version")]
 pub(crate) version: u64,
-/// Allow the skip even if a quorum of ndoes could not be found for
+/// Allow the skip even if a quorum of nodes could not be found for
 /// the data among the remaining nodes
 #[structopt(long = "allow-missing-data")]
 pub(crate) allow_missing_data: bool,

@@ -107,7 +107,7 @@ async fn main() {
 );
 // Initialize panic handler that aborts on panic and shows a nice message.
-// By default, Tokio continues runing normally when a task panics. We want
+// By default, Tokio continues running normally when a task panics. We want
 // to avoid this behavior in Garage as this would risk putting the process in an
 // unknown/uncontrollable state. We prefer to exit the process and restart it
 // from scratch, so that it boots back into a fresh, known state.
@@ -104,7 +104,7 @@ pub(crate) fn fill_secret(
 if let Some(val) = cli_value {
 if config_secret.is_some() || config_secret_file.is_some() {
-debug!("Overriding secret `{}` using value specified using CLI argument or environnement variable.", name);
+debug!("Overriding secret `{}` using value specified using CLI argument or environment variable.", name);
 }
 *config_secret = Some(val);

@@ -153,7 +153,7 @@ impl<'a> RequestBuilder<'a> {
 pub async fn send(&mut self) -> Result<Response<Body>, String> {
 // TODO this is a bit incorrect in that path and query params should be url-encoded and
-// aren't, but this is good enought for now.
+// aren't, but this is good enough for now.
 let query = query_param_to_string(&self.query_params);
 let (host, path) = if self.vhost_style {

@@ -210,9 +210,9 @@ impl<'a> RequestBuilder<'a> {
 HeaderName::from_static("x-amz-decoded-content-length"),
 HeaderValue::from_str(&self.body.len().to_string()).unwrap(),
 );
-// Get lenght of body by doing the conversion to a streaming body with an
+// Get length of body by doing the conversion to a streaming body with an
 // invalid signature (we don't know the seed) just to get its length. This
-// is a pretty lazy and inefficient way to do it, but it's enought for test
+// is a pretty lazy and inefficient way to do it, but it's enough for test
 // code.
 all_headers.insert(
 CONTENT_LENGTH,

@@ -54,7 +54,7 @@ enum Command {
 partition_key: String,
 /// Sort key to read from
 sort_key: String,
-/// Output formating
+/// Output formatting
 #[clap(flatten)]
 output_kind: ReadOutputKind,
 },

@@ -70,7 +70,7 @@ enum Command {
 /// Timeout, in seconds
 #[clap(short = 'T', long)]
 timeout: Option<u64>,
-/// Output formating
+/// Output formatting
 #[clap(flatten)]
 output_kind: ReadOutputKind,
 },

@@ -87,7 +87,7 @@ enum Command {
 /// Timeout, in seconds
 #[clap(short = 'T', long)]
 timeout: Option<u64>,
-/// Output formating
+/// Output formatting
 #[clap(flatten)]
 output_kind: BatchOutputKind,
 },

@@ -103,7 +103,7 @@ enum Command {
 },
 /// List partition keys
 ReadIndex {
-/// Output formating
+/// Output formatting
 #[clap(flatten)]
 output_kind: BatchOutputKind,
 /// Output only partition keys matching this filter

@@ -114,7 +114,7 @@ enum Command {
 ReadRange {
 /// Partition key to read from
 partition_key: String,
-/// Output formating
+/// Output formatting
 #[clap(flatten)]
 output_kind: BatchOutputKind,
 /// Output only sort keys matching this filter

@@ -125,7 +125,7 @@ enum Command {
 DeleteRange {
 /// Partition key to delete from
 partition_key: String,
-/// Output formating
+/// Output formatting
 #[clap(flatten)]
 output_kind: BatchOutputKind,
 /// Delete only sort keys matching this filter
@@ -185,10 +185,10 @@ struct ReadOutputKind {
 /// Raw output. Conflicts generate error, causality token is not returned
 #[clap(short, long, group = "output-kind")]
 raw: bool,
-/// Human formated output
+/// Human formatted output
 #[clap(short = 'H', long, group = "output-kind")]
 human: bool,
-/// JSON formated output
+/// JSON formatted output
 #[clap(short, long, group = "output-kind")]
 json: bool,
 }

@@ -207,7 +207,7 @@ impl ReadOutputKind {
 let mut val = val.value;
 if val.len() != 1 {
 eprintln!(
-"Raw mode can only read non-concurent values, found {} values, expected 1",
+"Raw mode can only read non-concurrent values, found {} values, expected 1",
 val.len()
 );
 exit(1);

@@ -265,10 +265,10 @@ impl ReadOutputKind {
 #[derive(Parser, Debug)]
 #[clap(group = clap::ArgGroup::new("output-kind").multiple(false).required(false))]
 struct BatchOutputKind {
-/// Human formated output
+/// Human formatted output
 #[clap(short = 'H', long, group = "output-kind")]
 human: bool,
-/// JSON formated output
+/// JSON formatted output
 #[clap(short, long, group = "output-kind")]
 json: bool,
 }

@@ -336,7 +336,7 @@ impl K2vClient {
 .collect())
 }
-/// Perform a DeleteBatch request, deleting mutiple values or range of values at once, without
+/// Perform a DeleteBatch request, deleting multiple values or range of values at once, without
 /// providing causality information.
 pub async fn delete_batch(&self, operations: &[BatchDeleteOp<'_>]) -> Result<Vec<u64>, Error> {
 let url = self.build_url(None, &[("delete", "")]);

@@ -89,9 +89,9 @@ pub fn is_valid_bucket_name(n: &str) -> bool {
 // Bucket names must start and end with a letter or a number
 && !n.starts_with(&['-', '.'][..])
 && !n.ends_with(&['-', '.'][..])
-// Bucket names must not be formated as an IP address
+// Bucket names must not be formatted as an IP address
 && n.parse::<std::net::IpAddr>().is_err()
-// Bucket names must not start wih "xn--"
+// Bucket names must not start with "xn--"
 && !n.starts_with("xn--")
 // Bucket names must not end with "-s3alias"
 && !n.ends_with("-s3alias")

@@ -14,7 +14,7 @@ mod v08 {
 /// A bucket is a collection of objects
 ///
 /// Its parameters are not directly accessible as:
-/// - It must be possible to merge paramaters, hence the use of a LWW CRDT.
+/// - It must be possible to merge parameters, hence the use of a LWW CRDT.
 /// - A bucket has 2 states, Present or Deleted and parameters make sense only if present.
 #[derive(PartialEq, Eq, Clone, Debug, Serialize, Deserialize)]
 pub struct Bucket {

@@ -126,7 +126,7 @@ impl AutoCrdt for BucketQuotas {
 }
 impl BucketParams {
-/// Create an empty BucketParams with no authorized keys and no website accesss
+/// Create an empty BucketParams with no authorized keys and no website access
 fn new() -> Self {
 BucketParams {
 creation_date: now_msec(),

@@ -231,7 +231,7 @@ impl<'a> LockedHelper<'a> {
 let bucket_p_local_alias_key = (key.key_id.clone(), alias_name.clone());
 // Calculate the timestamp to assign to this aliasing in the two local_aliases maps
-// (the one from key to bucket, and the reverse one stored in the bucket iself)
+// (the one from key to bucket, and the reverse one stored in the bucket itself)
 // so that merges on both maps in case of a concurrent operation resolve
 // to the same alias being set
 let alias_ts = increment_logical_clock_2(

@@ -310,7 +310,7 @@ impl K2VRpcHandler {
 // - we have a response to a read quorum of requests (e.g. 2/3), and an extra delay
 // has passed since the quorum was achieved
 // - a global RPC timeout expired
-// The extra delay after a quorum was received is usefull if the third response was to
+// The extra delay after a quorum was received is useful if the third response was to
 // arrive during this short interval: this would allow us to consider all the data seen
 // by that last node in the response we produce, and would likely help reduce the
 // size of the seen marker that we will return (because we would have an info of the
@@ -500,7 +500,7 @@ impl K2VRpcHandler {
 } else {
 // If no seen marker was specified, we do not poll for anything.
 // We return immediately with the set of known items (even if
-// it is empty), which will give the client an inital view of
+// it is empty), which will give the client an initial view of
 // the dataset and an initial seen marker for further
 // PollRange calls.
 self.poll_range_read_range(range, &RangeSeenMarker::default())

@@ -31,11 +31,11 @@ mod v08 {
 /// The key at which the object is stored in its bucket, used as sorting key
 pub key: String,
-/// The list of currenty stored versions of the object
+/// The list of currently stored versions of the object
 pub(super) versions: Vec<ObjectVersion>,
 }
-/// Informations about a version of an object
+/// Information about a version of an object
 #[derive(PartialEq, Eq, Clone, Debug, Serialize, Deserialize)]
 pub struct ObjectVersion {
 /// Id of the version

@@ -109,11 +109,11 @@ mod v09 {
 /// The key at which the object is stored in its bucket, used as sorting key
 pub key: String,
-/// The list of currenty stored versions of the object
+/// The list of currently stored versions of the object
 pub(super) versions: Vec<ObjectVersion>,
 }
-/// Informations about a version of an object
+/// Information about a version of an object
 #[derive(PartialEq, Eq, Clone, Debug, Serialize, Deserialize)]
 pub struct ObjectVersion {
 /// Id of the version

@@ -186,11 +186,11 @@ mod v010 {
 /// The key at which the object is stored in its bucket, used as sorting key
 pub key: String,
-/// The list of currenty stored versions of the object
+/// The list of currently stored versions of the object
 pub(super) versions: Vec<ObjectVersion>,
 }
-/// Informations about a version of an object
+/// Information about a version of an object
 #[derive(PartialEq, Eq, Clone, Debug, Serialize, Deserialize)]
 pub struct ObjectVersion {
 /// Id of the version

@@ -49,7 +49,7 @@ mod v08 {
 pub offset: u64,
 }
-/// Informations about a single block
+/// Information about a single block
 #[derive(PartialEq, Eq, Ord, PartialOrd, Clone, Copy, Debug, Serialize, Deserialize)]
 pub struct VersionBlock {
 /// Blake2 sum of the block

@@ -20,7 +20,7 @@ static SNAPSHOT_MUTEX: Mutex<()> = Mutex::new(());
 // ================ snapshotting logic =====================
-/// Run snashot_metadata in a blocking thread and async await on it
+/// Run snapshot_metadata in a blocking thread and async await on it
 pub async fn async_snapshot_metadata(garage: &Arc<Garage>) -> Result<(), Error> {
 let garage = garage.clone();
 let worker = tokio::task::spawn_blocking(move || snapshot_metadata(&garage));

@@ -59,7 +59,7 @@ impl<T> From<tokio::sync::mpsc::error::SendError<T>> for Error {
 }
 }
-/// Ths trait adds a `.log_err()` method on `Result<(), E>` types,
+/// The trait adds a `.log_err()` method on `Result<(), E>` types,
 /// which dismisses the error by logging it to stderr.
 pub trait LogError {
 fn log_err(self, msg: &'static str);

@@ -18,7 +18,7 @@ use crate::util::*;
 /// in the send queue of the client, and their responses in the send queue of the
 /// server. Lower values mean higher priority.
 ///
-/// This mechanism is usefull for messages bigger than the maximum chunk size
+/// This mechanism is useful for messages bigger than the maximum chunk size
 /// (set at `0x4000` bytes), such as large file transfers.
 /// In such case, all of the messages in the send queue with the highest priority
 /// will take turns to send individual chunks, in a round-robin fashion.

@@ -102,7 +102,7 @@ pub trait Message: Serialize + for<'de> Deserialize<'de> + Send + Sync + 'static
 /// The Req<M> is a helper object used to create requests and attach them
 /// a stream of data. If the stream is a fixed Bytes and not a ByteStream,
-/// Req<M> is cheaply clonable to allow the request to be sent to different
+/// Req<M> is cheaply cloneable to allow the request to be sent to different
 /// peers (Clone will panic if the stream is a ByteStream).
 pub struct Req<M: Message> {
 pub(crate) msg: Arc<M>,

@@ -41,7 +41,7 @@ pub(crate) type VersionTag = [u8; 16];
 pub(crate) const NETAPP_VERSION_TAG: u64 = 0x6772676e65740010; // grgnet 0x0010 (1.0)
 /// HelloMessage is sent by the client on a Netapp connection to indicate
-/// that they are also a server and ready to recieve incoming connections
+/// that they are also a server and ready to receive incoming connections
 /// at the specified address and port. If the client doesn't know their
 /// public address, they don't need to specify it and we look at the
 /// remote address of the socket is used instead.
@@ -290,7 +290,7 @@ impl NetApp {
 /// Attempt to connect to a peer, given by its ip:port and its public key.
 /// The public key will be checked during the secret handshake process.
 /// This function returns once the connection has been established and a
-/// successfull handshake was made. At this point we can send messages to
+/// successful handshake was made. At this point we can send messages to
 /// the other node with `Netapp::request`
 pub async fn try_connect(self: Arc<Self>, ip: SocketAddr, id: NodeID) -> Result<(), Error> {
 // Don't connect to ourself, we don't care

@@ -138,7 +138,7 @@ pub enum PeerConnState {
 /// A connection tentative is in progress (the nth, where n is the value stored)
 Trying(usize),
-/// We abandonned trying to connect to this peer (too many failed attempts)
+/// We abandoned trying to connect to this peer (too many failed attempts)
 Abandonned,
 }

@@ -28,7 +28,7 @@ use crate::stream::*;
 // - if error:
 // - u8: error kind, encoded using error::io_errorkind_to_u8
 // - rest: error message
-// - absent for cancel messag
+// - absent for cancel message
 pub(crate) type RequestID = u32;
 pub(crate) type ChunkLength = u16;

@@ -217,7 +217,7 @@ impl<'a> futures::Future for SendQueuePollNextReady<'a> {
 enum DataFrame {
 /// a fixed size buffer containing some data + a boolean indicating whether
-/// there may be more data comming from this stream. Can be used for some
+/// there may be more data coming from this stream. Can be used for some
 /// optimization. It's an error to set it to false if there is more data, but it is correct
 /// (albeit sub-optimal) to set it to true if there is nothing coming after
 Data(Bytes, bool),

@@ -310,7 +310,7 @@ pub(crate) trait SendLoop: Sync {
 // recv_fut is cancellation-safe according to tokio doc,
 // send_fut is cancellation-safe as implemented above?
 tokio::select! {
-biased; // always read incomming channel first if it has data
+biased; // always read incoming channel first if it has data
 sth = recv_fut => {
 match sth {
 Some(SendItem::Stream(id, prio, order_tag, data)) => {

@@ -16,7 +16,7 @@ use crate::bytes_buf::BytesBuf;
 ///
 /// Items sent in the ByteStream may be errors of type `std::io::Error`.
 /// An error indicates the end of the ByteStream: a reader should no longer read
-/// after recieving an error, and a writer should stop writing after sending an error.
+/// after receiving an error, and a writer should stop writing after sending an error.
 pub type ByteStream = Pin<Box<dyn Stream<Item = Packet> + Send + Sync>>;
 /// A packet sent in a ByteStream, which may contain either

@@ -66,7 +66,7 @@ async fn run_test_inner(port_base: u16) {
 println!("A pl2: {:?}", pl2);
 assert_eq!(pl2.len(), 2);
-// Connect third ndoe and check it peers with everyone
+// Connect third node and check it peers with everyone
 let (thread3, _netapp3, peering3) =
 run_netapp(netid, pk3, sk3, addr3, vec![(pk2, addr2)], stop_rx.clone());
 tokio::time::sleep(Duration::from_secs(3)).await;

@@ -25,7 +25,7 @@ where
 /// This async function returns only when a true signal was received
 /// from a watcher that tells us when to exit.
 ///
-/// Usefull in a select statement to interrupt another
+/// Useful in a select statement to interrupt another
 /// future:
 /// ```ignore
 /// select!(

@@ -133,7 +133,7 @@ impl Graph<FlowEdge> {
 /// This function shuffles the order of the edge lists. It keeps the ids of the
 /// reversed edges consistent.
 fn shuffle_edges(&mut self) {
-// We use deterministic randomness so that the layout calculation algorihtm
+// We use deterministic randomness so that the layout calculation algorithm
 // will output the same thing every time it is run. This way, the results
 // pre-calculated in `garage layout show` will match exactly those used
 // in practice with `garage layout apply`

@@ -90,7 +90,7 @@ impl LayoutHelper {
 // sync_map_min is the minimum value of sync_map among storage nodes
 // in the cluster (non-gateway nodes only, current and previous layouts).
 // It is the highest layout version for which we know that all relevant
-// storage nodes have fullfilled a sync, and therefore it is safe to
+// storage nodes have fulfilled a sync, and therefore it is safe to
 // use a read quorum within that layout to ensure consistency.
 // Gateway nodes are excluded here because they hold no relevant data
 // (they store the bucket and access key tables, but we don't have
@@ -48,7 +48,7 @@ impl LayoutManager {
 Ok(x) => {
 if x.current().replication_factor != replication_factor.replication_factor() {
 return Err(Error::Message(format!(
-"Prevous cluster layout has replication factor {}, which is different than the one specified in the config file ({}). The previous cluster layout can be purged, if you know what you are doing, simply by deleting the `cluster_layout` file in your metadata directory.",
+"Previous cluster layout has replication factor {}, which is different than the one specified in the config file ({}). The previous cluster layout can be purged, if you know what you are doing, simply by deleting the `cluster_layout` file in your metadata directory.",
 x.current().replication_factor,
 replication_factor.replication_factor()
 )));

@@ -241,7 +241,7 @@ mod v010 {
 /// The versions currently in use in the cluster
 pub versions: Vec<LayoutVersion>,
 /// At most 5 of the previous versions, not used by the garage_table
-/// module, but usefull for the garage_block module to find data blocks
+/// module, but useful for the garage_block module to find data blocks
 /// that have not yet been moved
 pub old_versions: Vec<LayoutVersion>,

@@ -9,7 +9,7 @@ use crate::replication_mode::ReplicationFactor;
 // This function checks that the partition size S computed is at least better than the
 // one given by a very naive algorithm. To do so, we try to run the naive algorithm
-// assuming a partion size of S+1. If we succed, it means that the optimal assignment
+// assuming a partition size of S+1. If we succeed, it means that the optimal assignment
 // was not optimal. The naive algorithm is the following :
 // - we compute the max number of partitions associated to every node, capped at the
 // partition number. It gives the number of tokens of every node.

@@ -471,7 +471,7 @@ impl LayoutVersion {
 }
 }
-// We clear the ring assignemnt data
+// We clear the ring assignment data
 self.ring_assignment_data = Vec::<CompactNodeType>::new();
 Ok(Some(old_assignment))

@@ -413,7 +413,7 @@ impl RpcHelper {
 /// Make a RPC call to multiple servers, returning either a Vec of responses,
 /// or an error if quorum could not be reached due to too many errors
 ///
-/// Contrary to try_call_many, this fuction is especially made for broadcast
+/// Contrary to try_call_many, this function is especially made for broadcast
 /// write operations. In particular:
 ///
 /// - The request are sent to all specified nodes as soon as `try_write_many_sets`

@@ -506,7 +506,7 @@ impl RpcHelper {
 // If we have a quorum of ok in all quorum sets, then it's a success!
 if result_tracker.all_quorums_ok() {
-// Continue all other requets in background
+// Continue all other requests in background
 tokio::spawn(async move {
 resp_stream.collect::<Vec<(Uuid, Result<_, _>)>>().await;
 drop(drop_on_complete);

@@ -54,7 +54,7 @@ pub const SYSTEM_RPC_PATH: &str = "garage_rpc/system.rs/SystemRpc";
 /// RPC messages related to membership
 #[derive(Debug, Serialize, Deserialize, Clone)]
 pub enum SystemRpc {
-/// Response to successfull advertisements
+/// Response to successful advertisements
 Ok,
 /// Request to connect to a specific node (in <pubkey>@<host>:<port> format, pubkey = full-length node ID)
 Connect(String),

@@ -172,7 +172,7 @@ pub struct ClusterHealth {
 pub enum ClusterHealthStatus {
 /// All nodes are available
 Healthy,
-/// Some storage nodes are unavailable, but quorum is stil
+/// Some storage nodes are unavailable, but quorum is still
 /// achieved for all partitions
 Degraded,
 /// Quorum is not available for some partitions

@@ -286,7 +286,7 @@ impl System {
 let mut local_status = NodeStatus::initial(replication_factor, &layout_manager);
 local_status.update_disk_usage(&config.metadata_dir, &config.data_dir);
-// ---- if enabled, set up additionnal peer discovery methods ----
+// ---- if enabled, set up additional peer discovery methods ----
 #[cfg(feature = "consul-discovery")]
 let consul_discovery = match &config.consul_discovery {
 Some(cfg) => Some(

@@ -337,7 +337,7 @@ impl System {
 Ok(sys)
 }
-/// Perform bootstraping, starting the ping loop
+/// Perform bootstrapping, starting the ping loop
 pub async fn run(self: Arc<Self>, must_exit: watch::Receiver<bool>) {
 join!(
 self.netapp.clone().listen(
@@ -258,14 +258,14 @@ impl<F: TableSchema, R: TableReplication> TableGc<F, R> {
 .await
 .err_context("GC: remote delete tombstones")?;
-// GC has been successfull for all of these entries.
+// GC has been successful for all of these entries.
 // We now remove them all from our local table and from the GC todo list.
 for item in items {
 self.data
 .delete_if_equal_hash(&item.key[..], item.value_hash)
 .err_context("GC: local delete tombstones")?;
 item.remove_if_equal(&self.data.gc_todo)
-.err_context("GC: remove from todo list after successfull GC")?;
+.err_context("GC: remove from todo list after successful GC")?;
 }
 Ok(())

@@ -383,7 +383,7 @@ impl GcTodoEntry {
 /// Removes the GcTodoEntry from the gc_todo tree if the
 /// hash of the serialized value is the same here as in the tree.
-/// This is usefull to remove a todo entry only under the condition
+/// This is useful to remove a todo entry only under the condition
 /// that it has not changed since the time it was read, i.e.
 /// what we have to do is still the same
 pub(crate) fn remove_if_equal(&self, gc_todo_tree: &db::Tree) -> Result<(), Error> {

@@ -13,12 +13,12 @@ pub trait TableReplication: Send + Sync + 'static {
 /// Which nodes to send read requests to
 fn read_nodes(&self, hash: &Hash) -> Vec<Uuid>;
-/// Responses needed to consider a read succesfull
+/// Responses needed to consider a read successful
 fn read_quorum(&self) -> usize;
 /// Which nodes to send writes to
 fn write_sets(&self, hash: &Hash) -> Self::WriteSets;
-/// Responses needed to consider a write succesfull in each set
+/// Responses needed to consider a write successful in each set
 fn write_quorum(&self) -> usize;
 // Accessing partitions, for Merkle tree & sync

@@ -316,7 +316,7 @@ impl<F: TableSchema, R: TableReplication> TableSyncer<F, R> {
 SyncRpc::RootCkDifferent(true) => VecDeque::from(vec![root_ck_key]),
 x => {
 return Err(Error::Message(format!(
-"Invalid respone to RootCkHash RPC: {}",
+"Invalid response to RootCkHash RPC: {}",
 debug_serialize(x)
 )));
 }

@@ -362,7 +362,7 @@ impl<F: TableSchema, R: TableReplication> TableSyncer<F, R> {
 SyncRpc::Node(_, node) => node,
 x => {
 return Err(Error::Message(format!(
-"Invalid respone to GetNode RPC: {}",
+"Invalid response to GetNode RPC: {}",
 debug_serialize(x)
 )));
 }

@@ -171,11 +171,11 @@ impl<F: TableSchema, R: TableReplication> Table<F, R> {
 // We will here batch all items into a single request for each concerned
 // node, with all of the entries it must store within that request.
 // Each entry has to be saved to a specific list of "write sets", i.e. a set
-// of node within wich a quorum must be achieved. In normal operation, there
+// of node within which a quorum must be achieved. In normal operation, there
 // is a single write set which corresponds to the quorum in the current
 // cluster layout, but when the layout is updated, multiple write sets might
 // have to be handled at once. Here, since we are sending many entries, we
-// will have to handle many write sets in all cases. The algorihtm is thus
+// will have to handle many write sets in all cases. The algorithm is thus
 // to send one request to each node with all the items it must save,
 // and keep track of the OK responses within each write set: if for all sets
 // a quorum of nodes has answered OK, then the insert has succeeded and
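The comment in the last hunk describes tracking OK responses per write set and declaring success once every set reaches its quorum. A simplified sketch of that bookkeeping, with hypothetical types rather than the actual table code:

```rust
use std::collections::HashSet;

type NodeId = u64; // stand-in for Garage's node identifiers

/// Returns true once every write set contains at least `quorum` nodes that answered OK,
/// mirroring the "quorum in each write set" rule described in the comment above.
fn all_sets_have_quorum(write_sets: &[Vec<NodeId>], ok_nodes: &HashSet<NodeId>, quorum: usize) -> bool {
    write_sets
        .iter()
        .all(|set| set.iter().filter(|n| ok_nodes.contains(n)).count() >= quorum)
}
```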
@@ -14,7 +14,7 @@ use crate::background::{WorkerInfo, WorkerStatus};
 use crate::error::Error;
 use crate::time::now_msec;
-// All workers that haven't exited for this time after an exit signal was recieved
+// All workers that haven't exited for this time after an exit signal was received
 // will be interrupted in the middle of whatever they are doing.
 const EXIT_DEADLINE: Duration = Duration::from_secs(8);

@@ -54,7 +54,7 @@ pub trait Worker: Send {
 async fn work(&mut self, must_exit: &mut watch::Receiver<bool>) -> Result<WorkerState, Error>;
 /// Wait for work: await for some task to become available. This future can be interrupted in
-/// the middle for any reason, for example if an interrupt signal was recieved.
+/// the middle for any reason, for example if an interrupt signal was received.
 async fn wait_for_work(&mut self) -> WorkerState;
 }

@@ -93,12 +93,12 @@ pub struct Config {
 /// the addresses announced to other peers to a specific subnet.
 pub rpc_public_addr_subnet: Option<String>,
-/// Timeout for Netapp's ping messagess
+/// Timeout for Netapp's ping messages
 pub rpc_ping_timeout_msec: Option<u64>,
 /// Timeout for Netapp RPC calls
 pub rpc_timeout_msec: Option<u64>,
-// -- Bootstraping and discovery
+// -- Bootstrapping and discovery
 /// Bootstrap peers RPC address
 #[serde(default)]
 pub bootstrap_peers: Vec<String>,

@@ -33,8 +33,8 @@ pub trait Crdt {
 /// arises very often, for example with a Lww or a LwwMap: the value type has to be a CRDT so that
 /// we have a rule for what to do when timestamps aren't enough to disambiguate (in a distributed
 /// system, anything can happen!), and with AutoCrdt the rule is to make an arbitrary (but
-/// determinstic) choice between the two. When using an Option<T> instead with this impl, ambiguity
-/// cases are explicitely stored as None, which allows us to detect the ambiguity and handle it in
+/// deterministic) choice between the two. When using an Option<T> instead with this impl, ambiguity
+/// cases are explicitly stored as None, which allows us to detect the ambiguity and handle it in
 /// the way we want. (this can only work if we are happy with losing the value when an ambiguity
 /// arises)
 impl<T> Crdt for Option<T>
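A toy illustration of the rule this doc comment describes, where a disagreement between replicas is remembered explicitly as None instead of being resolved arbitrarily; this is a sketch of the idea, not the crate's Crdt trait:

```rust
/// Toy merge rule for the "ambiguity is stored as None" behaviour described above.
fn merge_option<T: Eq + Clone>(a: &Option<T>, b: &Option<T>) -> Option<T> {
    match (a, b) {
        // Both replicas agree: keep the value.
        (Some(x), Some(y)) if x == y => Some(x.clone()),
        // Values differ, or one side never saw a value: store the ambiguity as None,
        // so higher-level logic can detect it and decide what to do.
        _ => None,
    }
}
```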
@@ -16,7 +16,7 @@ use crate::crdt::crdt::*;
 /// In our case, we add the constraint that the value that is wrapped inside the LWW CRDT must
 /// itself be a CRDT: in the case when the timestamp does not allow us to decide on which value to
 /// keep, the merge rule of the inner CRDT is applied on the wrapped values. (Note that all types
-/// that implement the `Ord` trait get a default CRDT implemetnation that keeps the maximum value.
+/// that implement the `Ord` trait get a default CRDT implementation that keeps the maximum value.
 /// This enables us to use LWW directly with primitive data types such as numbers or strings. It is
 /// generally desirable in this case to never explicitly produce LWW values with the same timestamp
 /// but different inner values, as the rule to keep the maximum value isn't generally the desired
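A minimal sketch of the last-writer-wins rule this comment describes: the higher timestamp wins, and on a timestamp tie the inner rule (here, keeping the maximum value of an Ord type) breaks the tie. This illustrates the principle only and is not the module's actual Lww<T>:

```rust
/// Minimal last-writer-wins register illustrating the merge rule described above.
#[derive(Clone, Debug, PartialEq)]
struct LwwSketch<T: Ord + Clone> {
    ts: u64, // timestamp of the last write
    v: T,
}

impl<T: Ord + Clone> LwwSketch<T> {
    fn merge(&mut self, other: &Self) {
        if other.ts > self.ts {
            // A strictly newer write wins outright.
            self.ts = other.ts;
            self.v = other.v.clone();
        } else if other.ts == self.ts && other.v > self.v {
            // Tie on the timestamp: fall back to the inner rule, i.e. keep the maximum value.
            self.v = other.v.clone();
        }
    }
}
```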
@@ -28,9 +28,9 @@ use crate::crdt::crdt::*;
 ///
 /// Given that clocks are not too desynchronized, this assumption
 /// is enough for most cases, as there is few chance that two humans
-/// coordonate themself faster than the time difference between two NTP servers.
+/// coordinate themself faster than the time difference between two NTP servers.
 ///
-/// As a more concret example, let's suppose you want to upload a file
+/// As a more concrete example, let's suppose you want to upload a file
 /// with the same key (path) in the same bucket at the very same time.
 /// For each request, the file will be timestamped by the receiving server
 /// and may differ from what you observed with your atomic clock!

@@ -84,16 +84,16 @@ where
 &self.v
 }
-/// Take the value inside the CRDT (discards the timesamp)
+/// Take the value inside the CRDT (discards the timestamp)
 pub fn take(self) -> T {
 self.v
 }
 /// Get a mutable reference to the CRDT's value
 ///
-/// This is usefull to mutate the inside value without changing the LWW timestamp.
+/// This is useful to mutate the inside value without changing the LWW timestamp.
 /// When such mutation is done, the merge between two LWW values is done using the inner
-/// CRDT's merge operation. This is usefull in the case where the inner CRDT is a large
+/// CRDT's merge operation. This is useful in the case where the inner CRDT is a large
 /// data type, such as a map, and we only want to change a single item in the map.
 /// To do this, we can produce a "CRDT delta", i.e. a LWW that contains only the modification.
 /// This delta consists in a LWW with the same timestamp, and the map

@@ -109,7 +109,7 @@ where
 }
 /// Takes all of the values of the map and returns them. The current map is reset to the
-/// empty map. This is very usefull to produce in-place a new map that contains only a delta
+/// empty map. This is very useful to produce in-place a new map that contains only a delta
 /// that modifies a certain value:
 ///
 /// ```ignore

@@ -162,7 +162,7 @@ where
 }
 }
-/// Gets a reference to all of the items, as a slice. Usefull to iterate on all map values.
+/// Gets a reference to all of the items, as a slice. Useful to iterate on all map values.
 /// In most case you will want to ignore the timestamp (second item of the tuple).
 pub fn items(&self) -> &[(K, u64, V)] {
 &self.vals[..]

@@ -57,7 +57,7 @@ where
 Err(_) => None,
 }
 }
-/// Gets a reference to all of the items, as a slice. Usefull to iterate on all map values.
+/// Gets a reference to all of the items, as a slice. Useful to iterate on all map values.
 pub fn items(&self) -> &[(K, V)] {
 &self.vals[..]
 }
@@ -1,7 +1,7 @@
 use serde::{Deserialize, Serialize};
-/// Serialize to MessagePacki, without versionning
-/// (see garage_util::migrate for functions that manage versionned
+/// Serialize to MessagePack, without versioning
+/// (see garage_util::migrate for functions that manage versioned
 /// data formats)
 pub fn nonversioned_encode<T>(val: &T) -> Result<Vec<u8>, rmp_serde::encode::Error>
 where

@@ -13,8 +13,8 @@ where
 Ok(wr)
 }
-/// Deserialize from MessagePacki, without versionning
-/// (see garage_util::migrate for functions that manage versionned
+/// Deserialize from MessagePack, without versioning
+/// (see garage_util::migrate for functions that manage versioned
 /// data formats)
 pub fn nonversioned_decode<T>(bytes: &[u8]) -> Result<T, rmp_serde::decode::Error>
 where
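As a usage sketch for the two helpers above, a round-trip could look like the following; the struct, the values and the `garage_util::encode` import path are assumptions for illustration, based only on the signatures visible in these hunks:

```rust
use serde::{Deserialize, Serialize};
// Assumed import path for the helpers shown above; adjust to wherever they actually live.
use garage_util::encode::{nonversioned_decode, nonversioned_encode};

#[derive(Serialize, Deserialize, Debug, PartialEq)]
struct Example {
    name: String,
    count: u32,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let val = Example { name: "demo".into(), count: 3 };
    // Round-trip through the non-versioned MessagePack helpers.
    let bytes = nonversioned_encode(&val)?;
    let back: Example = nonversioned_decode(&bytes)?;
    assert_eq!(val, back);
    Ok(())
}
```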