This removes the >1 MB `s3_copy` restriction.
This restriction doesn't seem to be documented anywhere (I could be wrong). It also causes some software to fail (such as #248).
Co-authored-by: Rob Landers <landers.robert@gmail.com>
Reviewed-on: Deuxfleurs/garage#280
Co-authored-by: withinboredom <landers.robert@gmail.com>
Co-committed-by: withinboredom <landers.robert@gmail.com>
This change helps ensure that the nodes for each partition are spread
over all datacenters, a property that wasn't ensured previously
when going from a 2-DC deployment to a 3-DC deployment.
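For illustration only (hypothetical names, not the actual assignation code), the property can be stated as a small check:

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical check, not the actual assignation code: a partition's
// replicas should cover as many distinct datacenters as the number of
// replicas allows.
fn spans_datacenters(
    replicas: &[String],
    node_dc: &HashMap<String, String>, // node id -> datacenter
    n_datacenters: usize,
) -> bool {
    let dcs: HashSet<&String> = replicas.iter().filter_map(|n| node_dc.get(n)).collect();
    dcs.len() == n_datacenters.min(replicas.len())
}
```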
- Global dependencies updated in Cargo.lock
- New module created in src/admin to host:
- the (future) admin REST API
- the metric collection
- Configuration block added
No metrics are implemented yet
This commit adds support for discovering garage instances running in
Kubernetes.
Once enabled by setting `kubernetes_namespace` and
`kubernetes_service_name`, garage will create Custom Resources
`garagenodes.deuxfleurs.fr`, with the node's public key as the resource
name and its IP and port information as the spec, in the namespace
configured by `kubernetes_namespace`.
For discovering nodes, the resources are filtered by the optionally set
`kubernetes_service_name`, which sets a label
`garage.deuxfleurs.fr/service` on the resources.
This allows separating multiple garage deployments in a single
namespace.
The `kubernetes_skip_crd` variable disables the creation of the CRD by
garage itself; in that case, the user must deploy it manually.
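A minimal configuration sketch, assuming the options sit at the top level of the config file; the option names come from this commit, the values are placeholders:

```toml
# Hypothetical example values
kubernetes_namespace = "garage"
kubernetes_service_name = "garage-daemon"
kubernetes_skip_crd = false
```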
Nodes would stabilize on different encoding formats for the values,
some having the pre-migration format and some having the post-migration
format. This was reflected in the Merkle trees never converging,
and thus in an infinite resync loop.
Implement ListMultipartUploads; also refactor ListObjects and ListObjectsV2.
It took me some time, as I wanted to propose the following things:
- Using an iterator instead of the loop+goto pattern. I find it easier to read and it should enable some optimizations. For example, when consuming keys of a common prefix, we do many [redundant checks](https://git.deuxfleurs.fr/Deuxfleurs/garage/src/branch/main/src/api/s3_list.rs#L125-L156) while the only thing to do is to [check whether the following key is still part of the common prefix](https://git.deuxfleurs.fr/Deuxfleurs/garage/src/branch/feature/s3-multipart-compat/src/api/s3_list.rs#L476) (see the sketch after this list).
- Try to name things (see the ExtractionResult and RangeBegin enums) and to separate concerns (see ListQuery and Accumulator).
- An IO closure to make unit tests possible.
- Unit tests, to track regressions and document how to interact with the code.
- Integration tests with `s3api`. In the future, I would like to move them to Rust with the AWS Rust SDK.
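To illustrate the iterator idea, here is a minimal sketch (hypothetical names, not the linked code): once a common prefix has been emitted, the remaining keys under it can be consumed with a single `starts_with` check.

```rust
// Hypothetical sketch of the prefix-skipping idea: after a common
// prefix has been emitted, following keys are consumed with a single
// starts_with() check instead of re-running the extraction logic.
fn skip_prefix<'a, I>(keys: &mut std::iter::Peekable<I>, prefix: &str)
where
    I: Iterator<Item = &'a str>,
{
    while keys.peek().map_or(false, |k| k.starts_with(prefix)) {
        keys.next();
    }
}

fn main() {
    let mut keys = ["a/1", "a/2", "b/1"].into_iter().peekable();
    skip_prefix(&mut keys, "a/");
    assert_eq!(keys.next(), Some("b/1"));
}
```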
Merging of the logic of ListMultipartUploads and ListObjects was not a goal but a consequence of the previous modifications.
Some points that we might want to discuss:
- ListObjectsV1, when using pagination and delimiters, behaves weirdly (it lists the same prefix multiple times) with `aws s3api`, due to the fact that it cannot use our optimization to skip the whole prefix. This is independent of my refactor and can be tested with the commented `s3api` tests in `test-smoke.sh`. It probably has the same weird behavior on the official AWS S3 implementation.
- Considering ListMultipartUploads, I had to "abuse" the upload id marker to support prefix skipping. I send an `upload-id-marker` with the hardcoded value `include` to emulate your "including" token (see the sketch after this list).
- Some ways to test ListMultipartUploads with existing software (my tests are limited to `s3api` for now).
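As a sketch of that marker handling, assuming a simplified, hypothetical shape of the RangeBegin enum mentioned above:

```rust
// Hypothetical sketch, not the actual types: a regular continuation
// resumes *after* a (key, upload id) pair, while the sentinel value
// "include" resumes *at* the key itself.
enum RangeBegin {
    AfterUpload { key: String, upload_id: String },
    IncludingKey { key: String },
}

fn parse_marker(key_marker: String, upload_id_marker: String) -> RangeBegin {
    if upload_id_marker == "include" {
        RangeBegin::IncludingKey { key: key_marker }
    } else {
        RangeBegin::AfterUpload { key: key_marker, upload_id: upload_id_marker }
    }
}
```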
Co-authored-by: Quentin Dufour <quentin@deuxfleurs.fr>
Reviewed-on: Deuxfleurs/garage#171
Co-authored-by: Quentin <quentin@dufour.io>
Co-committed-by: Quentin <quentin@dufour.io>
- Fix bucket delete
- Fix merge of bucket creation date
- Replace `Deletable` with `Option` in aliases
Rationale: if two aliases point to conflicting buckets, resolving
the conflict by making an arbitrary choice risks making data
accessible when it shouldn't be. We'd rather resolve to deleting the
alias until someone puts it back (a sketch of this merge rule follows below).
- Ensure bucket names are valid AWS S3 names
- When making aliases, ensure the timestamps of the links in both
directions are the same
- Fix small remarks by Trinity
- Don't have a separate website_access field
fix #77
This does not store anything but an on/off switch for the website feature, and does not implement GetBucketWebsite, as that would require storing more. GetBucketWebsite should be pretty easy to implement once the data is stored, though.
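A minimal sketch of the merge rule from the rationale above, with hypothetical types (the actual implementation uses garage's CRDT machinery):

```rust
// Hypothetical sketch: an alias is a timestamped Option<bucket id>,
// and two concurrent writes with conflicting targets merge to None
// (alias deleted) rather than to an arbitrary choice.
#[derive(Clone, PartialEq, Debug)]
struct AliasEntry {
    timestamp: u64,
    target: Option<u64>, // bucket id; None means the alias is deleted
}

fn merge(a: &AliasEntry, b: &AliasEntry) -> AliasEntry {
    if a.timestamp != b.timestamp {
        // Different timestamps: last writer wins
        std::cmp::max_by_key(a.clone(), b.clone(), |e| e.timestamp)
    } else if a.target == b.target {
        a.clone()
    } else {
        // Same timestamp, conflicting targets: resolve to deletion
        AliasEntry { timestamp: a.timestamp, target: None }
    }
}
```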
Co-authored-by: Trinity Pointard <trinity.pointard@gmail.com>
Reviewed-on: Deuxfleurs/garage#174
Co-authored-by: trinity-1686a <trinity.pointard@gmail.com>
Co-committed-by: trinity-1686a <trinity.pointard@gmail.com>
fix #161
The current request router grew organically and was getting messier and messier with each addition.
This router covers the existing API endpoints exhaustively (with the exceptions listed in [#161 (comment)](Deuxfleurs/garage#161 (comment)), either because the new and old API endpoints can't feasibly be differentiated, or because they are more Lambda than S3).
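A minimal sketch of the approach (illustrative names, not the actual router code): each endpoint becomes an enum variant, so the compiler checks that the dispatch match is exhaustive.

```rust
// Hypothetical sketch: one variant per supported endpoint; a match on
// the enum is checked for exhaustiveness by the compiler, so adding a
// variant forces every dispatch site to handle it.
enum Endpoint {
    GetObject { key: String },
    PutObject { key: String },
    ListObjectsV2 { prefix: Option<String> },
    // ... one variant per supported endpoint
}

fn dispatch(endpoint: Endpoint) -> &'static str {
    match endpoint {
        Endpoint::GetObject { .. } => "handle_get_object",
        Endpoint::PutObject { .. } => "handle_put_object",
        Endpoint::ListObjectsV2 { .. } => "handle_list_objects_v2",
    }
}
```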
Co-authored-by: Trinity Pointard <trinity.pointard@gmail.com>
Reviewed-on: Deuxfleurs/garage#163
Reviewed-by: Alex <alex@adnab.me>
Co-authored-by: trinity-1686a <trinity.pointard@gmail.com>
Co-committed-by: trinity-1686a <trinity.pointard@gmail.com>
- change the terminology: the network configuration becomes the role
table, and the configuration of a node becomes the node's role
- the modification of the role table takes place in two steps: first,
changes are staged in a CRDT data structure; then, once the user is
happy with the changes, they can commit them all at once (or revert
them) (see the sketch after this list)
- update documentation
- fix tests
- implement smarter partition assignation algorithm
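A minimal local sketch of that staged/commit flow, with hypothetical types; in garage the staging area is a replicated CRDT rather than a plain map:

```rust
use std::collections::HashMap;

// Hypothetical sketch: role changes accumulate in a staging area and
// only become the active role table on an explicit commit, or are
// dropped on revert.
#[derive(Default)]
struct RoleTable {
    active: HashMap<String, String>, // node id -> role
    staged: HashMap<String, String>, // pending, uncommitted changes
}

impl RoleTable {
    fn stage(&mut self, node: &str, role: &str) {
        self.staged.insert(node.to_string(), role.to_string());
    }
    fn commit(&mut self) {
        let staged = std::mem::take(&mut self.staged);
        self.active.extend(staged);
    }
    fn revert(&mut self) {
        self.staged.clear();
    }
}
```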
This patch breaks the format of the network configuration: when
migrating, the cluster will be in a state where no roles are assigned.
All roles must be re-assigned and committed at once. This migration
should not pose an issue.