+++
title = "Testing strategy"
weight = 30
+++

## Testing Garage

Currently, we have the following tests:

- some unit tests spread around the codebase
- integration tests written in Rust (`src/garage/test`) that check that Garage operations perform correctly (a sketch of their general shape follows this list)
- an integration test for compatibility with external tools (`script/test-smoke.sh`)
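
For illustration, here is a minimal sketch of the general shape of such an integration test: spawn a real `garage` process, wait for its S3 endpoint to come up, assert, and tear down. The config path and listen address below are assumptions made for the example, not the actual values used by the harness in `src/garage/test`.

```rust
use std::net::TcpStream;
use std::process::{Child, Command};
use std::thread::sleep;
use std::time::Duration;

// Assumed for this sketch: a `garage` binary on $PATH and a test config
// at /tmp/garage-test.toml that exposes the S3 API on 127.0.0.1:3900.
fn spawn_garage() -> Child {
    Command::new("garage")
        .args(["-c", "/tmp/garage-test.toml", "server"])
        .spawn()
        .expect("failed to launch the garage binary")
}

#[test]
fn s3_endpoint_comes_up() {
    let mut server = spawn_garage();
    // Poll until the S3 endpoint accepts TCP connections, or give up.
    let mut up = false;
    for _ in 0..50 {
        if TcpStream::connect("127.0.0.1:3900").is_ok() {
            up = true;
            break;
        }
        sleep(Duration::from_millis(100));
    }
    server.kill().ok();
    assert!(up, "S3 endpoint never started listening");
}
```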

We have also tried minio/mint, but it fails often and so far we have not gotten much value out of it.

In the future:

1. We'd like to have a systematic way of testing with minio/mint; it would add value to Garage by providing a compatibility score and reference that can be trusted.
2. We'd also like to do some form of testing with Jepsen.

## How to instrument Garage

We should try to test in the least invasive way possible, i.e. to minimize the impact of the testing framework on Garage's source code. This means, for example:

- Not abstracting IO/nondeterminism in the source code
- Not making `garage` a shared library (launching it with `execve` is perfectly fine)

Instead, we should focus on building a clean outer interface for the `garage` binary, for example loading configuration from environment variables instead of the configuration file, if that is helpful for writing tests.
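
For example, a test could then configure a per-test Garage instance entirely from the outside. This is only a sketch: the environment variable names below are hypothetical, invented to illustrate what such an outer interface could look like, and are not variables Garage is known to support.

```rust
use std::process::Command;

fn main() -> std::io::Result<()> {
    // Launch the real, unmodified binary (fork + execve under the hood);
    // each test picks its own secret and port through the environment.
    let mut server = Command::new("garage")
        .env("GARAGE_RPC_SECRET", "per-test-secret")       // hypothetical variable
        .env("GARAGE_S3_API_BIND_ADDR", "127.0.0.1:3900")  // hypothetical variable
        .arg("server")
        .spawn()?;

    // ... exercise the S3 API here ...

    server.kill()?;
    Ok(())
}
```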

There are two reasons for this:

- Keep the source code clean and focused
- Test something that is as close as possible to the true garage that will actually be running

Reminder: the rules of simplicity apply to any change to Garage's source code. Always question what we are doing. Never do anything just because it looks nice, or because we "think" it might be useful at some later point without knowing precisely why or when. Only do things that make perfect sense in the context of what we currently know.

## References

Testing is a research field in its own right. About testing distributed systems:

- Jepsen is a testing framework designed to test distributed systems. It can mock some parts of the system, like time and the network.
- FoundationDB Testing Approach. In their simulation framework, "all sources of nondeterminism and communication are abstracted, including network, disk, time, and pseudo random number generator", which lets them run tests while injecting simulated faults (see the sketch after this list).
- Testing Distributed Systems - a curated list of resources on testing distributed systems
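
To make the FoundationDB idea more concrete, here is a minimal sketch (with invented names) of what abstracting one source of nondeterminism, the clock, can look like in Rust. Note that this is exactly the kind of invasive abstraction we chose *not* to introduce in Garage's own source code (see above); it is shown only to illustrate the referenced approach.

```rust
use std::cell::Cell;
use std::time::{Duration, Instant};

// One source of nondeterminism (time) hidden behind a trait.
trait Clock {
    fn now(&self) -> Duration;
}

// Production implementation: the real wall clock.
struct RealClock { origin: Instant }
impl Clock for RealClock {
    fn now(&self) -> Duration { self.origin.elapsed() }
}

// Simulation implementation: time advances only when the test says so,
// which makes timeout-dependent behavior fully deterministic.
struct SimClock { now: Cell<Duration> }
impl SimClock {
    fn advance(&self, by: Duration) { self.now.set(self.now.get() + by); }
}
impl Clock for SimClock {
    fn now(&self) -> Duration { self.now.get() }
}

// Code written against `dyn Clock` runs unchanged in production or in a
// deterministic simulation that controls time and injects faults.
fn has_timed_out(clock: &dyn Clock, started_at: Duration, timeout: Duration) -> bool {
    clock.now().saturating_sub(started_at) >= timeout
}
```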

About S3 compatibility:

About benchmarking S3 (I think it is not necessarily very relevant for this iteration):

Engineering blog posts:

Interesting blog posts on the blog of the Sled database:

Misc:

- mutagen - mutation testing is a way to assess the quality of our tests by mutating the code and seeing whether the mutation makes the tests fail
- fuzzing - cargo supports fuzzing; it could be a way to test our software's reliability in the presence of garbage data (a sketch follows this list)
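
As a sketch of the fuzzing idea: with `cargo-fuzz`, a fuzz target is a small file that feeds arbitrary bytes into some parsing routine and checks only that nothing panics. The `parse_request` function below is a hypothetical stand-in; a real target would call an actual Garage entry point.

```rust
// fuzz/fuzz_targets/parse_request.rs - run with `cargo fuzz run parse_request`.
#![no_main]
use libfuzzer_sys::fuzz_target;

// Hypothetical stand-in for a real Garage parsing routine.
fn parse_request(data: &[u8]) -> Result<(), ()> {
    if data.is_empty() { return Err(()); }
    Ok(())
}

fuzz_target!(|data: &[u8]| {
    // The only assertion is "does not crash or panic on arbitrary input".
    let _ = parse_request(data);
});
```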