Does not work on AlmaLinux (RedHat 8/CentOS and so on) #359

Closed
opened 2022-08-12 04:20:07 +00:00 by Mako · 7 comments

Fresh default install from cargo

metadata_dir = "/var/lib/garage/meta"
data_dir = "/var/lib/garage/data"
● garage.service - Garage Data Store
   Loaded: loaded (/etc/systemd/system/garage.service; disabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since Fri 2022-08-12 06:36:40; 10s ago
  Process: 1874753 ExecStart=/usr/local/bin/garage server (code=exited, status=238/STATE_DIRECTORY)                                                                                          
 Main PID: 1874753 (code=exited, status=238/STATE_DIRECTORY)

Aug 12 06:36:40  systemd[1]: Started Garage Data Store.
Aug 12 06:36:40  systemd[1]: garage.service: Main process exited, code=exited, status=238/STATE_DIRECTORY                                                                             
Aug 12 06:36:40  systemd[1]: garage.service: Failed with result 'exit-code'.

After setting SELinux to permissive mode:

● garage.service - Garage Data Store
   Loaded: loaded (/etc/systemd/system/garage.service; disabled; vendor preset: disabled)
   Active: failed (Result: core-dump) since Fri 2022-08-12 06:37:44; 2min 27s ago
 Main PID: 1874790 (code=dumped, signal=ABRT)

Aug 12 06:37:44  systemd[1]: Started Garage Data Store.
Aug 12 06:37:44  garage[1874790]:  INFO  garage::server > Loading configuration...
Aug 12 06:37:44  garage[1874790]:  ERROR garage         > panicked at 'Unable to read config file: Io(Os { code: 2, kind: NotFound, message: "No such file or directory" })', /home/user>
Aug 12 06:37:44  systemd-coredump[1874796]: Process 1874790 (garage) of user 64665 dumped core.
Aug 12 06:37:44  systemd[1]: garage.service: Main process exited, code=dumped, status=6/ABRT
Aug 12 06:37:44  systemd[1]: garage.service: Failed with result 'core-dump'.

It looks like specific install instructions for RedHat 8 versions are necessary.

Owner

Hi mako,

Sorry to learn that Garage did not work on your machine.

Where did you put your configuration file? The error says that Garage was not able to read it. Keep in mind that the systemd service file we provide is hardened (DynamicUser=true, ProtectHome=true, etc.). One of these hardening options (ProtectHome) prevents Garage from accessing /home.

This hardening is probably not enforced by SELinux but by another security mechanism in the Linux kernel, which can be as simple as mounting an empty folder over /home. In other words, setting SELinux to permissive mode does not deactivate all of these hardenings.
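To see the effect for yourself, here is a minimal sketch (assuming systemd-run is available, as it is on RHEL 8) that runs ls inside a transient unit hardened the same way:

# /home as seen from a normal shell
ls /home
# /home as seen by a unit with ProtectHome=true: the listing should fail or come back
# empty, because systemd hides the real /home from the unit
sudo systemd-run --wait --pipe -p ProtectHome=true ls -a /home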

From your logs, it seems the path you indicated to the Garage server is located in your home directory, so I am pretty convinced that is the issue here.

So my first advice would be to either:

  • Move your configuration file to /etc, edit your systemd service to match, and reload systemd (systemctl daemon-reload); see the sketch below.
  • Remove the hardening from our service by editing its file (not recommended, as it puts your server at risk if a security issue is ever discovered in Garage), and again do not forget to reload (systemctl daemon-reload).
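
For the first option, a rough sketch of the commands (the source path below is a placeholder; adjust it to wherever your garage.toml currently lives):

sudo mv /path/to/your/garage.toml /etc/garage.toml   # move the config out of /home
sudo systemctl daemon-reload                         # pick up any change you made to the unit file
sudo systemctl restart garage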

You can get more information about our systemd service in our documentation: https://garagehq.deuxfleurs.fr/documentation/cookbook/systemd/

Let me know if it fixes your problem!

quentin added the Documentation label 2022-08-12 07:28:27 +00:00
Author

> Where did you put your configuration file? The error says that Garage was not able to read it. Keep in mind that the systemd service file we provide is hardened (DynamicUser=true, ProtectHome=true, etc.). One of these hardening options (ProtectHome) prevents Garage from accessing /home.

I set everything up following your default instructions.
So it is in /etc/garage/garage.toml with:

metadata_dir = "/var/lib/garage/meta"
data_dir = "/var/lib/garage/data"

replication_mode = "none"

rpc_bind_addr = "[::]:3901"


Now I removed the cargo version and installed the latest
https://garagehq.deuxfleurs.fr/_releases/v0.7.2.1/x86_64-unknown-linux-musl/garage to /usr/local/bin/:

/usr/local/bin/garage server
 INFO  garage::server > Loading configuration...
 ERROR garage         > panicked at 'Unable to read config file: Io(Os { code: 2, kind: NotFound, message: "No such file or directory" })', server.rs:30:43 
ls -la /etc/garage/garage.toml
-rw-r--r--. 1 root root 459 Aug 12 07:37 /etc/garage/garage.toml

I think the default security setup of RedHat blocks access to the necessary files.

So the setup instructions do not work on RedHat 8 clones.
Just test it on one.
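
One way to check whether SELinux is actually denying something (assuming the audit daemon is running, as it is by default on RHEL 8) is to search the audit log for recent AVC denials:

getenforce                               # confirm the current SELinux mode
sudo ausearch -m avc -ts recent          # list recent SELinux denials, if any mention garage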

Owner

I tested just now on a fresh Scaleway DEV1-S running AlmaLinux 8.6 and it works (see log below).

Based on your last message, it seems you put your config file at the following path: /etc/garage/garage.toml. That is not the default path checked by Garage, which tries to open /etc/garage.toml. So you can either move your config file directly into /etc (not a subfolder) to match the default path, OR you can inform Garage of the non-standard path you chose by running garage -c /etc/garage/garage.toml server.

This logic is also described in our quickstart (https://garagehq.deuxfleurs.fr/documentation/quick-start/#writing-a-first-configuration-file). Our systemd doc page (https://garagehq.deuxfleurs.fr/documentation/cookbook/systemd/) also mentions that we assume you put your configuration file at /etc/garage.toml.
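
Concretely, assuming your file currently sits at /etc/garage/garage.toml, either of the following should work:

# option 1: move the config to the default location Garage checks
sudo mv /etc/garage/garage.toml /etc/garage.toml

# option 2: keep your custom path and pass it explicitly
garage -c /etc/garage/garage.toml server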

If it does not solve your problem, can you be more precise about the flavor of AlmaLinux you are running, and about any additional packages and hardening you installed/configured?


And I have just tested on AlmaLinux 8.6; some info about the VPS as proof:

[root@testing-garage ~]# uname -a
Linux testing-garage 4.18.0-372.9.1.el8.x86_64 #1 SMP Tue May 10 08:57:35 EDT 2022 x86_64 x86_64 x86_64 GNU/Linux
[root@testing-garage ~]# cat /etc/redhat-release
AlmaLinux release 8.6 (Sky Tiger)

I created my config file by copy/pasting the one from our quickstart:

[root@testing-garage ~]# cat > /etc/garage.toml <<EOF
> metadata_dir = "/tmp/meta"
> data_dir = "/tmp/data"
>
> replication_mode = "none"
>
> rpc_bind_addr = "[::]:3901"
> rpc_public_addr = "127.0.0.1:3901"
> rpc_secret = "1799bccfd7411eddcf9ebd316bc1f5287ad12a68094e1c6ac6abde7e6feae1ec"
>
> bootstrap_peers = []
>
> [s3_api]
> s3_region = "garage"
> api_bind_addr = "[::]:3900"
> root_domain = ".s3.garage.localhost"
>
> [s3_web]
> bind_addr = "[::]:3902"
> root_domain = ".web.garage.localhost"
> index = "index.html"
> EOF

Then I downloaded and chmoded the binary as follows:

[root@testing-garage ~]# curl https://garagehq.deuxfleurs.fr/_releases/v0.7.2.1/x86_64-unknown-linux-musl/garage -o /usr/local/bin/garage
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 46.6M  100 46.6M    0     0  14.8M      0  0:00:03  0:00:03 --:--:-- 14.8M
[root@testing-garage ~]# chmod +x /usr/local/bin/garage

And finally I was able to run the server seamlessly as you can see:

[root@testing-garage ~]# garage server
 INFO  garage::server > Loading configuration...
 INFO  garage::server > Opening database...
 INFO  garage::server > Initializing background runner...
 INFO  garage::server > Initializing Garage main data store...
 INFO  garage_model::garage > Initialize membership management system...
 INFO  garage_rpc::system   > Generating new node key pair.
 INFO  garage_rpc::system   > Node ID of this node: 88b586fe4eecf620
 INFO  garage_rpc::system   > No valid previous cluster layout stored (IO error: No such file or directory (os error 2)), starting fresh.
 WARN  garage_rpc::ring     > Could not build ring: network role assignation data has invalid length
 INFO  garage_model::garage > Initialize block manager...
 INFO  garage_model::garage > Initialize bucket_table...
 INFO  garage_model::garage > Initialize bucket_alias_table...
 INFO  garage_model::garage > Initialize key_table_table...
 INFO  garage_model::garage > Initialize block_ref_table...
 INFO  garage_model::garage > Initialize version_table...
 INFO  garage_model::garage > Initialize object_table...
 INFO  garage_model::garage > Initialize K2V counter table...
 INFO  garage_model::garage > Initialize K2V subscription manager...
 INFO  garage_model::garage > Initialize K2V item table...
 INFO  garage_model::garage > Initialize Garage...
 INFO  garage::server       > Initialize tracing...
 INFO  garage::server       > Initialize Admin API server and metrics collector...
 INFO  garage::server       > Create admin RPC handler...
 INFO  garage::server       > Initializing S3 API server...
 INFO  garage::server       > Initializing K2V API server...
 INFO  garage::server       > Initializing web server...
 INFO  garage::server       > Launching Admin API server...
 INFO  garage_util::background > Worker started: Merkle tree updater for bucket_v2
 INFO  garage_util::background > Worker started: table sync watcher for bucket_v2
 INFO  garage_util::background > Worker started: table syncer for bucket_v2
 INFO  garage_util::background > Worker started: GC loop for bucket_v2
 INFO  garage_util::background > Worker started: Merkle tree updater for bucket_alias
 INFO  garage_util::background > Worker started: table sync watcher for bucket_alias
 INFO  garage_util::background > Worker started: table syncer for bucket_alias
 INFO  garage_util::background > Worker started: GC loop for bucket_alias
 INFO  garage_util::background > Worker started: Merkle tree updater for key
 INFO  garage_util::background > Worker started: table sync watcher for key
 INFO  garage_util::background > Worker started: table syncer for key
 INFO  garage_util::background > Worker started: GC loop for key
 INFO  garage_util::background > Worker started: Merkle tree updater for block_ref
 INFO  garage_util::background > Worker started: table sync watcher for block_ref
 INFO  garage_util::background > Worker started: table syncer for block_ref
 INFO  garage_util::background > Worker started: GC loop for block_ref
 INFO  garage_util::background > Worker started: Merkle tree updater for version
 INFO  garage_util::background > Worker started: table sync watcher for version
 INFO  garage_util::background > Worker started: table syncer for version
 INFO  garage_util::background > Worker started: GC loop for version
 INFO  garage_util::background > Worker started: Merkle tree updater for object
 INFO  garage_util::background > Worker started: table sync watcher for object
 INFO  garage_util::background > Worker started: table syncer for object
 INFO  garage_util::background > Worker started: GC loop for object
 INFO  garage_util::background > Worker started: Merkle tree updater for k2v_index_counter
 INFO  garage_util::background > Worker started: table sync watcher for k2v_index_counter
 INFO  garage_util::background > Worker started: table syncer for k2v_index_counter
 INFO  garage_util::background > Worker started: GC loop for k2v_index_counter
 INFO  garage_util::background > Worker started: k2v_index_counter index counter propagator
 INFO  garage_util::background > Worker started: Merkle tree updater for k2v_item
 INFO  garage_util::background > Worker started: table sync watcher for k2v_item
 INFO  garage_util::background > Worker started: table syncer for k2v_item
 INFO  garage_util::background > Worker started: GC loop for k2v_item
 INFO  netapp::netapp          > Listening on [::]:3901
 INFO  garage_rpc::system      > Doing a bootstrap/discovery step (not_configured: true, no_peers: false, bad_peers: true)
 INFO  garage_api::generic_server > S3 API server listening on http://[::]:3900
 INFO  garage_web::web_server     > Web server listening on http://[::]:3902
Mako closed this issue 2022-08-13 03:02:42 +00:00
Mako reopened this issue 2022-08-13 07:03:48 +00:00
Author

> Based on your last message, it seems you put your config file at the following path: /etc/garage/garage.toml. That is not the default path checked by Garage, which tries to open /etc/garage.toml. So you can either move your config file directly into /etc (not a subfolder) to match the default path, OR you can inform Garage of the non-standard path you chose by running garage -c /etc/garage/garage.toml server.
>
> This logic is also described in our quickstart. Our systemd doc page also mentions that we assume you put your configuration file at /etc/garage.toml.

I put my config at /etc/garage/garage.toml based on your cookbook/real-world instructions page (https://garagehq.deuxfleurs.fr/documentation/cookbook/real-world/):

"A valid /etc/garage/garage.toml for our cluster would look as follows..."

So it is just necessary to fix the wrong path in the instructions on that page.

With your great help, Garage now works as you showed above.

But starting it from systemd still does not work; please check that too.

sudo systemctl start garage
sudo systemctl status garage

● garage.service - Garage Data Store
   Loaded: loaded (/etc/systemd/system/garage.service; disabled; vendor preset: disabled)
   Active: failed (Result: core-dump) since Sat 2022-08-13 09:10:58; 8s ago
  Process: 1888465 ExecStart=/usr/local/bin/garage server (code=dumped, signal=ABRT)
 Main PID: 1888465 (code=dumped, signal=ABRT)

Aug 13 09:10:58 serv systemd[1]: Started Garage Data Store.
Aug 13 09:10:58 serv garage[1888465]:  INFO  garage::server > Loading configuration...
Aug 13 09:10:58 serv garage[1888465]:  INFO  garage::server > Opening database...
Aug 13 09:10:58 serv garage[1888465]:  ERROR garage         > panicked at 'Unable to open sled DB: Io(Os { code: 13, kind: PermissionDenied, message: "Permission denied" })', server.rs:4>
Aug 13 09:10:58 serv systemd-coredump[1888469]: Process 1888465 (garage) of user 64665 dumped core.
Aug 13 09:10:58 serv systemd[1]: garage.service: Main process exited, code=dumped, status=6/ABRT
Aug 13 09:10:58 serv systemd[1]: garage.service: Failed with result 'core-dump'.

Thank you very much!!

Owner

So I tried on a fresh AlmaLinux with systemd:

[root@alma ~]# uname -a
Linux alma 4.18.0-372.9.1.el8.x86_64 #1 SMP Tue May 10 08:57:35 EDT 2022 x86_64 x86_64 x86_64 GNU/Linux
[root@alma ~]# cat /etc/redhat-release
AlmaLinux release 8.6 (Sky Tiger)

I copy/pasted our systemd service:

[root@alma ~]# cat /etc/systemd/system/garage.service
[Unit]
Description=Garage Data Store
After=network-online.target
Wants=network-online.target

[Service]
Environment='RUST_LOG=garage=info' 'RUST_BACKTRACE=1'
ExecStart=/usr/local/bin/garage server
StateDirectory=garage
DynamicUser=true
ProtectHome=true
NoNewPrivileges=true

[Install]
WantedBy=multi-user.target

I started Garage and looked at its status; it works:

[root@alma ~]# systemctl status garage
● garage.service - Garage Data Store
   Loaded: loaded (/etc/systemd/system/garage.service; disabled; vendor preset: disabled)
   Active: active (running) since Tue 2022-08-16 07:29:12 UTC; 45s ago
 Main PID: 1674 (garage)
    Tasks: 11 (limit: 12244)
   Memory: 28.9M
   CGroup: /system.slice/garage.service
           └─1674 /usr/local/bin/garage server

Aug 16 07:29:12 alma garage[1674]:  INFO  garage_util::background > Worker started: k2v_index_counter index counter propagator
Aug 16 07:29:12 alma garage[1674]:  INFO  garage_util::background > Worker started: Merkle tree updater for k2v_item
Aug 16 07:29:12 alma garage[1674]:  INFO  garage_util::background > Worker started: table sync watcher for k2v_item
Aug 16 07:29:12 alma garage[1674]:  INFO  garage_util::background > Worker started: table syncer for k2v_item
Aug 16 07:29:12 alma garage[1674]:  INFO  garage_util::background > Worker started: GC loop for k2v_item
Aug 16 07:29:12 alma garage[1674]:  INFO  garage_rpc::system      > Doing a bootstrap/discovery step (not_configured: true, no_peers: false, bad_peers: true)
Aug 16 07:29:12 alma garage[1674]:  INFO  garage_util::background > Worker started: table sync watcher for object
Aug 16 07:29:12 alma garage[1674]:  INFO  garage_web::web_server  > Web server listening on http://[::]:3902
Aug 16 07:29:12 alma garage[1674]:  INFO  garage_api::generic_server > S3 API server listening on http://[::]:3900
Aug 16 07:29:22 alma garage[1674]:  INFO  garage_util::background    > Worker started: block resync worker

SELinux is activated (and I checked journalctl, no SELinux alerts):

[root@alma ~]# getenforce
Enforcing

And I am able to interact with garage after that:

[root@alma ~]# garage status
 INFO  netapp::netapp > Connected to 127.0.0.1:3901, negotiating handshake...
 INFO  netapp::netapp > Connection established to 5c60359f1ee6a383
==== HEALTHY NODES ====
ID                Hostname  Address         Tags              Zone  Capacity
5c60359f1ee6a383  alma      127.0.0.1:3901  NO ROLE ASSIGNED
[root@alma ~]# garage layout assign -c 1 -z dc1 5c60 ; garage layout apply --version 1
 INFO  netapp::netapp > Connected to 127.0.0.1:3901, negotiating handshake...
 INFO  netapp::netapp > Connection established to 5c60359f1ee6a383
Role changes are staged but not yet commited.
Use `garage layout show` to view staged role changes,
and `garage layout apply` to enact staged changes.
 INFO  netapp::netapp > Connected to 127.0.0.1:3901, negotiating handshake...
 INFO  netapp::netapp > Connection established to 5c60359f1ee6a383
Calculating updated partition assignation, this may take some time...

Target number of partitions per node:
5c60359f1ee6a383	256

New number of partitions per node:
5c60359f1ee6a383	256	(100% of 256)

Number of partitions that move:
	256	[] -> [5c60359f1ee6a383]

New cluster layout with updated role assignation has been applied in cluster.
Data will now be moved around between nodes accordingly.
[root@alma ~]# garage status
 INFO  netapp::netapp > Connected to 127.0.0.1:3901, negotiating handshake...
 INFO  netapp::netapp > Connection established to 5c60359f1ee6a383
==== HEALTHY NODES ====
ID                Hostname  Address         Tags  Zone  Capacity
5c60359f1ee6a383  alma      127.0.0.1:3901  []    dc1   1

So now, I am pretty convinced that Garage works on a vanilla AlmaLinux.
Concerning your problem, I think it is specific to your deployment.

Garage reports the following error:

ERROR garage         > panicked at 'Unable to open sled DB: Io(Os { code: 13, kind: PermissionDenied, message: "Permission denied" })', server.rs:4>

Sled is currently used to store our metadata, so the error refers to this line in your config file:

metadata_dir = "/var/lib/garage/meta"

Are you sure your /etc/garage.toml file has this path set even after our previous test where I changed it to /tmp? Do you have special permissions on your /var/lib/ folder?
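
A couple of checks that could help narrow this down (namei and ls -Z are both standard on RHEL 8; the paths below assume the default metadata_dir):

namei -l /var/lib/garage/meta                                    # permissions of every component along the path
ls -ldZ /var/lib /var/lib/garage /var/lib/private 2>/dev/null    # SELinux contexts, in case a relabel is needed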

To help you check your paths, here is what it looks like on my filesystem after Garage's first successful run; make sure that files, folders and symlinks can be created there:

[root@alma ~]# ls -lah /var/lib/garage /var/lib/private /var/lib/private/garage /var/lib/private/garage/meta
lrwxrwxrwx. 1 root   root   14 Aug 16 07:29 /var/lib/garage -> private/garage

/var/lib/private:
total 4.0K
drwx------.  3 root   root     20 Aug 16 07:29 .
drwxr-xr-x. 35 root   root   4.0K Aug 16 07:29 ..
drwxr-xr-x.  3 garage garage   18 Aug 16 07:29 garage

/var/lib/private/garage:
total 0
drwxr-xr-x. 3 garage garage 18 Aug 16 07:29 .
drwx------. 3 root   root   20 Aug 16 07:29 ..
drwxr-xr-x. 3 garage garage 91 Aug 16 07:31 meta

/var/lib/private/garage/meta:
total 16K
drwxr-xr-x. 3 garage garage  91 Aug 16 07:31 .
drwxr-xr-x. 3 garage garage  18 Aug 16 07:29 ..
-rw-r--r--. 1 garage garage 503 Aug 16 07:31 cluster_layout
drwxr-xr-x. 3 garage garage  41 Aug 16 07:29 db
-rw-------. 1 garage garage  64 Aug 16 07:29 node_key
-rw-r--r--. 1 garage garage  32 Aug 16 07:29 node_key.pub
-rw-r--r--. 1 garage garage  49 Aug 16 07:38 peer_list

Note that I did not create a user named garage; this is handled by systemd directly through the DynamicUser hardening, as we can see here:

[root@alma ~]# cat /etc/passwd|grep garage || echo not found
not found
[root@alma ~]# getent passwd|grep garage
garage:*:64665:64665:Dynamic User:/:/sbin/nologin

If it still does not work for you, can you try removing the hardening:

  1. Make sure Garage is stopped.
  2. Remove /var/lib/garage and /var/lib/private/garage/ if they exist.
  3. Remove the following lines from your service file (the resulting [Service] section is sketched below):
     StateDirectory=garage
     DynamicUser=true
     ProtectHome=true
     NoNewPrivileges=true
  4. Run mkdir -p /var/lib/garage
  5. Run systemctl daemon-reload
  6. Run systemctl start garage
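
For reference, after step 3 the [Service] section of the unit would be reduced to roughly this (a sketch based on the unit file shown above):

[Service]
Environment='RUST_LOG=garage=info' 'RUST_BACKTRACE=1'
ExecStart=/usr/local/bin/garage server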
Owner

Is this fixed?

Owner

Closing due to inactivity, feel free to re-open this issue or a new one with additional information.
