Upgraded Synapse and Element-web on the cluster's Nomad, and updated the OP guide
parent d286da23d8
commit 24dcc09695

2 changed files with 11 additions and 11 deletions
@@ -15,7 +15,7 @@ job "im" {
       driver = "docker"

       config {
-        image = "superboum/amd64_synapse:v40"
+        image = "particallydone/amd64_synapse:v41"
         network_mode = "host"
         readonly_rootfs = true
         ports = [ "client_port", "federation_port" ]
@@ -220,7 +220,7 @@ job "im" {
     task "server" {
       driver = "docker"
       config {
-        image = "superboum/amd64_riotweb:v19"
+        image = "particallydone/amd64_riotweb:v20"
         ports = [ "web_port" ]
         volumes = [
           "secrets/config.json:/srv/http/config.json"
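Both image references above move to new `particallydone` tags. As an optional sanity check before deploying (this is only a suggestion and assumes the tags are already published on a registry reachable from your machine), you can pull them locally:

```
# optional: confirm the new tags resolve before touching the job file
docker pull particallydone/amd64_synapse:v41
docker pull particallydone/amd64_riotweb:v20
```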
@@ -38,9 +38,9 @@ Don't forget to commit and push your changes before doing anything else!

 ## 2. Deploy the new containers

-Now, we will edit the deployment file `app/deployment/im.hcl`.
+Now, we will edit the deployment file `app/im/deploy/im.hcl`.

-Find where the image is defined in the file, for example in Riot, it will look like that:
+Find where the image is defined in the file, for example Element-web will look like that:


 ```hcl
@@ -56,25 +56,25 @@ Find where the image is defined in the file, for example in Riot, it will look like that:
 }
 ```

-And replace the `image =` entry with your image name.
-Do the same thing for `synapse`.
+And replace the `image =` entry with its new version created above.
+Do the same thing for the `synapse` service.

 Now, you need a way to access the cluster to deploy this file.
 To do this, you must bind nomad on your machine through a SSH tunnel.
-Check the end of `README.md` to do it.
+Check the end of [the parent `README.md`](../README.md) to do it.
 If you have access to the Nomad web UI when entering http://127.0.0.1:4646
 you are ready to go.

-You must have installed the Nomad command line tool on your machine (also explained in `README.md`).
+You must have installed the Nomad command line tool on your machine (also explained in [the parent `README.md`](../README.md)).

-Now, on your machine, you must be able to run (from the `app/deployment` folder) :
+Now, on your machine and from the `app/im/deploy` folder, you must be able to run:

 ```
 nomad plan im.hcl
 ```

 Check that the proposed diff corresponds to what you have in mind.
-If it seems OK, just copy paste the proposed `nomad job run ... im.hcl` command proposed as part of the output of the `nomad plan` command.
+If it seems OK, just copy paste the `nomad job run ... im.hcl` command proposed as part of the output of the `nomad plan` command.

 From now, it will take around ~2 minutes to deploy the new images.
 You can follow the deployment from the Nomad UI.
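The hunk above describes the whole deploy loop. As a rough sketch of that workflow (the SSH host is a placeholder since the real tunnel command lives in the parent `README.md`, and the `-check-index` value is only an example of what `nomad plan` prints):

```
# bind the cluster's Nomad API to localhost over SSH (host name is a placeholder)
ssh -L 4646:127.0.0.1:4646 deploy@cluster-node

# from app/im/deploy: preview the change, then run the exact command plan prints
nomad plan im.hcl
nomad job run -check-index 123456 im.hcl
```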
@@ -88,6 +88,6 @@ If something went wrong, you must rollback your deployment.
 2. Revert to this deployment with [nomad job revert](https://www.nomadproject.io/docs/commands/job/revert)

 Now, if the deployment failed, you should probably investigate what went wrong offline.
-In this case, I build a test stack with docker-compose in `app/integration` (for now, I had to do that only for plume and jitsi).
+I built a test stack with docker-compose in `app/<service>/integration` that should help you out (for now, test suites are only written for plume and jitsi).
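For the rollback path referenced in step 2, a minimal sketch (the version number is just an example; use whatever `nomad job history` reports as the last good deployment of the `im` job):

```
# list previous versions of the "im" job, then revert to a known-good one
nomad job history im
nomad job revert im 4
```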