
[Bug]: Ryuk might shutdown reused Container while connected to it #2445

Open
mdonkers opened this issue Mar 21, 2024 · 5 comments
Labels
bug An issue with the library

Comments

@mdonkers
Contributor

mdonkers commented Mar 21, 2024

Testcontainers version

0.29.1

Using the latest Testcontainers version?

Yes

Host OS

Linux

Host arch

x86

Go version

1.22

Docker version

unrelated

Docker info

unrelated

What happened?

As I was improving some integration tests in our own project, I sometimes noticed failures after we switched to reusing containers. Since I was improving the run-time of the tests, I was executing the ITs several times in a row to make sure code compilation was not included in the timing (while go clean -testcache && make integration-test; do :; done). I suspected that quickly re-running the tests was related to the failures, so I investigated further.
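For context, the reusable container is requested roughly along these lines (a minimal sketch: the image, name and ports match the logs below, the wait strategy in the real tests is SQL-based, and the helper name is made up):

    package testutil

    import (
        "context"

        tc "github.com/testcontainers/testcontainers-go"
        "github.com/testcontainers/testcontainers-go/wait"
    )

    // StartClickHouse requests a named, reusable ClickHouse container so that
    // consecutive test runs (and packages) pick up the same instance.
    func StartClickHouse(ctx context.Context) (tc.Container, error) {
        return tc.GenericContainer(ctx, tc.GenericContainerRequest{
            ContainerRequest: tc.ContainerRequest{
                Image:        "clickhouse/clickhouse-server:24.2-alpine",
                Name:         "otel-clickhouse", // fixed name, required for reuse
                ExposedPorts: []string{"9000/tcp", "8123/tcp"},
                WaitingFor:   wait.ForListeningPort("9000/tcp"),
            },
            Started: true,
            Reuse:   true, // reuse a running container with this name, if any
        })
    }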

From the testcontainers output I saw several times that more than one container was created (even though Reuse: true was set):

2024/03/21 23:10:58 🐳 Creating container for image testcontainers/ryuk:0.6.0
2024/03/21 23:10:58 ✅ Container created: efa32a26eb2e
2024/03/21 23:10:58 🐳 Starting container: efa32a26eb2e
2024/03/21 23:10:59 ✅ Container started: efa32a26eb2e
2024/03/21 23:10:59 🚧 Waiting for container id efa32a26eb2e image: testcontainers/ryuk:0.6.0. Waiting for: &{Port:8080/tcp timeout:<nil> PollInterval:100ms}
2024/03/21 23:10:59 🔔 Container is ready: efa32a26eb2e
2024/03/21 23:10:59 ✅ Container started: 0c8e757faff1
2024/03/21 23:10:59 🚧 Waiting for container id 0c8e757faff1 image: clickhouse/clickhouse-server:24.2-alpine. Waiting for: &{timeout:0xc000799058 URL:0x13356e0 Driver:clickhouse Port:9000/tcp startupTimeout:60000000000 PollInterval:100ms query:SELECT 1}
2024/03/21 23:10:59 🔔 Container is ready: 0c8e757faff1
2024/03/21 23:10:59 ✅ Container started: 0c8e757faff1
2024/03/21 23:10:59 🚧 Waiting for container id 0c8e757faff1 image: clickhouse/clickhouse-server:24.2-alpine. Waiting for: &{timeout:0xc0005849e8 URL:0x13356e0 Driver:clickhouse Port:9000/tcp startupTimeout:60000000000 PollInterval:100ms query:SELECT 1}
...
{"status":"error","errorType":"bad_data","error":"dial tcp [::1]:32918: connect: connection refused"}
...
2024/03/21 23:11:07 🐳 Creating container for image clickhouse/clickhouse-server:24.2-alpine
2024/03/21 23:11:07 🚧 Waiting for container id 7e037d775014 image: clickhouse/clickhouse-server:24.2-alpine. Waiting for: &{timeout:0xc0046bf638 URL:0x13356e0 Driver:clickhouse Port:9000/tcp startupTimeout:60000000000 PollInterval:100ms query:SELECT 1}
2024/03/21 23:11:08 🔔 Container is ready: 7e037d775014
...

As you can see, in the same test run a new ClickHouse container gets created even though we never call Terminate. Tests start failing as they can no longer connect to the old container (to which they hold a connection based on the mapped port).
My suspicion was that Ryuk was, for some reason, terminating the 'old' ClickHouse container that was still live from a previous run.

Looking at the code, this indeed appears to be what is causing it:

  1. When starting a new test run, because containers are reused based on name, testcontainers finds the 'old', still-running ClickHouse:
    c, err := p.findContainerByName(ctx, req.Name)
  2. For Ryuk however, a new Container is created based on the SessionID:
    r, err := reuseOrCreateReaper(context.WithValue(ctx, core.DockerHostContextKey, p.host), sessionID, p)
  3. The 'old' Ryuk, belonging to the previous test run, never gets a new connection, so after 10s it shuts itself down and also removes the still-running ClickHouse container.

To fix this, my proposal would be to somehow add the SessionID to the 'reusable' container name as well: either implicitly, via some flag, or by exposing the SessionID so the user can add it themselves (currently it's part of the internal package, so not reachable).
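Conceptually, the implicit variant would boil down to something like this when resolving a reusable request (a rough sketch only, not actual testcontainers-go code):

    package testcontainers // sketch only

    import "fmt"

    // Scope the user-provided container name to the current session, so a new
    // test run (new SessionID, new Reaper) no longer matches a leftover container
    // that belongs to a previous session's Reaper.
    func reusableContainerName(requestedName, sessionID string) string {
        return fmt.Sprintf("%s-%s", requestedName, sessionID)
    }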

I'm happy to work on a fix if I can get a suggestion on the preferred approach.

Relevant log output

docker ps -a output after the failure, showing two reapers and one CH container:

$ docker ps -a
CONTAINER ID   IMAGE                                      COMMAND            CREATED          STATUS          PORTS                                                        NAMES
efa32a26eb2e   testcontainers/ryuk:0.6.0                  "/bin/ryuk"        6 seconds ago    Up 5 seconds    0.0.0.0:32920->8080/tcp, :::32822->8080/tcp                  reaper_39db6ba506d2d713d174270a8ad7aeb95fcdc7e5e13895ae3be33fa70ade946a
0c8e757faff1   clickhouse/clickhouse-server:24.2-alpine   "/entrypoint.sh"   17 seconds ago   Up 16 seconds   9009/tcp, 0.0.0.0:32919->8123/tcp, 0.0.0.0:32918->9000/tcp   otel-clickhouse
cdc2a4e002e6   testcontainers/ryuk:0.6.0                  "/bin/ryuk"        17 seconds ago   Up 16 seconds   0.0.0.0:32917->8080/tcp, :::32821->8080/tcp                  reaper_ae0426fe5786540b6ee4155474bd2ccf2d21bbe9dd1e0134154826996e67fd9b

Additional information

No response

@Alviner
Contributor

Alviner commented Mar 22, 2024

As far as I know, the reaper grabs containers by labels, where the session id is already stored. Could you please attach the labels for your case?

@mdonkers
Contributor Author

Yes, that is correct. The Reaper will kill the container based on the matching SessionID label.
The problem, however, is that the 'reused' container is selected by name only; it doesn't look at the SessionID label. So a second Reaper is created, while the previously running 'reused' container is picked up again.
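To make the mismatch concrete, this is roughly the difference between the two lookups (a simplified sketch against the Docker Go SDK, v25+ API assumed; not library code, and the session id is shortened):

    package main

    import (
        "context"
        "fmt"

        "github.com/docker/docker/api/types/container"
        "github.com/docker/docker/api/types/filters"
        "github.com/docker/docker/client"
    )

    func main() {
        ctx := context.Background()
        cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
        if err != nil {
            panic(err)
        }

        // The Reaper collects by session label, so it only touches containers
        // that belong to its own session.
        bySession, _ := cli.ContainerList(ctx, container.ListOptions{
            All:     true,
            Filters: filters.NewArgs(filters.Arg("label", "org.testcontainers.sessionId=aff9f8ee...")),
        })

        // The reuse lookup goes by container name only, regardless of which
        // session (and therefore which Reaper) originally created it.
        byName, _ := cli.ContainerList(ctx, container.ListOptions{
            All:     true,
            Filters: filters.NewArgs(filters.Arg("name", "otel-clickhouse")),
        })

        fmt.Println("by session label:", len(bySession), "by name:", len(byName))
    }

So the reused container keeps carrying the sessionId label of the run that created it, while each new run starts a fresh Reaper for a new session.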

Containers created on first test run:

$ docker container ls --format "table {{.ID}}\t{{.Image}}\t{{.Names}}\t{{.Labels}}"
CONTAINER ID   IMAGE                                      NAMES                                                                     LABELS
bb1591f66ec7   clickhouse/clickhouse-server:24.2-alpine   otel-clickhouse                                                           org.testcontainers.version=0.29.1,build-url=https://github.com/ClickHouse/ClickHouse/actions/runs/8292363504,com.clickhouse.build.githash=9293d361e72be9f6ccfd444d504e2137b2e837cf,org.testcontainers=true,org.testcontainers.lang=go,org.testcontainers.sessionId=aff9f8eec6c8b2cb10a7ced14215192ac300148e6eb93d3902805c904b68dec3
824e4c0cc1ec   testcontainers/ryuk:0.6.0                  reaper_aff9f8eec6c8b2cb10a7ced14215192ac300148e6eb93d3902805c904b68dec3   org.testcontainers.sessionId=aff9f8eec6c8b2cb10a7ced14215192ac300148e6eb93d3902805c904b68dec3,org.testcontainers.version=0.29.1,org.testcontainers=true,org.testcontainers.lang=go,org.testcontainers.reaper=true,org.testcontainers.ryuk=true

Now I run the tests a second time; see the output below, where the ClickHouse container with ID bb1591f66ec7 is initially picked up again:

2024/03/22 23:04:40 🐳 Creating container for image testcontainers/ryuk:0.6.0
2024/03/22 23:04:40 ✅ Container created: f59540ea88b6
2024/03/22 23:04:40 🐳 Starting container: f59540ea88b6
2024/03/22 23:04:41 ✅ Container started: f59540ea88b6
2024/03/22 23:04:41 🚧 Waiting for container id f59540ea88b6 image: testcontainers/ryuk:0.6.0. Waiting for: &{Port:8080/tcp timeout:<nil> PollInterval:100ms}
2024/03/22 23:04:41 🔔 Container is ready: f59540ea88b6
2024/03/22 23:04:41 ✅ Container started: bb1591f66ec7
2024/03/22 23:04:41 🚧 Waiting for container id bb1591f66ec7 image: clickhouse/clickhouse-server:24.2-alpine. Waiting for: &{timeout:0xc000ca6038 URL:0x1335700 Driver:clickhouse Port:9000/tcp startupTimeout:60000000000 PollInterval:100ms query:SELECT 1}
2024/03/22 23:04:41 🔔 Container is ready: bb1591f66ec7
[clickhouse][conn=1][127.0.0.1:33094][handshake] ->  0.0.0

At some point during the test, the previous Reaper kills the existing ClickHouse container, as you can see from the mapped port returning EOF, and a new container is created:

2024/03/22 23:04:42 🔔 Container is ready: bb1591f66ec7
2024/03/22 23:04:42 ✅ Container started: bb1591f66ec7
2024/03/22 23:04:42 🚧 Waiting for container id bb1591f66ec7 image: clickhouse/clickhouse-server:24.2-alpine. Waiting for: &{timeout:0xc00074c0a8 URL:0x1335700 Driver:clickhouse Port:9000/tcp startupTimeout:60000000000 PollInterval:100ms query:SELECT 1}
    testutils.go:74: Post "http://localhost:33095?query=INSERT+INTO+<table>+FORMAT+JSON": EOF
2024/03/22 23:04:42 failed accessing container logs: Error response from daemon: can not get logs from container which is dead or marked for removal
    testcontainerprovider.go:43: unexpected container status "removing": failed to create container
2024/03/22 23:04:42 🐳 Creating container for image clickhouse/clickhouse-server:24.2-alpine
2024/03/22 23:04:42 ✅ Container created: eb9537fc1ef1
2024/03/22 23:04:42 🐳 Starting container: eb9537fc1ef1
2024/03/22 23:04:42 ✅ Container started: eb9537fc1ef1
2024/03/22 23:04:42 🚧 Waiting for container id eb9537fc1ef1 image: clickhouse/clickhouse-server:24.2-alpine. Waiting for: &{timeout:0xc000ca6068 URL:0x1335700 Driver:clickhouse Port:9000/tcp startupTimeout:60000000000 PollInterval:100ms query:SELECT 1}
2024/03/22 23:04:43 🔔 Container is ready: eb9537fc1ef1
[clickhouse][conn=1][127.0.0.1:33097][handshake] ->  0.0.0

So a new container is created, but the killed container has already caused some tests to fail.

@codefromthecrypt
Contributor

I'm not sure if it is related, but I ran into an issue when I moved tests into different packages (so a different TestMain to set up the testcontainers k3s). After one package finishes, the next dies unless I disable ryuk:

2024/03/24 20:06:23 🔥 Reaper obtained from Docker for this test session 0c4d803193470acd6fc5019c9a96e32c3a1b34a7719b5d77cad86acc3d3bf9ba
2024/03/24 20:06:23 skipping due to docker error: dial tcp [::1]:32777: connect: connection refused: Connecting to Ryuk on localhost:32777 failed: connecting to reaper failed: failed to create container

@mdonkers
Contributor Author

I'm not sure if it is related, but I ran into an issue when I moved tests into different packages (so a different TestMain to set up the testcontainers k3s). After one package finishes, the next dies unless I disable ryuk:

2024/03/24 20:06:23 🔥 Reaper obtained from Docker for this test session 0c4d803193470acd6fc5019c9a96e32c3a1b34a7719b5d77cad86acc3d3bf9ba
2024/03/24 20:06:23 skipping due to docker error: dial tcp [::1]:32777: connect: connection refused: Connecting to Ryuk on localhost:32777 failed: connecting to reaper failed: failed to create container

I think that could indeed be related. Two packages run with two different SessionIDs (during the test you should see multiple Ryuk / Reaper containers created). The 'reused' container will be killed by one of the Reapers shutting down, once its tests have finished and it has timed out. And because the other test package is connected to a different Reaper, that does not prevent the 'other' Reaper from shutting down and taking the reused container with it.

So either the Reaper should be fully reused as well, with the same semantics (i.e. the SessionID not being part of the container name), or the 'reused' container needs to be scoped to the session / package as well.

I could see both making sense from a user perspective, so it would probably have to be a config option.

@cfstras

cfstras commented Apr 8, 2024

I think I'm being hit by the same problem. I have two separate packages using the same ContainerRequest definition (from a util package) to start a database.
In about 50% of cases, the tests fail like this:

🔥 Reaper obtained from Docker for this test session 2a922fde398d6fe9e8f12ca45ef893b6ef3c42c41eacbd4abb2c2ce2c0590ea5
    db.go:57:
                Error Trace:    /.../internal/testutil/db.go:57
                                                        /opt/homebrew/Cellar/go/1.22.2/libexec/src/sync/once.go:74
                                                        /opt/homebrew/Cellar/go/1.22.2/libexec/src/sync/once.go:65
                                                        /.../internal/testutil/db.go:41
                                                        /.../internal/db/migrations_test.go:22
                Error:          Received unexpected error:
                                dial tcp [::1]:32829: connect: connection refused: Connecting to Ryuk on localhost:32829 failed: connecting to reaper failed: failed to create container
                Test:           TestMigrate
                Messages:       Could not start postgres container

As a workaround, I'm setting TESTCONTAINERS_RYUK_DISABLED=true, which means I have to be careful about cleanup, but at least the tests work.
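For reference, a minimal sketch of that workaround (hypothetical helper and image; the env var must be set before the first testcontainers call, and cleanup happens via an explicit Terminate):

    package testutil

    import (
        "context"
        "os"
        "testing"

        tc "github.com/testcontainers/testcontainers-go"
        "github.com/testcontainers/testcontainers-go/wait"
    )

    // RunWithPostgres disables Ryuk for this test binary and takes over cleanup
    // itself, terminating the container once the package's tests have finished.
    func RunWithPostgres(m *testing.M) int {
        os.Setenv("TESTCONTAINERS_RYUK_DISABLED", "true")

        ctx := context.Background()
        c, err := tc.GenericContainer(ctx, tc.GenericContainerRequest{
            ContainerRequest: tc.ContainerRequest{
                Image:        "postgres:16-alpine",
                ExposedPorts: []string{"5432/tcp"},
                Env:          map[string]string{"POSTGRES_PASSWORD": "test"},
                WaitingFor:   wait.ForListeningPort("5432/tcp"),
            },
            Started: true,
        })
        if err != nil {
            panic(err)
        }
        // Ryuk is disabled, so nothing cleans up behind us; terminate explicitly.
        defer c.Terminate(ctx)

        return m.Run()
    }

Each package's TestMain would then call os.Exit(testutil.RunWithPostgres(m)).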
