[Enhancement]: clean up old reuse containers #1166

Open
jkogler-cloudflight opened this issue Apr 29, 2024 · 4 comments

Labels
enhancement New feature or request

Comments

@jkogler-cloudflight

Problem

I'm not entirely sure if this is an intended use case or not.

In one of my projects, we have a huge number of database migrations and a lot of seeding data.
My goal was to use the new reuse feature (#1051): start the database in a Testcontainer once, apply all migrations, seed the data, and then reuse that container.

_container = new MsSqlBuilder()
    ...
    .WithCleanUp(true)
    .WithReuse(true)
    .WithLabel("reuse-id", hash_of_migrations_and_seeding)
    .Build();

Whenever a new migration is added or some seeding data changes, the hash changes, and a new container is created and used. When it stays the same, the existing container is reused.
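
For illustration, here is one way such a hash could be computed. This is only a sketch: the directory names, the *.sql glob, and the SHA-256 choice are assumptions, not part of the original setup.

using System;
using System.IO;
using System.Linq;
using System.Security.Cryptography;

// Hypothetical helper: derive a stable reuse-id from the migration and seed
// scripts, so the id only changes when their content changes.
static string ComputeReuseHash(params string[] directories)
{
    using var sha256 = SHA256.Create();

    // Sort the files so the hash does not depend on enumeration order.
    var files = directories
        .SelectMany(dir => Directory.EnumerateFiles(dir, "*.sql", SearchOption.AllDirectories))
        .OrderBy(path => path, StringComparer.Ordinal);

    using var stream = new MemoryStream();
    foreach (var file in files)
    {
        var bytes = File.ReadAllBytes(file);
        stream.Write(bytes, 0, bytes.Length);
    }

    stream.Position = 0;
    return Convert.ToHexString(sha256.ComputeHash(stream));
}

// var hash = ComputeReuseHash("Migrations", "SeedData");
// ... .WithLabel("reuse-id", hash) ...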

That actually works; however, the old containers are never cleaned up.
New containers are created, but the old containers (with different reuse-id labels) either keep running (if the .Dispose() call is skipped) or keep existing in a stopped state (if .Dispose() is used).
Either way, more and more system resources are consumed until you clean them up manually.

Solution

I would prefer that, when the reuse-id changes, all old containers with the same image but a different reuse-id are deleted automatically.

Benefit

Fewer resources used on your system.

Alternatives

We keep it as is.

Would you like to help contribute this enhancement?

Yes

@jkogler-cloudflight
Author

@david-szabo97 Since you implemented the reuse feature, what do you think?

@HofmeisterAn
Collaborator

I would prefer that, when the reuse-id changes, all old containers with the same image but a different reuse-id are deleted automatically.

We cannot detect that, as we generate the hash from the actual builder configuration.

This is somewhat akin to a chicken-and-egg problem, and it shows how effectively and almost invisibly Testcontainers handles the cleanup of test resources 🔥. It is crucial to understand the concept of Ryuk and why it is necessary (simply running the cleanup from the test process does not work reliably).

During my time at AJ, I proposed the idea of integrating external events into Ryuk: if Ryuk detects an event and exits, it would then remove all related resources. Testcontainers for .NET theoretically supports individual Ryuk instances (running independently of each other). Resources can be grouped, and the assigned Ryuk instance will manage them accordingly. AJ introduced Testcontainers Desktop (maybe that supports your use case), which also supports reuse. However, I have not had the chance to try it yet; AFAIK it is only available for macOS.
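
To illustrate the grouping idea only: a minimal sketch, assuming the WithResourceReaperSessionId builder method behaves as described above. The image and the Guid are arbitrary examples, and a dedicated Ryuk instance still has to be started for that session.

using System;
using DotNet.Testcontainers.Builders;

// Sketch: group resources under their own Ryuk instance by sharing a
// resource reaper session id. Everything registered under this id is
// managed (and eventually cleaned up) by the Ryuk assigned to it.
var reuseSessionId = Guid.NewGuid(); // arbitrary example id

var container = new ContainerBuilder()
    .WithImage("mcr.microsoft.com/mssql/server:2022-latest") // example image
    .WithResourceReaperSessionId(reuseSessionId)
    .Build();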

@jkogler-cloudflight
Author

Pity...

Theoretically, it should be possible to implement it somehow (not sure whether Ryuk supports this; so far I never had a reason to look into it).
You could specify a hardcoded label in the container definition, .WithLabel("hardcoded-testcontainer-id", "42"), so that all containers carry that label.
Then filter for all containers (running or stopped) with that label, and stop/delete every one that has a different reuse-id. At least you can do that with some bash scripting and docker container ls --all --filter "label=..."; a sketch of the same idea in C# follows below.
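
A minimal sketch of that cleanup, assuming the Docker.DotNet client. The label names and the "42" marker value are the hypothetical ones from this thread, and the routine illustrates the approach rather than anything Testcontainers provides.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Docker.DotNet;
using Docker.DotNet.Models;

// Sketch: remove every container that carries the hardcoded marker label
// but whose reuse-id label differs from the current hash.
static async Task RemoveStaleReuseContainersAsync(string currentReuseId)
{
    using var client = new DockerClientConfiguration(
        new Uri("unix:///var/run/docker.sock")) // npipe://./pipe/docker_engine on Windows
        .CreateClient();

    var containers = await client.Containers.ListContainersAsync(new ContainersListParameters
    {
        All = true, // include stopped containers
        Filters = new Dictionary<string, IDictionary<string, bool>>
        {
            ["label"] = new Dictionary<string, bool> { ["hardcoded-testcontainer-id=42"] = true }
        }
    });

    foreach (var container in containers)
    {
        if (container.Labels.TryGetValue("reuse-id", out var reuseId) && reuseId != currentReuseId)
        {
            // Force removes the container even if it is still running.
            await client.Containers.RemoveContainerAsync(
                container.ID,
                new ContainerRemoveParameters { Force = true });
        }
    }
}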

@mrudat

mrudat commented May 14, 2024

I suspect that you'd need to keep track of the hashes of the containers being requested in a constantly running container, akin to (or as an extension of) Ryuk; if you don't see a request for a given hash for, say, a day, you can probably expire the old resources.
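
A toy sketch of that bookkeeping, purely illustrative: the in-memory store and the one-day TTL are assumptions, and a real implementation would have to live in a long-running process.

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;

// Sketch: remember when each reuse-id was last requested and report which
// ones have not been seen within the time-to-live window.
class ReuseIdRegistry
{
    private readonly ConcurrentDictionary<string, DateTimeOffset> _lastSeen = new();
    private readonly TimeSpan _ttl;

    public ReuseIdRegistry(TimeSpan ttl) => _ttl = ttl;

    // Call on every container request to mark the hash as still in use.
    public void Touch(string reuseId) => _lastSeen[reuseId] = DateTimeOffset.UtcNow;

    // Hashes not requested within the TTL; their containers are candidates for removal.
    public IReadOnlyCollection<string> Expired() =>
        _lastSeen
            .Where(entry => DateTimeOffset.UtcNow - entry.Value > _ttl)
            .Select(entry => entry.Key)
            .ToArray();
}

// var registry = new ReuseIdRegistry(TimeSpan.FromDays(1));
// registry.Touch(hash); // on every container request
// foreach (var stale in registry.Expired()) { /* remove matching containers */ }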
