
[Bug]: Concurrent map writes in dockerCompose.Up #811

Closed
joshua-hill-form3 opened this issue Feb 7, 2023 · 2 comments · Fixed by #812
Labels
bug An issue with the library

Comments

joshua-hill-form3 (Contributor) commented Feb 7, 2023

Testcontainers version

0.17.0

Using the latest Testcontainers version?

Yes

Host OS

Darwin

Host arch

amd64

Go version

1.18.10

Docker version

not relevant

Docker info

not relevant

What happened?

Given a test setup that uses the Docker Compose module
And there are multiple wait strategies
When Up is called
Then a panic occurs with the output fatal error: concurrent map writes

Relevant log output

fatal error: concurrent map writes
fatal error: concurrent map writes

goroutine 167 [running]:
runtime.throw({0x29dfd2f?, 0x70?})
	/opt/actions-runner/_work/_tool/go/1.18.10/x64/src/runtime/panic.go:992 +0x71 fp=0xc0005d3c78 sp=0xc0005d3c48 pc=0x43aa91
runtime.mapassign_faststr(0x2582120, 0xc000a35e60, {0x29b9e06, 0x5})
	/opt/actions-runner/_work/_tool/go/1.18.10/x64/src/runtime/map_faststr.go:295 +0x38b fp=0xc0005d3ce0 sp=0xc0005d3c78 pc=0x4158ab
github.com/testcontainers/testcontainers-go/modules/compose.(*dockerCompose).lookupContainer(0xc00093b8c0, {0x2f88698, 0xc000c11840}, {0x29b9e06, 0x5})
	/opt/actions-runner/_work/REDACTED/REDACTED/vendor/github.com/testcontainers/testcontainers-go/modules/compose/compose_api.go:300 +0x4ea fp=0xc0005d3f20 sp=0xc0005d3ce0 pc=0x1c7766a
github.com/testcontainers/testcontainers-go/modules/compose.(*dockerCompose).Up.func1()
	/opt/actions-runner/_work/REDACTED/REDACTED/vendor/github.com/testcontainers/testcontainers-go/modules/compose/compose_api.go:233 +0x4d fp=0xc0005d3f78 sp=0xc0005d3f20 pc=0x1c76b4d
golang.org/x/sync/errgroup.(*Group).Go.func1()
	/opt/actions-runner/_work/REDACTED/REDACTED/vendor/golang.org/x/sync/errgroup/errgroup.go:75 +0x64 fp=0xc0005d3fe0 sp=0xc0005d3f78 pc=0xb51644
runtime.goexit()
	/opt/actions-runner/_work/_tool/go/1.18.10/x64/src/runtime/asm_amd64.s:1571 +0x1 fp=0xc0005d3fe8 sp=0xc0005d3fe0 pc=0x46e3a1
created by golang.org/x/sync/errgroup.(*Group).Go
	/opt/actions-runner/_work/REDACTED/REDACTED/vendor/golang.org/x/sync/errgroup/errgroup.go:72 +0xa5

The concurrent map write occurs in dockerCompose.lookupContainer.

The dockerCompose.containers map is written to by multiple goroutines launched by dockerCompose.Up, without any synchronization.
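One way to make the lookup safe is to guard the containers map with a mutex so the check-then-write is atomic. The sketch below illustrates that pattern only; it is not necessarily the approach taken in #812, and composeStack/cachedContainer are hypothetical stand-ins, not the library's actual types.

// Illustrative sketch: serialize access to a shared container cache with a
// mutex so concurrent goroutines launched per wait strategy cannot race.
package main

import (
	"context"
	"fmt"
	"sync"

	"golang.org/x/sync/errgroup"
)

type cachedContainer struct {
	service string
}

type composeStack struct {
	mu         sync.Mutex
	containers map[string]*cachedContainer
}

// lookupContainer returns the cached container for a service, creating and
// caching it on first use; the mutex serializes access to the map.
func (s *composeStack) lookupContainer(_ context.Context, svc string) *cachedContainer {
	s.mu.Lock()
	defer s.mu.Unlock()
	if c, ok := s.containers[svc]; ok {
		return c
	}
	c := &cachedContainer{service: svc}
	s.containers[svc] = c
	return c
}

func main() {
	stack := &composeStack{containers: map[string]*cachedContainer{}}
	g, ctx := errgroup.WithContext(context.Background())
	// One goroutine per waited-on service, mirroring how Up fans out work.
	for _, svc := range []string{"service_a", "service_b", "service_c", "service_d"} {
		svc := svc // capture loop variable (needed on Go < 1.22)
		g.Go(func() error {
			stack.lookupContainer(ctx, svc)
			return nil
		})
	}
	if err := g.Wait(); err != nil {
		fmt.Println("error:", err)
	}
	fmt.Println(len(stack.containers), "containers cached")
}

Without the mutex, two goroutines can hit the map assignment at the same time, which is exactly the fatal error: concurrent map writes reported above.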

Additional information

This panic can occur when there are multiple wait strategies, for example:

stack, err := compose.NewDockerComposeWith(
  compose.WithStackFiles("some-path"),
  compose.StackIdentifier("some-project"))
if err != nil {
  return err
}

stack.
  WaitForService("service_a", wait.NewLogStrategy("some log for service a")).
  WaitForService("service_b", wait.NewLogStrategy("some log for service b")).
  WaitForService("service_c", wait.NewLogStrategy("some log for service c")).
  WaitForService("service_d", wait.NewLogStrategy("some log for service d"))

return stack.Up(context.Background(), compose.Wait(true))
joshua-hill-form3 added the bug label on Feb 7, 2023
joshua-hill-form3 (Contributor, Author) commented

This issue was spotted by @mdelapenya: #476 (comment).

mdelapenya (Collaborator) commented

Thanks for opening this issue. Indeed, I remember discussing this when the compose code came in.

Please let us know if you feel comfortable submitting a fix 🙏
