
[Bug]: Applying Wait Strategy adds 4 Goroutine Leaks in v0.13.0 #2007

Closed
mhogara opened this issue Dec 19, 2023 · 4 comments
Labels: bug (An issue with the library)

Comments

mhogara commented Dec 19, 2023

Testcontainers version

0.13.0

Using the latest Testcontainers version?

No

Host OS

Linux

Host arch

x86_64

Go version

1.21.3

Docker version

Client:
 Version:           20.10.5+dfsg1
 API version:       1.41
 Go version:        go1.15.15
 Git commit:        55c4c88
 Built:             Mon May 30 18:34:49 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server:
 Engine:
  Version:          20.10.5+dfsg1
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.15.15
  Git commit:       363e9a8
  Built:            Mon May 30 18:34:49 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.13~ds1
  GitCommit:        1.4.13~ds1-1~deb11u4
 runc:
  Version:          1.0.0~rc93+ds1
  GitCommit:        1.0.0~rc93+ds1-5+deb11u2
 docker-init:
  Version:          0.19.0
  GitCommit:

Docker info

Client:
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc., v0.11.2)
  compose: Docker Compose (Docker Inc., v2.20.2)

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 6682
 Server Version: 20.10.5+dfsg1
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc io.containerd.runc.v2 io.containerd.runtime.v1.linux
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 1.4.13~ds1-1~deb11u4
 runc version: 1.0.0~rc93+ds1-5+deb11u2
 init version: 
 Security Options:
  apparmor
  seccomp
   Profile: default
  cgroupns
 Kernel Version: 5.10.0-25-amd64
 Operating System: Debian GNU/Linux 11 (bullseye)
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 39.14GiB
 Name: vdi-dd1ah2-158
 ID: PPEH:OYUB:PB3G:EOCK:BISM:UIXT:K4Y3:QMVY:PY2U:7SUR:GOFB:SOME
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
 Default Address Pools:
   Base: 192.168.0.0/16, Size: 24

WARNING: Support for cgroup v2 is experimental

What happened?

Initial Problem

In a test I work on, when we added a call to defer goleak.VerifyNone(t), we discovered that 4 goroutines are leaking; the leak stems from a call to WithExposedService/waitForLog. We are using NewLocalDockerCompose as well. We have found this happens independently of the service and docker compose file, but if we comment out the wait strategy for an exposed service, the leaks stop. I'll add reproduction code under additional information to provide clarity.

Additional Context/What I've Observed

I've done some debugging and believe the leaking goroutines stem from applyStrategyToRunningContainer. It might also have to do with the Docker client that testcontainers-go uses, but I am less sure of that. I have also found a similar goroutine leak (though possibly not the same one) in the latest version of testcontainers-go; I'll open a separate issue for that and link back to this one in the off chance they are separate issues. Besides my own issues, I was not able to find any open issues for goroutine leaks.
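As a minimal illustration of where goroutines like these can come from (a hypothetical sketch of my own, not code from our test): the leaked goroutines in the log below are net/http persistConn readLoop/writeLoop pairs, which is consistent with idle keep-alive connections from an unclosed Docker API client.

func exampleDockerClientLeak(ctx context.Context) error {
	// Hypothetical sketch (not our test code): create a Docker API client
	// the same way testcontainers-go does and talk to the daemon once.
	cli, err := client.NewClientWithOpts(client.FromEnv) // "github.com/docker/docker/client"
	if err != nil {
		return err
	}
	// Closing the client shuts down its idle keep-alive connections.
	// Skipping this Close() leaves a persistConn readLoop/writeLoop
	// goroutine pair running per connection, matching the trace below.
	defer cli.Close()

	// Any API call opens such a connection to the daemon.
	_, err = cli.Ping(ctx)
	return err
}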

In the next section, I'll include the output the test spits out.

Please let me know if I can provide any additional information to help.

Relevant log output

FAIL: TestIntegrationSuite (81.00s)
    store_test.go:34: found unexpected goroutines:
        [Goroutine 18 in state IO wait, with internal/poll.runtime_pollWait on top of the stack:
        goroutine 18 [IO wait]:
        internal/poll.runtime_pollWait(0x1161528?, 0x72)
            /glnxa64/golang/src/runtime/netpoll.go:343 +0x3c
        internal/poll.(*pollDesc).wait(0xc000396020, 0x72, 0x0)
            /glnxa64/golang/src/internal/poll/fd_poll_runtime.go:84 +0x7e
        internal/poll.(*pollDesc).waitRead(0xc000396020, 0x0)
            /glnxa64/golang/src/internal/poll/fd_poll_runtime.go:89 +0x31
        internal/poll.(*FD).Read(0xc000396000, {0xc00039a000, 0x1000, 0x1000})
            /glnxa64/golang/src/internal/poll/fd_unix.go:164 +0x40f
        net.(*netFD).Read(0xc000396000, {0xc00039a000, 0x1000, 0x1000})
            /glnxa64/golang/src/net/fd_posix.go:55 +0x73
        net.(*conn).Read(0xc000392008, {0xc00039a000, 0x1000, 0x1000})
            /glnxa64/golang/src/net/net.go:179 +0x9d
        net/http.(*persistConn).Read(0xc00038a000, {0xc00039a000, 0x1000, 0x1000})
            /glnxa64/golang/src/net/http/transport.go:1954 +0x1da
        bufio.(*Reader).fill(0xc000380120)
            /glnxa64/golang/src/bufio/bufio.go:113 +0x271
        bufio.(*Reader).Peek(0xc000380120, 0x1)
            /glnxa64/golang/src/bufio/bufio.go:151 +0x15b
        net/http.(*persistConn).readLoop(0xc00038a000)
            /glnxa64/golang/src/net/http/transport.go:2118 +0x265
        created by net/http.(*Transport).dialConn in goroutine 8
            /glnxa64/golang/src/net/http/transport.go:1776 +0x2c10
        
         Goroutine 19 in state select, with net/http.(*persistConn).writeLoop on top of the stack:
        goroutine 19 [select]:
        net/http.(*persistConn).writeLoop(0xc00038a000)
            /glnxa64/golang/src/net/http/transport.go:2421 +0x167
        created by net/http.(*Transport).dialConn in goroutine 8
            /glnxa64/golang/src/net/http/transport.go:1777 +0x2c85
        
         Goroutine 35 in state IO wait, with internal/poll.runtime_pollWait on top of the stack:
        goroutine 35 [IO wait]:
        internal/poll.runtime_pollWait(0x1161528?, 0x72)
            /glnxa64/golang/src/runtime/netpoll.go:343 +0x3c
        internal/poll.(*pollDesc).wait(0xc000156120, 0x72, 0x0)
            /glnxa64/golang/src/internal/poll/fd_poll_runtime.go:84 +0x7e
        internal/poll.(*pollDesc).waitRead(0xc000156120, 0x0)
            /glnxa64/golang/src/internal/poll/fd_poll_runtime.go:89 +0x31
        internal/poll.(*FD).Read(0xc000156100, {0xc000139000, 0x1000, 0x1000})
            /glnxa64/golang/src/internal/poll/fd_unix.go:164 +0x40f
        net.(*netFD).Read(0xc000156100, {0xc000139000, 0x1000, 0x1000})
            /glnxa64/golang/src/net/fd_posix.go:55 +0x73
        net.(*conn).Read(0xc

  
         Goroutine 36 in state select, with net/http.(*persistConn).writeLoop on top of the stack:
        goroutine 36 [select]:
        net/http.(*persistConn).writeLoop(0xc0000bb680)
        	/glnxa64/golang/src/net/http/transport.go:2421 +0xe5
        created by net/http.(*Transport).dialConn in goroutine 8
        	/glnxa64/golang/src/net/http/transport.go:1777 +0x16f1
        ]

Additional information

Here is some sample code to help reproduce the issue; dockerComposeFile can be the path to any valid Docker compose file (mine just contained a basic Redis container).

import (
	"testing"

	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
	"go.uber.org/goleak"
)

func TestIntegrationSuite(t *testing.T) {
	defer goleak.VerifyNone(t)
	identifier := "marisa"

	compose := testcontainers.NewLocalDockerCompose([]string{dockerComposeFile}, identifier)
	// Tear the stack down exactly once, whether or not Invoke succeeds.
	t.Cleanup(func() {
		if down := compose.Down(); down.Error != nil {
			t.Log("docker-compose down failed: ", down.Error)
		}
	})

	// Commenting out the WithExposedService wait strategy makes the leaks stop.
	execErr := compose.WithCommand([]string{"up", "-d"}).
		WithExposedService("redis", 6379, wait.ForHealthCheck()).
		Invoke()
	if execErr.Error != nil {
		t.Fatal("Invoking docker-compose failed with error: ", execErr.Error)
	}
}
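If it helps anyone hitting this in the meantime, a stopgap I considered (my own workaround, not a library feature) is telling goleak to ignore the two known offending frames, so the rest of the suite still gets leak-checked:

	// Stopgap only: skip the known net/http frames from the trace above
	// while still catching other goroutine leaks. Note that ignoring
	// runtime_pollWait is broad and can mask unrelated I/O-wait leaks.
	defer goleak.VerifyNone(t,
		goleak.IgnoreTopFunction("internal/poll.runtime_pollWait"),
		goleak.IgnoreTopFunction("net/http.(*persistConn).writeLoop"),
	)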
mhogara added the bug label Dec 19, 2023
mdelapenya (Collaborator) commented:

Just to double check, this issue report is for testcontainers-go v0.13.0, right? Could you verify it in both main and the latest release? 🙏

mhogara (Author) commented Dec 26, 2023

> Just to double check, this issue report is for testcontainers-go v0.13.0, right? Could you verify it in both main and the latest release? 🙏

Yep, this is for testcontainers-go v0.13.0; from my understanding, NewLocalDockerCompose was moved, along with all the compose code, to a separate module in v0.17.0 (#650). By verifying it, do you mean verifying the behavior in the latest release of the compose package using the linked NewLocalDockerCompose function (https://pkg.go.dev/github.com/testcontainers/testcontainers-go/modules/compose#NewLocalDockerCompose)?

If that is what you mean, then this issue could be considered closed, since NewLocalDockerCompose (although it is deprecated) did not leak in v0.26.0 (see #2008). I think you could classify #2008 as the same issue if you'd like, since it discusses NewLocalDockerCompose versus NewDockerCompose in v0.26.0 (which was the latest version up until a few days ago). I could also verify everything is working in v0.27.0 if you think it'd be beneficial (especially in case any of the related code changed in v0.27.0).
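For reference, here's roughly what the equivalent test looks like against the newer API (a sketch based on the documented modules/compose API, not code I ran for this report; the test name and dockerComposeFile variable are placeholders):

// import compose "github.com/testcontainers/testcontainers-go/modules/compose"
func TestIntegrationSuiteNewAPI(t *testing.T) {
	defer goleak.VerifyNone(t)
	ctx := context.Background()

	stack, err := compose.NewDockerCompose(dockerComposeFile)
	if err != nil {
		t.Fatal(err)
	}
	t.Cleanup(func() {
		if err := stack.Down(ctx, compose.RemoveOrphans(true)); err != nil {
			t.Log("compose down failed: ", err)
		}
	})

	// WaitForService plays the role WithExposedService had in the old API.
	if err := stack.WaitForService("redis", wait.ForHealthCheck()).Up(ctx, compose.Wait(true)); err != nil {
		t.Fatal(err)
	}
}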

Thanks for your quick response; let me know if I've misunderstood anything or can offer any more information to help. 🙂

mdelapenya (Collaborator) commented:

Hi @mhogara, sorry for the delay, I was on Xmas PTO 🎄

Yeah, if this issue is related to v0.13.0 only, you've confirmed that v0.26.0 is free of the leaks, and we can accept that NewLocalDockerCompose will eventually be removed in a v1 release, then I'd close this one. I'll let you do it in favour of #2008.

mdelapenya (Collaborator) commented:

Because this issue relates to a very old release, and given my previous comment about the existence of #2008, I'm closing this one.

Thanks!

mdelapenya closed this as not planned on Feb 5, 2024