
NRI plugin registration can trigger a deadlock #10085

Closed
acurtiz opened this issue Apr 17, 2024 · 11 comments · Fixed by containerd/nri#79 or #10089
Labels
area/nri (Node Resource Interface (NRI)) · dependencies (Pull requests that update a dependency file) · kind/bug

Comments


acurtiz commented Apr 17, 2024

We have a plugin that is registered with containerd externally (as opposed to being pre-registered). This plugin is deployed as a k8s DaemonSet.

We've detected a deadlock in containerd v1.7.3 (which uses containerd/nri v0.4.0), and it appears to remain unfixed (the stack traces below are from v1.7.13).

Two locks are involved: the lock in adaptation.go (containerd/nri) and the lock in nri.go (containerd's pkg/nri).

The deadlock can happen because these independent routines acquire the locks in inverse order from each other (a minimal sketch of the pattern follows the list):

  1. During plugin registration, the adaptation.go lock is acquired and then syncFn is invoked; syncFn (syncPlugin in pkg/nri/nri.go) immediately attempts to acquire the nri.go lock.
  2. An independent StartContainer can occur in which the nri.go lock is acquired; the call then goes through StateChange in adaptation.go, which attempts to acquire the adaptation.go lock. Other events do exactly the same, so it's not limited to StartContainer.
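
To make the inversion concrete, here is a minimal, self-contained Go sketch of the same pattern (toy code with illustrative names, not containerd's actual implementation):

package main

import (
	"sync"
	"time"
)

// Toy reproduction of the inverted lock ordering described above.
var adaptationMu, nriMu sync.Mutex

// Mirrors acceptPluginConnections -> syncFn: adaptation lock, then nri lock.
func registerPlugin() {
	adaptationMu.Lock()
	time.Sleep(time.Millisecond) // widen the race window
	nriMu.Lock()                 // syncPlugin takes the nri.go lock
	nriMu.Unlock()
	adaptationMu.Unlock()
}

// Mirrors local.StartContainer -> StateChange: nri lock, then adaptation lock.
func startContainer() {
	nriMu.Lock()
	time.Sleep(time.Millisecond)
	adaptationMu.Lock() // StateChange takes the adaptation.go lock
	adaptationMu.Unlock()
	nriMu.Unlock()
}

func main() {
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { defer wg.Done(); registerPlugin() }()
	go func() { defer wg.Done(); startContainer() }()
	wg.Wait() // each goroutine holds one lock and waits on the other
}

With the sleeps in place, each goroutine grabs its first lock before attempting the second, and the Go runtime aborts with "fatal error: all goroutines are asleep - deadlock!".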

The stack traces that confirm this are below.

The plugin registration stack trace:

goroutine 2650 [sync.Mutex.Lock, 1129 minutes]:
sync.runtime_SemacquireMutex(0xc001b82600?, 0x26?, 0xc0014c9af8?)
	/usr/lib64/go/x86_64-cros-linux-gnu/src/runtime/sema.go:77 +0x25
sync.(*Mutex).lockSlow(0xc0000a1600)
	/usr/lib64/go/x86_64-cros-linux-gnu/src/sync/mutex.go:171 +0x15d
sync.(*Mutex).Lock(...)
	/usr/lib64/go/x86_64-cros-linux-gnu/src/sync/mutex.go:90
github.com/containerd/containerd/pkg/nri.(*local).syncPlugin(0xc0000a1600, {0x58c98a692b40, 0x58c98b49b600}, 0xc001c96930)
	/build/lakitu/tmp/portage/app-containers/containerd-1.7.13-r1/work/containerd-1.7.13/src/github.com/containerd/containerd/pkg/nri/nri.go:440 +0x74
github.com/containerd/nri/pkg/adaptation.(*Adaptation).acceptPluginConnections.func1()
	/build/lakitu/tmp/portage/app-containers/containerd-1.7.13-r1/work/containerd-1.7.13/src/github.com/containerd/containerd/vendor/github.com/containerd/nri/pkg/adaptation/adaptation.go:424 +0x1c4
created by github.com/containerd/nri/pkg/adaptation.(*Adaptation).acceptPluginConnections in goroutine 358
	/build/lakitu/tmp/portage/app-containers/containerd-1.7.13-r1/work/containerd-1.7.13/src/github.com/containerd/containerd/vendor/github.com/containerd/nri/pkg/adaptation/adaptation.go:403 +0xcd

The StartContainer stack trace:

goroutine 2636 [sync.Mutex.Lock, 1129 minutes]:
sync.runtime_SemacquireMutex(0x7ba912937f18?, 0x80?, 0xc0012e4c00?)
        /usr/lib64/go/x86_64-cros-linux-gnu/src/runtime/sema.go:77 +0x25
sync.(*Mutex).lockSlow(0xc0002f8a00)
        /usr/lib64/go/x86_64-cros-linux-gnu/src/sync/mutex.go:171 +0x15d
sync.(*Mutex).Lock(...)
        /usr/lib64/go/x86_64-cros-linux-gnu/src/sync/mutex.go:90
github.com/containerd/nri/pkg/adaptation.(*Adaptation).StateChange(0x58c98a69de58?, {0x58c98a692b78, 0xc001a8a4b0}, 0xc001a21bc0)
        /build/lakitu/tmp/portage/app-containers/containerd-1.7.13-r1/work/containerd-1.7.13/src/github.com/containerd/containerd/vendor/github.com/containerd/nri/pkg/adaptation/adaptation.go:285 +0x85
github.com/containerd/nri/pkg/adaptation.(*Adaptation).StartContainer(...)
        /build/lakitu/tmp/portage/app-containers/containerd-1.7.13-r1/work/containerd-1.7.13/src/github.com/containerd/containerd/vendor/github.com/containerd/nri/pkg/adaptation/adaptation.go:216
github.com/containerd/containerd/pkg/nri.(*local).StartContainer(0xc0000a1600, {0x58c98a692b78, 0xc001a8a4b0}, {0x58c98a69ce30?, 0xc001c408d0?}, {0x58c98a69de58, 0xc00269b2f0})
        /build/lakitu/tmp/portage/app-containers/containerd-1.7.13-r1/work/containerd-1.7.13/src/github.com/containerd/containerd/pkg/nri/nri.go:290 +0x19f
github.com/containerd/containerd/pkg/cri/nri.(*API).StartContainer(0xc0000a1760, {0x58c98a692b78, 0xc001a8a4b0}, 0x6?, 0x0?)
        /build/lakitu/tmp/portage/app-containers/containerd-1.7.13-r1/work/containerd-1.7.13/src/github.com/containerd/containerd/pkg/cri/nri/nri_api_linux.go:156 +0xdc
github.com/containerd/containerd/pkg/cri/server.(*criService).StartContainer(0xc0001e3b00, {0x58c98a692b78?, 0xc001a8a4b0}, 0xc00209c0a8)
        /build/lakitu/tmp/portage/app-containers/containerd-1.7.13-r1/work/containerd-1.7.13/src/github.com/containerd/containerd/pkg/cri/server/container_start.go:158 +0x150b
github.com/containerd/containerd/pkg/cri/instrument.(*instrumentedService).StartContainer(0xc0004f7330, {0x58c98a692b78?, 0xc001a8a270}, 0xc00209c0a8)
        /build/lakitu/tmp/portage/app-containers/containerd-1.7.13-r1/work/containerd-1.7.13/src/github.com/containerd/containerd/pkg/cri/instrument/instrumented_service.go:507 +0x1db
k8s.io/cri-api/pkg/apis/runtime/v1._RuntimeService_StartContainer_Handler.func1({0x58c98a692b78, 0xc001a8a270}, {0x58c98a5cdd40?, 0xc00209c0a8})
        /build/lakitu/tmp/portage/app-containers/containerd-1.7.13-r1/work/containerd-1.7.13/src/github.com/containerd/containerd/vendor/k8s.io/cri-api/pkg/apis/runtime/v1/api.pb.go:10863 +0x75
github.com/containerd/containerd/services/server.unaryNamespaceInterceptor({0x58c98a692b78, 0xc001a8a270}, {0x58c98a5cdd40, 0xc00209c0a8}, 0xc000124478?, 0xc00209c0c0)
        /build/lakitu/tmp/portage/app-containers/containerd-1.7.13-r1/work/containerd-1.7.13/src/github.com/containerd/containerd/services/server/namespace.go:31 +0x65
github.com/containerd/containerd/services/server.New.ChainUnaryServer.func5.1.1({0x58c98a692b78?, 0xc001a8a270?}, {0x58c98a5cdd40?, 0xc00209c0a8?})
        /build/lakitu/tmp/portage/app-containers/containerd-1.7.13-r1/work/containerd-1.7.13/src/github.com/containerd/containerd/vendor/github.com/grpc-ecosystem/go-grpc-middleware/chain.go:25 +0x37
github.com/grpc-ecosystem/go-grpc-prometheus.init.(*ServerMetrics).UnaryServerInterceptor.func3({0x58c98a692b78, 0xc001a8a270}, {0x58c98a5cdd40, 0xc00209c0a8}, 0xc0017e95b0?, 0xc00228c0c0)
        /build/lakitu/tmp/portage/app-containers/containerd-1.7.13-r1/work/containerd-1.7.13/src/github.com/containerd/containerd/vendor/github.com/grpc-ecosystem/go-grpc-prometheus/server_metrics.go:107 +0x83
github.com/containerd/containerd/services/server.New.ChainUnaryServer.func5.1.1({0x58c98a692b78?, 0xc001a8a270?}, {0x58c98a5cdd40?, 0xc00209c0a8?})
        /build/lakitu/tmp/portage/app-containers/containerd-1.7.13-r1/work/containerd-1.7.13/src/github.com/containerd/containerd/vendor/github.com/grpc-ecosystem/go-grpc-middleware/chain.go:25 +0x37
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc.UnaryServerInterceptor.func1({0x58c98a692b78, 0xc001a8a1b0}, {0x58c98a5cdd40, 0xc00209c0a8}, 0xc00228c0a0, 0xc00228c0e0)
        /build/lakitu/tmp/portage/app-containers/containerd-1.7.13-r1/work/containerd-1.7.13/src/github.com/containerd/containerd/vendor/go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc/interceptor.go:376 +0x5cd
github.com/containerd/containerd/services/server.New.ChainUnaryServer.func5.1.1({0x58c98a692b78?, 0xc001a8a1b0?}, {0x58c98a5cdd40?, 0xc00209c0a8?})
        /build/lakitu/tmp/portage/app-containers/containerd-1.7.13-r1/work/containerd-1.7.13/src/github.com/containerd/containerd/vendor/github.com/grpc-ecosystem/go-grpc-middleware/chain.go:25 +0x37
github.com/containerd/containerd/services/server.New.ChainUnaryServer.func5({0x58c98a692b78, 0xc001a8a1b0}, {0x58c98a5cdd40, 0xc00209c0a8}, 0xc000f56a38?, 0x58c98a3f4400?)
        /build/lakitu/tmp/portage/app-containers/containerd-1.7.13-r1/work/containerd-1.7.13/src/github.com/containerd/containerd/vendor/github.com/grpc-ecosystem/go-grpc-middleware/chain.go:34 +0xb5
k8s.io/cri-api/pkg/apis/runtime/v1._RuntimeService_StartContainer_Handler({0x58c98a63a3a0?, 0xc0004f7330}, {0x58c98a692b78, 0xc001a8a1b0}, 0xc00149c070, 0xc0002000c0)
        /build/lakitu/tmp/portage/app-containers/containerd-1.7.13-r1/work/containerd-1.7.13/src/github.com/containerd/containerd/vendor/k8s.io/cri-api/pkg/apis/runtime/v1/api.pb.go:10865 +0x135
google.golang.org/grpc.(*Server).processUnaryRPC(0xc00044a000, {0x58c98a69bb00, 0xc0018cc000}, 0xc001608000, 0xc000200d50, 0x58c98b3e2b28, 0x0)
        /build/lakitu/tmp/portage/app-containers/containerd-1.7.13-r1/work/containerd-1.7.13/src/github.com/containerd/containerd/vendor/google.golang.org/grpc/server.go:1374 +0xde7
google.golang.org/grpc.(*Server).handleStream(0xc00044a000, {0x58c98a69bb00, 0xc0018cc000}, 0xc001608000, 0x0)
        /build/lakitu/tmp/portage/app-containers/containerd-1.7.13-r1/work/containerd-1.7.13/src/github.com/containerd/containerd/vendor/google.golang.org/grpc/server.go:1751 +0x9e7
google.golang.org/grpc.(*Server).serveStreams.func1.1()
        /build/lakitu/tmp/portage/app-containers/containerd-1.7.13-r1/work/containerd-1.7.13/src/github.com/containerd/containerd/vendor/google.golang.org/grpc/server.go:986 +0xbb
created by google.golang.org/grpc.(*Server).serveStreams.func1 in goroutine 861
        /build/lakitu/tmp/portage/app-containers/containerd-1.7.13-r1/work/containerd-1.7.13/src/github.com/containerd/containerd/vendor/google.golang.org/grpc/server.go:997 +0x145

The effects of this bug are that the plugin is stuck (its Synchronize callback is never invoked) and containerd is unable to process certain events (such as StartContainer). The only remedy appears to be restarting containerd.


acurtiz commented Apr 17, 2024

/cc @bobbypage
/cc @samuelkarp

@samuelkarp samuelkarp transferred this issue from containerd/nri Apr 17, 2024
@samuelkarp samuelkarp added kind/bug dependencies Pull requests that update a dependency file area/nri Node Resource Interface (NRI) labels Apr 17, 2024
@samuelkarp

Moved to containerd/containerd since this bug affects released versions of containerd.

@samuelkarp samuelkarp changed the title Plugin registration can trigger a deadlock NRI plugin registration can trigger a deadlock Apr 17, 2024

bobbypage commented Apr 17, 2024

A possible fix we have discussed is to move the adaptation r.Lock() after syncFn.

syncFn doesn't seem to access any state guarded by the adaptation lock, so it should be safe to take that lock only after the syncFn call.

This would ensure that the locking order in acceptPluginConnections would be nri lock [done by syncFn] -> adaptation lock.

That makes it consistent with the locking order in the StartContainer flow, which is also nri lock -> adaptation lock, and should prevent the deadlock.

Possible patch...

diff --git a/vendor/github.com/containerd/nri/pkg/adaptation/adaptation.go b/vendor/github.com/containerd/nri/pkg/adaptation/adaptation.go
index 141cb85be..9201e6411 100644
--- a/vendor/github.com/containerd/nri/pkg/adaptation/adaptation.go
+++ b/vendor/github.com/containerd/nri/pkg/adaptation/adaptation.go
@@ -431,18 +431,16 @@ func (r *Adaptation) acceptPluginConnections(l net.Listener) error {
 				continue
 			}
 
-			r.Lock()
-
 			err = r.syncFn(ctx, p.synchronize)
 			if err != nil {
 				log.Infof(ctx, "failed to synchronize plugin: %v", err)
 			} else {
+				r.Lock()
 				r.plugins = append(r.plugins, p)
 				r.sortPlugins()
+				r.Unlock()
 			}
 
-			r.Unlock()
-
 			log.Infof(ctx, "plugin %q connected", p.name())
 		}
 	}()
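
In effect, the patch shrinks the adaptation lock's critical section to just the plugin-list mutation. That is the standard way to break this class of deadlock: make every code path take the locks in one consistent order, or avoid holding one lock while waiting for the other; the reordered registration path does both.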


bobbypage commented Apr 18, 2024

I have a repro of the deadlock:

  1. Create a local kind cluster; I use the following script: https://gist.github.com/bobbypage/7219c004bcc83b59428f98e45c75aefd

  2. Add the following to the NRI config in /etc/containerd/config.toml on the kind-worker:

$ docker exec -it kind-worker /bin/bash
$ apt-get update && apt-get -y install vim
$ vim /etc/containerd/config.toml

[plugins."io.containerd.nri.v1.nri"]
 disable = false
 disable_connections = false
 plugin_config_path = "/etc/nri/conf.d"
 plugin_path = "/home/kubernetes/nri/plugins"
 plugin_registration_timeout = "5s"
 plugin_request_timeout = "5s"
 socket_path = "/var/run/nri/nri.sock"

  3. systemctl restart containerd

  4. Modify the NRI logger example with the following patch (have it exit after Synchronize). The plugin will then be restarted continuously, making it more likely to hit the deadlock:

diff --git a/plugins/logger/nri-logger.go b/plugins/logger/nri-logger.go
index 66fcb4a..5571142 100644
--- a/plugins/logger/nri-logger.go
+++ b/plugins/logger/nri-logger.go
@@ -80,7 +80,9 @@ func (p *plugin) Configure(_ context.Context, config, runtime, version string) (
 }

 func (p *plugin) Synchronize(_ context.Context, pods []*api.PodSandbox, containers []*api.Container) ([]*api.ContainerUpdate, error) {
-       dump("Synchronize", "pods", pods, "containers", containers)
+       //dump("Synchronize", "pods", pods, "containers", containers)
+       log.Infof("at Synchronize!!")
+       os.Exit(0)
        return nil, nil
 }
  5. Build the NRI examples with make.

  6. Copy the modified logger example to the kind-worker:

$ docker cp build/bin/logger kind-worker:/

  7. Inside the kind-worker, run the following:

while true; do /logger --idx 00; done

  8. Apply the following job to create pod churn: https://gist.github.com/bobbypage/a04b54d522fbfbfa221af32988fb0ac1

  9. After a bit, you should see the logger lock up after "Started plugin 00-logger", never reaching the Synchronize log (took me a few mins).

Here is the stack trace from the repro (obtained via kill -SIGUSR1 $(pidof containerd)): https://gist.github.com/bobbypage/54739eced260790d19ae077001c4d024

As mentioned in the original comment, we have:

goroutine 493 [sync.Mutex.Lock, 5 minutes]:
sync.runtime_SemacquireMutex(0x55fcbecbf819?, 0x26?, 0x55fcbecc6f0a?)
	/root/.gimme/versions/go1.20.13.linux.amd64/src/runtime/sema.go:77 +0x26
sync.(*Mutex).lockSlow(0xc0000a0a40)
	/root/.gimme/versions/go1.20.13.linux.amd64/src/sync/mutex.go:171 +0x165
sync.(*Mutex).Lock(...)
	/root/.gimme/versions/go1.20.13.linux.amd64/src/sync/mutex.go:90
github.com/containerd/containerd/pkg/nri.(*local).syncPlugin(0xc0000a0a40, {0x55fcc070fbd8, 0xc000056060}, 0xc000578770)
	/containerd/pkg/nri/nri.go:440 +0x85
github.com/containerd/nri/pkg/adaptation.(*Adaptation).acceptPluginConnections.func1()
	/containerd/vendor/github.com/containerd/nri/pkg/adaptation/adaptation.go:424 +0x1c6
created by github.com/containerd/nri/pkg/adaptation.(*Adaptation).acceptPluginConnections
	/containerd/vendor/github.com/containerd/nri/pkg/adaptation/adaptation.go:403 +0xe5

and many other goroutines stuck acquiring the NRI lock:

goroutine 24202 [sync.Mutex.Lock, 5 minutes]:
sync.runtime_SemacquireMutex(0x7ff9be687d28?, 0xe0?, 0x16?)
	/root/.gimme/versions/go1.20.13.linux.amd64/src/runtime/sema.go:77 +0x26
sync.(*Mutex).lockSlow(0xc0000a0a40)
	/root/.gimme/versions/go1.20.13.linux.amd64/src/sync/mutex.go:171 +0x165
sync.(*Mutex).Lock(...)
	/root/.gimme/versions/go1.20.13.linux.amd64/src/sync/mutex.go:90
github.com/containerd/containerd/pkg/nri.(*local).CreateContainer(0xc0000a0a40, {0x55fcc070fc48, 0xc0035e8060}, {0x55fcc07194a8?, 0xc003692a38?}, {0x55fcc071a9c0, 0xc0035e9bc0})
	/containerd/pkg/nri/nri.go:233 +0xf0
github.com/containerd/containerd/pkg/cri/nri.(*API).CreateContainer(0xc0001252e0, {0x55fcc070fc48, 0xc0035e8060}, 0x55fcc0353ca0?, 0xc0031d14d0?)
	/containerd/pkg/cri/nri/nri_api_linux.go:130 +0x229
github.com/containerd/containerd/pkg/cri/nri.(*API).WithContainerAdjustment.func5({0x55fcc070fc48, 0xc0035e8060}, 0xc000dde690?, 0xc002eab980)
	/containerd/pkg/cri/nri/nri_api_linux.go:326 +0x110
github.com/containerd/containerd.(*Client).NewContainer(0xc0003aa400, {0x55fcc070fc48?, 0xc000dde690?}, {0xc0032e9500, 0x40}, {0xc0034b8fc0, 0x7, 0x55fcc0703c28?})
	/containerd/client.go:294 +0x283
github.com/containerd/containerd/pkg/cri/server.(*criService).CreateContainer(0xc00034e900, {0x55fcc070fc48, 0xc000dde690}, 0xc0026f3b90)
	/containerd/pkg/cri/server/container_create.go:263 +0x2a59
github.com/containerd/containerd/pkg/cri/instrument.(*instrumentedService).CreateContainer(0xc00061e410, {0x55fcc070fc48, 0xc000dde270}, 0xc0026f3b90)
	/containerd/pkg/cri/instrument/instrumented_service.go:450 +0x238

bobbypage added a commit to bobbypage/nri that referenced this issue Apr 18, 2024

During NRI external plugin registration:

* `acceptPluginConnections()` is called
* the adaptation lock from `nri/pkg/adaptation` is acquired
* `syncFn` is invoked
* `syncFn` acquires the NRI lock in `pkg/nri/nri.go`

During container lifecycle events such as `StartContainer`:

* the NRI lock is acquired in `pkg/nri/nri.go`
* the adaptation lock is acquired in `StateChange()` in `nri/pkg/adaptation`

As a result, the locking order during NRI plugin registration is:
* adaptation lock -> NRI lock

While the locking order during container starts is:
* NRI lock -> adaptation lock

Because the locking order is inverted and inconsistent, it is possible to encounter a deadlock.

To fix the issue, during NRI plugin registration, first acquire the NRI lock (done via the `syncFn` call) and only afterwards acquire the adaptation lock. This makes the locking order during NRI plugin registration NRI lock -> adaptation lock, consistent with the locking order during container lifecycle events.

Fixes containerd/containerd#10085

Signed-off-by: David Porter <porterdavid@google.com>
@bobbypage
Copy link
Contributor

bobbypage commented Apr 18, 2024

I applied the proposed patch in #10085 (comment), reran the repro steps with a custom-built containerd, and have not been able to reproduce the deadlock. Opened containerd/nri#79 with the proposed patch to get feedback.
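
For intuition, here is the earlier toy sketch with the registration path reordered the way the patch does it (again illustrative code, not containerd's actual implementation):

package main

import "sync"

var adaptationMu, nriMu sync.Mutex

// After the fix: synchronize first (NRI lock), and only afterwards take
// the adaptation lock, purely to mutate the plugin list. Registration
// never holds the adaptation lock while waiting for the NRI lock, so the
// wait-for cycle with the event path cannot form.
func registerPluginFixed(plugins *[]string, name string) {
	nriMu.Lock() // syncFn equivalent
	// ... synchronize plugin state with the runtime ...
	nriMu.Unlock()

	adaptationMu.Lock()
	*plugins = append(*plugins, name)
	adaptationMu.Unlock()
}

func main() {
	var plugins []string
	registerPluginFixed(&plugins, "00-logger")
}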

@estesp estesp reopened this Apr 18, 2024

estesp commented Apr 18, 2024

Let's leave this open and use a PR in this repo that updates the nri import as the closing PR for the bug.


bobbypage commented Apr 18, 2024

Thank you @samuelkarp and @estesp for the reviews!

Next step -- we need a new cut of containerd/nri... Can maintainers please help to get one started? Thank you!

samuelkarp added a commit to samuelkarp/containerd that referenced this issue Apr 18, 2024
Fixes containerd#10085

Signed-off-by: Samuel Karp <samuelkarp@google.com>
@samuelkarp

Next step -- we need a new cut of containerd/nri... Can maintainers please help to get one started? Thank you!

Lost track of this issue, but yes this was done yesterday: https://github.com/containerd/nri/releases/tag/v0.6.1

Dependency bumped in main: #10089

I'll open a cherry-pick to 1.7 shortly.

@samuelkarp samuelkarp reopened this Apr 19, 2024
samuelkarp added a commit to samuelkarp/containerd that referenced this issue Apr 19, 2024
Fixes containerd#10085

Signed-off-by: Samuel Karp <samuelkarp@google.com>
(cherry picked from commit a153b2c)
Signed-off-by: Samuel Karp <samuelkarp@google.com>
@bobbypage

Thanks @samuelkarp for doing the dependency update. Since this is now fixed on main and in 1.7 with #10097, I think we can close this out.

This fix will go into the next containerd 1.7 release, so 1.7.16+.

@samuelkarp

Yep, I was keeping it open until we have a new 1.7 release out.
