
data race in nodeshutdown tests #110854

Closed

kerthcet opened this issue Jun 29, 2022 · 29 comments
Assignees
Labels
kind/flake: Categorizes issue or PR as related to a flaky test.
lifecycle/stale: Denotes an issue or PR has remained open with no activity and has become stale.
needs-triage: Indicates an issue or PR lacks a `triage/foo` label and requires one.
sig/node: Categorizes an issue or PR as relevant to SIG Node.

Comments

@kerthcet
Member

Which jobs are flaking?

https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/110768/pull-kubernetes-unit/1541991414412349440

WARNING: DATA RACE
Read at 0x00c0004c1703 by goroutine 96:
  testing.(*common).logDepth()
      /usr/local/go/src/testing/testing.go:882 +0x4ce
  testing.(*common).log()
      /usr/local/go/src/testing/testing.go:869 +0x84
  testing.(*common).Log()
      /usr/local/go/src/testing/testing.go:910 +0x58
  testing.(*T).Log()
      <autogenerated>:1 +0x55
  k8s.io/kubernetes/vendor/k8s.io/klog/v2/ktesting.(*tlogger).log()
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/ktesting/testinglogger.go:279 +0x524
  k8s.io/kubernetes/vendor/k8s.io/klog/v2/ktesting.(*tlogger).Info()
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/ktesting/testinglogger.go:250 +0x157
  k8s.io/kubernetes/vendor/github.com/go-logr/logr.Logger.Info()
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/go-logr/logr/logr.go:261 +0xe3
  k8s.io/kubernetes/pkg/kubelet/nodeshutdown.(*managerImpl).processShutdownEvent.func3()
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/nodeshutdown/nodeshutdown_manager_linux.go:386 +0x7e4
  k8s.io/kubernetes/pkg/kubelet/nodeshutdown.(*managerImpl).processShutdownEvent.func5()
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/pkg/kubelet/nodeshutdown/nodeshutdown_manager_linux.go:388 +0x9d
Previous write at 0x00c0004c1703 by goroutine 8:
  testing.tRunner.func1()
      /usr/local/go/src/testing/testing.go:1426 +0x7af
  runtime.deferreturn()
      /usr/local/go/src/runtime/panic.go:436 +0x32
  testing.(*T).Run.func1()
      /usr/local/go/src/testing/testing.go:1486 +0x47

Which tests are flaking?

TestLocalStorage in pkg/kubelet/nodeshutdown

Since when has it been flaking?

N/A

Testgrid link

No response

Reason for failure (if possible)

No response

Anything else we need to know?

No response

Relevant SIG(s)

/sig node

@kerthcet kerthcet added the kind/flake Categorizes issue or PR as related to a flaky test. label Jun 29, 2022
@k8s-ci-robot k8s-ci-robot added sig/node Categorizes an issue or PR as relevant to SIG Node. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Jun 29, 2022
@k8s-ci-robot
Contributor

@kerthcet: This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@kerthcet
Member Author

cc @pohly: we introduced contextual logging a few weeks ago, and I have no idea whether this is related.

@pohly
Contributor

pohly commented Jun 29, 2022

Contextual logging is involved here because it enables log output through testing.T.Log.

It's interesting that the data race seems to be inside testing.T itself. It does mutex locking but apparently the access to the mutex itself is racing?!

/assign

@pohly
Contributor

pohly commented Jun 29, 2022

The change that triggered this is from PR #110504.

@pohly
Contributor

pohly commented Jun 29, 2022

The main problem seems to be that the testing.T instance becomes unusable once the test it was created for terminates. Running the test locally repeatedly, I got the data race and:

==================
panic: Log in goroutine after TestRestart has completed: INFO Restarting watch for node shutdown events


goroutine 77 [running]:
testing.(*common).logDepth(0xc00060d860, {0xc0006d63c0, 0x2f}, 0x3)
	/nvme/gopath/go-1.18.1/src/testing/testing.go:887 +0x6c5
testing.(*common).log(...)
	/nvme/gopath/go-1.18.1/src/testing/testing.go:869
testing.(*common).Log(0xc00060d860, {0xc000325620, 0x2, 0x2})
	/nvme/gopath/go-1.18.1/src/testing/testing.go:910 +0x85
k8s.io/klog/v2/ktesting.(*tlogger).log(0xc0004382d0, {0x27fd715, 0x4}, {0x2840e7e, 0x29}, 0x1, 0xc000927ea8, {0x0, 0x0}, {0x0, ...})
	/nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/ktesting/testinglogger.go:279 +0x525
k8s.io/klog/v2/ktesting.(*tlogger).Info(0xc0004382d0, 0x3329070?, {0x2840e7e, 0x29}, {0x0, 0x0, 0x0})
	/nvme/gopath/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/ktesting/testinglogger.go:250 +0x158
github.com/go-logr/logr.Logger.Info({{0x3329070?, 0xc0004382d0?}, 0xc0001d3fd0?}, {0x2840e7e, 0x29}, {0x0, 0x0, 0x0})
	/nvme/gopath/src/k8s.io/kubernetes/vendor/github.com/go-logr/logr/logr.go:261 +0xe4
k8s.io/kubernetes/pkg/kubelet/nodeshutdown.(*managerImpl).Start.func1()
	/nvme/gopath/src/k8s.io/kubernetes/pkg/kubelet/nodeshutdown/nodeshutdown_manager_linux.go:189 +0xd5
created by k8s.io/kubernetes/pkg/kubelet/nodeshutdown.(*managerImpl).Start
	/nvme/gopath/src/k8s.io/kubernetes/pkg/kubelet/nodeshutdown/nodeshutdown_manager_linux.go:182 +0x1cf

On the one hand, keeping goroutines running in the background after test completion is bad and should be avoided. On the other hand, this used to be okay in the past (it just made the global log output even less useful) and cannot always be avoided.

One way to solve this would be to "mute" the testing logger once the test completes, but that depends on a klog API extension.
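
To make the idea concrete, here is a minimal sketch of what such a "mutable" test logger could look like, assuming a hypothetical logr sink wrapped around testing.T. This is not the actual klog/ktesting API, just an illustration of the kind of extension described above.

package mutelog // hypothetical package

import (
    "sync"
    "testing"

    "github.com/go-logr/logr"
)

// muteSink forwards log calls to testing.T until it is muted; after that,
// late calls from leaked goroutines are dropped instead of racing on t.
type muteSink struct {
    mu    sync.Mutex
    t     *testing.T
    muted bool
}

func (s *muteSink) Init(logr.RuntimeInfo)                  {}
func (s *muteSink) Enabled(int) bool                       { return true }
func (s *muteSink) WithValues(...interface{}) logr.LogSink { return s }
func (s *muteSink) WithName(string) logr.LogSink           { return s }

func (s *muteSink) Info(_ int, msg string, kv ...interface{}) {
    s.mu.Lock()
    defer s.mu.Unlock()
    if s.muted {
        return // the test already finished; drop the output
    }
    s.t.Log(append([]interface{}{msg}, kv...)...)
}

func (s *muteSink) Error(err error, msg string, kv ...interface{}) {
    s.Info(0, msg, append([]interface{}{"err", err}, kv...)...)
}

// NewLogger returns a logger bound to t plus a stop function that mutes it.
func NewLogger(t *testing.T) (logr.Logger, func()) {
    s := &muteSink{t: t}
    return logr.New(s), func() {
        s.mu.Lock()
        s.muted = true
        s.mu.Unlock()
    }
}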

pohly added a commit to pohly/klog that referenced this issue Jun 29, 2022
When testing.T.Log gets called after the test has completed, it panics. There's
also a data race (kubernetes/kubernetes#110854).

Normally that should never happen because tests should ensure that all
goroutines have stopped before returning. But sometimes it is not possible to
do that. For those cases, "defer Stop(logger)" may be added to a test.  When
called, it will cause all future usage of the testing.T instance to be skipped.
@harry1064

harry1064 commented Jun 29, 2022

Hi @pohly

Doesn't the defer function in systemDbus get called earlier?

systemDbus = func() (dbusInhibiter, error) {
    defer func() {
        connChan <- struct{}{}
    }()
    ch := make(chan bool)
    shutdownChanMut.Lock()
    shutdownChan = ch
    shutdownChanMut.Unlock()
    dbus := &fakeDbus{currentInhibitDelay: systemInhibitDelay, shutdownChan: ch, overrideSystemInhibitDelay: overrideSystemInhibitDelay}
    return dbus, nil
}

Therefore the following line gets executed immediately; hence the testing.T instance becomes unusable before the goroutines inside Start finish.

Also, I saw your PR kubernetes/klog#337 above. One question: how will we use this for existing test cases where we cannot use a WaitGroup, e.g. TestRestart()?
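
For illustration, a stripped-down test showing that ordering (hypothetical code, not the actual nodeshutdown test): the test body returns as soon as the channel receive fires, while a goroutine started earlier keeps using t.Log afterwards, which is exactly when the race and the panic show up.

package nodeshutdown_test // hypothetical repro

import (
    "testing"
    "time"
)

func TestLeakedGoroutineLogs(t *testing.T) {
    connChan := make(chan struct{}, 1)
    go func() {
        connChan <- struct{}{} // plays the role of the defer in the fake systemDbus
        for i := 0; i < 3; i++ {
            t.Log("still logging", i) // may execute after the test has returned
            time.Sleep(10 * time.Millisecond)
        }
    }()
    <-connChan // unblocks immediately; the goroutine above is still running
}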

@pohly
Contributor

pohly commented Jun 29, 2022

TestRestart is a red herring (= not relevant). The unexpected output must be coming from one of the other unit tests, for example:

logger, _ := ktesting.NewTestContext(t)

We have to add defer ktesting.Stop(logger) after such lines.
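
Assuming the Stop helper proposed in kubernetes/klog#337 ends up with roughly this shape (the final API may differ), the pattern inside each test would look something like:

logger, _ := ktesting.NewTestContext(t)
defer ktesting.Stop(logger) // hypothetical helper: mute the logger once the test returns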

@pohly
Contributor

pohly commented Jun 29, 2022

Scratch that. The version of TestRestart that you linked to doesn't use ktesting, but master does.

@pohly
Contributor

pohly commented Jun 29, 2022

Hmm, does this test also have other data races?

After adding ktesting.Stop, I get:

Read at 0x00000402c820 by goroutine 112:
  k8s.io/kubernetes/pkg/kubelet/nodeshutdown.(*managerImpl).start()
      /nvme/gopath/src/k8s.io/kubernetes/pkg/kubelet/nodeshutdown/nodeshutdown_manager_linux.go:202 +0x49
  k8s.io/kubernetes/pkg/kubelet/nodeshutdown.(*managerImpl).Start.func1()
      /nvme/gopath/src/k8s.io/kubernetes/pkg/kubelet/nodeshutdown/nodeshutdown_manager_linux.go:190 +0xde

Previous write at 0x00000402c820 by goroutine 106:
  k8s.io/kubernetes/pkg/kubelet/nodeshutdown.TestRestart()
      /nvme/gopath/src/k8s.io/kubernetes/pkg/kubelet/nodeshutdown/nodeshutdown_manager_linux_test.go:391 +0x326
  testing.tRunner()
      /nvme/gopath/go-1.18.1/src/testing/testing.go:1439 +0x213
  testing.(*T).Run.func1()
      /nvme/gopath/go-1.18.1/src/testing/testing.go:1486 +0x47

That's a race around setting and calling systemDbus.

When checking out the version prior to my contextual logging change, I get:

$ git checkout 65385fec209fb5a6d549129fb03cd529c25a2cff~
Previous HEAD position was 10bea49c12d Merge pull request #110140 from marosset/hpc-sandbox-config-fixes
$ go test -count=5 -race ./pkg/kubelet/nodeshutdown/
...
--- FAIL: TestRestart (0.00s)
panic: close of closed channel [recovered]
	panic: close of closed channel

goroutine 424 [running]:
testing.tRunner.func1.2({0x25cb900, 0x32fe1d0})
	/nvme/gopath/go-1.18.1/src/testing/testing.go:1389 +0x366
testing.tRunner.func1()
	/nvme/gopath/go-1.18.1/src/testing/testing.go:1392 +0x5d2
panic({0x25cb900, 0x32fe1d0})
	/nvme/gopath/go-1.18.1/src/runtime/panic.go:844 +0x258
k8s.io/kubernetes/pkg/kubelet/nodeshutdown.TestRestart(0xc000583ba0)
	/nvme/gopath/src/k8s.io/kubernetes/pkg/kubelet/nodeshutdown/nodeshutdown_manager_linux_test.go:422 +0x686
testing.tRunner(0xc000583ba0, 0x28f3468)
	/nvme/gopath/go-1.18.1/src/testing/testing.go:1439 +0x214
created by testing.(*T).Run
	/nvme/gopath/go-1.18.1/src/testing/testing.go:1486 +0x725
FAIL	k8s.io/kubernetes/pkg/kubelet/nodeshutdown	5.162s
FAIL

Trying again gives me the same race I saw earlier:

==================
WARNING: DATA RACE
Write at 0x000004019818 by goroutine 48:
  k8s.io/kubernetes/pkg/kubelet/nodeshutdown.TestRestart()
      /nvme/gopath/src/k8s.io/kubernetes/pkg/kubelet/nodeshutdown/nodeshutdown_manager_linux_test.go:380 +0x286
  testing.tRunner()
      /nvme/gopath/go-1.18.1/src/testing/testing.go:1439 +0x213
  testing.(*T).Run.func1()
      /nvme/gopath/go-1.18.1/src/testing/testing.go:1486 +0x47

Previous read at 0x000004019818 by goroutine 114:
  k8s.io/kubernetes/pkg/kubelet/nodeshutdown.(*managerImpl).start()
      /nvme/gopath/src/k8s.io/kubernetes/pkg/kubelet/nodeshutdown/nodeshutdown_manager_linux.go:200 +0x49
  k8s.io/kubernetes/pkg/kubelet/nodeshutdown.(*managerImpl).Start.func1()
      /nvme/gopath/src/k8s.io/kubernetes/pkg/kubelet/nodeshutdown/nodeshutdown_manager_linux.go:188 +0x104

The racing write is the test's reassignment of the hook:

systemDbus = func() (dbusInhibiter, error) {
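
A simplified illustration of that race (generic types and names, not the real files): a goroutine left over from one test keeps reading the package-level systemDbus hook in a loop, while the next test overwrites it, and neither access is synchronized.

package nodeshutdown_test // hypothetical sketch

import (
    "testing"
    "time"
)

var systemDbus func() (string, error) // package-level hook, overridden by each test

func start() {
    go func() {
        for {
            if systemDbus != nil { // unsynchronized read from a leaked goroutine
                _, _ = systemDbus()
            }
            time.Sleep(time.Millisecond)
        }
    }()
}

func TestFirst(t *testing.T) {
    systemDbus = func() (string, error) { return "first", nil }
    start() // the goroutine keeps running after TestFirst returns
}

func TestSecond(t *testing.T) {
    systemDbus = func() (string, error) { return "second", nil } // races with the leaked reader
}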

@pohly
Contributor

pohly commented Jun 29, 2022

We have to add defer ktesting.Stop(logger) after such lines.

But a better solution would be to shut down all goroutines cleanly...

@pohly
Contributor

pohly commented Jun 29, 2022

That would probably fix the non-logging race, too. I bet it is the goroutine from one test reading systemDbus (because line 200 gets called in a loop) while another test sets it.
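
One way to shut the goroutines down cleanly, sketched with generic names (the real manager's fields and loop differ): give the background loop a stop channel and a WaitGroup, and have each test stop and wait before it returns.

type manager struct {
    stop chan struct{}
    wg   sync.WaitGroup
}

func (m *manager) Start() {
    m.stop = make(chan struct{})
    m.wg.Add(1)
    go func() {
        defer m.wg.Done()
        ticker := time.NewTicker(100 * time.Millisecond)
        defer ticker.Stop()
        for {
            select {
            case <-m.stop:
                return
            case <-ticker.C:
                // poll for shutdown events, log, etc.
            }
        }
    }()
}

// Shutdown stops the background loop; a test can "defer m.Shutdown()" so that
// no goroutine outlives the testing.T instance.
func (m *manager) Shutdown() {
    close(m.stop)
    m.wg.Wait()
}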

@harry1064

harry1064 commented Jun 29, 2022

Scratch that. The version of TestRestart that you linked to doesn't use ktesting, but master does.

I accidentally linked the old one, but I mentioned TestRestart because the new implementation uses ktesting.NewTestContext(t).

I thought the for loop we added at the end was there to wait for the test to complete, and the following channel read will not block
https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/nodeshutdown/nodeshutdown_manager_linux_test.go#L425
because the defer function has already written into it.

@harry1064

Hi @pohly
I was thinking: if the purpose of contextual logging here is to test the log content, can we comment out the line

https://github.com/kubernetes/kubernetes/blob/master/vendor/k8s.io/klog/v2/ktesting/testinglogger.go#L279

This way we would not call Log on the *testing.T instance, so it would not panic. Since the lines after it already collect the string in a buffer, we would still be able to test the log content.
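
As a sketch of that idea (hypothetical, not the vendored testinglogger code): the logger keeps appending formatted lines to a mutex-guarded buffer, and the t.Log call becomes optional, so assertions read the buffer instead of relying on the per-test output.

type bufLogger struct {
    mu  sync.Mutex
    buf []string
}

func (l *bufLogger) record(line string) {
    l.mu.Lock()
    defer l.mu.Unlock()
    l.buf = append(l.buf, line)
    // t.Log(line) would go here; skipping it avoids touching testing.T from
    // goroutines that outlive the test, at the cost of the per-test output stream.
}

func (l *bufLogger) lines() []string {
    l.mu.Lock()
    defer l.mu.Unlock()
    return append([]string(nil), l.buf...)
}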

@pohly
Contributor

pohly commented Jun 29, 2022

That doesn't solve the problem that the test leaks goroutines, nor the race I mentioned in #110854 (comment).

@pohly
Contributor

pohly commented Jun 29, 2022

I tried to ensure that all goroutines terminate, but some functions blocked in some test cases. So a "proper" solution might not work, and we need something like kubernetes/klog#337.

@harry1064

In kubernetes/klog#337, the new Stop method will only solve the panic case, right? Goroutines will still leak from the test cases.
And for contextual logging, where the function we want to test spawns a goroutine, we will not be able to test the log content, right?

@pohly
Contributor

pohly commented Jun 30, 2022

In kubernetes/klog#337, the new Stop method will only solve the panic case, right? Goroutines will still leak from the test cases.

Correct.

And for contextual logging, where the function we want to test spawns a goroutine, we will not be able to test the log content, right?

The test would have to ensure that the goroutine is done with logging before checking the output.
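
A sketch of what that could look like in a test (generic names; the real code under test would need to expose a way to wait): the test only inspects the captured output, and only returns, after the goroutine has finished logging.

func TestLogsFromGoroutine(t *testing.T) {
    logger, _ := ktesting.NewTestContext(t)

    var wg sync.WaitGroup
    wg.Add(1)
    go func() {
        defer wg.Done()
        logger.Info("work finished") // logging happens in a goroutine
    }()

    wg.Wait() // all logging is done before the test checks output or returns
    // ... assertions on the captured log content would go here ...
}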

@harry1064

The test would have to ensure that the goroutine is done with logging before checking the output.

Then we would have to change the functionality under test so that something like a channel can be injected, and the test would listen on that channel; that way the *testing.T does not return until we read from it.
But that would be a lot of changes in the code base, I assume.
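
Roughly, that injection idea could look like this (hypothetical API; the real manager would need a similar hook added): the code under test closes a done channel when its goroutine finishes, and the test blocks on it before returning.

type worker struct {
    done chan struct{} // injected by the test
}

func (w *worker) Start(logger logr.Logger) {
    go func() {
        defer close(w.done)
        logger.Info("processing shutdown event")
    }()
}

func TestWorker(t *testing.T) {
    logger, _ := ktesting.NewTestContext(t)
    w := &worker{done: make(chan struct{})}
    w.Start(logger)
    <-w.done // keep the test alive until the goroutine is done logging
}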

pohly added a commit to pohly/klog that referenced this issue Jun 30, 2022
When testing.T.Log gets called after the test has completed, it panics. There's
also a data race (kubernetes/kubernetes#110854).

Normally that should never happen because tests should ensure that all
goroutines have stopped before returning. But sometimes it is not possible to
do that. ktesting now automatically protects against that by registering a
cleanup function and redirecting all future output into klog.
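
Simplified sketch of that mechanism (not the actual ktesting implementation): a function registered with t.Cleanup flips a flag under a mutex, and once the flag is set the logger falls back to klog instead of touching testing.T.

type safeSink struct {
    mu       sync.Mutex
    t        *testing.T
    finished bool
}

func newSafeSink(t *testing.T) *safeSink {
    s := &safeSink{t: t}
    t.Cleanup(func() {
        s.mu.Lock()
        s.finished = true // from now on, never touch testing.T again
        s.mu.Unlock()
    })
    return s
}

func (s *safeSink) log(msg string) {
    s.mu.Lock()
    defer s.mu.Unlock()
    if s.finished {
        klog.InfoDepth(1, msg) // redirect late output into klog
        return
    }
    s.t.Log(msg)
}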
k8s-publishing-bot pushed a commit to kubernetes/api, kubernetes/client-go, kubernetes/component-base, kubernetes/component-helpers, kubernetes/apiserver, kubernetes/kube-aggregator, kubernetes/sample-apiserver, kubernetes/sample-controller, kubernetes/apiextensions-apiserver, kubernetes/metrics, kubernetes/cli-runtime, kubernetes/sample-cli-plugin, kubernetes/kube-proxy, kubernetes/kubelet, kubernetes/kube-scheduler, kubernetes/controller-manager, kubernetes/cloud-provider, kubernetes/kube-controller-manager, kubernetes/cluster-bootstrap, kubernetes/csi-translation-lib, kubernetes/mount-utils, kubernetes/legacy-cloud-providers, kubernetes/kubectl, and kubernetes/pod-security-admission that referenced this issue Jul 8, 2022
This makes ktesting more resilient against logging from leaked goroutines,
which is a problem that came up in kubelet node shutdown
tests (kubernetes/kubernetes#110854).

Kubernetes-commit: 3581e308835c69b11b2c9437db44073129e0e2bf
@pohly
Contributor

pohly commented Jul 8, 2022

The klog update went in, so the race in the logging path should be gone now. The kubelet shutdown test still has other data races, but they don't seem to occur in CI.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 6, 2022
@kerthcet
Member Author

kerthcet commented Oct 7, 2022

Referring to comment #110854 (comment), I'd like to close this issue.
/close

@k8s-ci-robot
Contributor

@kerthcet: Closing this issue.

In response to this:

Referring to comment #110854 (comment), I'd like to close this issue.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
