[Flaky test] ci-kubernetes-unit #110962
Comments
/triage accepted
@pohly maybe you can help provide insights here. Logs with details about the race: https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/ci-kubernetes-unit/1544926366053437440

It seems the race arises because the read happens here:
This is a dupe of #110854. A revert is pending as a stop-gap solution, but I also expect an updated klog that is more resilient against leaked goroutines (the actual problem) soon.
Hello @pohly 👋 I'm a 1.25 CI Signal Shadow. Could you let us know whether this issue should be a blocker for the 1.25.0-alpha.3 release on July 9th? I believe it is not a blocker, but asking just in case.
The klog update went in; the race in the logging path should be gone now. The kubelet shutdown test still has other data races, but they don't seem to occur in CI.
job is passing :)
@Nivedita-coder: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Which jobs are flaking?
release-master-blocking
Which tests are flaking?
Since when has it been flaking?
07-04 22:51 EEST
Testgrid link
https://testgrid.k8s.io/sig-release-master-blocking#ci-kubernetes-unit
Reason for failure (if possible)
Failed:
=== CONT
testing.go:1312: race detected during execution of test
FAIL
FAIL    k8s.io/kubernetes/pkg/kubelet/nodeshutdown    2.384s
Anything else we need to know?
No response
Relevant SIG(s)
/sig testing
cc @kubernetes/ci-signal