kube-controller-manager yields whole stack trace when it loses leaderelection #107665
Comments
/sig api-machinery
Has Fatalf always triggered a stack dump of all goroutines?
/cc
@liggitt kubernetes/vendor/k8s.io/klog/v2/klog.go, lines 1638 to 1643 at b0f0aad
Could we …
If I read the original code right, it only printed the current goroutine to stderr. Dumping everything seems to have been added later.
Dumping of all goroutines was added in kubernetes/klog#79.
I have similar doubts. I want to fix it.
kube-scheduler seems to have a similar issue (but with a shorter stack trace).
There's also kubernetes/klog#316 - reverting the klog change might be a faster way to address this.
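Until the klog change is reverted, the expected one-line behavior can be approximated by logging the message and exiting directly instead of going through klog.Fatalf. The sketch below uses a hypothetical `fatalNoTrace` helper (not part of klog), with the exit function injected so the example runs to completion:

```go
package main

import (
	"fmt"
	"os"
)

// fatalNoTrace is a hypothetical helper: write one log line and exit,
// skipping klog.Fatalf's all-goroutine stack dump.
// The exit function is a parameter so the behavior is testable.
func fatalNoTrace(exit func(int), format string, args ...interface{}) {
	fmt.Fprintf(os.Stderr, format+"\n", args...)
	exit(1)
}

func main() {
	// Real callers (e.g. a leader-election OnStoppedLeading callback)
	// would pass os.Exit; a stub lets this sketch finish normally.
	code := 0
	fatalNoTrace(func(c int) { code = c }, "leaderelection lost")
	fmt.Println("would have exited with code", code)
}
```

This keeps the failure log to a single `leaderelection lost` line, which is what the issue below asks for.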
What happened?
kube-controller-manager yields whole stack trace when it loses leaderelection.
What did you expect to happen?
kube-controller-manager to only log
leaderelection lost
and to exit (without logging 17K lines of stack trace that is not useful for anyone).
How can we reproduce it (as minimally and precisely as possible)?
kube-controller-manager yields around 17K lines of stack trace (omitted in the above example).
Anything else we need to know?
No response
Kubernetes version
v1.20.12
Cloud provider
OS version
Install tools
Container runtime (CRI) and version (if applicable)
Related plugins (CNI, CSI, ...) and versions (if applicable)