Deleting a component's log file does not cause the file to be recreated #100478
Comments
/sig node
I can reproduce the issue.
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:12:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:03:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
I've looked at the code, and this is probably a problem with the klog library. I will try to fix it.
/assign
I am also looking at this piece of code to solve the problem. If you have any good ideas, we can discuss them together. @mengjiao-liu
Now I can be sure that the problem is in the klog library, because I tested with klog alone and reproduced the problem there as well. @lunhuijie
In klog.go, in the output function's switch s { block, I find that v19 differs from v20.
When we move or delete the files, the log file is not recreated. @lunhuijie
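To make the diagnosis concrete, here is a minimal model of the open-once pattern a glog-style logger uses. This is a sketch, not klog's actual code; the fileSink type and the path are invented for illustration. It shows why a deleted file keeps accepting writes: the kernel keeps the inode alive as long as a handle is open, and the logger never re-checks the path.

```go
package main

import (
	"bufio"
	"os"
)

// fileSink models (but is not) a glog-style logger's output path: the file
// is opened once, wrapped in a buffered writer, and never re-checked.
type fileSink struct {
	*bufio.Writer
	file *os.File
}

func newFileSink(path string) (*fileSink, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0644)
	if err != nil {
		return nil, err
	}
	return &fileSink{Writer: bufio.NewWriter(f), file: f}, nil
}

func main() {
	s, err := newFileSink("/tmp/model.log")
	if err != nil {
		panic(err)
	}
	s.WriteString("hello\n")
	// Deleting the path does not invalidate the open descriptor: the inode
	// stays alive until the last handle is closed.
	os.Remove("/tmp/model.log")
	s.WriteString("still accepted, but lands in an unlinked inode\n")
	s.Flush() // no error is reported, so the logger never notices
	s.file.Close()
}
```

Because Flush reports no error in the last step, a logger built this way gets no signal that the path disappeared, which matches what we observe with kubelet.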
We will need to update the klog version in Kubernetes to pick up the fix once it lands upstream.
Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
I haven't had time to work on this recently; can you continue with it? @lunhuijie
/unassign
The previous PR, kubernetes/klog#232, is linked here for your reference. Anyone is welcome to pick it up. 😀
/remove-lifecycle stale
I can try this one.
Thanks for your effort. @RPing
/assign
@mengjiao-liu I've implemented a simple fix using an inotify goroutine.
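For reference, here is a sketch of one plausible shape of such a fix; this is not the actual patch, and watchLogFile and the reopen callback are invented names. A goroutine watches the log file via fsnotify (inotify-backed on Linux) and asks the logger to reopen it when the file is removed or renamed:

```go
package main

import (
	"log"
	"os"

	"github.com/fsnotify/fsnotify"
)

// watchLogFile watches path and invokes the logger-supplied reopen callback
// whenever the file is removed or renamed, so the logger can drop its stale
// handle and recreate the file at the original path.
func watchLogFile(path string, reopen func() error) error {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		return err
	}
	if err := watcher.Add(path); err != nil {
		watcher.Close()
		return err
	}
	go func() {
		defer watcher.Close()
		for ev := range watcher.Events {
			if ev.Op&(fsnotify.Remove|fsnotify.Rename) != 0 {
				if err := reopen(); err != nil {
					log.Printf("reopen %s: %v", path, err)
					return
				}
				// An inotify watch follows the inode, which is gone now,
				// so the freshly created file must be watched again.
				if err := watcher.Add(path); err != nil {
					log.Printf("rewatch %s: %v", path, err)
					return
				}
			}
		}
	}()
	return nil
}

func main() {
	// Illustrative usage: reopen just recreates an empty file here; a real
	// logger would also swap its internal *os.File and flush its buffers.
	reopen := func() error {
		f, err := os.Create("/tmp/watched.log")
		if err != nil {
			return err
		}
		return f.Close()
	}
	reopen() // create the file before watching it
	if err := watchLogFile("/tmp/watched.log", reopen); err != nil {
		log.Fatal(err)
	}
	select {} // block forever so the demo keeps watching
}
```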
If the file is recreated at all, it only happens some time after it was deleted. On second thought, immediate recreation should probably be handled in the kubelet. Releasing the file handle and recreating the file looks like a feature request rather than a bug. Let's collect more opinions. @serathius @derekwaynecarr @mrunalp
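If the writer side (kubelet or klog) ends up owning this behavior instead of a watcher goroutine, a polling alternative is to re-stat the path around writes. A hedged sketch with a hypothetical helper, not an existing klog or kubelet API:

```go
package main

import (
	"fmt"
	"os"
)

// reopenIfMoved is a hypothetical helper: if path no longer refers to the
// file we hold open (deleted or moved), close the stale handle and create a
// fresh file at the original path.
func reopenIfMoved(f *os.File, path string) (*os.File, error) {
	if onDisk, err := os.Stat(path); err == nil {
		if held, err2 := f.Stat(); err2 == nil && os.SameFile(onDisk, held) {
			return f, nil // path still points at our open file
		}
	}
	f.Close()
	return os.OpenFile(path, os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0644)
}

func main() {
	f, err := os.Create("/tmp/poll.log")
	if err != nil {
		panic(err)
	}
	os.Remove("/tmp/poll.log") // simulate the reported scenario

	f, err = reopenIfMoved(f, "/tmp/poll.log")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	fmt.Fprintln(f, "this line lands in a recreated /tmp/poll.log")
}
```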
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What happened:
1. When I delete kubelet's log file, kubelet's status is still OK,
but I can't find where the new log lines go.
2. When I mv kubelet's log file to another path, kubelet still writes to the moved file.
Then, if I recreate a file with the same name at the old path, kubelet does not write to this new file but keeps writing to the one that was just moved (a minimal Go reproduction follows this list).
3. When I delete kubelet's logs and restart kubelet, the log file is recreated.
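The mv case in step 2 can be reproduced outside kubelet in a few lines of Go (the paths here are illustrative). The open descriptor follows the inode, not the path, so the moved file keeps receiving writes while the recreated file stays empty:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Open a "log" file the way a long-running daemon would.
	f, err := os.Create("/tmp/kubelet-demo.log")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Move it aside and recreate an empty file at the old path.
	os.Rename("/tmp/kubelet-demo.log", "/tmp/moved.log")
	os.WriteFile("/tmp/kubelet-demo.log", nil, 0644)

	// The open descriptor follows the inode, not the path name.
	f.WriteString("written after the rename\n")
	f.Sync()

	moved, _ := os.ReadFile("/tmp/moved.log")
	recreated, _ := os.ReadFile("/tmp/kubelet-demo.log")
	fmt.Printf("moved file: %q\n", string(moved))         // gets the write
	fmt.Printf("recreated file: %q\n", string(recreated)) // stays empty
}
```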
What you expected to happen:
Whatever I do to these log files, kubelet should recreate them when it finds them missing.
How to reproduce it (as minimally and precisely as possible):
See "What happened" above.
Anything else we need to know?:
Is this behavior intended? I think it is unreasonable not to recreate the file.
Environment:
- Kubernetes version (use kubectl version):
- OS (e.g: cat /etc/os-release):
- Kernel (e.g. uname -a):