kubectl follow logs stops after a few seconds if there is a lot of data to stdout #1548
Comments
There are no sig labels on this issue. Please add an appropriate label by using one of the following commands:
/sig <group-name>
/wg <group-name>
/committee <group-name>
Please see the group list for a listing of the SIGs, working groups, and committees available. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
/sig kubectl
@tamilselvan1102: The label(s) sig/kubectl cannot be applied, because the repository doesn't have them. In response to this:
> /sig kubectl
/transfer kubectl
@tamilselvan1102: You must be an org member to transfer this issue. In response to this:
> /transfer kubectl
When logs are tailed, kubectl uses the watcher utility, which relies on Go channels, in this case a buffered Go channel. As the links below show, the default size for the buffer is 100, and this is not currently configurable. Normally this would be fine, because incoming logs would be sent out as fast as they arrive; but because you are generating so much data, the logs are unable to exit the channel buffer as fast as they flow into it, resulting in back pressure and causing the channel to hang.
How the log function spawns multiple watchers
The default buffer size of the channel used by the watchers
If you run the command causing the issue with -v=9 and share the output, that may help confirm this. A runnable sketch of this failure mode follows below.
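To make the mechanism concrete, here is a minimal standalone sketch (my own illustration, not kubectl's actual code): a producer fills a buffered channel of capacity 100, matching the default mentioned above, faster than a slow consumer drains it, so every send blocks once the buffer is full.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Buffered channel with the same default capacity (100) as the
	// watcher channel discussed above.
	logCh := make(chan string, 100)

	// Fast producer: emits lines far faster than the consumer reads them.
	// Once 100 lines are buffered, each send blocks until the consumer
	// takes one out -- the back pressure described above.
	go func() {
		for i := 0; i < 1000; i++ {
			logCh <- fmt.Sprintf("log line %d", i)
		}
		close(logCh)
	}()

	// Slow consumer: simulates a terminal or pipe that cannot keep up.
	for line := range logCh {
		time.Sleep(10 * time.Millisecond)
		fmt.Println(line)
	}
}
```

In this toy program the producer simply waits until the consumer catches up; the comment above suggests that in kubectl the analogous blocking is what manifests as the observed hang.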
/sig cli
@mpuckett159, thanks for the detailed response. I'm guessing you're asking for the beginning of the -v=9 output.
I would like to help here. Do we want to provide an option to increase the buffer size?
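If the buffer size were made configurable, one hypothetical shape for it could look like the sketch below; the package, helper name, and parameter are invented for illustration, since kubectl does not currently expose this.

```go
package logs

// newLogChannel is a hypothetical helper (not in kubectl today) showing how
// the watcher channel's capacity could be plumbed in from an option rather
// than hard-coded to 100.
func newLogChannel(bufferSize int) chan string {
	if bufferSize <= 0 {
		bufferSize = 100 // keep today's default when nothing is specified
	}
	return make(chan string, bufferSize)
}
```

Whether the knob should be a user-facing flag or simply a larger fixed default is an open design question for the maintainers.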
What happened?
A lot of running jobs generate a massive amount of log output to stdout.
Steps:
Start tailing the pod output; after a few seconds it gets stuck at a random point.
Note: it stays stuck until the pod exits with success or failure.
What did you expect to happen?
The expected result is to see all of the pod's stdout on screen. This is not happening.
How can we reproduce it (as minimally and precisely as possible)?
kubectl logs -f -n <namespace> <pod-name>
Anything else we need to know?
No response
Kubernetes version
Cloud provider
OS version