Kubernetes: requesting flag for "kubectl logs" to avoid 5-minute timeout if no stdout/stderr #58486
Comments
I don't think there's a timeout built into kubectl or the apiserver. Are you going through a load balancer?
@liggitt Bah! Excellent call, thanks for the insight. For posterity: the problem here is that we're using haproxy to serve a VIP to balance HA Kubernetes masters. Specifically, we're using HA-Proxy v1.4.21, and we have this in our haproxy cfg:
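A minimal sketch of the relevant settings (the exact values here are an assumption, chosen to match the 5-minute cutoff we were seeing):

```
defaults
    # idle timeouts: HAProxy cuts any connection with no traffic for this long,
    # which is exactly what kills a quiet `kubectl logs --follow`
    timeout client 5m
    timeout server 5m
```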
Increasing both of those values was required to fix the timeout issue described here. Closing this ticket.
Another option would be to add keep-alive packets to kubectl so that the connection doesn't die.
Yeah, like AWS... the max idle timeout for an ALB/NLB/ELB is 4,000 seconds. I'm running a kops cluster in AWS which shoves an ELB in front of the API, and I am frequently running into this issue in several use cases.
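For a classic ELB, the idle timeout can be raised toward that 4,000-second ceiling via the AWS CLI -- a sketch, with the load balancer name as a placeholder:

```
# Raise the classic ELB idle timeout to its maximum of 4000 seconds.
aws elb modify-load-balancer-attributes \
  --load-balancer-name my-k8s-api-elb \
  --load-balancer-attributes '{"ConnectionSettings":{"IdleTimeout":4000}}'
```

Even at the maximum, that only postpones the problem: 4,000 seconds is about 66 minutes, and a quiet 2-hour test run still gets cut off.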
Can we have this issue reopened? Thanks.
Yes, like @DreadPirateShawn said.
@DreadPirateShawn @cjbottaro @liggitt @Sure2020
Same problem here :)
Hacky practicality:
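The snippet meant here would be a restart loop of roughly this shape (a sketch; the pod name is a placeholder):

```
# Re-run `kubectl logs --follow` whenever an idle timeout kills the connection.
while true; do
  kubectl logs --follow my-test-pod
  sleep 1
done
```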
@mreinhardt Could be a viable hack for basic use cases, but it can easily get messy for others. For instance, if there's also a long log history to begin with, then that loop spews it every time, which would break any tooling that's monitoring for fresh activity. Sure, you could add timestamps, parse those, de-dupe... but that's kinda the point of requesting a real fix rather than trying to compensate for the bug downstream. :-)
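To make that trade-off concrete: a resumable variant of the loop would need something like the sketch below (`--timestamps` and `--since-time` are real `kubectl logs` flags; the pod name and everything else here are assumed):

```
# Resume from the last timestamp seen, so each restart doesn't replay history.
LAST_FILE=$(mktemp)
echo "1970-01-01T00:00:00Z" > "$LAST_FILE"
while true; do
  kubectl logs --follow --timestamps --since-time="$(cat "$LAST_FILE")" my-test-pod \
  | while read -r ts rest; do
      echo "$ts $rest"
      echo "$ts" > "$LAST_FILE"   # write per line so it survives the pipeline subshell
    done
done
# Even then, the line at the --since-time boundary may be replayed, so true
# de-duplication needs still more bookkeeping -- which is the point above.
```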
Original issue:
/kind feature
What happened:
When running `kubectl logs --follow` on a pod, after 5 minutes of no stdout/stderr, `kubectl` exited with an error.

What you expected to happen:
The in-progress test was still running; it simply takes about 2 hours, so after 113 minutes more stdout was indeed written. Expected `--follow` to behave like the similar follow flags in (for instance) `journalctl` or `tail` -- that is, follow until I `ctrl-c` to stop.

How to reproduce it (as minimally and precisely as possible):
Run a pod that doesn't output stdout/stderr for over 5 minutes, and attempt to `kubectl logs --follow` it.

Our particular use case: we're using a Kubernetes Job to run system tests, driven by a Jenkins job. We want to follow the logs so we can see the Jenkins job data in real time, not to mention easily track the pod lifetime without needing to poll / check status / sleep / poll / check status / sleep / etc.
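A minimal reproduction along those lines (image and pod name are illustrative, and this assumes an idle-timeout-enforcing proxy or load balancer in front of the apiserver):

```
# A pod that stays silent past the 5-minute idle window, then finally prints.
kubectl run quiet-pod --image=busybox --restart=Never -- sh -c 'sleep 360 && echo done'
kubectl logs --follow quiet-pod   # exits with an error before "done" ever appears
```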
Looking at https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#logs, there are no relevant flags which affect this behavior.
Anything else we need to know?:
Nope.
Environment:
- Kubernetes version (use `kubectl version`):
- OS: Ubuntu 16.04 LTS (Xenial Xerus)
- Kernel (`uname -a`): 4.4.0-89-generic #112-Ubuntu