startupProbe being ignored after restarting the pod #102230
/sig architecture
@judab5ericom: The label(s) In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/cc @matthyx
/assign
/sig node
This is a duplicate of issue #101064.
@wzshiming it should be fixed in 1.18.19 and 1.18.20 by #101226. Feel free to reopen if it still exists. @judab5ericom
@pacoxu: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-sig architecture
/sig architecture
/wg K8s Infra
What happened:
Kubernetes brings the container up and runs the startupProbe as expected. A few minutes later I force the pod to restart by making the liveness probe fail. The container goes down, and on the next creation of the pod the startupProbe does not run at all: the container starts directly with the liveness and readiness probes, and the startupProbe is skipped.
What you expected to happen:
I expect the startup probe to run on every pod creation, including restarts after a failed liveness check.
How to reproduce it (as minimally and precisely as possible):
1. Configure a pod with all three probes: startup, liveness, and readiness (each probe configured to run an exec shell script).
2. Verify that the startupProbe passes successfully, then verify that the liveness and readiness probes pass successfully.
3. Wait a few minutes, then make the liveness probe fail.
4. Verify that the pod restarts due to the failed liveness check and comes back up.
5. Check the logs: you should see liveness probe runs without any startup probe run.
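The steps above can be sketched with a pair of exec-probe scripts. This is only a hypothetical sketch of what /startup_probe.sh and /liveness.sh might contain; the marker-file mechanism for forcing a liveness failure is an assumption for illustration, not something stated in the issue.

```shell
#!/bin/sh
# Hypothetical probe scripts matching the repro steps; the marker files
# (/tmp/app-started, /tmp/force-liveness-fail) are assumptions.

cat > /tmp/startup_probe.sh <<'EOF'
#!/bin/sh
# Succeeds once the application has written its "started" marker.
test -f /tmp/app-started
EOF

cat > /tmp/liveness.sh <<'EOF'
#!/bin/sh
# Healthy unless a failure is forced, e.g. from outside the container with:
#   kubectl exec <pod> -- touch /tmp/force-liveness-fail
test ! -f /tmp/force-liveness-fail
EOF

chmod +x /tmp/startup_probe.sh /tmp/liveness.sh

# Simulate the probe sequence locally (outside Kubernetes):
rm -f /tmp/app-started /tmp/force-liveness-fail
touch /tmp/app-started
/tmp/startup_probe.sh && echo "startup ok"
/tmp/liveness.sh && echo "liveness ok"
touch /tmp/force-liveness-fail
/tmp/liveness.sh || echo "liveness failed -> kubelet restarts the container"
```

With the issue's settings (failureThreshold: 30, periodSeconds: 10), such a startup probe would give the container up to 300 seconds to create its marker before the kubelet gives up.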
Anything else we need to know?:
Environment:
Kubernetes version (kubectl version):
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.4", GitCommit:"e87da0bd6e03ec3fea7933c4b5263d151aafd07c", GitTreeState:"clean", BuildDate:"2021-02-18T16:12:00Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.18", GitCommit:"6f6ce59dc8fefde25a3ba0ef0047f4ec6662ef24", GitTreeState:"clean", BuildDate:"2021-04-15T03:23:41Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
OS (cat /etc/issue): Ubuntu 18.04.5 LTS
Kernel (uname -a): Linux xxx 5.4.0-1044-oracle #47~18.04.1-Ubuntu SMP Thu Apr 22 03:29:37 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
Install tools: Rancher (Docker-based Kubernetes)
Network plugin and version (if this is a network-related bug):
Others: YAML snippet below
YAML:
startupProbe:
  exec:
    command:
    - /startup_probe.sh
  failureThreshold: 30
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
livenessProbe:
  exec:
    command:
    - /liveness.sh
  failureThreshold: 30
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
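For context, these probe stanzas sit inside a container spec. A minimal surrounding manifest might look like the following sketch; the pod name, image, and command are assumptions for illustration, not taken from the issue.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo              # hypothetical name
spec:
  containers:
  - name: app
    image: myapp:latest         # assumed image, for illustration only
    startupProbe:
      exec:
        command:
        - /startup_probe.sh
      failureThreshold: 30
      periodSeconds: 10
    livenessProbe:
      exec:
        command:
        - /liveness.sh
      failureThreshold: 30
      periodSeconds: 10
```

Per the Kubernetes probe semantics, the startup probe is expected to run again after each container restart, disabling liveness and readiness checks until it succeeds; that is the behavior this issue reports as broken.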