Pod condition_type values are not consistent with actual possible values of Kubernetes #1760
Comments
Is this pod created by a Custom Resource?
@showjason - This pod is from a standard Deployment, but the Deployment is not running (CreateConfigError, CreateSecretError, unknown pod status, etc.). This value is not coming from the pod at all: the pod is created by a standard apps/v1 Deployment and is managed by a ReplicaSet. The value exists in the Python-generated structure but not in the YAML definition of the pod at all. To make it even weirder, the value appears on more than one pod and in more than one namespace.
@DanArlowski, you can describe this pod or get the pod's yaml template by
This is being fixed in upstream. We will cut a new 1.23 client to backport the fix once the PR kubernetes/kubernetes#108740 is merged.
I think the upstream PR is merged, and the new client build could happen now. Thank you in advance!
The new client can be generated when the upstream cuts a new patch release. Tracking in #1773. |
Also seeing this issue on EKS 1.21. Specifically, using the AWS Load Balancer Controller with pod readiness gates makes this client unusable.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned". In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What happened (please include outputs or screenshots):
When trying to fetch pods (namespaced, cluster-wide, or a single pod), if the pod's condition type is not one of
['ContainersReady', 'Initialized', 'PodScheduled', 'Ready']
an exception is raised:
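The exception comes from the OpenAPI-generated model validating the condition `type` setter against a closed enum. Below is a self-contained sketch of that pattern — an illustration only, not the client's actual code; the class name and error message are simplified stand-ins:

```python
# Allowed values in affected versions of the generated V1PodCondition model.
ALLOWED_CONDITION_TYPES = ['ContainersReady', 'Initialized', 'PodScheduled', 'Ready']

class PodCondition:
    """Mimics an OpenAPI-generated model whose setter enforces a closed enum."""

    def __init__(self, type):
        self.type = type  # routed through the validating setter below

    @property
    def type(self):
        return self._type

    @type.setter
    def type(self, value):
        # The generated setter rejects anything outside the enum, so pods
        # carrying custom condition types fail during deserialization.
        if value not in ALLOWED_CONDITION_TYPES:
            raise ValueError(
                f"Invalid value for `type` ({value}), must be one of "
                f"{ALLOWED_CONDITION_TYPES}"
            )
        self._type = value

# A custom readiness-gate condition (hypothetical name, in the style injected
# by the AWS Load Balancer Controller) trips the validation:
try:
    PodCondition("target-health.elbv2.k8s.aws/my-tg")
except ValueError as e:
    print("raised:", e)
```

Any caller that deserializes a pod carrying such a condition — including a plain `list_namespaced_pod` — hits this `ValueError` before it ever sees the pod object.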
What you expected to happen:
The running pods in the namespace have the following Statuses:
Terminating, Running, Pending, ImagePullBackOff, ErrImageNeverPull, ErrImageNeverPull, ErrImageNeverPull
(this is the kubectl get pods STATUS column)
How to reproduce it (as minimally and precisely as possible):
Try to get pods in a namespace with a non-straightforward status.
Anything else we need to know?:
I think the problem is that the possible enum values do not align with the actual values that are possible in Kubernetes.
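For reference, the upstream fix (kubernetes/kubernetes#108740) relaxes exactly this: condition types are open-ended strings, since readiness gates can inject arbitrary custom types. A simplified sketch of a tolerant setter — an illustration of the approach, not the generated client's actual code:

```python
class RelaxedPodCondition:
    """Sketch: accept any non-empty string as a condition type."""

    def __init__(self, type):
        self.type = type

    @property
    def type(self):
        return self._type

    @type.setter
    def type(self, value):
        # No closed enum: custom readiness-gate condition types are valid too.
        if not isinstance(value, str) or not value:
            raise ValueError("`type` must be a non-empty string")
        self._type = value

# A custom readiness-gate condition (hypothetical name) now deserializes fine:
cond = RelaxedPodCondition("target-health.elbv2.k8s.aws/my-tg")
print(cond.type)
```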
Environment:
- Kubernetes version (kubectl version): 1.23
- Python version (python --version): 3.9.2
- Python client version (pip list | grep kubernetes): 23.3.0