
client.CoreV1Api().list_node() does not work #1735

Closed
ApproximateIdentity opened this issue Mar 4, 2022 · 14 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@ApproximateIdentity

What happened (please include outputs or screenshots):

When I run the following script:

from kubernetes import client, config
config.load_kube_config()
v1 = client.CoreV1Api()
v1.list_node()

I expect it to list my nodes (running kubectl get node works fine), but instead it throws the following error:

Traceback (most recent call last):
  File "/home/user/cluster-scaler/script.py", line 4, in <module>
    v1.list_node()
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api/core_v1_api.py", line 16844, in list_node
    return self.list_node_with_http_info(**kwargs)  # noqa: E501
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api/core_v1_api.py", line 16951, in list_node_with_http_info
    return self.api_client.call_api(
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 348, in call_api
    return self.__call_api(resource_path, method,
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 192, in __call_api
    return_data = self.deserialize(response_data, response_type)
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 264, in deserialize
    return self.__deserialize(data, response_type)
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 303, in __deserialize
    return self.__deserialize_model(data, klass)
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 639, in __deserialize_model
    kwargs[attr] = self.__deserialize(value, attr_type)
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 280, in __deserialize
    return [self.__deserialize(sub_data, sub_kls)
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 280, in <listcomp>
    return [self.__deserialize(sub_data, sub_kls)
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 303, in __deserialize
    return self.__deserialize_model(data, klass)
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 639, in __deserialize_model
    kwargs[attr] = self.__deserialize(value, attr_type)
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 303, in __deserialize
    return self.__deserialize_model(data, klass)
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 639, in __deserialize_model
    kwargs[attr] = self.__deserialize(value, attr_type)
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 280, in __deserialize
    return [self.__deserialize(sub_data, sub_kls)
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 280, in <listcomp>
    return [self.__deserialize(sub_data, sub_kls)
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 303, in __deserialize
    return self.__deserialize_model(data, klass)
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/api_client.py", line 641, in __deserialize_model
    instance = klass(**kwargs)
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/models/v1_node_condition.py", line 76, in __init__
    self.type = type
  File "/home/user/cluster-scaler/venv/lib/python3.9/site-packages/kubernetes/client/models/v1_node_condition.py", line 219, in type
    raise ValueError(
ValueError: Invalid value for `type` (GcfsSnapshotterUnhealthy), must be one of ['DiskPressure', 'MemoryPressure', 'NetworkUnavailable', 'PIDPressure', 'Ready']
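The failure happens inside the generated model's validating setter: the client checks the condition `type` against a closed list, while providers such as GKE inject extra condition types (here GcfsSnapshotterUnhealthy, from node-problem-detector). A minimal stdlib-only sketch of the failing validation pattern (the class below is an illustrative stand-in, not the actual generated code):

```python
ALLOWED_TYPES = ["DiskPressure", "MemoryPressure",
                 "NetworkUnavailable", "PIDPressure", "Ready"]

class NodeCondition:
    """Illustrative stand-in for the generated V1NodeCondition model."""

    def __init__(self, type):
        self.type = type  # routed through the validating setter below

    @property
    def type(self):
        return self._type

    @type.setter
    def type(self, type):
        # The generated client validates against a closed enum, so any
        # provider-specific condition type raises during deserialization.
        if type not in ALLOWED_TYPES:
            raise ValueError(
                "Invalid value for `type` (%s), must be one of %s"
                % (type, ALLOWED_TYPES))
        self._type = type

NodeCondition("Ready")  # accepted
try:
    NodeCondition("GcfsSnapshotterUnhealthy")
except ValueError as e:
    print(e)  # same shape of error as the traceback above
```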

What you expected to happen:

I expect it to list the nodes.

How to reproduce it (as minimally and precisely as possible):

Script found above

Anything else we need to know?:

Environment:

  • Kubernetes version (kubectl version):
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.4", GitCommit:"e6c093d87ea4cbb530a7b2ae91e54c0842d8308a", GitTreeState:"clean", BuildDate:"2022-02-16T12:38:05Z", GoVersion:"go1.17.7", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.3-gke.1500", GitCommit:"dbfed6fd139873c88230073d9a1d7b8e7ac4c98e", GitTreeState:"clean", BuildDate:"2021-11-17T09:30:21Z", GoVersion:"go1.16.9b7", Compiler:"gc", Platform:"linux/amd64"}
  • OS (e.g., MacOS 10.13.6):
$ lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 11 (bullseye)
Release:        11
Codename:       bullseye
  • Python version (python --version)
$ python3 -V
Python 3.9.2 
  • Python client version (pip list | grep kubernetes)
$ pip3 freeze | grep kubernetes
kubernetes==23.3.0
@ApproximateIdentity ApproximateIdentity added the kind/bug Categorizes issue or PR as related to a bug. label Mar 4, 2022
@ApproximateIdentity
Author

Maybe this is related to this bug:

#1733

I am using GKE, by the way, so it seems this is a problem on Google Cloud in addition to the other providers in that bug report.

@iameskild

iameskild commented Mar 7, 2022

I'm also getting a similar error when calling list_node():

Problem encountered: Invalid value for `type` (CorruptDockerOverlay2), must be one of ['DiskPressure', 'MemoryPressure', 'NetworkUnavailable', 'PIDPressure', 'Ready']

I don't seem to get this error if I downgrade to version 22.6.0.

@jesskranz

Hey, I added some types that allowed me to list nodes under AKS @ApproximateIdentity @iameskild

#1739

@roycaihw
Member

Thanks for bringing this to our attention. It's a regression and we are fixing it: #1739 (comment)

@roycaihw
Member

This is being fixed upstream. We will cut a new 1.23 client to backport the fix once the PR kubernetes/kubernetes#108740 is merged.

@Usuychik

Usuychik commented Apr 6, 2022

This is being fixed in upstream. We will cut a new 1.23 client to backport the fix once the PR kubernetes/kubernetes#108740 is merged

Seems it is merged. Any info on when the fix will be available?

@roycaihw
Member

roycaihw commented Apr 6, 2022

Yes, I plan to cut a new release this week.

@goloneczka

Has this fix been released yet? Do you know about other problems between this client and GKE? I would like to create my own kube-scheduler on GKE.

@roycaihw
Member

We are still waiting for the upstream to cut a new patch release: #1773

@goloneczka

goloneczka commented Apr 23, 2022

I'm also getting a similar error when calling list_node():

Problem encountered: Invalid value for `type` (CorruptDockerOverlay2), must be one of ['DiskPressure', 'MemoryPressure', 'NetworkUnavailable', 'PIDPressure', 'Ready']

I don't seem to get this error if I downgrade to version 22.6.0.

@iameskild what did you downgrade exactly, and how? Do you mean the client version?
EDIT (after crying):
Your workaround works: I just added 'RUN pip install kubernetes==22.6.0'
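Besides pinning the older client, another workaround (a sketch, not an official fix) is to skip the generated-model deserialization entirely by passing `_preload_content=False` and parsing the raw JSON yourself, which sidesteps the enum validation; the `node_names_from_raw` helper below is hypothetical:

```python
import json

def node_names_from_raw(raw_body):
    """Extract node names from an un-deserialized NodeList JSON body."""
    body = json.loads(raw_body)
    return [item["metadata"]["name"] for item in body.get("items", [])]

# Against a live cluster (not run here):
#   from kubernetes import client, config
#   config.load_kube_config()
#   v1 = client.CoreV1Api()
#   resp = v1.list_node(_preload_content=False)  # raw urllib3 response
#   print(node_names_from_raw(resp.data))

# Self-contained demonstration with a stand-in response body:
sample = json.dumps({"items": [{"metadata": {"name": "node-a"}},
                               {"metadata": {"name": "node-b"}}]})
print(node_names_from_raw(sample))  # ['node-a', 'node-b']
```

The trade-off is that you get plain dicts instead of typed model objects, so attribute access like `.status.conditions` becomes dictionary indexing.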

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 22, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 21, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
