
Refresh token/api-key periodically #741

Closed
aparamon opened this issue Jan 30, 2019 · 30 comments · Fixed by kubernetes-client/python-base#250

@aparamon

aparamon commented Jan 30, 2019

Currently, the authorization token/api-key is only initialized when the config is loaded:
https://github.com/kubernetes-client/python-base/blob/bd9a8525e9215f7f01c32a321beb9a605cf0402b/config/kube_config.py#L420
https://github.com/kubernetes-client/python-base/blob/bd9a8525e9215f7f01c32a321beb9a605cf0402b/config/kube_config.py#L510
When working with Amazon EKS via aws-iam-authenticator, though, the token/api-key expires relatively quickly.

It is proposed to introduce a configurable option specifying the token/api-key time-to-live. On each API call the elapsed time should be checked, and if the token/api-key has expired it should be refreshed by calling https://github.com/kubernetes-client/python-base/blob/bd9a8525e9215f7f01c32a321beb9a605cf0402b/config/kube_config.py#L350 again.
Alternatively, the API client could check for a 401 Unauthorized return code and refresh the token (at most once per API call).
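
A minimal sketch of what the TTL check could look like (a hypothetical wrapper: `RefreshingLoader` and `token_ttl_seconds` are illustrative names, and `_load_authentication` is the loader method linked above):

    import time

    class RefreshingLoader:
        """Illustrative wrapper that re-runs authentication once a TTL elapses."""

        def __init__(self, loader, token_ttl_seconds=600):
            self._loader = loader          # a KubeConfigLoader-like object
            self._ttl = token_ttl_seconds  # the proposed configurable time-to-live
            self._expires_at = 0.0

        def get_token(self):
            # Checked on every API call; authentication is re-run only on expiry.
            if time.monotonic() >= self._expires_at:
                self._loader._load_authentication()
                self._expires_at = time.monotonic() + self._ttl
            return self._loader.token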

@aparamon aparamon changed the title Refresh auth token periodically Refresh token/api-key periodically Jan 30, 2019
@marsewe

marsewe commented Mar 21, 2019

See also kubernetes-sigs/aws-iam-authenticator#63 and https://stackoverflow.com/questions/48151388/kubernetes-python-client-authentication-issue. Should the user of the client catch the 401 (and re-authenticate and retry), or is that the responsibility of the kubernetes-client?

@jmeickle

jmeickle commented Jun 7, 2019

Calling config.load_kube_config() again doesn't seem to recreate the token in my case, even though the corresponding kubectl command would have done so automatically. So it seems there's a staleness check that should be happening here but isn't.

@houqp

houqp commented Aug 5, 2019

The config for an existing client is unfortunately cached :( After calling load_kube_config, you will have to create a new API client to pick up the refreshed token. Currently, the expirationTimestamp returned by aws-iam-authenticator is ignored by KubeConfigLoader.
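
Concretely, the workaround looks roughly like this (a sketch: reload the kubeconfig into a fresh Configuration, then build a new client from it):

    from kubernetes import client, config

    def fresh_core_v1() -> client.CoreV1Api:
        # Re-running load_kube_config re-executes the auth provider (e.g.
        # aws-iam-authenticator) and stores the new token in this Configuration.
        configuration = client.Configuration()
        config.load_kube_config(client_configuration=configuration)
        # An ApiClient built earlier keeps the stale token, so a new one is needed.
        return client.CoreV1Api(api_client=client.ApiClient(configuration))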

@wing328

wing328 commented Aug 8, 2019

@houqp I wonder if you can send the patch upstream to OpenAPI Generator, as that's what the project is going to use moving forward to generate API clients: kubernetes-client/gen#93

@houqp

houqp commented Aug 8, 2019

Good call @wing328, do you know if #738 and kubernetes-client/gen#97 are the right PRs to watch?

@houqp

houqp commented Aug 8, 2019

@wing328 I have ported the patches to openapi-generator; could you help review them?

@roycaihw
Member

/assign

@roycaihw
Member

kubernetes-client/gen#97 is merged. We used openapi-generator in the 11.0.0a1 release: #931

@jdamata

jdamata commented Nov 13, 2019

I'm using 11.0.0a1 but am still getting intermittent 401s:

{"asctime": "2019-11-13 14:38:33,384", "name": "k8s", "levelname": "INFO", "message": "Sleeping for 60 seconds"}
Traceback (most recent call last):
  File "src/main.py", line 57, in <module>
    drain.run()
  File "/opt/git/stratus/stratus-eks/worker_node_rolling_update/src/k8s.py", line 42, in run
    self.drain()
  File "/opt/git/stratus/stratus-eks/worker_node_rolling_update/src/k8s.py", line 33, in drain
    if check_node_drained(self.v1api, node):
  File "/opt/git/stratus/stratus-eks/worker_node_rolling_update/src/k8s_helper.py", line 117, in check_node_drained
    node_body = api.read_node(node_name)
  File "/Users/damatj/.local/share/virtualenvs/worker_node_rolling_update-n2X_lDEC/lib/python3.7/site-packages/kubernetes/client/api/core_v1_api.py", line 20565, in read_node
    (data) = self.read_node_with_http_info(name, **kwargs)  # noqa: E501
  File "/Users/damatj/.local/share/virtualenvs/worker_node_rolling_update-n2X_lDEC/lib/python3.7/site-packages/kubernetes/client/api/core_v1_api.py", line 20649, in read_node_with_http_info
    collection_formats=collection_formats)
  File "/Users/damatj/.local/share/virtualenvs/worker_node_rolling_update-n2X_lDEC/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 335, in call_api
    _preload_content, _request_timeout)
  File "/Users/damatj/.local/share/virtualenvs/worker_node_rolling_update-n2X_lDEC/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 166, in __call_api
    _request_timeout=_request_timeout)
  File "/Users/damatj/.local/share/virtualenvs/worker_node_rolling_update-n2X_lDEC/lib/python3.7/site-packages/kubernetes/client/api_client.py", line 356, in request
    headers=headers)
  File "/Users/damatj/.local/share/virtualenvs/worker_node_rolling_update-n2X_lDEC/lib/python3.7/site-packages/kubernetes/client/rest.py", line 241, in GET
    query_params=query_params)
  File "/Users/damatj/.local/share/virtualenvs/worker_node_rolling_update-n2X_lDEC/lib/python3.7/site-packages/kubernetes/client/rest.py", line 231, in request
    raise ApiException(http_resp=r)
kubernetes.client.rest.ApiException: (401)
Reason: Unauthorized
HTTP response headers: HTTPHeaderDict({'Audit-Id': '96bc2da9-40a0-42b8-9303-3a99b5d77db1', 'Content-Type': 'application/json', 'Date': 'Wed, 13 Nov 2019 19:39:33 GMT', 'Content-Length': '129'})
HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}

@athom

athom commented Dec 11, 2019

11.0.0b2 also does not work:

   File "/etc/airflow/.local/share/virtualenvs/airflow-bTdwlyD1/lib/python3.6/site-packages/airflow/contrib/kubernetes/pod_launcher.py", line 168, in read_pod
     return self._client.read_namespaced_pod(pod.name, pod.namespace)
   File "/etc/airflow/.local/share/virtualenvs/airflow-bTdwlyD1/lib/python3.6/site-packages/kubernetes/client/api/core_v1_api.py", line 19078, in read_namespaced_pod
     (data) = self.read_namespaced_pod_with_http_info(name, namespace, **kwargs)  # noqa: E501
   File "/etc/airflow/.local/share/virtualenvs/airflow-bTdwlyD1/lib/python3.6/site-packages/kubernetes/client/api/core_v1_api.py", line 19169, in read_namespaced_pod_with_http_info
     collection_formats=collection_formats)
   File "/etc/airflow/.local/share/virtualenvs/airflow-bTdwlyD1/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 335, in call_api
     _preload_content, _request_timeout)
   File "/etc/airflow/.local/share/virtualenvs/airflow-bTdwlyD1/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 166, in __call_api
     _request_timeout=_request_timeout)
   File "/etc/airflow/.local/share/virtualenvs/airflow-bTdwlyD1/lib/python3.6/site-packages/kubernetes/client/api_client.py", line 356, in request
     headers=headers)
   File "/etc/airflow/.local/share/virtualenvs/airflow-bTdwlyD1/lib/python3.6/site-packages/kubernetes/client/rest.py", line 241, in GET
     query_params=query_params)
   File "/etc/airflow/.local/share/virtualenvs/airflow-bTdwlyD1/lib/python3.6/site-packages/kubernetes/client/rest.py", line 231, in request
     raise ApiException(http_resp=r)
 kubernetes.client.rest.ApiException: (401)
 Reason: Unauthorized
 HTTP response headers: HTTPHeaderDict({'Audit-Id': '685e7f66-8303-418c-9d1d-b82920ef2d0f', 'Content-Type': 'application/json', 'Date': 'Wed, 11 Dec 2019 04:12:24 GMT', 'Content-Length': '129'})
 HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 10, 2020
@tbarrella

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 10, 2020
@roycaihw
Member

OpenAPITools/openapi-generator#3594 was included in openapi-generator 4.1.1. Currently we are still using openapi-generator 3.3.4. We are tracking upgrading openapi-generator in the next major release: #1052, #1088

dinvlad added a commit to broadinstitute/dsp-appsec-infrastructure-apps that referenced this issue Apr 6, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 8, 2020
@dinvlad

dinvlad commented Jun 9, 2020

/remove-lifecycle stale

@Uznick

Uznick commented Mar 10, 2021

We have a similar workaround as well. It works fine for now.

@dinvlad

dinvlad commented Mar 10, 2021

@houqp looking at your code, it uses the default load_incluster_config() when running inside the cluster. Does that mean you only experience this issue with the default load_kube_config()? We're seeing it even with load_incluster_config(), if I'm reading that right.

Or maybe our "hanging" issue is actually not due to expired creds, but instead because of the TCP keepalive problem you mentioned there as well (thanks for indirectly bringing this to my attention!) 🤔

@dinvlad

dinvlad commented Mar 10, 2021

To expand more specifically on our issue (which might be slightly different from others' use of the config loader here): we use load_incluster_config() for an in-cluster cleanup operation, which in our case is the following pseudo-code:

from kubernetes.client import BatchV1Api
from kubernetes.config import load_incluster_config
from kubernetes.watch import Watch

def cleanup(namespace: str):
    """
    Watches for and deletes terminated Jobs.
    This function would not normally be needed if `ttlSecondsAfterFinished` worked.
    However, that feature is currently hidden behind an `alpha` flag in GKE.
    """
    load_incluster_config()
    api = BatchV1Api()
    events = Watch().stream(api.list_namespaced_job, namespace)
    for event in events:
        ...  # cleanup logic

So our issue (if I'm not mistaken) is that no matter how credentials are fetched (in-cluster or not), they seem to be "cached" in the config, and then the stream() generator expires. I wonder if there's a more general way to implement a config client that refreshes credentials automatically.

Please correct me if that's already what's happening in load_incluster_config(), and our issue (the watcher stops working after a while) is caused by something else (like the keepalive problem, etc.).
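
A defensive variant of that loop, sketched under the assumption that the expiry surfaces as a 401 ApiException (rebuild the config and client, then resume watching):

    from kubernetes.client import BatchV1Api
    from kubernetes.client.rest import ApiException
    from kubernetes.config import load_incluster_config
    from kubernetes.watch import Watch

    def cleanup_forever(namespace: str):
        while True:
            load_incluster_config()  # re-reads the service-account token file
            api = BatchV1Api()
            try:
                for event in Watch().stream(api.list_namespaced_job, namespace):
                    ...  # cleanup logic
            except ApiException as e:
                if e.status != 401:
                    raise
                # Expired credentials: fall through and rebuild the client
                # from freshly loaded config before watching again.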

@houqp

houqp commented Mar 10, 2021

@dinvlad my fix was only for out-of-cluster communication, where auth goes through aws-iam-authenticator. IIRC, in-cluster communication uses the built-in K8s auth, which doesn't have this problem.

@alexcristi

Hi there! Any updates on this?

@kwlzn

kwlzn commented Jul 20, 2021

Is anyone actively working on this issue? If not, my team at Twitter might take a stab at it soon.

emenendez pushed a commit to twitter-forks/python-base that referenced this issue Jul 23, 2021
…lient v11.0.0)

This is a partial fix for kubernetes-client/python#741, based on the version of this repo included in `kubernetes-client` v11.0.0 (https://github.com/kubernetes-client/python/tree/v11.0.0).

As described in kubernetes-client/python#741, some of the authentication schemes supported by Kubernetes require updating the client's credentials from time to time. The Kubernetes Python client currently does not support this, except when using the `gcp` auth scheme. This is because the OpenAPI-generated code does not generally expect credentials to change after the client is configured.

However, in OpenAPITools/openapi-generator#3594, the OpenAPI-generated code added an (undocumented) hook on the `Configuration` object which provides a way for the client credentials to be refreshed as needed. Unfortunately, this version of the Kubernetes client is too old to have that hook, but this patch adds it with a subclass of `Configuration`. Then the `load_kube_config()` function, used by the Kubernetes API to set up the `Configuration` object from the client's local k8s config, just needs to be updated to take advantage of this hook.

This patch does this for `exec`-based authentication, which is a partial fix for kubernetes-client/python#741. The plan is to follow up to support this for all other authentication schemes which may require refreshing credentials. The follow-up patch will be based on the latest Kubernetes client and won't need the `Configuration` subclass.

As noted above, `load_kube_config()` already has a special-case monkeypatch to refresh GCP tokens. I presume this functionality was added before the OpenAPI generator added support for the refresh hook. A complete fix will probably include refactoring the GCP token refreshing to use the new hook.
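
For reference, the wiring that hook enables looks roughly like this (a sketch, not the actual python-base patch; `loader` is assumed to be a KubeConfigLoader-like object exposing `token` and `_load_authentication`):

    from kubernetes import client

    def install_refresh_hook(configuration: client.Configuration, loader):
        def refresh_api_key(conf: client.Configuration):
            # The generated client invokes this hook before reading the api_key
            # for a request, so we can re-run authentication and swap in the token.
            loader._load_authentication()
            conf.api_key['authorization'] = loader.token
        configuration.refresh_api_key_hook = refresh_api_key

    # Usage (sketch): after load_kube_config populates `configuration`,
    # install_refresh_hook(configuration, loader) keeps its token current.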
emenendez pushed a commit to twitter-forks/python-base that referenced this issue Aug 27, 2021
This is a fix for kubernetes-client/python#741.

As described in kubernetes-client/python#741, some of the authentication schemes supported by Kubernetes require updating the client's credentials from time to time. The Kubernetes Python client currently does not support this, except when using the `gcp` auth scheme. This is because the OpenAPI-generated client code does not generally expect credentials to change after the client is configured.

However, in OpenAPITools/openapi-generator#3594, the OpenAPI generator added an (undocumented) hook on the `Configuration` object which provides a way for the client credentials to be refreshed as needed. Now that this hook exists, the `load_kube_config()` function, used by the Kubernetes API to set up the `Configuration` object from the client's local k8s config, just needs to be updated to take advantage of this hook.

This patch does this for `exec`-based authentication, which should resolve kubernetes-client/python#741.

Also, as noted above, `load_kube_config()` already has a special-case monkeypatch to refresh GCP tokens. I presume this functionality was added before the OpenAPI generator added support for the refresh hook. This patch also refactors the GCP token refreshing code to use the new hook instead of the monkeypatch.

Tests are also updated.
emenendez pushed a commit to twitter-forks/python-base that referenced this issue Sep 3, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 18, 2021
@dinvlad

dinvlad commented Oct 18, 2021

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 18, 2021
dudleyhunt86 added a commit to dudleyhunt86/python-base-repository-build that referenced this issue Oct 7, 2022
sergiitk added a commit to grpc/grpc that referenced this issue Jan 27, 2023
…32210)

This PR adds retries on create/get requests from the test driver to the K8s API when a 401 Unauthorized error is encountered.
The K8s Python library expects the ApiClient to be cycled on auth token refresh.

The problem is described in kubernetes-client/python#741. Currently we don't have any hypothesis as to why we weren't affected by this problem before.

To force the ApiClient to pick up the new credentials, I shut down the current client, create a new one, and replace api_client properties on all k8s APIs we manage.

This should also work with the Watch-based log collector recovering from an error. To support that, I replace default Configuration so that the next time Watch creates ApiClient implicitly, the Configuration with updated token will be used.
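
The cycling described above amounts to something like this sketch (a single retry on 401; the function name is illustrative):

    from kubernetes import client, config
    from kubernetes.client.rest import ApiException

    def read_node_with_reauth(name: str):
        for attempt in range(2):
            try:
                return client.CoreV1Api().read_node(name)
            except ApiException as e:
                if e.status != 401 or attempt == 1:
                    raise
                # Rebuild the default Configuration so the retried call (and any
                # ApiClient that Watch creates implicitly later) gets a new token.
                configuration = client.Configuration()
                config.load_kube_config(client_configuration=configuration)
                client.Configuration.set_default(configuration)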
XuanWang-Amos pushed a commit to XuanWang-Amos/grpc that referenced this issue May 1, 2023
wanlin31 pushed a commit to grpc/grpc that referenced this issue May 18, 2023
sergiitk added a commit to grpc/psm-interop that referenced this issue Nov 8, 2023
abdul5497 pushed a commit to abdul5497/python-dapp that referenced this issue Apr 1, 2024