self signed certificate in certificate chain error in 1.0.0-rc4 #1509

Open
rudyflores opened this issue Jan 10, 2024 · 11 comments
Labels
lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@rudyflores

Describe the bug

I receive the following error:

request to https://example?pretty=true failed, reason: self signed certificate in certificate chain

This was not appearing on previous versions of the Kubernetes client. I also noticed it on 0.20.0, but I had to upgrade because of vulnerability issues with request. Is there a way to get rid of this error? I am logging into my cluster and generating a token just fine, which used to work.

Client Version

v1.0.0-rc4

Server Version

v1.26.6

To Reproduce
Steps to reproduce the behavior:

Run any request with the Kubernetes client, e.g.:

await this.kc.readNamespace({
  name: this.kubeConfig.namespace,
  pretty: "true",
});

Expected behavior

I should be able to make calls without errors about self-signed certs in cert chain.

Example Code
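A fuller, self-contained version of the call from "To Reproduce" (a sketch rather than my exact code: the namespace name is a placeholder and the kubeconfig is loaded from the default location):

// Repro sketch, assuming @kubernetes/client-node@1.0.0-rc4 and a kubeconfig whose
// cluster entry sets insecure-skip-tls-verify: true. "my-namespace" is a placeholder.
import { KubeConfig, CoreV1Api } from "@kubernetes/client-node";

async function main(): Promise<void> {
  const kc = new KubeConfig();
  kc.loadFromDefault(); // reads $KUBECONFIG or ~/.kube/config

  const core = kc.makeApiClient(CoreV1Api);

  // Fails with "self signed certificate in certificate chain" on 1.0.0-rc4 (and 0.20.0),
  // while the equivalent call works on 0.18.1 and the same request works in Postman.
  const ns = await core.readNamespace({ name: "my-namespace", pretty: "true" });
  console.log(JSON.stringify(ns, null, 2));
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});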

Environment (please complete the following information):

  • OS: [e.g. Windows, Linux] MacOS
  • NodeJS Version [eg. 10] v18.19.0
  • Cloud runtime [e.g. Azure Functions, Lambda]


@rudyflores changed the title from "self signed" to "self signed certificate in certificate chain error in 1.0.0-rc4" on Jan 10, 2024
@brendandburns
Contributor

There is a similar error here:

#1451

which appears to be related to the runtime environment.

@rudyflores
Author

@brendandburns do you know if maybe the token in my kubeconfig is not being attached by the Kubernetes client?

I tried the same request that readNamespace() makes in Postman, and it works just fine there.

@rudyflores
Author

Just tested with v0.18.1 and this worked just fine. Something must have changed in an update to the Kubernetes client that now throws this error for me, since I can perform the same actions just fine with v0.18.1, which is now vulnerable due to the request dependency.

@brendandburns
Contributor

The switch to v1.0.4 includes a switch to a different underlying HTTP client (fetch vs. request). It's possible that's the difference, but it seems to work for other people.

What is the kubernetes distro that you are using? Can you send the contents of your kubeconfig file with any secrets redacted?

@rudyflores
Author

> The switch to v1.0.4 includes a switch to a different underlying HTTP client (fetch vs. request). It's possible that's the difference, but it seems to work for other people.
>
> What is the kubernetes distro that you are using? Can you send the contents of your kubeconfig file with any secrets redacted?

Keep in mind that I am also seeing this issue with v0.20.0 (which still uses request), whereas v0.18.1 does not seem to have this issue.

The Kubernetes distro is OpenShift, and this is the kubeconfig (with secrets redacted):

apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://<myserver>:<myport>
  name: <myserver>:<myport>
contexts:
- context:
    cluster: <myserver>:<myport>
    namespace: <namespace>
    user: <user>/<server>:<myport>
  name: default/<myserver>:<myport>/<user>
current-context: default/<myserver>:<myport>/<user>
kind: Config
preferences: {}
users:
- name: <myuser>/<myserver>:<myport>
  user:
    token: <token>

@brendandburns
Contributor

Ah, OK, so you are explicitly turning off cert checking with:

insecure-skip-tls-verify: true

I suspect that something broke in our handling of that parameter. I'll try to reproduce in unit tests.
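
For reference, at the Node level that flag is expected to end up as rejectUnauthorized: false on the HTTPS/TLS options; if it gets dropped somewhere on the fetch path, Node's default chain verification runs and rejects the self-signed chain. A minimal stand-alone illustration (the server URL is a placeholder for your redacted one):

// Illustration only: what insecure-skip-tls-verify should translate to at the TLS layer.
// API_SERVER stands in for the redacted https://<myserver>:<myport>.
import * as https from "node:https";

const API_SERVER = "https://myserver:6443";

// With the default rejectUnauthorized: true, Node verifies the certificate chain and
// fails with SELF_SIGNED_CERT_IN_CHAIN against a self-signed cluster certificate.
// With rejectUnauthorized: false (what the kubeconfig flag asks for), the request goes through.
https
  .get(`${API_SERVER}/version`, { rejectUnauthorized: false }, (res) => {
    console.log("status:", res.statusCode);
    res.resume();
  })
  .on("error", (err) => console.error("TLS/request error:", err));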

@rudyflores
Author

> Ah, OK, so you are explicitly turning off cert checking with:
>
> insecure-skip-tls-verify: true
>
> I suspect that something broke in our handling of that parameter. I'll try to reproduce in unit tests.

Thank you for your help with this; please keep me updated.

@brendandburns
Contributor

So I think that this is because you are using a BearerToken for auth. The code path for that is different, and I don't think it respects the SSL settings in that case.

I'm not quite sure about the right way to fix it, but I will keep looking. In the meantime, if you could try a different auth method and see if that works, that would be useful.
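
If switching auth isn't an option, a stand-alone check like the following would also help narrow it down (a sketch; the namespace name is a placeholder): it reads the server, bearer token, and skip-TLS flag from the loaded kubeconfig and calls the API server with Node's https module directly, bypassing the client's fetch path.

// Sanity-check sketch: same kubeconfig and bearer token, but no client library on the
// request path. "my-namespace" is a placeholder.
import * as https from "node:https";
import { KubeConfig } from "@kubernetes/client-node";

const kc = new KubeConfig();
kc.loadFromDefault();

const cluster = kc.getCurrentCluster();
const user = kc.getCurrentUser();
if (!cluster || !user?.token) {
  throw new Error("current context has no cluster or bearer token");
}

https
  .get(
    `${cluster.server}/api/v1/namespaces/my-namespace`,
    {
      headers: { Authorization: `Bearer ${user.token}` },
      // Honor insecure-skip-tls-verify from the kubeconfig by hand.
      rejectUnauthorized: !cluster.skipTLSVerify,
    },
    (res) => {
      console.log("status:", res.statusCode);
      res.resume();
    }
  )
  .on("error", (err) => console.error("request error:", err));

If this returns a 200 while readNamespace() still fails, that points at the TLS handling on the new fetch path rather than at the token.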

@rudyflores
Author

> So I think that this is because you are using a BearerToken for auth. The code path for that is different, and I don't think it respects the SSL settings in that case.
>
> I'm not quite sure about the right way to fix it, but I will keep looking. In the meantime, if you could try a different auth method and see if that works, that would be useful.

Thanks for the update!

I believe my team currently has only token auth set up in our cluster, so I may not be able to try another auth method for the time being, unfortunately. Thanks again for looking into this issue! If a pull request is made, could you link it to this issue?

@rudyflores
Author

@brendandburns any updates for this issue?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on May 6, 2024