
Context-aware policies are not working with rustls #136

Closed
flavio opened this issue Dec 22, 2021 · 3 comments · Fixed by #231


flavio commented Dec 22, 2021

Now that we have switched to rustls, the thread that fetches information from a Kubernetes cluster can no longer connect to the cluster.

My current setup:

  • policy-server running from main, with rustls enabled
  • external Kubernetes cluster, built using k3s

This is the error message we get:

WARN policy_server::kube_poller: error when initializing the cluster context client: could not initialize a cluster context because a Kubernetes client could not be created: SslError: No valid private key was found
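
A quick, hedged way to check which key format the kubeconfig carries (the jsonpath below assumes the key is embedded as client-key-data for the first user entry; adjust the selector to your setup):

kubectl config view --raw -o jsonpath='{.users[0].user.client-key-data}' | \
  base64 -d | head -n 1

k3s-generated kubeconfigs typically carry an EC client key, which would explain why the rustls-based client reports that no valid private key was found.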
@viccuad viccuad self-assigned this Jan 10, 2022
@viccuad viccuad added this to In progress in Development Jan 10, 2022

viccuad commented Jan 14, 2022

After debugging, it seems we are hitting the following issues in the kube crate when the rustls feature is enabled:

First, with kube = 0.64.0 and k3d, one hits kube-rs/kube#153 (the kube client with rustls can't reach the cluster through an IP address; workaround: edit the kubeconfig and use localhost). This has been partially fixed in kube since 0.59.0, but the rest is blocked on rustls issues.
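
A hedged sketch of that workaround (the cluster name below is the k3d default and may differ; keep whatever port your kubeconfig already uses):

# Point the kubeconfig at localhost instead of the raw IP, so the certificate is
# validated against a DNS name present in the k3s SANs. Adjust the port to the
# one already in your kubeconfig.
kubectl config set-cluster k3d-k3s-default --server=https://localhost:6443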

After applying the workaround and bumping kube to 0.65.0 (which requires bumping policy-evaluator from kube = 0.64.0 to 0.65.0), I get a more descriptive error: identity PEM is missing a private key: the key must be PKCS8 or RSA/PKCS1. This corresponds to the second issue. The workaround, as listed in that issue, is to convert the key in the kubeconfig to PKCS8. In my case:

# Re-encode the user's client key as unencrypted PKCS8 and write it back into the kubeconfig
kubectl config view --raw \
  -o jsonpath='{.users[?(@.name == "admin@k3d-k3s-default")].user.client-key-data}' | \
  base64 -d | openssl pkcs8 -topk8 -nocrypt | base64 -w0 | \
  xargs -I{} kubectl config set users.admin@k3d-k3s-default.client-key-data {}
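
To confirm the conversion, the stored key should now start with a PKCS8 header (same example user name as above):

kubectl config view --raw \
  -o jsonpath='{.users[?(@.name == "admin@k3d-k3s-default")].user.client-key-data}' | \
  base64 -d | head -n 1
# expected output: -----BEGIN PRIVATE KEY-----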

With minikube and kube 0.65.0, I hit the DNS issue again (in my case the API server address is an IP, e.g. https://192.168.49.2:8443). A workaround is to add an entry to /etc/hosts and point the kubeconfig at that hostname.
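
A hedged example (the IP is the one from my kubeconfig; the hostname must match a SAN in the API server certificate, and control-plane.minikube.internal is typically among minikube's SANs):

echo "192.168.49.2 control-plane.minikube.internal" | sudo tee -a /etc/hosts
kubectl config set-cluster minikube --server=https://control-plane.minikube.internal:8443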

@viccuad viccuad moved this from In progress to External block in Development Jan 14, 2022
@flavio flavio moved this from External block to TODO in Development Mar 4, 2022
@viccuad viccuad removed their assignment Mar 8, 2022
@jvanz jvanz self-assigned this Apr 12, 2022
@jvanz jvanz moved this from TODO to In progress in Development Apr 12, 2022

jvanz commented Apr 13, 2022

AFAICS in my tests, upgrading the kube crate to v0.70.0, as suggested during a call with the Kubewarden team, solves the identity PEM is missing a private key: the key must be PKCS8 or RSA/PKCS1 issue. So I propose upgrading the lib and documenting how to work around the IP issue.
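
For reference, a hedged sketch of what the dependency bump might look like in Cargo.toml (the feature set shown is an assumption and depends on what policy-evaluator already enables; rustls-tls is the kube feature that selects the rustls stack):

kube = { version = "0.70.0", default-features = false, features = ["client", "rustls-tls"] }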

I'll open the PRs.


jvanz commented Apr 13, 2022

The related PR, released in kube crate version 0.70.0: kube-rs/kube#804
