[Multi-Tenant Auth] Ability to customize the namespaces dropdown behavior/contents #8496

Open
mecampbellsoup opened this issue Nov 20, 2023 · 19 comments · May be fixed by #8526
Labels
kind/feature Categorizes issue or PR as related to a new feature.

Comments

@mecampbellsoup

mecampbellsoup commented Nov 20, 2023

What would you like to be added?

We run a multi-tenant k8s cluster and use namespaces to segregate customers from one another.

I've set up the k8s dashboard UI to proxy auth for the logged-in user and use their credentials. This part is working as expected.

The remaining glaring issue with rolling this out to customers is that I don't have the ability to configure or customize the behavior of the namespaces dropdown list:

[screenshot: namespaces dropdown showing "default"]

default is confusing here and that namespace is most definitely not something our customers will be able to access.

We worked with the Kubeapps team and implemented a fallback mechanism there for something similar - would something like this be possible in the k8s dashboard UI? Or is there an existing mechanism by which we can configure or even disable the namespaces dropdown?

Why is this needed?

For better multi-tenant cluster support, we need the ability to configure some sort of fallback so that our customers can rely on the UI's namespaces dropdown to contain the list of namespaces they are able to access.

This feature is necessary for multi-tenant clusters in which isolation is done via namespaces (i.e. customer A has namespace tenant-a, customer B has namespace tenant-b, etc., and neither has much cluster-scoped RBAC as it is a shared cluster).

Here is how the namespace API endpoint currently responds in, e.g., a shared cluster where users' namespaces are segregated (and those users do not have RBAC to list namespaces in the cluster):

{
 "listMeta": {
  "totalItems": 0
 },
 "namespaces": [],
 "errors": [
  {
   "ErrStatus": {
    "metadata": {},
    "status": "Failure",
    "message": "namespaces is forbidden: User \"mcampbell+dev+11-16-2023@coreweave.com\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope",
    "reason": "Forbidden",
    "details": {
     "kind": "namespaces"
    },
    "code": 403
   }
  }
 ]
}

With this response, the user sees the following namespace dropdown:
[screenshot: namespaces dropdown]

So, in summary: in clusters where users do not have RBAC to list namespaces, there needs to be a programmatic way to modify the list of namespaces returned by the /api/v1/namespace endpoint so that it answers the question "what namespaces does this [auth-proxied] user have access to?" An acceptable alternative would be to make the namespaces list setting dynamic and/or configurable per user as well.

Kubeapps has implemented something similar (we worked w/ them on it), where you can configure the values trustedNamespaces.headerName and trustedNamespaces.headerPattern, which allow the backend API to extract "valid" accessible namespaces from another proxied request header. For the kube dashboard, since you already support proxying the Impersonate-* k8s request headers, one of those could probably be used; or we could implement something similar to Kubeapps, where the proxied header name is entirely configurable.

https://github.com/vmware-tanzu/kubeapps/blob/main/chart/kubeapps/values.yaml#L1614-L1626
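
To make this concrete, the backend side of such a fallback could look roughly like the sketch below (just an illustration; the header name, regex handling, and function are mine, not existing dashboard code): when listing namespaces at the cluster scope is forbidden, extract the user's namespaces from a trusted proxied header instead.

package namespaces

import (
	"net/http"
	"regexp"
	"strings"
)

// namespacesFromHeader extracts the namespaces a proxied user may access from a
// trusted header set by the auth proxy, e.g.
//   X-Consumer-Permissions: ns-tenant-a:base, ns-tenant-b:base
// The header name and pattern would come from configuration, analogous to
// Kubeapps' trustedNamespaces.headerName / trustedNamespaces.headerPattern.
func namespacesFromHeader(r *http.Request, headerName string, pattern *regexp.Regexp) []string {
	raw := r.Header.Get(headerName)
	if raw == "" {
		return nil
	}
	seen := map[string]bool{}
	var result []string
	for _, part := range strings.Split(raw, ",") {
		// The first capture group of the pattern is treated as the namespace name.
		if m := pattern.FindStringSubmatch(strings.TrimSpace(part)); len(m) > 1 && !seen[m[1]] {
			seen[m[1]] = true
			result = append(result, m[1])
		}
	}
	return result
}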

@mlbiam
Contributor

mlbiam commented Nov 21, 2023

Happy to help on a PR for this.

@floreks
Member

floreks commented Nov 21, 2023

You can specify the fallback namespace list and the default namespace via the settings page, or directly via the config map that stores the settings.

@mlbiam
Contributor

mlbiam commented Nov 21, 2023

You can specify the fallback namespace list and the default namespace via the settings page, or directly via the config map that stores the settings.

This doesn't really help in a multi-tenant environment because it's static. Users' access will differ depending on who is logged in, so having a static list doesn't solve the UX issue this feature request does.

@floreks
Member

floreks commented Nov 21, 2023

Using a custom header seems to be the only viable solution to me. It should not be hard to add on the backend side. Unfortunately, I have almost no free time lately to work on the Dashboard. The feature work would have to be contributed by someone else. I could give some pointers if needed though.

@mlbiam
Contributor

mlbiam commented Nov 21, 2023

@floreks would it need to be against the 3.0.0 branch? I don't know how close that is to release.

@floreks
Member

floreks commented Nov 21, 2023

@floreks would it need to be against the 3.0.0 branch? I don't know how close that is to release.

Yes, it would have to be a PR to the main branch. It is marked alpha just because the Helm chart configuration might not be flexible enough to work for most users. Other than that, the API and frontend didn't really change; they were just split into separate containers.

@mecampbellsoup
Author

mecampbellsoup commented Nov 29, 2023

I can work on this, but what is the easiest way to get a containerized dev workflow going? We typically use Skaffold, which works great w/ relatively vanilla builds, but the UI's API Dockerfile does some unconventional stuff (it seems to depend on the Makefile having populated the .dist directory, for instance)... any dev tips would be appreciated!

@mecampbellsoup
Author

mecampbellsoup commented Dec 1, 2023

@floreks or @mlbiam - do either of you know how I can get the API backend working with a combination of in-cluster config (i.e. the kubernetes-dashboard service account should mount and use the standard config via /var/run/secrets/kubernetes.io/serviceaccount in the pod) plus impersonation headers?

My apiserver keeps responding w/ 401 unauthorized:

E1201 22:37:42.011121       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, Token has been invalidated]"
E1201 22:37:42.011137       1 authentication.go:63] "Unable to authenticate the request" err="[invalid bearer token, Token has been invalidated]"
I1201 22:37:42.011206       1 httplog.go:94] "HTTP" verb="GET" URI="/api/v1/namespaces/tenant-dev-eac4f6-ns1" latency="2.930588ms" userAgent="dashboard/UNKNOWN" srcIP="10.241.126.247:42138" resp=401
I1201 22:37:42.011216       1 httplog.go:94] "HTTP" verb="GET" URI="/api/v1/namespaces/tenant-dev-eac4f6-ns1/events" latency="3.123772ms" userAgent="dashboard/UNKNOWN" srcIP="10.241.126.247:42138" resp=401

I'm passing the SA secret's token data via Authorization: Bearer {token}:

(⎈|default:kubernetes-dashboard)mcampbell-1➜  ~/github/coreweave/k8s-services/dev : mc/cloud/add-kube-dashboard-chart ✘ :✹✭ ᐅ  k neat get secret api-kubernetes-dashboard-token-p9jq9 | yq -r .data.token | base64 -d
eyJhbGciOiJSUzI1NiIsImtpZCI6IjJzLVhhOEpsLUtTREdKYXlqaDMtQktqYkZ4WDQwNW1aYjhhM2dRbHUybWsifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhcGkta3ViZXJuZXRlcy1kYXNoYm9hcmQtdG9rZW4tcDlqcTkiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiYXBpLWt1YmVybmV0ZXMtZGFzaGJvYXJkIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMWU3ZjllMWUtMWM1NC00ZTkxLWJhYWMtY2JlYzZmOTg0MWM1Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmFwaS1rdWJlcm5ldGVzLWRhc2hib2FyZCJ9.YGsxcg0h3J4ajEfyYRoN5aCpQb_A2N8aOUhegQX1bWrxWD74TK9XszNnS_Om2D9jspEVlNmQAnkiHljdBVCe6Em8PLcTdTOpIrKLxQ6_xihpfPLEXw7mKAQo_D-d4YK-TNAaNUVDd88k9ZpSBAhoqhdahal6gXTmuoCAU6bBWilBkUjaopOt8BL6Q01WgsGt6tSTVvRNXiFekuLIy6J10c4-7Y9Lst7qkkJfmY2Arv3Onrk3B5F3JMo6mM0SOhVEfR-6hvsGPosFDv_LVKTE9J0fiB-vQdeGTsED941fZymiI-OGqDQwnWj6RYoZJA1WoUGcvO9PxgwA7o-Cue77RA

I'm aiming for:

  • kubernetes-dashboard service account can impersonate users (I added this RBAC manually)
  • the user to impersonate is passed via proxy headers set by our API gateway (we have a reverse proxy in front of dashboard)
  • requests from dashboard api -> k8s apiserver should authenticate first as the kubernetes-dashboard SA, and the requests should include Impersonate-* headers

@mlbiam
Contributor

mlbiam commented Dec 1, 2023

The permissions on the dashboard's SA are irrelevant. With impersonation you need an Authorization header with a token that is allowed to impersonate the user, plus the correct headers for who to impersonate. The dashboard's SA is only used when there's no external token or login cookie.
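
At the HTTP level that means both pieces have to be on the same request to the apiserver; a rough Go sketch (the token and identity values are placeholders):

package impersonation

import (
	"fmt"
	"net/http"
)

// newImpersonatedRequest authenticates as a privileged identity (the bearer
// token, which must be RBAC-authorized to impersonate) and acts as the end
// user via the Impersonate-* headers. All argument values are placeholders.
func newImpersonatedRequest(apiServer, path, token, user string, groups []string) (*http.Request, error) {
	req, err := http.NewRequest(http.MethodGet, apiServer+path, nil)
	if err != nil {
		return nil, fmt.Errorf("building request: %w", err)
	}
	req.Header.Set("Authorization", "Bearer "+token) // who we authenticate as
	req.Header.Set("Impersonate-User", user)         // who we act as
	for _, g := range groups {
		req.Header.Add("Impersonate-Group", g)
	}
	return req, nil
}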

@mecampbellsoup
Author

@mlbiam but why is it structured that way? It seems to me that one should be able to use in-cluster config (i.e. the kubernetes-dashboard SA is the authenticated entity making requests to the k8s apiserver), with the Impersonate-* headers added on top. To be clear, when I say "use in-cluster config", I mean that my api pod has mounted the usual k8s config data at /var/run/secrets/kubernetes.io/serviceaccount, including /var/run/secrets/kubernetes.io/serviceaccount/token.

As of right now, it seems I have to copy-paste the SA's secret token (i.e. the same token that is mounted at /var/run/secrets/kubernetes.io/serviceaccount/token in the api pod), and have my auth proxy pass it through as the Authorization header.

That seems quite roundabout. I would rather the API code "fall back" (not sure what the right term is) to using in-cluster config when my auth proxy has provided Impersonate-* headers but not an Authorization bearer token.
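
Concretely, something like the following is what I have in mind - a rough client-go sketch of that fallback (illustrative only, not how the dashboard is wired today):

package client

import (
	"net/http"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// clientForRequest uses the pod's in-cluster ServiceAccount credentials and,
// when the proxied request carries Impersonate-* headers but no Authorization
// bearer token, layers impersonation on top of them.
func clientForRequest(r *http.Request) (kubernetes.Interface, error) {
	// Token and CA come from /var/run/secrets/kubernetes.io/serviceaccount.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return nil, err
	}
	if user := r.Header.Get("Impersonate-User"); user != "" && r.Header.Get("Authorization") == "" {
		cfg.Impersonate = rest.ImpersonationConfig{
			UserName: user,
			Groups:   r.Header.Values("Impersonate-Group"),
		}
	}
	return kubernetes.NewForConfig(cfg)
}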

Thoughts?

@mlbiam
Contributor

mlbiam commented Dec 4, 2023

@mlbiam but why is it structured that way? It seems to me that one should be able to use in-cluster config (i.e. the kubernetes-dashboard SA is the authenticated entity making requests to the k8s apiserver), with the Impersonate-* headers added on top.

First, the dashboard doesn't have any mechanism for authenticating or authorizing your request. So unless you limit the scope of impersonation via RBAC to a single user, you have no way to make sure that the headers coming in are valid.

Impersonation is essentially privileged access, which means anyone who gets into the dashboard and bypasses the security has access to the cluster in a privileged state. Keeping this privilege at a higher level (the reverse proxy) cuts down on the risk from a breach. If there were some kind of application-level bug in the dashboard, then, running in the mode you suggest, an attacker could elevate themselves using the ServiceAccount from the dashboard's Pod. With all the tutorials for the dashboard that involve exposing it via kubectl proxy or some other easy mechanism, you're asking for a breach. This way, an attacker has to focus on your proxy, which is hopefully hardened anyway.

Imagine a scenario where a vulnerable Ingress controller allows an Ingress to be set up by an unprivileged user directly to the dashboard, and now that user can inject impersonation headers without any kind of verification, because the dashboard's own identity can impersonate the user. The dashboard has no mechanism for validating the inbound request, so an issue unrelated to the dashboard could lead to a breach.

@mecampbellsoup
Author

@mlbiam I see what you're saying.

In our setup, we have configured our k8s apiserver as follows:

--requestheader-username-headers=X-Consumer-Username
--requestheader-group-headers=X-Consumer-Permissions

So, it would be preferable and safer to point the dashboard API at the k8s apiserver via a protected ingress, e.g. https://my-ingress.cloud/k8s. That way all traffic from the dashboard API to the apiserver goes through our authenticating reverse proxy, like all other traffic/services to the apiserver. Here is what happens when I try that (Skaffold/Helm deploy output):

Helm release api not installed. Installing...
coalesce.go:286: warning: cannot overwrite table with non table for kubernetes-dashboard.kubernetes-dashboard.app.ingress.annotations (map[])
NAME: api
LAST DEPLOYED: Mon Dec  4 13:58:14 2023
NAMESPACE: kubernetes-dashboard
STATUS: deployed
REVISION: 1
TEST SUITE: None
Waiting for deployments to stabilize...
 - kubernetes-dashboard:deployment/api-kubernetes-dashboard-api: creating container kubernetes-dashboard-api
    - kubernetes-dashboard:pod/api-kubernetes-dashboard-api-748d6895c5-x49rk: creating container kubernetes-dashboard-api
 - kubernetes-dashboard:deployment/api-kubernetes-dashboard-web: creating container kubernetes-dashboard-web
    - kubernetes-dashboard:pod/api-kubernetes-dashboard-web-8797c5f96-bmpfh: creating container kubernetes-dashboard-web
 - kubernetes-dashboard:deployment/api-kubernetes-dashboard-api: container kubernetes-dashboard-api terminated with exit code 1
    - kubernetes-dashboard:pod/api-kubernetes-dashboard-api-748d6895c5-x49rk: container kubernetes-dashboard-api terminated with exit code 1
      > [api-kubernetes-dashboard-api-748d6895c5-x49rk kubernetes-dashboard-api] 2023/12/04 13:58:19 Starting overwatch
      > [api-kubernetes-dashboard-api-748d6895c5-x49rk kubernetes-dashboard-api] 2023/12/04 13:58:19 Using apiserver-host location: https://cloud-app-kubernetes-ingress.cloud/k8s
      > [api-kubernetes-dashboard-api-748d6895c5-x49rk kubernetes-dashboard-api] 2023/12/04 13:58:19 Using namespace: kubernetes-dashboard
      > [api-kubernetes-dashboard-api-748d6895c5-x49rk kubernetes-dashboard-api] 2023/12/04 13:58:19 Using auth proxy request header for authorized namespaces: x-consumer-permissions
      > [api-kubernetes-dashboard-api-748d6895c5-x49rk kubernetes-dashboard-api] 2023/12/04 13:58:19 Applying regex pattern to namespaces header: r:ns-([a-z0-9-]+):base
      > [api-kubernetes-dashboard-api-748d6895c5-x49rk kubernetes-dashboard-api] 2023/12/04 13:58:19 Skipping in-cluster config
      > [api-kubernetes-dashboard-api-748d6895c5-x49rk kubernetes-dashboard-api] 2023/12/04 13:58:19 Using random key for csrf signing
      > [api-kubernetes-dashboard-api-748d6895c5-x49rk kubernetes-dashboard-api] 2023/12/04 13:58:19 Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service account's configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get "https://cloud-app-kubernetes-ingress.cloud/k8s/version": tls: failed to verify certificate: x509: certificate signed by unknown authority
      > [api-kubernetes-dashboard-api-748d6895c5-x49rk kubernetes-dashboard-api] Refer to our FAQ and wiki pages for more information: https://github.com/kubernetes/dashboard/wiki/FAQ
 - kubernetes-dashboard:deployment/api-kubernetes-dashboard-api failed. Error: container kubernetes-dashboard-api terminated with exit code 1.
Cleaning up...
release "api" uninstalled
Pruning images...
1/2 deployment(s) failed

However, as you can see, this runs into a TLS issue. I believe it's because the ingress uses a self-signed certificate (this is local development in my dev cluster), but one should probably be able to provide TLS CA/cert data, again similar to how Kubeapps does it... maybe this is the bug/thing to focus on instead?
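
For reference, client-go can already be pointed at a host behind a proxy with a custom CA, so the change is probably mostly about exposing that; a rough sketch (the flag/field wiring is my assumption, not an existing dashboard option):

package client

import (
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// clientViaIngress talks to the apiserver through the authenticating ingress
// and trusts its (self-signed) CA. caFile would be mounted from a Secret or
// ConfigMap; host matches what --apiserver-host is set to.
func clientViaIngress(host, caFile, tokenFile string) (kubernetes.Interface, error) {
	ca, err := os.ReadFile(caFile)
	if err != nil {
		return nil, err
	}
	cfg := &rest.Config{
		Host:            host, // e.g. https://cloud-app-kubernetes-ingress.cloud/k8s
		TLSClientConfig: rest.TLSClientConfig{CAData: ca},
		BearerTokenFile: tokenFile,
	}
	return kubernetes.NewForConfig(cfg)
}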

@mlbiam
Contributor

mlbiam commented Dec 4, 2023

In our setup, we have configured our k8s apiserver as follows:

--requestheader-username-headers=X-Consumer-Username
--requestheader-group-headers=X-Consumer-Permissions

I don't think the dashboard has support for generic header passthrough. At least when I implemented impersonation, I think it only looks for the impersonation headers to pass through.

Get "https://cloud-app-kubernetes-ingress.cloud/k8s/version": tls: failed to verify certificate: x509: certificate signed by unknown authority

I think if you created a kubeconfig file that was accessible from your dev environment with your certificate and a token you'd be all set from that perspective.

@mecampbellsoup
Author

mecampbellsoup commented Dec 4, 2023

I think if you created a kubeconfig file that was accessible from your dev environment with your certificate and a token you'd be all set from that perspective.

The issue w/ using a kubeconfig is that the token is dynamic or variable. Each user that interacts with the dashboard (since we have a multi-tenant cluster) will provide a different token (i.e. Cookie: sessionid=123abc) that needs to be included w/ the request from dashboard api to k8s apiserver via our auth proxy ingress. Does that make sense?

I don't think the dashboard has support for generic header passthrough. At least when I implemented impersonation, I think it only looks for the impersonation headers to pass through.

Yes, but it would be easy to modify the configuration to make this customizable.

@mlbiam
Contributor

mlbiam commented Dec 4, 2023

The issue w/ using a kubeconfig is that the token is dynamic or variable. Each user that interacts with the dashboard (since we have a multi-tenant cluster) will provide a different token (i.e. Cookie: sessionid=123abc) that needs to be included w/ the request from dashboard api to k8s apiserver via our auth proxy ingress. Does that make sense?

Two different things. The kubeconfig used by the dashboard API doesn't do anything besides tell the dashboard where the API server is and what cert to use. The token is only used if --skip-login is enabled.

Yes, but would be easy to modify the configuration to enable this to be customizable.

  1. I would consider this a new issue that should be opened if you really want to implement it
  2. I would highly recommend against this approach. Header-based auth in Kubernetes doesn't require any authentication beyond having a valid client certificate (and I don't know whether the dashboard's connection to the API server would support one?). You're using an Ingress, so my guess is that your Ingress controller is what has the client cert, so that protection doesn't exist either. While I understand you want to do this for development, this feature makes it easier to build a very insecure environment. I'm all for making security easier, but not for making a breach-friendly environment easier.

@mecampbellsoup
Author

@mlbiam do you have an example of such a kubeconfig anywhere? I'm a little unclear on how to populate the commented-out users (and, to some extent, contexts) sections below:

apiVersion: v1
clusters:
  - cluster:
      certificate-authority-data: <base64-encoded string containing the "ca.crt" in the TLS secret associated w/ the ingress hostname on the following line>
      server: https://cloud-app-kubernetes-ingress.cloud/k8s  # This ingress host ensures all traffic is sent via our API gateway before hitting the k8s apiserver
    name: coreweave
contexts:
  - context:
      cluster: coreweave
      # namespace: is this needed?
      # user: is this needed?
    name: coreweave
current-context: coreweave
kind: Config
preferences: {}
###################################
# ASSUME ALL BELOW ARE COMMENTED OUT
# What "user" will this kubeconfig operate as?
users:
  - name: minikube
    user:
      client-certificate: /home/hector/.minikube/profiles/minikube/client.crt
      client-key: /home/hector/.minikube/profiles/minikube/client.key
users:
  - name: token-NNRMoro77pGm6FsZvkhU
    user:
      token: JpuaBbZvDjHR3xqayJ8YYR2NKabQrcJaUWnu5jm3

@mecampbellsoup
Author

mecampbellsoup commented Dec 4, 2023

I ended up adding this hack; hopefully it's clear why I need it, but LMK!

f6833e8#diff-53e31565fe24dd7b0ea4a3fd2d6fc93a50cbfc225ac3242a6cf268a44e5fa584R371-R376

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 3, 2024
@vaibhav2107
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 12, 2024