
Fleet RBAC #385

Closed
Jasstkn opened this issue May 21, 2021 · 26 comments
@Jasstkn

Jasstkn commented May 21, 2021

Hello! I'm trying to configure restricted access to deployment via Fleet.
Case: a user has owner permissions in a specific cluster, e.g. sandbox, but they can't see the Continuous Delivery tab with this role. How can I grant specific access to the Fleet API so that the user can deploy only to the allowed cluster?

@abelnieva

same issue here

@markus-obstmayer

I also have the same issue.

@tonyjsolano

I would like to know about this as well

@jleni

jleni commented Jul 18, 2021

Any news about this?

@Hryhorii-Tatsyi

same issue

@matteogazzadi

same issue also for us

@ArtourK

ArtourK commented Oct 8, 2021

This is a showstopper for us!
We can't grant Admin rights in Rancher to users who only need to manage Fleet for some clusters.

@markus-obstmayer

Same here. It's also a showstopper for us.

@ADustyOldMuffin

A quick update: if you're using Rancher, you can grant this by using GlobalRoles.

Example:

```yaml
apiVersion: management.cattle.io/v3
kind: GlobalRole
metadata:
  generateName: gr-
displayName: FleetAccess
description: Used to view and access Continuous Delivery in Rancher.
builtin: false
newUserDefault: false
rules:
- apiGroups:
  - fleet.cattle.io
  resources:
  - bundles
  - bundledeployments
  - clusters
  - clustergroups
  - gitrepos
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - management.cattle.io
  resources:
  - fleetworkspaces
  verbs:
  - get
  - list
  - watch
```

This can be done via the UI or applied as a Kubernetes resource if you're using Rancher, and it works. I only have read permissions on these right now, but you can easily extend them (or just use * for all verbs).
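To actually assign such a role to a user, it can be paired with a GlobalRoleBinding. A minimal sketch, where the role name and user ID are placeholders (your generated `gr-` name and actual Rancher user ID will differ):

```yaml
# Hypothetical binding: role name and user ID are placeholders.
apiVersion: management.cattle.io/v3
kind: GlobalRoleBinding
metadata:
  generateName: grb-
globalRoleName: gr-fleet-access   # name of the GlobalRole created above
userName: u-abc123                # Rancher user ID to grant the role to
```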

@manicole

Thanks @ADustyOldMuffin !

Creating a GlobalRole seems to be necessary to access fleetworkspaces. However, I'd like my user to be limited to its own cluster (local) and its own namespace(s)/project(s) when deploying workloads. Any clue on how to do this?

@ADustyOldMuffin

> Thanks @ADustyOldMuffin !
>
> Creating a GlobalRole seems to be necessary to access fleetworkspaces. However, I'd like my user to be limited to its own cluster (local) and its own namespace(s)/project(s) when deploying workloads. Any clue on how to do this?

This would require Fleet to deploy to/deal with namespaces other than fleet-default. You might check other issues, as I believe that's an enhancement currently underway.

@Torkolis

Torkolis commented Apr 6, 2022

> A quick update, if you're using Rancher you can grant this by using GlobalRoles. […]

*(quoting @ADustyOldMuffin's GlobalRole example above)*

Hi, I have done this in Rancher 2.6.4 through the UI, but unfortunately I still can't see Continuous Delivery.

Any ideas?

@manno
Member

manno commented Oct 20, 2022

I'd be interested in feedback on https://fleet.rancher.io/next/multi-tenancy
The bundle labels and allowedTargetNamespace only work with the latest Fleet, though.

@sowmyav27

@manno Could we add info in a QA Template to validate this issue?

@manno
Member

manno commented Dec 8, 2022

Additional QA

Most of the functionality was already part of Fleet; however, it was not well known.
These PRs are small, but new:

Problem

How can we support a setup where one fleet installation serves multiple tenants?
Instead of dividing up whole clusters between users, users deploy isolated applications to shared clusters.
This is useful if a user has one cluster per site and multiple teams need to deploy to all/some of these sites.

Solution

The suggested approach is documented here: https://fleet.rancher.io/multi-tenancy

Users are not allowed to change Cluster resources; they only create GitRepo resources in specially prepared namespaces.
These namespaces are matched by a BundleNamespaceMapping resource, which selects the available clusters.
Optionally, operators can put a GitRepoRestriction resource into such a namespace to restrict the namespaces available on the downstream cluster.
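For illustration, a GitRepoRestriction in such a tenant namespace might look like the following sketch; the namespace names are assumptions, not values from this thread:

```yaml
# Hypothetical restriction, placed in the tenant's GitRepo namespace.
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepoRestriction
metadata:
  name: restriction
  namespace: team-one          # hypothetical tenant namespace holding the GitRepos
# Limit which downstream namespaces GitRepos in this namespace may target.
allowedTargetNamespaces:
- team-one-apps                # hypothetical allowed target namespace
```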

These PRs were necessary:

More documentation:

Testing

Engineering Testing

Manual Testing

The docs describe how to set up multi-tenancy and how to create a limited user to test it.
@sowmyav27 Sure

Automated Testing

QA Testing Considerations

Integration in the Rancher UI is still being worked on. Most of the functionality existed already. Maybe a full QA on multi-tenancy should wait for the feature to arrive in Rancher, as there are still some open questions regarding UX.

@xhejtman

xhejtman commented Jan 5, 2023

Hello, can this be applied to a situation where a user deploys a GitRepo object in a namespace, and the Git repo is only allowed to be deployed into that namespace, without an admin needing to set up permissions for that particular namespace in advance?

E.g., can we set up rights so that the namespace in the GitRepo is completely ignored and the deployment is done in the same namespace as the GitRepo object?

@manno
Member

manno commented Jan 27, 2023

> Hello, can this be applied to a situation where a user deploys a GitRepo object in a namespace, and the Git repo is only allowed to be deployed into that namespace, without an admin needing to set up permissions for that particular namespace in advance?

Yes, this can be achieved with a BundleNamespaceMapping: https://fleet.rancher.io/next/namespaces#cross-namespace-deployments
The user's namespace(s) need to match the mapping. The actual Cluster resource lives in another namespace, together with the BundleNamespaceMapping.

> E.g., can we set up rights so that the namespace in the GitRepo is completely ignored and the deployment is done in the same namespace as the GitRepo object?

You mean the namespace in the GitRepo CRD for the deployment? No, there is no way to override that globally. You can use a GitRepoRestriction resource to restrict the targetNamespace to certain values.
I'm not sure you should be deploying to the GitRepo's own namespace. Are you asking this for a single-cluster standalone Fleet deployment, because you want to deal with only one namespace per user?
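A minimal sketch of such a mapping, with hypothetical namespace and label names (the field names follow the Fleet BundleNamespaceMapping CRD): the mapping sits in the namespace that holds the Cluster resources and selects bundles from the tenant namespaces.

```yaml
# Hypothetical mapping, created next to the Cluster resources.
apiVersion: fleet.cattle.io/v1alpha1
kind: BundleNamespaceMapping
metadata:
  name: team-one-mapping
  namespace: fleet-clusters     # hypothetical namespace containing the Cluster resources
# Match bundles carrying this label...
bundleSelector:
  matchLabels:
    team: one
# ...created in namespaces matching this selector.
namespaceSelector:
  matchLabels:
    team: one
```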

@xhejtman

> Yes, this can be achieved with a BundleNamespaceMapping: https://fleet.rancher.io/next/namespaces#cross-namespace-deployments The user's namespace(s) need to match the mapping. The actual Cluster resource lives in another namespace, together with the BundleNamespaceMapping.

Yes, but the admin needs to set up the mapping in advance, right?

> You mean the namespace in the GitRepo CRD for the deployment? No, there is no way to override that globally. You can use a GitRepoRestriction resource to restrict the targetNamespace to certain values. I'm not sure you should be deploying to the GitRepo's own namespace. Are you asking this for a single-cluster standalone Fleet deployment, because you want to deal with only one namespace per user?

I have a multi-tenant cluster with many users and many namespaces, multiple namespaces per user. I want any user to be able to use GitRepos with Fleet on a self-service basis: the user creates their namespace and creates a GitRepo, and that's everything that needs to be done, with no requests to the admin. It also needs to be secure, so that a user cannot deploy a GitRepo into a namespace they do not own.

The scenario above could easily be covered if the deployment could be done into the namespace the GitRepo object is in, and only into that namespace. Or, if a service account for the deployment had to be specified and Fleet verified that the user owns that service account.

If I understand correctly, this can be done using a BundleNamespaceMapping, but the admin has to create the mapping in advance for it to work, and the user probably has to request the mapping from the admin once they have created the namespace for the GitRepo. Right?

@izaac

izaac commented Mar 23, 2023

I have this scenario.

The fleet user role has permissions on gitrepos; adding a user with this custom role grants access to Continuous Delivery in Rancher.
If I instead grant access to fleetworkspaces, without gitrepos, the user can't access Continuous Delivery.

Is that the expectation? Or should the user have access to Continuous Delivery and be able to list the workspaces?
There's a section in Continuous Delivery that lists the workspaces if the user is allowed to list them.

@mattfarina
Collaborator

@izaac Fleet Workspaces is a sub-section of Continuous Delivery (CD). I believe you should be able to use CD without being able to use workspaces. If fleetworkspaces is not accessible, the workspace features should simply not be accessible: when adding/editing a gitrepo, the options to deploy based on workspace should not be present, and the workspaces panel would not be shown. Note, I think this is the case.

If one has fleetworkspaces access but no gitrepos access, then I can't see how CD would work; it doesn't make sense. In the long run, a helper message may be useful to help someone navigate the permissions.

What you describe as happening with the CD section and the gitrepos and fleetworkspaces permissions makes sense. I expect those are ok.

@izaac

izaac commented Apr 4, 2023

Validated on Rancher v2.7-head over multiple commits, latest: 0c76b3f
Latest validated Dashboard: release-2.7.2 94650da49
Fleet: v0.6.0-rc.5

Validated based on documented test plan scenarios.

Separate issues during validation:
rancher/dashboard#8623
rancher/dashboard#8624
rancher/dashboard#8468
rancher/dashboard#8467

@izaac izaac closed this as completed Apr 4, 2023