[Bug] Pod.status subresource not working after upgrade from 1.9.4 #10182
Comments
Thanks for opening your first issue here! Be sure to follow the issue template!
- key: '{{request.operation}}'
  operator: In
  value:
    - UPDATE

or, equivalently:

- key: '{{request.operation}}'
  operator: Equals
  value: UPDATE
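For reference, a minimal sketch of where such a precondition sits inside a rule (the rule name and the Pod.status match are illustrative, borrowed from this thread):

rules:
  - name: remove-startup-taint
    match:
      any:
        - resources:
            kinds:
              - v1/Pod.status
    preconditions:
      all:
        - key: '{{request.operation}}'
          operator: In
          value:
            - UPDATE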
Hi, sorry, that was my fault. This is the ClusterPolicy I currently use:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: datadog-agent-startup-taint
spec:
  failurePolicy: Fail
  background: false
  webhookTimeoutSeconds: 30
  mutateExistingOnPolicyUpdate: false
  rules:
    - name: remove-startup-taint
      match:
        any:
          - resources:
              kinds:
                - v1/Pod.status
              namespaces:
                - datadog
              names:
                - datadog-*
      preconditions:
        all:
          - key: '{{request.object.metadata.ownerReferences[0].kind}}'
            operator: Equals
            value: DaemonSet
          - key: '{{request.object.metadata.ownerReferences[0].name}}'
            operator: Equals
            value: datadog
          - key: "{{ to_string((request.object.status.containerStatuses[?name == 'agent'].ready)[0] || false ) }}"
            operator: Equals
            value: 'true'
      # Mutates the target Node to remove the startup taint.
      mutate:
        targets:
          - apiVersion: v1
            kind: Node
            name: '{{request.object.spec.nodeName}}'
        patchStrategicMerge:
          spec:
            taints: "{{ target.spec.taints[?key != 'node.datadog.eu/agent-not-ready'] }}"

I also tried to add the update operations to the match block:

  - resources:
      kinds:
        - v1/Pod.status
      namespaces:
        - datadog
      names:
        - datadog-*
      operations:
        - UPDATE

It is still not working, even if I completely remove the whole block. I also tried to remove all other selectors except for kinds and operations:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: datadog-agent-startup-taint
spec:
  admission: true
  background: false
  failurePolicy: Fail
  mutateExistingOnPolicyUpdate: false
  rules:
    - match:
        any:
          - resources:
              kinds:
                - v1/Pod.status
              operations:
                - UPDATE
      mutate:
        patchStrategicMerge:
          spec:
            taints: "{{ target.spec.taints[?key != 'node.datadog.eu/agent-not-ready'] }}"
        targets:
          - apiVersion: v1
            kind: Node
            name: '{{request.object.spec.nodeName}}'
      name: remove-startup-taint
      skipBackgroundRequests: true
  validationFailureAction: Audit
  webhookTimeoutSeconds: 30

I do now get some error logs from the background-controller:

{
  "content": {
    "service": "background-controller",
    "message": "",
    "attributes": {
      "caller": "mutate/mutate.go:165",
      "level": "error",
      "resource": "v1/Pod/[namespace removed]/[podname removed]",
      "logger": {
        "name": "background"
      },
      "name": "ur-kdxxd",
      "error": "failed to mutate existing resource, rule remove-startup-taint, response error: : failed to substitute variables in target[0].Name {{request.object.spec.nodeName}}, value: <nil>, err: failed to resolve request.object.spec.nodeName at path : JMESPath query failed: Unknown key \"nodeName\" in path",
      "ts": "2024-05-09T18:22:53Z",
      "policy": "datadog-agent-startup-taint"
    }
  }
}
I get these errors as long as the pod is pending and its status is updated, e.g. by the cluster-autoscaler. But as soon as the pod is scheduled and started, Kyverno does not handle any further pod.status update events, e.g. when a container state changes.
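A possible workaround for the substitution errors on pending pods (a sketch, not confirmed by the maintainers in this thread) would be a precondition that skips pods without a node assignment:

preconditions:
  all:
    # Skip pods that have not been scheduled yet: spec.nodeName is empty
    # until the scheduler assigns a node (assumption: the rest of the
    # rule stays as in the policy above).
    - key: "{{ request.object.spec.nodeName || '' }}"
      operator: NotEquals
      value: ""

This would only silence the background-controller errors; it does not explain why later pod.status updates never reach the webhook.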
@PhilippMT, remove the
Kyverno Version
1.12.0
Kubernetes Version
1.29.x
Kubernetes Platform
EKS
Kyverno Rule Type
Mutate
Description
Hello,
I just upgraded Kyverno from 1.9.4 to the latest 1.12.1 with a completely new installation. I have a ClusterPolicy which was working fine with 1.9.4 but is not working with the latest 1.12.1. I also tried version 1.11.4, with the same result. The ClusterPolicy is triggered as soon as a pod of the datadog DaemonSet is in the ready state; it then mutates the node the pod runs on and removes a startup taint.
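For context, the startup-taint pattern means the nodes carry a taint until the agent pod is ready; a sketch of such a taint on a Node (only the key is taken from the policy in this thread, value and effect are assumptions):

spec:
  taints:
    # Keeps workloads off the node until the policy removes the taint.
    - key: node.datadog.eu/agent-not-ready
      value: "true"
      effect: NoSchedule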
These are the values I deployed the helm chart with
The generated webhook configuration looks fine to me.
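One way to double-check this is to dump Kyverno's resource mutating webhook and look for a pods/status rule (a sketch; the configuration name below is the default one created by Kyverno, adjust if your install differs):

kubectl get mutatingwebhookconfiguration kyverno-resource-mutating-webhook-cfg -o yaml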
The "pod.status" subresource seams not to work in general anymore. I also created an example with generate rules which creates ConfigMaps for UPDATES on "Deployment.status" and "DaemonSet.status" which is working as expected, but nothing happends on "Pod.status" updates.
Does anybody have any advice for me, in case there is a misconfiguration?
Best regards
Philipp
Steps to reproduce
Result: the two ConfigMaps "test-daemonset-abc" and "test-deployment-abc" are created, but not the ConfigMap "test-pod-abc".
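To verify, the generated ConfigMaps can be listed, e.g. (assuming they are created in the default namespace, as in the sketch above):

kubectl get configmaps -n default | grep '^test-'

Only test-daemonset-abc and test-deployment-abc appear; test-pod-abc is missing.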
Expected behavior
I expect the ClusterPolicy to react on pod.status Update events.
Screenshots
No response
Kyverno logs
Slack discussion
No response
Troubleshooting