
[Bug] Skipping a resource re-validation overwrites the original result #10169

Open
2 tasks done
prescott-core opened this issue May 3, 2024 · 5 comments · May be fixed by #10233
Assignees
Labels
bug Something isn't working reports Issues related to policy reports.

Comments

@prescott-core

Kyverno Version

1.11.4

Description

We provide Kubernetes clusters with Kyverno and Policy-Reporter to different DevOps teams. The teams run multiple environments such as "production" or "stage" on these clusters and receive Kyverno policy reports once per month.

We noticed an unexpected difference in policy reports between two identical environments. The difference comes from a report entry with the result "skip" and the message "skipping modified resource as validation results have not changed".
We expected this entry to be "warn", as in the other environment.

Policy-Reporter then seems to ignore this entry, and the differences confuse the DevOps teams. Without prior knowledge of the first validation result, we have no way to recover the original validation result.

As described in https://kyverno.io/docs/policy-reports/#report-result-logic, "skip" should only be set when a precondition or a PolicyException is the reason. I think the result should not be changed to "skip" when Kyverno skips the validation for technical reasons.
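The concern can be sketched as a toy model (hypothetical code, not Kyverno's actual implementation): if the report updater overwrites an unchanged outcome with a generic "skip", the original result becomes unrecoverable from the report alone.

```python
# Toy model of the behavior described above -- NOT Kyverno's actual code.
# When re-validation yields the same outcome as before, the reported
# result is overwritten with "skip", losing the original "warn".

def update_report_entry(previous_result: str, new_result: str) -> str:
    """Hypothetical report-update rule mirroring the observed behavior."""
    if new_result == previous_result:
        # Kyverno's message: "skipping modified resource as validation
        # results have not changed" -- the entry is recorded as "skip".
        return "skip"
    return new_result

first = update_report_entry(previous_result="", new_result="warn")
second = update_report_entry(previous_result="warn", new_result="warn")
print(first, second)  # warn skip
```

On the first validation the entry is "warn"; after a no-op update it flips to "skip", which is exactly the masking described above.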

Slack discussion

No response

Troubleshooting

  • I have read and followed the documentation AND the troubleshooting guide.
  • I have searched other issues in this repository and mine is not recorded.
@prescott-core prescott-core added bug Something isn't working triage Default label assigned to all new issues indicating label curation is needed to fully organize. labels May 3, 2024

welcome bot commented May 3, 2024

Thanks for opening your first issue here! Be sure to follow the issue template!

@MariamFahmy98
Collaborator

Could you please share the policy and the resource manifests with us so we can reproduce the issue and fix it? Thanks.

@prescott-core
Author

OK, here's an example from one of our clusters.
I had to shorten things per our compliance policy, but hopefully you'll be able to reproduce this behaviour. The PolicyReport says this resource is skipped (result: skip):

---
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-multiple-replicas
  annotations:
    policies.kyverno.io/category: Best Practises
    policies.kyverno.io/minversion: 1.9.2
    policies.kyverno.io/severity: low
    policies.kyverno.io/subject: Deployment,StatefulSet
    policies.kyverno.io/title: Require Multiple Replicas
    policies.kyverno.io/scored: "false"
spec:
  background: false
  rules:
    - name: require-multiple-replicas
      match:
        any:
          - resources:
              kinds:
                - Deployment
                - StatefulSet
              operations:
                - CREATE
                - UPDATE
      validate:
        pattern:
          spec:
            replicas: ">1"
  validationFailureAction: Audit
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    workload.user.cattle.io/workloadselector: apps.deployment-default-test-1
  name: test-1
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      workload.user.cattle.io/workloadselector: apps.deployment-default-test-1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        workload.user.cattle.io/workloadselector: apps.deployment-default-test-1
      namespace: default
    spec:
      containers:
        - args:
            - '-f'
            - /dev/null
          command:
            - tail
          image: ubuntu:focal
          imagePullPolicy: Always
          resources:
            limits:
              cpu: 500m
              memory: 1908Mi
            requests:
              cpu: 100m
              memory: 256Mi
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      terminationGracePeriodSeconds: 30

@MariamFahmy98
Collaborator

MariamFahmy98 commented May 14, 2024

I am not able to reproduce the bug you are facing. I applied the policy first and then the deployment.

Here are the events:

7s          Warning   PolicyViolation           clusterpolicy/require-multiple-replicas   Deployment default/test-1: [require-multiple-replicas] fail; validation error: rule require-multiple-replicas failed at path /spec/replicas/
7s          Warning   PolicyViolation           deployment/test-1                         policy require-multiple-replicas/require-multiple-replicas fail: validation error: rule require-multiple-replicas failed at path /spec/replicas/

Here is the corresponding policy report:

$ kubectl get polr  
NAME                                   KIND         NAME     PASS   FAIL   WARN   ERROR   SKIP   AGE
5e15a912-004e-40e0-848f-cf879079cf17   Deployment   test-1   0      0      1      0       0      17s

$ kubectl get polr 5e15a912-004e-40e0-848f-cf879079cf17 -o yaml
apiVersion: wgpolicyk8s.io/v1alpha2
kind: PolicyReport
metadata:
  creationTimestamp: "2024-05-14T09:34:35Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: kyverno
  name: 5e15a912-004e-40e0-848f-cf879079cf17
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    kind: Deployment
    name: test-1
    uid: 5e15a912-004e-40e0-848f-cf879079cf17
  resourceVersion: "1588"
  uid: 16e9b67d-dc2d-428d-ab93-a121509c263b
results:
- category: Best Practises
  message: 'validation error: rule require-multiple-replicas failed at path /spec/replicas/'
  policy: require-multiple-replicas
  result: warn
  rule: require-multiple-replicas
  severity: low
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1715679255
scope:
  apiVersion: apps/v1
  kind: Deployment
  name: test-1
  namespace: default
  uid: 5e15a912-004e-40e0-848f-cf879079cf17
summary:
  error: 0
  fail: 0
  pass: 0
  skip: 0
  warn: 1

I tested it against 1.11 and the main branch, and both generate a report with warn as the result.

@MariamFahmy98
Collaborator

After updating the deployment, the policy report changes to have a skip result:

$ kubectl get polr 
NAME                                   KIND         NAME     PASS   FAIL   WARN   ERROR   SKIP   AGE
9856591e-9763-448b-8b2e-05396c82bb92   Deployment   test-1   0      0      0      0       1      2m45s

$ kubectl get polr 9856591e-9763-448b-8b2e-05396c82bb92 -o yaml
apiVersion: wgpolicyk8s.io/v1alpha2
kind: PolicyReport
metadata:
  creationTimestamp: "2024-05-14T09:50:01Z"
  generation: 2
  labels:
    app.kubernetes.io/managed-by: kyverno
  name: 9856591e-9763-448b-8b2e-05396c82bb92
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    kind: Deployment
    name: test-1
    uid: 9856591e-9763-448b-8b2e-05396c82bb92
  resourceVersion: "4466"
  uid: d3e3ccbb-c61f-4dec-86ec-1947efd02c61
results:
- category: Best Practises
  message: skipping modified resource as validation results have not changed
  policy: require-multiple-replicas
  resources:
  - apiVersion: apps/v1
    kind: Deployment
    name: test-1
    namespace: default
    uid: 9856591e-9763-448b-8b2e-05396c82bb92
  result: skip
  rule: require-multiple-replicas
  severity: low
  source: kyverno
  timestamp:
    nanos: 0
    seconds: 1715680338
scope:
  apiVersion: apps/v1
  kind: Deployment
  name: test-1
  namespace: default
  uid: 9856591e-9763-448b-8b2e-05396c82bb92
summary:
  error: 0
  fail: 0
  pass: 0
  skip: 1
  warn: 0
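Until this is fixed, consumers of the reports could at least flag entries whose "skip" result masks an unchanged earlier outcome by matching Kyverno's message text. A minimal sketch (the helper name and sample data are hypothetical, not part of Policy-Reporter or Kyverno):

```python
# Flag report entries marked "skip" only because re-validation was
# skipped, identified by the message text seen in the report above.
# Illustrative workaround sketch, not part of any tool.

MASKING_MESSAGE = (
    "skipping modified resource as validation results have not changed"
)

def masked_entries(results: list[dict]) -> list[dict]:
    """Return entries whose "skip" result hides an earlier outcome."""
    return [
        r for r in results
        if r.get("result") == "skip" and r.get("message") == MASKING_MESSAGE
    ]

report_results = [
    {"policy": "require-multiple-replicas", "result": "skip",
     "message": MASKING_MESSAGE},
]
print(len(masked_entries(report_results)))  # 1
```

This only detects the masking; it cannot recover the original "warn", which is the core of the bug report.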

@MariamFahmy98 MariamFahmy98 self-assigned this May 14, 2024
@MariamFahmy98 MariamFahmy98 added reports Issues related to policy reports. and removed triage Default label assigned to all new issues indicating label curation is needed to fully organize. labels May 14, 2024
@MariamFahmy98 MariamFahmy98 added this to the Kyverno Release 1.12.2 milestone May 14, 2024