
Pod CPU usage check from Windows node summary #122196

Closed
wants to merge 1 commit into from

Conversation

knabben
Member

@knabben knabben commented Dec 6, 2023

What type of PR is this?

/kind bug
/sig windows

What this PR does / why we need it:

Moves the CPU usage and stats summary check into a polling function with a 2-minute delay (this covers the previous behavior of a 2-minute time.Sleep). If the limit is exceeded or the CPU usage reads 0, two more retries happen before timing out, which should reduce the flakes.

I could not replicate the issue on a vSphere cluster; the kubelet stats seem stable there. I still need to confirm the containerd version (here it is 1.6.24) and will try to replicate again.

Which issue(s) this PR fixes:

Fixes #122092

@k8s-ci-robot
Contributor

Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected, please follow our release note process to remove it.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. kind/bug Categorizes issue or PR as related to a bug. do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. sig/windows Categorizes an issue or PR as relevant to SIG Windows. labels Dec 6, 2023
@k8s-ci-robot
Contributor

Please note that we're already in Test Freeze for the release-1.29 branch. This means every merged PR will be automatically fast-forwarded via the periodic ci-fast-forward job to the release branch of the upcoming v1.29.0 release.

Fast forwards are scheduled to happen every 6 hours, whereas the most recent run was: Tue Dec 5 22:11:44 UTC 2023.

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. labels Dec 6, 2023
@knabben
Member Author

knabben commented Dec 6, 2023

/test

@k8s-ci-robot
Contributor

@knabben: The /test command needs one or more targets.
The following commands are available to trigger required jobs:

  • /test pull-cadvisor-e2e-kubernetes
  • /test pull-kubernetes-conformance-kind-ga-only-parallel
  • /test pull-kubernetes-coverage-unit
  • /test pull-kubernetes-dependencies
  • /test pull-kubernetes-dependencies-go-canary
  • /test pull-kubernetes-e2e-gce
  • /test pull-kubernetes-e2e-gce-100-performance
  • /test pull-kubernetes-e2e-gce-big-performance
  • /test pull-kubernetes-e2e-gce-cos
  • /test pull-kubernetes-e2e-gce-cos-canary
  • /test pull-kubernetes-e2e-gce-cos-no-stage
  • /test pull-kubernetes-e2e-gce-network-proxy-http-connect
  • /test pull-kubernetes-e2e-gce-scale-performance-manual
  • /test pull-kubernetes-e2e-kind
  • /test pull-kubernetes-e2e-kind-ipv6
  • /test pull-kubernetes-integration
  • /test pull-kubernetes-integration-go-canary
  • /test pull-kubernetes-kubemark-e2e-gce-scale
  • /test pull-kubernetes-node-e2e-containerd
  • /test pull-kubernetes-typecheck
  • /test pull-kubernetes-unit
  • /test pull-kubernetes-unit-go-canary
  • /test pull-kubernetes-update
  • /test pull-kubernetes-verify
  • /test pull-kubernetes-verify-go-canary

The following commands are available to trigger optional jobs:

  • /test check-dependency-stats
  • /test pull-ci-kubernetes-unit-windows
  • /test pull-crio-cgroupv1-node-e2e-eviction
  • /test pull-crio-cgroupv1-node-e2e-features
  • /test pull-crio-cgroupv1-node-e2e-hugepages
  • /test pull-crio-cgroupv1-node-e2e-resource-managers
  • /test pull-crio-cgroupv2-imagefs-e2e-diskpressure
  • /test pull-e2e-gce-cloud-provider-disabled
  • /test pull-kubernetes-conformance-image-test
  • /test pull-kubernetes-conformance-kind-ga-only
  • /test pull-kubernetes-conformance-kind-ipv6-parallel
  • /test pull-kubernetes-cos-cgroupv1-containerd-node-e2e
  • /test pull-kubernetes-cos-cgroupv1-containerd-node-e2e-features
  • /test pull-kubernetes-cos-cgroupv2-containerd-node-e2e
  • /test pull-kubernetes-cos-cgroupv2-containerd-node-e2e-eviction
  • /test pull-kubernetes-cos-cgroupv2-containerd-node-e2e-features
  • /test pull-kubernetes-cos-cgroupv2-containerd-node-e2e-serial
  • /test pull-kubernetes-crio-node-memoryqos-cgrpv2
  • /test pull-kubernetes-cross
  • /test pull-kubernetes-e2e-autoscaling-hpa-cm
  • /test pull-kubernetes-e2e-autoscaling-hpa-cpu
  • /test pull-kubernetes-e2e-capz-azure-disk
  • /test pull-kubernetes-e2e-capz-azure-disk-vmss
  • /test pull-kubernetes-e2e-capz-azure-file
  • /test pull-kubernetes-e2e-capz-azure-file-vmss
  • /test pull-kubernetes-e2e-capz-conformance
  • /test pull-kubernetes-e2e-capz-windows-alpha-feature-vpa
  • /test pull-kubernetes-e2e-capz-windows-alpha-features
  • /test pull-kubernetes-e2e-capz-windows-master
  • /test pull-kubernetes-e2e-capz-windows-serial-slow-hpa
  • /test pull-kubernetes-e2e-containerd-gce
  • /test pull-kubernetes-e2e-ec2
  • /test pull-kubernetes-e2e-ec2-arm64
  • /test pull-kubernetes-e2e-ec2-conformance
  • /test pull-kubernetes-e2e-ec2-conformance-arm64
  • /test pull-kubernetes-e2e-gce-canary
  • /test pull-kubernetes-e2e-gce-correctness
  • /test pull-kubernetes-e2e-gce-cos-alpha-features
  • /test pull-kubernetes-e2e-gce-csi-serial
  • /test pull-kubernetes-e2e-gce-device-plugin-gpu
  • /test pull-kubernetes-e2e-gce-disruptive-canary
  • /test pull-kubernetes-e2e-gce-kubelet-credential-provider
  • /test pull-kubernetes-e2e-gce-network-proxy-grpc
  • /test pull-kubernetes-e2e-gce-serial
  • /test pull-kubernetes-e2e-gce-serial-canary
  • /test pull-kubernetes-e2e-gce-storage-disruptive
  • /test pull-kubernetes-e2e-gce-storage-slow
  • /test pull-kubernetes-e2e-gce-storage-snapshot
  • /test pull-kubernetes-e2e-gci-gce-autoscaling
  • /test pull-kubernetes-e2e-gci-gce-ingress
  • /test pull-kubernetes-e2e-gci-gce-ipvs
  • /test pull-kubernetes-e2e-inplace-pod-resize-containerd-main-v2
  • /test pull-kubernetes-e2e-kind-alpha-beta-features
  • /test pull-kubernetes-e2e-kind-alpha-features
  • /test pull-kubernetes-e2e-kind-beta-features
  • /test pull-kubernetes-e2e-kind-canary
  • /test pull-kubernetes-e2e-kind-dual-canary
  • /test pull-kubernetes-e2e-kind-ipv6-canary
  • /test pull-kubernetes-e2e-kind-ipvs-dual-canary
  • /test pull-kubernetes-e2e-kind-kms
  • /test pull-kubernetes-e2e-kind-multizone
  • /test pull-kubernetes-e2e-storage-kind-disruptive
  • /test pull-kubernetes-e2e-ubuntu-gce-network-policies
  • /test pull-kubernetes-integration-eks
  • /test pull-kubernetes-kind-dra
  • /test pull-kubernetes-kind-json-logging
  • /test pull-kubernetes-kind-text-logging
  • /test pull-kubernetes-kubemark-e2e-gce-big
  • /test pull-kubernetes-linter-hints
  • /test pull-kubernetes-local-e2e
  • /test pull-kubernetes-node-arm64-e2e-containerd-ec2
  • /test pull-kubernetes-node-arm64-e2e-containerd-serial-ec2
  • /test pull-kubernetes-node-arm64-ubuntu-serial-gce
  • /test pull-kubernetes-node-crio-cgrpv1-evented-pleg-e2e
  • /test pull-kubernetes-node-crio-cgrpv2-e2e
  • /test pull-kubernetes-node-crio-cgrpv2-e2e-kubetest2
  • /test pull-kubernetes-node-crio-cgrpv2-imagefs-e2e
  • /test pull-kubernetes-node-crio-e2e
  • /test pull-kubernetes-node-crio-e2e-kubetest2
  • /test pull-kubernetes-node-e2e-containerd-1-7-dra
  • /test pull-kubernetes-node-e2e-containerd-alpha-features
  • /test pull-kubernetes-node-e2e-containerd-ec2
  • /test pull-kubernetes-node-e2e-containerd-features
  • /test pull-kubernetes-node-e2e-containerd-features-kubetest2
  • /test pull-kubernetes-node-e2e-containerd-kubetest2
  • /test pull-kubernetes-node-e2e-containerd-serial-ec2
  • /test pull-kubernetes-node-e2e-containerd-sidecar-containers
  • /test pull-kubernetes-node-e2e-containerd-standalone-mode
  • /test pull-kubernetes-node-e2e-containerd-standalone-mode-all-alpha
  • /test pull-kubernetes-node-e2e-crio-dra
  • /test pull-kubernetes-node-kubelet-credential-provider
  • /test pull-kubernetes-node-kubelet-serial-containerd
  • /test pull-kubernetes-node-kubelet-serial-containerd-alpha-features
  • /test pull-kubernetes-node-kubelet-serial-containerd-kubetest2
  • /test pull-kubernetes-node-kubelet-serial-containerd-sidecar-containers
  • /test pull-kubernetes-node-kubelet-serial-cpu-manager
  • /test pull-kubernetes-node-kubelet-serial-cpu-manager-kubetest2
  • /test pull-kubernetes-node-kubelet-serial-crio-cgroupv1
  • /test pull-kubernetes-node-kubelet-serial-crio-cgroupv2
  • /test pull-kubernetes-node-kubelet-serial-hugepages
  • /test pull-kubernetes-node-kubelet-serial-memory-manager
  • /test pull-kubernetes-node-kubelet-serial-pod-disruption-conditions
  • /test pull-kubernetes-node-kubelet-serial-topology-manager
  • /test pull-kubernetes-node-kubelet-serial-topology-manager-kubetest2
  • /test pull-kubernetes-node-swap-fedora
  • /test pull-kubernetes-node-swap-fedora-serial
  • /test pull-kubernetes-node-swap-ubuntu-serial
  • /test pull-kubernetes-unit-experimental
  • /test pull-kubernetes-verify-lint
  • /test pull-publishing-bot-validate

Use /test all to run the following jobs that were automatically triggered:

  • pull-kubernetes-conformance-kind-ga-only-parallel
  • pull-kubernetes-conformance-kind-ipv6-parallel
  • pull-kubernetes-dependencies
  • pull-kubernetes-e2e-capz-windows-master
  • pull-kubernetes-e2e-ec2
  • pull-kubernetes-e2e-ec2-conformance
  • pull-kubernetes-e2e-gce
  • pull-kubernetes-e2e-gce-canary
  • pull-kubernetes-e2e-kind
  • pull-kubernetes-e2e-kind-ipv6
  • pull-kubernetes-integration
  • pull-kubernetes-linter-hints
  • pull-kubernetes-node-e2e-containerd
  • pull-kubernetes-typecheck
  • pull-kubernetes-unit
  • pull-kubernetes-verify
  • pull-kubernetes-verify-lint

In response to this:

/test


@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: knabben
Once this PR has been reviewed and has the lgtm label, please assign jsturtevant for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the sig/testing Categorizes an issue or PR as relevant to SIG Testing. label Dec 6, 2023
@knabben
Member Author

knabben commented Dec 6, 2023

/test pull-kubernetes-verify

@knabben
Member Author

knabben commented Dec 6, 2023

/test pull-kubernetes-e2e-capz-windows-master

@jsturtevant
Contributor

/triage accepted
/priority important-soon

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. labels Dec 6, 2023
Contributor

@jsturtevant jsturtevant left a comment


The upstream test is using containerd 1.7. Maybe that is the difference on vSphere?

Seems strange that we get 0 after the pods run for 2 minutes. Any ideas where it might be failing? Is the get-summary call failing, so we end up with 0? Or is the actual reported value for the pod zero?

Review thread on test/e2e/windows/cpu_limits.go (outdated, resolved)
@knabben
Member Author

knabben commented Dec 6, 2023

The upstream test is using containerd 1.7. Maybe that is the difference on vSphere?

Seems strange that we get 0 after the pods run for 2 minutes. Any ideas where it might be failing? Is the get-summary call failing, so we end up with 0? Or is the actual reported value for the pod zero?

Yes, I saw this behavior while the pod was running in the first <10 seconds, not after the sleep.

From the logs dump the pod was still Running when the test failed, so I suppose the value reported by the stats summary is the failure point (I could not replicate it either; the CPU usage was always ~500m after the sleep). I will try again with 1.7.

@knabben knabben force-pushed the windows-retry-cpu-check branch 3 times, most recently from 820318c to 32e7b79 Compare December 6, 2023 21:34
@jsturtevant
Contributor

so I suppose the value reported by the stats summary is the failure point (I could not replicate it either; the CPU usage was always ~500m after the sleep)

If this is the case, we might be able to wrap that call in a retry instead of waiting an additional 2 minutes?

@knabben
Member Author

knabben commented Dec 6, 2023

so I suppose the value reported by the stats summary is the failure point (I could not replicate it either; the CPU usage was always ~500m after the sleep)

I would like to reproduce this with 1.7; for now I am only supposing, based on the behavior observed on 1.6.

if this is the case we might be able to wrap that call in retry instead of waiting an additional 2 mins?

Yes, I am not sure where the 2 minutes are coming from.

We still need to wait an initial amount of time for the cpustress pod to start its "job", since a 0 value at the beginning is valid. If we do a retry call at that point (maybe after 30 seconds) and check for the ~500m value, it seems a good test IMO.

So the test could be renamed to something like "Container limits should not exceed the threshold".

@knabben
Member Author

knabben commented Dec 6, 2023

For this, it is a matter of changing the call to:

immediate := true
wait.PollUntilContextTimeout(ctx, 30*time.Second, 6*time.Minute, immediate, func(ctx context.Context) (bool, error) { ... })

@jsturtevant
Contributor

If we do a retry call (maybe after 30 seconds) at this point, and check the ~500m value it seems a good test IMO

I generally agree. This test should start up the pod, and once CPU is being consumed we should be verifying that it doesn't go over 500m. If that is the case, we could do something like gomega.Eventually and Consistently to achieve that, as described in https://github.com/kubernetes/community/blob/master/contributors/devel/sig-testing/writing-good-e2e-tests.md#polling-and-timeouts

@knabben knabben force-pushed the windows-retry-cpu-check branch 3 times, most recently from dd0dd1d to a7394d2 Compare December 9, 2023 19:12
@knabben
Member Author

knabben commented Dec 9, 2023

I generally agree. This test should start up the pod, and once CPU is being consumed we should be verifying that it doesn't go over 500m. If that is the case, we could do something like gomega.Eventually and Consistently to achieve that, as described in https://github.com/kubernetes/community/blob/master/contributors/devel/sig-testing/writing-good-e2e-tests.md#polling-and-timeouts

Makes sense in terms of reliability; I changed the usage in both functions and renamed the test function.

@jsturtevant
Contributor

We noticed during SIG Windows triage today that we only see this error on 1.7+. Our 1.6 test cluster does not have this flake. @knabben, can you see if the changes work against a 1.7 vSphere cluster, or if you can reproduce the error before these changes?

@knabben
Member Author

knabben commented Dec 12, 2023

Running with 1.7.6 on Windows CAPV, these are the errors. The test seems flaky with the official e2e.test 1.18.0; there is definitely some different behavior on 1.7:

[FAILED] Pod cpu-resources-test-windows-8357/cpulimittest-1936531e-6dd0-4d36-9d79-01a9a64c8534 reported usage is 1.06193494, but it should not exceed limit by > 5%
  In [It] at: test/e2e/windows/cpu_limits.go:98 @ 12/12/23 15:41:52.014
[FAILED] Pod cpu-resources-test-windows-1393/cpulimittest-b6954b14-fc83-4362-89db-737172c07e06 reported usage is 0.6330031110000001, but it should not exceed limit by > 5%
  In [It] at: test/e2e/windows/cpu_limits.go:98 @ 12/12/23 15:49:07.446
  STEP: Gathering node summary stats @ 12/12/23 16:13:43.427
  Dec 12 16:13:43.719: INFO: Pod cpulimittest-00ece286-5207-4182-90f9-26c3344cbdf0 usage: 0
  [FAILED] in [It] - test/e2e/windows/cpu_limits.go:95 @ 12/12/23 16:13:43.719
  STEP: Ensuring cpu doesn't exceed limit by >5% @ 12/12/23 16:17:12.076
  STEP: Gathering node summary stats @ 12/12/23 16:17:12.076
  Dec 12 16:17:12.428: INFO: Pod cpulimittest-e70f700d-0f27-46db-9741-a405e6263f5c usage: 0.49151226400000003
  STEP: Gathering node summary stats @ 12/12/23 16:17:12.428
  Dec 12 16:17:12.983: INFO: Pod cpulimittest-c77a0f28-1cd0-466c-81d3-6769d984648a usage: 0

@jsturtevant
Contributor

Those look consistent with what we are seeing in capz (#122092). This means it is likely a bug in containerd or in the processing of the stats in the kubelet.

@knabben
Member Author

knabben commented Dec 12, 2023

/hold

@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Dec 12, 2023
@knabben
Member Author

knabben commented Dec 12, 2023

Removing the Consistently statement; with 1.7 this won't pass. Keeping the Eventually statement to allow the retry in both the 0 and >500m cases within 2 minutes.

/unhold

@k8s-ci-robot k8s-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Dec 12, 2023
@jsturtevant
Contributor

Removing the Consistently statement; with 1.7 this won't pass. Keeping the Eventually statement to allow the retry in both the 0 and >500m cases within 2 minutes.

It seems like this is hiding a bug somewhere else in the stack; we should track that down instead of adjusting the test to allow for inconsistency.

@jsturtevant
Contributor

We've tracked down the bug to containerd: containerd/containerd#9531

@dims dims added the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Mar 5, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closed this PR.

In response to this:

/close


Labels
area/test cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. sig/testing Categorizes an issue or PR as relevant to SIG Testing. sig/windows Categorizes an issue or PR as relevant to SIG Windows. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. triage/accepted Indicates an issue or PR is ready to be actively worked on.
Projects
Status: Done
Development

Successfully merging this pull request may close these issues.

kubelet /stats/summary returns Zero from CPU usageNanoCores stats when more than one caller
5 participants