
ci: separate pull-containerd-node-e2e for 1.5 branch #27912

Merged
52 changes: 51 additions & 1 deletion in config/jobs/containerd/containerd/containerd-presubmit-jobs.yaml
@@ -31,7 +31,6 @@ presubmits:
     decorate: true
     branches:
     - main
-    - release/1.5
     - release/1.6
     decoration_config:
       timeout: 100m
@@ -78,6 +77,57 @@ presubmits:
           --timeout=65m
           "--node-args=--image-config-file=${GOPATH}/src/k8s.io/test-infra/jobs/e2e_node/containerd/containerd-main-presubmit/image-config-presubmit.yaml -node-env=PULL_REFS=$(PULL_REFS)"
 
+  - name: pull-containerd-release-1.5-node-e2e
+    always_run: true
+    max_concurrency: 8
+    decorate: true
+    branches:
+    - release/1.5
+    decoration_config:
+      timeout: 100m
+    extra_refs:
+    - org: kubernetes
+      repo: kubernetes
+      base_ref: release-1.25
+      path_alias: k8s.io/kubernetes
+    - org: kubernetes
+      repo: test-infra
+      base_ref: master
+      path_alias: k8s.io/test-infra
+    annotations:
+      testgrid-dashboards: sig-node-containerd
+      testgrid-tab-name: pull-containerd-release-1.5-node-e2e
+      description: run node e2e tests
+    labels:
+      preset-service-account: "true"
+      preset-k8s-ssh: "true"
+    spec:
+      containers:
+      - name: pull-containerd-node-e2e
+        image: gcr.io/k8s-staging-test-infra/kubekins-e2e:v20221024-d0c013ee2d-master
+        env:
+        - name: USE_TEST_INFRA_LOG_DUMPING
+          value: "true"
+        command:
+        - sh
+        - -c
+        - >
+          runner.sh
+          ./test/build.sh
+          &&
+          cd ${GOPATH}/src/k8s.io/kubernetes
+          &&
+          /workspace/scenarios/kubernetes_e2e.py
+          --deployment=node
+          --gcp-project=cri-c8d-pr-node-e2e
+          --gcp-zone=us-central1-f
+          '--node-test-args=--container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/home/containerd/usr/local/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd\"]}"'
+          --node-tests=true
+          --provider=gce
+          '--test_args=--nodes=8 --focus="\[NodeConformance\]|\[NodeFeature:.+\]|\[NodeFeature\]" --skip="\[Flaky\]|\[Slow\]|\[Serial\]"'
+          --timeout=65m
+          "--node-args=--image-config-file=${GOPATH}/src/k8s.io/test-infra/jobs/e2e_node/containerd/containerd-main-presubmit/image-config-presubmit.yaml -node-env=PULL_REFS=$(PULL_REFS)"
@bobbypage (Member), Nov 9, 2022:

This image config is pointing to cos-stable (https://github.com/kubernetes/test-infra/blob/master/jobs/e2e_node/containerd/containerd-main-presubmit/image-config-presubmit.yaml#L3-L5).

cos-stable is using COS 101, which has cgroupv2 enabled. With cgroupv2 we should enable the systemd cgroup driver, since the default cgroupfs driver is unsupported.

I think we should also add CONTAINERD_SYSTEMD_CGROUP: 'true', as in https://github.com/kubernetes/test-infra/blob/master/jobs/e2e_node/containerd/containerd-main/cgroupv2/env-cgroupv2#L10,

and add --cgroup-driver=systemd to the kubelet flags, like in https://github.com/kubernetes/test-infra/blob/master/config/jobs/kubernetes/sig-node/containerd.yaml#L238.
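
For illustration, a minimal sketch (not part of this PR's diff) of how the --cgroup-driver=systemd suggestion could look inside the new job's --node-test-args. Only the extra kubelet flag is new; every other flag is copied from the diff above, and the companion CONTAINERD_SYSTEMD_CGROUP: 'true' change would go in the referenced image-config env file rather than in this job definition:

    # Sketch only: --cgroup-driver=systemd appended to --kubelet-flags; all other flags unchanged.
    '--node-test-args=--container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/home/containerd/usr/local/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service --cgroup-driver=systemd" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd\"]}"'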

Member Author:

Since all the images for containerd are updated to cos-stable, should we add the cgroup_env to all the tests except containerd-main/cgroupv1?

@bobbypage (Member), Nov 9, 2022:

Yup, that makes sense. Thanks!

Member Author:

I have one more doubt related to cgroupv2: this env is currently not set in the image files for the other tests, but they are still passing. So is it really a mandatory requirement?

@bobbypage (Member), Nov 9, 2022:

Not mandatory (if it is not specified, the cgroupfs driver will be used). But the systemd driver is what is recommended with cgroupv2 (https://kubernetes.io/docs/concepts/architecture/cgroups/#requirements) and what we are recommending to all k8s users on cgroupv2 OS images.

We want to make it the default in a future version of containerd - containerd/containerd#7319

(Docker already defaults to the systemd cgroup driver on cgroupv2) - moby/moby#40846
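
For background (not something this PR changes), a minimal sketch of selecting the systemd cgroup driver on the kubelet side, per the Kubernetes docs linked above:

    # Minimal KubeletConfiguration sketch: cgroupDriver picks the systemd driver,
    # the recommended driver on cgroupv2 hosts.
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd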

Member Author:

Shouldn't we also add CONTAINERD_COS_CGROUP_MODE: 'v2' to the env? Or is it set by default in COS 101?

Contributor:

@akhilerm I think it's fine not to set CONTAINERD_COS_CGROUP_MODE: 'v2', as cgroupv2 is enabled by default in COS 101.

@bobbypage @mikebrow So this pull job will cover cgroupv2 only; do we need to add an additional pull job for cgroupv1?

Member:

Yes, it's fine to not set CONTAINERD_COS_CGROUP_MODE; it is only needed to force COS to enable cgroupv2, but since cgroupv2 is already enabled starting from COS 97, this is unnecessary.


   - name: pull-containerd-sandboxed-node-e2e
     always_run: false
     max_concurrency: 8