
cleanup: remove cluster provisioning from e2e binary #3254

Merged

Conversation

@perdasilva (Collaborator) commented May 14, 2024

Description of the change:
Previously, the e2e test suite would programmatically provision a kind cluster and helm-install OLM on it. This allowed the e2e test suite, which is not stable when executed in parallel against a single cluster, to run in parallel, because each test worker got its own cluster.

Removing the cluster provisioning code means that, to keep the parallel execution time gains when running the e2es, I've kept the same strategy of one cluster per test node. Therefore, this PR also:

  • Refactors the e2e job into more of a pipeline so we're not re-building binaries and images
  • Moves the flaky e2e test job into the e2e job
  • Simplifies the Makefile by moving the e2e spec chunking into the e2e job
  • Adds a helm-based deploy target
  • Refactors the e2e job to provision the test clusters
  • Adds a kubeconfig-root flag to e2e_test: the directory where the test runners can find the kubeconfig for their cluster (see the first sketch after this list)
  • Adds kind to tools.go (see the second sketch after this list)
  • Removes the kind provisioning and helm installation from the e2e tests
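
The diff itself isn't shown in this excerpt, so here is a minimal sketch of how a kubeconfig-root flag might be consumed by the test binary, assuming client-go is used to build the REST config; the per-process file naming scheme (kubeconfig-<n>) and the helper name restConfigForProcess are hypothetical, for illustration only:

```go
package e2e

import (
	"flag"
	"fmt"
	"path/filepath"

	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// --kubeconfig-root points at a directory containing one kubeconfig per
// parallel test process (one provisioned cluster per test node).
var kubeconfigRoot = flag.String("kubeconfig-root", "",
	"directory where the test runners can find the kubeconfig for their cluster")

// restConfigForProcess (hypothetical helper) loads the kubeconfig assigned to
// the given parallel test process and returns a client-go REST config.
// The kubeconfig-<n> file layout is an assumption, not taken from the PR.
func restConfigForProcess(process int) (*rest.Config, error) {
	path := filepath.Join(*kubeconfigRoot, fmt.Sprintf("kubeconfig-%d", process))
	cfg, err := clientcmd.BuildConfigFromFlags("", path)
	if err != nil {
		return nil, fmt.Errorf("loading kubeconfig %q: %w", path, err)
	}
	return cfg, nil
}
```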
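Adding kind to tools.go presumably follows the standard Go "tools file" pattern, where build-time tool dependencies are blank-imported behind a build tag so go.mod pins their versions. A sketch of that pattern, assuming the kind CLI's main package at sigs.k8s.io/kind is the import used:

```go
//go:build tools
// +build tools

// Package tools records build-time tool dependencies so that `go mod`
// tracks and pins their versions alongside the project's own dependencies.
package tools

import (
	_ "sigs.k8s.io/kind"
)
```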

Motivation for the change:
The e2e tests shouldn't care about provisioning their own clusters, etc. They should just be given a kubeconfig and a running cluster.

Architectural changes:

Testing remarks:

Reviewer Checklist

  • Implementation matches the proposed design, or proposal is updated to match implementation
  • Sufficient unit test coverage
  • Sufficient end-to-end test coverage
  • Bug fixes are accompanied by regression test(s)
  • e2e tests and flake fixes are accompanied by evidence of flake testing, e.g. executing the test 100(0) times
  • tech debt/todo is accompanied by issue link(s) in comments in the surrounding code
  • Tests are comprehensible, e.g. Ginkgo DSL is being used appropriately
  • Docs updated or added to /doc
  • Commit messages sensible and descriptive
  • Tests marked as [FLAKE] are truly flaky and have an issue
  • Code is properly formatted

The openshift-ci bot added the do-not-merge/work-in-progress label (indicates that a PR should not merge because it is a work in progress) on May 14, 2024.
The openshift-ci bot requested review from anik120 and oceanc80 on May 14, 2024 09:58.
@perdasilva force-pushed the experiment/parallel_jobs branch 27 times, most recently from ce15c7a to 0e304e5, on May 15, 2024 14:54
@perdasilva force-pushed the experiment/parallel_jobs branch 11 times, most recently from 92e1043 to d1f0520, on May 17, 2024 08:44
Per Goncalves da Silva added 4 commits May 17, 2024 13:05
Signed-off-by: Per Goncalves da Silva <pegoncal@redhat.com>
kevinrizza previously approved these changes May 21, 2024
@kevinrizza added this pull request to the merge queue May 21, 2024
Merged via the queue into operator-framework:master with commit d8ee29d May 21, 2024
12 checks passed