
Switch manager tests to run on singleDC environment #7435

Merged
merged 2 commits into from
May 15, 2024

Conversation

mikliapko
Contributor

@mikliapko mikliapko commented May 14, 2024

Closes scylladb/scylla-manager#3850

Should be merged together with #7365

Since there is an issue with multiDC cluster restore when EaR (Encryption at Rest) is turned on (scylladb/scylla-manager#3829), it was decided to:

  • switch the tests to use a single-DC cluster;
  • leave one test with a multiDC cluster for enterprise version 2022 (which has no encryption feature implemented).

After scylladb/scylla-manager#3829 is resolved, the multiDC cluster configuration will be brought back.

Testing

PR pre-checks (self review)

  • I added the relevant backport labels
  • I didn't leave commented-out/debugging code

Reminders

  • Add new configuration options and document them (in sdcm/sct_config.py)
  • Add unit tests to cover my changes (under the unit-test/ folder)
  • Update the Readme/doc folder relevant to this change (if needed)

Since there is an issue with multiDC cluster restore when EaR is
turned on (scylladb/scylla-manager#3829),
it was decided to temporarily switch most of the jobs to run on a
singleDC cluster. Only one multiDC cluster job is left, for enterprise
version 2022, where EaR is not implemented.

The test is valid only for the multiDC configuration; otherwise, it
should be skipped.
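The skip behavior described in the second commit can be sketched as below. This is an illustrative example, not the actual SCT code: `HealthcheckTimeoutTest` and `cluster_regions` are hypothetical stand-ins for the real test class and cluster configuration.

```python
import unittest


class HealthcheckTimeoutTest(unittest.TestCase):
    """Sketch: a test that is only valid on a multiDC cluster skips
    itself when the cluster spans a single region.

    `cluster_regions` is a hypothetical stand-in for the real cluster
    configuration read by SCT.
    """

    cluster_regions = ["us-west-2"]  # singleDC: only one region configured

    def test_healthcheck_change_max_timeout(self):
        # Fewer than two regions means singleDC, so the test is not valid.
        if len(self.cluster_regions) < 2:
            self.skipTest("Test is valid only for multiDC configuration")
        # multiDC-specific assertions would follow here
```

With `unittest`, `skipTest()` marks the test as skipped rather than passed or failed, so CI reports stay honest about what actually ran.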
@@ -5,9 +5,9 @@ def lib = library identifier: 'sct@snapshot', retriever: legacySCM(scm)

 managerPipeline(
     backend: 'aws',
-    region: '''["us-east-1", "us-west-2"]''',
+    region: 'us-west-2',
Contributor

Is it on purpose that you're putting each case in a different region?

(If you can, you could even use a random region; I fixed that to work a while back.)

Contributor Author

I randomly chose one of the regions that were supported before in the multiDC runs: "us-east-1" or "us-west-2"

Contributor

Just to explain: setting the region to random can help a bit with spot-capacity issues, since the tests would spread across all of the supported regions instead of all running in the exact same ones.
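The random-region idea from this comment can be sketched as a small helper. This is a hedged illustration: `SUPPORTED_REGIONS` and `pick_region` are hypothetical names, and in SCT the real region list would come from the job/pipeline configuration rather than a hardcoded constant.

```python
import random

# Hypothetical list of supported regions; in SCT the real list would
# come from the job/pipeline configuration.
SUPPORTED_REGIONS = ["us-east-1", "us-west-2", "eu-west-1"]


def pick_region(seed=None):
    """Pick one supported region at random, so concurrent jobs spread
    across regions instead of all competing for spot capacity in one.

    An optional seed makes the choice reproducible for a given run.
    """
    return random.Random(seed).choice(SUPPORTED_REGIONS)
```

Seeding with, say, the build number would keep a rerun of the same job in the same region while still spreading different jobs across regions.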

Contributor

@fruch fruch left a comment

LGTM

@@ -349,9 +349,8 @@ def test_manager_sanity(self):
         self.test_mgmt_cluster_crud()
         with self.subTest('Mgmt cluster Health Check'):
             self.test_mgmt_cluster_healthcheck()
-        # test_healthcheck_change_max_timeout requires a multi dc run. And since ipv6 cannot run in multi dc, this test
Contributor

Good that this comment is going away, because the IPv6 part of it was always wrong: we can use IPv6 on AWS in multi-DC cases.

@fruch fruch merged commit 03155c1 into scylladb:master May 15, 2024
6 checks passed

Successfully merging this pull request may close these issues.

[SCT] Switch multiDC cluster tests to run on singleDC cluster