
scrubber: add scan-metadata and hook into integration tests #5176

Merged
jcsp merged 25 commits into main from jcsp/scrubber-scan-metadata on Sep 6, 2023

Conversation


@jcsp (Contributor) commented Sep 1, 2023

Problem

  • Scrubber's tidy command requires the presence of a control plane
  • Scrubber has no tests at all

Summary of changes

  • Add re-usable async streams for reading metadata from a bucket
  • Add a scan-metadata command that reads from those streams and calls the existing checks.rs code to validate metadata, then returns a summary struct for the bucket. The command returns a nonzero status if errors are found (see the sketch after this list).
  • Add an enable_scrub_on_exit() function to NeonEnvBuilder so that tests using remote storage can request to have the scrubber run after they finish
  • Enable remote storage and scrub_on_exit in test_pageserver_restart and test_pageserver_chaos
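
To make the shape of scan-metadata concrete, here is a minimal sketch. The names `MetadataSummary`, `TimelineCheck`, and `scan_metadata` are invented for illustration and are not necessarily the identifiers used in `s3_scrubber`:

```rust
use futures::{pin_mut, Stream, StreamExt};

/// Hypothetical result of running the existing checks.rs validation
/// against one timeline's objects in the bucket.
struct TimelineCheck {
    errors: Vec<String>,
}

/// Hypothetical "summary struct for the bucket" that scan-metadata returns.
#[derive(Default)]
struct MetadataSummary {
    timeline_count: usize,
    error_count: usize,
}

impl MetadataSummary {
    fn update(&mut self, check: &TimelineCheck) {
        self.timeline_count += 1;
        self.error_count += check.errors.len();
    }

    /// The CLI layer derives its exit status from this.
    fn is_fatal(&self) -> bool {
        self.error_count > 0
    }
}

/// Fold per-timeline check results from the listing stream into one summary.
async fn scan_metadata(checks: impl Stream<Item = TimelineCheck>) -> MetadataSummary {
    pin_mut!(checks);
    let mut summary = MetadataSummary::default();
    while let Some(check) = checks.next().await {
        summary.update(&check);
    }
    summary
}
```

The CLI entry point would then exit nonzero when `summary.is_fatal()`, matching the behaviour described above.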

This is a "toe in the water" of the overall space of validating the scrubber. Later, we should:

  • Enable scrubbing at the end of tests using remote storage by default
  • Make the success condition stricter than "no errors": tests should declare which tenants and timelines they expect to see in the bucket (or sniff these from the functions tests use to create them), and the scrubber should be required to report on those particular tenants/timelines.

The tidy command is untouched in this PR, but it should be refactored later to use a similar async streaming interface instead of the current batch-reading approach (the streams are faster with large buckets), and to also be covered by some tests.

Checklist before requesting a review

  • I have performed a self-review of my code.
  • If it is a core feature, I have added thorough tests.
  • Do we need to implement analytics? If so, did you add the relevant metrics to the dashboard?
  • If this PR requires public announcement, mark it with /release-notes label and add several sentences in this section.

Checklist before merging

  • Do not forget to reformat the commit message so that it does not include the above checklist

This provides the same analysis done at the end of
`tidy`, but as a standalone command that uses Stream-based
listing helpers with parallel execution. This gives a
faster result when one is interested in the contents
of a bucket but does not want to check the control plane
state to learn which items in the bucket correspond
to active (non-deleted) tenants/timelines.
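
The "parallel execution" mentioned here can be pictured with `futures::StreamExt::buffer_unordered`, which keeps a bounded number of S3 requests in flight while preserving the convenience of a plain stream. The helper names and types below are made up for this sketch:

```rust
use futures::{Stream, StreamExt};

// Invented types for the sketch.
struct TimelineId(String);
struct TimelineMetadata; // e.g. index_part.json contents plus a layer listing

async fn fetch_metadata(_id: TimelineId) -> TimelineMetadata {
    // The real listing helpers would issue S3 LIST/GET requests here.
    TimelineMetadata
}

/// Turn a stream of timeline ids into a stream of their metadata, keeping up
/// to `concurrency` fetches in flight at once.
fn fetch_all(
    ids: impl Stream<Item = TimelineId>,
    concurrency: usize,
) -> impl Stream<Item = TimelineMetadata> {
    ids.map(fetch_metadata).buffer_unordered(concurrency)
}
```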
@jcsp added the a/tech_debt (Area: related to tech debt) and c/storage/scrubber (Component: s3_scrubber) labels on Sep 1, 2023
github-actions bot commented Sep 1, 2023

1624 tests run: 1551 passed, 0 failed, 73 skipped (full report)


Code coverage full report

  • functions: 53.4% (7464 of 13988 functions)
  • lines: 81.5% (44078 of 54052 lines)

The comment gets automatically updated with the latest test results.
8ac049b at 2023-09-06T10:25:28.243Z

jcsp and others added 11 commits September 1, 2023 17:14
Co-authored-by: Joonas Koivunen <joonas@neon.tech>
## Problem

Currently, the `deploy` job doesn't wait for the custom extension job
(in another repo) and can start even if the extension build failed.
This PR adds another job that polls the status of the extension build job
and fails if that build fails.

## Summary of changes
- Add `wait-for-extensions-build` job, which waits for a custom
extension build in another repo.
…ime (#5177)

The `remote_timeline_client` tests use `#[tokio::test]` and rely on the
fact that the test runtime that is set up by this macro is
single-threaded.

In PR #5164, we observed
interesting flakiness with the `upload_scheduling` test case:
it would observe the upload of the third layer (`layer_file_name_3`)
before we did `wait_completion`.

Under the single-threaded-runtime assumption, that wouldn't be possible,
because the test code doesn't await in between scheduling the upload
and calling `wait_completion`.

However, RemoteTimelineClient was actually using `BACKGROUND_RUNTIME`.
That means there was parallelism where the tests didn't expect it,
leading to flakiness such as execution of an UploadOp task before
the test calls `wait_completion`.

The most confusing scenario is code like this:

```
schedule_upload(A);
wait_completion.await; // B
schedule_upload(C);
wait_completion.await; // D
```

On a single-threaded executor, it is guaranteed that the upload of C
doesn't run before D, because we (the test) don't relinquish control
to the executor before D's `await` point.

However, RemoteTimelineClient actually scheduled onto the
BACKGROUND_RUNTIME, so, `A` could start running before `B` and
`C` could start running before `D`.

This would cause flaky tests when making assertions about the state
manipulated by the operations. The concrete issue that led to the discovery
of this bug was an assertion about `remote_fs_dir` state in #5164.
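
The guarantee (and how it breaks) can be shown with a tiny, purely illustrative test. This is not the RemoteTimelineClient code, just a demonstration of current-thread runtime scheduling:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

#[tokio::test] // defaults to the current-thread runtime, like the tests above
async fn spawned_task_cannot_run_before_an_await_point() {
    let started = Arc::new(AtomicBool::new(false));
    let flag = Arc::clone(&started);

    tokio::spawn(async move {
        flag.store(true, Ordering::SeqCst);
    });

    // We have not hit an `.await` since spawning, so on a current-thread
    // runtime the spawned task cannot have run yet. If the task had been
    // handed to a separate multi-thread runtime instead (as happened with
    // BACKGROUND_RUNTIME), it could already be running here and this
    // assertion would be flaky.
    assert!(!started.load(Ordering::SeqCst));
}
```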
…#5086)

This RFC describes a simple scheme to make layer map updates crash-consistent
by leveraging the index_part.json in remote storage. Without
such a mechanism, crashes can induce certain edge cases in which broadly
held assumptions about system invariants don't hold.
For #5086 we will require remote storage to be configured in the pageserver.

This PR enables `localfs`-based storage for all Rust unit tests.

Changes:

- In `TenantHarness`, set up localfs remote storage for the tenant.
- `create_test_timeline` should mimic what real timeline creation does,
and real timeline creation waits for the timeline to reach remote
storage. With this PR, `create_test_timeline` now does that as well.
- All the places that create the harness tenant twice need to shut down
the tenant before re-creating it through a second call to `try_load` or
`load`.
- Without shutting down, upload tasks initiated by/through the first
incarnation of the harness tenant might still be ongoing when the second
incarnation is `try_load`/`load`ed. That doesn't make sense for the tests
that do this: they generally try to set up a scenario similar to a
pageserver stop & start.
- There was one test that recreates a timeline, not the tenant. For that
case, I needed to create a `Timeline::shutdown` method. It's a
refactoring of the existing `Tenant::shutdown` method.
- The remote_timeline_client tests previously set up their own
`GenericRemoteStorage` and `RemoteTimelineClient`. Now they re-use the
one that's pre-created by the TenantHarness. Some adjustments to the
assertions were needed because they now have to account for the initial
image layer created by `create_test_timeline`.
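
Schematically, the shutdown-before-recreate pattern from the list above looks like this; the types and signatures are simplified stand-ins, not the real `TenantHarness`/`Tenant` API:

```rust
// Simplified stand-ins for the sketch.
struct TenantHarness;
struct Tenant;

impl TenantHarness {
    async fn try_load(&self) -> Tenant {
        Tenant
    }
}

impl Tenant {
    /// Wait for in-flight upload tasks before the tenant goes away,
    /// mimicking a pageserver stop.
    async fn shutdown(&self) {}
}

async fn recreate_harness_tenant(harness: &TenantHarness) {
    let first = harness.try_load().await;
    // ... exercise the tenant, scheduling uploads to localfs remote storage ...

    // Without this, uploads started by `first` could still be running when
    // the second incarnation is loaded, which is not the stop & start
    // scenario these tests mean to simulate.
    first.shutdown().await;

    let _second = harness.try_load().await;
    // ... assertions against the re-loaded state ...
}
```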
## Problem

Tests using remote storage have manually entered `test_name` parameters,
which:
- Are easy to accidentally duplicate when copying code to make a new
test
- Omit parameters, so they don't actually create unique S3 buckets when
running many tests concurrently.

## Summary of changes

- Use the `request` fixture in the `neon_env_builder` fixture to get the test
name, then munge that into an S3-compatible bucket name (a rough sketch of
such munging follows this list).
- Remove the explicit `test_name` parameters to `enable_remote_storage`.
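
A sketch of the kind of munging involved. The real fixture is Python; this Rust version only illustrates the constraints (S3 bucket names are limited to lowercase letters, digits, and hyphens, at most 63 characters, and must start and end with a letter or digit), and the function name is invented:

```rust
/// Turn a pytest test name like "test_foo[release-pg15]" into something that
/// is acceptable as (part of) an S3 bucket name.
fn bucket_name_from_test_name(test_name: &str) -> String {
    let mut name: String = test_name
        .to_ascii_lowercase()
        .chars()
        .map(|c| if c.is_ascii_alphanumeric() { c } else { '-' })
        .collect();
    name.truncate(63); // S3 bucket names are at most 63 characters
    name.trim_matches('-').to_string() // and must start/end with a letter or digit
}

fn main() {
    assert_eq!(
        bucket_name_from_test_name("test_pageserver_restart[release-pg15]"),
        "test-pageserver-restart-release-pg15"
    );
}
```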
@jcsp marked this pull request as ready for review on September 4, 2023 09:22
@jcsp requested review from a team as code owners on September 4, 2023 09:22
@jcsp requested review from knizhnik and removed the request for a team on September 4, 2023 09:22
@jcsp requested a review from arpad-m on September 6, 2023 10:34
@jcsp merged commit 7439331 into main on Sep 6, 2023
33 checks passed
@jcsp deleted the jcsp/scrubber-scan-metadata branch on September 6, 2023 10:55
jcsp added a commit that referenced this pull request Oct 26, 2023
## Problem

The previous garbage cleanup functionality relied on doing a dry run,
inspecting logs, and then doing a deletion. This isn't ideal, because
what one actually deletes might not be the same as what one saw in the
dry run. It's also risky UX to rely on the presence or absence of a single
CLI flag to control deletion: ideally the deletion command should be totally
separate from the one that scans the bucket.

Related: #5037

## Summary of changes

This is a major re-work of the code, which results in a net decrease in
line count of about 600. The old code for removing garbage was built
around the idea of doing discovery and purging together: a
"delete_batch_producer" sent batches into a deleter. The new code writes
out both procedures separately, in functions that use the async streams
introduced in #5176 to achieve
fast concurrent access to S3 while retaining the readability of a single
function.

- Add `find-garbage`, which writes out a JSON file of tenants/timelines
to purge
- Add `purge-garbage` which consumes the garbage JSON file, applies some
extra validations, and does deletions.
- The purge command will refuse to execute if the garbage file indicates
that only garbage was found: this guards against classes of bugs where
the scrubber might incorrectly deem everything garbage.
- The purge command defaults to only deleting tenants that were found in
"deleted" state in the control plane. This guards against the risk that
using the wrong console API endpoint could cause all tenants to appear
to be missing. (Both safety checks are sketched after this list.)
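
The two safety checks could look roughly like this; the `GarbageList` shape, its field names, and the `PurgeMode` enum are invented for the sketch and may not match the real JSON format:

```rust
/// Invented for this sketch: a digest of what find-garbage wrote to its JSON file.
struct GarbageList {
    /// Tenants that find-garbage classified as garbage.
    garbage_tenants: Vec<String>,
    /// Tenants that were found to be live (not garbage).
    live_tenants: Vec<String>,
    /// The subset of garbage tenants the control plane reported as "deleted".
    deleted_in_control_plane: Vec<String>,
}

enum PurgeMode {
    /// Default: only purge tenants the control plane reported as deleted.
    DeletedOnly,
    /// Opt-in: also purge tenants the control plane has never heard of.
    All,
}

fn select_tenants_to_purge(list: &GarbageList, mode: PurgeMode) -> Result<Vec<String>, String> {
    // Refuse to run if *everything* was classified as garbage: that pattern is
    // more likely a scrubber bug (e.g. querying the wrong console endpoint)
    // than a bucket that really contains nothing but garbage.
    if list.live_tenants.is_empty() && !list.garbage_tenants.is_empty() {
        return Err("garbage list contains no live tenants; refusing to purge".to_string());
    }
    Ok(match mode {
        PurgeMode::DeletedOnly => list.deleted_in_control_plane.clone(),
        PurgeMode::All => list.garbage_tenants.clone(),
    })
}
```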

Outstanding work for a future PR:
- Make whatever changes are needed to adapt to the Console/Control Plane
separation.
- Make purge even safer by checking S3 `Modified` times for
index_part.json files (not doing this here, because it will depend on
the generation-aware changes for finding index_part.json files)


---------

Co-authored-by: Arpad Müller <arpad-m@users.noreply.github.com>
Co-authored-by: Shany Pozin <shany@neon.tech>