scrubber: add scan-metadata and hook into integration tests #5176 (Merged)

Conversation
This provides the same analysis done at the end of `tidy`, but in a standalone command that uses Stream-based listing helpers with parallel execution to provide a faster result when one is interested in the contents of a bucket, but does not want to check the control plane state to learn which items in the bucket correspond to active/non-deleted tenants/timelines.
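As a rough illustration of the "Stream-based listing helpers with parallel execution" pattern described above, here is a minimal sketch (not the PR's actual helpers) assuming the `futures` and `tokio` crates; `list_timelines_for_tenant` and the tenant IDs are placeholders standing in for real S3 listing calls:

```rust
use futures::stream::{self, StreamExt};

// Stand-in for paging through S3 keys under a tenant prefix.
async fn list_timelines_for_tenant(tenant: String) -> Vec<String> {
    vec![format!("{tenant}/timeline-0"), format!("{tenant}/timeline-1")]
}

#[tokio::main]
async fn main() {
    let tenants = vec!["tenant-a".to_string(), "tenant-b".to_string()];

    // Turn the tenant list into a stream and run up to 16 listings concurrently,
    // yielding each result as soon as it completes.
    let timelines: Vec<String> = stream::iter(tenants)
        .map(list_timelines_for_tenant)
        .buffer_unordered(16)
        .flat_map(stream::iter)
        .collect()
        .await;

    println!("found {} timelines", timelines.len());
}
```

`buffer_unordered` bounds the number of in-flight listing requests while still letting results arrive out of order, which is what makes this style much faster than sequential batch reads on large buckets.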
jcsp added the a/tech_debt (Area: related to tech debt) and c/storage/scrubber (Component: s3_scrubber) labels on Sep 1, 2023
koivunej reviewed Sep 1, 2023
1624 tests run: 1551 passed, 0 failed, 73 skipped (full report)

Code coverage: full report

The comment gets automatically updated with the latest test results.

8ac049b at 2023-09-06T10:25:28.243Z
problame reviewed Sep 1, 2023
bayandin reviewed Sep 1, 2023
Co-authored-by: Joonas Koivunen <joonas@neon.tech>
## Problem

Currently, the `deploy` job doesn't wait for the custom extension job (in another repo) and can be started even if the extensions build failed. This PR adds another job that polls the status of the extension build job and fails if the extension build fails.

## Summary of changes

- Add a `wait-for-extensions-build` job, which waits for a custom extension build in another repo.
…ime (#5177)

The `remote_timeline_client` tests use `#[tokio::test]` and rely on the fact that the test runtime set up by this macro is single-threaded. In PR #5164, we observed interesting flakiness with the `upload_scheduling` test case: it would observe the upload of the third layer (`layer_file_name_3`) before we did `wait_completion`. Under the single-threaded-runtime assumption, that wouldn't be possible, because the test code doesn't await in between scheduling the upload and calling `wait_completion`.

However, RemoteTimelineClient was actually using `BACKGROUND_RUNTIME`. That means there was parallelism where the tests didn't expect it, leading to flakiness such as execution of an UploadOp task before the test calls `wait_completion`. The most confusing scenario is code like this:

```
schedule_upload(A);
wait_completion.await; // B
schedule_upload(C);
wait_completion.await; // D
```

On a single-threaded executor, it is guaranteed that the upload of C doesn't run before D, because we (the test) don't relinquish control to the executor before D's `await` point. However, RemoteTimelineClient actually scheduled onto the BACKGROUND_RUNTIME, so `A` could start running before `B` and `C` could start running before `D`. This would cause flaky tests when making assertions about the state manipulated by the operations.

The concrete issue that led to discovery of this bug was an assertion about `remote_fs_dir` state in #5164.
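To make the single-threaded-runtime assumption concrete, here is a minimal, hedged sketch (not the actual test code) of why ordering assertions hold on a current-thread tokio runtime: a spawned task cannot run until the test yields at an `.await` point.

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

fn main() {
    // Current-thread runtime: tasks are only polled when this thread awaits.
    let rt = tokio::runtime::Builder::new_current_thread()
        .enable_all()
        .build()
        .unwrap();

    rt.block_on(async {
        let ran = Arc::new(AtomicBool::new(false));
        let ran2 = ran.clone();

        // "Schedule upload": the task is queued, but cannot start yet because
        // we have not awaited since spawning it.
        let handle = tokio::spawn(async move { ran2.store(true, Ordering::SeqCst) });
        assert!(!ran.load(Ordering::SeqCst));

        // "wait_completion": awaiting hands control to the executor, which now
        // runs the queued task to completion.
        handle.await.unwrap();
        assert!(ran.load(Ordering::SeqCst));
    });
}
```

On a multi-threaded runtime, or when the client schedules onto a different runtime the way `BACKGROUND_RUNTIME` did, the first assertion can race with the spawned task — which is exactly the flakiness described above.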
…#5086)

This RFC describes a simple scheme to make layer map updates crash consistent by leveraging the index_part.json in remote storage. Without such a mechanism, crashes can induce certain edge cases in which broadly held assumptions about system invariants don't hold.
For [#5086](#5086 (comment)) we will require remote storage to be configured in pageserver. This PR enables `localfs`-based storage for all Rust unit tests.

Changes:

- In `TenantHarness`, set up localfs remote storage for the tenant.
- `create_test_timeline` should mimic what real timeline creation does, and real timeline creation waits for the timeline to reach remote storage. With this PR, `create_test_timeline` now does that as well.
- All the places that create the harness tenant twice need to shut down the tenant before the re-create through a second call to `try_load` or `load`.
  - Without shutting down, upload tasks initiated by/through the first incarnation of the harness tenant might still be ongoing when the second incarnation of the harness tenant is `try_load`/`load`ed. That doesn't make sense in the tests that do that; they generally try to set up a scenario similar to pageserver stop & start.
- There was one test that recreates a timeline, not the tenant. For that case, I needed to create a `Timeline::shutdown` method. It's a refactoring of the existing `Tenant::shutdown` method.
- The remote_timeline_client tests previously set up their own `GenericRemoteStorage` and `RemoteTimelineClient`. Now they re-use the one that's pre-created by the TenantHarness. Some adjustments to the assertions were needed because they now need to account for the initial image layer that `create_test_timeline` creates.
## Problem

#5162 (comment)
## Problem

Tests using remote storage have manually entered `test_name` parameters, which:

- Are easy to accidentally duplicate when copying code to make a new test
- Omit parameters, so they don't actually create unique S3 buckets when running many tests concurrently.

## Summary of changes

- Use the `request` fixture in the neon_env_builder fixture to get the test name, then munge that into an S3-compatible bucket name.
- Remove the explicit `test_name` parameters to enable_remote_storage.
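For illustration only, munging a test name into an S3-compatible bucket name could look like the sketch below (the function name and exact rules are assumptions, not the fixture's actual code). S3 bucket names must be lowercase, restricted to letters, digits, and hyphens, and 3–63 characters long.

```rust
// Hypothetical helper: sanitize a pytest test name into an S3 bucket name.
fn bucket_name_from_test_name(test_name: &str) -> String {
    // Lowercase and replace anything that is not [a-z0-9] with a hyphen.
    let mut name: String = test_name
        .to_ascii_lowercase()
        .chars()
        .map(|c| if c.is_ascii_alphanumeric() { c } else { '-' })
        .collect();
    // Enforce the 63-character maximum and drop leading/trailing hyphens.
    name.truncate(63);
    let name = name.trim_matches('-').to_string();
    // Bucket names must be at least 3 characters long.
    if name.len() < 3 {
        format!("test-{name}")
    } else {
        name
    }
}

fn main() {
    assert_eq!(
        bucket_name_from_test_name("test_remote_storage[release-pg15]"),
        "test-remote-storage-release-pg15"
    );
}
```

Deriving the name from the test identity this way keeps bucket names unique per test without anyone having to hand-maintain `test_name` strings.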
arpad-m reviewed Sep 4, 2023

arpad-m approved these changes Sep 6, 2023
jcsp added a commit that referenced this pull request on Oct 26, 2023
## Problem

The previous garbage cleanup functionality relied on doing a dry run, inspecting logs, and then doing a deletion. This isn't ideal, because what one actually deletes might not be the same as what one saw in the dry run. It's also risky UX to rely on presence/absence of one CLI flag to control deletion: ideally the deletion command should be totally separate from the one that scans the bucket.

Related: #5037

## Summary of changes

This is a major re-work of the code, which results in a net decrease in line count of about 600. The old code for removing garbage was built around the idea of doing discovery and purging together: a "delete_batch_producer" sent batches into a deleter. The new code writes out both procedures separately, in functions that use the async streams introduced in #5176 to achieve fast concurrent access to S3 while retaining the readability of a single function.

- Add `find-garbage`, which writes out a JSON file of tenants/timelines to purge
- Add `purge-garbage`, which consumes the garbage JSON file, applies some extra validations, and does deletions.
- The purge command will refuse to execute if the garbage file indicates that only garbage was found: this guards against classes of bugs where the scrubber might incorrectly deem everything garbage.
- The purge command defaults to only deleting tenants that were found in "deleted" state in the control plane. This guards against the risk that using the wrong console API endpoint could cause all tenants to appear to be missing.

Outstanding work for a future PR:

- Make whatever changes are needed to adapt to the Console/Control Plane separation.
- Make purge even safer by checking S3 `Modified` times for index_part.json files (not doing this here, because it will depend on the generation-aware changes for finding index_part.json files)

## Checklist before requesting a review

- [ ] I have performed a self-review of my code.
- [ ] If it is a core feature, I have added thorough tests.
- [ ] Do we need to implement analytics? if so did you add the relevant metrics to the dashboard?
- [ ] If this PR requires public announcement, mark it with /release-notes label and add several sentences in this section.

## Checklist before merging

- [ ] Do not forget to reformat commit message to not include the above checklist

---------

Co-authored-by: Arpad Müller <arpad-m@users.noreply.github.com>
Co-authored-by: Shany Pozin <shany@neon.tech>
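As a sketch of the "refuse to purge if only garbage was found" safety check described above, the garbage file and its validation could be modelled like this; the struct shape and field names are assumptions for illustration, not the scrubber's actual schema:

```rust
use serde::{Deserialize, Serialize};

// Hypothetical shape of the JSON file written by `find-garbage`.
#[derive(Serialize, Deserialize)]
struct GarbageList {
    /// Number of tenants the scan considered active (not garbage).
    active_tenant_count: usize,
    /// Tenant IDs that the control plane reported as deleted or missing.
    garbage_tenants: Vec<String>,
}

// Guard run by the purge step before any deletion happens.
fn validate_before_purge(list: &GarbageList) -> Result<(), String> {
    // If the scan classified everything as garbage, something is likely wrong
    // (e.g. querying the wrong control plane), so refuse to delete anything.
    if list.active_tenant_count == 0 && !list.garbage_tenants.is_empty() {
        return Err("refusing to purge: scan classified everything as garbage".into());
    }
    Ok(())
}

fn main() {
    let json = r#"{ "active_tenant_count": 0, "garbage_tenants": ["tenant-a"] }"#;
    let list: GarbageList = serde_json::from_str(json).unwrap();
    assert!(validate_before_purge(&list).is_err());
}
```

Separating the scan output into a reviewable file and validating it again at purge time is what removes the "dry run may not match the real deletion" risk called out in the problem statement.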
## Problem

The `tidy` command requires presence of a control plane.

## Summary of changes

- Add a `scan-metadata` command that reads from those streams and calls the existing `checks.rs` code to validate metadata, then returns a summary struct for the bucket. The command returns nonzero status if errors are found.
- Add an `enable_scrub_on_exit()` function to `NeonEnvBuilder` so that tests using remote storage can request to have the scrubber run after they finish.

This is a "toe in the water" of the overall space of validating the scrubber. Later, we should:
The `tidy` command is untouched in this PR, but it should be refactored later to use a similar async streaming interface instead of the current batch-reading approach (the streams are faster with large buckets), and to also be covered by some tests.
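A hedged sketch of the "return nonzero status if errors are found" behaviour of a metadata-scanning subcommand follows; the CLI shape, the `ScanSummary` fields, and the `scan_metadata` stub are illustrative assumptions, not the scrubber's real interface:

```rust
use clap::{Parser, Subcommand};

#[derive(Parser)]
struct Cli {
    #[command(subcommand)]
    command: Command,
}

#[derive(Subcommand)]
enum Command {
    /// Scan bucket metadata and summarize what was found.
    ScanMetadata,
}

// Stand-in for the summary struct returned for the bucket.
struct ScanSummary {
    timelines_scanned: usize,
    errors: Vec<String>,
}

fn scan_metadata() -> ScanSummary {
    // Real code would walk the bucket via the listing streams and run the checks.
    ScanSummary { timelines_scanned: 0, errors: Vec::new() }
}

fn main() {
    let cli = Cli::parse();
    match cli.command {
        Command::ScanMetadata => {
            let summary = scan_metadata();
            println!(
                "scanned {} timelines, {} errors",
                summary.timelines_scanned,
                summary.errors.len()
            );
            // Exit nonzero so the integration-test harness (scrub-on-exit) notices failures.
            if !summary.errors.is_empty() {
                std::process::exit(1);
            }
        }
    }
}
```

The nonzero exit code is what lets a test that opted into scrub-on-exit fail automatically when the scan finds metadata problems.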
## Checklist before requesting a review

## Checklist before merging