
feat(storage): only update related read version on flush finish #16725

Merged

wenym1 merged 5 commits into main from yiming/staging-sst-shard-imm on May 16, 2024

Conversation

Contributor

@wenym1 wenym1 commented May 13, 2024

I hereby agree to the terms of the RisingWave Labs, Inc. Contributor License Agreement.

What's changed and what's your intention?

Currently, in the event handler, when a flush task finishes, we need to update all read version instances, even if the flush task does not include any imm from a given read version. In this PR, we instead store the mapping from instance id to its included imms in the task output, so that we know which instances should be updated and avoid updating all of them.
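
Conceptually, the task output could carry a per-instance mapping like the following minimal Rust sketch (the type and field names here are hypothetical, not the actual names in the hummock event handler):

```rust
use std::collections::HashMap;

// Hypothetical aliases for illustration only.
type LocalInstanceId = u64;
type ImmId = u64;

/// Sketch of a flush task output that records, for each read version
/// instance, which of its imms were included in the flushed data.
struct FlushTaskOutputSketch {
    /// Only the instances that appear as keys here need to have their
    /// read version updated when the flush task finishes.
    imms_by_instance: HashMap<LocalInstanceId, Vec<ImmId>>,
}
```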

Besides, the algorithm to update each read version can be simplified. Previously, we needed to do a time-consuming sanity check when updating the read version. After this PR, since the input imms of each read version shard are kept separate, when we update a read version we only need to check whether the input imms sit at the end of the imm queue, which greatly simplifies the check.
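
As a rough illustration of the simplified check (assuming the imm queue is ordered from newest to oldest, so the flushed imms form its suffix; names are hypothetical):

```rust
type ImmId = u64;

/// Sketch only: verify that the flushed imms are exactly the oldest entries
/// of this shard's imm queue, i.e. the suffix of the queue.
fn flushed_imms_at_queue_end(imm_queue: &[ImmId], flushed: &[ImmId]) -> bool {
    imm_queue.len() >= flushed.len()
        && imm_queue[imm_queue.len() - flushed.len()..] == *flushed
}
```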

The implementation of for_each_read_version is also modified. Previously, we would block waiting for one lock acquisition, even when the locks of other read versions were already available. In this PR, in the first round we only call try_write on each read version. If try_write succeeds, we do the work; otherwise, we add the instance id to a queue. We then keep trying to acquire the write lock for the instance at the queue front for at most 10 milliseconds; if that fails, we move it to the queue end and continue trying to acquire the write lock for the next item.
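
The locking strategy described above can be sketched roughly as follows, using std::sync::RwLock as a stand-in for the real read version type (the function and parameter names are illustrative, not the actual for_each_read_version signature):

```rust
use std::collections::VecDeque;
use std::sync::RwLock;
use std::time::{Duration, Instant};

/// Sketch of the two-phase acquisition: try_write everything first, then
/// retry the stragglers with a bounded per-item wait.
fn for_each_read_version_sketch<T>(
    read_versions: &[(u64, RwLock<T>)],
    mut work: impl FnMut(u64, &mut T),
) {
    let mut pending = VecDeque::new();

    // Round 1: non-blocking try_write, so a busy read version never
    // delays the others.
    for (instance_id, lock) in read_versions {
        match lock.try_write() {
            Ok(mut guard) => work(*instance_id, &mut *guard),
            Err(_) => pending.push_back((*instance_id, lock)),
        }
    }

    // Round 2: keep retrying the front item for at most ~10 ms, then rotate
    // it to the back and move on (assumes locks are eventually released).
    while let Some((instance_id, lock)) = pending.pop_front() {
        let deadline = Instant::now() + Duration::from_millis(10);
        let mut done = false;
        while Instant::now() < deadline {
            if let Ok(mut guard) = lock.try_write() {
                work(instance_id, &mut *guard);
                done = true;
                break;
            }
        }
        if !done {
            pending.push_back((instance_id, lock));
        }
    }
}
```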

The merge imm code is removed along the way in this PR, because if we keep merged imms, we need to maintain the relative order of normal imms and merged imms, which unnecessarily increases the complexity.

Checklist

  • I have written necessary rustdoc comments
  • I have added necessary unit tests and integration tests
  • I have added test labels as necessary. See details.
  • I have added fuzzing tests or opened an issue to track them. (Optional, recommended for new SQL features; see Sqlsmith: Sql feature generation #7934.)
  • My PR contains breaking changes. (If it deprecates some features, please create a tracking issue to remove them in the future).
  • All checks passed in ./risedev check (or alias, ./risedev c)
  • My PR changes performance-critical code. (Please run macro/micro-benchmarks and show the results.)
  • My PR contains critical fixes that are necessary to be merged into the latest release. (Please check out the details)

Documentation

  • My PR needs documentation updates. (Please use the Release note section below to summarize the impact on users)

Release note

If this PR includes changes that directly affect users or other significant modifications relevant to the community, kindly draft a release note to provide a concise summary of these changes. Please prioritize highlighting the impact these changes will have on users.

@StrikeW
Contributor

StrikeW commented May 15, 2024

The merge imm code is removed along the way in this PR, because if we keep merged imms, we need to maintain the relative order of normal imms and merged imms, which unnecessarily increases the complexity.

IIRC we set checkpoint_interval to 1 by default, but when a memtable spill occurs, a state table would still generate multiple imms within a checkpoint epoch. Do you mean we don't need the optimization for the multiple-imm case? Or is there another optimization that can handle it?

@wenym1
Contributor Author

wenym1 commented May 15, 2024

The merge imm code is removed along the way in this PR, because if we keep merged imms, we need to maintain the relative order of normal imms and merged imms, which unnecessarily increases the complexity.

IIRC we set checkpoint_interval to 1 by default, but when a memtable spill occurs, a state table would still generate multiple imms within a checkpoint epoch. Do you mean we don't need the optimization for the multiple-imm case? Or is there another optimization that can handle it?

When a spill happens, it is likely to trigger a shared buffer compaction that merges multiple imms into SSTs. This serves a similar purpose to merge imm.

Contributor

@Li0k Li0k left a comment

Remember to deprecate the config related to "merge imm". Rest LGTM.

@wenym1 wenym1 enabled auto-merge May 16, 2024 10:00
@wenym1 wenym1 added this pull request to the merge queue May 16, 2024
Merged via the queue into main with commit 8a98b85 May 16, 2024
27 of 28 checks passed
@wenym1 wenym1 deleted the yiming/staging-sst-shard-imm branch May 16, 2024 11:03
@kwannoel kwannoel added the need-cherry-pick-release-1.9 Open a cherry-pick PR to branch release-1.8 after the current PR is merged label May 24, 2024
Labels
need-cherry-pick-release-1.9 (Open a cherry-pick PR to branch release-1.8 after the current PR is merged)
type/feature