
feat(backfill): do not require at least 1 record to be read per epoch, if rate limit enabled #16744

Merged: 3 commits merged into main from kwannoel/rate-limit-pass on May 15, 2024

Conversation

@kwannoel (Contributor) commented on May 14, 2024

I hereby agree to the terms of the RisingWave Labs, Inc. Contributor License Agreement.

What's changed and what's your intention?

We have a requirement to read at least 1 record per barrier from the snapshot for backfill.

This means we are forced to have the following distribution in our stream:

barrier -> record -> barrier -> record -> barrier

As a result, the barrier latency is tied to the processing time of the records.
In cases where the processing time is long, e.g. a UDF with high latency, the barrier latency spikes as well.
In the associated test case, we show that when UDF calls have a 5s latency, the test takes a very long time to complete.
This occurs even when a rate limit is set, undermining its usefulness.

The solution is to check whether the rate limiter currently permits reading a record, before applying the read-at-least-1-record-per-epoch step.
If it doesn't, just skip the step for that epoch; we will eventually get to read a record once capacity frees up.
If it does, continue to apply the step as before.
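
A minimal Rust sketch of this control flow (not RisingWave's actual code; `RateLimiter`, `check`, `consume`, and the snapshot-read helper are hypothetical stand-ins):

```rust
// Hypothetical token-bucket limiter; `check()` is non-blocking and only
// reports whether one read is currently permitted.
struct RateLimiter {
    tokens: u32,
}

impl RateLimiter {
    fn check(&self) -> bool {
        self.tokens > 0
    }

    fn consume(&mut self) {
        self.tokens -= 1;
    }
}

// Hypothetical stand-in for reading one record from the backfill snapshot.
fn read_one_snapshot_record() -> Option<&'static str> {
    Some("record")
}

// Runs on the barrier when no snapshot record has been read in this epoch.
// Before this PR the read was unconditional; now the rate limiter is
// consulted first.
fn force_read_if_permitted(limiter: &mut RateLimiter) -> Option<&'static str> {
    if !limiter.check() {
        // Rate limited: skip the forced read for this epoch. The barrier
        // passes through immediately; a later epoch with free capacity
        // still reads a record, so progress is deferred, not lost.
        return None;
    }
    let record = read_one_snapshot_record()?;
    limiter.consume();
    Some(record)
}

fn main() {
    let mut limiter = RateLimiter { tokens: 1 };
    assert!(force_read_if_permitted(&mut limiter).is_some()); // capacity free: read
    assert!(force_read_if_permitted(&mut limiter).is_none()); // exhausted: skip
}
```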

Checklist

  • I have written necessary rustdoc comments
  • I have added necessary unit tests and integration tests
  • I have added test labels as necessary. See details.
  • I have added fuzzing tests or opened an issue to track them. (Optional, recommended for new SQL features Sqlsmith: Sql feature generation #7934).
  • My PR contains breaking changes. (If it deprecates some features, please create a tracking issue to remove them in the future).
  • All checks passed in ./risedev check (or alias, ./risedev c)
  • My PR changes performance-critical code. (Please run macro/micro-benchmarks and show the results.)
  • My PR contains critical fixes that are necessary to be merged into the latest release. (Please check out the details)

Documentation

  • My PR needs documentation updates. (Please use the Release note section below to summarize the impact on users)

Release note

If this PR includes changes that directly affect users or other significant modifications relevant to the community, kindly draft a release note to provide a concise summary of these changes. Please prioritize highlighting the impact these changes will have on users.

@kwannoel added the need-cherry-pick-release-1.9 label ("Open a cherry-pick PR to branch release-1.8 after the current PR is merged") on May 14, 2024
@fuyufjh (Contributor) commented on May 14, 2024

I didn't get it.

The motivation for requiring at least 1 record to be read per epoch is to prevent livelock when scanning from storage (#12780).

This is so we don't lose tombstone-iteration progress for that epoch.

The problem exists regardless of whether the rate limit is enabled or not. The two are unrelated, so I don't understand why they are being associated.

@fuyufjh (Contributor) commented on May 14, 2024

Let me guess: you met a case where even 1 record per epoch is unacceptable. If so, I believe we should find a better implementation for #12780, such as holding a delete-tomb-aware iterator, or something like that.

Fixing a new problem while creating a known problem sounds bad to me.

@kwannoel (Contributor, Author) commented on May 14, 2024

> Let me guess: you met a case where even 1 record per epoch is unacceptable. If so, I believe we should find a better implementation for #12780, such as holding a delete-tomb-aware iterator, or something like that.
>
> Fixing a new problem while creating a known problem sounds bad to me.

This PR does not lead to any regression. Consider the case where we don't read at least 1 record: that happens only when the rate limiter tells us we have hit the snapshot read rate limit and should not read a new record.

That's totally fine, because when rate-limit capacity subsequently frees up, we apply the requirement of 1 record per epoch again. The key point is that we still always read at least one row within the rate-limit window; we didn't get rid of the requirement, we just generalized it from one row per epoch.

1 record per epoch is unacceptable in cases where the latency of processing a single record is high. Applying a rate limit in this scenario still can't change the distribution of the stream.
After this PR, however, with a rate limit set we can make the distribution look like this instead:

barrier -> record -> barrier -> barrier

The frequency of records now respects the rate limit, rather than being forced to at least 1 per barrier.
At the same time, we still eventually read at least 1 record, on an epoch when there's free capacity in the rate limiter.
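
To illustrate, here is a toy simulation of that timeline (the two-epochs-per-permit refill rate and all names are made up for illustration, not taken from the codebase):

```rust
fn main() {
    let mut tokens: u32 = 0;
    let mut events: Vec<&str> = Vec::new();

    for epoch in 0..4 {
        // Toy rate limit: one read permit becomes available every 2 epochs.
        if epoch % 2 == 0 {
            tokens += 1;
        }
        events.push("barrier");
        if tokens > 0 {
            // Capacity is free: the at-least-1-record read happens.
            tokens -= 1;
            events.push("record");
        }
        // Otherwise the forced read is skipped and the barrier passes through.
    }

    // Prints: barrier -> record -> barrier -> barrier -> record -> barrier
    println!("{}", events.join(" -> "));
}
```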

The provided test case will take an extremely long time without the fix in this PR.

For sure we can have that ("holding a delete-tomb-aware iterator, or something like that").
But it is much more complex than the current solution, which already fixes the issue without causing any regression.

@lmatz (Contributor) commented on May 14, 2024

Does 1.9.0 need to wait for this, or is including it in a minor version also ok?

@kwannoel (Contributor, Author) replied:

> Does 1.9.0 need to wait for this, or is including it in a minor version also ok?

Minor version is ok.

@chenzl25 (Contributor) left a comment

LGTM. IIUC, the rate limiter is always checked before the at-least-1-row-per-barrier read, so only rate_limit = 0 will stop that read. I think that is acceptable, because rate_limit = 0 means we want to throttle the streaming DAG entirely.
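
A sketch of this case analysis (the function, its signature, and the `Option<u32>` encoding, with `None` meaning no limit configured and `Some(0)` meaning paused, are assumptions for illustration):

```rust
// Whether the "at least 1 row per barrier" snapshot read should fire.
// `rate_limit`: None = no limit configured, Some(0) = stream paused,
// Some(n) = at most n rows per unit time. `has_capacity` is what a
// non-blocking limiter check would report when the limit is Some(n > 0).
fn force_read_this_epoch(rate_limit: Option<u32>, has_capacity: bool) -> bool {
    match rate_limit {
        None => true,            // no limiter: behavior unchanged
        Some(0) => false,        // deliberately paused: never force a read
        Some(_) => has_capacity, // limited: force the read only when permitted
    }
}

fn main() {
    assert!(force_read_this_epoch(None, false));        // no rate limit: always read
    assert!(!force_read_this_epoch(Some(0), false));    // paused: never read
    assert!(force_read_this_epoch(Some(100), true));    // capacity free: read
    assert!(!force_read_this_epoch(Some(100), false));  // throttled: skip this epoch
}
```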

@kwannoel added this pull request to the merge queue on May 15, 2024
Merged via the queue into main with commit 34c732a May 15, 2024
27 of 28 checks passed
@kwannoel deleted the kwannoel/rate-limit-pass branch on May 15, 2024 09:04
github-actions bot pushed a commit that referenced this pull request May 15, 2024
github-merge-queue bot pushed a commit that referenced this pull request May 15, 2024: …, if rate limit enabled (#16744) (#16769)

Co-authored-by: Noel Kwan <47273164+kwannoel@users.noreply.github.com>