
Configure receive window per connection #1386

Merged

Conversation

lijunwangs
Contributor

Problem:
Currently the receive_window on the server side is obtained from the ServerConfig stored in the Endpoint, so all connections share the same receive_window, which makes per-connection data limits difficult.

Solution:
Create interfaces to set the receive_window per connection.

This addresses #1342 by making receive_window configurable per connection. It only handles receive_window, not stream_receive_window, which can be addressed in a separate PR.
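For context, here is a minimal sketch of how a server might apply a per-connection window with such an interface. The accept flow below follows newer quinn releases, and the setter call is an assumption based on this PR's description rather than confirmed API:

```rust
use quinn::{Connection, Endpoint, VarInt};

// Hypothetical server-side usage: accept a connection, then give it an
// individual receive window instead of the ServerConfig-wide default.
async fn accept_with_window(endpoint: &Endpoint) -> Option<Connection> {
    let incoming = endpoint.accept().await?;
    let connection = incoming.await.ok()?;
    // Assumed per-connection setter introduced by this PR; 1 MiB here.
    connection.set_receive_window(VarInt::from_u32(1024 * 1024));
    Some(connection)
}
```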

@lijunwangs
Contributor Author

Hi @Ralith, can you take a look at this?

@Ralith
Collaborator

Ralith commented Jul 18, 2022

The relevant review from #1384 still applies. Copying here for convenience:

> To be effective, set_receive_window must increase StreamsState::local_max_data by the difference between the new connection-level receive window and the old one, and set pending.max_data = true. If the window shrinks, then instead local_max_data must not be changed, but the difference in credit passed to add_read_credits in future calls should be silently absorbed.
>
> To limit the risk of drift, let's cite the methods on TransportConfig (with intra-doc links) rather than duplicating their docs.
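To make the required bookkeeping concrete, the grow/shrink split described above could look like the following illustrative sketch. It uses a simplified stand-in for quinn-proto's StreamsState: local_max_data mirrors the real field, while receive_window and unacked_shrink are names invented here for illustration:

```rust
/// Simplified stand-in for quinn-proto's StreamsState; only the fields
/// relevant to this discussion are modeled.
struct StreamsState {
    /// Currently configured connection-level receive window.
    receive_window: u64,
    /// Flow-control limit advertised to the peer via MAX_DATA.
    local_max_data: u64,
    /// Credit withheld after a window reduction, to be absorbed by
    /// future read credits before local_max_data may grow again.
    unacked_shrink: u64,
}

impl StreamsState {
    /// Grow: advertise the extra credit immediately (the caller must also
    /// set pending.max_data = true so a fresh MAX_DATA frame is queued).
    /// Shrink: leave local_max_data untouched and record the excess.
    fn set_receive_window(&mut self, new: u64) {
        if new > self.receive_window {
            self.local_max_data += new - self.receive_window;
        } else {
            self.unacked_shrink += self.receive_window - new;
        }
        self.receive_window = new;
    }
}
```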

@lijunwangs
Contributor Author

lijunwangs commented Jul 19, 2022

> The relevant review from #1384 still applies. Copying here for convenience:
>
> To be effective, set_receive_window must increase StreamsState::local_max_data by the difference between the new connection-level receive window and the old one, and set pending.max_data = true. If the window shrinks, then instead local_max_data must not be changed, but the difference in credit passed to add_read_credits in future calls should be silently absorbed.
>
> To limit the risk of drift, let's cite the methods on TransportConfig (with intra-doc links) rather than duplicating their docs.

Thanks @Ralith. On local_max_data when the window is shrunk: just to confirm, when you say "absorbed", does it mean that when we call add_read_credits, we do not do the following:

self.local_max_data = self.local_max_data.saturating_add(credits)

Instead we do self.local_max_data = self.local_max_data.saturating_add(credits - shrink_diff), where shrink_diff = (old_receive_window - new_receive_window)?
In addition, I think we would need to store this delayed diff and, once it is fully applied, set it to 0.

@Ralith
Collaborator

Ralith commented Jul 19, 2022

> Instead we do self.local_max_data = self.local_max_data.saturating_add(credits - shrink_diff), where shrink_diff = (old_receive_window - new_receive_window)?

Close, but some state is required, because it could take many calls to add_read_credits until the excess window size from a single reduction is absorbed. You should compute the excess window size; then, so long as that amount is nonzero, subtract from it instead of adding to local_max_data. Immediately after it hits zero, every following credit goes back to adding to local_max_data. This will also need unit testing to verify that credit is handled correctly even when read credit increments are large, small, etc.
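Continuing the illustrative StreamsState sketch from above, the absorption described here, plus the kind of unit test being asked for, might look like this (a sketch, not quinn's actual implementation):

```rust
impl StreamsState {
    /// Called as the application consumes received data. Credits first
    /// pay down any outstanding window reduction; only the remainder is
    /// advertised to the peer by growing local_max_data.
    fn add_read_credits(&mut self, credits: u64) {
        let absorbed = credits.min(self.unacked_shrink);
        self.unacked_shrink -= absorbed;
        self.local_max_data = self.local_max_data.saturating_add(credits - absorbed);
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn shrink_is_absorbed_across_many_small_credits() {
        let mut s = StreamsState { receive_window: 100, local_max_data: 100, unacked_shrink: 0 };
        s.set_receive_window(40); // shrink by 60
        assert_eq!(s.local_max_data, 100); // unchanged on shrink
        for _ in 0..6 {
            s.add_read_credits(10); // entirely absorbed, 60 in total
        }
        assert_eq!(s.local_max_data, 100);
        s.add_read_credits(10); // excess exhausted; credit flows again
        assert_eq!(s.local_max_data, 110);
    }
}
```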

@lijunwangs
Contributor Author

Hi @Ralith, on the following

> To be effective, set_receive_window must increase StreamsState::local_max_data by the difference between the new connection-level receive window and the old one, and set pending.max_data = true.

Which pending are you referring to? The one in RecvStream, or the one in StreamsState? I do not see that the pending in StreamsState has a max_data field.

@Ralith
Collaborator

Ralith commented Jul 19, 2022

self.spaces[SpaceId::Data].pending in Connection, specifically.

(Inline review comment on quinn/src/connection.rs; outdated, resolved.)
@lijunwangs
Contributor Author

@Ralith -- I have addressed your feedback. Could you take another look? Thanks

@lijunwangs
Contributor Author

@Ralith -- a gentle reminder, can you review this again?

@Ralith
Collaborator

Ralith left a review comment

Thanks for the ping, this had slipped through the cracks.

Thank you for writing up those very thorough tests! Overall this looks great, just a few minor tweaks.

(10 inline review comments on quinn-proto/src/connection/streams/state.rs; all outdated and resolved.)
commit 37b30a8
Author: Lijun Wang <83639177+lijunwangs@users.noreply.github.com>
Date:   Thu Jul 28 17:11:42 2022 -0700

    Fixed some comments from @Ralith
@Ralith
Collaborator

Ralith left a review comment

LGTM, thanks!

@lijunwangs
Contributor Author

Thanks @Ralith for your detailed reviews. Can you help merge this PR? I do not have permission to merge PRs into this repo. Thanks!

@Ralith
Collaborator

Ralith commented Jul 30, 2022

Usually I prefer to give @djc an opportunity to weigh in on nontrivial changes. Do you specifically need it merged quickly?

@lijunwangs
Contributor Author

> Usually I prefer to give @djc an opportunity to weigh in on nontrivial changes. Do you specifically need it merged quickly?

Yes -- we have a project in which we plan to use this feature to set different data limits for different connections on the same endpoint from the server side -- related to issue #1342.

@djc
Collaborator

djc commented Aug 1, 2022

I'll review this today.

@djc merged commit 56271fb into quinn-rs:main on Aug 1, 2022
@lijunwangs
Contributor Author

Hi @Ralith and @djc, I wonder if it is possible to merge this into the 0.8.x branch? Our application is based on the 0.8.x branch.

@djc
Collaborator

djc commented Aug 3, 2022

Sure, please submit a PR to the 0.8.x branch (squashing all the changes).
