
Add configurable max_udp_payload_size transport parameter #3988

Closed

Conversation


@DevinCarr DevinCarr commented Jul 21, 2023

Helps partially address #3300 and #3385 by adding a configuration parameter for the max_udp_payload_size.

Even though the RFC defines the default max_udp_payload_size as 65527, this PR keeps the assumed default UDP payload size of 1452 (MaxPacketBufferSize). It does so because MaxPacketBufferSize is used throughout the code to allocate maximum buffer capacity, so by retaining this maximum we can implement these changes without (currently) needing to handle larger packet sizes. The change works with lower max_udp_payload_size values, i.e. greater than 1200 and at most 1452.
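Roughly, the clamping works like this (a sketch with illustrative names, not the exact identifiers in this PR):

```go
package main

import "fmt"

const (
	minAllowedPayloadSize = 1200 // RFC 9000's floor; this PR targets values above it
	maxPacketBufferSize   = 1452 // the buffer size quic-go allocates today
)

// clampMaxUDPPayloadSize keeps a configured max_udp_payload_size within the
// range this PR assumes: no lower than 1200 and at most 1452.
func clampMaxUDPPayloadSize(configured uint64) uint64 {
	if configured < minAllowedPayloadSize {
		return minAllowedPayloadSize
	}
	if configured > maxPacketBufferSize {
		return maxPacketBufferSize
	}
	return configured
}

func main() {
	fmt.Println(clampMaxUDPPayloadSize(1400)) // 1400: within range, kept as-is
	fmt.Println(clampMaxUDPPayloadSize(9000)) // 1452: clipped to the buffer size
}
```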

Adjusting MaxPacketBufferSize to be larger would also require that the buffer_pool be adjustable before a connection is opened, and the pool would be unique to that connection. Alternatively, the buffer_pool's per-buffer maximum could be raised to something like 1600 bytes, since it's unlikely (currently) that many would want a UDP packet size larger than jumbograms; the transport parameter would then also have to be clipped to that upper limit. I'm unsure which approach is best, but defaulting to the 65527 limit seems too large and unlikely to be used by many except perhaps on unusual networks.
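To show why that's awkward, here is a hedged sketch of the fixed-size pool pattern in question (the real buffer_pool is structured differently):

```go
package main

import "sync"

const maxPacketBufferSize = 1452 // raising this grows every buffer in the pool

// bufferPool hands out buffers of a single process-wide size, which is why a
// larger, per-connection max_udp_payload_size doesn't fit the current design.
var bufferPool = sync.Pool{
	New: func() any {
		b := make([]byte, maxPacketBufferSize)
		return &b
	},
}

func getPacketBuffer() *[]byte  { return bufferPool.Get().(*[]byte) }
func putPacketBuffer(b *[]byte) { bufferPool.Put(b) }

func main() {
	b := getPacketBuffer()
	defer putPacketBuffer(b)
	_ = (*b)[:maxPacketBufferSize] // every caller sees the same fixed capacity
}
```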

Additionally, Path MTU Discovery can operate as normal with this parameter; it just supports the case where one side of a QUIC connection knows that its PMTU can drop to a certain lower threshold.

I have held off on adding tests for these changes because I expect you'll want to iterate on the implementation and address the adjustments above before merging. I'm willing to continue the work here, but I wanted to pause at this point for your feedback.


@marten-seemann
Member

Can you help me understand the motivation behind this PR? Why do you need packet sizes beyond 1452 bytes?

I understand that there’s the jumbo-frames use case on local networks, but if I understand correctly, that’s not what you’re trying to solve here.

@DevinCarr
Author

DevinCarr commented Jul 21, 2023

We (and thereby my use case) don't need payload sizes beyond 1452 bytes. I just didn't want to rule out the case where someone does want to go higher, as described by other folks in #3385. If you feel it would be fine to provide max_udp_payload_size as a configurable parameter that can only go lower than 1452 (but stay above 1200), I'd be fine with that for this PR.

@marten-seemann
Member

Why does it need to be configurable at all? PMTUD will find the appropriate values, right?

@DevinCarr
Author

Currently, PMTUD only discovers the MTU at the beginning of the connection; if the PMTU later decreases, it causes packet loss (#3955).

To avoid this entirely, if a network operator knows that the PMTU isn't always stable at 1500 and could, for example, only ever drop to 1400, setting max_udp_payload_size is the tool they could reach for. The long-term goal would be ongoing PMTU discovery, but that seems like a more advanced use case that can be pursued separately. max_udp_payload_size is already defined in the RFC and can be exposed for users to configure, which is what this PR attempts to accomplish.
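To make that scenario concrete, a hypothetical configuration (the Config type and field name are invented for illustration; the PR's actual API may differ):

```go
package main

import "fmt"

// Config stands in for a QUIC configuration carrying the proposed knob.
type Config struct {
	// MaxUDPPayloadSize would be advertised to the peer as the
	// max_udp_payload_size transport parameter.
	MaxUDPPayloadSize uint64
}

func main() {
	// The operator knows the PMTU can drop to 1400, so the peer is told
	// never to send UDP datagrams larger than that.
	conf := Config{MaxUDPPayloadSize: 1400}
	fmt.Printf("advertising max_udp_payload_size = %d\n", conf.MaxUDPPayloadSize)
}
```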

@marten-seemann
Member

I see. With the transport parameter, you're making sure that the peer (and not you) doesn't send packets larger than the configured size. In other words, you're protecting against a peer that doesn't properly implement PMTUD.

That's not really what this TP was intended for. The TP exists to inform your peer of your maximum receive buffer size.
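For background, this TP is max_udp_payload_size (ID 0x03 in RFC 9000 §18.2), carried as a variable-length integer; a minimal sketch of the wire format (not quic-go's internal encoder):

```go
package main

import "fmt"

// appendVarint encodes v as a QUIC variable-length integer (RFC 9000 §16).
func appendVarint(b []byte, v uint64) []byte {
	switch {
	case v < 1<<6:
		return append(b, byte(v))
	case v < 1<<14:
		return append(b, 0x40|byte(v>>8), byte(v))
	case v < 1<<30:
		return append(b, 0x80|byte(v>>24), byte(v>>16), byte(v>>8), byte(v))
	default:
		return append(b, 0xc0|byte(v>>56), byte(v>>48), byte(v>>40),
			byte(v>>32), byte(v>>24), byte(v>>16), byte(v>>8), byte(v))
	}
}

func main() {
	const tpMaxUDPPayloadSize = 0x03
	value := appendVarint(nil, 1452)             // the advertised payload size
	tp := appendVarint(nil, tpMaxUDPPayloadSize) // parameter ID
	tp = appendVarint(tp, uint64(len(value)))    // length prefix
	tp = append(tp, value...)                    // varint-encoded value
	fmt.Printf("% x\n", tp)                      // 03 02 45 ac
}
```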

I’d prefer to do “the right thing” and fix our PMTUD algorithm, since it doesn’t fully implement the respective RFC. I don’t think it’s that much work to get those changes in, and it would probably be reasonable to target that for the v0.38 release. What do you think?

@DevinCarr
Author

Sure, if you think that would be more valuable! I tried this avenue since it seemed like a smaller-scoped change than adjusting the PMTUD algorithm to handle decreases (or increases).

We can bring this discussion back to #3955 if you can give me some pointers to the relevant RFC section(s) and where in the code you think the changes would best be made. I'm not sure about getting this done for v0.38 since I don't know your timeline for that release, but I'd be willing to at least get started on it.

@marten-seemann
Member

No fixed timeline for v0.38 yet. We just released v0.37 today, so I’d expect v0.38 to ship towards the end of August / beginning of September, depending on the changes that make it in by then.

And yes, let’s move the discussion to #3955.

@joliveirinha
Contributor

Can we still pivot this PR to address #3300 only?

It is independent of the PMTU issue, and we can solve that problem here.

@marten-seemann
Member

As a short-term fix, I'd recommend simply turning off PMTUD.
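quic-go has a switch for this; a short example (check the Config docs for your version, as fields may change):

```go
package main

import "github.com/quic-go/quic-go"

func newConfig() *quic.Config {
	return &quic.Config{
		// With PMTUD disabled, quic-go skips MTU probing and sticks to its
		// conservative default packet size.
		DisablePathMTUDiscovery: true,
	}
}

func main() { _ = newConfig() }
```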

@marten-seemann
Member

Closing since we're not planning to add another config flag.
