Allow HTTP2 encoder to split headers across frames #55322
Conversation
cc @amcasey, who has been in the discussions on the corresponding issue.
Thanks for this! Thorough, as always. Some notes:
Some test questions (you may already have code/coverage for this - I haven't checked):
@amcasey let me reply inline:
Are you wondering about headers that are round-tripped to the client with a large size (previously this would fail, now it could DDoS)? I need to check Kestrel's H2 header size limits (which you also mention), but there is nothing in the Http2FrameWriter in this regard.
It can span into zero or more CONTINUATION frames.
There is no such place, but it could very well be built along Kestrel's limit or an AppContext switch. Please let me know if building such a switch would be your preference. But note that previously "possible" use-cases still work the same as before, so the switch would only control whether large headers are allowed or not -> hence a limit might be the more suitable option.
I did not come across compression/no-compression on this path. HPack encodes the header values into this buffer.
The header is written to a buffer, which is split into CONTINUATION frames, so it does not matter if the name or the value is being oversized.
MaxRequestHeaderFieldSize? -> I need to test the behavior.
It works on anything that HPack writes, to my understanding. I will double-check.
-> If the long one does not fit in the same frame, yes, the initial header will be sent in a tiny frame. This is true even for the current behavior.
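To make the splitting concrete, here is a minimal sketch of the idea (Python for illustration; the helper name is hypothetical, not Kestrel's actual API): once HPack has encoded the header block into a buffer, the buffer is cut at frame-size boundaries into one HEADERS frame and zero or more CONTINUATION frames.

```python
def split_header_block(block: bytes, max_frame_size: int) -> list[tuple[str, bytes]]:
    """Split an HPack-encoded header block into a HEADERS frame payload plus
    zero or more CONTINUATION payloads (RFC 7540, sections 6.2 and 6.10)."""
    if not block:
        # Trailers can produce an empty block; headers never do, because the
        # response status is always present (the subtle difference noted here).
        return [("HEADERS", b"")]
    frames = []
    for offset in range(0, len(block), max_frame_size):
        kind = "HEADERS" if offset == 0 else "CONTINUATION"
        frames.append((kind, block[offset:offset + max_frame_size]))
    return frames  # END_HEADERS would be set only on the last frame
```

Per RFC 7540, END_HEADERS is set only on the final frame of the sequence; no other frames may be interleaved on the connection in between.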
src/Servers/Kestrel/Core/src/Internal/Http2/Http2FrameWriter.cs
```diff
 }
     length = currentLength;
-    return false;
+    return HeaderWriteResult.MoreHeaders;
```
What is the behaviour here if we can't request more (`currentLength == 0 && !canRequestLargerBuffer`)? I haven't pulled the code locally, but it looks like we will return `MoreHeaders`, which will... keep trying? Is there any way this can result in an infinite loop of not making progress?
This (that it cannot request more) should only be the case for the first HEADERS frame, which always contains the response status, so there is no loop. We could consider removing it completely, though.
The side-effect of getting this wrong would be catastrophic, though; it seems that a `throw` (or a throw-helper) that is never hit would be a much better "thing that never happens" than an infinite loop that is never hit. (My point is: if the code gets changed at some point in the future such that our expectations are no longer true, how should it manifest? In this case, it feels bad enough that a fault is preferable to an infinite loop.)
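The fail-fast alternative being suggested can be sketched like this (illustrative Python; `try_encode` and its parameters are hypothetical, not the PR's actual signatures):

```python
class HeaderEncodingError(Exception):
    """Raised when a header can neither fit the buffer nor grow it."""

def try_encode(header_len: int, buffer_len: int, can_request_larger_buffer: bool) -> str:
    """Return "done" if the header fits; otherwise ask the caller to grow the
    buffer and retry, or fail fast when growth is impossible. Failing fast
    turns a "this never happens" state into a visible fault instead of an
    infinite retry loop."""
    if header_len <= buffer_len:
        return "done"
    if not can_request_larger_buffer:
        raise HeaderEncodingError("oversized header with a fixed-size buffer")
    return "more"  # caller doubles the buffer and calls again
```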
There is a debug assert on the call site.
The `canRequestLargerBuffer` flag could be dropped if we applied a do-while loop to the initial header as well, but I felt that might sacrifice perf. I will do a measurement.
But when I did that, removed `canRequestLargerBuffer` and implemented a do-while loop in `WriteResponseHeadersUnsynchronized` to enlarge the header buffer, the perf decreased. Using the same `Http2FrameWriterBenchmark` as in the description:
| Method | Mean | Error | StdDev | Op/s | Gen 0 | Gen 1 | Gen 2 | Allocated |
|---|---|---|---|---|---|---|---|---|
| WriteResponseHeaders | 80.23 ns | 1.206 ns | 1.128 ns | 12,463,819.6 | 0.0002 | - | - | 32 B |
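For reference, the buffer-growth strategy under discussion (doubling until a single oversized header fits, per the PR description) can be sketched as follows (Python for illustration; the function and its cap parameter are hypothetical):

```python
def grow_buffer(required: int, capacity: int, max_capacity: int) -> int:
    """Double the buffer capacity until a single oversized header fits,
    mirroring the retry loop discussed above; give up at max_capacity
    rather than growing (or retrying) forever."""
    while capacity < required:
        if capacity >= max_capacity:
            raise ValueError("header larger than the configured maximum")
        capacity = min(capacity * 2, max_capacity)
    return capacity
```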
Background: I've looked at HTTP2 headers a lot. I wrote some of the dynamic support and rewrote the writer and parser at one point. I haven't looked through the code in detail yet. Some initial thoughts:
@ladeak Thanks! Your responses make sense.
I don't think we need a switch. If this lands in a mid .NET 9 preview, then it should go through a lot of use and testing before GA. For example, the work David did to rewrite response writing in .NET 7(?) was much more complex and we didn't have a fallback.
Good enough for me. I hadn't considered how well exercised this code is by TechEmpower, etc.
@ladeak Did you receive the initial feedback you needed? Is this ready for a full review, or are you still working on it? There's no rush; I just wondered whether the next steps were our responsibility. Thanks!
@amcasey Going to come back tomorrow with some findings.
Discussion about the header size limits: as I understand it, there is a general desire to have a limit. However, response headers mostly depend on the app and the way the app handles headers. I have not found limits for HTTP/1.1. An empty/default Kestrel ASP.NET Core webapp also allows headers as large as desired with H1. On the consumer side I ran into limits, though (H1): .NET. When I run the app with IISExpress, it seems to only return the remainder of 64k (header size mod 65536). @amcasey, the following questions would need further clarification:
Since the spec doesn't give a limit, I think we need to give users a way to override whatever we decide. I'm not sure why it would be specific to HTTP/2, though; presumably the same concern applies to HTTP/1.1 or HTTP/3. My first thought for a default would be double. If we were to decide this shouldn't get a public API, I'd want to go even higher, maybe 10 MB.
@amcasey I don't think I had considered a limit for HTTP/1.1, given it does not currently have one, and setting 64 KB would be breaking, wouldn't it? (I am not sure how difficult it would be to implement this for HTTP/1.1; HTTP/3 looks similar to H2 in code structure.) But it makes sense, from the point of view you describe, that it could be a setting that applies to all versions of HTTP. A question if it is public: should it apply to a single header or to the total headers? Among consumers, HttpClient and Edge had a total limit, while curl has a per-header limit.
Because we're guarding against resource utilization rather than malformed responses, I think the limit should apply to the total size of all headers, rather than to the size of any individual header. Similarly, if we're reducing the whole detection mechanism to a single number, I would expect trailers to be included. I'm open to feedback on both.
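A total-size check along these lines could look like the following sketch (Python for illustration; the names and the dict-based header representation are assumptions):

```python
def total_header_bytes(headers: dict[str, str], trailers: dict[str, str]) -> int:
    """Sum name + value lengths over headers AND trailers, pre-compression,
    since the limit guards resource use rather than individual field shape."""
    items = list(headers.items()) + list(trailers.items())
    return sum(len(name) + len(value) for name, value in items)

def enforce_total_limit(headers: dict[str, str], trailers: dict[str, str], limit_bytes: int) -> int:
    """Raise when the combined size exceeds the configured limit; otherwise
    return the measured size."""
    size = total_header_bytes(headers, trailers)
    if size > limit_bytes:
        raise ValueError(f"headers + trailers ({size} B) exceed limit ({limit_bytes} B)")
    return size
```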
Yes, it would. I think we generally accept breaks in service of DoS prevention, but I agree that this is a strong argument for choosing a default that is larger than we expect anyone to use in practice. If we felt really strongly about this, I could live with adding limits to both HTTP/2 and HTTP/3 and not to HTTP/1.1.
@amcasey, I added a commit that has a new limit on One thing I found on the way: I would expect a similar implementation would be needed on H/1.1 and H/3 (because the public setting is at the Kestrel level), hence I am not sure if a public limit is worth all the complexity to be added. Maybe a setting called
Thanks for the prototype and the thoughtful write-up.
Assuming no perf impact, I would probably accept that level of complexity to get the extra protection. Having said that, it feels like there are ways we could reduce the complexity. What if we capped the header size before HPack? Would that let us do a single check up front? Given how much larger we expect the limit to be than anything an app would reasonably use, I don't think HPack is going to be what saves people (i.e. by keeping them under the limit). One implementation note: when you give people an option like this, it's not unusual for them to pass
It does seem like a well-behaved server ought to respect that setting. Maybe we had a reason for not already doing so? @halter73 @JamesNK? If we were to add that functionality (possibly in a separate PR), it would be important to ensure that it uses different error text from the internal server limit so app authors know they can't control it.
I would agree that a public limit should apply to all protocols. Would the H/1.1 and H/3 changes be simpler if we used a limit on the pre-compression size? Again, any insight from @halter73 @JamesNK on why H/1.1 doesn't already have such a limit would be welcome.
I'm not sure I understand the suggestion. Isn't the buffer the thing we use for breaking the header into pieces? Does your proposed setting limit the size of each piece or is it a different buffer?
It's slightly wonky. You can use Set/GetData, but you get an object and then you have to check for both int and string. It's not ideal, but it's possible. I could live with making it an AppContext switch at first and eventually promoting it to a setting, but with the information I have now I think I would still lean towards having a public setting. As always, I'm open to arguments in favor of going a different way. There's an example here.
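The int-or-string wonkiness described here can be handled with a small normalization step, sketched in Python (the helper is hypothetical; in .NET this would wrap the object returned by `AppContext.GetData`):

```python
def coerce_limit(raw: object, default: int) -> int:
    """Normalize a config slot that may hold an int, a numeric string,
    or nothing at all into an int limit, falling back to the default."""
    if isinstance(raw, bool):  # bool is an int subclass in Python; treat as unset
        return default
    if isinstance(raw, int):
        return raw
    if isinstance(raw, str):
        try:
            return int(raw)
        except ValueError:
            return default  # unparseable string: ignore and use the default
    return default
```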
The reason I went the way it is implemented is that the encoding is also applied by the HPack writer, but this suggestion makes sense to investigate. I will check if there is any common place for all protocols to perform this.
Makes sense, thank you for the reminder.
Each piece. The reason I keep coming back to this idea is that my understanding was that the problem is allocating a really large buffer, and this would be an easy check. But as discussed above, let me try to pursue validating the total headers before the write operation.
I am thinking about (and moving, WIP) the limit logic to
I was actually thinking of a single check for each protocol, but I'm fine with merging the checks if that's an option.
Yeah, I think I was just mixing myself up. Preventing the actual problem seems like a viable way forward. We just need to make sure we're able to give the user an intelligible setting.
In
I am still looking at this idea. If I want to avoid iterating over all headers an additional time, I find it very suitable to calculate the header length in I will prepare a prototype of this solution. The confusing part is that EncodeHeadersCore already has an
Thanks! I agree that an additional iteration would be undesirable. I'm going to be out of town next week, but @mgravell should be around to answer your questions (except Monday, which is a holiday for him). |
@amcasey (and @mgravell) I moved the size validation into This PR is still not covering HTTP/1.1 and HTTP/3, but I removed the Draft label, so we can discuss the other HTTP versions and the new limit. Another consideration on the way the headers length is aggregated: because it is HTTP-version specific, and because the current HTTP/2 implementation piggybacks onto So, if this approach is kept, that we have the limit calculated for each protocol (and to me it seems reasonable), maybe separate settings would make more sense? (Unless it gets too granular, or these differences are acceptable.)
Allow the HTTP2 encoder to split headers across frames
Enable Kestrel's HTTP2 to split large HTTP headers into HEADER and CONTINUATION frames.
Description
Kestrel's HTTP2 implementation limits the maximum header size to the frame size. If a header is larger than the frame size, it throws an exception: `throw new HPackEncodingException(SR.net_http_hpack_encode_failure);`. RFC 7540 allows headers to be split into a HEADERS frame and CONTINUATION frames. Before this change, Kestrel only used CONTINUATION frames when each header fit fully within a frame.
This PR changes the above behavior by allowing even a single HTTP header to be split across multiple frames. It uses an `ArrayBufferWriter<byte>` (similar to the .NET runtime) to back the buffer used by the HPack encoder. When the HPack encoder reports that a single header does not fit in the available buffer, the size of the buffer is increased. Note that the .NET runtime implementation in HttpClient writes all headers to a single buffer before pushing it onto the output, contrary to this implementation, which keeps the semantics of Kestrel: it only increases the buffer when a single header fails to be written to the output; otherwise the old behavior is kept. My intention was to keep this behavior so that, memory-wise, it does not use more memory than the single largest header or the max frame size.
With this PR, `HPackHeaderWriter` uses an enum to tell `Http2FrameWriter` whether to increase the buffer. When the buffer is too small, its size is doubled. This behavior is also implemented for trailers. Note that in the case of headers, the HEADERS frame is never empty because of the response status, while this is not true for trailers. Hence there is a subtle difference when getting the buffer for the initial frame of a header vs. a trailer.
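A minimal sketch of that control flow (Python for illustration; only `MoreHeaders` appears in the PR diff, so the other enum member names here are assumptions):

```python
from enum import Enum, auto

class HeaderWriteResult(Enum):
    # Illustrative names; the PR only shows that a MoreHeaders value exists.
    DONE = auto()             # everything written; frame writer can finish
    MORE_HEADERS = auto()     # flush the current frame, then keep encoding
    BUFFER_TOO_SMALL = auto() # a single header did not fit; double the buffer

def classify(header_len: int, buffer_len: int, more_pending: bool) -> HeaderWriteResult:
    """Map one encode attempt to the signal the frame writer acts on."""
    if header_len > buffer_len:
        return HeaderWriteResult.BUFFER_TOO_SMALL
    return HeaderWriteResult.MORE_HEADERS if more_pending else HeaderWriteResult.DONE
```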
I updated existing tests asserting the previous behavior and added new tests to validate the proposed changes.
Performance
Performance-wise, the change is not expected to increase throughput (given it must do more to enable this use-case), but the goal is that the slow path is hit only when a single header is too large. I used the existing `Http2FrameWriterBenchmark` to compare performance before and after.
Before changes (updated 2024. 05. 04., main):
After changes, rebased on main (updated with validating the header length in `HPackHeaderWriter`):
Fixes #4722