Pass through upgrade request #11266
Closed
Conversation
Motivation: `Http2Frame` has an extra empty line after `String name();` that should not be there. Modification: Removed the extra empty line. Result: Empty-line code style now matches that of other classes.
This reverts commit 78d5ab4.
Motivation: We should have a `toString` method in `DefaultHttp2WindowUpdateFrame` because it makes debugging a little easier. Modification: Added a `toString` method. Result: `toString` method to help in debugging. Co-authored-by: Norman Maurer <norman_maurer@apple.com>
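A minimal sketch of what such a `toString` buys (the class and field names below are hypothetical stand-ins, not Netty's actual `DefaultHttp2WindowUpdateFrame`): with an override in place, the frame's fields show up directly in log output instead of an opaque `ClassName@hash`.

```java
public class WindowUpdateFrameDemo {
    static final class WindowUpdateFrame {
        private final int windowSizeIncrement;

        WindowUpdateFrame(int windowSizeIncrement) {
            this.windowSizeIncrement = windowSizeIncrement;
        }

        @Override
        public String toString() {
            // Hypothetical format; the point is that the increment is visible in logs.
            return "WindowUpdateFrame(windowSizeIncrement=" + windowSizeIncrement + ')';
        }
    }

    public static void main(String[] args) {
        System.out.println(new WindowUpdateFrame(65535)); // WindowUpdateFrame(windowSizeIncrement=65535)
    }
}
```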
…ameter (#10760) Motivation: `HttpConversionUtil#toHttpResponse` translates `Http2Headers` to `HttpResponse`. It uses `#addHttp2ToHttpHeaders(..., boolean isRequest)` to do so. However, the `isRequest` field is set to `true` instead of `false`. It should be set to `false` because we're converting a Response, not a Request. Modification: Changed `true` to `false`. Result: Correctly translates `Http2Headers` to `HttpResponse`.
Motivation: There is no need for ByteProcessor to throw a checked exception. The declared checked exception causes unnecessary code complications just to propagate it. This can be cleaned up. Modification: ByteProcessor.process no longer declares to throw a checked exception, and all the places that were trying to cope with the checked exception have been simplified. Result: Simpler code.
This reverts commit b70d0fa.
Motivation: There is no need for ByteProcessor to throw a checked exception. The declared checked exception causes unnecessary code complications just to propagate it. This can be cleaned up. Modification: ByteProcessor.process no longer declares to throw a checked exception, and all the places that were trying to cope with the checked exception have been simplified. Result: Simpler code.
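The shape of this API change can be sketched with simplified, hypothetical types (not Netty's actual `ByteProcessor`): once `process` no longer declares a checked exception, callers can iterate without try/catch boilerplate and implementations throw unchecked exceptions if they must.

```java
public class ByteProcessorDemo {
    // After the change: no "throws Exception" on process(...).
    interface ByteProcessor {
        boolean process(byte value); // return false to stop iteration
    }

    // Find the index of the first byte the processor rejects, or -1 if none.
    static int forEachByte(byte[] data, ByteProcessor processor) {
        for (int i = 0; i < data.length; i++) {
            if (!processor.process(data[i])) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        byte[] data = {1, 2, 0, 3};
        // Scan until the first NUL byte; no exception handling needed.
        System.out.println(forEachByte(data, value -> value != 0)); // 2
    }
}
```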
Motivation: PoolChunk maintains multiple PriorityQueue<Long> collections. The usage of PoolChunk#removeAvailRun unboxes the Long values to long, and then this method uses queue.remove(..) which will auto box the value back to Long. This creates unnecessary allocations via Long.valueOf(long). Modifications: - Adjust method signature and usage of PoolChunk#removeAvailRun to avoid boxing Result: Less allocations as a result of PoolChunk#removeAvailRun.
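The boxing round trip can be illustrated with a plain `PriorityQueue<Long>` (a simplified stand-in for `PoolChunk`'s run queues; method names here are hypothetical): passing a primitive `long` to `remove(...)` forces a fresh `Long.valueOf` allocation, while keeping the already-boxed `Long` avoids it.

```java
import java.util.PriorityQueue;

public class RemoveAvailRunDemo {
    // Before: caller unboxed to long, so remove(...) must autobox again
    // via Long.valueOf(handle) on every call.
    static boolean removeBoxedAgain(PriorityQueue<Long> queue, long handle) {
        return queue.remove(handle); // implicit Long.valueOf(handle)
    }

    // After: pass the original boxed Long through; no new allocation here.
    static boolean removeDirect(PriorityQueue<Long> queue, Long handle) {
        return queue.remove(handle);
    }

    public static void main(String[] args) {
        PriorityQueue<Long> queue = new PriorityQueue<>();
        Long handle = 123456789L;
        queue.offer(handle);
        System.out.println(removeDirect(queue, handle)); // true
    }
}
```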
Motivation: JUnit 5 is the new hotness. It's more expressive, extensible, and composable in many ways, and it's better able to run tests in parallel. But most importantly, it's able to directly run JUnit 4 tests. This means we can update and start using JUnit 5 without touching any of our existing tests. I'm also introducing a dependency on assertj-core, which is like hamcrest, but arguably has a nicer and more discoverable API. Modification: Add the JUnit 5 and assertj-core dependencies, without converting any tests at this time. Result: All our tests are now executed through the JUnit 5 Vintage Engine. Also, the JUnit 5 test APIs are available, and any JUnit 5 tests that are added from now on will also be executed.
Motivation: There are new Java releases. Modifications: Update Java versions so we use the latest on the CI. Result: Use latest releases
Motivation: `dnsinfo` uses `Apple Public Source License 2.0` not `Apache License 2.0`. Modification: Changed `Apache License 2.0` to `Apple Public Source License 2.0` Result: Fixes #10772
Motivation: We received a [bug report](https://bugs.chromium.org/p/chromium/issues/detail?id=1143320) from the Chrome team at Google, their canary builds are failing [HTTP/2 GREASE](https://tools.ietf.org/html/draft-bishop-httpbis-grease-00) testing to netflix.com. The reason it's failing is that Netty can't handle unknown frames without an active stream created. Let me know if you'd like more info, such as stack traces or repro steps. Modification: The change is minor and simply ignores unknown frames on the connection stream, similarly to `onWindowUpdateRead`. Result: I figured I would just submit a PR rather than filing an issue, but let me know if you want me to do that for tracking purposes.
…error (#10775) Motivation: When parsing HEADERS, connection errors can occur (e.g., too large of headers, such that we don't want to HPACK decode them). These trigger a GOAWAY with a last-stream-id telling the client which streams haven't been processed. Unfortunately that last-stream-id didn't include the stream for the HEADERS that triggered the error. Since clients are free to silently retry streams not included in last-stream-id, the client is free to retransmit the request on a new connection, which will fail the connection with the wrong last-stream-id, and the client is still free to retransmit the request. Modifications: Have fatal connection errors (those that hard-cut the connection) include all streams in last-stream-id, which guarantees the HEADERS' stream is included and thus should not be silently retried by the HTTP/2 client. This modification is heavy-handed, as it will cause racing streams to also fail, but alternatives that provide precise last-stream-id tracking are much more invasive. Hard-cutting the connection is already heavy-handed and so is rare. Result: Fixes #10670
Motivation: We should add the `HttpScheme` when `HttpRequest` does not contain the `x-http2-scheme` header and `HttpToHttp2ConnectionHandler` is built with a specified `HttpScheme`. Modification: Added `HttpScheme` in `HttpToHttp2ConnectionHandlerBuilder`. Result: Automatically adds `HttpScheme` if missing in `HttpRequest`.
Motivation: Printing download progress in the build log makes it harder to see what's wrong when the build fails. Modification: Change the maven command to not show transfer progress, and also enable batch mode so Maven doesn't print in colors that we can't see anyway. Result: More concise code analysis build logs.
…10783) Motivation: Sometimes it would be helpful to easily detect if an operation failed due to the SSLEngine already being closed. Modifications: Add a special exception that is used when the engine was closed. Result: Easier to detect a failure caused by a closed engine
Motivation: `HttpConversionUtil#toFullHttpResponse` and `HttpConversionUtil#toFullHttpRequest` have a `ByteBufAllocator` parameter which is used for building the `FullHttpMessage`, and then data can be appended with `FullHttpMessage#content`. However, there can be cases when we already have a `ByteBuf` ready with data. So we need a parameter to add a `ByteBuf` directly into the `FullHttpMessage` while creating it. Modification: Added a `ByteBuf` parameter. Result: More functionality for handling `FullHttpMessage` content.
Motivation: https in xmlns URIs does not work and will let the maven release plugin fail:

```
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.779 s
[INFO] Finished at: 2020-11-10T07:45:21Z
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-release-plugin:2.5.3:prepare (default-cli) on project netty-parent: Execution default-cli of goal org.apache.maven.plugins:maven-release-plugin:2.5.3:prepare failed: The namespace xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" could not be added as a namespace to "project": The namespace prefix "xsi" collides with an additional namespace declared by the element -> [Help 1]
[ERROR]
```

See also https://issues.apache.org/jira/browse/HBASE-24014. Modifications: Use http for xmlns Result: Be able to use maven release plugin
Motivation: 2d1b143 missed to change the artifactId in one place Modification: Change to netty-build-common Result: Release works
Motivation: `DelegatingDecompressorFrameListener#initDecompressor` has multiple dots `.` in comments. However, it should not have that. Modification: Removed multiple dots. Result: Clean comment
Fix issue #10508 where PARANOID mode slows down about 1000 times compared to ADVANCED. Also fix a rare issue where the internal buffer was growing over a limit and was partially discarded using `discardReadBytes()`, which causes bad changes within previously discovered HttpData.

Reasons were: too many `readByte()` method calls while other ways exist (such as keeping in memory the last scan position when trying to find a delimiter, or using `bytesBefore(firstByte)` instead of looping externally).

Changes done:
- Major change in the way buffers are parsed: instead of reading byte per byte until the delimiter is found, try to find the delimiter using `bytesBefore()` and keep the last not-found position to skip already parsed parts (the algorithms are the same but the implementation of the scan is different).
- Change the condition to discard read bytes when refCnt is at most 1.

Observations using Async-Profiler:
1) Without optimizations, most of the time (more than 95%) is spent in the `readByte()` method within the `loadDataMultipartStandard` method.
2) Using `bytesBefore(byte)` instead of `readByte()` to find the various delimiters, the `loadDataMultipartStandard` method goes down to 19 to 33% depending on the test used, and the `readByte()` method or the equivalent `getByte(pos)` method goes down to 15% (from 95%).

Times confirm this profiling: with optimizations, SIMPLE mode is about 82% better, ADVANCED mode about 79% better, and PARANOID mode about 99% better (most of the duplicate read accesses are removed or made internally through the `bytesBefore(byte)` method).

A benchmark is added to show the behavior of the various cases (one big item, such as a file upload, and many items) and the various levels of detection (Disabled, Simple, Advanced, Paranoid).
This benchmark is intended to alert if new implementations introduce too large differences (such as the previous version, where PARANOID was about 1000 times slower than the other levels, while it is now at most about 10 times slower). Extract of benchmark run:

Run complete. Total time: 00:13:27

Benchmark                                                                          Mode  Cnt  Score   Error  Units
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigAdvancedLevel   thrpt   6  2,248 ± 0,198  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigDisabledLevel   thrpt   6  2,067 ± 1,219  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigParanoidLevel   thrpt   6  1,109 ± 0,038  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderBigSimpleLevel     thrpt   6  2,326 ± 0,314  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighAdvancedLevel  thrpt   6  1,444 ± 0,226  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighDisabledLevel  thrpt   6  1,462 ± 0,642  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighParanoidLevel  thrpt   6  0,159 ± 0,003  ops/ms
HttpPostMultipartRequestDecoderBenchmark.multipartRequestDecoderHighSimpleLevel    thrpt   6  1,522 ± 0,049  ops/ms
…ple (#10807) Motivation: People may use the object serialisation example as a vehicle to test out sending their own objects across the wire. If those objects are not actually serialisable for some reason, then we need to let the exception propagate so that this becomes obvious to people. Modification: Add a listener to the future that sends the first serialisable message, so that we ensure that any exceptions that shows up during serialisation becomes visible. Without this, the state of the future that sent the first message was never checked or inspected anywhere. Result: Serialisation bugs in code derived from the Object Echo example are much easier to diagnose. This fixes #10777
…fter custom headers to avoid duplication (#10793) Motivation: According to the RFC (https://tools.ietf.org/html/rfc6455#section-11.3.4), the `Sec-WebSocket-Protocol` header field MUST NOT appear more than once in an HTTP response. At the moment we can pass `Sec-WebSocket-Protocol` via custom headers and it will be added to the response. Modification: Change the add() method call to set() to avoid duplication. If we pass sub protocols in the handshaker constructor, it means they are preferred over custom ones. Result: Less error-prone behavior.
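The add()-versus-set() difference can be sketched with a minimal multimap-style header container (hypothetical, not Netty's `HttpHeaders`): `add` appends a second `Sec-WebSocket-Protocol` value, while `set` replaces whatever was copied in from the custom headers.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class HeaderDemo {
    private final Map<String, List<String>> headers = new LinkedHashMap<>();

    // add() appends: a value copied from custom headers would be duplicated.
    void add(String name, String value) {
        headers.computeIfAbsent(name, k -> new ArrayList<>()).add(value);
    }

    // set() replaces: the handshaker's sub protocol wins, no duplicate field.
    void set(String name, String value) {
        List<String> values = new ArrayList<>();
        values.add(value);
        headers.put(name, values);
    }

    List<String> getAll(String name) {
        return headers.getOrDefault(name, Collections.emptyList());
    }

    public static void main(String[] args) {
        HeaderDemo response = new HeaderDemo();
        response.add("sec-websocket-protocol", "custom"); // from custom headers
        response.set("sec-websocket-protocol", "chat");   // handshaker's choice
        System.out.println(response.getAll("sec-websocket-protocol")); // [chat]
    }
}
```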
Motivation: ABORT and COMMIT commands were missing from the enum but they are part of the STOMP spec. Modifications: Modified the enum to add the missing commands. Result: ABORT and COMMIT commands can now be parsed properly and acted on.
Motivation: When a HashedWheelTimer instance is started or stopped, its working thread is started or stopped. These operations block the calling thread: - start() calls java.util.concurrent.CountDownLatch.await() to wait for the worker thread to finish initializing; - stop() calls java.lang.Thread.join(long) to wait for the worker thread to exit. BlockHound detects these calls and as a consequence, prevents HashedWheelTimer from working properly, if it is started or stopped in a thread that is not allowed to block. Modifications: Added two more BlockHound exceptions to io.netty.util.internal.Hidden.NettyBlockHoundIntegration: one for HashedWheelTimer.start() and one for HashedWheelTimer.stop(). Result: HashedWheelTimer can now be started and stopped properly when BlockHound is activated.
Motivation: In some environments sun.misc.Unsafe is not present. We should support these as well. Modifications: Fall back to JNI if we can't directly access the memoryAddress of the buffer. Result: Fixes #10813
Motivation: GlobalEventExecutor.addTask was rightfully allowed to block by commit 09d38c8. However the same should have been done for SingleThreadEventExecutor.addTask. BlockHound is currently intercepting that call, and as a consequence, it prevents SingleThreadEventExecutor from working properly, if addTask is called from a thread that cannot block. The interception is due to LinkedBlockingQueue.offer implementation, which uses a ReentrantLock internally. Modifications: * Added one BlockHound exception to io.netty.util.internal.Hidden.NettyBlockHoundIntegration for SingleThreadEventExecutor.addTask. * Also added unit tests for both SingleThreadEventExecutor.addTask and GlobalEventExecutor.addTask. Result: SingleThreadEventExecutor.addTask can now be invoked from any thread when BlockHound is activated.
… channelInputClosed(...) is processing the buffer. (#10817) Motivation: We need to carefully check for null before we pass the cumulation buffer into decodeLast, as callDecode(...) may have removed the codec already and so set cumulation to null. Modifications: - Check for null and if we see null use Unpooled.EMPTY_BUFFER - Only call decodeLast(...) if callDecode(...) didn't remove the handler yet. Result: Fixes #10802
Motivation: Netty lacks client side support for decompressing Brotli compressed response bodies. Modification: * Introduce an optional dependency on brotli4j by @hyperxpro. It will be up to the user to provide the brotli4j libraries for the target platform in the classpath. brotli4j is currently available for Linux, OSX and Windows, all for x86 only. * Introduce BrotliDecoder in the codec module * Plug it onto `HttpContentDecompressor` for HTTP/1 and `DelegatingDecompressorFrameListener` for HTTP/2 * Add a test in `HttpContentDecoderTest` * Add `BrotliDecoderTest` that doesn't extend `AbstractDecoderTest`, which looks flaky Result: Netty now supports decompressing Brotli compressed response bodies.
Motivation: The Mac OS specific DNS resolver fails to take into account the search order of resolvers, causing the wrong resolver to be used in some circumstances. Modifications: Re-order the array of resolvers using their sort order as an ordering key. The final order is the opposite of the search order, to make sure that the resolver with the lower sort order goes last (so it overrides the previous one in the `resolverMap`). Result: Fixes issue #11225
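The re-ordering idea can be sketched as follows (simplified, hypothetical types; the real code works on native macOS resolver structs): sort descending by search order so the lowest (highest-priority) resolver is inserted into the map last, and the last write per domain wins.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.Map;

public class ResolverOrderDemo {
    static final class Resolver {
        final String domain;
        final int searchOrder; // lower value = preferred on macOS

        Resolver(String domain, int searchOrder) {
            this.domain = domain;
            this.searchOrder = searchOrder;
        }
    }

    static Map<String, Resolver> buildResolverMap(Resolver[] resolvers) {
        Resolver[] sorted = resolvers.clone();
        // Descending by search order: preferred resolver ends up LAST.
        Arrays.sort(sorted, Comparator.comparingInt((Resolver r) -> r.searchOrder).reversed());
        Map<String, Resolver> resolverMap = new LinkedHashMap<>();
        for (Resolver r : sorted) {
            resolverMap.put(r.domain, r); // last write wins per domain
        }
        return resolverMap;
    }

    public static void main(String[] args) {
        Resolver[] resolvers = {
            new Resolver("example.com", 200000),
            new Resolver("example.com", 100000), // preferred
        };
        System.out.println(buildResolverMap(resolvers).get("example.com").searchOrder); // 100000
    }
}
```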
…rop (#11239) Motivation: `PlatformDependent#normalizedOs()` already caches a normalized variant of the value of the `os.name` system property. Instead of inconsistently normalizing it in every case, use the utility method. Modifications: - `PlatformDependent`: `isWindows0()` and `isOsx0()` use `NORMALIZED_OS`; - `PlatformDependent#normalizeOs(String)` defines `darwin` as `osx`; - `OpenSsl#loadTcNative()` does not require `equalsIgnoreCase` because `os` is already normalized; - Epoll and KQueue: `Native#loadNativeLibrary()` uses `normalizedOs()`; - Use a consistent `Locale.US` for lower case conversion of `os.name`; - `MacOSDnsServerAddressStreamProvider#loadNativeLibrary()` uses `PlatformDependent.isOsx()`; Result: Consistent approach for `os.name` parsing.
Motivation: Conscrypt does not correctly filter out unsupported TLS versions, which may lead to test failures. Related to google/conscrypt#1013 Modifications: - Bump up to the latest patch release - Add a workaround Result: No more test failures caused by Conscrypt
Motivation: TLSv1 and TLSv1.1 are considered insecure. Let's follow the JDK and disable these by default. Modifications: - Disable TLSv1 and TLSv1.1 by default when using OpenSSL. - Add unit tests Result: Use only strong TLS versions by default when using OpenSSL
Motivation: We should use the same maven cache for all builds so we can re-use as much of the downloaded maven dependencies as possible Modifications: - Just use the same cache for all Result: Hopefully be able to re-use most of the dependencies
#11248) Motivation: We should setup the caching so it will be able to use different restore keys and so almost never need to start from scratch Modifications: Adjust caching config to make use of different restore keys for maven caching but also docker caching Result: Better cache usage
Motivation: When trying to compile with Java 16 we should use adopt@1.16* Modifications: - Use adopt@1.16.0-1 - Upgrade to BlockHound 1.0.6 to be able to support Java 16 Result: Use correct Java version / flavor
Motivation: We had some println left in the test-classes. Modifications: Remove println usage Result: Cleanup
Motivation: We introduced the ability to offload certain operations to an executor that may take some time to complete. At the moment this is not enabled by default when using the OpenSSL based SSL provider. Let's enable it by default, as we have had this support for a while now and didn't see any issues yet. This will also make things less confusing and more consistent with the JDK based provider. Modifications: Use true as the default value for io.netty.handler.ssl.openssl.useTasks. Result: Offloading works with the OpenSSL based SSL provider as well by default
Motivation: Just use MAVEN_OPTS to setup all the timeouts etc for dependency downloads. This way we at least can be sure these are applied. Modifications: - Use MAVEN_OPTS - Remove ci profile - Remove unused settings.xml file - Always use ./mvnw Result: Build stability improvements
Motivation: `SelfSignedCertificate` creates certificate and private key files and stores them in a temporary directory. However, if the certificate uses a wildcard hostname that contains an asterisk *, e.g. `*.shieldblaze.com`, it'll throw an error because * is not a valid character in the file system. Modification: Replace the asterisk with 'x' Result: Fixes #11240
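A sketch of the rewrite (the helper name is hypothetical, not Netty's internal code): the asterisk is replaced before the FQDN is used to name the certificate and key files, since `*` is not a legal file name character on common file systems.

```java
public class CertFileNameDemo {
    // Hypothetical helper: make a wildcard FQDN safe to use as a file name.
    static String safeFileName(String fqdn) {
        return fqdn.replace('*', 'x');
    }

    public static void main(String[] args) {
        System.out.println(safeFileName("*.shieldblaze.com")); // x.shieldblaze.com
    }
}
```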
…le's entries for a hostname (#11246) Motivation: DefaultHostsFileEntriesResolver should provide all the hosts file's entries for a hostname when DnsNameResolver#resolveAll is used, as opposed to the current implementation where only the first entry is taken into consideration. Modification: - Add DefaultHostsFileEntriesResolver#addresses to provide all the hosts file's entries for a hostname - Add HostsFileEntriesProvider to provide all the hosts file's entries for a hostname and to keep backwards compatibility for HostsFileEntries and HostsFileParser - DnsNameResolver#resolveAll uses the new DefaultHostsFileEntriesResolver#addresses - BlockHound configuration: replace HostsFileParser#parse with HostsFileEntriesProvider$ParserImpl#parse as the latter does the parsing - Add junit tests Result: Fixes #10834
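The first-entry-versus-all-entries difference can be sketched like this (simplified, hypothetical types, not the actual resolver classes): `resolveAll` should see every address listed for a hostname, not just the first one.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class HostsEntriesDemo {
    private final Map<String, List<String>> entries = new HashMap<>();

    void add(String hostname, String address) {
        entries.computeIfAbsent(hostname, k -> new ArrayList<>()).add(address);
    }

    // Old behavior: only the first entry for the hostname.
    String address(String hostname) {
        List<String> all = entries.get(hostname);
        return all == null ? null : all.get(0);
    }

    // New behavior: all entries for the hostname.
    List<String> addresses(String hostname) {
        return entries.getOrDefault(hostname, Collections.emptyList());
    }

    public static void main(String[] args) {
        HostsEntriesDemo hosts = new HostsEntriesDemo();
        hosts.add("localhost", "127.0.0.1");
        hosts.add("localhost", "::1");
        System.out.println(hosts.addresses("localhost").size()); // 2
    }
}
```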
Motivation: There is a typo in the javadocs Modification: correct grammar mistakes Result: cleanup
Motivation: ParserImpl is stateless and so we can use the same instance multiple times Modifications: - Make constructor private - Return the same instance all the time Result: Less object creation
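The pattern can be sketched as follows (the class name is hypothetical, not Netty's actual `ParserImpl`): since the parser holds no mutable state, a private constructor plus one cached instance is safe to share and removes per-call object creation.

```java
public final class StatelessParserDemo {
    private static final StatelessParserDemo INSTANCE = new StatelessParserDemo();

    private StatelessParserDemo() {
        // private: callers must go through getInstance()
    }

    public static StatelessParserDemo getInstance() {
        return INSTANCE; // the same instance every time
    }

    // Stateless operation: safe to call concurrently on the shared instance.
    public int parse(String line) {
        return Integer.parseInt(line.trim());
    }

    public static void main(String[] args) {
        System.out.println(getInstance() == getInstance()); // true
        System.out.println(getInstance().parse(" 42 "));    // 42
    }
}
```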
Motivation: After the release was done we need to also copy the apidocs and xref to the netty-website Modifications: Add script that does the copy etc Result: Less manual steps to remember
Motivation: The current initialization of Slf4JLoggerFactory does not use a singleton. Modification: Use Slf4JLoggerFactory.INSTANCE to initialize Slf4JLoggerFactory. Result: The Slf4JLoggerFactory instance is now a singleton.
Motivation: When changing the netty-all artifact to not include any sources we also removed the ability to generate the javadocs / xref files for our website Modifications: - Add new profile which will generate the files - Add script which generates all the files and copy these over to the netty-website Result: Easier to generate files for website
Motivation: Cannot load the native library for DNS resolution on MacOS. The exception below is observed:

18:02:43.453 [Test worker] ERROR i.n.r.d.DnsServerAddressStreamProviders - Unable to load io.netty.resolver.dns.macos.MacOSDnsServerAddressStreamProvider, fallback to system defaults. This may result in incorrect DNS resolutions on MacOS.
java.lang.reflect.InvocationTargetException: null
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
	at io.netty.resolver.dns.DnsServerAddressStreamProviders.<clinit>(DnsServerAddressStreamProviders.java:64)
	at io.netty.resolver.dns.DnsNameResolverBuilder.<init>(DnsNameResolverBuilder.java:60)
	at reactor.netty.transport.NameResolverProvider.newNameResolverGroup(NameResolverProvider.java:432)
	...
Caused by: java.lang.UnsatisfiedLinkError: io.netty.resolver.dns.macos.MacOSDnsServerAddressStreamProvider.resolvers()[Lio/netty/resolver/dns/macos/DnsResolver;
	at io.netty.resolver.dns.macos.MacOSDnsServerAddressStreamProvider.resolvers(Native Method)
	at io.netty.resolver.dns.macos.MacOSDnsServerAddressStreamProvider.retrieveCurrentMappings(MacOSDnsServerAddressStreamProvider.java:127)
	at io.netty.resolver.dns.macos.MacOSDnsServerAddressStreamProvider.<init>(MacOSDnsServerAddressStreamProvider.java:123)

This is a regression introduced with #11239. Modification: When checking for the OS, an exception must be thrown when the OS is not MacOS. Result: The native library for DNS resolution on MacOS can be loaded.
Motivation: A user might want to handle a certain HTTP upgrade request differently than what `HttpServerUpgradeHandler` does by default. For example, a user could let `HttpServerUpgradeHandler` handle HTTP/2 upgrades but not WebSocket upgrades. Modifications: - Added `HttpServerUpgradeHandler.isUpgrade(HttpRequest)` so a user can tell `HttpServerUpgradeHandler` to pass the request as it is to the next handler. Result: - A user can handle a certain upgrade request specially.
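The hook can be sketched with plain types (hypothetical, not Netty's actual `HttpServerUpgradeHandler` API, which receives an `HttpRequest`): a subclass overrides the predicate to decide per-request whether the upgrade handler should act, and returns false to pass the request through untouched.

```java
public class UpgradeHandlerDemo {
    static class UpgradeHandler {
        // Default: handle any request that carries an Upgrade header.
        boolean isUpgrade(String upgradeHeader) {
            return upgradeHeader != null;
        }
    }

    // Handle only HTTP/2 cleartext upgrades; WebSocket upgrades pass
    // through to the next handler in the pipeline.
    static class Http2OnlyUpgradeHandler extends UpgradeHandler {
        @Override
        boolean isUpgrade(String upgradeHeader) {
            return "h2c".equals(upgradeHeader);
        }
    }

    public static void main(String[] args) {
        UpgradeHandler handler = new Http2OnlyUpgradeHandler();
        System.out.println(handler.isUpgrade("h2c"));       // true
        System.out.println(handler.isUpgrade("websocket")); // false
    }
}
```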
Oops! See #11267 instead. 😄