
HTTP2 connections not used efficiently #2262

Closed
kitkars opened this issue Jun 1, 2022 · 10 comments
Labels: type/bug A general bug

kitkars commented Jun 1, 2022

With HTTP/2, a single connection is enough for sending multiple requests to a remote host. Unlike HTTP/1.1, HTTP/2 does not wait for the previous request to complete before sending the next one.

Server-Controller:

    @GetMapping("echo")
    public Mono<String> echo(){
        return Mono.just("echo")
                   .delayElement(Duration.ofSeconds(1));
    }

Server-Config:

    @Bean
    public NettyServerCustomizer customizer() throws CertificateException {
        return httpServer -> httpServer
                .protocol(HttpProtocol.H2C);
    }

Client:

    private HttpClient httpClient = HttpClient.create(ConnectionProvider.builder("http2").maxConnections(1).build())
                                              .protocol(HttpProtocol.H2C);
    private WebClient client = WebClient.builder()
                                        .baseUrl("http://localhost:8080/echo")
                                        .clientConnector(new ReactorClientHttpConnector(httpClient))
                                        .build();
                                        
    @Test
    public void test() {
        var start = System.currentTimeMillis();
        Flux<String> flux = Flux.range(1, 10)
                                .flatMap(i -> client.get().retrieve().bodyToMono(String.class))
                                .doFinally(s -> System.out.println("Total Time : " + (System.currentTimeMillis() - start) + " ms"));
        StepVerifier.create(flux.then())
                    .verifyComplete();
    }

Expected Behavior:
All 10 requests should complete in ~1 second.

Actual Behavior:
All 10 requests get completed in ~5 seconds. (To be honest, I have no clue why 5 seconds; if we sent the requests one after another, it should have taken 10 seconds.)

I tried with maxConcurrentStreams(10) - no luck; same behavior.
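For what it's worth, a back-of-the-envelope way to reason about the observed ~5 seconds: if the connection effectively serves only 2 streams at a time, the 10 one-second requests complete in 5 waves. A hypothetical sketch of that arithmetic (WaveTiming is made up for illustration; this is not Reactor Netty code):

```java
// Hypothetical model (illustration only): requests run in waves whose size
// is the number of streams the connection actually serves concurrently.
public final class WaveTiming {
    static long totalSeconds(int requests, int concurrency, long perRequestSeconds) {
        long waves = (requests + concurrency - 1L) / concurrency; // ceiling division
        return waves * perRequestSeconds;
    }

    public static void main(String[] args) {
        // 2 effective streams: 10 requests -> 5 waves -> ~5 s (the observed time).
        System.out.println(WaveTiming.totalSeconds(10, 2, 1));  // prints 5
        // True multiplexing of all 10 requests at once -> ~1 s (the expected time).
        System.out.println(WaveTiming.totalSeconds(10, 10, 1)); // prints 1
    }
}
```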

@kitkars kitkars added status/need-triage A new issue that still need to be evaluated as a whole type/bug A general bug labels Jun 1, 2022
@violetagg violetagg removed the status/need-triage A new issue that still need to be evaluated as a whole label Jun 1, 2022
@violetagg (Member)

@kitkars When you use one connection, this means that you are also running with one thread. Enable Reactor Netty logging (logging.level.reactor.netty=debug) and take a look at how the tasks are executed by the single event loop.

@violetagg violetagg added the for/user-attention This issue needs user attention (feedback, rework, etc...) label Jun 1, 2022
@violetagg (Member)

@kitkars Also, I have a PR with an improvement in the area of the HTTP/2 connection pool, so it should be better with it.
Would you like to test it? #2257

kitkars (Author) commented Jun 2, 2022

@violetagg First of all, thanks for helping everyone via SO / Gitter / GitHub.

I have been using gRPC for a while and I love it. It might not be an apples-to-apples comparison, but the concept should be more or less the same: gRPC also uses Netty + HTTP/2 behind the scenes, and it can send multiple independent requests over a single persistent HTTP/2 connection (much like RSocket does).

So I assumed that with WebFlux, HTTP/2, and a persistent connection, I should be able to send multiple requests as well. If you are saying I need to increase the number of connections - for example, open 10 HTTP/2 connections to make 10 parallel requests - then I do not see the real benefit of HTTP/2, because that is what HTTP/1.1 does anyway. HTTP/2 removes head-of-line blocking via multiplexing over a single connection. So I am not sure how the thread count comes into this, as things are non-blocking.

Please correct me if my understanding is totally incorrect, and tell me what I should expect from WebFlux with HTTP/2.

Regarding the PR, I cloned it and tried to build, but I am facing some unit test issues. Let me work on it and get back.

Thanks again for the quick response.

@violetagg (Member)

Regarding the PR, I cloned and tried to build. There were some unit tests issues I am facing. Let me work on it and get back.

Use ./gradlew publishToMavenLocal so that you just install the snapshot in your local repository, and give it a try.

kitkars (Author) commented Jun 3, 2022

Thanks @violetagg.

It provides much much better performance now. 👍

I noticed one weird thing and would appreciate it if you could clarify. The performance is better if I use Flux.range(1, 10_000); if I use Flux.range(1, 100), it is extremely slow - only 2 requests get completed at a time.

My dependencies:

(screenshot of project dependencies omitted)

Server-Controller:

    @GetMapping("echo/{id}")
    public Mono<String> echo(@PathVariable int id) {
        return Mono.just("echo " + id)
                   .doFirst(() -> System.out.println("Received : " + id))
                   .doFinally(s -> System.out.println("Responded : " + id))
                   .delayElement(Duration.ofSeconds(1));
    }

Server Config:

    @Bean
    public NettyServerCustomizer customizer() {
        return httpServer -> httpServer
                .protocol(HttpProtocol.H2C)
                .http2Settings(b -> b.maxConcurrentStreams(100));
    }

Client:

    private HttpClient httpClient = HttpClient.create(ConnectionProvider.builder("http2")
                                                                        .maxConnections(1).build())
                                              .protocol(HttpProtocol.H2C)
                                              .http2Settings(b -> b.maxConcurrentStreams(10));
    private WebClient client = WebClient.builder()
                                        .baseUrl("http://localhost:8080/echo/")
                                        .clientConnector(new ReactorClientHttpConnector(httpClient))
                                        .build();

    @Test
    public void test() {
        Flux<String> flux = Flux.range(1, 10)
                                .flatMap(i -> client.get()
                                                    .uri("{id}", i)
                                                    .retrieve()
                                                    .bodyToMono(String.class)
                                                    .doOnNext(r -> System.out.println(LocalDateTime.now() + ": Received : " + r)));
        StepVerifier.create(flux.then())
                    .verifyComplete();
    }

If you look at the server log, I am not sure what terminating the channel after every 2 responses means (where does this 2 come from?).

Server side log:

Received : 2
Responded : 2
2022-06-03 07:59:29.815 DEBUG 36472 --- [ctor-http-nio-2] r.n.http.server.HttpServerOperations     : [f0e2587b/2-1, L:/127.0.0.1:8080 ! R:/127.0.0.1:53362] Last HTTP packet was sent, terminating the channel
...
...
Received : 4
Responded : 4
2022-06-03 07:59:30.883 DEBUG 36472 --- [ctor-http-nio-2] r.n.http.server.HttpServerOperations     : [f0e2587b/3-1, L:/127.0.0.1:8080 ! R:/127.0.0.1:53362] Last HTTP packet was sent, terminating the channel

Client side log:
Even though we can have 100 active streams, I am able to achieve only 2, as the server seems to keep closing the channel.

{uri=http://localhost:8080/echo/2, method=GET}
2022-06-03 07:59:28.600 DEBUG 36498 --- [ctor-http-nio-2] r.n.http.client.Http2ConnectionProvider  : [a5af7766/2-1, L:/127.0.0.1:53362 - R:localhost/127.0.0.1:8080] Stream opened, now: 2 active streams and 100 max active streams.

Attached the log files for your reference.

netty-http2-client-log.txt
netty-http2-server-log.txt

@violetagg violetagg self-assigned this Jun 6, 2022
@violetagg violetagg removed the for/user-attention This issue needs user attention (feedback, rework, etc...) label Jun 6, 2022
@violetagg violetagg added this to the 1.0.20 milestone Jun 6, 2022
violetagg (Member) commented Jun 7, 2022

@kitkars I updated the PR #2257 and it is now in its final state.

Thanks for the example that you provided. However, I need to clarify some things:

The API below configures the HTTP/2 initial settings: .http2Settings(b -> b.maxConcurrentStreams(N)) configures SETTINGS_MAX_CONCURRENT_STREAMS. According to the specification

https://datatracker.ietf.org/doc/html/rfc7540#section-5.1.2

A peer can limit the number of concurrently active streams using the
   SETTINGS_MAX_CONCURRENT_STREAMS parameter (see Section 6.5.2, https://datatracker.ietf.org/doc/html/rfc7540#section-6.5.2) within
   a SETTINGS frame.  The maximum concurrent streams setting is specific
   to each endpoint and applies only to the peer that receives the
   setting.  That is, clients specify the maximum number of concurrent
   streams the server can initiate, and servers specify the maximum
   number of concurrent streams the client can initiate.

On the server you have

.http2Settings(b -> b.maxConcurrentStreams(100)), which means that the client can open a maximum of 100 concurrent streams.

On the client you have

.http2Settings(b -> b.maxConcurrentStreams(10)), which means that the server (via server push) can open a maximum of 10 concurrent streams. However, Reactor Netty does not support server push.

If your intention was to further limit the streams that the client can open, then with the PR mentioned above we are introducing a new API so that you can do this:

reactor.netty.http.client.Http2AllocationStrategy
...
/**
 * Configures the maximum number of the concurrent streams that can be opened to the remote peer.
 * When evaluating how many streams can be opened to the remote peer,
 * the minimum of this configuration and the remote peer configuration is taken.
 * Default to {@code -1} - use always the remote peer configuration.
 *
 * @param maxConcurrentStreams the maximum number of the concurrent streams that can be opened to the remote peer
 * @return {@code this}
 */
Builder maxConcurrentStreams(long maxConcurrentStreams);
...

Or the ConnectionProvider configuration should look like this

Http2AllocationStrategy strategy = Http2AllocationStrategy.builder().maxConnections(1).maxConcurrentStreams(10).build();
ConnectionProvider.builder("http2")
		.allocationStrategy(strategy)
		.build()
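Putting the pieces together, a complete client wiring with the new strategy might look like the following sketch (the class name Http2ClientConfig is made up for illustration; it assumes the 1.0.20-SNAPSHOT Http2AllocationStrategy API described above):

```java
import reactor.netty.http.HttpProtocol;
import reactor.netty.http.client.Http2AllocationStrategy;
import reactor.netty.http.client.HttpClient;
import reactor.netty.resources.ConnectionProvider;

public final class Http2ClientConfig {

    static HttpClient build() {
        // One HTTP/2 connection, with at most 10 concurrent streams opened on it
        // (the minimum of this value and the remote peer's setting is taken).
        Http2AllocationStrategy strategy = Http2AllocationStrategy.builder()
                .maxConnections(1)
                .maxConcurrentStreams(10)
                .build();

        ConnectionProvider provider = ConnectionProvider.builder("http2")
                .allocationStrategy(strategy)
                .build();

        return HttpClient.create(provider)
                         .protocol(HttpProtocol.H2C);
    }
}
```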

Can you please test with the final changes, and can you please clarify what your original intention was with the .http2Settings(b -> b.maxConcurrentStreams(10)) setting on the client?

kitkars (Author) commented Jun 7, 2022

@violetagg

Your last PR update fixes all the problems I was facing. 🥇 👍


My setup:

  • The server takes 3 seconds to process each request (using delayElement(Duration.ofSeconds(3))).
  • The client sets up 1 HTTP connection to the server (.maxConnections(1)).

My expectation was that whatever requests the client sends, they should all complete in roughly ~3 seconds.

I have been playing with this for some time. To quickly summarize:

Scenarios:

  • Client sends 100 requests in parallel (Flux.range(1, 100)) - takes ~3 seconds
  • Client sends 10K requests in parallel (Flux.range(1, 10000), with flatMap concurrency set to 10000) - takes ~4 seconds (pretty good)

Regarding your question about my original intention behind setting .http2Settings(b -> b.maxConcurrentStreams(10)): I had assumed that the client could further limit the server's maxConcurrentStreams from its side. I am not sure of the exact use case, but I felt it would be a kind of client-side rate limiting, or a way of conveying the client's ability to handle the response processing rate. You seem to have gotten it right and handled it via the new API, which works great. 👍

But this is where I find it a little bit confusing (from a user's perspective): why do we have maxConnections in multiple places?

ConnectionProvider.builder("test")
                  .maxConnections(1)
                  .allocationStrategy(Http2AllocationStrategy.builder()
                                                             .maxConnections(1)
                                                             .build())
                  .build()

@violetagg (Member)

@kitkars

But this is where I find it little bit confusing (from user's perspective). That is - why do we have maxConnections in multiple places?

If you provide an allocation strategy, then you do not need to configure maxConnections outside of it.

I pointed this out in the javadoc, but if you think we need more clarification, please tell us.

/**
 * Set the options to use for configuring {@link ConnectionProvider} maximum connections per connection pool.
-> * This is a pre-made allocation strategy where only max connections is specified.
-> * Custom allocation strategies can be provided via {@link #allocationStrategy(AllocationStrategy)}.
 * Default to {@link #DEFAULT_POOL_MAX_CONNECTIONS}.
 *
 * @param maxConnections the maximum number of connections (per connection pool) before start pending
 * @return {@literal this}
 * @throws IllegalArgumentException if maxConnections is negative
 */
public final SPEC maxConnections(int maxConnections) {
/**
 * Limits in how many connections can be allocated and managed by the pool are driven by the
-> * provided {@link AllocationStrategy}. This is a customization escape hatch that replaces the last
-> * configured strategy, but most cases should be covered by the {@link #maxConnections()}
-> * pre-made allocation strategy.
 *
 * @param allocationStrategy the {@link AllocationStrategy} to use
 * @return {@literal this}
 * @see #maxConnections()
 * @since 1.0.20
 */
public final SPEC allocationStrategy(AllocationStrategy<?> allocationStrategy) {

kitkars (Author) commented Jun 7, 2022

Yeah, that would help.
Thanks @violetagg for your prompt responses and hard work to make other developers' (like me) lives easier.
Please close this issue.
I can't wait to work with the final release version.

@violetagg violetagg linked a pull request Jun 8, 2022 that will close this issue
violetagg added a commit that referenced this issue Jun 8, 2022
Cache Http2FrameCodec/Http2MultiplexHandler/H2CUpgradeHandler context.
Obtain the negotiated application level protocol once.

Related to #2151 and #2262
violetagg added a commit that referenced this issue Jun 8, 2022
violetagg added a commit that referenced this issue Jun 8, 2022
violetagg added a commit that referenced this issue Jun 8, 2022
Http2StreamFrameToHttpObjectCodec can be shared
Http2StreamBridgeClientHandler can be shared

Related to #2151 and #2262
violetagg added a commit that referenced this issue Jun 8, 2022
@violetagg (Member)

@kitkars Thanks for testing the changes. Fix #2257 is available in 1.0.20-SNAPSHOT.
