Too much time taken to send a DataFrame #4651
Please report what Jetty version you are using. The logs you show are incomplete, or important bits have been redacted, or you are using an old Jetty version.
Very likely you are hitting the session flow control window.
Thanks for the reply. I have attached the logs for a complete stream and also the MBean attributes. We can also see that only 2 of the `HttpClient` threads are being used; the rest are either in RUNNABLE or TIMED_WAITING state. So I think we are unable to make the Jetty client use all its threads.
Jetty version: 9.4.24.v20191120
@prateekkohli2112 you have not reported the full DEBUG logs, so it's difficult to say what's going on. Don't remove lines from the logs, or we will be unable to understand them. One log line that is present is the following:
It says that the session flow control window is down to only 546 bytes. Also, this log line:
hints that the server told the client that the session flow control window is the default of 65535 bytes, which is really small. In summary, the server is configured with too small a session flow control window, and it is not reading fast enough, so the client is forced to slow down the sending of DATA frames.
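The flow control window bounds throughput directly: at best, one full window of DATA can be in flight per round trip. A minimal sketch of that arithmetic, assuming a hypothetical 50 ms round-trip time (the actual RTT of this deployment is not stated in the thread):

```java
// Upper bound on HTTP/2 session throughput imposed by the flow control
// window: at most one window of DATA per round trip.
public class FlowControlBound {
    public static void main(String[] args) {
        double rttSeconds = 0.050;      // assumed RTT, not from this thread
        int defaultWindow = 65_535;     // HTTP/2 default (RFC 7540)
        int enlargedWindow = 655_350;   // value tried later in this thread

        double defaultMBps = defaultWindow / rttSeconds / 1_000_000;
        double enlargedMBps = enlargedWindow / rttSeconds / 1_000_000;

        System.out.printf("default window:  %.2f MB/s%n", defaultMBps);  // ~1.31 MB/s
        System.out.printf("enlarged window: %.2f MB/s%n", enlargedMBps); // ~13.11 MB/s
    }
}
```

With a non-trivial RTT, the 65535-byte default caps a session at roughly a megabyte per second, which is why enlarging the window (or using more connections) matters.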
We are using a Tomcat server. We changed the session flow control window to 655350 bytes, but there is still no improvement. I am attaching the full logs and tcpdump; the tcpdumps were taken on port 1091.
What does that mean? No improvement in what, exactly? Are you taking into account pauses due to GC?
From your latest logs I don't see any slowdown in sending DATA frames.
Below is our use case:
As the load on the first Tomcat increases, we observe a dip in throughput. We are investigating this issue and looking at possible reasons for low throughput in Jetty. We are looking into Tomcat as well, but wanted to know if Jetty might be causing some back pressure.
It's unlikely that Jetty is causing a throughput issue, since its HTTP/2 implementation powers other projects that require very high throughput and would have noticed immediately if there were a problem. Have you tried Jetty as a server?
Ours is a legacy application in which we already use a Tomcat server and cannot change it. We have kept the default values for max requests queued per destination and max connections, and we are creating a single client for all the traffic in Jetty, per the HTTP/2 RFC recommendation to keep only a single connection open to one host and port. We have one more application in which we create multiple clients to the same destination and observe very high throughput. So, even though it is recommended to open a single connection per destination, have you observed any use cases in which creating multiple clients to a single destination is preferred for throughput efficiency?
A single connection per destination is only good for browsers (maybe). If you are writing a proxy, a load test, etc., you don't want to use a single connection, because you will be severely limited by the session flow control window. Don't configure Jetty's `HttpClient` to always use just one connection.
We have already set `maxConnectionsPerDestination` to 64 and `maxRequestsQueuedPerDestination` to 1024, and we are using one `HttpClient` instance. We have observed that as we increase the number of `HttpClient` instances, our throughput also increases.
This is strange. If that's the case, you have something configured wrongly. We have used a single `HttpClient` in very high-throughput scenarios without issues.
No, we are only expecting around 10,000 requests/s, but the throughput we are getting is only 2-3k.
You have not given enough information for us to help you more, sorry. What's the configuration of your `HttpClient`?
This is the configuration we are using:
The way connections are used in HTTP/2 is that one connection is opened and used until the server's `max_concurrent_streams` limit is reached; only then is a new connection opened. If your client sends requests from a single thread sequentially, then only one connection will ever be opened (and only one stream will be active at a time). Jetty's `HttpClient` works this way.
I am attaching the JMX output below. Only 1 HttpConnection is made in our case, per the JMX output. We have also observed that in Jetty 9.4.8 multiple connections were established at the start (i.e. it did not wait for `max_concurrent_streams` to reach its max value), but only 1 of them was being used to send the requests. In 9.4.24, however, only 1 connection is established at the start, and in our case no new connections are established. We are sending requests and waiting for the response.
This was issue #2293 that we fixed.
If you are waiting for the response before sending the next request, then you only ever have 1 outstanding request at a time, and therefore just 1 HTTP/2 stream, and therefore you will never open more than 1 connection, because you will never reach `max_concurrent_streams`.
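A small dependency-free sketch of this connection-opening rule. The `max_concurrent_streams` value of 100 is an assumed figure for illustration, not taken from this thread:

```java
// How many connections a client needs when it opens a new connection
// only after max_concurrent_streams is exhausted on the existing ones.
public class ConnectionCount {
    static int connectionsNeeded(int concurrentStreams, int maxConcurrentStreams) {
        // At least one connection; one more per full batch of streams.
        return Math.max(1, (int) Math.ceil((double) concurrentStreams / maxConcurrentStreams));
    }

    public static void main(String[] args) {
        int maxConcurrentStreams = 100; // assumed server setting
        // Synchronous request/response: only 1 stream in flight at a time.
        System.out.println(connectionsNeeded(1, maxConcurrentStreams));   // 1
        // 50 blocking threads: at most 50 concurrent streams.
        System.out.println(connectionsNeeded(50, maxConcurrentStreams));  // 1
        // Only past 100 concurrent streams would a 2nd connection open.
        System.out.println(connectionsNeeded(150, maxConcurrentStreams)); // 2
    }
}
```

This is why a fully synchronous client with fewer threads than `max_concurrent_streams` never opens a second connection.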
Due to a design constraint we are sending synchronous requests to the Jetty client using 50 threads. Even if we used Jetty's asynchronous API, we would have to use synchronous listeners in our design.
50 threads means at most 50 concurrent requests/streams.
You are not measuring throughput, but latency. The number you are getting does not represent the throughput (as in the max number of requests/s) of your system. If you are interested, you can contact Webtide for commercial support; we have lots of experience in load testing and in HTTP/2.
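The distinction can be made concrete with Little's law: with N blocking threads, throughput is capped at N divided by the average request latency. The 20 ms latency below is an assumed figure, chosen only to illustrate how a 50-thread synchronous client lands in the observed 2-3k requests/s range:

```java
// Little's law for a synchronous client: throughput = threads / latency.
public class LittlesLaw {
    static double throughputPerSecond(int threads, double latencySeconds) {
        return threads / latencySeconds;
    }

    public static void main(String[] args) {
        // 50 blocking threads, assumed 20 ms average latency per request.
        System.out.println(throughputPerSecond(50, 0.020)); // 2500.0 requests/s

        // Concurrency needed to reach 10,000 requests/s at that latency.
        System.out.println(10_000 * 0.020); // 200.0 concurrent requests
    }
}
```

Under these assumptions, reaching 10,000 requests/s would require about 200 concurrent requests, four times the 50 threads in use, regardless of how fast the HTTP/2 stack itself is.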
Thanks for the help, we will surely consider this. Just one question: is it possible to open static connections to an HTTP/2 server, i.e. open 2-3 connections while creating the `HttpClient` object, before calling `request.send()`, and then use those connections to send all the subsequent requests?
I am using the below API:

```java
HTTP2Client jettHttp2Client = new HTTP2Client();
jettHttp2Client.setSelectors(1);
HttpClientTransportOverHTTP2 httpClientTransportOverHTTP2 =
    new HttpClientTransportOverHTTP2(jettHttp2Client);
RetryJettyHttpClient jettyHttpClient;
```

Is there any possibility I can open connections before sending the request, and then send all the requests on these statically opened connections?
There is a `RoundRobinConnectionPool` that you can configure on the transport:

```java
HttpClientTransportOverHTTP2 httpClientTransportOverHTTP2 =
    new HttpClientTransportOverHTTP2(jettHttp2Client);
httpClientTransportOverHTTP2.setConnectionPoolFactory(destination ->
    new RoundRobinConnectionPool(destination, N, destination));
```

where `N` is the max number of connections you want to open to that destination. If your numbers improve with the `RoundRobinConnectionPool`, that would confirm that the single connection was the limiting factor.
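The idea behind the round-robin pool can be shown with a dependency-free sketch (plain Java, no Jetty classes): requests are spread evenly over `N` connections, so each connection carries its own flow control window instead of all requests sharing one:

```java
// Plain-Java sketch of round-robin distribution over N connections.
// This is an illustration of the concept, not Jetty's implementation.
public class RoundRobinSketch {
    public static void main(String[] args) {
        int connections = 3; // the pool's max connections (N)
        int[] requestsPerConnection = new int[connections];

        for (int request = 0; request < 12; request++) {
            requestsPerConnection[request % connections]++; // round-robin pick
        }

        for (int count : requestsPerConnection) {
            System.out.println(count); // 4 requests on each connection
        }
    }
}
```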
Thanks, but it still did not improve. I have one different question.
I believe you are looking in the wrong place. Your initial problem was "slow" sending of DATA frames. From the DEBUG logs you provided, that was not the problem, or it was a GC issue, not an HTTP/2 issue.

You are complaining about a generic throughput problem, but I have yet to see a concrete issue about that. You are using a limited number of threads on the client, and there is no information about the client configuration (thread pool sizing, etc.) or whether the client sends content to the server (which would also be flow controlled).

There is currently no reliable way to configure the max multiplex number on the client side; you probably have to write your own `ConnectionPool`.

Have you looked into Tomcat to see whether it's the problem? Or your application code in the proxy?
Thanks for all the help. We were able to increase the performance by changing our logging configuration. We were using java.util.logging and thought that disabling those loggers would disable Jetty's logs as well. After setting the logging parameters, our performance improved significantly.
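The exact logging parameters used here were not shown in the thread. As a hypothetical example of the mechanism involved, Jetty 9.4's default `StdErrLog` can be configured via a `jetty-logging.properties` file on the classpath, raising the level so DEBUG output (which is very expensive at high request rates) is suppressed:

```properties
# Hypothetical jetty-logging.properties; the actual settings used in
# this thread were not shown. Raises Jetty's log level to WARN.
org.eclipse.jetty.util.log.class=org.eclipse.jetty.util.log.StdErrLog
org.eclipse.jetty.LEVEL=WARN
```

Leaving DEBUG logging enabled in a throughput test skews results badly, since every frame and flush is logged.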
@prateekkohli2112 glad it's working for you. Feel free to close the issue if it's resolved for you. |
Sure. I have one small question other than this (not related to this query). Is there any way I can close my connection after a certain stream count and open a new connection? Is there any way I can achieve this, maybe by setting a max stream ID count or any other way you can suggest?
@prateekkohli2112 there is no way to do this automatically right now. This information is exported via JMX. Why do you need it?
Actually, we are planning to put an L4 load balancer between the Jetty client and the HTTP server. The load balancer routes traffic based on established connections with the backend servers: it maintains a mapping between a client connection and a backend server connection, and every request from a particular client is routed on a specific connection. For dynamically scaling the backend servers, we want these client connections to have a limited age, to avoid starving a backend server of traffic. We were thinking of doing this based on a limited number of streams on a single connection, and that's why we need to set either the max number of local streams or an initial number from which stream IDs should start. Also, we were able to reach the Integer max limit for the number of streams on a connection: when the max Integer range is reached, the next request fails with "java.lang.IllegalArgumentException: Invalid stream id: -2147483647".
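The negative stream ID in that error follows directly from the HTTP/2 numbering rules: client-initiated stream IDs are odd (1, 3, 5, ...) and carried in a 31-bit field, so the last valid client stream ID is 2147483647 (`Integer.MAX_VALUE`), and incrementing past it overflows Java's `int` into a negative value. A small sketch:

```java
// Why the error shows -2147483647: incrementing the last valid
// client-initiated stream ID overflows Java's 32-bit int.
public class StreamIdOverflow {
    public static void main(String[] args) {
        int lastValidClientStreamId = Integer.MAX_VALUE; // 2147483647, odd
        int nextStreamId = lastValidClientStreamId + 2;  // int overflow
        System.out.println(nextStreamId); // -2147483647, as in the reported error

        // Client-initiated streams one connection can ever carry:
        // odd IDs from 1 to 2^31 - 1, i.e. 2^30 streams.
        System.out.println(1L << 30); // 1073741824
    }
}
```

So a single connection is exhausted after 2^30 client-initiated streams, at which point a new connection must be opened; there is no way to reuse or restart the ID space on the same connection.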
@prateekkohli2112 can you please open a new issue about this feature, i.e. setting a max number of requests per connection?
Below are Jetty logs for communication over one Jetty stream:
It seems that the time taken between sending the DataFrame and actually flushing it is too long.
Flushing time : 07:57:58.724
Sending time : 07:57:58.671
Difference : 53 ms
Is this the expected behavior, or do I need to configure any buffers to reduce this time?
Thanks
Prateek