Major websocket memory change in 9.4.36 #6974
Comments
Upgrade. Jetty 9.4.36 has a buffer aggregation bug fixed in PR #6056 that could be biting you. Please always upgrade to the newest version in your major version space (you are using 9.4.x, which means 9.4.44.v20210927 is your best choice for upgrade). BTW, version 9.4.36 is subject to several published security advisories - https://www.eclipse.org/jetty/security_reports.php
Sorry, I should have been clearer - all versions 9.4.36 and newer, including 9.4.44, show the same behavior for us. The security advisories are why we are trying to move forward from 9.4.18.
This should be upgraded as well.
I take it you did a memory dump?
Thanks again for your help - I have tried other JVMs, and I don't think that's the issue, but I will update to the latest next week to be sure. I am using YourKit and have taken many memory dumps. Here's a path view of the object just a few minutes after starting the server, with just a couple of clients connected. On older Jetty (<= 9.4.35) the HeapByteBuffer retained size sticks around 500 KB, regardless of how long we run. On 9.4.36 and newer it grows quite quickly - like I said, this example has only been running a couple of minutes. That all said... I upgraded to Jetty 10.0.6, and initial tests suggest it is fixed there, so maybe it's a moot point for us.
@lachlan-roberts can you take a look here too?
This looks like it would be related to the change from PR #5574, which was introduced in 9.4.36. @slipcon do you have a simple reproducer that I could run to debug this problem?
I think I am able to reproduce a similar issue in 9.4.44 that will lead to high heap memory usage with the following steps:
I think what's happening here is that since each new message is larger than the last one, the already allocated buffers in the pool are never big enough to be reused, so new, larger buffers keep being allocated.
We definitely need to have a max pool size turned on by default. However, I think we also need to consider having exponential buffer buckets on by default (see #6538), as there is no good reason a 200 KB message should use buffers from a different bucket than a 210 KB or 220 KB message. Linear buckets are just wasteful. @sbordet @lorban I remember that one of you had objections to the exponential buckets, so we only did them as a demo rather than as a core algorithm, but I can't remember what that reason was. What is the reason to use linear bucket sizes again? Why are we violating Weber's Law?
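For readers following along, here is a small self-contained sketch (not Jetty's actual pool code) of the two bucket-sizing strategies being debated: linear buckets round a requested size up to the next multiple of a fixed factor, while exponential buckets round up to the next power of two, so the 200 KB, 210 KB and 220 KB requests mentioned above all share one bucket.

```java
// Sketch only - illustrative bucket math, not Jetty's implementation.
final class BucketMath
{
    // Linear buckets: round up to the next multiple of 'factor' (e.g. 1024).
    static int linearBucketCapacity(int size, int factor)
    {
        return ((size + factor - 1) / factor) * factor;
    }

    // Exponential buckets: round up to the next power of two (assumes size >= 2).
    static int exponentialBucketCapacity(int size)
    {
        return Integer.highestOneBit(size - 1) << 1;
    }

    public static void main(String[] args)
    {
        for (int kb : new int[]{200, 210, 220})
        {
            int size = kb * 1024;
            System.out.printf("%d KB -> linear bucket %d bytes, exponential bucket %d bytes%n",
                kb, linearBucketCapacity(size, 1024), exponentialBucketCapacity(size));
        }
        // Linear: three distinct buckets (204800, 215040, 225280 bytes).
        // Exponential: all three requests share the 262144-byte (256 KB) bucket.
    }
}
```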
To add a bit of context - and I'm happy to try to gather specific data if anyone wants me to... unfortunately I don't have a simple reproducer example; our application is pretty complex. I can reproduce it easily enough in my test environment, however, and would be happy to enable debug logging or run with a test release if it could help narrow the problem down. Our Jetty application is the websocket server - the clients are JavaScript/browser based, so no Jetty is involved on the client side of the websocket. The messages are probably 99% or more sent from the server to the clients. The clients only send back an empty JSON object ("{}") as a heartbeat every few seconds. The messages from server to clients are a wide mix of sizes. Some are small, maybe a few hundred bytes. Others are quite large - maybe 150 KB. There is no reason that the messages would be growing significantly over time; I'd guess that within a few minutes a client would have seen a "normal" range of message sizes. In an operational system, the server may have 100 or more clients connected simultaneously - each client would receive similar messages, but not necessarily exactly the same, since the contents vary depending on a lot of factors specific to our application. Clearly when we test with the "bad" versions of Jetty for our usage, the heap grows faster when we have many clients connected - however I am able to see the problem (as above in YourKit) with only one client connected.
So looking through our buffer pool handling (mostly from Jetty 10, but perhaps also in 9.4), I have the following comments:
@gregw my objection to the exponential buffer buckets is the amount of "waste" they are going to introduce, which may have an impact on the minimal memory requirement one needs to serve a certain throughput. What I call waste is memory that is reserved but isn't used, so it can be at most 1023 bytes with linear buckets, and (next power of two) / 2 - 1 with exponential buckets. TLS is a prime example because it needs buffers that are just a few bytes over 16 KB, so the current impl uses a 17 KB buffer, "wasting" a bit less than 1 KB per request. If we were going to use 32 KB buffers, we would waste a bit less than 16 KB per request, meaning that we'd potentially need around twice the amount of RAM in buffers for serving the same throughput.
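To put rough numbers on the TLS example above (the exact request size here is an assumed illustration - 16 KB plus a small record overhead - not a value taken from Jetty's code):

```java
// Worked example of the "waste" argument above; the 25-byte overhead is an
// assumption for illustration only.
public class TlsBufferWaste
{
    public static void main(String[] args)
    {
        int needed = 16 * 1024 + 25;                        // 16409 bytes requested
        int linear = ((needed + 1023) / 1024) * 1024;       // 1 KB linear bucket -> 17408 bytes
        int exp = Integer.highestOneBit(needed - 1) << 1;   // power-of-two bucket -> 32768 bytes
        System.out.printf("needed=%d, linear=%d (waste %d), exponential=%d (waste %d)%n",
            needed, linear, linear - needed, exp, exp - needed);
        // Roughly: ~1 KB wasted per buffer with linear buckets vs ~16 KB with exponential.
    }
}
```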
Yep. So these problems make it hard to have only one BP in the system, as the two uses are incompatible and it is hard/impossible to safely mimic one BP as the other. Thus I think in the short term we need to expose configuration of both BP types, and then work to remove usage of the BBP in Jetty 12 so there is only one.
FWIW, we're likely seeing this issue as well, on Jetty 9.4.43. The JVM keeps growing in RSS memory, most of which ends up in zlib's deflateInit2 (i.e. it's neither in the heap, non-heap, nor JVM "native memory"). With our workload, we run out of memory within a week or two on 16 GB servers. Profiler outputs are attached, generated by libjemalloc's jeprof. One is with all compression extensions enabled (i.e. not explicitly disabled), while the other is with all three supported extensions disabled. For completeness' sake, we're running with:
I'm having trouble resolving the top-level symbols to see where java.util.zip.Deflater (and Inflater) is actually coming from. Any hints on that would be greatly appreciated. We're on Oracle JDK 8. Instead, I managed to dump a 6 GB memory region (our heap is fixed at 3 GB) from /proc/pid/smaps:
Dumped in chunks using gdb (otherwise it would segfault):
That memory consists mostly of uncompressed websocket payload, hence the suspicion that this issue is related.
@sveniu with this issue you should only be seeing a build-up in heap memory usage in the ByteBufferPool. Can you take a Jetty Server dump? You should then be able to see lines indicating the size and capacity of the InflaterPool and DeflaterPool.
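In case it helps, a server dump can typically be produced with something like the following (a sketch against the standard embedded-Jetty API; the port and where you trigger the dump are illustrative choices, not part of this thread):

```java
// Minimal sketch of producing a Jetty server dump; Server extends
// ContainerLifeCycle, so it can dump its whole component tree (including
// InflaterPool/DeflaterPool beans) to stderr or as a String.
import org.eclipse.jetty.server.Server;

public class ServerDumpExample
{
    public static void main(String[] args) throws Exception
    {
        Server server = new Server(8080); // illustrative port
        server.setDumpAfterStart(true);   // dump the component tree once startup completes
        server.start();

        // ...or on demand later (e.g. triggered by JMX or an admin endpoint):
        server.dumpStdErr();

        server.join();
    }
}
```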
@lachlan-roberts I've finally been able to produce the server dump. Here's the abridged dump, showing the InflaterPool and DeflaterPool info, with all other branches pruned.
Some context:
@sveniu thanks for the info. I have also opened #7078 as a speculative cause. It would also be useful to know some extra information, so can you post the whole dump if possible? Unfortunately the…
…7017)
- WebSocket should use the server ByteBufferPool if possible
- Fix various bugs in ByteBufferPool implementations
- Add heuristic for maxHeapMemory and maxDirectMemory
- Add dump for ByteBufferPools
- Add LogArrayByteBufferPool that does exponential scaling of bucket size
- ByteBufferPools should default to use the maxMemory heuristic
- Add module jetty-bytebufferpool-logarithmic

Signed-off-by: Lachlan Roberts <lachlan@webtide.com> Co-authored-by: Simone Bordet <simone.bordet@gmail.com>
If an allocation size of 0 was requested bucketFor would throw. Signed-off-by: Lachlan Roberts <lachlan@webtide.com>
Jetty version(s)
We've been using 9.4.18 for a long time, but are having trouble updating to the latest 9.4.x.
I've narrowed the problem down to starting in 9.4.36.
Java version/vendor
openjdk version "11.0.2" 2019-01-15
OS type/version
Seen on multiple OSes - macOS, Windows, and Linux.
Description
Our application has a websocket servlet, which creates a class that extends WebSocketAdapter and calls session.getRemote().sendString(data, this) to transmit JSON messages to the clients. The clients send data back to the server over the same websocket, but it is only minimal heartbeat/keepalive JSON objects.
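For concreteness, here is a minimal sketch of that kind of endpoint (class and method names are illustrative, not the actual application code), using the Jetty 9.4 WebSocketAdapter and the asynchronous sendString(String, WriteCallback) overload:

```java
// Illustrative sketch only: a Jetty 9.4 endpoint of the kind described above.
import org.eclipse.jetty.websocket.api.WebSocketAdapter;
import org.eclipse.jetty.websocket.api.WriteCallback;

public class JsonPushSocket extends WebSocketAdapter implements WriteCallback
{
    // Called by application code to push a JSON message to this client.
    public void push(String json)
    {
        if (isConnected())
            getRemote().sendString(json, this); // async send; 'this' is the WriteCallback
    }

    @Override
    public void onWebSocketText(String message)
    {
        // Clients only send tiny heartbeat objects like "{}"; nothing to do here.
    }

    @Override
    public void writeSuccess()
    {
    }

    @Override
    public void writeFailed(Throwable cause)
    {
        cause.printStackTrace();
    }
}
```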
Starting in 9.4.36, we see significant memory utilization very quickly - specifically in HeapByteBuffer objects - eventually leading to memory exhaustion and crashes.
Disabling the permessage-deflate extension on the websocket fixes the issue, but we'd like to keep compression of the JSON messages on. Our largest JSON messages are on the order of 200 KB. We are not doing any sort of Jetty memory tuning, and it worked great prior to 9.4.36.
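For reference, disabling permessage-deflate as described can be done in the websocket servlet's configure method, roughly like this (a sketch against the Jetty 9.4 servlet API; PushServlet and JsonPushSocket are illustrative names, the latter from the sketch above):

```java
import org.eclipse.jetty.websocket.servlet.WebSocketServlet;
import org.eclipse.jetty.websocket.servlet.WebSocketServletFactory;

// Illustrative servlet: unregistering the extension means permessage-deflate
// is never negotiated, trading message compression for lower buffer usage.
public class PushServlet extends WebSocketServlet
{
    @Override
    public void configure(WebSocketServletFactory factory)
    {
        factory.getExtensionFactory().unregister("permessage-deflate");
        factory.register(JsonPushSocket.class); // the endpoint sketched earlier
    }
}
```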
I suspect this change may be relevant: #5499
Please let me know if there is any more information I can provide or steps to try.
How to reproduce?