
Why does WebSocketEndPoint not implement MessageHandler.Partial&lt;String&gt;? #1171

Open
kavishme opened this issue Apr 20, 2022 · 4 comments


kavishme commented Apr 20, 2022

CometD version(s): 3.x and later, up to the latest

Java version & vendor (use: java -version): N/A

Question: when WebSocketTransport adds endpoints to the container, it always creates an endpoint configuration that implements only MessageHandler.Whole.

This caps every message at the configured buffer size. For example, with the default Tomcat configuration I am not able to send messages larger than 8 KB.

I have checked back to version 3, and it seems CometD has always had this limitation by using MessageHandler.Whole. I need help understanding why this has been the case.
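For context, the jakarta.websocket API's MessageHandler.Partial&lt;String&gt; delivers a text message in chunks via onMessage(String partialMessage, boolean last), so a handler can assemble chunks while enforcing its own message-size limit, independent of the container's per-frame buffer size. The following is a minimal, self-contained sketch of that assembly logic only — the class and its names are illustrative, not CometD or Tomcat code:

```java
// Illustrative sketch: assembling text chunks the way a
// MessageHandler.Partial<String> callback would, while enforcing a
// maximum total message size that is independent of the container's
// per-frame buffer size. Hypothetical class; not CometD code.
public class PartialAssembler {
    private final int maxMessageSize;
    private final StringBuilder buffer = new StringBuilder();

    public PartialAssembler(int maxMessageSize) {
        this.maxMessageSize = maxMessageSize;
    }

    // Mirrors MessageHandler.Partial<String>.onMessage(String, boolean):
    // returns the complete message when 'last' is true, null otherwise.
    public String onMessage(String part, boolean last) {
        if (buffer.length() + part.length() > maxMessageSize) {
            buffer.setLength(0); // discard the oversized message
            throw new IllegalStateException(
                "Message exceeds max size " + maxMessageSize);
        }
        buffer.append(part);
        if (last) {
            String whole = buffer.toString();
            buffer.setLength(0); // reset for the next message
            return whole;
        }
        return null; // more chunks expected
    }
}
```

As the discussion below notes, this does not remove the need for a server-side limit — it only decouples that limit from the network buffer size.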

sbordet (Member) commented Apr 20, 2022

The server has to protect itself from large messages that will consume memory, so a limit must be set no matter whether using Whole or Partial.

Messaging is typically for small messages to be sent around to possibly a large number of recipients, so large messages are typically an anti-pattern.

Why do you need to send messages larger than 8 KiB?


rameshmurthy commented Apr 20, 2022

I ran into a similar use case before.

Yes, protecting the instance from large messages is needed, but currently the buffer size configuration directly controls the maximum size of the message a client can send. An independent control would be useful: for example, I could specify a buffer size of 4096 and a max message size of 8192.

In my use case, we want to reduce the heap memory Tomcat uses per WebSocket connection, so we were thinking of changing the buffer size from 8192 to 4096. However, that would prevent clients from sending any message larger than 4096 bytes, which would break existing clients of our product that send messages of up to 8192 bytes. In most cases the messages are smaller, but any client sending 8192 bytes today would be affected by the buffer size change.

As part of my research I found the enhancement request below in Tomcat. Either reducing the buffer sizes or offering a buffer pool would help reduce the memory consumption of a WebSocket connection.

https://bz.apache.org/bugzilla/show_bug.cgi?id=65809

I'm using a WebSocket service that manages a large number of open connections, sending messages back to clients.
The number of concurrent messages can be negligible compared to the number of active connections.
In such a situation I've found the running Tomcat process can use approx. 100 KB of memory per open WebSocket connection.
Looking at the source code I've found some classes with buffers allocated inside the constructor.

With the default value of 8192 for org.apache.tomcat.websocket.DEFAULT_BUFFER_SIZE:

In org.apache.tomcat.websocket.WsFrameBase:

  • inputBuffer: 8 KB
  • messageBufferBinary: 8 KB
  • messageBufferText: 16 KB

In org.apache.tomcat.websocket.WsRemoteEndpointImplBase:

  • outputBuffer: 8 KB
  • encoderBuffer: 8 KB

With the above buffers this sums up to 48 KB of memory per WebSocket connection.
Changing the allocation strategy to on-demand buffer allocation could, in the above situation, reduce the memory footprint by 480 MB for 10K active connections.

The buffers could also be pooled by a pool manager, reducing allocation costs.
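The per-connection total and the fleet-wide savings quoted in the bug report can be checked with quick arithmetic, using the buffer sizes listed above (the report uses decimal megabytes, i.e. 1 MB = 1000 KB):

```java
// Quick check of the Tomcat bug report's numbers: five per-connection
// buffers summing to 48 KB, and the projected savings for 10K connections.
public class BufferMath {
    public static int perConnectionKB() {
        int inputBuffer = 8;          // WsFrameBase
        int messageBufferBinary = 8;  // WsFrameBase
        int messageBufferText = 16;   // WsFrameBase
        int outputBuffer = 8;         // WsRemoteEndpointImplBase
        int encoderBuffer = 8;        // WsRemoteEndpointImplBase
        return inputBuffer + messageBufferBinary + messageBufferText
                + outputBuffer + encoderBuffer; // 48 KB
    }

    public static int fleetMB(int connections) {
        // Decimal MB, matching the bug report's "480MB for 10K connections".
        return perConnectionKB() * connections / 1000;
    }
}
```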

kavishme (Author) commented
My use case is to reduce the buffer sizes while still supporting a few cases where the message size could be >= 8 KB. Reducing the buffer size helps reduce the application's memory consumption.

sbordet (Member) commented May 1, 2022

At the CometD level you can configure the max message size.

Network buffer sizes are implementation details, so you configure Tomcat (not CometD) or Jetty (not CometD), depending on the implementation you use.
Tomcat and Jetty may have different options to configure their internals, so it would not make sense to have such a configuration in CometD.

Feel free to reduce the buffer sizes in your implementation of choice, but I don't think it should be a configuration option in CometD.
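For reference, configuring the max message size at the CometD level is done through an init parameter on the CometD servlet. A web.xml sketch follows — the parameter name ws.maxMessageSize is taken from CometD's WebSocket transport options, so verify it against the documentation for your CometD version:

```xml
<!-- Sketch: raising the CometD WebSocket max message size to 16 KB.
     Verify the parameter name against your CometD version's docs. -->
<servlet>
    <servlet-name>cometd</servlet-name>
    <servlet-class>org.cometd.server.CometDServlet</servlet-class>
    <init-param>
        <param-name>ws.maxMessageSize</param-name>
        <param-value>16384</param-value>
    </init-param>
</servlet>
```

Container-level buffer sizes (e.g. Tomcat's DEFAULT_BUFFER_SIZE system property discussed above) would still be tuned separately, in the container's own configuration.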


3 participants