WebSocket transport improvements #1585
@darnap, a few follow-up questions below.
This could be interesting to add to CometD.
Can you detail this?
How do you apply back-pressure to the server? Thanks!
@sbordet
On the client side we have strict timeouts on service channel requests, so we need to send a service reply immediately. There may be many outgoing messages queued for the client, which could introduce excessive delay before replies can be sent, so we changed the priority. Furthermore, messages can trigger heavy processing on the client, which may cause service timeouts to expire before it can get to the replies.
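The prioritization described above could be sketched as a simple two-queue structure (an illustrative example, not CometD's actual internals): replies are always drained before regular queued messages, so a long outgoing queue cannot delay a reply past the client's timeout.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/**
 * Illustrative sketch of reply prioritization: service replies are kept
 * in a separate queue and always written before regular messages.
 * All names here are hypothetical, not CometD API.
 */
public class ReplyFirstQueue<T> {
    private final Deque<T> replies = new ArrayDeque<>();
    private final Deque<T> messages = new ArrayDeque<>();

    public void offerReply(T reply)     { replies.addLast(reply); }
    public void offerMessage(T message) { messages.addLast(message); }

    /** Returns the next item to write; replies always win. */
    public T poll() {
        return !replies.isEmpty() ? replies.pollFirst() : messages.pollFirst();
    }
}
```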
The back-pressure signal is sent to the application, which then enacts various strategies to reduce the message load. Essentially, fewer messages are delivered to the CometD session (such as reducing update frequencies or notifying upstream applications of the congestion situation). We trigger the back-pressure condition based on the number and size of queued messages for which we have not yet received a writeComplete notification. Thanks!
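The trigger described above, based on bytes queued but not yet confirmed by a writeComplete notification, could be sketched like this (a minimal, hypothetical example; the class name, threshold, and callbacks are assumptions, not CometD API):

```java
import java.util.concurrent.atomic.AtomicLong;

/**
 * Hypothetical back-pressure tracker: counts bytes queued for a remote
 * session that have not yet been confirmed by writeComplete, and
 * signals congestion when a high-watermark threshold is crossed.
 */
public class BackPressureTracker {
    private final long highWatermarkBytes;
    private final AtomicLong pendingBytes = new AtomicLong();

    public BackPressureTracker(long highWatermarkBytes) {
        this.highWatermarkBytes = highWatermarkBytes;
    }

    /** Called when a message is queued; true means "reduce the load". */
    public boolean onQueued(long messageBytes) {
        return pendingBytes.addAndGet(messageBytes) > highWatermarkBytes;
    }

    /** Called from the transport's writeComplete notification. */
    public void onWriteComplete(long messageBytes) {
        pendingBytes.addAndGet(-messageBytes);
    }

    public boolean isCongested() {
        return pendingBytes.get() > highWatermarkBytes;
    }
}
```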
So these reply messages are application replies, not CometD replies, right?
They are CometD replies. When a client publishes a message on a Service Channel (implementation of […] The current […]
Can you please clarify why the session can expire on the server if replies (including the […]) are delayed? Thanks!
You should not do that, as it opens up horrible races.
The CometD reply and some messages may be sent over the network, but then either TCP congestion or an HTTP/2 flow control stall may happen. However, the client has received the reply and thinks it can send further messages to the server. To worsen things, the new […]

Not to mention that if the TCP congestion resolves exactly at the same time the new […]

In short, do not do that. We saw these problems a long time ago and have since fixed them, carefully writing CometD exactly to avoid these nasty, horrible-to-troubleshoot problems, so you should not change the CometD protocol.

May I suggest that you open a new issue (this one is more about the Jetty issue that caused loss of messages), detail exactly what your problem is, and start a discussion on that new issue?
The reason the change was required initially was to deal with client requests that would immediately (i.e. in-stack with the notification) generate large response messages requiring more time to send than the client-side maxNetworkDelay setting in CometD allowed. The client therefore considered the request failed, since the CometD reply would get enqueued after the application response. We'll create a separate issue to discuss other possible improvements that could be generally useful. Many thanks for discussing this issue with us.
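For reference, the maxNetworkDelay setting mentioned above can be raised via the client transport options. A hedged sketch follows (the option constant is the standard `ClientTransport.MAX_NETWORK_DELAY_OPTION`; the value and the commented transport class are illustrative assumptions that depend on the CometD version and transport in use):

```java
import java.util.HashMap;
import java.util.Map;

import org.cometd.client.transport.ClientTransport;

public class ClientOptions {
    public static Map<String, Object> withLongerNetworkDelay() {
        Map<String, Object> options = new HashMap<>();
        // "maxNetworkDelay": how long the client waits for a reply before
        // failing the request. 30_000 ms is an illustrative value only.
        options.put(ClientTransport.MAX_NETWORK_DELAY_OPTION, 30_000);
        return options;
        // Pass these options when constructing the transport, e.g.:
        // new BayeuxClient(url, new JettyHttpClientTransport(options, httpClient));
    }
}
```

Raising the timeout is only a mitigation; as discussed above, large in-stack responses that starve the reply are better addressed separately.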
CometD version(s)
6+
Description
Jetty issue jetty/jetty.project#11081 impacts CometD as well.
Following the discussion on that issue here.