stub,examples: fix comments related to ServerCallStreamObserver.isReady() to clarify it applies to sending side readiness #8910

Open: wants to merge 2 commits into master (showing changes from 1 commit)
@@ -42,11 +42,11 @@ public StreamObserver<HelloRequest> sayHelloStreaming(final StreamObserver<Hello
serverCallStreamObserver.disableAutoRequest();

// Set up a back-pressure-aware consumer for the request stream. The onReadyHandler will be invoked
- // when the consuming side has enough buffer space to receive more messages.
+ // when the sending side has enough buffer space to store more responses.
Member

"consuming side" makes sense to talk about. But "sending side" is this side, so it is strange. Maybe instead:

The onReadyHandler will be invoked when there is available buffer space to store more responses.

Contributor Author

Back-pressure detection includes both local and remote buffer availability on the sending side, is that right? If the remote side (the client in this case) has no buffer to receive responses, it will cause back-pressure (through TCP) and the sending side's buffer will fill up. When that buffer is full, isReady() will be false, IIUC.

Your suggested wording is concise and better, so I will change to that.
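To make the sending-side view concrete, here is a minimal sketch (not part of this PR) of a send loop gated on isReady(); the HelloReply import assumes the example's generated proto classes:

```java
import io.grpc.examples.manualflowcontrol.HelloReply; // assumed generated message type
import io.grpc.stub.ServerCallStreamObserver;

import java.util.Iterator;

final class ResponseDrainer {
  private ResponseDrainer() {}

  // Sketch only: send while the transport reports readiness. If the client stops
  // reading, back-pressure (ultimately TCP) fills the local send buffer, isReady()
  // turns false, and sending pauses until the registered onReadyHandler fires again.
  static void drainWhileReady(ServerCallStreamObserver<HelloReply> responseObserver,
                              Iterator<HelloReply> pending) {
    while (responseObserver.isReady() && pending.hasNext()) {
      responseObserver.onNext(pending.next());
    }
  }
}
```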

//
// Note: the onReadyHandler's invocation is serialized on the same thread pool as the incoming StreamObserver's
// onNext(), onError(), and onComplete() handlers. Blocking the onReadyHandler will prevent additional messages
- // from being processed by the incoming StreamObserver. The onReadyHandler must return in a timely manner or
+ // from being processed by the sending StreamObserver. The onReadyHandler must return in a timely manner or
Member
Revert this. It was previously correct, and the change is incorrect. The inbound stream observer is the one with callbacks, and those are sharing the executor.

Contributor Author

Okay, I think the main thing to note is that "the onReadyHandler's invocation is serialized on the same thread pool as the incoming StreamObserver's onNext(), onError(), and onComplete() handlers", and with that the "incoming" makes sense.

I was thinking that the isReady() call is on the sending side and hence the onReadyHandler is also on the sending side, i.e. the callbacks are made from the sending side, so if you block inside the callback you are blocking the sending side's processing.

Will revert ...
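As a hedged illustration of why the handler must return promptly: the onReadyHandler runs on the same serialized executor as the inbound observer's callbacks, so heavy or blocking work is better handed off. The helper below is hypothetical (not from the PR) and assumes it is called from inside the service method, before the request observer is returned:

```java
import io.grpc.stub.ServerCallStreamObserver;

import java.util.concurrent.Executor;
import java.util.concurrent.Executors;

final class NonBlockingOnReady {
  // Hypothetical application-owned executor for response production.
  private static final Executor APP_EXECUTOR = Executors.newSingleThreadExecutor();

  private NonBlockingOnReady() {}

  // Sketch only: the onReadyHandler shares the serialized executor with the inbound
  // StreamObserver's onNext()/onError()/onCompleted() callbacks, so blocking here
  // would stall request processing. Return quickly and do the real work elsewhere.
  static void install(ServerCallStreamObserver<?> observer, Runnable produceResponses) {
    observer.setOnReadyHandler(() -> APP_EXECUTOR.execute(produceResponses));
  }
}
```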

// else message processing throughput will suffer.
class OnReadyHandler implements Runnable {
// Guard against spurious onReady() calls caused by a race between onNext() and onReady(). If the transport
@@ -61,7 +61,7 @@ public void run() {
wasReady = true;
logger.info("READY");
// Signal the request sender to send one message. This happens when isReady() turns true, signaling that
- // the receive buffer has enough free space to receive more messages. Calling request() serves to prime
+ // the send buffer has enough free space to send more responses. Calling request() serves to prime
// the message pump.
serverCallStreamObserver.request(1);
}
@@ -95,8 +95,8 @@ public void onNext(HelloRequest request) {
// cycling through the loop of onNext() -> request(1)...onNext() -> request(1)... until the client runs
// out of messages and ends the loop (via onCompleted()).
//
- // If request() was called here with the argument of more than 1, the server might runs out of receive
- // buffer space, and isReady() will turn false. When the receive buffer has sufficiently drained,
+ // If request() was called here with the argument of more than 1, the server might run out of response
+ // buffer space, and isReady() will turn false. When the response buffer has sufficiently drained,
// isReady() will turn true, and the serverCallStreamObserver's onReadyHandler will be called to restart
// the message pump.
serverCallStreamObserver.request(1);
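For reference, a condensed sketch of the flow-control pattern these comments describe, adapted from the example being edited. The generated StreamingGreeterGrpc, HelloRequest, and HelloReply names are assumed from the example's proto, and the spurious-onReady guard (wasReady) used in the full example is omitted for brevity:

```java
import io.grpc.stub.ServerCallStreamObserver;
import io.grpc.stub.StreamObserver;
// The generated classes below are assumed from the example's proto definitions.
import io.grpc.examples.manualflowcontrol.HelloReply;
import io.grpc.examples.manualflowcontrol.HelloRequest;
import io.grpc.examples.manualflowcontrol.StreamingGreeterGrpc;

class FlowControlledGreeter extends StreamingGreeterGrpc.StreamingGreeterImplBase {
  @Override
  public StreamObserver<HelloRequest> sayHelloStreaming(final StreamObserver<HelloReply> responseObserver) {
    final ServerCallStreamObserver<HelloReply> serverCallStreamObserver =
        (ServerCallStreamObserver<HelloReply>) responseObserver;
    // Take over inbound flow control: no request message is delivered until we ask for it.
    serverCallStreamObserver.disableAutoRequest();

    // Restart the pump whenever isReady() flips back to true, i.e. the send buffer has
    // drained enough to accept more responses. (The full example also guards against
    // spurious onReady() invocations with a wasReady flag.)
    serverCallStreamObserver.setOnReadyHandler(() -> {
      if (serverCallStreamObserver.isReady()) {
        serverCallStreamObserver.request(1);
      }
    });

    return new StreamObserver<HelloRequest>() {
      @Override
      public void onNext(HelloRequest request) {
        serverCallStreamObserver.onNext(
            HelloReply.newBuilder().setMessage("Hello " + request.getName()).build());
        if (serverCallStreamObserver.isReady()) {
          // Still room to send: keep the onNext() -> request(1) loop running.
          serverCallStreamObserver.request(1);
        }
        // Otherwise wait for the onReadyHandler to request the next message.
      }

      @Override
      public void onError(Throwable t) {
        responseObserver.onError(t);
      }

      @Override
      public void onCompleted() {
        responseObserver.onCompleted();
      }
    };
  }
}
```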
@@ -98,7 +98,7 @@ public void disableAutoRequest() {


/**
- * If {@code true}, indicates that the observer is capable of sending additional messages
+ * If {@code true}, indicates that the observer is capable of sending additional responses
Member
Meh. Does this actually help anything?

Contributor Author

The reason I wanted to clarify this is the confusing and relative nature of words like "receiving" and "sending" in this context. For example, if you are a server you are receiving request messages on the wire through the requestObserver, but the requestObserver is "sending" these messages to the application through the onNext callback. I think some docs/comments do have that usage, which is why I wanted to disambiguate here.

* without requiring excessive buffering internally. This value is just a suggestion and the
* application is free to ignore it, however doing so may result in excessive buffering within the
* observer.
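Following up on the sending/receiving ambiguity discussed above, the same readiness semantics apply to the client's outbound (request) stream. Below is a minimal sketch, assuming the example's generated HelloRequest/HelloReply classes; the class and field names here are illustrative, not part of this PR:

```java
import io.grpc.stub.ClientCallStreamObserver;
import io.grpc.stub.ClientResponseObserver;
// Assumed generated message classes from the example's proto.
import io.grpc.examples.manualflowcontrol.HelloReply;
import io.grpc.examples.manualflowcontrol.HelloRequest;

import java.util.Iterator;

// Sketch only: isReady() here describes the client's ability to *send* more
// requests without excessive buffering, mirroring the server-side semantics.
class RequestSender implements ClientResponseObserver<HelloRequest, HelloReply> {
  private final Iterator<String> names;
  private boolean done;

  RequestSender(Iterator<String> names) {
    this.names = names;
  }

  @Override
  public void beforeStart(ClientCallStreamObserver<HelloRequest> requestStream) {
    requestStream.setOnReadyHandler(() -> {
      // Drain as long as the outbound (request) buffer has room.
      while (requestStream.isReady() && names.hasNext()) {
        requestStream.onNext(HelloRequest.newBuilder().setName(names.next()).build());
      }
      if (!names.hasNext() && !done) {
        done = true; // guard so onCompleted() is only called once
        requestStream.onCompleted();
      }
    });
  }

  @Override
  public void onNext(HelloReply reply) {
    // Handle a reply from the server.
  }

  @Override
  public void onError(Throwable t) {}

  @Override
  public void onCompleted() {}
}
```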