
Avoid flushing the stream after writing a message in case the server returns a single response #9273

Merged
1 commit merged into grpc:master on Jun 24, 2022

Conversation

amirhadadi (Contributor)

Resolves #4884

linux-foundation-easycla bot commented Jun 13, 2022

CLA Signed

The committers listed above are authorized under a signed CLA.

ejona86 (Member) commented Jun 13, 2022

Have you confirmed whether this makes a difference with a packet dump?

Digging into the code, it seems this could improve things, but only because we already have this optimization later on in the write path:

// Since endOfStream is triggered by the sending of trailers, avoid flush here and just flush
// after the trailers.
abstractServerStreamSink().writeFrame(frame, endOfStream ? false : flush, numMessages);
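As a standalone illustration (the class and method names below are invented for this sketch, not gRPC's actual types), the quoted flush decision boils down to: when `endOfStream` is set, trailers will follow and trigger their own flush, so flushing the message frame would be redundant.

```java
// Hypothetical standalone model of the flush decision in the write path
// quoted above; this is not gRPC's real class, just its decision logic.
public class FrameFlushDecision {
  /**
   * @param endOfStream    true when this frame ends the stream (trailers follow)
   * @param flushRequested whether the caller asked for an immediate flush
   * @return whether the transport should actually flush now
   */
  static boolean shouldFlush(boolean endOfStream, boolean flushRequested) {
    // Mirrors: writeFrame(frame, endOfStream ? false : flush, numMessages)
    return endOfStream ? false : flushRequested;
  }

  public static void main(String[] args) {
    // Final message of a unary response: defer the flush to the trailers.
    System.out.println(shouldFlush(true, true));   // false
    // Mid-stream message: honor the requested flush.
    System.out.println(shouldFlush(false, true));  // true
  }
}
```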

@ejona86 added the kokoro:run label Jun 13, 2022
@grpc-kokoro removed the kokoro:run label Jun 13, 2022
ejona86 (Member) commented Jun 13, 2022

Yeah, I think this just may work. This was what I thought was the hard part! But the plumbing in the transport already has a solution!

Delaying the flush for the initial response should be relatively easy and similar to client-side. We just need to use false instead of true for the flush argument here:

writeQueue.enqueue(
    SendResponseHeadersCommand.createHeaders(
        transportState(),
        Utils.convertServerHeaders(headers)),
    true);

(That's fine as a separate PR)

amirhadadi (Contributor, Author) commented Jun 14, 2022

Have you confirmed whether this makes a difference with a packet dump?

It seems to produce the same number of packets. Maybe if the message body and trailers were aggregated into a single Netty object before flushing, it would become a single packet? I don't have enough Netty know-how to tell.
However, it does eliminate one flush, ending up with a single system call for writing out the data instead of two.
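The syscall saving can be sketched with a toy model (nothing below is Netty's real API; it just simulates an outbound buffer where queued writes are cheap and each flush costs one writev-like syscall):

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of a Netty-style outbound buffer, illustrating why dropping the
// per-message flush halves the write syscalls for a unary response: writes
// only queue data, and each flush() drains the whole queue in one syscall.
public class FlushCounter {
  private final List<String> pending = new ArrayList<>();
  int syscalls = 0;

  void write(String frame) {
    pending.add(frame);            // queued, no syscall yet
  }

  void flush() {
    if (!pending.isEmpty()) {
      syscalls++;                  // one writev()-like syscall for the batch
      pending.clear();
    }
  }

  public static void main(String[] args) {
    // Before this PR: flush after the message, then again after the trailers.
    FlushCounter before = new FlushCounter();
    before.write("DATA");
    before.flush();
    before.write("TRAILERS");
    before.flush();

    // After: queue the message and flush once, together with the trailers.
    FlushCounter after = new FlushCounter();
    after.write("DATA");
    after.write("TRAILERS");
    after.flush();

    System.out.println(before.syscalls + " vs " + after.syscalls); // 2 vs 1
  }
}
```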

amirhadadi (Contributor, Author) commented Jun 14, 2022

Delaying the flush for the initial response should be relatively easy and similar to client-side. We just need to use false instead of true for the flush argument here:

Would that require the serverSendsOneMessage check?
If it does, I guess we need to change this argument to false and move the decision whether to flush over here?

@ejona86 ejona86 requested a review from YifeiZhuang June 23, 2022 19:36
ejona86 (Member) commented Jun 23, 2022

Would that require the serverSendsOneMessage check?

Yes, it would.

If it does, I guess we need to change this argument to false and move the decision whether to flush over here?

I thought we'd just do it in io.grpc.netty, not in the ServerCall. But I see now, NettyServerStream doesn't have access to the MethodDescriptor like NettyClientStream does. That's because it is determined later, after the call starts. We could either pass the method descriptor down into the server stream using a new method, or plumb the decision down in writeHeaders(). It makes sense to me to pass the flush decision as an argument to writeHeaders(). (That'd differ from client-side, but really, we could do the same thing on client-side and it wouldn't be bad.)
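The proposed plumbing can be sketched as follows. This is a hypothetical model, not gRPC's actual API: the enum and method names approximate `MethodDescriptor.MethodType` semantics, and `headersFlush` stands in for the decision the call layer would pass down through `writeHeaders()`.

```java
// Hypothetical sketch of the design discussed above: the ServerCall layer
// knows the method type, so it decides whether the headers need an immediate
// flush and passes that decision down, since NettyServerStream never sees
// the MethodDescriptor.
public class HeaderFlushPlumbing {
  enum MethodType { UNARY, SERVER_STREAMING, CLIENT_STREAMING, BIDI_STREAMING }

  // Approximates MethodDescriptor.MethodType#serverSendsOneMessage: the
  // server sends exactly one message for unary and client-streaming calls.
  static boolean serverSendsOneMessage(MethodType type) {
    return type == MethodType.UNARY || type == MethodType.CLIENT_STREAMING;
  }

  // The flush flag the call layer would pass into writeHeaders(headers, flush).
  static boolean headersFlush(MethodType type) {
    // With exactly one response, the later message/trailers flush covers the
    // headers too, so they can be enqueued without flushing.
    return !serverSendsOneMessage(type);
  }

  public static void main(String[] args) {
    System.out.println(headersFlush(MethodType.UNARY));            // false
    System.out.println(headersFlush(MethodType.SERVER_STREAMING)); // true
  }
}
```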

@ejona86 ejona86 merged commit 7b80954 into grpc:master Jun 24, 2022
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 23, 2022
Development

Successfully merging this pull request may close these issues.

Avoid unnecessary flushes for unary responses