
POC: gRPC-on-gRPC tunnel #3987

Open · wants to merge 4 commits into master
Conversation

@ejona86 (Member) commented Jan 20, 2018

This is the Java counterpart of grpc/grpc#14100. I've not actually tried to figure out what's going on in the C PR yet, so I have no clue how compatible this is with it.

It is functional, in that the client and server can actually communicate successfully. Error handling and the API could use some work, but they aren't horrible.

This uses zero internal APIs!

TODO: flow control
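
For a rough picture of the shape (not the exact code in this PR), the tunnel's wire can be a single bidirectional-streaming method whose messages are opaque byte chunks; the method name and marshaller below are purely illustrative:

```java
import io.grpc.MethodDescriptor;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

/** Hypothetical descriptor for a bidi byte-pipe method; names are illustrative only. */
final class TunnelMethod {
  // Marshaller that treats every message as an opaque byte chunk.
  private static final MethodDescriptor.Marshaller<byte[]> BYTES_MARSHALLER =
      new MethodDescriptor.Marshaller<byte[]>() {
        @Override
        public InputStream stream(byte[] value) {
          return new ByteArrayInputStream(value);
        }

        @Override
        public byte[] parse(InputStream stream) {
          try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = stream.read(buf)) != -1) {
              out.write(buf, 0, n);
            }
            return out.toByteArray();
          } catch (IOException e) {
            throw new RuntimeException(e);
          }
        }
      };

  static final MethodDescriptor<byte[], byte[]> OPEN_TUNNEL =
      MethodDescriptor.<byte[], byte[]>newBuilder()
          .setType(MethodDescriptor.MethodType.BIDI_STREAMING)
          .setFullMethodName(
              MethodDescriptor.generateFullMethodName("grpc.tunnel.Tunnel", "OpenTunnel"))
          .setRequestMarshaller(BYTES_MARSHALLER)
          .setResponseMarshaller(BYTES_MARSHALLER)
          .build();

  private TunnelMethod() {}
}
```

Both endpoints then bridge those byte chunks into a transport, so a full gRPC connection rides inside the one stream.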

@ejona86 changed the title from "gRPC-on-gRPC tunnel POC" to "POC: gRPC-on-gRPC tunnel" on Jan 20, 2018
@rmichela (Contributor) commented:

Is the pattern demonstrated in this PR something you expect to be merged into grpc-java as reusable framework classes?

@ejona86 (Member, Author) commented Jan 21, 2018

@rmichela, likely. It might be able to go in grpc-netty itself. Even though in Java it is fairly easy as a separate utility, this is envisioned as a cross-language feature and other languages would likely need built-in support. Netty made this easier in Java since there was already the Channel abstraction and it was already exposed.

But these are details that still need to be figured out. We're still organizing, and we'll likely want a gRFC and similar. Watch/comment on grpc/grpc#14101 if you're interested.

@larry-safran (Contributor) commented:

This would make a great example.

@bsideup commented Mar 14, 2024

Hey folks! Sorry for resurrecting this 6-year-old PR, but I picked it up and I believe I was able to address the TODO here (flow control), as well as the memory efficiency (by using ByteBuf + Drainable, so that there are no byte[] conversions).
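
To illustrate the Drainable part: io.grpc.Drainable lets a marshaller's InputStream write its bytes straight into the transport's OutputStream rather than being copied through an intermediate byte[]. A minimal sketch of a ByteBuf-backed stream implementing it (illustrative only, not the exact code in the changes mentioned above):

```java
import io.grpc.Drainable;
import io.grpc.KnownLength;
import io.netty.buffer.ByteBuf;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

/** Sketch: an InputStream over a Netty ByteBuf that can drain itself into the transport. */
final class ByteBufDrainableStream extends InputStream implements Drainable, KnownLength {
  private final ByteBuf buf;

  ByteBufDrainableStream(ByteBuf buf) {
    this.buf = buf;
  }

  @Override
  public int drainTo(OutputStream target) throws IOException {
    // Write the readable bytes directly into the sink; no byte[] copy in between.
    int readable = buf.readableBytes();
    buf.readBytes(target, readable);
    return readable;
  }

  @Override
  public int read() {
    return buf.isReadable() ? buf.readByte() & 0xFF : -1;
  }

  @Override
  public int read(byte[] dst, int off, int len) {
    if (!buf.isReadable()) {
      return -1;
    }
    int n = Math.min(len, buf.readableBytes());
    buf.readBytes(dst, off, n);
    return n;
  }

  @Override
  public int available() {
    return buf.readableBytes();
  }
}
```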

@ejona86 would you be open to receiving a contribution? And, if so, could you please advise on the process (do we need an RFC? Any API design recommendations that you could provide?)

@ejona86 (Member, Author) commented Mar 15, 2024

@bsideup let's start with making a PR and taking a look at your changes. Maybe we'll merge some of this stuff as an example to allow easier iteration on it.

@bsideup commented Mar 15, 2024

@ejona86 thanks! Will submit the latest state. I ended up breaking it (lol), so there is a nasty bug that I am trying to pinpoint (the first "reverse" connection always fails, but then everything works perfectly fine 🤯), but perhaps you will be able to solve it easily.

Also, I realized that since the "main"/"real" channel already does flow control, we can probably rely on it (combined with protocol-level demand control via request()) and don't need an extra one. I will think about how to test it, though.
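
Roughly what is meant by protocol-level demand control, as a sketch with a stand-in TunnelFrame type and hypothetical helper methods (not the actual implementation):

```java
import io.grpc.stub.ServerCallStreamObserver;
import io.grpc.stub.StreamObserver;

/** Sketch: rely on the carrier channel's HTTP/2 flow control plus explicit request() demand. */
final class TunnelFlowControlSketch {
  StreamObserver<TunnelFrame> openTunnel(StreamObserver<TunnelFrame> responseObserver) {
    ServerCallStreamObserver<TunnelFrame> serverObserver =
        (ServerCallStreamObserver<TunnelFrame>) responseObserver;
    // Turn off automatic request-on-delivery so inbound demand is explicit.
    serverObserver.disableAutoRequest();
    serverObserver.request(1); // prime the first inbound frame

    return new StreamObserver<TunnelFrame>() {
      @Override
      public void onNext(TunnelFrame frame) {
        deliverToTunneledTransport(frame); // hypothetical: hand bytes to the inner transport
        // Only ask for the next frame once this one is consumed, so the carrier
        // stream's own flow control naturally applies backpressure upstream.
        serverObserver.request(1);
      }

      @Override
      public void onError(Throwable t) {
        tearDownTunnel(t); // hypothetical cleanup
      }

      @Override
      public void onCompleted() {
        serverObserver.onCompleted();
      }
    };
  }

  private void deliverToTunneledTransport(TunnelFrame frame) {}

  private void tearDownTunnel(Throwable t) {}

  // Stand-in message type for the sketch.
  static final class TunnelFrame {}
}
```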

@bsideup mentioned this pull request on Mar 16, 2024
@bsideup commented Mar 16, 2024

Submitted #11023, PTAL 🙌
