
Pass non-terminated (SNI-matched) TLS connection to http app #70

Open

networkException opened this issue Sep 8, 2022 · 7 comments

Labels: enhancement (New feature or request)

@networkException

I've recently been looking into improving a setup that uses NGINX streams to accept (but not terminate) incoming TLS connections and proxy them to two Caddy instances depending on the SNI (one of which is running on the same host).

For the remote instance it's obviously required to use the PROXY protocol to retain remote IP metadata, and the overhead of an outgoing TCP connection is also unavoidable; for the local Caddy, however, both of those are just design issues with the setup.

In a perfect world I would have one Caddy that handles all incoming traffic and, before terminating TLS, can decide based on the SNI to proxy a connection instead of handling it itself. Keeping the connection in process obviously benefits performance, but it also removes the need to wrap everything in the PROXY protocol.

While exploring I came across various approaches that went in the right direction but didn't fully work:

A listener wrapper seems like an obvious candidate, but after successfully implementing ClientHello reading my efforts came to a halt due to what I fear is Caddy simply not being designed to split off / proxy connections at that level.

To my understanding this amazing project can basically be used as a drop-in replacement for the NGINX streams process, however the requirement to use the PROXY protocol for connection metadata still remains. The performance can probably be improved by using unix domain sockets for on-device proxying, but to me it's still rather unsatisfying.
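For reference, a rough sketch of what that standalone layer4 setup looks like in my head (hostnames, ports and addresses are placeholders, and I'm going from memory on the exact option names):

{
  "apps": {
    "layer4": {
      "servers": {
        "sni_split": {
          "listen": [":443"],
          "routes": [
            {
              "match": [{"tls": {"sni": ["remote.example.com"]}}],
              "handle": [{
                "handler": "proxy",
                "proxy_protocol": "v2",
                "upstreams": [{"dial": ["10.0.0.2:443"]}]
              }]
            },
            {
              "match": [{"tls": {"sni": ["local.example.com"]}}],
              "handle": [{
                "handler": "proxy",
                "upstreams": [{"dial": ["127.0.0.1:8443"]}]
              }]
            }
          ]
        }
      }
    }
  }
}

The second route is the on-device proxy hop I'd like to get rid of entirely.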

The missing link here - and what I think would be a great addition in general - is being able to pass a connection to a different Caddy app in process (provided that the protocols are compatible). With such a feature caddy-l4 would simply be able to "dial" http and let it handle TLS termination and so on, while retaining all connection metadata (maybe even exposing L4 variables).

It could be that I'm missing something here entirely, or that there's no measurable (performance) impact (besides convenience) to supporting something like this. I'm grateful for any pointers or ideas on where to start looking / implementing.

@mholt
Owner

mholt commented Sep 9, 2022

One of my (few) regrets about how I built Caddy 2 is that I didn't nest the http module inside a network or layer4 module. In other words, I wish that were how Caddy worked, and if I were to design a Caddy v3, it'd probably take this idea further. Because yeah, it makes a lot of sense for layer4 to be the main app module that can then run applications on top of it.

What we can probably do, though, is expose a method on the http app so that you can give it a conn and it will serve it as if it were new.

We'd probably have to wrap the listener with one that implements a custom Accept() that can also accept virtual connections, i.e. conns that are already accepted. Something like this: https://stackoverflow.com/questions/29948497/tcp-accept-and-go-concurrency-model (i.e. we'd select over a channel that our goroutine pipes real Accept()s into and our own channel that can receive already-accepted connections)

// CustomListener multiplexes connections from the real listener with
// "virtual" connections that have already been accepted elsewhere.
type CustomListener struct {
	inner       net.Listener
	realAccept  chan net.Conn
	virtualConn chan net.Conn
}

func (l *CustomListener) Accept() (net.Conn, error) {
	select {
	case conn := <-l.realAccept:
		return conn, nil
	case conn := <-l.virtualConn:
		return conn, nil
	}
}

(of course in real code we'd pipe both the conn and any err into the channels)
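To sketch the rest of that idea (WrapListener and Inject here are made-up names purely for illustration, not an existing Caddy API):

// WrapListener wraps a real net.Listener and starts a goroutine that feeds
// its Accept() results into the realAccept channel.
func WrapListener(inner net.Listener) *CustomListener {
	l := &CustomListener{
		inner:       inner,
		realAccept:  make(chan net.Conn),
		virtualConn: make(chan net.Conn),
	}
	go func() {
		for {
			conn, err := inner.Accept()
			if err != nil {
				return // real code would propagate the error as well
			}
			l.realAccept <- conn
		}
	}()
	return l
}

// Inject hands an already-accepted connection (e.g. one that layer4 has
// already matched on SNI) to whatever is blocked in Accept().
func (l *CustomListener) Inject(conn net.Conn) {
	l.virtualConn <- conn
}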

Does that make sense?

@mholt
Owner

mholt commented Sep 14, 2022

@networkException I have a somewhat spikey implementation here: caddyserver/caddy#5040

But it's not wired up end-to-end yet. I haven't even tried it. The next step will be to set up some code in this module that accesses the http app's listeners and chooses one to give the connection to. Then it just calls Pipe(conn), basically, and the http app should do the rest.

In theory 🙃

@networkException
Author

Very cool! Thanks for working on this

@WeidiDeng
Contributor

@networkException Can you try this branch, which adds the ability to configure l4 as a Caddy listener wrapper?

You have to write a JSON config, and the http app must have HTTPS disabled, though.

@networkException
Author

Wow, I didn't catch the listener wrapper support getting upstreamed. For anyone else looking, here's the PR: #78

@coolaj86

coolaj86 commented Mar 17, 2023

Update: Solved

I did get this working.

Original comment:

@networkException Do you have an example for how you did this?

I don't understand the example in the PR. It says to put the config "in the appropriate path", but I don't have the context to understand what that means.

I need to see which thing is the parent node that the child node belongs to in order to get it.

@WeidiDeng
Contributor

@coolaj86 Basically, if you understand how l4 can work as a standalone app, you just extract the routes array and put it inside the listener_wrappers array object. It's here if you want more context about the JSON nodes.
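For example, something along these lines (an untested sketch; the hostname, addresses and the http server's own routes are placeholders):

{
  "apps": {
    "http": {
      "servers": {
        "srv0": {
          "listen": [":443"],
          "automatic_https": {"disable": true},
          "listener_wrappers": [
            {
              "wrapper": "layer4",
              "routes": [
                {
                  "match": [{"tls": {"sni": ["remote.example.com"]}}],
                  "handle": [{
                    "handler": "proxy",
                    "upstreams": [{"dial": ["10.0.0.2:443"]}]
                  }]
                }
              ]
            }
          ],
          "routes": [
            {
              "handle": [{"handler": "static_response", "body": "served by the local http app"}]
            }
          ]
        }
      }
    }
  }
}

The idea being that connections matched by the layer4 routes get proxied away, and everything else falls through to the http server on the same listener.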
