Delay in Caddy using changes to reverse_proxy upstream #6195
Comments
Hmmm, thanks for the report. I'll try to look into this soon.
What is in your logs before, during, and after a config change? I would expect the server socket to shut down after the grace period and sever existing connections to clients. As for the dynamic upstreams, they are global and persist through config reloads, but they are keyed by the SRV address, so if that changes the old ones should no longer be used, I think.
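For context, the dynamic SRV upstreams being discussed here are configured roughly like this (a minimal sketch; the site and service names are illustrative, not taken from this issue):

```
example.com {
	reverse_proxy {
		dynamic srv my-service.service.consul
	}
}
```

Each request resolves the SRV record afresh, which is why the cached upstream state being keyed by that SRV address matters across reloads.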
I've attached a dump of the logs (log.gz). I started Caddy with

Accessed a page on the site, then entered the container, modified the Caddyfile and did

The initial service record was php-pool-dda10003a747409ee137207837a8eae7.service.consul and I changed it to xxx-php-pool-dda10003a747409ee137207837a8eae7.service.consul. The reload happened at 2024/03/28 03:58:58.357. The old record was used until 2024/03/28 04:00:04.674, at which point it failed because in this case the upstream doesn't exist (though it doesn't matter whether the upstream exists or not). Sometimes it holds on longer, but usually 2 to 3 minutes.

Caddyfile used:
Thanks
From the log I found that all the requests are HTTP/3. It's due to old HTTP/3 connections remaining active after a configuration reload. I'll see what I can do with quic-go, which Caddy depends on for HTTP/3.
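As a temporary stopgap until a fix lands, HTTP/3 can be disabled entirely via the `servers` global option, so clients cannot hold on to stale QUIC connections across a reload (a sketch of a workaround, not a recommendation made in this thread):

```
{
	servers {
		protocols h1 h2
	}
}
```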
@paularlott Can you try |
@WeidiDeng I've only had a chance for a quick test so far, but it's looking like that resolves the issues, thank you. |
@WeidiDeng You are a WIZARD 🎉 🧙‍♂️ can't wait to review this 😃
@paularlott Do you mind giving this patch a go? This properly closes http3 servers without interrupting ongoing requests unless a grace period is reached. |
@WeidiDeng I've just built it and deployed it as the ingress controller for some test sites and it seems to be following the changes to the upstream SRV records as it should. I'll leave it running so we can give it a good test. Thank you |
I've written a plugin for Caddy which watches a Nomad / Consul cluster and builds ingress routes to our cluster based on that data; essentially it generates a collection of `reverse_proxy` statements pointing to the running containers.

However, I've noticed that after reloading the Caddyfile, existing clients continue to use the previous configuration for 2 to 3 minutes, or until the browser cache is cleared.
Initially I thought this was something I'd done wrong in my code, so I've extracted the minimum Caddyfile and manually applied it against a stock version of Caddy (2.7.6 running in Docker) with the same results.
Initially I might have a `reverse_proxy` block such as

After a deployment the `php-pool-1` name changes, e.g.

If I start browsing the site I can see Caddy logging that it's querying DNS for `php-pool-1.service.consul`, which is correct.

If I now reload with a new Caddyfile that changes the upstream to `php-pool-2.service.consul`, I can see Caddy still logging DNS queries for `php-pool-1.service.consul` and not the new `php-pool-2.service.consul`. It keeps using the old upstream until either I clear the browser cache or wait 2 to 3 minutes, after which it moves to `php-pool-2.service.consul`.

If I use a different browser to connect directly after the reload, that connection uses `php-pool-2.service.consul`, which is what I expect.

The minimal Caddyfile I have is:
From the documentation (https://caddyserver.com/docs/caddyfile/options#grace-period), if I include `grace_period` then it should force connections closed after, in my case, 3s, but that doesn't seem to be happening, so I'm guessing I'm missing something obvious here.

Thank you for any guidance
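For reference, `grace_period` is a global option, so with the 3s value mentioned it would look roughly like this (a sketch; the site block and upstream are illustrative):

```
{
	grace_period 3s
}

example.com {
	reverse_proxy php-pool-1.service.consul
}
```

The behaviour observed here, where HTTP/1 and HTTP/2 connections are severed on reload but HTTP/3 connections linger, is exactly what the patch discussed in this thread addresses.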