Requests sent to terminating pods #15211

Open
dspeck1 opened this issue May 15, 2024 · 3 comments
Labels: kind/bug, triage/needs-user-input

Comments

dspeck1 commented May 15, 2024

What version of Knative?

1.14.0

Expected Behavior

Knative should be able to service batches of 200 requests.

Requests should not be routed to pods that are terminating.

Actual Behavior

We send batches of 200 requests to Knative; each request takes about 5 minutes to process. All pods in the first batch finish with a 200 return code. When a second batch of 200 requests is sent while those pods are terminating, many of the requests return 502 Bad Gateway errors: the requests are being routed to pods that are terminating.

Steps to Reproduce the Problem

Send a batch of requests, watch for the pods to start terminating, then send another batch. Kourier is the ingress, and the Knative autoscaler is in use.
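
For clarity, here is a minimal sketch of the load pattern described above. The service URL, batch size, and wait time are assumptions for illustration only; the reporter's actual tester is referenced in a later comment.

```go
// loadtest.go - minimal sketch of the reproduction pattern described above.
// The service URL, batch size, and sleep duration are illustrative assumptions.
package main

import (
	"fmt"
	"net/http"
	"sync"
	"time"
)

const (
	serviceURL = "http://long-running.default.example.com" // hypothetical ksvc URL
	batchSize  = 200
)

// sendBatch fires batchSize concurrent GET requests and reports how many
// came back as 502 Bad Gateway.
func sendBatch(client *http.Client) {
	var wg sync.WaitGroup
	var mu sync.Mutex
	badGateway := 0

	for i := 0; i < batchSize; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			resp, err := client.Get(serviceURL)
			if err != nil {
				return
			}
			defer resp.Body.Close()
			if resp.StatusCode == http.StatusBadGateway {
				mu.Lock()
				badGateway++
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	fmt.Printf("batch done: %d/%d requests returned 502\n", badGateway, batchSize)
}

func main() {
	client := &http.Client{Timeout: 10 * time.Minute}

	sendBatch(client) // first batch: all requests finish with 200

	// Wait until the pods from the first batch start terminating
	// (the reporter watches the pods for this), then send the second
	// batch, which is where the 502s appear.
	time.Sleep(6 * time.Minute)
	sendBatch(client)
}
```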

dspeck1 added the kind/bug label on May 15, 2024

skonto commented May 16, 2024

Hi @dspeck1, could you please provide more info on how to reproduce this, e.g. the ksvc definition and environment setup? There was a similar issue in the past, but it was not reproducible and its status was unclear.

/triage needs-user-input

knative-prow bot added the triage/needs-user-input label on May 16, 2024

dspeck1 commented May 22, 2024

Hi @skonto. I posted testing code here. The app folder has the knative service, the tester folder sends simultaneous requests, and the knative operator config is here. To replicate the issue, send a job with 200 requests, watch for the pods to start terminating, then send job-2 and observe 502 Bad Gateway errors in the responses. It does not happen every time. I have also noticed it does not happen if the pod runs for a short time (10 or 30 seconds); it occurs on long requests, such as 5 minutes.

The error below is from the queue-proxy when this happens. We see the same behavior on Google Cloud GKE and on an on-premises Kubernetes cluster.

logger: "queueproxy"
message: "error reverse proxying request; sockstat: sockets: used 8
TCP: inuse 3 orphan 11 tw 12 alloc 183 mem 63
UDP: inuse 0 mem 0
UDPLITE: inuse 0
RAW: inuse 0
FRAG: inuse 0 memory 0
"
stacktrace: "knative.dev/pkg/network.ErrorHandler.func1
	knative.dev/pkg@v0.0.0-20240416145024-0f34a8815650/network/error_handler.go:33
net/http/httputil.(*ReverseProxy).ServeHTTP
	net/http/httputil/reverseproxy.go:472
knative.dev/serving/pkg/queue.(*appRequestMetricsHandler).ServeHTTP
	knative.dev/serving/pkg/queue/request_metric.go:199
knative.dev/serving/pkg/queue/sharedmain.mainHandler.ProxyHandler.func3.2
	knative.dev/serving/pkg/queue/handler.go:65
knative.dev/serving/pkg/queue.(*Breaker).Maybe
	knative.dev/serving/pkg/queue/breaker.go:155
knative.dev/serving/pkg/queue/sharedmain.mainHandler.ProxyHandler.func3
	knative.dev/serving/pkg/queue/handler.go:63
net/http.HandlerFunc.ServeHTTP
	net/http/server.go:2166
knative.dev/serving/pkg/queue/sharedmain.mainHandler.ForwardedShimHandler.func4
	knative.dev/serving/pkg/queue/forwarded_shim.go:54
net/http.HandlerFunc.ServeHTTP
	net/http/server.go:2166
knative.dev/serving/pkg/http/handler.(*timeoutHandler).ServeHTTP.func4
	knative.dev/serving/pkg/http/handler/timeout.go:118"
timestamp: "2024-05-22T19:42:56.16014057Z"

Below are similar issues I have found:
This one mentions the lack of a graceful shutdown of the user-container for in-flight requests (a generic sketch of such a shutdown handler is included after this list).
Timeout issue on long requests
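
For context, a minimal sketch of how a user container can drain in-flight requests on SIGTERM is shown below. This is a generic illustration, not the reporter's app code; the port, handler duration, and grace period are assumptions.

```go
// main.go - generic sketch of a user container that drains in-flight
// requests on SIGTERM, i.e. the kind of graceful shutdown the linked
// issue says is missing. Port, handler duration, and grace period are
// illustrative assumptions.
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		time.Sleep(5 * time.Minute) // simulate the long-running work
		w.WriteHeader(http.StatusOK)
	})

	srv := &http.Server{Addr: ":8080", Handler: mux}

	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatal(err)
		}
	}()

	// Block until Kubernetes sends SIGTERM at pod termination.
	stop := make(chan os.Signal, 1)
	signal.Notify(stop, syscall.SIGTERM, os.Interrupt)
	<-stop

	// Stop accepting new connections and wait for in-flight requests
	// to finish, up to the termination grace period.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Printf("shutdown: %v", err)
	}
}
```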

Thanks for your help! Please let me know anything else you need.

dspeck1 commented May 28, 2024

Here is another related issue: #9355.
