What version of gRPC and what language are you using?
v1.42.0
What operating system (Linux, Windows,...) and version?
Linux
As with the issue seen and resolved in #19969, I have a feeling that in our setup the client is re-creating the connection despite the server sending a GOAWAY.
The client in my scenario is the Kubernetes API server, and the gRPC server is a proxy server (running as a pod on the control plane) that sits between the Kubernetes API server and the worker nodes. So in a way the client never goes away, and if the client does go away there is no point in the proxy server running, since the entire cluster is down.
The reason for this hypothesis is that under heavy traffic we see a large number of open file descriptors and gRPC connections that do not come back down to their original values once the traffic stops entirely. As the gRPC connections and open fds increase, the proxy server's memory consumption spikes as well, and it too never returns to its original level.
@yashykt I see that you identified the issue and also proposed a solution, which got merged. Can you please provide any pointers on how to go about testing my hypothesis? Is there anything else that could be out of order and causing such behaviour?
I also tried closing idle connections using the keepalive parameters MaxConnectionAge and MaxConnectionAgeGrace (for some reason MaxConnectionIdle and Time are not working for me). That brings the open file descriptor count and the gRPC connection metric back to their initial levels, but I only see about a 10% reduction in memory usage.
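For completeness, this is roughly how the keepalive parameters are configured (a sketch assuming a grpc-go server, which the parameter names suggest; the durations here are illustrative values, not the ones from our deployment):

```go
// Sketch of server-side keepalive configuration in grpc-go.
// Requires the google.golang.org/grpc module.
package main

import (
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/keepalive"
)

func newServer() *grpc.Server {
	return grpc.NewServer(grpc.KeepaliveParams(keepalive.ServerParameters{
		MaxConnectionIdle:     5 * time.Minute,  // close connections with no active streams
		MaxConnectionAge:      30 * time.Minute, // recycle long-lived connections via GOAWAY
		MaxConnectionAgeGrace: 30 * time.Second, // grace period for in-flight RPCs
		Time:                  1 * time.Minute,  // ping the client after this much inactivity
		Timeout:               20 * time.Second, // close if the ping is not acked in time
	}))
}
```

One thing worth noting: MaxConnectionIdle only fires when a connection has had no active streams for the given duration, so if the API server holds long-lived streams open on every connection, it would never trigger — which might explain why it appears not to work here.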
Any pointers?