SharedInformer failure on reconnect can kill the watcher #4388

Closed
andreaTP opened this issue Sep 5, 2022 · 4 comments
Labels
status/stale · Waiting on feedback (issues that require feedback from user/other community members)

Comments

andreaTP (Member) commented Sep 5, 2022

Describe the bug

@lburgazzoli reproduced an issue where a SharedInformer fails to restart.

From the logs, I think the following sequence of events is happening:

  • a SharedInformer loses its connection to the API server
  • the retry mechanism kicks in, but it triggers rate limiting on the API server
  • the rate limiting (HTTP 429) kills the SharedInformer for good

Note that this error happens when using the Quarkus extension for JOSDK, and the Pod is not restarted because the exception is swallowed.
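
For context, the informer in question is roughly equivalent to the following direct-DSL setup (a minimal sketch, not the actual JOSDK/Quarkus wiring: the resource coordinates are taken from the logs above, the handler bodies are placeholders, and the exact DSL methods are assumed to be available in the client version in use):

```java
import io.fabric8.kubernetes.api.model.GenericKubernetesResource;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;
import io.fabric8.kubernetes.client.dsl.base.ResourceDefinitionContext;
import io.fabric8.kubernetes.client.informers.ResourceEventHandler;
import io.fabric8.kubernetes.client.informers.SharedIndexInformer;

public class InformerSetupSketch {
    public static void main(String[] args) {
        KubernetesClient client = new DefaultKubernetesClient();

        // Describe the ManagedConnector custom resource seen in the logs.
        ResourceDefinitionContext context = new ResourceDefinitionContext.Builder()
                .withGroup("cos.bf2.org")
                .withVersion("v1alpha1")
                .withKind("ManagedConnector")
                .withPlural("managedconnectors")
                .withNamespaced(true)
                .build();

        SharedIndexInformer<GenericKubernetesResource> informer = client
                .genericKubernetesResources(context)
                .inNamespace("redhat-openshift-connectors-caor78j7orh5vio9aa40")
                .inform(new ResourceEventHandler<GenericKubernetesResource>() {
                    @Override
                    public void onAdd(GenericKubernetesResource obj) { /* reconcile */ }

                    @Override
                    public void onUpdate(GenericKubernetesResource oldObj, GenericKubernetesResource newObj) { /* reconcile */ }

                    @Override
                    public void onDelete(GenericKubernetesResource obj, boolean deletedFinalStateUnknown) { /* cleanup */ }
                }, 0L);

        // The informer's watch normally reconnects on its own; the report here is
        // that a relist hitting HTTP 429 can terminate it without surfacing the failure.
    }
}
```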

Fabric8 Kubernetes Client version

5.12.3

Steps to reproduce

Expected behavior

The SharedInformer exception should be handled, ultimately restarting the Pod if necessary.
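
As an application-level stop-gap until this is addressed in the client, one option is a watchdog that exits the JVM when the informer stops, so Kubernetes restarts the Pod. A minimal sketch, assuming the SharedIndexInformer instance is reachable from application code and that isRunning() reports false once the informer has died in the client version in use (the check interval and thread name are illustrative):

```java
import io.fabric8.kubernetes.client.informers.SharedIndexInformer;

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class InformerWatchdog {

    // Periodically check the informer; if it has stopped (e.g. after the swallowed
    // 429 failure described above), exit the JVM so Kubernetes restarts the Pod.
    public static void guard(SharedIndexInformer<?> informer) {
        ScheduledExecutorService watchdog = Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "informer-watchdog");
            t.setDaemon(true);
            return t;
        });
        watchdog.scheduleAtFixedRate(() -> {
            if (!informer.isRunning()) {
                System.err.println("SharedInformer is no longer running; exiting so the Pod restarts");
                System.exit(1);
            }
        }, 1, 1, TimeUnit.MINUTES);
    }
}
```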

Runtime

OpenShift

Kubernetes API Server version

1.23

Environment

Linux

Fabric8 Kubernetes Client Logs

2022-09-03 14:38:00,625 DEBUG [io.fabric8.kubernetes.client.informers.cache.Reflector] (OkHttp https://172.30.0.1/...) Event received MODIFIED ManagedConnector resourceVersion 64538164
2022-09-03 14:38:00,625 INFO  [io.github.lburgazzoli.qw.MyWatcher] (OkHttp https://172.30.0.1/...) Update, Namespace: redhat-openshift-connectors-caor78j7orh5vio9aa40, Name: mctr-cc901q05q3a79e0h54c0
2022-09-03 14:38:00,641 DEBUG [io.fabric8.kubernetes.client.informers.cache.Reflector] (OkHttp https://172.30.0.1/...) Event received MODIFIED ManagedConnector resourceVersion 64538165
2022-09-03 14:38:00,641 INFO  [io.github.lburgazzoli.qw.MyWatcher] (OkHttp https://172.30.0.1/...) Update, Namespace: redhat-openshift-connectors-caor78j7orh5vio9aa40, Name: mctr-cc901q05q3a79e0h54c0
2022-09-03 14:39:36,009 DEBUG [io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager] (OkHttp WebSocket https://172.30.0.1/...) Scheduling reconnect task
2022-09-03 14:39:36,010 DEBUG [io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager] (OkHttp WebSocket https://172.30.0.1/...) Current reconnect backoff is 1000 milliseconds (T0)
2022-09-03 14:39:37,011 DEBUG [io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager] (pool-2-thread-531) Watching https://172.30.0.1:443/apis/cos.bf2.org/v1alpha1/namespaces/redhat-openshift-connectors-caor78j7orh5vio9aa40/managedconnectors?resourceVersion=64538215&allowWatchBookmarks=true&watch=true...
2022-09-03 14:39:37,012 DEBUG [io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager] (pool-2-thread-531) Closing websocket io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl@1354c826
2022-09-03 14:39:37,012 DEBUG [io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager] (pool-2-thread-531) Websocket already closed io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl@1354c826
2022-09-03 14:39:37,012 DEBUG [io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager] (pool-2-thread-531) Closing websocket io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl@1520b41a
2022-09-03 14:39:37,012 DEBUG [io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager] (pool-2-thread-531) Websocket already closed io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl@1520b41a
2022-09-03 14:39:47,023 DEBUG [io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager] (OkHttp https://172.30.0.1/...) Scheduling reconnect task
2022-09-03 14:39:47,023 DEBUG [io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager] (OkHttp https://172.30.0.1/...) Current reconnect backoff is 2000 milliseconds (T1)
2022-09-03 14:39:49,023 DEBUG [io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager] (pool-2-thread-531) Watching https://172.30.0.1:443/apis/cos.bf2.org/v1alpha1/namespaces/redhat-openshift-connectors-caor78j7orh5vio9aa40/managedconnectors?resourceVersion=64538215&allowWatchBookmarks=true&watch=true...
2022-09-03 14:39:49,023 DEBUG [io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager] (pool-2-thread-531) Closing websocket io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl@1354c826
2022-09-03 14:39:49,023 DEBUG [io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager] (pool-2-thread-531) Websocket already closed io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl@1354c826
2022-09-03 14:39:49,075 DEBUG [io.fabric8.kubernetes.client.dsl.internal.WatcherWebSocketListener] (OkHttp https://172.30.0.1/...) WebSocket successfully opened
2022-09-03 14:39:49,075 DEBUG [io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager] (OkHttp https://172.30.0.1/...) Closing websocket io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl@1354c826
2022-09-03 14:39:49,075 DEBUG [io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager] (OkHttp https://172.30.0.1/...) Websocket already closed io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl@1354c826
2022-09-03 14:39:49,075 DEBUG [io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager] (OkHttp https://172.30.0.1/...) Closing websocket io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl@1a9a8eac
2022-09-03 14:39:49,075 DEBUG [io.fabric8.kubernetes.client.informers.cache.Reflector] (OkHttp https://172.30.0.1/...) Watch restarting due to http gone
2022-09-03 14:39:59,079 ERROR [io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager] (OkHttp https://172.30.0.1/...) Unhandled exception encountered in watcher event handler: io.fabric8.kubernetes.client.KubernetesClientException: Operation: [list]  for kind: [GenericKubernetesResource]  with name: [null]  in namespace: [redhat-openshift-connectors-caor78j7orh5vio9aa40]  failed.
	at io.fabric8.kubernetes.client.KubernetesClientException.launderThrowable(KubernetesClientException.java:130)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.listRequestHelper(BaseOperation.java:140)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.list(BaseOperation.java:415)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.list(BaseOperation.java:83)
	at io.fabric8.kubernetes.client.informers.cache.Reflector.listSyncAndWatch(Reflector.java:81)
	at io.fabric8.kubernetes.client.informers.cache.Reflector$ReflectorWatcher.onClose(Reflector.java:165)
	at io.fabric8.kubernetes.client.utils.WatcherToggle.onClose(WatcherToggle.java:56)
	at io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager.close(AbstractWatchManager.java:108)
	at io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager.onStatus(AbstractWatchManager.java:322)
	at io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager.onMessage(AbstractWatchManager.java:289)
	at io.fabric8.kubernetes.client.dsl.internal.WatcherWebSocketListener.onMessage(WatcherWebSocketListener.java:68)
	at io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl$BuilderImpl$1.onMessage(OkHttpWebSocketImpl.java:97)
	at okhttp3.internal.ws.RealWebSocket.onReadMessage(RealWebSocket.java:322)
	at okhttp3.internal.ws.WebSocketReader.readMessageFrame(WebSocketReader.java:219)
	at okhttp3.internal.ws.WebSocketReader.processNextFrame(WebSocketReader.java:105)
	at okhttp3.internal.ws.RealWebSocket.loopReader(RealWebSocket.java:273)
	at okhttp3.internal.ws.RealWebSocket$1.onResponse(RealWebSocket.java:209)
	at okhttp3.RealCall$AsyncCall.execute(RealCall.java:174)
	at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:833)
Caused by: java.net.SocketTimeoutException: Connect timed out
	at java.base/sun.nio.ch.NioSocketImpl.timedFinishConnect(NioSocketImpl.java:546)
	at java.base/sun.nio.ch.NioSocketImpl.connect(NioSocketImpl.java:597)
	at java.base/java.net.SocksSocketImpl.connect(SocksSocketImpl.java:327)
	at java.base/java.net.Socket.connect(Socket.java:633)
	at okhttp3.internal.platform.Platform.connectSocket(Platform.java:130)
	at okhttp3.internal.connection.RealConnection.connectSocket(RealConnection.java:263)
	at okhttp3.internal.connection.RealConnection.connect(RealConnection.java:183)
	at okhttp3.internal.connection.ExchangeFinder.findConnection(ExchangeFinder.java:224)
	at okhttp3.internal.connection.ExchangeFinder.findHealthyConnection(ExchangeFinder.java:108)
	at okhttp3.internal.connection.ExchangeFinder.find(ExchangeFinder.java:88)
	at okhttp3.internal.connection.Transmitter.newExchange(Transmitter.java:169)
	at okhttp3.internal.connection.ConnectInterceptor.intercept(ConnectInterceptor.java:41)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
	at okhttp3.internal.cache.CacheInterceptor.intercept(CacheInterceptor.java:94)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
	at okhttp3.internal.http.BridgeInterceptor.intercept(BridgeInterceptor.java:93)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
	at okhttp3.internal.http.RetryAndFollowUpInterceptor.intercept(RetryAndFollowUpInterceptor.java:88)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
	at io.fabric8.kubernetes.client.okhttp.OkHttpClientBuilderImpl$InteceptorAdapter.intercept(OkHttpClientBuilderImpl.java:62)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
	at io.fabric8.kubernetes.client.okhttp.OkHttpClientBuilderImpl$InteceptorAdapter.intercept(OkHttpClientBuilderImpl.java:62)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
	at io.fabric8.kubernetes.client.okhttp.OkHttpClientBuilderImpl$InteceptorAdapter.intercept(OkHttpClientBuilderImpl.java:62)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
	at io.fabric8.kubernetes.client.okhttp.OkHttpClientBuilderImpl$InteceptorAdapter.intercept(OkHttpClientBuilderImpl.java:62)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:142)
	at okhttp3.internal.http.RealInterceptorChain.proceed(RealInterceptorChain.java:117)
	at okhttp3.RealCall.getResponseWithInterceptorChain(RealCall.java:229)
	at okhttp3.RealCall.execute(RealCall.java:81)
	at io.fabric8.kubernetes.client.okhttp.OkHttpClientImpl.send(OkHttpClientImpl.java:138)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.retryWithExponentialBackoff(OperationSupport.java:574)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:553)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:518)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.handleResponse(OperationSupport.java:502)
	at io.fabric8.kubernetes.client.dsl.base.BaseOperation.listRequestHelper(BaseOperation.java:133)
	... 20 more

2022-09-03 14:39:59,080 DEBUG [io.fabric8.kubernetes.client.dsl.internal.WatcherWebSocketListener] (OkHttp https://172.30.0.1/...) WebSocket close received. code: 1000, reason: 
2022-09-03 14:39:59,080 DEBUG [io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager] (OkHttp https://172.30.0.1/...) Scheduling reconnect task
2022-09-03 14:39:59,080 DEBUG [io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager] (OkHttp https://172.30.0.1/...) Current reconnect backoff is 1000 milliseconds (T0)
2022-09-03 14:40:00,081 DEBUG [io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager] (pool-2-thread-531) Watching https://172.30.0.1:443/apis/cos.bf2.org/v1alpha1/namespaces/redhat-openshift-connectors-caor78j7orh5vio9aa40/managedconnectors?resourceVersion=64538215&allowWatchBookmarks=true&watch=true...
2022-09-03 14:40:00,081 DEBUG [io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager] (pool-2-thread-531) Closing websocket io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl@1354c826
2022-09-03 14:40:00,081 DEBUG [io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager] (pool-2-thread-531) Websocket already closed io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl@1354c826
2022-09-03 14:40:13,302 DEBUG [io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager] (OkHttp https://172.30.0.1/...) Scheduling reconnect task
2022-09-03 14:40:13,302 DEBUG [io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager] (OkHttp https://172.30.0.1/...) Current reconnect backoff is 2000 milliseconds (T1)
2022-09-03 14:40:15,303 DEBUG [io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager] (pool-2-thread-531) Watching https://172.30.0.1:443/apis/cos.bf2.org/v1alpha1/namespaces/redhat-openshift-connectors-caor78j7orh5vio9aa40/managedconnectors?resourceVersion=64538215&allowWatchBookmarks=true&watch=true...
2022-09-03 14:40:15,303 DEBUG [io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager] (pool-2-thread-531) Closing websocket io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl@1354c826
2022-09-03 14:40:15,303 DEBUG [io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager] (pool-2-thread-531) Websocket already closed io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl@1354c826
2022-09-03 14:40:20,352 DEBUG [io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager] (OkHttp https://172.30.0.1/...) Scheduling reconnect task
2022-09-03 14:40:20,352 DEBUG [io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager] (OkHttp https://172.30.0.1/...) Current reconnect backoff is 4000 milliseconds (T2)
2022-09-03 14:40:24,353 DEBUG [io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager] (pool-2-thread-531) Watching https://172.30.0.1:443/apis/cos.bf2.org/v1alpha1/namespaces/redhat-openshift-connectors-caor78j7orh5vio9aa40/managedconnectors?resourceVersion=64538215&allowWatchBookmarks=true&watch=true...
2022-09-03 14:40:24,353 DEBUG [io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager] (pool-2-thread-531) Closing websocket io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl@1354c826
2022-09-03 14:40:24,353 DEBUG [io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager] (pool-2-thread-531) Websocket already closed io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl@1354c826
2022-09-03 14:40:25,554 DEBUG [io.fabric8.kubernetes.client.dsl.internal.WatcherWebSocketListener] (OkHttp https://172.30.0.1/...) WebSocket successfully opened
2022-09-03 14:40:25,555 DEBUG [io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager] (OkHttp https://172.30.0.1/...) Closing websocket io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl@1354c826
2022-09-03 14:40:25,555 DEBUG [io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager] (OkHttp https://172.30.0.1/...) Websocket already closed io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl@1354c826
2022-09-03 14:40:25,555 DEBUG [io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager] (OkHttp https://172.30.0.1/...) Closing websocket io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl@dc0b8b8
2022-09-03 14:40:25,555 DEBUG [io.fabric8.kubernetes.client.informers.cache.Reflector] (OkHttp https://172.30.0.1/...) Watch restarting due to http gone
2022-09-03 14:40:25,620 FINE  [okhttp3.internal.http2.Http2] (OkHttp https://172.30.0.1/...) >> CONNECTION 505249202a20485454502f322e300d0a0d0a534d0d0a0d0a
2022-09-03 14:40:25,620 FINE  [okhttp3.internal.http2.Http2] (OkHttp https://172.30.0.1/...) >> 0x00000000     6 SETTINGS      
2022-09-03 14:40:25,621 FINE  [okhttp3.internal.http2.Http2] (OkHttp https://172.30.0.1/...) >> 0x00000000     4 WINDOW_UPDATE 
2022-09-03 14:40:25,621 FINE  [okhttp3.internal.http2.Http2] (OkHttp 172.30.0.1) << 0x00000000    24 SETTINGS      
2022-09-03 14:40:25,621 FINE  [okhttp3.internal.http2.Http2] (OkHttp https://172.30.0.1/...) >> 0x00000003  1255 HEADERS       END_STREAM|END_HEADERS
2022-09-03 14:40:25,621 FINE  [okhttp3.internal.http2.Http2] (OkHttp 172.30.0.1 ACK Settings) >> 0x00000000     0 SETTINGS      ACK
2022-09-03 14:40:25,627 FINE  [okhttp3.internal.http2.Http2] (OkHttp 172.30.0.1) << 0x00000000     4 WINDOW_UPDATE 
2022-09-03 14:40:25,631 FINE  [okhttp3.internal.http2.Http2] (OkHttp 172.30.0.1) << 0x00000000     0 SETTINGS      ACK
2022-09-03 14:40:34,978 FINE  [okhttp3.internal.http2.Http2] (OkHttp 172.30.0.1) << 0x00000003   191 HEADERS       END_HEADERS
2022-09-03 14:40:34,980 FINE  [okhttp3.internal.http2.Http2] (OkHttp 172.30.0.1) << 0x00000003 16384 DATA          
2022-09-03 14:40:34,980 FINE  [okhttp3.internal.http2.Http2] (OkHttp 172.30.0.1) << 0x00000003 16384 DATA          
2022-09-03 14:40:34,980 FINE  [okhttp3.internal.http2.Http2] (OkHttp 172.30.0.1) << 0x00000003 16384 DATA          
2022-09-03 14:40:34,981 FINE  [okhttp3.internal.http2.Http2] (OkHttp 172.30.0.1) << 0x00000003  4545 DATA          
2022-09-03 14:40:34,981 FINE  [okhttp3.internal.http2.Http2] (OkHttp https://172.30.0.1/...) >> 0x00000003     4 RST_STREAM    
2022-09-03 14:40:34,981 INFO  [io.github.lburgazzoli.qw.MyWatcher] (OkHttp https://172.30.0.1/...) Update, Namespace: redhat-openshift-connectors-caor78j7orh5vio9aa40, Name: mctr-cc6ulk05q3a79e0etcn0
2022-09-03 14:40:34,981 INFO  [io.github.lburgazzoli.qw.MyWatcher] (OkHttp https://172.30.0.1/...) Update, Namespace: redhat-openshift-connectors-caor78j7orh5vio9aa40, Name: mctr-cc7p94g5q3a79e0fofk0
2022-09-03 14:40:34,981 INFO  [io.github.lburgazzoli.qw.MyWatcher] (OkHttp https://172.30.0.1/...) Update, Namespace: redhat-openshift-connectors-caor78j7orh5vio9aa40, Name: mctr-cc7p94o5q3a79e0fofl0
2022-09-03 14:40:34,981 INFO  [io.github.lburgazzoli.qw.MyWatcher] (OkHttp https://172.30.0.1/...) Update, Namespace: redhat-openshift-connectors-caor78j7orh5vio9aa40, Name: mctr-cc88jgg5q3a79e0g8lcg
2022-09-03 14:40:34,981 INFO  [io.github.lburgazzoli.qw.MyWatcher] (OkHttp https://172.30.0.1/...) Update, Namespace: redhat-openshift-connectors-caor78j7orh5vio9aa40, Name: mctr-cc890go5q3a79e0g95qg
2022-09-03 14:40:34,981 INFO  [io.github.lburgazzoli.qw.MyWatcher] (OkHttp https://172.30.0.1/...) Update, Namespace: redhat-openshift-connectors-caor78j7orh5vio9aa40, Name: mctr-cc898s85q3a79e0g9gog
2022-09-03 14:40:34,981 INFO  [io.github.lburgazzoli.qw.MyWatcher] (OkHttp https://172.30.0.1/...) Update, Namespace: redhat-openshift-connectors-caor78j7orh5vio9aa40, Name: mctr-cc899905q3a79e0g9h8g
2022-09-03 14:40:34,981 INFO  [io.github.lburgazzoli.qw.MyWatcher] (OkHttp https://172.30.0.1/...) Update, Namespace: redhat-openshift-connectors-caor78j7orh5vio9aa40, Name: mctr-cc8a7eo5q3a79e0gasc0
2022-09-03 14:40:34,981 INFO  [io.github.lburgazzoli.qw.MyWatcher] (OkHttp https://172.30.0.1/...) Update, Namespace: redhat-openshift-connectors-caor78j7orh5vio9aa40, Name: mctr-cc8b75o5q3a79e0gcc90
2022-09-03 14:40:34,981 INFO  [io.github.lburgazzoli.qw.MyWatcher] (OkHttp https://172.30.0.1/...) Update, Namespace: redhat-openshift-connectors-caor78j7orh5vio9aa40, Name: mctr-cc8pv985q3a79e0gse1g
2022-09-03 14:40:34,981 INFO  [io.github.lburgazzoli.qw.MyWatcher] (OkHttp https://172.30.0.1/...) Update, Namespace: redhat-openshift-connectors-caor78j7orh5vio9aa40, Name: mctr-cc901q05q3a79e0h54c0
2022-09-03 14:40:34,982 DEBUG [io.fabric8.kubernetes.client.informers.cache.Reflector] (OkHttp https://172.30.0.1/...) Listing items (11) for resource class io.fabric8.kubernetes.api.model.GenericKubernetesResource v64539027
2022-09-03 14:40:34,982 DEBUG [io.fabric8.kubernetes.client.informers.cache.Reflector] (OkHttp https://172.30.0.1/...) Starting watcher for resource class io.fabric8.kubernetes.api.model.GenericKubernetesResource v64539027
2022-09-03 14:40:34,982 DEBUG [io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager] (OkHttp https://172.30.0.1/...) Watching https://172.30.0.1:443/apis/cos.bf2.org/v1alpha1/namespaces/redhat-openshift-connectors-caor78j7orh5vio9aa40/managedconnectors?resourceVersion=64539027&allowWatchBookmarks=true&watch=true...
2022-09-03 14:40:34,982 FINE  [okhttp3.internal.http2.Http2] (OkHttp 172.30.0.1) << 0x00000003     0 DATA          END_STREAM
2022-09-03 14:40:34,982 FINE  [okhttp3.internal.http2.Http2] (OkHttp 172.30.0.1 stream 3) >> 0x00000003     4 RST_STREAM    
2022-09-03 14:40:35,109 WARN  [io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager] (OkHttp https://172.30.0.1/...) Exec Failure: HTTP 429, Status: 429 - Too Many Requests
2022-09-03 14:40:35,110 DEBUG [io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager] (OkHttp https://172.30.0.1/...) Force closing the watch io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager@61077d01
2022-09-03 14:40:35,110 DEBUG [io.fabric8.kubernetes.client.informers.cache.Reflector] (OkHttp https://172.30.0.1/...) Watch gracefully closed
2022-09-03 14:40:35,110 ERROR [io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager] (OkHttp https://172.30.0.1/...) Unhandled exception encountered in watcher event handler: io.fabric8.kubernetes.client.KubernetesClientException: Failure executing: GET at: https://172.30.0.1/apis/cos.bf2.org/v1alpha1/namespaces/redhat-openshift-connectors-caor78j7orh5vio9aa40/managedconnectors?resourceVersion=64539027&allowWatchBookmarks=true&watch=true. Message: Too Many Requests.
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:682)
	at io.fabric8.kubernetes.client.dsl.base.OperationSupport.requestFailure(OperationSupport.java:661)
	at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager.lambda$run$2(WatchConnectionManager.java:126)
	at java.base/java.util.concurrent.CompletableFuture.uniHandle(CompletableFuture.java:934)
	at java.base/java.util.concurrent.CompletableFuture$UniHandle.tryFire(CompletableFuture.java:911)
	at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
	at java.base/java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:2162)
	at io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl$BuilderImpl$1.onFailure(OkHttpWebSocketImpl.java:66)
	at okhttp3.internal.ws.RealWebSocket.failWebSocket(RealWebSocket.java:570)
	at okhttp3.internal.ws.RealWebSocket$1.onResponse(RealWebSocket.java:199)
	at okhttp3.RealCall$AsyncCall.execute(RealCall.java:174)
	at okhttp3.internal.NamedRunnable.run(NamedRunnable.java:32)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
	at java.base/java.lang.Thread.run(Thread.java:833)
	Suppressed: java.lang.Throwable: waiting here
		at io.fabric8.kubernetes.client.utils.Utils.waitUntilReady(Utils.java:169)
		at io.fabric8.kubernetes.client.utils.Utils.waitUntilReadyOrFail(Utils.java:180)
		at io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager.waitUntilReady(WatchConnectionManager.java:96)
		at io.fabric8.kubernetes.client.dsl.base.BaseOperation.watch(BaseOperation.java:572)
		at io.fabric8.kubernetes.client.informers.cache.Reflector.startWatcher(Reflector.java:106)
		at io.fabric8.kubernetes.client.informers.cache.Reflector.listSyncAndWatch(Reflector.java:96)
		at io.fabric8.kubernetes.client.informers.cache.Reflector$ReflectorWatcher.onClose(Reflector.java:165)
		at io.fabric8.kubernetes.client.utils.WatcherToggle.onClose(WatcherToggle.java:56)
		at io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager.close(AbstractWatchManager.java:108)
		at io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager.onStatus(AbstractWatchManager.java:322)
		at io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager.onMessage(AbstractWatchManager.java:289)
		at io.fabric8.kubernetes.client.dsl.internal.WatcherWebSocketListener.onMessage(WatcherWebSocketListener.java:68)
		at io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl$BuilderImpl$1.onMessage(OkHttpWebSocketImpl.java:97)
		at okhttp3.internal.ws.RealWebSocket.onReadMessage(RealWebSocket.java:322)
		at okhttp3.internal.ws.WebSocketReader.readMessageFrame(WebSocketReader.java:219)
		at okhttp3.internal.ws.WebSocketReader.processNextFrame(WebSocketReader.java:105)
		at okhttp3.internal.ws.RealWebSocket.loopReader(RealWebSocket.java:273)
		at okhttp3.internal.ws.RealWebSocket$1.onResponse(RealWebSocket.java:209)
		... 5 more

2022-09-03 14:40:35,110 DEBUG [io.fabric8.kubernetes.client.dsl.internal.WatcherWebSocketListener] (OkHttp https://172.30.0.1/...) WebSocket close received. code: 1000, reason: 
2022-09-03 14:40:35,110 DEBUG [io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager] (OkHttp https://172.30.0.1/...) Scheduling reconnect task
2022-09-03 14:40:35,110 DEBUG [io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager] (OkHttp https://172.30.0.1/...) Current reconnect backoff is 1000 milliseconds (T0)
2022-09-03 14:40:36,111 DEBUG [io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager] (pool-2-thread-531) Watching https://172.30.0.1:443/apis/cos.bf2.org/v1alpha1/namespaces/redhat-openshift-connectors-caor78j7orh5vio9aa40/managedconnectors?resourceVersion=64538215&allowWatchBookmarks=true&watch=true...
2022-09-03 14:40:36,111 DEBUG [io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager] (pool-2-thread-531) Closing websocket io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl@1354c826
2022-09-03 14:40:36,111 DEBUG [io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager] (pool-2-thread-531) Websocket already closed io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl@1354c826
2022-09-03 14:40:36,163 WARN  [io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager] (OkHttp https://172.30.0.1/...) Exec Failure: HTTP 429, Status: 429 - Too Many Requests
2022-09-03 14:40:36,164 DEBUG [io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager] (OkHttp https://172.30.0.1/...) Scheduling reconnect task
2022-09-03 14:40:36,164 DEBUG [io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager] (OkHttp https://172.30.0.1/...) Current reconnect backoff is 2000 milliseconds (T1)
2022-09-03 14:40:38,164 DEBUG [io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager] (pool-2-thread-531) Watching https://172.30.0.1:443/apis/cos.bf2.org/v1alpha1/namespaces/redhat-openshift-connectors-caor78j7orh5vio9aa40/managedconnectors?resourceVersion=64538215&allowWatchBookmarks=true&watch=true...
2022-09-03 14:40:38,164 DEBUG [io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager] (pool-2-thread-531) Closing websocket io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl@1354c826
2022-09-03 14:40:38,164 DEBUG [io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager] (pool-2-thread-531) Websocket already closed io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl@1354c826
2022-09-03 14:40:38,219 WARN  [io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager] (OkHttp https://172.30.0.1/...) Exec Failure: HTTP 429, Status: 429 - Too Many Requests
2022-09-03 14:40:38,219 DEBUG [io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager] (OkHttp https://172.30.0.1/...) Scheduling reconnect task
2022-09-03 14:40:38,219 DEBUG [io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager] (OkHttp https://172.30.0.1/...) Current reconnect backoff is 4000 milliseconds (T2)
2022-09-03 14:40:42,219 DEBUG [io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager] (pool-2-thread-531) Watching https://172.30.0.1:443/apis/cos.bf2.org/v1alpha1/namespaces/redhat-openshift-connectors-caor78j7orh5vio9aa40/managedconnectors?resourceVersion=64538215&allowWatchBookmarks=true&watch=true...
2022-09-03 14:40:42,219 DEBUG [io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager] (pool-2-thread-531) Closing websocket io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl@1354c826
2022-09-03 14:40:42,219 DEBUG [io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager] (pool-2-thread-531) Websocket already closed io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl@1354c826
2022-09-03 14:40:42,270 WARN  [io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager] (OkHttp https://172.30.0.1/...) Exec Failure: HTTP 429, Status: 429 - Too Many Requests
2022-09-03 14:40:42,270 DEBUG [io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager] (OkHttp https://172.30.0.1/...) Scheduling reconnect task
2022-09-03 14:40:42,270 DEBUG [io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager] (OkHttp https://172.30.0.1/...) Current reconnect backoff is 8000 milliseconds (T3)
2022-09-03 14:40:43,236 FINE  [okhttp3.internal.http2.Http2] (OkHttp 172.30.0.1) << 0x00000000     8 GOAWAY        
2022-09-03 14:40:50,270 DEBUG [io.fabric8.kubernetes.client.dsl.internal.AbstractWatchManager] (pool-2-thread-531) Watching https://172.30.0.1:443/apis/cos.bf2.org/v1alpha1/namespaces/redhat-openshift-connectors-caor78j7orh5vio9aa40/managedconnectors?resourceVersion=64538215&allowWatchBookmarks=true&watch=true...
2022-09-03 14:40:50,270 DEBUG [io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager] (pool-2-thread-531) Closing websocket io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl@1354c826
2022-09-03 14:40:50,270 DEBUG [io.fabric8.kubernetes.client.dsl.internal.WatchConnectionManager] (pool-2-thread-531) Websocket already closed io.fabric8.kubernetes.client.okhttp.OkHttpWebSocketImpl@1354c826

Additional context

cc. @csviri @metacosm

shawkins (Contributor) commented Sep 5, 2022

This is described in and addressed by #4365

shawkins (Contributor) commented Sep 6, 2022

> This is described in and addressed by #4365

Sorry, I was misremembering the state of the code. In 5.12 it's still the watch manager that handles this retry. Do you see the watch / informer completely stopping, or do you just keep getting Too Many Requests responses for a while? The informer watcher should get unlimited reconnect attempts.

I can't seem to find the existing issue, but in general the client does not handle the 429 response in the backoff loops. That of course can be added, but I don't think it's actually required for what you have posted here.
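
For reference, "handling the 429 response" in a backoff loop usually means honoring the server's Retry-After header rather than only applying exponential backoff. A client-agnostic sketch of that idea using java.net.http (this is not the fabric8 client's current behavior, and the method name is made up for illustration):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class RetryAfterSketch {

    // Retry a GET: on HTTP 429, sleep for the server-suggested Retry-After
    // (numeric seconds form only), otherwise fall back to exponential backoff.
    static HttpResponse<String> getWithRetry(HttpClient http, URI uri, int maxAttempts) throws Exception {
        long backoffMillis = 1000;
        for (int attempt = 1; ; attempt++) {
            HttpResponse<String> response =
                    http.send(HttpRequest.newBuilder(uri).GET().build(), HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() != 429 || attempt >= maxAttempts) {
                return response;
            }
            long waitMillis = response.headers().firstValue("Retry-After")
                    .map(String::trim)
                    .filter(s -> s.chars().allMatch(Character::isDigit)) // ignore HTTP-date values
                    .map(s -> Duration.ofSeconds(Long.parseLong(s)).toMillis())
                    .orElse(backoffMillis);
            Thread.sleep(waitMillis);
            backoffMillis *= 2;
        }
    }
}
```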

manusa added the Waiting on feedback label (issues that require feedback from user/other community members) on Sep 20, 2022
shawkins (Contributor) commented:

I was going to use this issue to work on the 429 logic, but that doesn't seem worth the effort yet. The API server still uses 1 second as the backoff: https://github.com/kubernetes/apiserver/blob/master/pkg/server/filters/maxinflight.go#L38 and kubernetes/kubernetes#79805

stale bot commented Jan 15, 2023

This issue has been automatically marked as stale because it has not had any activity in the last 90 days. It will be closed if no further activity occurs within 7 days. Thank you for your contributions!

stale bot added the status/stale label on Jan 15, 2023
stale bot closed this as completed on Jan 23, 2023