Logs with exception connecting to API server #379
Comments
Why do you need to provide a kubernetes_url, in lieu of the plugin using the "well-known" service endpoint?
This says port 6443 but the error message says port 443:
It looks to me as though there is a dependency here that needs to be resolved.
I just set FLUENT_FILTER_KUBERNETES_URL for testing purposes. Sorry, the logs came from two different moments/configs. When I set KUBERNETES_URL to some fqdn:6443, I can see one behavior; when I don't set it, the URL is taken from KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT (443). It seems (I can see it in tcpdump) that three connections are tried (assuming port 443): ::1:443, 127.0.0.1:443, and the real URL (given by KUBERNETES_SERVICE_HOST or FLUENT_FILTER_KUBERNETES_URL); only the one to kubernetes_url succeeds. I can see information like pod labels in Kibana, and at trace level I can see that fluentd fetches data from the API server. Could it be the same problem/config with kubeclient itself? Since kubernetes_url is not localhost, I can't see how the kubernetes_metadata_filter plugin could end up requesting a connection to ::1 or 127.0.0.1.
I don't recognize this environment variable as anything that has ever been honored by this plugin.
Are you running this test from inside the cluster? If not, then you likely need to additionally provide certificates to be able to talk to the API server. The plugin relies upon kubeclient, which discovers the URL and certs based upon the well-known locations of these artifacts. The scheduler will mount the CA and a token for the pod's serviceaccount into the pod.
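The in-cluster discovery described above can be sketched roughly as follows. This is a simplified illustration, not the plugin's actual code: the function name is hypothetical, and the `/api` path suffix may differ from what the plugin builds, but the env vars and serviceaccount mount paths are the standard Kubernetes ones.

```ruby
# Standard in-cluster locations mounted by the kubelet for the pod's
# serviceaccount: the CA used to verify the API server's TLS cert and
# a bearer token for authentication.
SA_DIR = '/var/run/secrets/kubernetes.io/serviceaccount'
# "#{SA_DIR}/ca.crt"  -> TLS CA bundle
# "#{SA_DIR}/token"   -> bearer token

# Hypothetical helper: derive the API server URL the way in-cluster
# discovery typically does, from the well-known env vars.
def in_cluster_api_url(env = ENV)
  host = env['KUBERNETES_SERVICE_HOST']
  port = env['KUBERNETES_SERVICE_PORT']
  # A nil host here is the kind of state the "connect(2) for nil port 443"
  # error message hints at.
  return nil if host.nil? || host.empty?

  "https://#{host}:#{port}/api"
end
```

Inside a pod, `in_cluster_api_url` would return something like `https://10.96.0.1:443/api`; outside the cluster, the env vars are absent and it returns nil, which is why certificates and a URL must then be supplied explicitly.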
From the conf file: so I guess the important thing is that the env var will end up in kubernetes_url.
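The conf snippet itself was not included above. As an illustration only, a daemonset-style config would map the env var into kubernetes_url roughly like this (the match pattern and surrounding parameters are assumptions, not the poster's actual file):

```
<filter kubernetes.**>
  @type kubernetes_metadata
  # Interpolate the env var into the plugin's kubernetes_url parameter;
  # if the var is unset this becomes an empty string and the plugin
  # falls back to in-cluster discovery.
  kubernetes_url "#{ENV['FLUENT_FILTER_KUBERNETES_URL']}"
</filter>
```

This would explain why an env var the plugin itself never reads still changes its behavior: the fluentd config, not the plugin, performs the lookup.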
Yes, it's running inside the cluster, and as I mentioned, the plugin is able to fetch the data from the API server. I can confirm this because I see the Kibana logs with enriched data like labels, and because at trace level I can see the answer from the API server. But it is also raising these exceptions. From tcpdump, the plugin tries to connect to 127.0.0.1 on 443 or 6443, to ::1 on 443 or 6443, and to kubernetes_url; only the last one succeeds. (I'm not aware of anything else on the node making calls to localhost on port 443 or 6443.) Any ideas?
I have no comments. This particular bit of code has not changed in a long time, AFAIK. We have not updated the kube client in a while either, so maybe there is a version mismatch or something there that is starting to introduce issues. This plugin has been used as part of OpenShift Logging without reported issues, but we always set the url (probably unnecessarily) to the well-known service name for the API server.
Hi all,
I would like your help to confirm the following problem.
When using your plugin to enrich logs with k8s info, I am constantly getting the following log:
: #0 [filter_kube_metadata] Exception encountered parsing pod watch event. The connection might have been closed. Sleeping for 1 seconds and resetting the pod watcher.failed to connect: Connection refused - connect(2) for nil port 443
With a tcpdump i could see:
kubernetes_url has https://k8s-master.mycluster.pt:6443.
I even added log.debug "url - #{@kubernetes_url}"
And confirmed it.
But I can also see it working:
2024-02-05 15:40:11 +0000 [trace]: #0 [filter_kube_metadata] raw metadata for central/envoy....
It seems that besides the connection to kubernetes_url, it also tries to connect to localhost over IPv4 and IPv6.
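One hypothesis that fits both the "connect(2) for nil port 443" wording and the tcpdump: if the host handed to Ruby's socket layer is nil, the resolver treats it as the local host and returns loopback addresses, which would produce exactly the ::1 and 127.0.0.1 connection attempts. This is only a sketch of that resolver behavior, not a trace of the plugin's code path:

```ruby
require 'socket'

# With a nil host, getaddrinfo resolves to the loopback interface.
# A watcher that loses its URL mid-reconnect and retries with a nil
# host would therefore show up in tcpdump as attempts on localhost:443.
addrs = Addrinfo.getaddrinfo(nil, 443, nil, :STREAM).map(&:ip_address)
puts addrs.inspect   # typically some ordering of ["::1", "127.0.0.1"]
```

If this is what is happening, the watch-reconnect code would be momentarily dropping the host while keeping the port, which matches the error appearing only around "resetting the pod watcher" while normal metadata fetches to kubernetes_url keep succeeding.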
Do you have any idea about this issue?
fluentd 1.16.3
fluent-plugin-kubernetes_metadata_filter (3.4.0)
kubeclient (4.11.0)
Thanks,
Carlos