Excessive logging when using k8s client #1049

Open

eanveden opened this issue Jan 14, 2022 · 3 comments · May be fixed by #1180

Labels: enhancement (New feature or request)

Comments

@eanveden

In some scenarios (for example, when waiting for pods to become available) a lot of unnecessary log lines are produced, and they cannot be discarded because the k8s module has no support for discarding logs. The culprit I am referring to is this line:

logger.Logf(t, "Configuring Kubernetes client using config file %s with context %s", kubeConfigPath, options.ContextName)
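
For context, this message comes from the client-construction step that every k8s helper performs, so it fires on each call, including each retry attempt inside WaitUntilPodAvailable. The snippet below is only a simplified paraphrase of that pattern built on client-go's clientcmd, not the verbatim terratest source; the function name getClientFromOptions is made up for illustration.

package test

import (
	"github.com/gruntwork-io/terratest/modules/k8s"
	"github.com/gruntwork-io/terratest/modules/logger"
	"github.com/gruntwork-io/terratest/modules/testing"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// getClientFromOptions is a simplified stand-in for the terratest helper that
// builds a clientset from KubectlOptions. Because callers rebuild the client
// on every operation, the Logf line below is emitted on every retry.
func getClientFromOptions(t testing.TestingT, options *k8s.KubectlOptions) (*kubernetes.Clientset, error) {
	logger.Logf(t, "Configuring Kubernetes client using config file %s with context %s", options.ConfigPath, options.ContextName)

	// Load the kubeconfig for the requested context and build the clientset.
	config, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		&clientcmd.ClientConfigLoadingRules{ExplicitPath: options.ConfigPath},
		&clientcmd.ConfigOverrides{CurrentContext: options.ContextName},
	).ClientConfig()
	if err != nil {
		return nil, err
	}
	return kubernetes.NewForConfig(config)
}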

To give an example: when our team tests our Elasticsearch cluster, we first wait for all the replicas to become available:
k8s.WaitUntilPodAvailable(t, options, esOperatorPodName, retries, sleep)
k8s.WaitUntilPodAvailable(t, options, esClusterPodName, retries, sleep)
k8s.WaitUntilPodAvailable(t, options, esKibanaPodName, retries, sleep)

I propose a change (see the output below) where this 'Configuring Kubernetes client using config file /root/.kube/config with context ' message is only logged when the kubectl options are created, or alternatively some kind of log-level option to suppress these messages. I know it seems trivial, but we run a lot of tests in parallel, and in some scenarios we end up with thousands of these lines, making it hard to see what is actually going on in our tests.

...
TestElasticCluster 2022-01-14T13:44:54Z retry.go:91: Wait for pod elasticsearch-es-default-0 to be provisioned.
TestElasticCluster 2022-01-14T13:44:54Z client.go:42: Configuring Kubernetes client using config file /root/.kube/config with context
TestElasticCluster 2022-01-14T13:44:54Z retry.go:103: Wait for pod elasticsearch-es-default-0 to be provisioned. returned an error: Pod elasticsearch-es-default-0 is not available. Sleeping for 5s and will try again.
TestElasticCluster 2022-01-14T13:44:59Z retry.go:91: Wait for pod elasticsearch-es-default-0 to be provisioned.
TestElasticCluster 2022-01-14T13:44:59Z client.go:42: Configuring Kubernetes client using config file /root/.kube/config with context
TestElasticCluster 2022-01-14T13:44:59Z retry.go:103: Wait for pod elasticsearch-es-default-0 to be provisioned. returned an error: Pod elasticsearch-es-default-0 is not available. Sleeping for 5s and will try again.
TestElasticCluster 2022-01-14T13:45:04Z retry.go:91: Wait for pod elasticsearch-es-default-0 to be provisioned.
TestElasticCluster 2022-01-14T13:45:04Z client.go:42: Configuring Kubernetes client using config file /root/.kube/config with context
TestElasticCluster 2022-01-14T13:45:04Z retry.go:103: Wait for pod elasticsearch-es-default-0 to be provisioned. returned an error: Pod elasticsearch-es-default-0 is not available. Sleeping for 5s and will try again.
TestElasticCluster 2022-01-14T13:45:09Z retry.go:91: Wait for pod elasticsearch-es-default-0 to be provisioned.
TestElasticCluster 2022-01-14T13:45:09Z client.go:42: Configuring Kubernetes client using config file /root/.kube/config with context
TestElasticCluster 2022-01-14T13:45:09Z retry.go:103: Wait for pod elasticsearch-es-default-0 to be provisioned. returned an error: Pod elasticsearch-es-default-0 is not available. Sleeping for 5s and will try again.
....
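
To make the second alternative concrete, here is a rough sketch of what a per-options logger could look like from the caller's side. This is purely hypothetical: KubectlOptions has no Logger field today, which is exactly what this issue asks for. Only k8s.NewKubectlOptions, k8s.WaitUntilPodAvailable, and logger.Discard are existing terratest APIs; the namespace, pod name, and retry values are placeholders.

package test

import (
	"testing"
	"time"

	"github.com/gruntwork-io/terratest/modules/k8s"
	"github.com/gruntwork-io/terratest/modules/logger"
)

func TestElasticCluster(t *testing.T) {
	options := k8s.NewKubectlOptions("", "/root/.kube/config", "default")

	// Hypothetical field: if the k8s module routed its messages through a
	// per-options logger instead of the package-level logger.Logf, callers
	// could silence the repeated "Configuring Kubernetes client ..." line.
	options.Logger = logger.Discard // does not exist today; illustrates the request

	retries := 60
	sleep := 5 * time.Second
	k8s.WaitUntilPodAvailable(t, options, "elasticsearch-es-default-0", retries, sleep)
}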

@denis256 added the enhancement (New feature or request) label on Feb 16, 2022
@tao12345666333

I also encountered this problem in the Apache APISIX Ingress Controller project [1]; the large number of useless log lines makes it difficult for us to extract the useful information from them.

  1. https://github.com/apache/apisix-ingress-controller/

@lingsamuel linked a pull request on Sep 20, 2022 that will close this issue
@djsly commented Jan 12, 2023

Thanks @lingsamuel for the PR. When should we expect it to be merged?

@aslafy-z

Any news on this issue?
