In some scenarios (for example, when waiting for pods to become available) the k8s module produces a lot of unnecessary log output that cannot be discarded, since the module has no support for suppressing its logs. The culprit is this line in terratest/modules/k8s/client.go (it shows up as client.go:42 in the output below):

```go
logger.Logf(t, "Configuring Kubernetes client using config file %s with context %s", kubeConfigPath, options.ContextName)
```
To give you an example: when our team is testing our Elastic cluster, we first wait for all the replicas to become available:

```go
k8s.WaitUntilPodAvailable(t, options, esOperatorPodName, retries, sleep)
k8s.WaitUntilPodAvailable(t, options, esClusterPodName, retries, sleep)
k8s.WaitUntilPodAvailable(t, options, esKibanaPodName, retries, sleep)
```
I propose a change (see the output below) where this 'Configuring Kubernetes client using config file /root/.kube/config with context ' message is logged only once, when the kubectl options are created. Alternatively, the options could expose some kind of log level to filter out these messages. I know it seems trivial, but we run many tests in parallel, and in some scenarios we end up with thousands of these lines, making it hard to see what is actually going on in our tests.
```
...
TestElasticCluster 2022-01-14T13:44:54Z retry.go:91: Wait for pod elasticsearch-es-default-0 to be provisioned.
TestElasticCluster 2022-01-14T13:44:54Z client.go:42: Configuring Kubernetes client using config file /root/.kube/config with context
TestElasticCluster 2022-01-14T13:44:54Z retry.go:103: Wait for pod elasticsearch-es-default-0 to be provisioned. returned an error: Pod elasticsearch-es-default-0 is not available. Sleeping for 5s and will try again.
TestElasticCluster 2022-01-14T13:44:59Z retry.go:91: Wait for pod elasticsearch-es-default-0 to be provisioned.
TestElasticCluster 2022-01-14T13:44:59Z client.go:42: Configuring Kubernetes client using config file /root/.kube/config with context
TestElasticCluster 2022-01-14T13:44:59Z retry.go:103: Wait for pod elasticsearch-es-default-0 to be provisioned. returned an error: Pod elasticsearch-es-default-0 is not available. Sleeping for 5s and will try again.
TestElasticCluster 2022-01-14T13:45:04Z retry.go:91: Wait for pod elasticsearch-es-default-0 to be provisioned.
TestElasticCluster 2022-01-14T13:45:04Z client.go:42: Configuring Kubernetes client using config file /root/.kube/config with context
TestElasticCluster 2022-01-14T13:45:04Z retry.go:103: Wait for pod elasticsearch-es-default-0 to be provisioned. returned an error: Pod elasticsearch-es-default-0 is not available. Sleeping for 5s and will try again.
TestElasticCluster 2022-01-14T13:45:09Z retry.go:91: Wait for pod elasticsearch-es-default-0 to be provisioned.
TestElasticCluster 2022-01-14T13:45:09Z client.go:42: Configuring Kubernetes client using config file /root/.kube/config with context
TestElasticCluster 2022-01-14T13:45:09Z retry.go:103: Wait for pod elasticsearch-es-default-0 to be provisioned. returned an error: Pod elasticsearch-es-default-0 is not available. Sleeping for 5s and will try again.
...
```
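One way the log-level idea could look, hedged as a rough sketch rather than terratest's actual API (the `LogLevel` type, the level constants, and this `Logf` signature are all hypothetical): a threshold carried on the logger lets per-retry chatter be demoted to a debug level that is filtered out by default.

```go
package main

import "fmt"

// LogLevel is a hypothetical severity for log messages; terratest's real
// logger has no such concept today, which is the point of this proposal.
type LogLevel int

const (
	LevelDebug LogLevel = iota
	LevelInfo
)

// Logger collects formatted messages at or above its configured Level.
type Logger struct {
	Level LogLevel
	Lines []string
}

// Logf records the message only when its level passes the threshold, so
// chatty per-retry messages can be demoted to Debug and dropped.
func (l *Logger) Logf(level LogLevel, format string, args ...interface{}) {
	if level < l.Level {
		return
	}
	l.Lines = append(l.Lines, fmt.Sprintf(format, args...))
}

func main() {
	log := &Logger{Level: LevelInfo}
	// Demoted to Debug: would be suppressed on every retry of a wait loop.
	log.Logf(LevelDebug, "Configuring Kubernetes client using config file %s with context %s", "/root/.kube/config", "")
	// Normal progress messages still appear.
	log.Logf(LevelInfo, "Wait for pod %s to be provisioned.", "elasticsearch-es-default-0")
	for _, line := range log.Lines {
		fmt.Println(line)
	}
}
```

With something along these lines, the client-configuration message could be emitted at the debug level, so it would disappear from test output unless a test explicitly opts in.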
I also encountered this problem in the Apache APISIX Ingress controller project[1]; the large number of useless log lines makes it difficult for us to extract useful information.
(The line in question: terratest/modules/k8s/client.go, line 42 at commit f4f2459.)