
KEDA metrics api server not honoring log level #5139

Open
mrparkers opened this issue Oct 30, 2023 · 14 comments
Labels
bug Something isn't working

Comments

@mrparkers

mrparkers commented Oct 30, 2023

Report

Exact same issue as #3053 and #2316. Trace logs appear in stdout despite running the pod with -v=0.

The trace logs look like this:

I1030 16:06:17.184854       1 trace.go:236] Trace[1893921442]: "List" accept:application/vnd.kubernetes.protobuf, */*,audit-id:7da6ab1a-9d01-4c75-b3fc-11fcbee2f159,client:172.16.123.93,protocol:HTTP/2.0,resource:foo-303xx1-5-3022xx1-5,scope:namespace,url:/apis/external.metrics.k8s.io/v1beta1/namespaces/bar/foo-303xx1-5-3022xx1-5,user-agent:kube-controller-manager/v1.27.6 (linux/amd64) kubernetes/b6911bf/system:serviceaccount:kube-system:horizontal-pod-autoscaler,verb:LIST (30-Oct-2023 16:06:16.651) (total time: 533ms):
Trace[1893921442]: ---"Listing from storage done" 533ms (16:06:17.184)
Trace[1893921442]: [533.696851ms] [533.696851ms] END

Expected Behavior

Trace logs should not show up when -v=0.

Actual Behavior

Trace logs show up when -v=0.

Steps to Reproduce the Problem

  1. Deploy the KEDA metrics API server with -v=0.
  2. Observe the trace logs in stdout.

Logs from KEDA operator

not relevant

KEDA Version

2.12.0

Kubernetes Version

1.27

Platform

Amazon Web Services

Scaler Details

not relevant

Anything else?

No response

@mrparkers mrparkers added the bug Something isn't working label Oct 30, 2023
@JorTurFer
Member

JorTurFer commented Oct 31, 2023

You're right, I can reproduce it. I've discussed this with @dgrisonnet and he will take a look, since the code is part of https://github.com/kubernetes-sigs/custom-metrics-apiserver and it uses https://github.com/kubernetes/utils/blob/master/trace/README.md, which doesn't appear to allow suppressing the traces.

We will update this issue when we have more info.

@dgrisonnet

Looking at the code from the Kubernetes library, this seems to be expected: https://github.com/kubernetes/utils/blob/3b25d923346b/trace/trace.go#L202-L204

The traces are expected to be written no matter what if the request takes longer than the provided 500 millisecond threshold: https://github.com/kubernetes-sigs/custom-metrics-apiserver/blob/master/pkg/apiserver/endpoints/handlers/get.go#L129

That said, I don't think these traces are critical enough to emit at log level 0; since they are debug information, they should only appear at level 4 and higher. We can add a piece of code to custom-metrics-apiserver to do that, but in the meantime I'll revive the discussion in kubernetes/kubernetes#115993.
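
To make the behaviour concrete, here is a minimal sketch of the tracing pattern the linked get.go uses, assuming only k8s.io/utils/trace; handleList and the field values are hypothetical stand-ins, not the actual custom-metrics-apiserver code. The point is that the trace is flushed purely on the elapsed-time threshold:

```go
// Hedged sketch of the handler-side tracing pattern (not the real code).
package main

import (
	"time"

	utiltrace "k8s.io/utils/trace"
)

func handleList(resource, namespace string) {
	trace := utiltrace.New("List",
		utiltrace.Field{Key: "resource", Value: resource},
		utiltrace.Field{Key: "namespace", Value: namespace})
	// In the library version discussed here, LogIfLong decides based on elapsed
	// time against the threshold, not on klog verbosity, which is why the trace
	// shows up even with -v=0.
	defer trace.LogIfLong(500 * time.Millisecond)

	// ... fetch the external metrics (simulated here as a slow call) ...
	time.Sleep(600 * time.Millisecond)
}

func main() {
	handleList("foo-303xx1-5-3022xx1-5", "bar")
}
```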

@zroubalik
Member

Agree with this approach, thanks @dgrisonnet

@JorTurFer
Member

Any update on this?

@dgrisonnet

We reached an agreement in kubernetes/kubernetes#115993 to log the traces at level 2. I will try to implement the fix this week.
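
For illustration only (the actual fix is expected in the trace library itself, via the PR referenced later in this thread), a hedged sketch of what gating on verbosity could look like at a call site, assuming k8s.io/klog/v2 and the hypothetical handler from the earlier sketch:

```go
// Sketch only: gate the trace on klog verbosity so it appears at -v=2 or
// higher, per the agreement above. Not the actual upstream patch.
package main

import (
	"time"

	"k8s.io/klog/v2"
	utiltrace "k8s.io/utils/trace"
)

func handleListGated(resource, namespace string) {
	trace := utiltrace.New("List",
		utiltrace.Field{Key: "resource", Value: resource},
		utiltrace.Field{Key: "namespace", Value: namespace})
	defer func() {
		// Skip the trace entirely unless verbosity is at least 2.
		if klog.V(2).Enabled() {
			trace.LogIfLong(500 * time.Millisecond)
		}
	}()

	// ... handle the request (simulated here as a slow call) ...
	time.Sleep(600 * time.Millisecond)
}

func main() {
	handleListGated("foo-303xx1-5-3022xx1-5", "bar")
}
```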

@zroubalik
Member

Awesome, thanks!


stale bot commented Feb 10, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale All issues that are marked as stale due to inactivity label Feb 10, 2024
@mindw

mindw commented Feb 10, 2024

Still an issue. Thanks!

@stale stale bot removed the stale All issues that are marked as stale due to inactivity label Feb 10, 2024
@JorTurFer
Member

ping @dgrisonnet!

@dgrisonnet

On it. Sorry I didn't follow up on it after we wrapped up the discussion upstream.


stale bot commented Apr 13, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

@stale stale bot added the stale All issues that are marked as stale due to inactivity label Apr 13, 2024
@mindw

mindw commented Apr 14, 2024

keep-alive :)


stale bot commented Apr 23, 2024

This issue has been automatically closed due to inactivity.

@stale stale bot closed this as completed Apr 23, 2024
@zroubalik zroubalik reopened this Apr 23, 2024
@stale stale bot removed the stale All issues that are marked as stale due to inactivity label Apr 23, 2024
@dgrisonnet

I lost track of this one. The PR has been up for a while; I'll try to ping some people on Slack to have a look at it: kubernetes/utils#301
