
InitFlags does not reset klog configuration if application already did logging #67

Open
losipiuk opened this issue May 22, 2019 · 11 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments

@losipiuk

/kind bug

What steps did you take and what happened:

  1. The application uses klog.Infof().
  2. Logging happens based on klog's default configuration.
  3. The application calls klog.InitFlags(nil) and flag.Parse(). The flags specify a log file.
  4. The application uses klog.Infof() again.
  5. Logs are NOT appended to the output file specified by the flags.

What did you expect to happen:

Ideally, klog should reinitialize itself to the configuration specified via flags in (3), and the log from (4) should be appended to the log file.

Alternatively, klog could disallow calling InitFlags() after any logging has already happened (panic?).
Yet this is suboptimal, as one cannot always prevent logging from happening before InitFlags: logging can occur in the init function of some dependent module.

Anything else you would like to add:

The source of the described problem lies in the behavior of createFiles (https://github.com/kubernetes/klog/blob/master/klog.go#L968). This method is called on the first logging call; the log appenders are created then and put into the logging.files array. Those are not reset later, even if the configuration changes.

Actually, the situation is even more confusing than that. As appenders are created only for log levels up to the requested one, klog ends up in an inconsistent state after the following sequence of events:

  1. klog.Infof
  2. InitFlags
  3. klog.Errorf

Then the appender for INFO will be based on the default configuration, while the appenders for WARNING and ERROR will take the configuration changes from the flags into account.

I spotted the problem while playing with the flag related to the log file, but one can possibly run into issues with other flags too.

@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label May 22, 2019
@losipiuk losipiuk changed the title InitFlags does not reset klog configuration of application already did logging InitFlags does not reset klog configuration if application already did logging May 22, 2019
losipiuk added a commit to losipiuk/autoscaler that referenced this issue May 22, 2019
Having call to algorithmprovider.ApplyFeatureGates() in module init
function was problematic as the method calls out to klog.Infof().
The call to klog.Infof() was then happening before klog related
application flags were parsed which resulted in klog misbehavior.
See kubernetes/klog#67.
@vincepri
Member

Unless I'm mistaken, logs.InitFlags() should always be called first, before any logging happens. I do agree that the behavior you describe is a little confusing, though.

losipiuk added a commit to losipiuk/autoscaler that referenced this issue May 27, 2019
@DirectXMan12

yeah, initflags should always be one of the first things you call in your program.

@losipiuk
Author

yeah, initflags should always be one of the first things you call in your program.

Yeah, and it was in my case. The logging was done deep in the stack, in a function called from one of the modules' init() functions (which run before the main() entrypoint).

@DirectXMan12

ah, hmm... that's problematic

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 2, 2019
@losipiuk
Author

losipiuk commented Sep 2, 2019

/remove-lifecycle stale

Unless fixed, this is still a problematic issue.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 2, 2019

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 1, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 31, 2019
@neolit123
Member

/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Dec 31, 2019
@neolit123
Member

/priority important-longterm

@k8s-ci-robot k8s-ci-robot added the priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. label Dec 31, 2019
@zwpaper
Member

zwpaper commented Jul 18, 2021

I have tested v2.9.0: logging in init() printed to stdout; then, after calling InitFlags and flag.Parse, logging went into the target file given by the --log_file option.

Double-checked in the source code:

klog/klog.go, line 930 in 02ca14b:

if logging.logFile != "" {

It seems we could close this issue now.

@neolit123 @losipiuk

tim-smart pushed a commit to arisechurch/autoscaler that referenced this issue Nov 22, 2022