
Change at which level klog.Fatal is invoked #94663

Merged: 1 commit into kubernetes:master on Nov 11, 2020

Conversation

@soltysh (Contributor) commented on Sep 9, 2020

What type of PR is this?
/kind bug
/kind cleanup
/kind regression
/sig cli
/priority important-longterm

What this PR does / why we need it:
With klog/v2, and specifically kubernetes/klog#79, we suddenly log an excessive amount of data at level 2. For example, when any command returns an error and was invoked with -v=2, the error is accompanied by a full stack trace:

$ kubectl get --raw=/healthz -v=2
F0909 21:50:59.912233   48492 helpers.go:115] Unable to connect to the server: dial tcp: lookup api.localhost on 127.0.0.1:53: no such host
goroutine 1 [running]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.stacks(0xc00000e001, 0xc0006f8c00, 0xac, 0xfd)
	/workspace/k8s/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:996 +0xb8
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).output(0x2f69600, 0xc000000003, 0x0, 0x0, 0xc00014a000, 0x2d42550, 0xa, 0x73, 0x40a200)
	/workspace/k8s/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:945 +0x19d
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).printDepth(0x2f69600, 0x3, 0x0, 0x0, 0x2, 0xc000555ab0, 0x1, 0x1)
	/workspace/k8s/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:718 +0x15e
k8s.io/kubernetes/vendor/k8s.io/klog/v2.FatalDepth(...)
	/workspace/k8s/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1442
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.fatal(0xc00004c400, 0x7d, 0x1)
	/workspace/k8s/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:93 +0x1e8
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.checkErr(0x1e50140, 0xc00097c1e0, 0x1c7ff98)
	/workspace/k8s/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:188 +0x958
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util.CheckErr(...)
	/workspace/k8s/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/util/helpers.go:115
k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get.NewCmdGet.func1(0xc0002e7b80, 0xc0004737a0, 0x0, 0x2)
	/workspace/k8s/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/cmd/get/get.go:167 +0x151
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute(0xc0002e7b80, 0xc000473760, 0x2, 0x2, 0xc0002e7b80, 0xc000473760)
	/workspace/k8s/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:846 +0x29d
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0xc0000ca840, 0xc00007e180, 0xc00003a080, 0x4)
	/workspace/k8s/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:950 +0x349
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute(...)
	/workspace/k8s/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/spf13/cobra/command.go:887
main.main()
	_output/local/go/src/k8s.io/kubernetes/cmd/kubectl/kubectl.go:49 +0x21d

goroutine 6 [chan receive]:
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).flushDaemon(0x2f69600)
	/workspace/k8s/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1131 +0x8b
created by k8s.io/kubernetes/vendor/k8s.io/klog/v2.init.0
	/workspace/k8s/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:416 +0xd6

goroutine 17 [select]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x1c7fed0, 0x1e4e800, 0xc0005ee000, 0x1, 0xc000054120)
	/workspace/k8s/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x13f
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x1c7fed0, 0x12a05f200, 0x0, 0x1, 0xc000054120)
	/workspace/k8s/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x98
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0x1c7fed0, 0x12a05f200, 0xc000054120)
	/workspace/k8s/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x4d
created by k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs.InitLogs
	/workspace/k8s/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/k8s.io/kubectl/pkg/util/logs/logs.go:51 +0x96

This PR proposes to defer that stack trace to -v=4, since levels 1 and 2 are mostly used for informational purposes and printing full goroutine dumps there seems like overkill. With kubectl prior to 1.19, these traces were not printed at all.
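For illustration, here is a minimal sketch of the kind of change being proposed, assuming the verbosity gate sits in the kubectl fatal() helper that shows up in the trace above (k8s.io/kubectl/pkg/cmd/util/helpers.go). This is a hedged sketch, not the exact upstream diff; the klog/v2 calls (klog.V(...).Enabled(), klog.FatalDepth(...)) exist, but the surrounding helper body is illustrative only:

package util

import (
	"fmt"
	"os"
	"strings"

	"k8s.io/klog/v2"
)

// fatal prints the message (if non-empty) and exits with the given code.
// klog.FatalDepth also dumps the stacks of all goroutines before exiting,
// so it is only invoked above a verbosity threshold; the proposal raises
// that threshold from -v=2 to -v=4.
func fatal(msg string, code int) {
	// ASSUMPTION: gating on klog.V(4) reflects the intent of this PR;
	// the rest of the helper is a sketch, not the upstream code.
	if klog.V(4).Enabled() {
		// FatalDepth logs the message plus goroutine stacks and exits.
		klog.FatalDepth(2, msg)
	}
	if len(msg) > 0 {
		// add a trailing newline if the message does not already end with one
		if !strings.HasSuffix(msg, "\n") {
			msg += "\n"
		}
		fmt.Fprint(os.Stderr, msg)
	}
	os.Exit(code)
}

With a gate like this, kubectl get --raw=/healthz -v=2 would print only the "Unable to connect to the server: ..." line, while -v=4 and above would still include the goroutine dump shown above.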

Special notes for your reviewer:
/assign @pwittrock @seans3

Does this PR introduce a user-facing change?:

Print Go stack traces at -v=4 instead of -v=2

@k8s-ci-robot added the release-note label on Sep 9, 2020
@k8s-ci-robot added the kind/bug, kind/cleanup, size/XS, kind/regression, sig/cli, cncf-cla: yes, and priority/important-longterm labels on Sep 9, 2020
@soltysh (Contributor, Author) commented on Sep 9, 2020

/hold
For longer exposure and agreement on the approach; we'll probably discuss this at the next SIG-CLI meeting, which is in two weeks.

@k8s-ci-robot added the do-not-merge/hold label on Sep 9, 2020
@k8s-ci-robot (Contributor) commented

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: soltysh

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot added the approved label on Sep 9, 2020
@soltysh (Contributor, Author) commented on Nov 10, 2020

/hold cancel

@k8s-ci-robot removed the do-not-merge/hold label on Nov 10, 2020
@soltysh (Contributor, Author) commented on Nov 10, 2020

/retest

@soltysh (Contributor, Author) commented on Nov 10, 2020

/retest

@soltysh (Contributor, Author) commented on Nov 10, 2020

/assign @sallyom

@sallyom (Contributor) commented on Nov 11, 2020

/lgtm

@k8s-ci-robot added the lgtm label on Nov 11, 2020
@fejta-bot commented

/retest
This bot automatically retries jobs that failed/flaked on approved PRs (send feedback to fejta).

Review the full test history for this PR.

Silence the bot with an /lgtm cancel or /hold comment for consistent failures.

@k8s-ci-robot merged commit 1cd2ed8 into kubernetes:master on Nov 11, 2020