
update uber zap to 1.16.0 to fix a nil pointer exception #100739

Merged

merged 2 commits into kubernetes:master from pacoxu:update-zap on Apr 11, 2021

Conversation

@pacoxu (Member) commented Apr 1, 2021

What type of PR is this?

/kind bug

What this PR does / why we need it:

I ran into a nil pointer panic in the kubelet log; after investigation, it turned out to be a zap bug that had already been fixed upstream.

Which issue(s) this PR fixes:

runtime error: invalid memory address or nil pointer dereference
goroutine 87459 [running]:
net/http.(*conn).serve.func1(0xc001bb1ae0)
	/usr/local/go/src/net/http/server.go:1801 +0x147
panic(0x40021e0, 0x6fd2280)
	/usr/local/go/src/runtime/panic.go:975 +0x47a
k8s.io/kubernetes/vendor/go.uber.org/zap/zapcore.(*jsonEncoder).AppendDuration(0xc001762e40, 0x44162b7b)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.uber.org/zap/zapcore/json_encoder.go:225 +0x53
k8s.io/kubernetes/vendor/go.uber.org/zap/zapcore.(*jsonEncoder).AddDuration(0xc001762e40, 0x48cdf0f, 0x7, 0x44162b7b)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.uber.org/zap/zapcore/json_encoder.go:123 +0x57
k8s.io/kubernetes/vendor/go.uber.org/zap/zapcore.Field.AddTo(0x48cdf0f, 0x7, 0x8, 0x44162b7b, 0x0, 0x0, 0x0, 0x0, 0x5008fc0, 0xc001762e40)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.uber.org/zap/zapcore/field.go:126 +0x4d6
k8s.io/kubernetes/vendor/go.uber.org/zap/zapcore.addFields(0x5008fc0, 0xc001762e40, 0xc0009bca80, 0x7, 0x7)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.uber.org/zap/zapcore/field.go:199 +0xcf
k8s.io/kubernetes/vendor/go.uber.org/zap/zapcore.(*jsonEncoder).EncodeEntry(0xc000198d80, 0x0, 0xc0117359a339beff, 0x3dac6ac3a2f, 0x70ea0e0, 0x0, 0x0, 0x48c85d6, 0x4, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.uber.org/zap/zapcore/json_encoder.go:364 +0x1f2
k8s.io/kubernetes/vendor/go.uber.org/zap/zapcore.(*ioCore).Write(0xc000198db0, 0x0, 0xc0117359a339beff, 0x3dac6ac3a2f, 0x70ea0e0, 0x0, 0x0, 0x48c85d6, 0x4, 0x0, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.uber.org/zap/zapcore/core.go:86 +0xa9
k8s.io/kubernetes/vendor/go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc001adabb0, 0xc0009bca80, 0x7, 0x7)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/go.uber.org/zap/zapcore/entry.go:215 +0x12d
k8s.io/kubernetes/vendor/k8s.io/component-base/logs/json.(*zapLogger).Info(0xc000192ea0, 0x48c85d6, 0x4, 0xc000d46540, 0xc, 0xc)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/component-base/logs/json/json.go:61 +0x194
k8s.io/kubernetes/vendor/k8s.io/klog/v2.(*loggingT).infoS(0x70ea520, 0x4fd7ee0, 0xc000192ea0, 0x0, 0x0, 0x48c85d6, 0x4, 0xc000d46540, 0xc, 0xc)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:792 +0x8d
k8s.io/kubernetes/vendor/k8s.io/klog/v2.InfoS(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/klog/v2/klog.go:1414
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.WithLogging.func1.1(0xc0002f1030)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:89 +0xa5
k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog.WithLogging.func1(0x4f9b380, 0xc0016edea0, 0xc000a87f00)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/httplog/httplog.go:92 +0x353
net/http.HandlerFunc.ServeHTTP(0xc000110ac0, 0x4f9b380, 0xc0016edea0, 0xc000a87f00)
	/usr/local/go/src/net/http/server.go:2042 +0x44
k8s.io/kubernetes/pkg/kubelet/server.(*Server).ServeHTTP(0xc0001816d0, 0x4f9b380, 0xc0016edea0, 0xc000a87f00)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/server/server.go:932 +0x61c
net/http.serverHandler.ServeHTTP(0xc000d34c40, 0x4f9b380, 0xc0016edea0, 0xc000a87f00)
	/usr/local/go/src/net/http/server.go:2843 +0xa3
net/http.(*conn).serve(0xc001bb1ae0, 0x4f9ef80, 0xc0016a6500)
	/usr/local/go/src/net/http/server.go:1925 +0x8ad
created by net/http.(*Server).Serve
	/usr/local/go/src/net/http/server.go:2969 +0x36c
","v":0}

On the k/k side, the message comes from https://github.com/kubernetes/kubernetes/blob/v1.20.5/staging/src/k8s.io/apiserver/pkg/server/httplog/httplog.go#L89.
It logs the latency argument, which is a time.Duration:
https://github.com/kubernetes/kubernetes/blob/v1.20.5/staging/src/k8s.io/apiserver/pkg/server/httplog/httplog.go#L158-L186
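For illustration, a minimal sketch of that call shape (simplified, with hypothetical field values; not the exact httplog code):

	package main

	import (
		"time"

		"k8s.io/klog/v2"
	)

	func main() {
		defer klog.Flush()

		latency := 250 * time.Millisecond
		// With the JSON logging format, these key/value pairs are handed to
		// the zap-based logger, so the time.Duration value reaches zapcore's
		// AddDuration/AppendDuration, the frames seen in the panic above.
		klog.InfoS("HTTP", "verb", "GET", "latency", latency)
	}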

The error is the same as uber-go/zap#784; uber-go/zap#835 fixed it. From that fix:

The JSON encoder assumes that encoders for time.Time and
time.Duration are always specified, which causes nil pointer
dereference panics.

The test case covering the problem looks like this:

		{
			name:     "duration",
			field:    zap.Duration("bar", time.Microsecond),
			expected: `{"bar": 1000}`,
		},

Another easy fix is to make sure the latency here is never nil; I opened #100742 to fix just the error message:
https://github.com/kubernetes/kubernetes/pull/100742/files
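For context, here is a minimal standalone reproduction of the encoder behavior (an assumed setup built directly on zap, not the component-base wiring):

	package main

	import (
		"os"
		"time"

		"go.uber.org/zap"
		"go.uber.org/zap/zapcore"
	)

	func main() {
		// EncodeDuration (and EncodeTime) are deliberately left nil here.
		cfg := zapcore.EncoderConfig{MessageKey: "msg"}
		core := zapcore.NewCore(
			zapcore.NewJSONEncoder(cfg),
			zapcore.AddSync(os.Stdout),
			zapcore.InfoLevel,
		)
		logger := zap.New(core)

		// zap < 1.16.0: nil pointer dereference inside AppendDuration.
		// zap >= 1.16.0: falls back to nanoseconds, printing something like
		// {"msg":"HTTP","latency":1000}.
		logger.Info("HTTP", zap.Duration("latency", time.Microsecond))
	}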

Special notes for your reviewer:

  1. Update go.mod to require go.uber.org/zap v1.16.0 (see the sketch below).
  2. Run hack/update-vendor.sh.
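For reference, the resulting requirement would look roughly like this (a sketch; k8s.io/kubernetes go.mod also pins dependencies via replace directives):

	require (
		go.uber.org/zap v1.16.0
	)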

Does this PR introduce a user-facing change?

None

@k8s-ci-robot k8s-ci-robot added release-note-none Denotes a PR that doesn't merit a release note. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. kind/bug Categorizes issue or PR as related to a bug. do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. area/apiserver area/cloudprovider area/dependency Issues or PRs related to dependency changes area/kubectl labels Apr 1, 2021
@k8s-ci-robot k8s-ci-robot added sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. sig/cli Categorizes an issue or PR as relevant to SIG CLI. sig/cloud-provider Categorizes an issue or PR as relevant to SIG Cloud Provider. sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. and removed do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Apr 1, 2021
Signed-off-by: pacoxu <paco.xu@daocloud.io>
@pacoxu (Member, Author) commented Apr 1, 2021

As it is indeed a logging problem (JSON encoding):
/sig instrumentation
/cc serathius

@k8s-ci-robot k8s-ci-robot added the sig/instrumentation Categorizes an issue or PR as relevant to SIG Instrumentation. label Apr 1, 2021
@RA489 commented Apr 1, 2021

/test pull-kubernetes-e2e-kind-ipv6

@serathius (Contributor) commented Apr 1, 2021

Thanks for catching this!

Could you add a test in https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/component-base/logs/json/json_test.go#L36 to prevent future regressions?
/milestone v1.21
/priority critical-urgent

@k8s-ci-robot k8s-ci-robot added the priority/critical-urgent Highest priority. Must be actively worked on as someone's top priority right now. label Apr 1, 2021
@k8s-ci-robot k8s-ci-robot added this to the v1.21 milestone Apr 1, 2021
@k8s-ci-robot k8s-ci-robot removed the needs-priority Indicates a PR lacks a `priority/foo` label and requires one. label Apr 1, 2021
@k8s-ci-robot k8s-ci-robot removed this from the v1.21 milestone Apr 2, 2021
@pacoxu (Member, Author) commented Apr 2, 2021

/remove-priority critical-urgent

@k8s-ci-robot k8s-ci-robot removed the priority/critical-urgent Highest priority. Must be actively worked on as someone's top priority right now. label Apr 2, 2021
@pacoxu (Member, Author) commented Apr 8, 2021

/retest
/hold cancel
for 1.22 if needed

@k8s-ci-robot k8s-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Apr 8, 2021
@k8s-ci-robot (Contributor) commented:

@pacoxu: GitHub didn't allow me to assign the following users: liggit.

Note that only kubernetes members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time.
For more information please see the contributor guide

In response to this:

test cases were added. I add some case for time as well as duration.
/assign liggit
for approval

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@pacoxu (Member, Author) commented Apr 9, 2021

Test cases were added; I added some cases for time as well as duration (a sketch follows below).
/assign liggitt
for approval
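A sketch of the kind of regression case described above (illustrative only; the test name and harness are assumptions, not the actual json_test.go code):

	package json_test

	import (
		"bytes"
		"encoding/json"
		"testing"
		"time"

		"go.uber.org/zap"
		"go.uber.org/zap/zapcore"
	)

	func TestDurationAndTimeEncoding(t *testing.T) {
		var buf bytes.Buffer
		core := zapcore.NewCore(
			zapcore.NewJSONEncoder(zapcore.EncoderConfig{MessageKey: "msg"}),
			zapcore.AddSync(&buf),
			zapcore.InfoLevel,
		)
		logger := zap.New(core)

		// Must not panic and must emit valid JSON even though no
		// duration/time encoders were configured; with zap < 1.16.0
		// this call panicked with a nil pointer dereference.
		logger.Info("test", zap.Duration("latency", time.Second), zap.Time("ts", time.Unix(0, 0)))

		if !json.Valid(bytes.TrimSpace(buf.Bytes())) {
			t.Fatalf("output is not valid JSON: %q", buf.String())
		}
	}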

@liggitt (Member) commented Apr 9, 2021

/lgtm
/approve

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Apr 9, 2021
@k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: BenTheElder, dims, liggitt, pacoxu

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Apr 9, 2021
@fejta-bot commented:
/retest
This bot automatically retries jobs that failed/flaked on approved PRs (send feedback to fejta).

Review the full test history for this PR.

Silence the bot with an /lgtm cancel or /hold comment for consistent failures.

1 similar comment

@pacoxu (Member, Author) commented Apr 9, 2021

/retest

@fejta-bot commented:

/retest
(same automated-retry notice as above)

@pacoxu (Member, Author) commented Apr 10, 2021

/skip
/retest

@fejta-bot commented:

/retest
(same automated-retry notice as above; 1 similar comment)

@pacoxu (Member, Author) commented Apr 10, 2021

/hold
Tests seem to be failing consistently.
I will look into it next Monday.

@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Apr 10, 2021
@pacoxu (Member, Author) commented Apr 11, 2021

/hold cancel
/retest

@k8s-ci-robot k8s-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Apr 11, 2021
@fejta-bot commented:

/retest
(same automated-retry notice as above; 1 similar comment)

@pacoxu (Member, Author) commented Apr 11, 2021

/retest

@fejta-bot commented:

/retest
(same automated-retry notice as above; 1 similar comment)

@k8s-ci-robot k8s-ci-robot merged commit 0b0727b into kubernetes:master Apr 11, 2021
@k8s-ci-robot k8s-ci-robot added this to the v1.22 milestone Apr 11, 2021
@pacoxu pacoxu deleted the update-zap branch June 23, 2021 05:40
Labels
approved, area/apiserver, area/cloudprovider, area/dependency, area/kubectl, cncf-cla: yes, kind/bug, lgtm, priority/backlog, release-note-none, sig/api-machinery, sig/cli, sig/cloud-provider, sig/cluster-lifecycle, sig/instrumentation, size/M, triage/accepted