E2E: log.SetLogger(...) was never called #1482

Open
lentzi90 opened this issue Mar 1, 2024 · 8 comments
Labels
good first issue: Denotes an issue ready for a new contributor, according to the "help wanted" guidelines.
help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
triage/accepted: Indicates an issue is ready to be actively worked on.

Comments

lentzi90 (Member) commented Mar 1, 2024

What steps did you take and what happened:

A stack trace from controller-runtime appears in our e2e tests, complaining that log.SetLogger(...) was never called.
We should fix this.

What did you expect to happen:

There should be no stack traces or warnings like this in successful tests.

Anything else you would like to add:

Here is the log that can be seen in the e2e tests:

10:44:38    STEP: Waiting for one control plane node to exist @ 03/01/24 03:44:37.924
10:50:29    INFO: Waiting for control plane of cluster metal3/test1 to be ready
10:50:29  [controller-runtime] log.SetLogger(...) was never called; logs will not be displayed.
10:50:29  Detected at:
10:50:29  	>  goroutine 175 [running]:
10:50:29  	>  runtime/debug.Stack()
10:50:29  	>  	/usr/local/go/src/runtime/debug/stack.go:24 +0x5e
10:50:29  	>  sigs.k8s.io/controller-runtime/pkg/log.eventuallyFulfillRoot()
10:50:29  	>  	/home/****/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.2/pkg/log/log.go:60 +0xcd
10:50:29  	>  sigs.k8s.io/controller-runtime/pkg/log.(*delegatingLogSink).WithName(0xc000448b40, {0x264db89, 0x14})
10:50:29  	>  	/home/****/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.2/pkg/log/deleg.go:147 +0x45
10:50:29  	>  github.com/go-logr/logr.Logger.WithName({{0x2a2df08, 0xc000448b40}, 0x0}, {0x264db89?, 0x0?})
10:50:29  	>  	/home/****/go/pkg/mod/github.com/go-logr/logr@v1.4.1/logr.go:345 +0x3d
10:50:29  	>  sigs.k8s.io/controller-runtime/pkg/client.newClient(0xc0002dcf50?, {0x0, 0xc00039acb0, {0x0, 0x0}, 0x0, {0x0, 0x0}, 0x0})
10:50:29  	>  	/home/****/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.2/pkg/client/client.go:129 +0xec
10:50:29  	>  sigs.k8s.io/controller-runtime/pkg/client.New(0x4126e5?, {0x0, 0xc00039acb0, {0x0, 0x0}, 0x0, {0x0, 0x0}, 0x0})
10:50:29  	>  	/home/****/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.17.2/pkg/client/client.go:110 +0x7d
10:50:29  	>  sigs.k8s.io/cluster-api/test/framework.(*clusterProxy).GetClient.func1({0xb2d05e00?, 0xc00036ccc0?})
10:50:29  	>  	/home/****/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.6.2/framework/cluster_proxy.go:204 +0x79
10:50:29  	>  k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func1(0xc0008bd120?, {0x2a27fb0?, 0xc00036ccb0?})
10:50:29  	>  	/home/****/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/loop.go:53 +0x52
10:50:29  	>  k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x2a27fb0, 0xc00036ccb0}, {0x2a1d440?, 0xc0008bd120}, 0x1, 0x0, 0x0?)
10:50:29  	>  	/home/****/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/loop.go:54 +0x117
10:50:29  	>  k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x2a27e98?, 0x3f252c0?}, 0xb2d05e00, 0x1?, 0x0?, 0x1000000000000?)
10:50:29  	>  	/home/****/go/pkg/mod/k8s.io/apimachinery@v0.29.2/pkg/util/wait/poll.go:48 +0x98
10:50:29  	>  sigs.k8s.io/cluster-api/test/framework.(*clusterProxy).GetClient(0xc0003aa480)
10:50:29  	>  	/home/****/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.6.2/framework/cluster_proxy.go:203 +0xbe
10:50:29  	>  sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyCustomClusterTemplateAndWait.setDefaults.func3({_, _}, {{0x2a38738, 0xc0003aa480}, {0xc000baa001, 0x45853, 0x45854}, {0x2634763, 0x5}, {0x26354f9, ...}, ...}, ...)
10:50:29  	>  	/home/****/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.6.2/framework/clusterctl/clusterctl_helpers.go:460 +0x3d
10:50:29  	>  sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyCustomClusterTemplateAndWait({_, _}, {{0x2a38738, 0xc0003aa480}, {0xc000baa001, 0x45853, 0x45854}, {0x2634763, 0x5}, {0x26354f9, ...}, ...}, ...)
10:50:29  	>  	/home/****/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.6.2/framework/clusterctl/clusterctl_helpers.go:424 +0x1110
10:50:29  	>  sigs.k8s.io/cluster-api/test/framework/clusterctl.ApplyClusterTemplateAndWait({_, _}, {{0x2a38738, 0xc0003aa480}, {{0x0, 0x0}, {0xc000548d1d, 0x47}, {0xc000548d65, 0x1b}, ...}, ...}, ...)
10:50:29  	>  	/home/****/go/pkg/mod/sigs.k8s.io/cluster-api/test@v1.6.2/framework/clusterctl/clusterctl_helpers.go:319 +0x994
10:50:29  	>  github.com/metal3-io/cluster-api-provider-metal3/test/e2e.createTargetCluster({0xc000056173, 0x7})
10:50:29  	>  	/home/****/tested_repo/test/e2e/pivoting_based_feature_test.go:186 +0x3a5
10:50:29  	>  github.com/metal3-io/cluster-api-provider-metal3/test/e2e.glob..func6.1()
10:50:29  	>  	/home/****/tested_repo/test/e2e/integration_test.go:28 +0xa7
10:50:29  	>  github.com/onsi/ginkgo/v2/internal.extractBodyFunction.func3({0x2a2daf0, 0xc000900420})
10:50:29  	>  	/home/****/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.15.0/internal/node.go:463 +0x13
10:50:29  	>  github.com/onsi/ginkgo/v2/internal.(*Suite).runNode.func3()
10:50:29  	>  	/home/****/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.15.0/internal/suite.go:889 +0x8d
10:50:29  	>  created by github.com/onsi/ginkgo/v2/internal.(*Suite).runNode in goroutine 10
10:50:29  	>  	/home/****/go/pkg/mod/github.com/onsi/ginkgo/v2@v2.15.0/internal/suite.go:876 +0xddb
10:50:29    INFO: Waiting for control plane metal3/test1 to be ready (implies underlying nodes to be ready as well)

/kind bug
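
For context, controller-runtime prints this warning (and the stack trace) when one of its clients is created before a logger has been registered via log.SetLogger, which is exactly the path shown above (client.New -> WithName -> eventuallyFulfillRoot). A likely fix is to register a logger once, early in the e2e suite setup, before the first cluster proxy/client is built. Below is a minimal sketch assuming a Ginkgo suite entry point; the package, function name, and suite title are illustrative and not taken from this repository:

package e2e

import (
	"testing"

	. "github.com/onsi/ginkgo/v2"
	. "github.com/onsi/gomega"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
)

// TestE2E is a hypothetical suite entry point; the real suite may wire this up elsewhere.
func TestE2E(t *testing.T) {
	RegisterFailHandler(Fail)

	// Register a logger with controller-runtime before the first client is created,
	// so client.New() no longer hits the "log.SetLogger(...) was never called" path.
	ctrl.SetLogger(zap.New(zap.WriteTo(GinkgoWriter), zap.UseDevMode(true)))

	RunSpecs(t, "CAPM3 e2e suite")
}

Routing the logger to GinkgoWriter keeps controller-runtime output attached to the spec that produced it instead of interleaving it with unrelated test output.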

metal3-io-bot added the kind/bug and needs-triage labels Mar 1, 2024
Rozzii (Member) commented Mar 6, 2024

/triage accepted
/help

metal3-io-bot (Contributor) commented:

@Rozzii:
This request has been marked as needing help from a contributor.

Please ensure the request meets the requirements listed here.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.

In response to this:

/triage accepted
/help

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

metal3-io-bot added the triage/accepted and help wanted labels and removed the needs-triage label Mar 6, 2024
Rozzii added the good first issue and needs-triage labels and removed the help wanted and triage/accepted labels Mar 6, 2024
Rozzii (Member) commented Mar 6, 2024

/triage accepted
/help

metal3-io-bot (Contributor) commented:

@Rozzii:
This request has been marked as needing help from a contributor.

Please ensure the request meets the requirements listed here.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.

In response to this:

/triage accepted
/help

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

metal3-io-bot added the triage/accepted and help wanted labels and removed the needs-triage label Mar 6, 2024
lekaf974 (Contributor) commented:

Seeing a similar issue here: kubernetes-sigs/controller-runtime#2622

metal3-io-bot (Contributor) commented:
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues will close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

/lifecycle stale

metal3-io-bot added the lifecycle/stale label Jun 10, 2024
Rozzii (Member) commented Jun 11, 2024

/remove-lifecycle stale

metal3-io-bot removed the lifecycle/stale label Jun 11, 2024
Rozzii (Member) commented Jun 11, 2024

/lifecycle frozen

metal3-io-bot added the lifecycle/frozen label Jun 11, 2024