Bump k8s.io dependencies #5097
Conversation
Hi @lucacome. Thanks for your PR. I'm waiting for a cert-manager member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign @JoshVanL
Thank you for raising this 😁 /ok-to-test
I think this is needed first helm/helm#10928
Fair enough! There's no major rush, we don't have a release for at least a month 👍
/milestone v1.9
Hi @lucacome thanks again for working on this! Do you think you will have time to rebase and update this PR? We're planning to release the last 1.9 pre-release today/tomorrow, and it would be good to get this work in as well!
You should be able to update with ./hack/update-bazel.sh; there shouldn't be a need for another command to update the Bazel files.
/retest
There seems to be at least one error related to helm. I'm not really sure how that can be fixed. @irbekrm feel free to push to this PR if you have any ideas.
Signed-off-by: irbekrm <irbekrm@gmail.com>
Signed-off-by: irbekrm <irbekrm@gmail.com>
As the later version has a breaking change (bumps github.com/emicklei/go-restful -> github.com/emicklei/go-restful/v3) Signed-off-by: irbekrm <irbekrm@gmail.com>
Building now succeeds, but the certificate signing request and gateway unit tests are still failing.
github.com/containerd/containerd => github.com/containerd/containerd v1.5.10
github.com/miekg/dns v1.1.41 => github.com/miekg/dns v1.1.34
go.opentelemetry.io/contrib => go.opentelemetry.io/contrib v0.20.0
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc => go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc v0.20.0
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp => go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.20.0
go.opentelemetry.io/otel => go.opentelemetry.io/otel v0.20.0
go.opentelemetry.io/otel/exporters/otlp => go.opentelemetry.io/otel/exporters/otlp v0.20.0
go.opentelemetry.io/otel/metric => go.opentelemetry.io/otel/metric v0.20.0
go.opentelemetry.io/otel/oteltest => go.opentelemetry.io/otel/oteltest v0.20.0
go.opentelemetry.io/otel/sdk => go.opentelemetry.io/otel/sdk v0.20.0
go.opentelemetry.io/otel/sdk/export/metric => go.opentelemetry.io/otel/sdk/export/metric v0.20.0
go.opentelemetry.io/otel/sdk/metric => go.opentelemetry.io/otel/sdk/metric v0.20.0
go.opentelemetry.io/otel/trace => go.opentelemetry.io/otel/trace v0.20.0
go.opentelemetry.io/proto/otlp => go.opentelemetry.io/proto/otlp v0.7.0
These are needed till kubernetes/kubernetes#106536 gets fixed
As the minimum resync period in client-go is 1s. Also makes sure that the tests don't sleep for 'too long'. Signed-off-by: irbekrm <irbekrm@gmail.com>
Signed-off-by: irbekrm <irbekrm@gmail.com>
/test pull-cert-manager-e2e-feature-gates-disabled
This should now be ready for another lgtm (I cannot do that as I've now also contributed)
-const informerResyncPeriod = time.Millisecond * 10
+const informerResyncPeriod = time.Second
-KUBEBUILDER_TOOLS_linux_amd64_SHA256SUM=25daf3c5d7e8b63ea933e11cd6ca157868d71a12885aba97d1e7e1a15510713e
-KUBEBUILDER_TOOLS_darwin_amd64_SHA256SUM=bb27efb1d2ee43749475293408fc80b923324ab876e5da54e58594bbe2969c42
+KUBEBUILDER_TOOLS_linux_amd64_SHA256SUM=6d9f0a6ab0119c5060799b4b8cbd0a030562da70b7ad4125c218eaf028c6cc28
+KUBEBUILDER_TOOLS_darwin_amd64_SHA256SUM=3367987e2b40dadb5081a92a59d82664bee923eeeea77017ec88daf735e26cae
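The diff above swaps the pinned checksums for the new kubebuilder-tools archives; the download tooling gates on these values by comparing a SHA-256 hex digest of the downloaded file. A self-contained Go sketch of that kind of check (the real scripts are shell-based; here the well-known digest of the empty input stands in for a `KUBEBUILDER_TOOLS_*_SHA256SUM` value, since we obviously can't embed the real archive):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// verifyChecksum compares the SHA-256 of a payload against a pinned
// hex digest, the same kind of gate applied to downloaded
// kubebuilder-tools archives before they are used.
func verifyChecksum(payload []byte, pinned string) bool {
	sum := sha256.Sum256(payload)
	return hex.EncodeToString(sum[:]) == pinned
}

func main() {
	// sha256 of the empty input, a well-known constant, used here as a
	// stand-in for a pinned archive checksum.
	pinned := "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
	fmt.Println(verifyChecksum([]byte{}, pinned))           // matches
	fmt.Println(verifyChecksum([]byte("tampered"), pinned)) // does not match
}
```

Any mismatch means the archive was corrupted or replaced, so bumping the tool version always requires updating the pinned digests in lockstep, as this diff does.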
note: just spotted that they've added darwin-arm64 support so we can update to use that and remove our rosetta-based workaround. I'll raise a separate PR for that, don't think it needs to go in this one!
/lgtm
/approve
/hold
Adding a hold in case anyone else wants to take a look. This seems totally reasonable but I've not dug deeply into the version upgrades or anything!
Thank you to everyone who's been involved in this, looks like it was quite a bit of work!
/unhold
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: jakexks, lucacome, SgtCoDFish. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
This PR:
- Bumps v0.23.4 -> v0.24.2
- Bumps v0.11.1 -> v0.11.2
- Bumps v0.7.0 -> v0.9.2
- Bumps Helm v3.8.1 -> v3.9.0 (needed as only this Helm version is compatible with kube v0.24)
- Updates informerResyncPeriod in the unit test framework to not be less than the minimum resync period (see Slack thread) and bumps the wait time for the test cache initial sync. The tests were failing on this - I am not exactly sure why, as I don't see related changes in kube v0.24, but I assume it's because the wait time was less than the actual cache sync time.

Signed-off-by: Luca Comellini luca.com@gmail.com