Better support of conversion webhooks in integration tests #1882

Closed
bleech1 opened this issue Apr 28, 2022 · 11 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@bleech1

bleech1 commented Apr 28, 2022

We have developed a CRD and an operator, and are working on creating a new major version of that CRD. We followed the instructions in the kubebuilder book for setting up the webhooks, but the book doesn't describe how to use conversion webhooks in integration tests. We managed to hack together a way to get the conversion webhooks running in our integration tests, but it wasn't easy to do and isn't documented.

Below are the different changes that we needed to make to run our conversion webhook in our integration tests:

1. Point to a different definition of our CRD

The kubebuilder book recommends configuring your envtest environment to use the CRDs located in the /config/crd/bases folder of a project. However, the conversion webhook section of the CRD is added by kustomize, so it doesn't live in that directory. This means that before running our integration tests, we needed to run kustomize, save the output CRDs, and point our envtest Environment at that directory. This is a manual step that would be nice to automate away.
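
For reference, the envtest setup for this step ends up looking roughly like the sketch below (the kustomize-output directory name is just a placeholder for wherever the built CRDs get saved):

  import (
      "path/filepath"

      "sigs.k8s.io/controller-runtime/pkg/envtest"
  )

  // Point envtest at the kustomize-built CRDs (which contain the conversion
  // section) instead of config/crd/bases.
  testEnv := &envtest.Environment{
      CRDDirectoryPaths: []string{
          // Placeholder: wherever the output of
          // `./bin/kustomize build config/default` was saved before the tests.
          filepath.Join("..", "config", "crd", "kustomize-output"),
      },
      ErrorIfCRDPathMissing: true,
  }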

2. Update the kustomized section of our CRD

By default, kustomize is run using the command ./bin/kustomize build config/default to create our CRDs. That adds a section like the following to our CRD:

  conversion:
    strategy: Webhook
    webhook:
      clientConfig:
        service:
          name: webhook-service
          namespace: default
          path: /convert
      conversionReviewVersions:
        - v1beta1
        - v1beta2

However, this doesn't work for running the conversion webhook in our integration tests: the controller runs locally on the machine running the tests, so instead of pointing to a service living inside envtest, the CRD needs to point to our machine. Therefore, we needed to update that section to look like:

  conversion:
    strategy: Webhook
    webhook:
      clientConfig:
        url: "https://localhost:9443/convert"
      conversionReviewVersions:
        - v1beta1
        - v1beta2

This is another manual change that it would be great not to have to figure out and perform.

3. Creating certs for the conversion webhook

The API server requires the conversion webhook to use HTTPS, so we needed to manually create a cert and key signed for the SAN DNS:localhost. We then needed to set the CertDir option on the manager we create in suite_test.go, pointing it at the directory holding our cert and key. Finally, we needed to update the conversion section of our CRD to look like this:

  conversion:
    strategy: Webhook
    webhook:
      clientConfig:
        caBundle: <Base64_Encoded_Cert>
        url: "https://localhost:9443/convert"
      conversionReviewVersions:
        - v1beta1
        - v1beta2
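
The manager setup in suite_test.go for that cert ends up looking roughly like the sketch below (field names as of controller-runtime v0.11.x, where manager.Options still exposes Host, Port, and CertDir directly; cfg and scheme come from the rest of suite_test.go, and certDir is a placeholder for the directory holding the cert and key):

  import (
      ctrl "sigs.k8s.io/controller-runtime"
  )

  // Serve the conversion webhook from the locally running manager on
  // https://localhost:9443, using the manually generated cert.
  mgr, err := ctrl.NewManager(cfg, ctrl.Options{
      Scheme:             scheme,      // must contain every version of the CRD
      Host:               "localhost", // matches the SAN the cert was signed for
      Port:               9443,        // matches the url in the CRD's clientConfig
      CertDir:            certDir,     // placeholder: directory with tls.crt and tls.key
      MetricsBindAddress: "0",         // disable the metrics endpoint in tests
  })
  if err != nil {
      // handle the error
  }
  // The generated SetupWebhookWithManager call for your API types (name is
  // illustrative) then registers the /convert handler on this server.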




It would be great to better support testing conversion webhooks in integration tests, both by adding documentation describing how to set them up and run them, and by automating or otherwise alleviating the manual steps described above. Please let me know if there is any other information I can provide!

@robbie-demuth

Our team is starting to write conversion webhooks for our operator and I stumbled across this. I did a bit of digging, and #1525 makes it seem like steps 2, 3, and maybe 1 should be unnecessary. What version of controller-runtime are you using?

@robbie-demuth

After doing some more digging, I can confirm that this is indeed an issue.

@robbie-demuth

I did a bit more digging, and while there does seem to be code that modifies CRDs for webhook conversion, it hits the continue statement (at least in my case):

https://github.com/kubernetes-sigs/controller-runtime/blob/v0.11.2/pkg/envtest/crd.go#L367-L385

@wreed4

wreed4 commented May 26, 2022

@robbie-demuth I've had it hit that continue because the scheme you use for the tests needs to include both the spokes (the versions your webhook is likely being tested from) and the hub. All versions of your CRDs need to be in the scheme; otherwise the IsConvertible check fails.

But I'm still hitting this issue... not sure if I'm using an outdated version of controller-runtime or not.

@wreed4

wreed4 commented May 26, 2022

I solved it. It's a problem with how kubebuilder generates the code (in my case).

I started debugging, and convertibles seemed to be empty:

  (dlv) p convertibles
  map[k8s.io/apimachinery/pkg/runtime/schema.GroupKind]struct {} []

So I looked at how that was generated, and it was the issue I mentioned above. The check

  if ok, err := conversion.IsConvertible(scheme, obj); ok && err == nil {

fails. But why?! I knew I had to add all my types to the scheme!

It's because the generated code called envtest.Start() before I had built up my scheme. You need to build up the scheme first, and then pass it in with:

		CRDInstallOptions: envtest.CRDInstallOptions{
			Scheme: scheme,
		},

By default it will use scheme.Scheme, so if you choose to use that, you're probably all good, but the generated code uses runtime.NewScheme(). So I opted to simply rearrange the code and pass in the scheme created by the generated code.

TL;DR: make sure all versions of the object you want to convert are added to the correct scheme before calling Start() on the Environment.
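
For completeness, the rearranged setup looks roughly like the sketch below (the example API packages and the crdDir variable are placeholders for your own versions and CRD directory):

  import (
      "k8s.io/apimachinery/pkg/runtime"
      utilruntime "k8s.io/apimachinery/pkg/util/runtime"
      clientgoscheme "k8s.io/client-go/kubernetes/scheme"

      "sigs.k8s.io/controller-runtime/pkg/envtest"

      examplev1beta1 "example.com/my-operator/api/v1beta1" // placeholder module path
      examplev1beta2 "example.com/my-operator/api/v1beta2" // placeholder module path
  )

  // Build the scheme with every version of the CRD (hub and spokes)
  // before calling Start(), then hand it to the CRD installer so the
  // IsConvertible check sees all of the versions.
  scheme := runtime.NewScheme()
  utilruntime.Must(clientgoscheme.AddToScheme(scheme))
  utilruntime.Must(examplev1beta1.AddToScheme(scheme)) // spoke (placeholder package)
  utilruntime.Must(examplev1beta2.AddToScheme(scheme)) // hub (placeholder package)

  testEnv := &envtest.Environment{
      CRDDirectoryPaths: []string{crdDir}, // placeholder: the kustomize-built CRDs
      CRDInstallOptions: envtest.CRDInstallOptions{
          Scheme: scheme, // otherwise envtest falls back to scheme.Scheme
      },
  }
  cfg, err := testEnv.Start()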

Hope that solves your problem too!!

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Aug 24, 2022
@bleech1

bleech1 commented Sep 3, 2022

/remove-lifecycle stale

k8s-ci-robot removed the lifecycle/stale label on Sep 3, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Dec 2, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jan 1, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot closed this as not planned on Feb 1, 2023
@k8s-ci-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
