kubevirt-update-validator.kubevirt.io: failed to call webhook... context deadline exceeded #11842

Closed
elijahchanakira opened this issue May 2, 2024 · 4 comments

elijahchanakira commented May 2, 2024

What happened:
I installed the latest version of KubeVirt. Whenever I try to:

  • patch kubevirt
  • delete kubevirt
  • create a VM

I get one of the following errors:
# Modifying Kubevirt
error: kubevirts.kubevirt.io "kubevirt" could not be patched: Internal error occurred: failed calling webhook "kubevirt-update-validator.kubevirt.io": failed to call webhook: Post "https://kubevirt-operator-webhook.kubevirt.svc:443/kubevirt-validate-update?timeout=10s": dial tcp 10.233.32.193:443: i/o timeout

# Creating a VM
Error from server (InternalError): error when creating "vm_spec.yml": Internal error occurred: failed calling webhook "virtualmachines-mutator.kubevirt.io": failed to call webhook: Post "https://virt-api.kubevirt.svc:443/virtualmachines-mutate?timeout=10s": context deadline exceeded
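
For reference, a few diagnostic commands that may help narrow this down (just a sketch; kubevirt is the default install namespace, and the service names are taken from the errors above):

# List the webhook configurations that are failing to be called
kubectl get validatingwebhookconfigurations | grep kubevirt
kubectl get mutatingwebhookconfigurations | grep kubevirt

# Check the pods and service endpoints that should be answering those calls
kubectl -n kubevirt get pods
kubectl -n kubevirt get endpoints kubevirt-operator-webhook virt-api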

What you expected to happen:
I expect the KubeVirt resource to be patched or deleted (and the VM to be created) without the webhook calls timing out.

How to reproduce it (as minimally and precisely as possible):

# Install latest kubevirt version
export KUBEVIRT_VERSION=$(curl -s https://api.github.com/repos/kubevirt/kubevirt/releases | grep tag_name | grep -v -- '-rc' | sort -r | head -1 | awk -F': ' '{print $2}' | sed 's/,//' | xargs)
kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-operator.yaml
kubectl create -f https://github.com/kubevirt/kubevirt/releases/download/${KUBEVIRT_VERSION}/kubevirt-cr.yaml
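
# (Optional sketch, assuming the default install: wait until KubeVirt reports Available before editing)
kubectl -n kubevirt wait kv kubevirt --for condition=Available --timeout=300s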

# After it's deployed
kubectl edit -n kubevirt kubevirt

Additional context:
I also see the following errors whenever I run kubectl:

E0502 19:11:31.316999 2785886 memcache.go:287] couldn't get resource list for upload.cdi.kubevirt.io/v1beta1: the server is currently unable to handle the request
E0502 19:11:31.317521 2785886 memcache.go:287] couldn't get resource list for subresources.kubevirt.io/v1: the server is currently unable to handle the request
E0502 19:11:31.318821 2785886 memcache.go:287] couldn't get resource list for subresources.kubevirt.io/v1alpha3: the server is currently unable to handle the request
E0502 19:11:31.322569 2785886 memcache.go:121] couldn't get resource list for upload.cdi.kubevirt.io/v1beta1: the server is currently unable to handle the request
E0502 19:11:31.323507 2785886 memcache.go:121] couldn't get resource list for subresources.kubevirt.io/v1alpha3: the server is currently unable to handle the request
E0502 19:11:31.326942 2785886 memcache.go:121] couldn't get resource list for subresources.kubevirt.io/v1: the server is currently unable to handle the request
E0502 19:11:31.329505 2785886 memcache.go:121] couldn't get resource list for upload.cdi.kubevirt.io/v1beta1: the server is currently unable to handle the request
E0502 19:11:31.330627 2785886 memcache.go:121] couldn't get resource list for subresources.kubevirt.io/v1: the server is currently unable to handle the request
E0502 19:11:31.331830 2785886 memcache.go:121] couldn't get resource list for subresources.kubevirt.io/v1alpha3: the server is currently unable to handle the request
E0502 19:11:31.334317 2785886 memcache.go:121] couldn't get resource list for upload.cdi.kubevirt.io/v1beta1: the server is currently unable to handle the request
E0502 19:11:31.335110 2785886 memcache.go:121] couldn't get resource list for subresources.kubevirt.io/v1: the server is currently unable to handle the request
E0502 19:11:31.337168 2785886 memcache.go:121] couldn't get resource list for subresources.kubevirt.io/v1alpha3: the server is currently unable to handle the request
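
As far as I understand, those groups are served by aggregated API services (KubeVirt's virt-api for subresources.kubevirt.io and CDI for upload.cdi.kubevirt.io), so their availability can be checked with something like:

# The Available condition should be True for each of these
kubectl get apiservices | grep -E 'kubevirt|cdi'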

I also created a test pod to test DNS lookups/connectivity:

[ root@curl:/ ]$ nslookup kubevirt-operator-webhook
Name:      kubevirt-operator-webhook
Address 1: 10.233.29.84 kubevirt-operator-webhook.kubevirt.svc.cluster.local

[ root@curl:/ ]$ ping kubevirt-operator-webhook
PING kubevirt-operator-webhook (10.233.29.84): 56 data bytes
^C
--- kubevirt-operator-webhook ping statistics ---
19 packets transmitted, 0 packets received, 100% packet loss
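
(ClusterIP services normally don't answer ICMP, so the ping loss on its own isn't conclusive; a TCP-level check from the same pod is probably more telling, e.g.:)

# Hedged suggestion: hit the webhook port directly instead of pinging
curl -kv --max-time 5 https://kubevirt-operator-webhook.kubevirt.svc:443/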

Environment:

  • KubeVirt version (use virtctl version): v1.2.0
  • Kubernetes version (use kubectl version): 1.26.5
  • VM or VMI specifications: N/A
  • Cloud provider or hardware configuration: N/A
  • OS (e.g. from /etc/os-release): Ubuntu 20.04
  • Kernel (e.g. uname -a): Linux
  • Install tools: N/A
  • Others: N/A
@aburdenthehand
Contributor

/cc @fossedihelm

@fossedihelm
Contributor

Hey @elijahchanakira! Thanks for raising the issue. As shown in the support matrix (https://github.com/kubevirt/sig-release/blob/main/releases/k8s-support-matrix.md), the minimum k8s version supported by KubeVirt 1.2 is 1.27.
I see that you are using k8s 1.26.5.
Can you try to use 1.27 and see if the issue is still there?
Thank you

@xpivarc
Member

xpivarc commented May 10, 2024

Also, please share the output of kubectl -n kubevirt get po.

@elijahchanakira
Author

I modified my setup, so I won't be able to reproduce this for the time being. I'll close my ticket for now.
