"cannot migrate VMI: PVC golden-pvc is not shared" error on standalone Kubernetes cluster #11737
Comments
Hm... according to the logs, the VM has started successfully. What is wrong exactly? Any other logs/errors?
You need to create the PVC with accessModes: ReadWriteMany. But this is an old and unsupported version of KubeVirt; better to use the latest one.
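For reference, a minimal sketch of a PVC with ReadWriteMany access, using the `golden-pvc` and `vmapplication` names from this issue (size and volumeMode are assumptions, not the reporter's actual yaml):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: golden-pvc
spec:
  accessModes:
    - ReadWriteMany     # required for live migration; ReadWriteOnce blocks it
  volumeMode: Block     # assumption: block mode, as discussed later in the thread
  resources:
    requests:
      storage: 10Gi     # hypothetical size
  storageClassName: vmapplication
```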
I upgraded to v1.2.0; still the same issue as above.
I have upgraded to version 1.2.0 (latest). On a single-node Kubernetes cluster, where migration is not required, it is giving me this error and the disk is not getting attached to the VM.
The message about migration is not an error. It is just a condition showing that migration is not possible with your VM configuration. Could you provide logs of the virt-handler and virt-launcher pods? Also, looking at your PVC, I see that it has …
I have enabled verbose logging for virt-handler and virt-launcher. In the procedure I followed, there is no mention of a storage class with volumeMode: Block. https://kubevirt.io/2019/KubeVirt_storage_rook_ceph.html One thing I did not understand: why was it trying to check for migration even though I have only a single node in the Kubernetes cluster? Thanks in advance.
You can also try creating a simple pod with that PVC and see if that one works. If not, then probably something is missing in the rook-ceph config.
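A sketch of such a test pod, assuming the PVC is `golden-pvc` in block mode (if it uses Filesystem mode, use `volumeMounts` with a `mountPath` instead of `volumeDevices`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test
spec:
  containers:
    - name: tester
      image: busybox
      command: ["sleep", "3600"]
      volumeDevices:            # a block-mode PVC is exposed as a raw device
        - name: golden
          devicePath: /dev/xvda # hypothetical device path
  volumes:
    - name: golden
      persistentVolumeClaim:
        claimName: golden-pvc
```

If this pod reaches Running, the rook-ceph provisioning and attach path works, and the problem is on the KubeVirt side.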
I would still suggest trying to set …
Even though you may now have a single node, you can potentially add more to the cluster. KubeVirt checks the migration condition based solely on the VM configuration, irrespective of your cluster layout. Again, this is not an error but just a condition status report.
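For illustration, this condition typically appears in the VMI status roughly as follows (a sketch based on the error in the issue title, not output captured from this cluster):

```yaml
status:
  conditions:
    - type: LiveMigratable
      status: "False"
      reason: DisksNotLiveMigratable   # reason string is an assumption
      message: 'cannot migrate VMI: PVC golden-pvc is not shared, live migration
        requires that all PVCs must be shared (using ReadWriteMany access mode)'
```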
Actually, how do you see that the PVC is not attached?
Do you have …? Could you provide the output of …?
One more thing I noticed: in your VM yaml you have …
I have removed the bootOrder itself in the YAML file. Still the same issue.
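For reference, a disk entry without a bootOrder field might look like this fragment of a VMI spec (names are hypothetical):

```yaml
domain:
  devices:
    disks:
      - name: rootdisk      # no bootOrder set; the default boot order applies
        disk:
          bus: virtio
volumes:
  - name: rootdisk
    persistentVolumeClaim:
      claimName: golden-pvc
```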
I followed the user guide mentioned in the link below:
https://kubevirt.io/2019/KubeVirt_storage_rook_ceph.html
kubectl get storageclass vmapplication -o yaml
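The actual output was not captured here; a hedged sketch of what a rook-ceph RBD StorageClass named `vmapplication` might look like (all parameter values are assumptions, not the reporter's configuration):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: vmapplication
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph            # assumption: default rook-ceph namespace
  pool: replicapool               # assumption: pool name from the rook examples
  imageFeatures: layering
  csi.storage.k8s.io/fstype: ext4 # only relevant for Filesystem-mode PVCs
reclaimPolicy: Delete
```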
PVC created:
root@b20cc2ad6868:~# kubectl get pvc golden-pvc -o yaml
VM creation:
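The VM yaml itself was not captured in this scrape; a minimal sketch of a VirtualMachine that attaches the PVC (names, sizes, and image are illustrative, not the reporter's actual manifest):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
            - name: pvcdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi        # hypothetical sizing
      volumes:
        - name: pvcdisk
          persistentVolumeClaim:
            claimName: golden-pvc
```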
What happened:
The PVC is not getting attached to the VM.
What you expected to happen:
The PVC should get attached to the VM and should be visible.
How to reproduce it (as minimally and precisely as possible):
using KubeVirt version release-0.58
using CDI version v1.55.2
follow the same procedure as mentioned in the guide above
Environment:
standalone Kubernetes cluster
KubeVirt version: release-0.58
CDI version: v1.55.2
architecture: amd64
bootID: a488f6b5-dd9e-4881-b945-eb00b81dc7f0
containerRuntimeVersion: containerd://1.7.13
kernelVersion: 6.1.74-talos
kubeProxyVersion: v1.29.3
kubeletVersion: v1.29.3
machineID: c6b168e76b12e2d1c3c19bae4457515c
operatingSystem: linux
osImage: Talos (v1.6.4)
systemUUID: 34393350-3837-5a43-3232-343630305959