cannot migrate VMI: PVC golden-pvc is not shared Error on standalone Kubernetes cluster #11737

Open
gururajsrk opened this issue Apr 18, 2024 · 9 comments

I followed the user guide in the link below:
https://kubevirt.io/2019/KubeVirt_storage_rook_ceph.html

kubectl get storageclass vmapplication -o yaml

parameters:
  clusterID: rook-ceph
  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/controller-expand-secret-namespace: rook-ceph
  csi.storage.k8s.io/fstype: ext4
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  imageFeatures: layering
  imageFormat: "2"
  pool: replicapool
provisioner: rook-ceph.rbd.csi.ceph.com
reclaimPolicy: Delete
volumeBindingMode: Immediate

PVC created:
root@b20cc2ad6868:~# kubectl get pvc golden-pvc -o yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    cdi.kubevirt.io/storage.condition.running: "false"
    cdi.kubevirt.io/storage.condition.running.message: Import Complete
    cdi.kubevirt.io/storage.condition.running.reason: Completed
    cdi.kubevirt.io/storage.import.endpoint: https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
    cdi.kubevirt.io/storage.import.importPodName: importer-golden-pvc
    cdi.kubevirt.io/storage.pod.phase: Succeeded
    cdi.kubevirt.io/storage.pod.restarts: "0"
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"cdi.kubevirt.io/storage.import.endpoint":"https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img"},"labels":{"app":"containerized-data-importer"},"name":"golden-pvc","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"20Gi"}},"storageClassName":"vmapplication"}}
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: rook-ceph.rbd.csi.ceph.com
    volume.kubernetes.io/storage-provisioner: rook-ceph.rbd.csi.ceph.com
  creationTimestamp: "2024-04-17T05:23:15Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: containerized-data-importer
  name: golden-pvc
  namespace: default
  resourceVersion: "106742382"
  uid: 20dfa473-fa07-439b-9795-fa291dbab440
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: vmapplication
  volumeMode: Filesystem
  volumeName: pvc-20dfa473-fa07-439b-9795-fa291dbab440
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 20Gi
  phase: Bound

VM creation:

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  annotations:
    kubevirt.io/latest-observed-api-version: v1
    kubevirt.io/storage-observed-api-version: v1alpha3
    meta.helm.sh/release-name: app-release
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
  name: vmapp
  namespace: default
spec:
  running: true
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: vmapp
        kubevirt.io/domain: vmapp
        statusCheck: "true"
    spec:
      domain:
        cpu:
          cores: 8
        devices:
          disks:
          - bootOrder: 2
            disk:
              bus: sata
            name: virtiocontainerdisk
        firmware:
          bootloader:
            efi:
              secureBoot: false
        machine:
          type: q35
        resources:
          limits:
            cpu: "6"
            memory: 6G
          requests:
            memory: 6G
      volumes:
        - name: virtiocontainerdisk
          persistentVolumeClaim:
             claimName: golden-pvc

What happened:
The PVC is not getting attached to the VM.

Status:
  Conditions:
    Last Probe Time:       <nil>
    Last Transition Time:  2024-04-18T06:21:11Z
    Status:                True
    Type:                  Ready
    Last Probe Time:       <nil>
    Last Transition Time:  <nil>
    Message:               cannot migrate VMI: PVC golden-pvc is not shared, live migration requires that all PVCs must be shared (using ReadWriteMany access mode)
    Reason:                DisksNotLiveMigratable
    Status:                False
    Type:                  LiveMigratable
  Created:                 true
  Printable Status:        Running
  Ready:                   true
  Volume Snapshot Statuses:
    Enabled:  false
    Name:     virtiocontainerdisk
    Reason:   No VolumeSnapshotClass: Volume snapshots are not configured for this StorageClass [vmapplication] [virtiocontainerdisk]
Events:
  Type    Reason            Age   From                       Message
  ----    ------            ----  ----                       -------
  Normal  SuccessfulCreate  82s   virtualmachine-controller  Started the virtual machine by creating the new virtual machine instance vmapp

What you expected to happen:
The PVC should get attached to the VM and its disk should be visible.

How to reproduce it (as minimally and precisely as possible):
Using KubeVirt version release-0.58 and CDI version v1.55.2, follow the same procedure as described in the guide linked above.


Environment:
standalone Kubernetes cluster
KubeVirt version: release-0.58
CDI version: v1.55.2
architecture: amd64
bootID: a488f6b5-dd9e-4881-b945-eb00b81dc7f0
containerRuntimeVersion: containerd://1.7.13
kernelVersion: 6.1.74-talos
kubeProxyVersion: v1.29.3
kubeletVersion: v1.29.3
machineID: c6b168e76b12e2d1c3c19bae4457515c
operatingSystem: linux
osImage: Talos (v1.6.4)
systemUUID: 34393350-3837-5a43-3232-343630305959

@vasiliy-ul
Contributor

vasiliy-ul commented Apr 18, 2024

Type    Reason            Age   From                       Message
  ----    ------            ----  ----                       -------
  Normal  SuccessfulCreate  82s   virtualmachine-controller  Started the virtual machine by creating the new virtual machine instance vmapp

Hm... according to the logs, the VM has started successfully. What is wrong exactly? Any other logs/errors?

cannot migrate VMI: PVC golden-pvc is not shared Error on standalone Kubernetes cluster

You need to create an RWX PVC if you want your VM to be migratable, i.e.:

  accessModes:
  - ReadWriteMany

But RWO should be just fine if migration is not needed.
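
For reference, a complete RWX claim could look roughly like this (just a sketch reusing the name, storage class, size, and import endpoint from your issue; adjust to your setup):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: golden-pvc
  namespace: default
  labels:
    app: containerized-data-importer
  annotations:
    cdi.kubevirt.io/storage.import.endpoint: "https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img"
spec:
  accessModes:
  - ReadWriteMany          # shared access mode, required for live migration
  resources:
    requests:
      storage: 20Gi
  storageClassName: vmapplication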

kubevirt version release-0.58

This is an old and unsupported version of KubeVirt. Better to use the latest one, v1.2.0.
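
If it helps, a common way to check the deployed version and upgrade via the operator manifest is roughly the following (just a sketch, assuming the default kubevirt namespace and resource name):

# check the currently deployed KubeVirt version
kubectl get kubevirt kubevirt -n kubevirt -o jsonpath='{.status.observedKubeVirtVersion}'

# upgrade by applying the newer operator manifest
export VERSION=v1.2.0
kubectl apply -f https://github.com/kubevirt/kubevirt/releases/download/${VERSION}/kubevirt-operator.yaml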

@gururajsrk
Author

gururajsrk commented Apr 29, 2024

I did upgrade to v1.2.0; still the same issue as above.

@gururajsrk
Author

I have upgraded to version 1.2.0 (the latest). This is a single-node Kubernetes cluster where migration is not required, yet it is still giving me this error and the disk is not getting attached to the VM.

@vasiliy-ul
Contributor

vasiliy-ul commented May 2, 2024

The message about migration is not an error. It is just a condition showing that migration is not possible with your VM configuration. Could you provide logs of virt-handler and virt-launcher pods?

Also, looking at your PVC, I see that it has volumeMode: Filesystem. I am not a big expert on rook-ceph, but have you tried using volumeMode: Block?
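
I.e., something along these lines in the PVC spec (just a sketch, not verified against your rook-ceph setup):

spec:
  accessModes:
  - ReadWriteMany      # or ReadWriteOnce if migration is not needed
  volumeMode: Block    # expose the RBD image to the VM as a raw block device
  resources:
    requests:
      storage: 20Gi
  storageClassName: vmapplication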

@gururajsrk
Author

gururajsrk commented May 2, 2024

The message about migration is not an error. It is just a condition showing that migration is not possible with your VM configuration. Could you provide logs of virt-handler and virt-launcher pods?

Also, looking at your PVC, I see that it has volumeMode: Filesystem. I am not a big expert of rook-ceph, but have you tried using volumeMode: Block?

virt-handler.txt

virt_launcher.txt

I have enabled verbose logging for virt-handler and virt-launcher.

As per the procedure mentioned, there is no mention of a StorageClass with volumeMode: Block.
Even the Filesystem option should work; in fact, the example uses xfs.

https://kubevirt.io/2019/KubeVirt_storage_rook_ceph.html

One thing I did not understand: why was it checking for migration even though I have only a single node in the Kubernetes cluster?

Thanks in advance.

@vasiliy-ul
Contributor

You can also try creating a simple pod with that PVC and see if that one works. If not, then probably something is missing in the rook-ceph config.
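
For example, something like this (just a sketch; the pod name, image, and mount path are arbitrary):

apiVersion: v1
kind: Pod
metadata:
  name: pvc-test
  namespace: default
spec:
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "ls -la /data && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: golden-pvc

Keep in mind that an RWO PVC may fail to attach to the test pod while the VM pod is still using it.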

As per the procedure mentioned, there is no mention of storageclass with volumemode -> block.
Even filesystem option should work. In fact, in the example they have used xfs.
https://kubevirt.io/2019/KubeVirt_storage_rook_ceph.html

I would still suggest trying to set volumeMode: Block for the PVC. The blog post is already several years old, so I would not expect it to be 100% accurate.

One thing I did not understand, why it was trying to check for migration even though I have only a single node in the Kubernetes cluster.

Even though you may have only a single node now, you could potentially add more to the cluster. KubeVirt evaluates the migration condition based solely on the VM configuration, irrespective of your cluster layout. Again, this is not an error, just a condition status report.

@vasiliy-ul
Contributor

vasiliy-ul commented May 3, 2024

Actually, how do you see that the PVC is not attached?

What happened:
PVC is not getting attached to VM

Do you have the virt-launcher pod running? According to the logs, the VM is running, which means the PVC should be attached.

Could you provide the output of k describe pod virt-launcher and k describe vmi vmapp?
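
I.e. something like this (the exact virt-launcher pod name will differ, but it can be found via the kubevirt.io/domain label):

# find the launcher pod backing the VMI
kubectl get pods -n default -l kubevirt.io/domain=vmapp
# describe the launcher pod and the VMI
kubectl describe pod -n default -l kubevirt.io/domain=vmapp
kubectl describe vmi vmapp -n default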

@vasiliy-ul
Contributor

One more thing I noticed: in your VM YAML you have bootOrder: 2. You do not need to specify a boot order if you have only one disk.

@gururajsrk
Author

One more thing I noticed in your VM yaml you have - bootOrder: 2. You do not need to specify boot order if you have only one disk.

I have removed the bootOrder itself in the YAML file. Still the same issue.

apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vmapp
  namespace: default
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 8
        devices:
          disks:
          - disk:
              bus: sata
            name: virtiocontainerdisk
        machine:
          type: q35
        resources:
          limits:
            cpu: "6"
            memory: 6G
          requests:
            memory: 6G
      volumes:
      - dataVolume:
          name: paloalto-datavolume
        name: virtiocontainerdisk
