
VM Resource Allocation #11909

Open
xrei opened this issue May 14, 2024 · 2 comments


xrei commented May 14, 2024

Hello, I have a question regarding the configuration of VM resource requests and limits.

Consider the following scenario: I am using a c.4 instance type with a VM specification (where many fields have been omitted, and no other requests or limits are defined). This setup should allocate 4 CPUs and 31.2Gi of memory. However, the output from kubectl describe presents a strange value:

    resources:
      limits:
        devices.kubevirt.io/kvm: "1"
        devices.kubevirt.io/tun: "1"
        devices.kubevirt.io/vhost-net: "1"
      requests:
        cpu: 400m
        devices.kubevirt.io/kvm: "1"
        devices.kubevirt.io/tun: "1"
        devices.kubevirt.io/vhost-net: "1"
        ephemeral-storage: 50M
        memory: 33845097266800m  <-

According to the K8s documentation, the suffix used for the memory unit here is invalid. It appears to represent millibytes, leading to some inconsistencies. Using the VirtualMachineInstancetype, I am unable to accurately retrieve the actual consumption of the VMs. Additionally, after deployment, there is added overhead on memory usage, and limits are not configured. Although I executed a stress command within the VM and it adhered to the specified resource limits, I remain uncertain about this configuration.
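For what it's worth, if the trailing `m` is read as the standard Kubernetes milli suffix, the number does decode to a sensible value. A quick sketch (the ~328Mi gap would presumably be the virtualization overhead KubeVirt adds on top of the guest memory, though that interpretation is my assumption):

```python
# Decode 33845097266800m as milli-bytes, per the Kubernetes quantity
# grammar, and compare it against the 31.2Gi guest memory above.
GI = 1024 ** 3

pod_request_bytes = 33845097266800 / 1000  # strip the milli suffix
guest_bytes = 31.2 * GI                    # memory.guest from the instance type

print(f"pod request: {pod_request_bytes / GI:.2f} Gi")                     # ~31.52 Gi
print(f"difference:  {(pod_request_bytes - guest_bytes) / 2**20:.2f} Mi")  # ~328 Mi
```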

It is crucial for me to understand how many VMs can be scheduled on the cluster before it runs out of resources. Therefore, I need to determine the resources assigned to each VM.

When I manually set requests and limits for a VM, the values and suffixes appear correct, and the node section labeled 'Allocated resources' reflects accurate consumption.

My question is: Is it safe to manually configure the resource section in the spec as follows?

    spec:
      domain:
        resources:
          requests:
            memory: 32Gi
            cpu: 4
          limits:
            memory: 32Gi
            cpu: 4
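As a rough illustration of why the manually set values look clean while the instance-type-derived one does not, here is a minimal, unofficial sketch of how Kubernetes quantity suffixes map to bytes (not the real apimachinery parser; the suffix table is abridged):

```python
# Minimal sketch of Kubernetes quantity parsing. A plain "32Gi" is a whole
# number of bytes, while a fractional value like 31.2Gi can only be
# represented exactly by falling back to the milli ("m") suffix.
SUFFIXES = {
    "m": 1e-3,  # milli: legal in the grammar, but odd for memory
    "k": 1e3, "M": 1e6, "G": 1e9,
    "Ki": 2**10, "Mi": 2**20, "Gi": 2**30,
}

def to_bytes(quantity: str) -> float:
    # Try longer suffixes first so "Gi" wins over "G".
    for suffix, factor in sorted(SUFFIXES.items(), key=lambda kv: -len(kv[0])):
        if quantity.endswith(suffix):
            return float(quantity[: -len(suffix)]) * factor
    return float(quantity)

print(to_bytes("32Gi"))             # 34359738368.0
print(to_bytes("33845097266800m"))  # 33845097266.8
```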

VirtualMachineInstancetype and VirtualMachine specs:

apiVersion: instancetype.kubevirt.io/v1beta1
kind: VirtualMachineInstancetype
metadata:
  name: c.4
  namespace: default
spec:
  cpu:
    guest: 4
  memory:
    guest: 31.2Gi
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  instancetype:
    kind: VirtualMachineInstancetype
    name: c.4
...

Could you please advise on the safety and reliability of manually setting these resource specifications?

Environment:

  • KubeVirt version: v1.2.0
  • Kubernetes version (use kubectl version): v1.28.6
  • OS (e.g. from /etc/os-release): Ubuntu
  • Kernel (e.g. uname -a): 5.15.0-106-generic
@xrei xrei added the kind/bug label May 14, 2024
@aburdenthehand
Contributor

/cc @lyarwood

@lyarwood
Member

However, the output from kubectl describe presents a strange value:
[..]
memory: 33845097266800m <-
[..]

That's odd, it should be Mi IIRC, I'll try to reproduce with your instance type now.

According to the K8s documentation, the suffix used for the memory unit here is invalid. It appears to represent millibytes, leading to some inconsistencies. Using the VirtualMachineInstancetype, I am unable to accurately retrieve the actual consumption of the VMs. Additionally, after deployment, there is added overhead on memory usage, and limits are not configured. Although I executed a stress command within the VM and it adhered to the specified resource limits, I remain uncertain about this configuration.

This came up recently in slack [1] with the answer being to use the AutoResourceLimitsGate feature to apply limits when using instance types [2][3].

$ ./cluster-up/kubectl.sh patch kv/kubevirt -n kubevirt --type merge -p '{"spec":{"configuration":{"developerConfiguration":{"featureGates": ["AutoResourceLimitsGate"]}}}}'

$ ./cluster-up/kubectl.sh apply -f -<<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-example
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 4Gi
    limits.cpu: "8"
    limits.memory: 8Gi
EOF

$ ./cluster-up/virtctl.sh create vm --name fedora --instancetype u1.small --preference fedora --volume-containerdisk src:quay.io/containerdisks/fedora:39,name:fedora | ./cluster-up/kubectl.sh apply -f -

$ ./cluster-up/kubectl.sh get pods/virt-launcher-fedora-z7kbh -o json | jq '.spec.containers[] | select(.name=="compute") | .resources'
selecting podman as container runtime
{
  "limits": {
    "cpu": "1",
    "devices.kubevirt.io/kvm": "1",
    "devices.kubevirt.io/tun": "1",
    "devices.kubevirt.io/vhost-net": "1",
    "memory": "4588Mi"
  },
  "requests": {
    "cpu": "100m",
    "devices.kubevirt.io/kvm": "1",
    "devices.kubevirt.io/tun": "1",
    "devices.kubevirt.io/vhost-net": "1",
    "ephemeral-storage": "50M",
    "memory": "2294Mi"
  }
}
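Side note on the output above (my observation, not stated in the thread): the auto-applied memory limit is exactly double the request, 4588Mi vs 2294Mi, which looks like a fixed limit-to-request ratio of 2 being applied by the feature gate:

```python
# Check the ratio between the auto-applied memory limit and the request
# from the compute container output above.
request_mi = 2294
limit_mi = 4588

assert limit_mi == 2 * request_mi
print(limit_mi / request_mi)  # 2.0
```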

Hopefully this helps!

[1] https://kubernetes.slack.com/archives/C0163DT0R8X/p1713817450084649
[2] https://kubevirt.io/user-guide/virtual_machines/resources_requests_and_limits/#auto-cpu-limits
[3] https://kubevirt.io/user-guide/virtual_machines/resources_requests_and_limits/#auto-memory-limits
