What happened:
This is likely open to interpretation around the goals of dedicatedCpuPlacement, but at present, on hosts without SMT enabled, vCPUs exposed as threads to the guest OS are pinned to non-sibling pCPUs. Users might be surprised by the performance of workloads that span such threads, given what they requested.

What you expected to happen:
The VirtualMachineInstance to not be scheduled until an SMT-enabled compute node is present in the environment.

How to reproduce it (as minimally and precisely as possible):
Uses kubevirt/kubevirtci#1171

Additional context:
N/A

Environment:
- virtctl version: N/A
- kubectl version: N/A
- uname -a: N/A
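For reference, a VirtualMachineInstance that requests dedicated CPUs with a threaded topology might look roughly like this (a minimal sketch; the metadata name and memory request are illustrative):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: smt-vmi   # illustrative name
spec:
  domain:
    cpu:
      sockets: 1
      cores: 2
      threads: 2               # guest sees 2 threads per core
      dedicatedCpuPlacement: true
    resources:
      requests:
        memory: 1Gi
```

On a node without SMT, the two guest threads of each core end up pinned to non-sibling pCPUs rather than hyperthread siblings.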
Indeed, this is open to interpretation. So far, the general intent was to provide dedicated CPUs to the guest; however, the topology assignment was on a best-effort basis. The only exception was guestMappingPassthrough, where we are strict about NUMA node assignment.
We can of course follow up and improve the behavior by further enhancing the dedicatedCPUs API.
I would start by reporting whether SMT is enabled on the nodes.
ACK, thanks for confirming this is a valid thing to fix. It should be easy enough to label a node based on the value in /sys/devices/system/cpu/smt/active and to use that label during scheduling when threads is greater than 1. I'll try to find some time to work on this in the coming weeks.
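The labelling half of that could be sketched as follows; note this is a sketch of the idea, not existing KubeVirt code, and the kubevirt.io/smt-enabled label name is hypothetical:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// smtActivePath is the kernel's sysfs flag: its contents are "1" when
// SMT is active on the host and "0" otherwise.
const smtActivePath = "/sys/devices/system/cpu/smt/active"

// parseSMTActive interprets the contents of the SMT flag file.
func parseSMTActive(contents string) bool {
	return strings.TrimSpace(contents) == "1"
}

// nodeSMTLabel returns the key/value a node-labeller could publish so
// the scheduler can require SMT for VMIs that request threads > 1.
// "kubevirt.io/smt-enabled" is a hypothetical label name.
func nodeSMTLabel(contents string) (key, value string) {
	return "kubevirt.io/smt-enabled", fmt.Sprintf("%t", parseSMTActive(contents))
}

func main() {
	data, err := os.ReadFile(smtActivePath)
	if err != nil {
		// Older kernels may not expose the flag at all.
		fmt.Println("could not read SMT state:", err)
		return
	}
	k, v := nodeSMTLabel(string(data))
	fmt.Printf("%s=%s\n", k, v)
}
```

The scheduling half would then translate threads > 1 into a node-selector term on that label.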
/cc @vladikr