A single scoring plugin for node resources #98746
@ahg-g: This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the `triage/accepted` label.
There is likely already a sample plugin for this in scheduler-plugins/#36; is this a reimplementation of that plugin?
SGTM. Should we hold the v1beta2 component config to the next release to account for this? #95308 is not merged yet.
I would hold it; any downsides to delaying it?
Not really, we are not under the pressure of API deprecation like REST APIs are.
Ah, I missed this one. The above proposal tries to support every single combination, which I think is overkill. We can make reasonable assumptions that we bake into each strategy to avoid the over-proliferation of options. For example, …
Should we have a KEP for this?
Yes, I was thinking we could target 1.22.
I can put a KEP together in an hour; do we have time to review and merge before the deadline tomorrow? I don't think this needs PRR if we focus this cycle on the config API change to support existing plugins through …; this will help us deprecate the existing plugins in v1beta2 in 1.22 if all goes to plan and the community is happy with the new API.
Fine with me to do a last-minute review, but PRR is a hard requirement now.
OK, let's leave it to 1.22 then.
+1 to this as a longer-term goal; I like exposing scheduler config options in the pod spec because it gives users better control over their app placement.
I opened kubernetes/enhancements#2458
So that's the placeholder for the 1.22 work, right?
I will send a PR in 30 minutes; if it gets reviewed by tomorrow, good, if not we will delay to 1.22.
I sent out kubernetes/enhancements#2461. @Huang-Wei @alculquicondor @damemi the timing is very tight, so please feel free to push back... we may not get a PRR either, so...
Sorry, I don't understand: what is the difference between (LeastAllocatable and BestFit) or (MostAllocatable and WorstFit)?
/assign |
Here is the plan:
I created #102151 to remove the algorithmProvider pkg.
/cc |
/close
This is now done.
@ahg-g: Closing this issue.
We have four score plugins that implement different strategies for preferred resource allocation. Those plugins should not be enabled together; only one should be.

I propose we deprecate those and combine them under one Score plugin, the same one used for filtering (`NodeResourcesFit`), and add a `FitStrategy` parameter to `NodeResourcesFit` that allows users to select which exact scoring strategy to run.

I also suggest that we go ahead and implement four other strategies: `LeastAllocatable`, `MostAllocatable`, `BestFit` and `WorstFit`. The first two are basically a graduation of https://github.com/kubernetes-sigs/scheduler-plugins/tree/master/pkg/noderesources; the latter two are common scheduling placement strategies that k8s currently doesn't support. `BestFit`, for example, is important for achieving higher utilization in clusters that have different VM shapes.
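To make the proposal concrete, here is a rough sketch of how the strategy parameter might surface in the scheduler's component config. The `fitStrategy` argument and its values are taken from this proposal and are hypothetical; the exact field name and shape would be settled in the KEP, while the surrounding structure follows the existing v1beta1 `KubeSchedulerConfiguration`:

```yaml
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
    plugins:
      score:
        enabled:
          # One consolidated plugin would replace the per-strategy plugins.
          - name: NodeResourcesFit
        disabled:
          # Today these are separate score plugins, e.g.:
          - name: NodeResourcesLeastAllocated
          - name: NodeResourcesMostAllocated
    pluginConfig:
      - name: NodeResourcesFit
        args:
          # Hypothetical parameter from this proposal, not a released API.
          # One of: LeastAllocatable | MostAllocatable | BestFit | WorstFit
          fitStrategy: BestFit
```

With a single parameterized plugin, selecting a different placement strategy becomes a one-line config change instead of enabling and disabling whole plugins.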
In the future, I think we can explore adding `FitStrategy` to the pod spec, making it a workload parameter rather than a static scheduler configuration. This can be useful when we have a mix of serving and batch workloads running on the same cluster. Scheduling profiles may be an alternative for this, albeit a less flexible one; see the speculative sketch below.

Previous discussion on this topic: #93547
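Purely as an illustration of that longer-term idea, a pod-level knob might look something like the following. No such field exists in the Pod API today; the `fitStrategy` field name and its placement in the spec are invented for this sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: batch-worker
spec:
  # Hypothetical field, invented for this sketch: a per-workload scoring
  # strategy instead of a cluster-wide scheduler configuration setting.
  fitStrategy: BestFit
  containers:
    - name: worker
      image: example.com/batch-worker:latest
```

The less flexible alternative mentioned above would be one scheduling profile per strategy, with each workload opting in via `schedulerName`.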
/sig scheduling