Support kube scheduler ComponentConfig #197
/cc @ravisantoshgudimetla any idea if we should be working on this? I'm not certain on the status of CC, but I know upstream was trying to get it to GA.
/cc @ingvagabund
There was some discussion in our channel about whether we'll ever actually need to support CC. I asked in sig-scheduling and it seems that yes, someday policy config will be deprecated, but CC is still a long way from GA. So this is something we should keep in mind for our operator, but it's more of a long-term goal.
Upstream tracking issue to deprecate policy in favor of componentconfig: kubernetes/kubernetes#87526
Currently, we observe the scheduler config by giving the scheduler a ConfigMap that has the policy in it (see cluster-kube-scheduler-operator/pkg/operator/configobservation/scheduler/observe_scheduler.go, line 17 at commit 82b5c99).
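For context, the legacy Policy the operator passes through a ConfigMap looks roughly like the sketch below. The ConfigMap name, namespace, and data key are illustrative, not necessarily what the operator actually uses; the `Policy` kind and `apiVersion: v1` are the legacy scheduler policy format.

```yaml
# Minimal sketch of a scheduler policy ConfigMap (illustrative names).
apiVersion: v1
kind: ConfigMap
metadata:
  name: scheduler-policy              # illustrative name
  namespace: openshift-kube-scheduler # illustrative namespace
data:
  policy.cfg: |
    {
      "kind": "Policy",
      "apiVersion": "v1",
      "predicates": [{"name": "PodFitsResources"}],
      "priorities": [{"name": "LeastRequestedPriority", "weight": 1}]
    }
```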
For reference, currently when you try to run a >1.19 kube-scheduler image with our operator, you get an error right away.
Updating our KubeSchedulerConfiguration to v1alpha2 (#245) fixes it, but trying to pass a custom config results in an error, which shows the need to switch from our current policy config to ComponentConfig.
cc @ingvagabund ^ what we talked about
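For comparison, a v1alpha2 KubeSchedulerConfiguration replaces the predicates/priorities lists with per-profile plugin sets. This is a minimal sketch, not the operator's actual rendered config; the disabled plugin is just an example.

```yaml
# Minimal sketch of a v1alpha2 KubeSchedulerConfiguration with one profile.
apiVersion: kubescheduler.config.k8s.io/v1alpha2
kind: KubeSchedulerConfiguration
leaderElection:
  leaderElect: true
profiles:
  - schedulerName: default-scheduler
    plugins:
      score:
        disabled:
          - name: NodeResourcesLeastAllocated # example: disable a score plugin
```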
/priority important-soon
To update this: the legacy Policy config is still available under
PR transitioning us to the new componentconfig: #255
Should also note: the Policy API is scheduled to be removed in 1.23: kubernetes/kubernetes#92143
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now please do so with /close.
/lifecycle stale
/remove-lifecycle stale |
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now please do so with /close.
/lifecycle stale
/close |
@damemi: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@mmgaggle the enhancement should definitely be updated to reflect its current state (it is now implemented). Thank you for pointing this out.
Excellent! I'm experimenting with this right now, but I'm having trouble figuring out how to configure a plugin. It looks like you can adjust the weights via policy, and can enable / disable plugins. What I'm interested in specifically is the ability to configure / use cluster-level default constraints. It's not clear to me if this can be done, and if it can, how I might do that. Any ideas?
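Upstream, cluster-level default spreading constraints are expressed as plugin arguments rather than policy weights. A sketch of what that looks like in a v1beta1 KubeSchedulerConfiguration, assuming a stock upstream kube-scheduler (this is not an OCP operator config):

```yaml
# Sketch: cluster-level default constraints via PodTopologySpread pluginConfig.
apiVersion: kubescheduler.config.k8s.io/v1beta1
kind: KubeSchedulerConfiguration
profiles:
  - schedulerName: default-scheduler
    pluginConfig:
      - name: PodTopologySpread
        args:
          defaultConstraints:
            - maxSkew: 1
              topologyKey: topology.kubernetes.io/zone
              whenUnsatisfiable: ScheduleAnyway
```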
@mmgaggle I'm very sorry that this thread slipped off my radar. To answer your question, though: the cluster-level default constraints are currently not possible with the default kube-scheduler in OCP. If you wish to configure settings that require a custom KubeSchedulerConfiguration in OpenShift, right now your only option is to run a secondary scheduler in the cluster.
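With a secondary scheduler in place, individual pods opt in to it via `spec.schedulerName`. A minimal sketch, where the scheduler name and image are hypothetical:

```yaml
# Pods select a secondary scheduler by name; "my-secondary-scheduler"
# is a hypothetical schedulerName, not a standard OCP component.
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  schedulerName: my-secondary-scheduler
  containers:
    - name: app
      image: registry.example.com/app:latest # placeholder image
```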
I know right now we support policy config through the operator, but I don't know what our status is on componentconfig. So I'm making this issue to track that and look into it.