support kubernetes recommended labels #298

Open · wants to merge 5 commits into main
Conversation

@dxma commented Jul 23, 2022

Background:

When using the dask helm chart as a subchart in my setup, I found that it uses a different set of labels. This confuses some Kubernetes services such as istio and kiali.

Changes:

This PR adds the option to use the Kubernetes recommended label set.
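For example, opting in could look like this in values.yaml (a minimal sketch based on the `label.style` parameter added in this PR; the nesting follows standard Helm values conventions):

```yaml
# values.yaml: opt in to the Kubernetes recommended label set
# (the default, "helm", keeps the chart's current labels)
label:
  style: kubernetes
```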

Output:

All labels:

```yaml
helm.sh/chart: dask-0.0.1-set.by.chartpress
app.kubernetes.io/name: dask-jupyter
app.kubernetes.io/instance: dask-jupyter
app.kubernetes.io/component: jupyter
app.kubernetes.io/version: "2022.6.1"
app.kubernetes.io/part-of: dask
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/created-by: dask-0.0.1-set.by.chartpress
```

Selector labels:

```yaml
app.kubernetes.io/name: dask-jupyter
app.kubernetes.io/instance: dask-jupyter
app.kubernetes.io/component: jupyter
```
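For illustration, a named template along these lines could render that set (a hypothetical sketch: the helper name `dask.labels` and the exact template layout are my assumptions, not necessarily what this PR implements):

```yaml
{{/* _helpers.tpl (sketch): emit the Kubernetes recommended labels when
     label.style is "kubernetes", otherwise fall back to the classic labels */}}
{{- define "dask.labels" -}}
{{- if eq .Values.label.style "kubernetes" -}}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version }}
app.kubernetes.io/name: {{ .Release.Name }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
app.kubernetes.io/part-of: {{ .Chart.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- else -}}
app: {{ .Chart.Name }}
release: {{ .Release.Name | quote }}
{{- end -}}
{{- end -}}
```

The per-resource `app.kubernetes.io/component` label (e.g. `jupyter` above) would be added by each resource template rather than by a shared helper.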

I haven't applied this option to daskhub yet.

Cheers,
Dongxu

@jacobtomlinson (Member) left a comment

This is great, thanks for taking the time here. One small comment.

@@ -60,6 +60,7 @@ The following table lists the configurable parameters of the Dask chart and thei
| Parameter | Description | Default |
| ------------------------ | ----------------------- | -------------- |
| `scheduler.name` | Dask scheduler name. | `"scheduler"` |
| `scheduler.component` | | `"scheduler"` |
@jacobtomlinson (Member)

I'm curious why these component options are configurable? What use cases do you see for changing them?

@dxma (Author)

Hi, mainly because the values 'scheduler'/'worker'/'jupyter' are too generic :-) Think of making dask part of a larger system: there might already be existing deployments using one of these component values. In that case, people might prefer to change it to something like 'dask-scheduler' instead.
My $0.02

@jacobtomlinson (Member)

Fair enough. I'm just on guard for config creep, especially for things that won't get used much. I haven't seen a need for this.

If you think this will come up for folks and isn't just a "what if" then I'm happy for it to go in.

@@ -150,6 +155,7 @@ The following table lists the configurable parameters of the Dask chart and thei
| `jupyter.ingress.pathType` | set pathType in ingress | `"Prefix"` |
| `jupyter.ingress.hostname` | Ingress hostname. | `"dask-jupyter.example.com"` |
| `jupyter.ingress.annotations` | | `{}` |
| `label.style` | Label style: `helm` or `kubernetes`. | `"helm"` |
@jacobtomlinson (Member)

I like this.

@consideRatio (Collaborator) commented

When changing the selector labels of a deployment/statefulset resource, upgrades will fail: the selector labels of a deployment/statefulset resource are immutable.

This can still be done by manually deleting the deployment/statefulset resource ahead of the helm upgrade.

When deleting the deployment, one can declare that the resources created for it (the replicasets that in turn create pods) should remain behind as orphans instead of being deleted, which is the default. That makes sense if the new deployment's selector labels would select the old labels; otherwise it does not.
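Concretely, the manual workaround could look like this (the deployment name `dask-scheduler` and release name `dask` are hypothetical):

```sh
# Delete the deployment but leave its replicasets/pods behind as orphans
# (--cascade=orphan needs kubectl >= 1.20; the default cascade deletes them)
kubectl delete deployment dask-scheduler --cascade=orphan

# Then run the upgrade so the deployment is recreated with the new selector
helm upgrade dask dask/dask --set label.style=kubernetes
```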

Since there is no changelog etc. for the charts, this is problematic in my mind, as it is a breaking change that is troublesome to document, especially since we use calver and can't indicate a breaking change via the version number either.

A non-breaking change would be to add the new labels next to the old ones in pod definitions, without changing the selector labels. I proposed this once in the jupyterhub helm chart, but we ended up not making a change like that. The recommended labels from the helm/k8s community have changed twice since the jupyterhub helm chart was created, and have probably settled down now. A few charts have made these breaking changes; a few haven't.
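Sketched as a manifest (all names here are hypothetical), the non-breaking variant keeps the immutable selector as-is and only extends the pod labels:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dask-scheduler
spec:
  replicas: 1
  selector:
    matchLabels:          # unchanged: selector labels are immutable
      app: dask
      component: scheduler
  template:
    metadata:
      labels:
        app: dask                               # old labels kept, so the
        component: scheduler                    # selector still matches
        app.kubernetes.io/name: dask            # new labels added alongside
        app.kubernetes.io/component: scheduler
    spec:
      containers:
        - name: scheduler
          image: ghcr.io/dask/dask:latest
```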

I think if this is to be merged, documentation about it being a breaking change should be established somewhere, for example in a CHANGELOG.md file associated with the helm chart in question.

@dxma (Author) commented Jul 25, 2022

> When changing the selector labels of a deployment/statefulset resource, upgrades will fail: the selector labels of a deployment/statefulset resource are immutable.
>
> This can still be done by manually deleting the deployment/statefulset resource ahead of the helm upgrade.
>
> When deleting the deployment, one can declare that the resources created for it (the replicasets that in turn create pods) should remain behind as orphans instead of being deleted, which is the default. That makes sense if the new deployment's selector labels would select the old labels; otherwise it does not.
>
> Since there is no changelog etc. for the charts, this is problematic in my mind, as it is a breaking change that is troublesome to document, especially since we use calver and can't indicate a breaking change via the version number either.
>
> A non-breaking change would be to add the new labels next to the old ones in pod definitions, without changing the selector labels. I proposed this once in the jupyterhub helm chart, but we ended up not making a change like that. The recommended labels from the helm/k8s community have changed twice since the jupyterhub helm chart was created, and have probably settled down now. A few charts have made these breaking changes; a few haven't.
>
> I think if this is to be merged, documentation about it being a breaking change should be established somewhere, for example in a CHANGELOG.md file associated with the helm chart in question.

Absolutely correct, this change (switching to the new label set on a running system) is a breaking action, since it requires deleting all existing deployments together with their running pods. That is one of the main reasons I made it an option and defaulted to the current label set.
Also, it is not very clear how `app` should be defined: in the current code it reads more like a system label (`app: dask`), while other documentation tends to match it with `name`/`instance` in the Kubernetes labels.
It makes sense to manage all labels in the way demonstrated by this PR: one change applies everywhere.

@consideRatio (Collaborator) commented Jul 25, 2022

> I made it an option and defaulted to the current label set.

Ahhhh nice!

If rendering the templates with the default configuration doesn't lead to changes in the matchLabels (aka selector labels), then this is fine in my mind.
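One way to check this (a sketch; the release name `dask` and the chart reference are hypothetical) is to render both styles and compare the selectors:

```sh
# Render with defaults and with the opt-in, then eyeball the selectors;
# with label.style left at "helm" the matchLabels should be unchanged
helm template dask dask/dask | grep -A 3 matchLabels
helm template dask dask/dask --set label.style=kubernetes | grep -A 3 matchLabels
```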

Sorry for my sloppy review; I didn't realize precautions had been taken to avoid the breaking change!
