Avoid restarting pods when Deployment doesn't change #2772

Open
dasch opened this issue May 31, 2018 · 10 comments

Comments

@dasch
Contributor

dasch commented May 31, 2018

In our setup, we have a single stage updating an increasing number of Deployments. In cases where we need to modify the resource requests for a single role, we would like to avoid disrupting unrelated Deployments.

I believe the default behavior of Deployments is to no-op if there are no changes to the pod template. However, since Samson injects extra metadata into pods on each release (the release id, for instance), every Deployment gets updated.

I don't think this is necessary if the purpose is to track the individual pods in a Deployment. For a given Deployment, you can get the current ReplicaSet, and from that ReplicaSet you can get the pod-template-hash that each pod is labelled with, which allows you to query the pods.
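
A minimal sketch of that lookup, assuming the official Python Kubernetes client (the function name and structure are illustrative only, not Samson's actual Ruby implementation):

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()
core = client.CoreV1Api()

def pods_for_deployment(name, namespace):
    deploy = apps.read_namespaced_deployment(name, namespace)
    revision = deploy.metadata.annotations["deployment.kubernetes.io/revision"]

    # The current ReplicaSet is the one matching the Deployment's selector
    # whose revision annotation equals the Deployment's current revision.
    selector = ",".join(
        f"{k}={v}" for k, v in deploy.spec.selector.match_labels.items()
    )
    replica_sets = apps.list_namespaced_replica_set(
        namespace, label_selector=selector
    ).items
    current = next(
        rs for rs in replica_sets
        if rs.metadata.annotations.get("deployment.kubernetes.io/revision") == revision
    )

    # Every pod created by that ReplicaSet carries its pod-template-hash label,
    # so the pods can be queried without injecting any extra metadata.
    pod_template_hash = current.metadata.labels["pod-template-hash"]
    return core.list_namespaced_pod(
        namespace, label_selector=f"pod-template-hash={pod_template_hash}"
    ).items
```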

@dasch
Contributor Author

dasch commented May 31, 2018

@grosser am I wrong in my analysis? Is this way of tracking a leftover from the pre-Deployment days?

@grosser
Contributor

grosser commented May 31, 2018

I'd recommend having a one-off stage that can be used to deploy to these (possibly changing the deploy-group as needed).

Historically Samson was never a "no-op when nothing changed" kind of tool ... so I'm not really thrilled about this.

It would be possible to do this Deployment -> ReplicaSet -> Pod lookup, but then it would also need to work for Pod/DaemonSet/StatefulSet/Job ... so it smells like a lot of work.
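
For reference, the label that ties pods back to their controller differs per kind, which is part of why a generic lookup is more work than the Deployment case alone. A rough sketch of the mapping (assumed for illustration, not taken from Samson's code):

```python
# Label each controller stamps onto its pods, per kind. Bare Pods have no such
# label and would need to be matched by name or ownerReferences instead.
POD_TRACKING_LABEL = {
    "Deployment":  "pod-template-hash",         # via the current ReplicaSet
    "DaemonSet":   "controller-revision-hash",  # set by the DaemonSet controller
    "StatefulSet": "controller-revision-hash",  # set by the StatefulSet controller
    "Job":         "job-name",                  # set by the Job controller
}
```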

@dasch
Contributor Author

dasch commented Jun 1, 2018

The problem is that we'd need a lot more knowledge on the part of operators in order to safely make resource/replication changes while under load. Causing a full restart of existing processes each time is not great if we're already running up against our capacity. This will be true of any service.

@dasch
Contributor Author

dasch commented Jun 1, 2018

In practice, we'd resort to mutating the Deployment directly through kubectl, which would have unforeseen consequences wrt Samson.

@grosser
Contributor

grosser commented Jun 1, 2018 via email

@dasch
Contributor Author

dasch commented Jun 4, 2018

This is not just due to scaling; it also happens when we realize we need to change the resource limits.

@grosser
Contributor

grosser commented Jun 4, 2018 via email

@dasch
Contributor Author

dasch commented Jun 5, 2018

The problem is that we typically modify the resources for just a single Deployment, and sometimes only in a single Deploy Group. This doesn't actually change any other Deployments, yet all of them are restarted.

@grosser
Contributor

grosser commented Jun 5, 2018 via email

@dasch
Contributor Author

dasch commented Jun 6, 2018

Hmm...
