
Feature request: Skip k8s resource destroy if the underlying cluster is destroyed #491

Closed
lblackstone opened this issue Mar 20, 2019 · 12 comments
Assignees: lblackstone
Labels: customer/feedback, impact/performance, kind/enhancement, resolution/wont-fix

Comments

@lblackstone (Member)

In cases where a stack to be destroyed includes the k8s cluster itself, it would be nice to skip waiting for the k8s resources to be deleted, since they will be deleted implicitly when the cluster is deleted.

This is an optimization, and it's possible it wouldn't play nicely with cloud provider resources like load balancers that are created in response to k8s resource creation.
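
For readers following along, here is a minimal sketch of the setup under discussion -- a cluster and the Kubernetes resources running on it, managed in one stack. It assumes @pulumi/eks for the cluster; all names are illustrative:

```typescript
import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";

// The cluster and the workloads that run on it live in the same stack.
const cluster = new eks.Cluster("demo-cluster");

// An explicit provider ties the Kubernetes resources to this cluster.
const provider = new k8s.Provider("demo-k8s", {
    kubeconfig: cluster.kubeconfig.apply(JSON.stringify),
});

// On `pulumi destroy`, this Deployment is deleted (and awaited) before the
// cluster, even though tearing down the cluster would remove it implicitly.
const app = new k8s.apps.v1.Deployment("demo-app", {
    spec: {
        selector: { matchLabels: { app: "demo" } },
        replicas: 1,
        template: {
            metadata: { labels: { app: "demo" } },
            spec: { containers: [{ name: "demo", image: "nginx" }] },
        },
    },
}, { provider });
```

Note the caveat above: if `demo-app` were instead a `Service` of type `LoadBalancer`, skipping its deletion could leave the cloud load balancer orphaned.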

lblackstone added the impact/performance, customer/feedback, and kind/feature labels on Mar 20, 2019
lblackstone self-assigned this on Mar 20, 2019
@cleverguy25

I don't think this is just an optimization. In my case, some of the k8s resources wouldn't tear themselves down (the Istio namespace, for example), so the destroy was completely blocked.

@hausdorff (Contributor)

Related to, but not the same as, this: #416

@4c74356b41 commented Mar 28, 2019

Honestly, I don't see the problem. Why can't you just infer that everything depending on a deleted resource will be deleted along with it?

Or at least make it opt-in behaviour.

@hausdorff (Contributor)

@4c74356b41 In general, anything can depend on anything, so deleting a parent guarantees nothing about whether the child has been deleted too. The same is true of providers generally -- deleting a specific first-class provider does not guarantee that all resources using it have also been deleted.

What you are really asking for is a specific exception for Kubernetes clusters, and that's what this issue is proposing -- an exception.

@4c74356b41

Yeah, sure, but what about a Kubernetes cluster created and configured in the same stack? What about Azure resource groups in the same stack (everything has to depend on them)? And so on.

@cleverguy25 commented Mar 29, 2019 via email

@hausdorff (Contributor)

I'm not sure how this proposal is different from what the issue is proposing. :)

@4c74356b41

Because it's more general? It's not just about Kubernetes.

@cleverguy25 commented Mar 30, 2019 via email

@hausdorff (Contributor)

Understood. Rest assured, we will do something that works for this class of problems here. :)

@dor-utila

Seems related to pulumi/pulumi#11095

lblackstone added the resolution/wont-fix label on Jul 14, 2023
@lblackstone (Member, Author)

We don't have plans to implement this directly. However, it is possible to delete the cluster separately, and then use the deleteUnreachable provider option to clean up the stack. See #2489
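
For anyone landing here later, a sketch of that workaround, assuming the deleteUnreachable provider option added in the linked #2489 (the surrounding names are illustrative):

```typescript
import * as k8s from "@pulumi/kubernetes";

// Hypothetical stand-in for however your program obtains the kubeconfig;
// once the cluster has been deleted out-of-band, it points at an
// unreachable endpoint.
const kubeconfig = process.env.KUBECONFIG_CONTENTS ?? "";

// With deleteUnreachable set, Kubernetes resources whose cluster can no
// longer be reached are dropped from the stack's state during destroy
// instead of blocking it.
const provider = new k8s.Provider("k8s", {
    kubeconfig: kubeconfig,
    deleteUnreachable: true,
});
```

The same behavior can also be enabled without code changes via stack config (`pulumi config set kubernetes:deleteUnreachable true`) before running `pulumi destroy`.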
