Feature request: Skip k8s resource destroy if the underlying cluster is destroyed #491
Comments
I don't think this is just an optimization. In my case, some of the k8s resources did not want to tear themselves down, such as the Istio namespace, so the destroy was completely blocked.
Related to, but not the same as, this: #416
Honestly, I don't see the problem: why can't you just infer that everything that depends on something will get auto-deleted? Or at least make it opt-in behaviour.
@4c74356b41 In general, anything can depend on anything, so deleting a parent guarantees nothing about whether the child has been deleted, too. This is also true generally of providers -- deleting a specific high-order provider does not guarantee that all resources using it are also deleted. What you are really asking for is a specific exception for Kubernetes clusters, and that's what this issue is proposing -- an exception.
Yeah, sure, but what about a Kubernetes cluster created and configured in the same config? What about Azure resource groups in the same config (everything has to depend on them)? Etc.
Maybe not an exception, but a way to denote a true parent-child relationship, or to know that a Helm chart and a Kubernetes cluster have that relationship because of the provider reference.
I'm not sure how this proposal is different from what the issue is proposing. :)
Because it's more general? Not just about Kubernetes.
Right. If you explicitly or implicitly know that there is a true parent-child relationship where deleting the parent deletes the child, it doesn't matter whether it is Kubernetes or any other technology. Helm charts depend on the cluster, so maybe that relationship can be inferred implicitly by the framework, but it could also be explicitly settable.
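For concreteness, a minimal sketch of the reference chain being described, assuming the `@pulumi/eks` and `@pulumi/kubernetes` packages (the cluster, chart, and names are illustrative, not from this thread): the chart's provider is built from the cluster's kubeconfig, so the parent-child relationship is already visible in the resource graph.

```typescript
import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";

// The cluster is the "parent" in this discussion.
const cluster = new eks.Cluster("demo");

// The provider is constructed from the cluster's kubeconfig, so the engine
// already sees a reference chain: chart -> provider -> cluster.
const provider = new k8s.Provider("demo-k8s", {
    kubeconfig: cluster.kubeconfig.apply(JSON.stringify),
});

// Everything this chart creates lives inside the cluster; destroying the
// cluster implicitly destroys it all.
const istioBase = new k8s.helm.v3.Chart("istio-base", {
    chart: "base",
    fetchOpts: { repo: "https://istio-release.storage.googleapis.com/charts" },
}, { provider });
```

In principle, an engine-level rule (or an explicit option) could follow that provider reference to decide that deleting `cluster` makes individually deleting `istioBase` unnecessary.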
Understood. Rest assured, we will do something that works for this class of problems here. :)
Seems related to pulumi/pulumi#11095
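If that issue is the proposal that later shipped as the `deletedWith` resource option in the Pulumi SDK (an assumption on my part, not confirmed in this thread), usage for this scenario might look roughly like this:

```typescript
import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";

const cluster = new eks.Cluster("demo");
const provider = new k8s.Provider("demo-k8s", {
    kubeconfig: cluster.kubeconfig.apply(JSON.stringify),
});

// Assumed `deletedWith` semantics: if `cluster` is deleted in the same
// destroy operation, skip calling the Kubernetes API to delete this
// namespace and let the cluster teardown remove it.
const istioNamespace = new k8s.core.v1.Namespace("istio-system", {
    metadata: { name: "istio-system" },
}, { provider, deletedWith: cluster });
```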
We don't have plans to implement this directly. However, it is possible to delete the cluster separately, and then use the
In cases where a stack to be destroyed includes the k8s cluster, it would be nice to skip waiting on the k8s resources to delete, since they will be deleted implicitly when the cluster is deleted.
This is an optimization, and it's possible it wouldn't play nicely with cloud provider resources like load balancers that are created in response to k8s resource creation.
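To illustrate that caveat, here is a hedged sketch (package choices and names are assumptions for the example) of a Service of type `LoadBalancer`: deleting the Service is normally what lets the cloud controller tear down the external load balancer it provisioned, so skipping that delete because the cluster is going away could leave the load balancer orphaned, depending on how the cluster itself is torn down.

```typescript
import * as eks from "@pulumi/eks";
import * as k8s from "@pulumi/kubernetes";

const cluster = new eks.Cluster("demo");
const provider = new k8s.Provider("demo-k8s", {
    kubeconfig: cluster.kubeconfig.apply(JSON.stringify),
});

// Creating this Service causes the cloud controller manager to provision an
// external load balancer. Deleting the Service is what normally triggers
// that load balancer's cleanup, so skipping the delete is not always safe.
const web = new k8s.core.v1.Service("web", {
    spec: {
        type: "LoadBalancer",
        selector: { app: "web" },
        ports: [{ port: 80, targetPort: 8080 }],
    },
}, { provider });
```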