
🐛 Issue-10544 ignore unreachable cluster while deleting machinePool #10553

Open · wants to merge 1 commit into main from issue-10544

Conversation

serngawy

@serngawy serngawy commented May 3, 2024

What this PR does / why we need it:
Ignore unreachable cluster while deleting machinePools

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #10544

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign fabriziopandini for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/needs-area PR is missing an area label labels May 3, 2024
@k8s-ci-robot
Contributor

Welcome @serngawy!

It looks like this is your first PR to kubernetes-sigs/cluster-api 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/cluster-api has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot
Contributor

Hi @serngawy. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels May 3, 2024
Contributor

@killianmuldoon killianmuldoon left a comment

/ok-to-test
/area machinepools

@k8s-ci-robot
Contributor

@killianmuldoon: The label(s) area/machinepools cannot be applied, because the repository doesn't have them.

In response to this:

/ok-to-test
/area machinepools

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels May 8, 2024
@killianmuldoon killianmuldoon added the area/machinepool Issues or PRs related to machinepools label May 8, 2024
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/needs-area PR is missing an area label label May 8, 2024
@chrischdi
Member

I think this one goes in a different direction than described in:

and

/hold

@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label May 8, 2024
@serngawy
Author

/cc @mboersma

@serngawy
Author

@mboersma, could you review the PR and let me know your thoughts?

@k8s-ci-robot k8s-ci-robot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels May 21, 2024
@serngawy serngawy force-pushed the issue-10544 branch 2 times, most recently from 022d53e to 6da8906 on May 21, 2024 19:39
deleteAllowed, clusterClient, err := r.isDeleteMachinePoolAllowed(ctx, cluster)

// Check for cluster allowing delete or machinePool delete timeout.
if deleteAllowed || r.isMachinePoolDeleteTimeoutPassed(machinepool) {
Member

@enxebre enxebre May 31, 2024

Is this saying that if the cluster doesn't have a deletion timestamp and the timeout is met, we never get through this code path, and so we never delete the finalizer blocking deletion?

Author

I changed the logic based on the previous comment, so this is fixed now. It says: if the machinePool delete is allowed OR the machinePool delete timeout has passed, go ahead and delete the machinePool's external CRs and Nodes.

}

return r.deleteRetiredNodes(ctx, clusterClient, machinepool.Status.NodeRefs, machinepool.Spec.ProviderIDList)
// Check if the target cluster client is reachable.
clusterClient, err := r.Tracker.GetClient(ctx, util.ObjectKey(cluster))
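As context for the deleteRetiredNodes call in the diff above: a "retired" node is one still referenced by the MachinePool but whose providerID has dropped out of the pool's ProviderIDList. A rough sketch of that selection, using hypothetical simplified types (nodeRef, retiredNodes) rather than the real corev1/Cluster API ones:

```go
package main

import "fmt"

// nodeRef is a hypothetical stand-in for a Node reference carrying the
// providerID that links a Node to a machine in the pool.
type nodeRef struct {
	name       string
	providerID string
}

// retiredNodes returns the names of nodes referenced by the pool whose
// providerID is no longer present in the pool's current ProviderIDList.
func retiredNodes(nodeRefs []nodeRef, providerIDList []string) []string {
	current := make(map[string]bool, len(providerIDList))
	for _, id := range providerIDList {
		current[id] = true
	}
	var retired []string
	for _, ref := range nodeRefs {
		if !current[ref.providerID] {
			retired = append(retired, ref.name)
		}
	}
	return retired
}

func main() {
	refs := []nodeRef{{"node-a", "aws://a"}, {"node-b", "aws://b"}}
	fmt.Println(retiredNodes(refs, []string{"aws://a"})) // node-b has been scaled away
}
```

Deleting exactly this set is why the workload-cluster client is needed at all during MachinePool deletion, which is what makes an unreachable cluster a blocker.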
Member

Why is a function called isMachinePoolDeleteTimeoutPassed returning a clusterClient?

Author

The func name is isDeleteMachinePoolAllowed. It checks whether the cluster client is reachable and returns the client, since the client is required to delete the nodes; this avoids re-getting it just to pass it to the deleteNode func. I will rename it to deleteMachinePoolAllowed.
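The check-and-return-client pattern the author describes might look roughly like this. Everything here (clusterClient, getClient, deleteMachinePoolAllowed) is a hypothetical stand-in for the real Tracker/client machinery, kept only to show the shape of the return values:

```go
package main

import (
	"errors"
	"fmt"
)

// clusterClient is a hypothetical stand-in for the workload-cluster client.
type clusterClient struct{ name string }

// getClient simulates fetching the workload-cluster client; it fails when
// the cluster is unreachable.
func getClient(reachable bool) (*clusterClient, error) {
	if !reachable {
		return nil, errors.New("cluster is unreachable")
	}
	return &clusterClient{name: "workload"}, nil
}

// deleteMachinePoolAllowed checks reachability once and hands the client
// back to the caller, so node deletion does not have to re-fetch it.
func deleteMachinePoolAllowed(reachable bool) (bool, *clusterClient, error) {
	c, err := getClient(reachable)
	if err != nil {
		// Unreachable cluster: node deletion is not possible now, but the
		// caller may still proceed once the delete timeout passes.
		return false, nil, nil
	}
	return true, c, nil
}

func main() {
	allowed, c, _ := deleteMachinePoolAllowed(true)
	fmt.Println(allowed, c != nil)
	allowed, c, _ = deleteMachinePoolAllowed(false)
	fmt.Println(allowed, c != nil)
}
```

Returning the client alongside the boolean is a convenience; the reviewer's naming complaint stands, since a predicate-sounding name should not also produce a value.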

if len(machinepool.Status.NodeRefs) == 0 {
return nil
// isMachinePoolDeleteTimeoutPassed check the machinePool node delete time out.
func (r *MachinePoolReconciler) isMachinePoolDeleteTimeoutPassed(machinepool *expv1.MachinePool) bool {
Member

// isMachinePoolDeleteTimeoutPassed check the machinePool node delete time out.

Should this be named isNodeTimeoutPassed then? That way we reference the specific "Node" timeout and don't include the machinepool, which is implicit in the receiver.

Author

We don't delete a specific node; we delete all the nodes belonging to this machinePool.

}
func (r *MachinePoolReconciler) reconcileDelete(ctx context.Context, cluster *clusterv1.Cluster, machinepool *expv1.MachinePool) error {
deleteAllowed, clusterClient, err := r.isDeleteMachinePoolAllowed(ctx, cluster)

Member

How is the Node timeout check related to the PR title/issue? If it is not, can we have a separate PR for that?

Author

They are related: the machinePool is deleted either when the cluster is unreachable or when the delete timeout passes.

if err := r.reconcileDeleteNodes(ctx, cluster, mp); err != nil {
// Return early and don't remove the finalizer if we got an error.
return err
// Delete nodes when cluster accessor available & there are nodes to delete.
Member

@enxebre enxebre May 31, 2024

Should this PR be scoped to only retrieving the cluster client and skipping when it is not functional and the cluster is deleting?

Signed-off-by: melserngawy <melserng@redhat.com>
Labels
area/machinepool - Issues or PRs related to machinepools
cncf-cla: yes - Indicates the PR's author has signed the CNCF CLA.
do-not-merge/hold - Indicates that a PR should not merge because someone has issued a /hold command.
ok-to-test - Indicates a non-member PR verified by an org member that is safe to test.
size/M - Denotes a PR that changes 30-99 lines, ignoring generated files.
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Failed to delete machinePool for unreachable cluster
7 participants