
Velero CRDs are auto-removed #13393

Closed
ahmedwaleedmalik opened this issue May 13, 2024 · 2 comments · Fixed by #13396
Labels
  • backport-needed: Denotes a PR or issue that has not been fully backported.
  • customer-request
  • kind/bug: Categorizes issue or PR as related to a bug.

Comments

@ahmedwaleedmalik (Member)

What happened?

With #12827, we introduced built-in Velero support for managing cluster backups in KKP. When cluster backups are enabled, KKP installs the Velero CRDs and Velero itself on the user cluster, which can then be configured for periodic backups. When the feature is disabled, all of these resources, including the CRDs, are removed from the cluster.

A critical issue that our customers are now running into: if they already have Velero pre-installed in their user clusters, KKP still checks whether "cluster backups" are enabled or disabled. If the feature is disabled (or simply never enabled), the Velero CRDs, and with them all Velero CRs, are removed.
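
For illustration only, a minimal, self-contained sketch of the kind of reconcile logic that produces this behavior; the type, field, and messages are hypothetical and not taken from KKP's actual code:

```go
package main

import "fmt"

// ClusterSpec is a stand-in for the relevant slice of KKP's Cluster spec;
// the field name is hypothetical and only serves to illustrate the report.
type ClusterSpec struct {
	ClusterBackupEnabled bool // a plain bool cannot express "never configured"
}

// reconcileClusterBackups sketches the problematic flow: "not enabled" and
// "explicitly disabled" are indistinguishable, so both take the removal path
// and delete Velero CRDs (and CRs) that KKP never installed.
func reconcileClusterBackups(spec ClusterSpec) {
	if spec.ClusterBackupEnabled {
		fmt.Println("ensure Velero CRDs and deployment")
		return
	}
	fmt.Println("remove Velero CRDs, cascading to all Velero CRs")
}

func main() {
	// The feature was never configured by the user, yet cleanup still runs.
	reconcileClusterBackups(ClusterSpec{})
}
```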

Expected behavior

Unless cluster backups have been explicitly enabled or disabled, existing Velero resources shouldn't be touched. We should also reconsider whether the CRDs need to be removed at all when the feature is disabled; perhaps they should stay. We'll leave that decision to whoever picks up this issue.
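
One possible direction, again only a sketch with hypothetical names and not the fix that #13396 implements: model the setting as a tri-state, so reconciliation only acts on an explicit choice and leaves unconfigured clusters alone.

```go
package main

import "fmt"

// Sketch of the expected tri-state behavior; a *bool distinguishes "never
// configured" from an explicit true/false. The signature is an assumption.
func reconcileClusterBackups(enabled *bool) {
	switch {
	case enabled == nil:
		// Never explicitly configured: leave any pre-installed Velero alone.
		fmt.Println("no-op: existing Velero resources and CRDs stay untouched")
	case *enabled:
		fmt.Println("ensure Velero CRDs and deployment")
	default:
		// Explicitly disabled: clean up what KKP manages; whether the CRDs
		// themselves should also be removed is left open by this issue.
		fmt.Println("remove KKP-managed Velero resources")
	}
}

func main() {
	reconcileClusterBackups(nil) // unset: nothing is removed
}
```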

How to reproduce the issue?

  1. kubectl apply -f https://github.com/vmware-tanzu/helm-charts/tree/main/charts/velero/crds
  2. Wait for a few seconds and KKP will remove these CRDs

How is your environment configured?

  • KKP version: 2.25
  • Shared or separate master/seed clusters?: N/A

/label priority/high
/label customer-request

@ahmedwaleedmalik added the kind/bug label on May 13, 2024
@kubermatic-bot (Contributor)

@ahmedwaleedmalik: The label(s) /label priority/high cannot be applied. These labels are supported: blocked by backend, merge-type/merge, merge-type/rebase, needs details, service accounts, Epic, MVP, customer-request, design, feature, proposal, ready-to-challenge, redesign, sig/api, sig/app-management, sig/cluster-management, sig/community, sig/infra, sig/networking, sig/ui, sig/virtualization, sprint, team/marketing, team/ps, lifecycle/frozen, backport-needed, backport-complete, ee, needs-release-testing, test/require-vsphere, test/require-kubevirt, test/require-vmwareclouddirector, test/require-nutanix. Is this label configured under labels -> additional_labels or labels -> restricted_labels in plugin.yaml?

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@ahmedwaleedmalik (Member, Author)

/label backport-needed

@kubermatic-bot added the backport-needed label on May 13, 2024