What steps did you take and what happened:
A GCP bucket BSL is Unavailable, but backup/restore still works fine.
The reason is that I had a top-level directory named dir-1 in the bucket.
Spec:
  Default: true
  Object Storage:
    Bucket: test-1
  Provider: gcp
Status:
  Last Synced Time: 2024-05-09T08:28:24Z
  Last Validation Time: 2024-05-09T08:27:44Z
  Message: BackupStorageLocation "default" is unavailable: Backup store contains invalid top-level directories: [dir-1]
  Phase: Unavailable
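For reference, a minimal sketch of how such a state can be reproduced (the copied file name below is just a placeholder; only the dir-1/ top-level prefix matters):

# Put any object under an unrelated top-level directory in the BSL bucket (gsutil syntax, placeholder file name).
gsutil cp ./note.txt gs://test-1/dir-1/note.txt
# After the next validation cycle, the BSL is reported as Unavailable.
kubectl describe backupstoragelocation default -n velero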
What did you expect to happen:
Since other top-level directories do not affect backup/restore, why check for them?
The following information will help us better understand what's going on:
If you are using velero v1.7.0+:
Please use velero debug --backup <backupname> --restore <restorename> to generate the support bundle and attach it to this issue. For more options, please refer to velero debug --help
If you are using earlier versions:
Please provide the output of the following commands (Pasting long output into a GitHub gist or other pastebin is fine.)
kubectl logs deployment/velero -n velero
velero backup describe <backupname> or kubectl get backup/<backupname> -n velero -o yaml
velero backup logs <backupname>
velero restore describe <restorename> or kubectl get restore/<restorename> -n velero -o yaml
velero restore logs <restorename>
Anything else you would like to add:
Environment:
Velero version (use velero version):
Velero features (use velero client config get features):
Kubernetes version (use kubectl version):
Kubernetes installer & version:
Cloud provider or hardware configuration:
OS (e.g. from /etc/os-release):
Vote on this issue!
This is an invitation to the Velero community to vote on issues; you can see the project's top-voted issues listed here.
Use the "reaction smiley face" up to the right of this comment to vote.
👍 for "I would like to see this bug fixed as soon as possible"
👎 for "There are more important bugs to focus on right now"
Is the backup repository layout-checking logic still needed? The layout check was introduced in issue "Enable sharing bucket between Ark and Restic" #576, which integrated the Restic repository into the Ark repository. The original problem that motivated the check no longer exists, but the data integration check is still valuable.
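For context, that check effectively lists the bucket's top-level prefixes and flags anything Velero itself does not write; the sketch below illustrates the idea (directory names other than dir-1 are assumptions drawn from common Velero layouts, not taken from this issue):

# Illustration only: what the top-level layout validation amounts to.
gsutil ls gs://test-1/
#   gs://test-1/backups/    (written by Velero)
#   gs://test-1/restores/   (written by Velero)
#   gs://test-1/dir-1/      (not a recognized Velero directory, so the BSL is marked Unavailable)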
Should backups and restores associated with that BSL be processed when the BSL is in an unavailable state?
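Relatedly, a quick way to check the behavior the reporter describes (standard Velero CLI commands; the backup name is arbitrary):

velero backup create layout-test --storage-location default
# Per the report above, the backup still completes even though the BSL phase is Unavailable.
velero backup describe layout-test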