
Unavailable BSL is still functional for backup / restore #7785

Open
danfengliu opened this issue May 9, 2024 · 2 comments

@danfengliu (Contributor)

What steps did you take and what happened:
A BSL backed by a GCP bucket is Unavailable, yet backup and restore still work.
The BSL is Unavailable because the bucket contains a top-level directory named dir-1.

Spec:
  Default:  true
  Object Storage:
    Bucket:  test-1
  Provider:  gcp
Status:
  Last Synced Time:      2024-05-09T08:28:24Z
  Last Validation Time:  2024-05-09T08:27:44Z
  Message:               BackupStorageLocation "default" is unavailable: Backup store contains invalid top-level directories: [dir-1]
  Phase:                 Unavailable
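
To reproduce, a minimal sketch (assuming the cloud.google.com/go/storage client and default application credentials; the bucket name test-1 and the prefix dir-1 come from the report above) that plants the stray top-level object:

package main

import (
	"context"
	"log"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Write an object under a top-level prefix Velero does not own;
	// on the next validation cycle the BSL should flip to Unavailable.
	w := client.Bucket("test-1").Object("dir-1/placeholder.txt").NewWriter(ctx)
	if _, err := w.Write([]byte("not velero data")); err != nil {
		log.Fatal(err)
	}
	if err := w.Close(); err != nil {
		log.Fatal(err)
	}
}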

What did you expect to happen:
Since extra top-level directories do not affect backup/restore, why check for them at all?

The following information will help us better understand what's going on:

If you are using velero v1.7.0+:
Please use velero debug --backup <backupname> --restore <restorename> to generate the support bundle and attach it to this issue. For more options, refer to velero debug --help

If you are using earlier versions:
Please provide the output of the following commands (Pasting long output into a GitHub gist or other pastebin is fine.)

  • kubectl logs deployment/velero -n velero
  • velero backup describe <backupname> or kubectl get backup/<backupname> -n velero -o yaml
  • velero backup logs <backupname>
  • velero restore describe <restorename> or kubectl get restore/<restorename> -n velero -o yaml
  • velero restore logs <restorename>

Anything else you would like to add:

Environment:

  • Velero version (use velero version):
  • Velero features (use velero client config get features):
  • Kubernetes version (use kubectl version):
  • Kubernetes installer & version:
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):

Vote on this issue!

This is an invitation to the Velero community to vote on issues; you can see the project's top-voted issues listed here.
Use the "reaction smiley face" up to the right of this comment to vote.

  • 👍 for "I would like to see this bug fixed as soon as possible"
  • 👎 for "There are more important bugs to focus on right now"
@blackpiglet (Contributor) commented May 9, 2024

Two points need discussion in this issue.

func NewObjectStoreLayout(prefix string) *ObjectStoreLayout {
	if prefix != "" && !strings.HasSuffix(prefix, "/") {
		prefix = prefix + "/"
	}

	subdirs := map[string]string{
		"backups":  path.Join(prefix, "backups") + "/",
		"restores": path.Join(prefix, "restores") + "/",
		"restic":   path.Join(prefix, "restic") + "/",
		"metadata": path.Join(prefix, "metadata") + "/",
		"plugins":  path.Join(prefix, "plugins") + "/",
		"kopia":    path.Join(prefix, "kopia") + "/",
	}

	return &ObjectStoreLayout{
		rootPrefix: prefix,
		subdirs:    subdirs,
	}
}
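
Concretely, with an empty prefix the expected top-level entries are backups/, restores/, restic/, metadata/, plugins/, and kopia/; anything else under the root (like dir-1/ here) falls outside the layout.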

  • Is the backup repository layout check still needed? The layout-checking logic was introduced in issue Enable sharing bucket between Ark and Restic #576, which integrated the Restic repository into the Ark repository. The original motivation for the check no longer exists, but the data-integrity check is still valuable; a sketch of how such a check can work follows this list.
  • Should the backups and restores associated with a BSL be processed while the BSL is in the Unavailable state?
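
For reference, a hedged sketch of the kind of check that produces the "invalid top-level directories" error above. The helper name and the direct use of the subdirs map are illustrative, not Velero's actual identifiers (the real implementation lives in pkg/persistence):

// invalidTopLevelDirs is a hypothetical helper: given the top-level
// prefixes listed from the bucket (e.g. ["backups/", "dir-1/"]), it
// returns any that are not part of the expected layout.
func invalidTopLevelDirs(topLevelPrefixes []string, layout *ObjectStoreLayout) []string {
	var invalid []string
	for _, p := range topLevelPrefixes {
		name := strings.TrimSuffix(strings.TrimPrefix(p, layout.rootPrefix), "/")
		if _, ok := layout.subdirs[name]; !ok {
			invalid = append(invalid, name)
		}
	}
	return invalid
}

A BSL backed by a bucket where this returns a non-empty slice would be marked Unavailable with the message seen in the report above.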

@blackpiglet (Contributor)
The Velero team's agreement is to make an Unavailable BSL stop taking backups and restores. A sketch of what that gating could look like is below.
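
A minimal sketch of that gating, assuming the reconciler already has the backup and its BSL in hand. The phase constants mirror Velero's v1 API, but the snippet is illustrative rather than the actual controller code:

// Before running a backup, fail validation if its storage location
// is Unavailable instead of letting the backup proceed.
if bsl.Status.Phase == velerov1api.BackupStorageLocationPhaseUnavailable {
	backup.Status.Phase = velerov1api.BackupPhaseFailedValidation
	backup.Status.ValidationErrors = append(backup.Status.ValidationErrors,
		fmt.Sprintf("backup storage location %q is unavailable", bsl.Name))
	return nil
}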
