buildah manifest push fails for quay.io when images were pulled from quay.io #2594
Saw this error with Buildah 1.15.1 but failed to reproduce it with the upstream branch, so it should have been fixed. @mh21, have you tried the newer version of Buildah?
I still get the same error with quay.io without skopeo when using buildah in a privileged rawhide container:
For comparison:
Environment:
@mh21 Did you try the whole reproducer with 1.17.0-dev, or just part of it?
Attached is the log of the container run. I ran the script twice in the container; the first time it actually worked.
The second time, it failed while uploading the manifest.
Reopening this issue. @nalind, is this the issue you are looking at?
Thanks, it's containers/skopeo#1078 that I was looking at. I think @mtrmac guessed that it might be a symptom of containers/image#733, but I haven't chased it down well enough to be sure yet.
@mh21 The problem can be caused by the images that are still present locally.
Hi @QiWang19, thank you! I played around with it a bit more and I don't think it is related to the local images. The error is still there regardless of whether the container images are present locally when the manifest is pushed. But thanks to you I found an easier workaround 😄: instead of pulling down the images with skopeo, it is enough to specify `--format v2s2` when pushing the manifest.
For reference, this is the complete script used:

```shell
#!/bin/bash
echo -e "FROM alpine\nRUN touch dummy.txt" > Dockerfile

export BUILDAH_FORMAT=docker
export REG=quay.io/mhofmann
#export REG=docker.io/mh21

function cleanup() {
    rm ~/.local/share/containers/cache/blob-info-cache-v1.boltdb > /dev/null 2>&1
    for host in localhost "$REG"; do
        for tag in tag tag-amd64; do
            buildah rmi $host/dummy:$tag > /dev/null 2>&1
        done
    done
}

# n jobs: create single-arch image
cleanup
buildah bud -f Dockerfile -t dummy:tag-amd64 .
buildah push dummy:tag-amd64 $REG/dummy:tag-amd64

# final job: create multi-arch manifest from all single-arch images
cleanup  # doesn't matter whether images are cleaned here
buildah manifest create dummy:tag docker://$REG/dummy:tag-amd64
buildah manifest push dummy:tag docker://$REG/dummy:tag --all  # adding --format v2s2 fixes it
```
@mh21 Thanks for the check 🙂. I am going to close this.
Hi @QiWang19, this is still something that works out of the box with docker, but does not work with buildah. Wasn't the idea that buildah should be a drop-in replacement for docker? |
@nalind in this issue's reproducer, buildah needs to specify `--format v2s2` when pushing the manifest list to quay.io.
That should be something we correctly handle automatically when we do a format conversion from OCI to Docker format during the push, and I could swear that we did, at least at one point. I'm looking into it.
@nalind is this related to the issue you were chasing for the Skopeo quay images? containers/skopeo#1078
@TomSweeneyRedHat Yes, I think so.
The question I have is when will quay.io fully support OCI images? |
Is it really about OCI support, though? For containers/image#733, what matters is a local cache shared between one push/pull and another push. So it might not be the registry's OCI support that makes the difference here.
If this is indeed something like containers/image#733, OCI support won't make a difference: we would still fall back to v2s1 (the only format that doesn't carry the MIME type, so it can't be inconsistent in a way Quay would notice), and the same strict validation that rejects v2s1 images in v2s2 manifests would reject v2s1 images in OCI manifests.
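For reference, these are the manifest media types being discussed. The values below come from the Docker distribution and OCI image specifications, not from this thread's logs:

```shell
# Manifest media types involved in the v2s1 / v2s2 / OCI discussion.
# Values are taken from the Docker distribution and OCI image specs.
media_types='schema1 (v2s1), signed:  application/vnd.docker.distribution.manifest.v1+prettyjws
schema2 (v2s2):          application/vnd.docker.distribution.manifest.v2+json
manifest list (v2s2):    application/vnd.docker.distribution.manifest.list.v2+json
OCI image manifest:      application/vnd.oci.image.manifest.v1+json
OCI image index:         application/vnd.oci.image.index.v1+json'
printf '%s\n' "$media_types"
```

Each entry of a manifest list declares one of these media types for the image it references, which is how a registry's strict validation can reject a v2s1 entry inside a v2s2 list.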
This is included in the script as `rm ~/.local/share/containers/cache/blob-info-cache-v1.boltdb`.
I’m sorry, I have somehow missed both the script and some of the logs. Still, looking at https://github.com/containers/buildah/files/5418728/buildah-1-17-0.log, this really looks like containers/image#733: the first push of the layer with DiffID 50644c29ef5a succeeds, compressing that layer on the fly and creating 0d9094d70e9c0ee00ae22533ace8595d3b1a7a24976dd1750ab6e8e62ef3a771; the second push of the same layer seems to reuse that cached location, and v2s2 fails. (There’s a lot of …opportunity… to make the debug logs easier to understand; with multi-threaded pushes it is now often not clear which log line relates to which blob.)
That’s the wrong path, at least in the log linked above:
I added a `sudo rm /var/lib/containers/cache/blob-info-cache-v1.boltdb` to the script above. Again, without `--format v2s2` it still fails.
For reference, the current reproducer:

```shell
#!/bin/bash
echo -e "FROM alpine\nRUN touch dummy.txt" > Dockerfile

export BUILDAH_FORMAT=docker
export REG=quay.io/mhofmann
#export REG=docker.io/mh21

function cleanup() {
    rm ~/.local/share/containers/cache/blob-info-cache-v1.boltdb > /dev/null 2>&1
    sudo rm /var/lib/containers/cache/blob-info-cache-v1.boltdb > /dev/null 2>&1
    for host in localhost "$REG"; do
        for tag in tag tag-amd64; do
            buildah rmi $host/dummy:$tag > /dev/null 2>&1
        done
    done
}

# n jobs: create single-arch image
cleanup
buildah bud -f Dockerfile -t dummy:tag-amd64 .
buildah push dummy:tag-amd64 $REG/dummy:tag-amd64

# final job: create multi-arch manifest from all single-arch images
cleanup  # doesn't matter whether images are cleaned here
buildah manifest create dummy:tag docker://$REG/dummy:tag-amd64
buildah manifest push dummy:tag docker://$REG/dummy:tag --all  # --format v2s2 fixes it
#buildah manifest push --format v2s2 dummy:tag docker://$REG/dummy:tag --all
```
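As a side note, the skopeo-based workaround mentioned earlier in the thread (copying each single-arch image into local containers-storage before `manifest create`, then forcing v2s2 on push) could be sketched as follows. The registry/repository name is a placeholder, and the commands are only printed here (a dry run) rather than executed:

```shell
# Dry-run sketch of the skopeo workaround: the run() helper prints each
# command instead of executing it. REG is a hypothetical placeholder.
REG=quay.io/example
cmds=""
run() { printf '%s\n' "$*"; cmds="$cmds $*"; }

# Pull each single-arch image into local containers-storage first.
for arch in amd64; do
  run skopeo copy "docker://$REG/dummy:tag-$arch" "containers-storage:$REG/dummy:tag-$arch"
done

# Then create the manifest list and push it with an explicit v2s2 format.
run buildah manifest create dummy:tag "docker://$REG/dummy:tag-amd64"
run buildah manifest push --format v2s2 --all dummy:tag "docker://$REG/dummy:tag"
```

Dropping the `run` prefix turns the dry run into the real thing; the thread's conclusion is that the explicit `--format v2s2` alone is the simpler fix.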
Based on the ongoing discussions, I'm going to reopen this one. |
I believe this all works correctly now, I am closing, reopen if I am mistaken. |
We are seemingly experiencing this problem with Buildah 1.19.6. Though we are using images from docker.io as base images, the error is the same:
@xrstf does removing the blob info cache (`/var/lib/containers/cache/blob-info-cache-v1.boltdb` and `/var/lib/containers/storage/cache/blob-info-cache-v1.boltdb` for root, `~/.local/share/containers/storage/cache/blob-info-cache-v1.boltdb` and `~/.local/share/containers/cache/blob-info-cache-v1.boltdb` for non-root) have any effect? Are you able to test with a more recent version? 1.19.6 didn't have the benefit of containers/image#1138, but fixes applied after that may also have changed things.
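A small sketch that clears all four cache locations listed above in one go (`rm -f` makes missing files a no-op; the `/var/lib` paths will need root to remove if they exist):

```shell
# Remove every blob-info cache location mentioned in the comment above.
# rm -f succeeds even when a file is absent, so this is safe to run blindly.
caches=(
  /var/lib/containers/cache/blob-info-cache-v1.boltdb
  /var/lib/containers/storage/cache/blob-info-cache-v1.boltdb
  "$HOME/.local/share/containers/storage/cache/blob-info-cache-v1.boltdb"
  "$HOME/.local/share/containers/cache/blob-info-cache-v1.boltdb"
)
for f in "${caches[@]}"; do
  rm -f -- "$f"
done
```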
Description
We are creating multiple single-arch container images, uploading them to quay.io, and later, in another job, trying to generate multi-arch manifests for them. The manifest upload fails with "manifest invalid" for quay.io. When using docker.io, this works.
Minimal reproducer:
Error message for quay.io:
For docker.io, this works without error message:
This might be related to https://bugzilla.redhat.com/show_bug.cgi?id=1810768 and containers/image#733, as it can be fixed by pulling down the single-arch images via skopeo before the `buildah manifest create` and forcing a v2s2 manifest (without `--format v2s2` it does not work).
Versions: