copy: compute blob compression on reused blobs based on source MediaType #1138

Merged
merged 2 commits into containers:master from nalind:blobinfocache2-reuse Feb 19, 2021

Conversation


@nalind nalind commented Feb 3, 2021

Teach BlobInfoCache2 implementations to be able to retrieve the type of compression that was applied to a blob with a given digest, if we know it.

If we successfully reuse a blob while writing an image, and we know how a blob with that reused blob's digest was compressed, use that information when updating the MIME type for the reused blob.

This covers situations where TryReusingBlob() returns a BlobInfo containing a different digest and MIME type than the one we originally asked it to reuse. copy.imageCopier.copyLayers() records it in the LayerInfos of the update information that copy.imageCopier.copyUpdatedConfigAndManifest() passes to src.UpdatedImage(), and the UpdateLayerInfos() implementations discard the MIME type from the new BlobInfo in favor of the MIME type in the original layers list.

The buildah blob cache triggers this case when we use it while pushing an image where the local copy of the manifest specifies an uncompressed digest and MIME type, and the blob cache tries to substitute the digest of the compressed version in LayerInfosForCopy(), since it knows it can retrieve the compressed version from its cache. The blob cache doesn't get used very often, so we've only recently started noticing it since we started testing #1089 in openshift/origin#25830.
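
For orientation, the BlobInfoCache2 addition described above has roughly this shape (a sketch of the first iteration only; the real interface lives in c/image's internal/blobinfocache package, so it cannot be imported from outside the module, its exact method set and signatures may differ, and the final change took a different route, as discussed below):

package blobinfocache

import "github.com/opencontainers/go-digest"

const (
	// Sentinel "compressor names", recorded alongside real algorithm names such as "gzip" or "zstd".
	Uncompressed       = "uncompressed"
	UnknownCompression = "unknown"
)

// BlobInfoCache2 (sketch): the part relevant to this PR is recording and
// looking up how a blob with a given digest was compressed.
type BlobInfoCache2 interface {
	// RecordDigestCompressorName records that the blob with anyDigest was
	// compressed with the named algorithm, or one of the sentinels above.
	RecordDigestCompressorName(anyDigest digest.Digest, compressorName string)
	// DigestCompressorName returns the recorded compressor name for anyDigest,
	// or UnknownCompression if nothing has been recorded.
	DigestCompressorName(anyDigest digest.Digest) string
}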


nalind commented Feb 3, 2021

@mtrmac PTAL - in my local testing this corrects the error that openshift/origin#25830 is seeing.

@mtrmac mtrmac left a comment

A minimal first set of questions, I haven’t read the blob cache code yet to get the full picture.

The blob cache is already detecting the compression AFAICS, so it could ultimately record that and return a known-correct value.

Also, LayerInfosForCopy in the blob cache seems to be setting MIME types already. Is the problem that we just ignore that for some reason? Maybe that’s something that needs changing?

copy/copy.go Outdated
@@ -1098,6 +1098,18 @@ func (ic *imageCopier) copyLayer(ctx context.Context, srcInfo types.BlobInfo, to
Artifact: srcInfo,
}
}

// If we know the blob that we just reused is compressed (or not), note how it's compressed (or not).

Collaborator

Do I understand it correctly that this does the same thing the OperationAndAlgorithmForCompressor call + use of its return value does in dockerImageDest.TryReusingBlob — and essentially overrides that — basically because the blob cache doesn’t / currently can’t use BlobInfoCache2?

If so, this should probably use OperationAndAlgorithmForCompressor and update both fields, so that it does truly overwrite the update done by TryReusingBlob.
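
For reference, a helper along those lines maps a recorded compressor name to the two BlobInfo fields in question, roughly like this (a self-contained sketch against c/image's public packages; the real helper's name, location, and signature may differ):

package main

import (
	"fmt"

	"github.com/containers/image/v5/pkg/compression"
	"github.com/containers/image/v5/types"
)

// operationAndAlgorithm (illustrative): translate a compressor name recorded in
// the blob info cache into the CompressionOperation/CompressionAlgorithm pair
// that belongs in a types.BlobInfo.
func operationAndAlgorithm(compressorName string) (types.LayerCompression, *compression.Algorithm, error) {
	switch compressorName {
	case "uncompressed": // sentinel: the blob is known to be uncompressed
		return types.Decompress, nil, nil
	case "unknown": // sentinel: nothing has been recorded for this digest
		return types.PreserveOriginal, nil, nil
	default:
		algo, err := compression.AlgorithmByName(compressorName)
		if err != nil {
			return types.PreserveOriginal, nil, err
		}
		return types.Compress, &algo, nil
	}
}

func main() {
	op, algo, _ := operationAndAlgorithm("gzip")
	fmt.Println(op == types.Compress, algo.Name()) // true gzip
}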

Member Author

When the blob cache is wrapping a destination, it ends up calling the destination's TryReusingBlob or PutBlob, so it's not in control of what's in the returned BlobInfo. The code path where we hit this is when TryReusingBlob finds the target blob is already in the repository (around here) but doesn't supply the info from the cache.
I think that updating TryReusingBlob to always fill in this information on this path would also work, but there are multiple TryReusingBlob implementations, so having the copy package supply the information looked like it'd require fewer changes.


Collaborator

When the blob cache is wrapping a destination, it ends up calling the destination's TryReusingBlob or PutBlob, so it's not in control of what's in the returned BlobInfo.

Just to confirm: blobCacheDestination.PutBlob never substitutes with a different digest, so in that case all the correct info should have already been in the source manifest and/or LayerInfosForCopy; is that correct?

copy/copy.go Outdated
@@ -1098,6 +1098,18 @@ func (ic *imageCopier) copyLayer(ctx context.Context, srcInfo types.BlobInfo, to
Artifact: srcInfo,
}
}

// If we know the blob that we just reused is compressed (or not), note how it's compressed (or not).

Collaborator

Assuming this does the same thing as dockerImageDest: It’s just ugly to compute the same thing twice. Options:

  • Remove this code from dockerImageDest, and leave it only in the generic path. Conceptually a better match, OTOH a little performance impact due to locking the BIC over and over (and we can almost remove CandidateLocations2 again, except for the part where it drops UnknownCompression candidates)
  • Have the blob cache deal with this, consistently with dockerImageDest, one way or another (make BIC2 public, move blob cache to this repo, …)
  • Keep both, but only use the generic path if TryReusingBlob has not set CompressionOperation/CompressionAlgorithm
  • Something else?

Member Author

I agree that we need to keep CandidateLocations2 to filter out UnknownCompression candidates. We also need to be able to look up compression info even if the destination repository isn't a location where we've seen the blob before, i.e. independently of transport scopes, which CandidateLocations can't do, so I don't think we can avoid something like DigestCompressorName().

Moving the logic that sets the CompressionOperation and CompressionAlgorithm mostly out of dockerImageDest.TryReusingBlob() into copy also works, but as you note below, TryReusingBlob() still needs to learn to only try its initial HasBlob() check if it knows how the blob was compressed, so that it won't return a BlobInfo that specifies a digest whose compression type we don't know yet before trying locations from the blob info cache.

Treating reuse of a blob for which we don't know the compression type as an error breaks buildah's commit routine, which returns diffIDs from its LayerInfosForCopy() to avoid needing to recompress anything, since it knows that the containers-storage TryReusingBlob() will succeed anyway. Having the containers-storage PutBlob() add a compression note to the blob info cache works around that, but I'm not sure the combination is robust enough for other users of the library.

copy/copy.go Outdated
@@ -1098,6 +1098,18 @@ func (ic *imageCopier) copyLayer(ctx context.Context, srcInfo types.BlobInfo, to
Artifact: srcInfo,
}
}

// If we know the blob that we just reused is compressed (or not), note how it's compressed (or not).

Collaborator

In the dockerImageDest path, we have had to ignore candidates with unknown compression. AFAICS this doesn’t happen here, so this could still produce invalid images.


nalind commented Feb 5, 2021

Updated to move most of the knowledge of compression types out of dockerImageDestination and into the copy package.

At the moment we treat it as an error when TryReusingBlob returns a BlobInfo with a digest for which we have no compression information, but I don't know if that's a reasonable expectation to have.

In order to avoid triggering that error from buildah's commit logic, which returns uncompressed digests from its LayerInfosForCopy() for layer blobs, I ended up adding compression notes for the uncompressed digests that we pass to RecordDigestUncompressedPair() in the places where we call it.
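
The workaround amounts to something like the following (a sketch against a stand-in interface; the real calls go to types.BlobInfoCache and the BlobInfoCache2 additions proposed here, and the exact names may differ):

package blobinfocache

import "github.com/opencontainers/go-digest"

// digestCompressionRecorder is a stand-in for the cache methods involved.
type digestCompressionRecorder interface {
	RecordDigestUncompressedPair(anyDigest digest.Digest, uncompressed digest.Digest)
	RecordDigestCompressorName(anyDigest digest.Digest, compressorName string)
}

// noteUncompressedPair: wherever we already record a compressed->uncompressed
// digest pair, also note that the uncompressed digest uses no compression, so
// that a later reuse of that digest is not treated as "unknown compression".
func noteUncompressedPair(cache digestCompressionRecorder, compressed, uncompressed digest.Digest) {
	cache.RecordDigestUncompressedPair(compressed, uncompressed)
	cache.RecordDigestCompressorName(uncompressed, "uncompressed") // sentinel name
}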


mtrmac commented Feb 5, 2021

and the UpdateLayerInfos() implementations discard the MIME type from the new BlobInfo in favor of the MIME type in the original layers list.

Is that correct, BTW? I think we’ve ended up doing it that way because dockerImageDestination.PutBlob just returns a BlobInfo{Digest:Size:}, historically from times when BlobInfo.MediaType did not even exist. Maybe we can’t require that to be filled by PutBlob/TryReusingBlob without breaking the API, but if it is set (and CompressionOperation is not? or even then?), conceptually those values should be used.

OTOH ManifestUpdateOptions.LayerInfos is explicitly documented to ignore the MediaType field — from

@mtrmac mtrmac closed this Feb 5, 2021
@mtrmac mtrmac reopened this Feb 5, 2021

mtrmac commented Feb 5, 2021

OTOH ManifestUpdateOptions.LayerInfos is explicitly documented to ignore the MediaType field — from …

#357 documenting status at the time, not intent. Yet another fallout from abruptly declaring a stable API without any review/preparation :(

@mtrmac mtrmac left a comment

Do I get the flow right?

  • blobCacheSource.LayerInfosForCopy notices a cached compressed blob, and creates a replacedInfos array with the right MIME type
  • TryReusingBlob notices that the same compressed blob already exists at the destination, and returns here, with the right MIME type
  • imageCopier correctly sets ic.manifestUpdates.LayerInfos with the compressed digest and MIME type
  • … but the MIME type is ignored

? It seems to me that TryReusingBlob has nothing to do with how that fails — if we go through the full PutBlob upload route, Compression* must be set as well (as this PR does in the PreserveOriginal path).

A bit closer would be saying that LayerInfosForCopy needs to set Compression* (but then we would have to preserve that, or not, across non-cooperative TryReusingBlob/PutBlob, ugh).

But basically it seems to me that UpdatedImage ignoring MediaType updates is the root of the issue, and it would be best handled there. That would also involve breaking the API for types.Image, but luckily the copy implementation doesn’t care about types.Image, only about the value returned by image.FromUnparsedImage, and we can replace / augment that one implementation.

Would this work?

  • Extend manifest.updatedMIMEType to optionally replace mimeType with updated.MediaType if non-empty, before applying compression updates.
  • Add UpdateLayerInfosWithMIME (with a better name) to the three format implementations in manifest, use the above
  • Add UpdatedImageWithMIME to image.genericManifest, call the above
  • Add an alternative to image.FromUnparsedImage that returns a new type (internal-only??) that contains UpdatedImageWithMIME (but not the original UpdatedImage?)
  • Call that from c/image/copy
  • Then we don’t need to change anything about the TryReusingBlob implementations and call sites.

copy/copy.go Outdated
// Update the BlobInfo with information about how the reused blob is compressed.
compressorName := ic.c.blobInfoCache.DigestCompressorName(blobInfo.Digest)
if compressorName == internalblobinfocache.UnknownCompression {
return types.BlobInfo{}, "", errors.Errorf("Tried to reuse blob %s with unknown compression", blobInfo.Digest)

Collaborator

This will just break with NoCache or similar implementations.

And worse, we can’t recover, because layer writes are documented to happen in order, one at a time, so after TryReusingBlob succeeds, we can’t use PutBlob.


Collaborator

… Oh, right, copy is using the default cache, which is never NoCache; still, it can be in-memory or just empty (TryReusingBlob is the first call we make about a blob).

Member Author

Yeah, I was really unsure about treating it as an error.

// at the destination. If we don't know, we want to fall back to trying known locations
// with known compression types.
bic := blobinfocache.FromBlobInfoCache(cache)
if compressorName := bic.DigestCompressorName(info.Digest); compressorName != blobinfocache.UnknownCompression {

Collaborator

Within the c/image codebase, at this point the input info comes from the source manifest and is presumed to be correct, so we don’t need to change the MIME type (apart from format conversions).

It feels rather expensive to give up on layer reuse in this trivial case here.


nalind commented Feb 8, 2021

Do I get the flow right?

* `blobCacheSource.LayerInfosForCopy` notices a cached compressed blob, and creates a `replacedInfos` array with the right MIME type

* `TryReusingBlob` notices that the same compressed blob already exists at the destination, and returns here, with the right MIME type

* `imageCopier` correctly sets `ic.manifestUpdates.LayerInfos` with the compressed digest and MIME type

* … but the MIME type is ignored

?

Yup!

It seems to me that TryReusingBlob has nothing to do with how that fails — if we go through the full PutBlob upload route, Compression* must be set as well (as this PR does in the PreserveOriginal path).

A bit closer would be saying that LayerInfosForCopy needs to set Compression* (but then we would have to preserve that, or not, across non-cooperative TryReusingBlob/PutBlob, ugh).

But basically it seems to me that UpdatedImage ignoring MediaType updates is the root of the issue, and it would be best handled there. That would also involve breaking the API for types.Image, but luckily the copy implementation doesn’t care about types.Image, only about the value returned by image.FromUnparsedImage, and we can replace / augment that one implementation.

Would this work?

* Extend `manifest.updatedMIMEType` to optionally replace `mimeType` with `updated.MediaType` if non-empty, before applying compression updates.

* Add `UpdateLayerInfosWithMIME` (with a better name) to the three format implementations in `manifest`, use the above

* Add `UpdatedImageWithMIME` to `image.genericManifest`, call the above

* Add an alternative to `image.FromUnparsedImage` that returns a new type (internal-only??) that contains `UpdatedImageWithMIME` (but not the original `UpdatedImage`?)

* Call that from `c/image/copy`

* Then we don’t need to change anything about the `TryReusingBlob` implementations and call sites.

I tried the first of these, and it breaks when we do a format conversion because the MediaType in the update isn't one that's allowed by the new manifest type.

An alternate pitch: modify dockerImageDestination.blobExists() to replace its HEAD requests with GET requests when checking for a blob in the destination repository, with a "Range" header specified so that it retrieves just enough data to let it check for compression magic, and use the result to decide what to return to callers. dockerImageDestination.TryReusingBlob() can then use this to finish filling out the BlobInfo that it returns when the check that it makes before looking at the blob info cache's candidates succeeds. The result that TryReusingBlob() then hands back to the copy package will let the current image-updating logic compute the correct MediaType.
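
To make that pitch concrete, sniffing a remote blob's compression from its first few bytes could look roughly like this (purely illustrative, and not what was eventually merged; the URL is hypothetical, and real code would go through the docker transport's authenticated client rather than http.DefaultClient):

package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

// sniffBlobCompression issues a ranged GET for the first few bytes of a blob
// and checks for well-known compression magic numbers.
func sniffBlobCompression(url string) (string, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return "", err
	}
	req.Header.Set("Range", "bytes=0-3") // enough for the gzip and zstd magic numbers
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	head := make([]byte, 4)
	n, err := io.ReadFull(resp.Body, head)
	if err != nil && err != io.ErrUnexpectedEOF {
		return "", err
	}
	head = head[:n]
	switch {
	case bytes.HasPrefix(head, []byte{0x1f, 0x8b}): // gzip magic
		return "gzip", nil
	case bytes.HasPrefix(head, []byte{0x28, 0xb5, 0x2f, 0xfd}): // zstd magic
		return "zstd", nil
	default:
		return "uncompressed (or unrecognized)", nil
	}
}

func main() {
	// Hypothetical blob URL; shown only to demonstrate the call.
	kind, err := sniffBlobCompression("https://registry.example.com/v2/some/repo/blobs/sha256:...")
	fmt.Println(kind, err)
}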

@nalind nalind force-pushed the blobinfocache2-reuse branch 2 times, most recently from 35880a6 to b280c61 on February 8, 2021 22:39

mtrmac commented Feb 8, 2021

  • Extend manifest.updatedMIMEType to optionally replace mimeType with updated.MediaType if non-empty, before applying compression updates.

I tried the first of these, and it breaks when we do a format conversion because the MediaType in the update isn't one that's allowed by the new manifest type.

So we would need to:

  • Set MediaType from the update, if set
  • Convert the manifest, if desired
  • Apply CompressionOperation etc.

? Ugh. It’s difficult to see any consistency in that (which parts of ManifestUpdates must be valid in the old manifest schema, and which ones in the new one). It feels to me that there should be a way to express the problem space in a more (much more?) elegant way, but I can’t see it.

Perhaps it could work to immediately create an UpdatedImage if LayerInfosForCopy requests any edits (i.e. these edits would all be done in the old format; and we could later do another round of edits based on reuse/compression).


An alternate pitch: modify dockerImageDestination.blobExists() to replace its HEAD requests with GET requests when checking for a blob in the destination repository, with a "Range" header specified so that it retrieves just enough data to let it check for compression magic, and use the result to decide what to return to callers. dockerImageDestination.TryReusingBlob() can then use this to finish filling out the BlobInfo that it returns when the check that it makes before looking at the blob info cache's candidates succeeds. The result that TryReusingBlob() then hands back to the copy package will let the current image-updating logic compute the correct MediaType.

I’m fine with saying that in the canSubstitute cases, the transport must bear some responsibility for the manifest edits; but in the straightforward “yup, this blob already exists here” case, this really should work without the transport helping (when we do have all data necessary to make it work, it’s just a matter of code structure).


nalind commented Feb 9, 2021

This seems to be producing the right results.
The blob cache's TryReusingBlob() will return what it gets back from passing the BlobInfo it's been given to either TryReusingBlob() or PutBlob() in the destination that it's wrapping, which I think delegates decisions about what to put in those results to the destination.
The containers-storage TryReusingBlob() either returns the values that it's passed or the digest and size of the uncompressed equivalent, in which case the compression-related fields, being left unset, already hold the values we'd expect for uncompressed blobs.
The dockerImageDestination TryReusingBlob() is getting updated here. I wasn't sure whether we wanted to populate the blob info cache with compression info when it isn't verifying the digest of the blob in that case, so at the moment it isn't recording that information.


nalind commented Feb 9, 2021

I’m fine with saying that in the canSubstitute cases, the transport must bear some responsibility for the manifest edits; but in the straightforward “yup, this blob already exists here” case, this really should work without the transport helping (when we do have all data necessary to make it work, it’s just a matter of code structure).

Yeah, I don't have a good idea for passing it around without changing public APIs, either in copy and the image-related types themselves, or by extending the blob info cache to carry that information from external implementations to the places where we want to read it.

@nalind nalind changed the title from "copy: use blobinfocache2 compression info for reused blobs" to "dockerImageDestination.blobExists: guess blob compression" Feb 10, 2021

mtrmac commented Feb 10, 2021

extending the blob info cache to be able to carry that information from external implementations to places where we want to be able to read that info.

The information already exists in LayerInfosForCopy, doesn’t it? So this can be fixed locally in c/image/copy.


nalind commented Feb 10, 2021

extending the blob info cache to be able to carry that information from external implementations to places where we want to be able to read that info.

The information already exists in LayerInfosForCopy, doesn’t it? So this can be fixed locally in c/image/copy.

You're suggesting that, if LayerInfosForCopy() provides a MediaType in a BlobInfo, copyLayers() should fill in the corresponding CompressionAlgorithm value in the input BlobInfo if it isn't already set, and that, if TryReusingBlob() returns a BlobInfo containing the same digest, it should copy the CompressionAlgorithm value from the source BlobInfo to the BlobInfo that it returns, which eventually gets handed off to UpdatedImage()?
That also appears to work for my case. Updating to do that instead of reading the first bits of blobs in the registry in dockerImageDestination.blobExists.
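
Concretely, the MediaType-based fallback converged on here behaves roughly as follows (a simplified stand-alone sketch using literal MIME strings; the merged code lives in copyLayer(), uses the manifest/OCI media-type constants, and covers more cases):

package main

import (
	"fmt"

	"github.com/containers/image/v5/pkg/compression"
	"github.com/containers/image/v5/types"
)

// compressionFromMediaType fills in CompressionOperation/CompressionAlgorithm
// for a reused blob when the transport didn't supply them, based on the
// MediaType that came from the source manifest or LayerInfosForCopy().
func compressionFromMediaType(info types.BlobInfo) types.BlobInfo {
	if info.CompressionAlgorithm != nil {
		return info // the transport already told us; leave it alone
	}
	switch info.MediaType {
	case "application/vnd.docker.image.rootfs.diff.tar.gzip",
		"application/vnd.oci.image.layer.v1.tar+gzip":
		algo := compression.Gzip
		info.CompressionOperation = types.PreserveOriginal
		info.CompressionAlgorithm = &algo
	case "application/vnd.oci.image.layer.v1.tar+zstd":
		algo := compression.Zstd
		info.CompressionOperation = types.PreserveOriginal
		info.CompressionAlgorithm = &algo
	case "application/vnd.docker.image.rootfs.diff.tar",
		"application/vnd.oci.image.layer.v1.tar":
		info.CompressionOperation = types.PreserveOriginal
		info.CompressionAlgorithm = nil // known to be uncompressed
	}
	return info
}

func main() {
	updated := compressionFromMediaType(types.BlobInfo{MediaType: "application/vnd.oci.image.layer.v1.tar+gzip"})
	fmt.Println(updated.CompressionAlgorithm.Name()) // gzip
}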

@nalind nalind force-pushed the blobinfocache2-reuse branch 3 times, most recently from e019220 to b76a051 on February 10, 2021 22:45
@nalind nalind changed the title from "dockerImageDestination.blobExists: guess blob compression" to "copy: compute blob compression on reused blobs based on source MediaType" Feb 10, 2021

mtrmac commented Feb 10, 2021

Thanks, that’s much better than the dockerImageDestination-dependent version; it would need detailed comments to record the rationale/distant effects but otherwise something I wouldn’t actively NAK at this point.


I think the cleanest way to express this is the version with two UpdatedImage calls (one cleanly just changing layer details within a single manifest schema, one possibly changing manifest schema + compression, i.e. the rule being “you can call UpdatedImage with any set of desired outcomes that are consistent, i.e. you can request a conversion from v2s1 to zstd+OCI in one step, or a conversion from zstd+OCI to gzip+v2s1 in one step, but not a conversion from zstd+OCI to v2s1 in one step”):

         if updatedSrcInfos != nil && !reflect.DeepEqual(srcInfos, updatedSrcInfos) {
                 if !ic.canModifyManifest {
                         return errors.Errorf("Copying this image requires changing layer representation, which is not possible (image is signed or the destination specifies a digest)")
                 }
-                srcInfos = updatedSrcInfos
-                srcInfosUpdated = true
+                updates := *ic.manifestUpdates
+                updates.LayerInfos = updatedSrcInfos
+                ic.src, err = ic.src.UpdatedImageWithMIME(ctx, updates) // new method of `image.memoryImage`
+                // err…
+                srcInfos = ic.src.LayerInfos()
+                // assert that LayerInfosForCopy does not require an update?
         }

+ figure out interaction with determineManifestConversion (probably move all of LayerInfosForCopy handling out of copyLayers before the manifest conversion)

If pressed for time, maybe punt on UpdatedImageWithMIME for now and do that in a private helper function via the Compression{Operation,Algorithm} edits, with a clearly documented TODO.


nalind commented Feb 11, 2021

The contents of the srcInfos slice are not referred to in copyLayers() again except to compare them to the destInfos slice, so I've moved the logic that populates the CompressionAlgorithm field from there into copyLayer(), much closer to where their values get used after a call to TryReusingBlob() succeeds. It has the additional effect of passing that updated information to UpdatedImage when the addition of that information is the only thing that's changed, which wasn't the case just before.

Reworking copy the way you're hinting at sounds to me like a larger conversation, and I don't understand all of what you've outlined. The original case that I'm trying to solve with this PR does have some time pressure on it, though, so I'd rather not try to do that in here.


(@mitr For the record: I have mistakenly edited this comment; hopefully it’s now reverted to the original)

copy/copy.go Outdated

// If the reused blob has the same digest as the one we asked for, but
// the transport didn't/couldn't supply compression info, fill it in based
// on what we know from the srcInfo.

Collaborator

Please document that this, as a side effect, causes a MIME type update if LayerInfosForCopy replaces a blob, because LayerInfosForCopy’s MediaType update does not cause a manifest edit.

Member Author

Added: If the srcInfos came from LayerInfosForCopy(), then UpdatedImage() will call UpdateLayerInfos(), which uses this information to compute the MediaType value for the updated layer infos, and if the transport didn't pass the information along from its input to its output, then it can derive the MediaType incorrectly.

@@ -1353,7 +1374,11 @@ func (c *copier) copyBlobFromStream(ctx context.Context, srcStream io.Reader, sr
compressionOperation = types.PreserveOriginal
inputInfo = srcInfo
uploadCompressorName = srcCompressorName
uploadCompressionFormat = nil
if isCompressed {

Collaborator

Please document that this, as a side effect, causes a MIME type update if LayerInfosForCopy replaces a blob, because LayerInfosForCopy’s MediaType update does not cause a manifest edit.

Member Author

Added: Remember if the original blob was compressed, and if so how, so that if LayerInfosForCopy() returned something that differs from what was in the source's manifest, and UpdatedImage() needs to call UpdateLayerInfos(), it will be able to correctly derive the MediaType for the copied blob.

copy/copy.go Outdated
// If the srcInfo doesn't contain compression information, try to compute it from the
// MediaType, which was usually read from a manifest by way of LayerInfos(), so that
// we can use the information to ensure that the eventual call to UpdatedImage() that
// uses the blobInfo we return can correctly compute updated MediaType values.

Collaborator

Suggested change
// uses the blobInfo we return can correctly compute updated MediaType values.
// uses the blobInfo we return can correctly compute updated MediaType values
// (sadly UpdatedImage() is documented to not update MediaTypes
// from ManifestUpdateOptions.LayerInfos[].MediaType, so we are doing it
// indirectly).
//
// This MIME type → compression mapping belongs in manifest-specific code in c/image/manifest
// (but we should preferably replace/change UpdatedImage instead of productizing this workaround).

Member Author

Added.


mtrmac commented Feb 12, 2021

Reworking copy the way you're hinting at sounds to me like a larger conversation, and I don't understand all of what you've outlined.

#1147. Warning: compile-tested only; I don’t know what I don’t know, and historically with these conversations I’ve been consistently missing things that were revealed by testing.

When copying a compressed blob without making any modifications to it,
set the CompressionFormat in the returned BlobInfo to reflect the
compression algorithm that was used to compress the blob.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
When copying blobs, if we end up reusing a blob from the destination
with the same digest as that given in the srcInfo, but don't know how
that reused blob was compressed, compute the compression information
from the MediaType included in the srcInfo, if one was supplied.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>

jomeier commented Feb 18, 2021

Hi, any progress on that?


mtrmac commented Feb 18, 2021

@nalind Up to you and/or other reviewers whether to use this or #1147 or something else. I’m not happy about this approach but it works and it’s documented clearly enough, so feel free to merge if you prefer this one.


jomeier commented Feb 19, 2021

No worries. It's only the world that is waiting for this fix.


rhatdan commented Feb 19, 2021

@nalind is out on PTO today. I say we go with this and move along, since he worked hard on it.


rhatdan commented Feb 19, 2021

LGTM

@rhatdan rhatdan merged commit cdb05b4 into containers:master Feb 19, 2021

jomeier commented Feb 19, 2021

I understand that you'd like to improve this further. But lots of problems arose everywhere because of this issue.

Thanks to everybody who worked hard on finding a pragmatic solution!

@nalind nalind deleted the blobinfocache2-reuse branch February 22, 2021 16:50
nalind added a commit to nalind/image that referenced this pull request Mar 3, 2021
Since containers#1138, we've been paying attention to the MediaType values
returned by LayerInfosForCopy(), so lying about layer data being
compressed when the data we provide isn't would cause uncompressed
layers to be mistakenly marked as compressed when we copied the image
without attempting to add or modify the compression of layers.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>
vrothberg pushed a commit that referenced this pull request Mar 3, 2021
Since #1138, we've been paying attention to the MediaType values
returned by LayerInfosForCopy(), so lying about layer data being
compressed when the data we provide isn't would cause uncompressed
layers to be mistakenly marked as compressed when we copied the image
without attempting to add or modify the compression of layers.

Signed-off-by: Nalin Dahyabhai <nalin@redhat.com>