Usage of AdditionalImageStores as layerstore can break #1751
Comments
AdditionalImageStores may change independently of the current RWLayerStore, causing layer references into the ROImageStore (created during reuseBlob operations) to break. Optionally allow ignoring stores in AdditionalImageStores as a layer store. Closes: #20603 Signed-off-by: Mostafa Emami <mustafaemami@gmail.com>
@rhatdan thanks a lot for the quick response. Though both pulling/deleting the image fix the issue, it requires the client programs to automatically detect the issue and find the culprit image somehow, maybe by parsing the error of
My rudimentary suggestion was to make it configurable whether entries in
in what cases would it be useful to leave these stores in the
@giuseppe thanks for the reply. My suggestion does not entail ignoring AdditionalImageStores entries as image storage, but rather ignoring them as a layer store used to deduplicate layers of the RW storage. Clients could then still start containers off of both image stores (RO/RW), while keeping the ability to update the stores independently of each other without the risk of breaking links.
@alexlarsson FYI.
If I understand correctly, the effect would be to have two separate, independently maintained, stores, each of which can add or remove images together with their dependent layers, such that removing an image, and its associated layers, from one store does not break anything in the other store, because no layers are shared/reused across stores. That would also mean that putting shared base images into the additional store would have no benefit; a pull of an application derived from that base image would need to write the full image into the local store. IOW, that additional store would only be useful for hosting/distributing complete top-level applications (which is still useful, sure).

That’s not a trivial “do not check the other store” boolean: if an image is pulled into the local store, we would need to make sure that all layers are stored into the local store. Right now, layer/image existence is a yes/no state; from e.g. Podman’s view, a layer either exists (and is always reused), or doesn’t. We would need some new state indicating “a layer exists but pull it anyway, or at least copy-up from the other store”.

That’s… a plausible feature in principle. My initial impression is that we already have way too many options and knobs (additional image stores, additional layer stores, transient stores, …?) with unclear interactions, and in some cases likely-rarely-used, possibly unmaintained, or outright dubious, implementations (just the top level of
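The “yes/no vs. pull-anyway” distinction above can be sketched as a tri-state. This is a hypothetical Python model to illustrate the idea; none of these names exist in containers/storage, whose code is Go:

```python
from enum import Enum, auto

class LayerPresence(Enum):
    """Illustrative layer-existence states (hypothetical, not a real API)."""
    ABSENT = auto()              # not stored anywhere: must pull
    PRESENT_LOCAL = auto()       # in the local RW store: always reusable
    PRESENT_ADDITIONAL = auto()  # exists only in an additional RO store

def should_pull(presence, reuse_additional_layers):
    """Decide whether a layer must be pulled (or copied up) into the local store.

    Today's model is effectively boolean: any existing layer is reused.
    The third state is what would force a pull when the layer lives only
    in an additional store and cross-store layer reuse is disabled.
    """
    if presence is LayerPresence.ABSENT:
        return True
    if presence is LayerPresence.PRESENT_ADDITIONAL:
        return not reuse_additional_layers
    return False  # PRESENT_LOCAL: reuse freely
```

With `reuse_additional_layers=True` this collapses back to today’s boolean behavior; with `False`, layers found only in an additional store are copied into the local store anyway.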
Following your line of thought, @mtrmac, on how many configuration options exist around storage sharing and how unclear their interactions are, I can see how defining the scope of each feature may help to keep the knobs a bit saner.
Feature request description
Usage of AdditionalImageStores as a layer store for RWStorage can lead to a broken RWStorage (at least with the overlay driver) if any of the stores in AdditionalImageStores changes.
When an image is pulled into RWStorage and some of its layers already exist in ROStorage, a link is made from RWStorage to ROStorage instead of pulling those layers. If at any time in the future the ROStorage changes in a way that breaks the link, the RWStorage becomes corrupted.
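The failure mode can be shown with a minimal toy model. This is a hypothetical Python sketch (the real containers/storage code is Go and its APIs differ); all class and method names here are illustrative:

```python
class ROStore:
    """Toy read-only additional image store: digest -> layer bytes."""
    def __init__(self, name):
        self.name = name
        self.layers = {}

class RWStore:
    """Toy read-write store that reuses layers from additional stores by reference."""
    def __init__(self, name, additional_stores):
        self.name = name
        self.additional = additional_stores
        self.layers = {}  # digest -> ("data", bytes) or ("ref", ROStore)

    def pull_layer(self, digest, data):
        # Deduplication: if an additional store already holds the layer,
        # record only a reference to it instead of storing a copy.
        for ro in self.additional:
            if digest in ro.layers:
                self.layers[digest] = ("ref", ro)
                return
        self.layers[digest] = ("data", data)

    def read_layer(self, digest):
        kind, target = self.layers[digest]
        if kind == "data":
            return target
        # Cross-store reference: breaks if the RO store changed independently.
        if digest not in target.layers:
            raise RuntimeError(
                f"layer {digest} vanished from {target.name}; RW store is corrupted")
        return target.layers[digest]

ro = ROStore("ro")
ro.layers["sha256:base"] = b"base layer"
rw = RWStore("rw", [ro])

rw.pull_layer("sha256:base", b"base layer")  # stored as a reference, not a copy
del ro.layers["sha256:base"]                 # RO store updated independently
try:
    rw.read_layer("sha256:base")
except RuntimeError as err:
    print(err)                               # the cross-store link is broken
```

The point of the sketch is the last three steps: the pull succeeds without copying the layer, so a later independent change to the RO store silently invalidates the RW store.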
Suggest potential solution
The behavior is obviously useful in many cases, but it is problematic in others. Therefore the decision should be left to clients to define which AdditionalImageStores may also be used as a layer store for RWStorage.
The suggestion is to mark the stores in storage.conf so that they can optionally be ignored as a layer store.
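One possible shape for such a marker in storage.conf. This is a hypothetical sketch: the `additionalimagestores` key is a real containers/storage option, but the per-store exclusion key shown commented out does not exist and its name is invented here:

```toml
[storage]
driver = "overlay"

[storage.options]
# Real option: read-only stores whose images (and layers) are shared
# with the read-write store.
additionalimagestores = [
  "/var/lib/shared-images",
]

# Hypothetical option (does not exist today): stores listed here would
# still be usable as image stores, but would be ignored for layer
# deduplication, so pulls into the RW store always copy layers locally.
# additionallayerstoreexclude = [
#   "/var/lib/shared-images",
# ]
```

Whether this should be a separate list, a per-entry flag, or something else entirely is exactly the kind of knob-design question raised in the discussion above.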