Device mapper image pull performance is much slower (5x) than overlayfs #6625
-
We use devmapper for Kata and also see these sorts of huge discrepancies for the image pull and the first run of a new image (90s for devmapper vs. 10s for overlayfs is not unusual). Later runs using the same image (e.g. reusing the snapshot) seem to perform fine, but it would be good to understand what's going on and whether we can do something about that first run. We are using a (gulp) loop device, so potentially that might be related, but it still smells like something that could be optimized. To be clear, we are seeing this with v1.7.1.
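A minimal sketch for reproducing the gap, assuming `ctr` is available and both snapshotters are configured on the node (the image ref is just a placeholder):

```sh
# Pull the same image with each snapshotter and compare wall-clock time.
# Note: removing the image ref leaves blobs in the content store until GC,
# so repeated runs mostly compare unpack/snapshot time, which is where the
# devmapper gap shows up anyway.
ctr image rm docker.io/library/ubuntu:22.04 || true
time ctr image pull --snapshotter devmapper docker.io/library/ubuntu:22.04

ctr image rm docker.io/library/ubuntu:22.04 || true
time ctr image pull --snapshotter overlayfs docker.io/library/ubuntu:22.04
```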
-
Should also note that this is with fairly large […]. I've tried the dm pool with and without […].
-
Ah... I see the problem now. Those values are basically the opposite of the standard mkfs.ext4 defaults that are otherwise in effect. FWIW, I think the current defaults are confusing and unexpected, and by the time it gets down to the mkfs.ext4 call, the extended options really should be blank in most cases now. Kubernetes has done it that way for six years, so it really should be considered: https://github.com/kubernetes/kubernetes/pull/38865/files. I think that was the gist of the observation in #6119, and the net result was that we were given the ability to set […].
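To make the cost concrete: if the extended options in play are the non-lazy ones (my assumption; the exact string containerd passes may differ by version), timing mkfs.ext4 directly on a thin device shows the difference. The device path below is hypothetical:

```sh
# Hypothetical thin snapshot device; substitute your own.
DEV=/dev/mapper/contd-thin-snap-1

# Forcing full inode-table and journal initialization writes all that
# metadata up front, which is slow on a large, loop-backed thin device.
# (-F skips the "existing filesystem" prompt on re-runs.)
time mkfs.ext4 -F -E nodiscard,lazy_itable_init=0,lazy_journal_init=0 "$DEV"

# Stock defaults leave lazy init on, deferring that work to first mount,
# so the format itself returns almost immediately.
time mkfs.ext4 -F "$DEV"
```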
-
I'm now looking at just using […]. Most nodes will have their mke2fs defaults set with the […].
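For anyone who wants to experiment, here is a sketch of overriding the snapshotter's mkfs options from containerd's config. The `fs_options` key reflects my understanding of the devmapper snapshotter config in recent containerd releases, and the pool/path/size values are placeholders; verify against your version before relying on it:

```sh
# Append a devmapper snapshotter section with explicit mkfs options.
cat <<'EOF' >>/etc/containerd/config.toml
[plugins."io.containerd.snapshotter.v1.devmapper"]
  root_path = "/var/lib/containerd/devmapper"
  pool_name = "contd-thin-pool"
  base_image_size = "10GB"
  # Re-enable lazy init explicitly; an empty string may simply fall
  # back to the built-in defaults, depending on the version.
  fs_options = "-E lazy_itable_init=1,lazy_journal_init=1"
EOF

systemctl restart containerd
```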
-
Hi folks, we are using the containerd device mapper snapshotter but seeing huge performance degradation (e.g. from 13s to 60s+) during image pulls vs. the default OverlayFS. It also seems to perform differently on different OS distros in Azure VMs. If anyone has encountered this kind of behavior before, please feel free to DM me; thanks in advance.
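Given the mkfs discussion above, one quick diagnostic sketch (not a fix) is to compare what ext4 defaults each distro ships and confirm which snapshotter is actually active:

```sh
# Lazy-init settings here change how expensive every snapshot format is,
# and they differ between distros.
cat /etc/mke2fs.conf

# Confirm the devmapper snapshotter plugin is loaded and healthy.
ctr plugins ls | grep devmapper
```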