"cacheFrom" value not considered when building dev container features #153

Closed
Chuxel opened this issue Aug 31, 2022 · 21 comments · Fixed by #233
Labels: bug (Something isn't working), verified

@Chuxel (Member) commented Aug 31, 2022

The build.cacheFrom value allows you to publish an image and then use it as a local cache for subsequent builds on another machine (as does the Docker Compose equivalent). Currently, the contents of the image referenced by cacheFrom do not appear to be used during the dev container features build step, which significantly reduces its utility.

Repro:

  1. Go to https://github.com/chuxel/feature-library and create a Codespace, or clone the project locally and create a dev container from it
  2. Watch the Dockerfile build

Expected: Since there is a pre-built image with the "docker-in-docker" feature inside it (see https://github.com/Chuxel/feature-library/blob/main/.github/workflows/devcontainer-image.yaml), the feature docker build reuses the cached result.
Actual: The image contents are only used during the core Dockerfile build, not during the feature build.

Log section illustrating the cache being used in one case, but not the other:

```
2022-08-31 19:41:33.478Z: #14 [dev_container_auto_added_stage_label 2/2] RUN su node -c "npm install -g @devcontainers/cli"
2022-08-31 19:41:33.758Z: #14 pulling sha256:0df7bb7c7da4a5becc4ee99cdb66cc9175a54cc0d860b26de69a7376b687e6a9
2022-08-31 19:41:34.206Z: #14 pulling sha256:0df7bb7c7da4a5becc4ee99cdb66cc9175a54cc0d860b26de69a7376b687e6a9 0.4s done
2022-08-31 19:41:38.855Z: #14 CACHED
2022-08-31 19:41:38.997Z: 
2022-08-31 19:41:38.997Z: #15 [dev_containers_target_stage 1/2] COPY --from=dev_containers_feature_content_source . /tmp/build-features/
2022-08-31 19:41:39.107Z: #15 DONE 0.2s
2022-08-31 19:41:39.246Z: 
2022-08-31 19:41:39.247Z: #16 [dev_containers_target_stage 2/2] RUN cd /tmp/build-features/docker-in-docker_1 && export $(cat devcontainer-features.env | xargs) && chmod +x ./install.sh && ./install.sh
2022-08-31 19:42:45.745Z: #16 66.46 DOCKER_MOBY_ARCHIVE_VERSION_CODENAMES=buster bullseye bionic focal jammy
2022-08-31 19:42:45.745Z: #16 66.46 Distro codename  'bullseye'  matched filter  'buster bullseye bionic focal jammy'
2022-08-31 19:42:45.745Z: #16 66.47 Running apt-get update...
2022-08-31 19:42:45.884Z: #16 66.76 Get:1 http://deb.debian.org/debian bullseye InRelease [116 kB]
2022-08-31 19:42:46.022Z: #16 66.78 Get:2 http://deb.debian.org/debian-security bullseye-security InRelease [48.4 kB]
2022-08-31 19:42:46.022Z: #16 66.78 Get:3 http://deb.debian.org/debian bullseye-updates InRelease [44.1 kB]
2022-08-31 19:42:46.022Z: #16 66.79 Get:4 https://dl.yarnpkg.com/debian stable InRelease [17.1 kB]
```
Chuxel added the bug label Aug 31, 2022
@Chuxel (Member, Author) commented Aug 31, 2022

I am wondering if this line is invalidating the cache:

```
2022-08-31 19:41:38.997Z: #15 [dev_containers_target_stage 1/2] COPY --from=dev_containers_feature_content_source . /tmp/build-features/
```

It's possible COPY --link may be needed:

https://docs.docker.com/engine/reference/builder/#copy---link

> Use --link to reuse already built layers in subsequent builds with --cache-from even if the previous layers have changed. This is especially important for multi-stage builds where a COPY --from statement would previously get invalidated if any previous commands in the same stage changed, causing the need to rebuild the intermediate stages again. With --link the layer the previous build generated is reused and merged on top of the new layers. This also means you can easily rebase your images when the base images receive updates, without having to execute the whole build again. In backends that support it, BuildKit can do this rebase action without the need to push or pull any layers between the client and the registry. BuildKit will detect this case and only create new image manifest that contains the new layers and old layers in correct order.
>
> The same behavior where BuildKit can avoid pulling down the base image can also happen when using --link and no other commands that would require access to the files in the base image. In that case BuildKit will only build the layers for the COPY commands and push them to the registry directly on top of the layers of the base image.
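For illustration, a minimal sketch of how --link might be applied to that step, reusing the stage and path names from the log above. This is only an assumption about how the generated Dockerfile could be adjusted: --link requires the dockerfile:1.4 syntax, and dev_containers_feature_content_source stands in for whatever source the CLI already provides for the feature files.

```dockerfile
# syntax=docker/dockerfile:1.4
ARG _DEV_CONTAINERS_BASE_IMAGE
FROM $_DEV_CONTAINERS_BASE_IMAGE AS dev_containers_target_stage

USER root

# With --link the copied layer is built independently of earlier layers in this
# stage, so a matching layer from a --cache-from image can be reused even when
# the preceding layers change.
COPY --link --from=dev_containers_feature_content_source . /tmp/build-features/
```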

@chrmarti (Contributor) commented Sep 1, 2022

Not sure --link can help (reading https://www.howtogeek.com/devops/how-to-accelerate-docker-builds-and-optimize-caching-with-copy-link/). I can't come up with an example showing more 'CACHED' layers.

Was the image you use as --cache-from built with the exact same CLI and features versions?

@stuartleeks (Collaborator) commented Sep 1, 2022

I quickly put together a workflow that uses the CI action across two jobs to test the image caching that Chuck mentioned last night.

Workflow is here: https://github.com/stuartleeks/actions-playground/blob/main/.github/workflows/devcontainers-ci-cache-test.yml

Run here: https://github.com/stuartleeks/actions-playground/runs/8129339831?check_suite_focus=true

Notes:

  • the image build outputs the date to a file and the runCmd echoes it out
  • the automated comparison isn't picking up the values, so I need to look at that (the runCmdOutput PR might help here in future); for now I'm manually checking the run output
  • the imageTag input isn't being picked up, so it's using latest, which means the first build job uses a cached image unless I change the Dockerfile etc. (testing with a fixed value worked, so my GH workflow was incorrect here)
  • but... the build date output is showing the same value across the two jobs and I can see CACHED in the output, i.e. caching is working in this case

@Chuxel (Member, Author) commented Sep 1, 2022

> Was the image you use as --cache-from built with the exact same CLI and features versions?

@chrmarti This is the dev container in https://github.com/chuxel/feature-library. Here's the actions workflow run for it: https://github.com/Chuxel/feature-library/runs/8125187956?check_suite_focus=true. What I am doing is testing how quick it is to just use "cacheFrom" with a pre-built image (rather than recommending people have a separate devcontainer.json with an image reference in all cases). So it's the exact same devcontainer.json and Dockerfile contents in both cases, from the same repo.

Actions is using 0.14.1; Codespaces is likely on an earlier version. However, I also see this in the latest VS Code Insiders build. Note that I am using an OCI Artifact reference here.

> but... the build date output is showing the same value across the two jobs and I can see CACHED in the output. I.e. caching is working in this case

@stuartleeks Are you also seeing the feature install being cached when you open a dev container locally in Remote - Containers for VS Code Insiders? That's what I'm specifically seeing not work. I used "latest" for the tag. Also, to be crystal clear, the Dockerfile part of the build is cached; it's specifically the feature install that isn't.

Also, what do you see if you use the docker-in-docker feature via ghcr.io specifically? That's the scenario here; it's possible this doesn't happen when you reference the feature the old way.

chrmarti self-assigned this Sep 2, 2022
@chrmarti (Contributor) commented Sep 2, 2022

Findings from my investigation using the terraform feature so far:

  • Running the CLI locally does not use the cache image from the GitHub action. (Confirming the bug.)
  • The GitHub action uses the cache image from previous workflow runs. (Same as @stuartleeks's finding.)
  • Running the CLI locally uses the cache image from previous local CLI runs. (Pushing the image to the registry and then removing all local images for the test.)
  • File checksums in $tmp/vsch-*/container-features/* are the same locally and in the GitHub workflow.

@chrmarti (Contributor) commented Sep 2, 2022

Got it: COPY includes file permissions in the checksum for the cache, and the GitHub workflows run with umask 0022 while locally the umask is usually 0002. So the files in $tmp/vsch-*/container-features/* do not have group write permission in the GitHub workflow, resulting in a cache miss locally because the checksum is not the same.

In @Chuxel's sample (where the user's Dockerfile does not copy files to the image), running the CLI locally with umask 0022 makes cacheFrom work (only tested on Linux). The larger problem is that this also triggers a cache miss when the user's Dockerfile uses COPY (or ADD) to add files from the workspace folder to the image; in that case we cannot control the permissions as easily. (Haven't tested on Windows yet, where it is unclear how permissions would ever match those of a cache image built on Linux.)

@Chuxel (Member, Author) commented Sep 2, 2022

@chrmarti Yeah, not sure, though Windows builds would be done in WSL or a VM (ditto with Mac), so it may work.

The umask is a bit interesting. Linux actually defaults to 0022; I wonder why it is 0002 here. Maybe we can change that.

If we can't change this on the host, the most recent version of BuildKit does seem to support --chmod (in addition to --chown) in COPY: moby/moby#34819
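A minimal sketch of that idea, assuming the stage names from the generated Dockerfile discussed in this thread; --chmod on COPY needs BuildKit, and the exact mode here is illustrative only.

```dockerfile
ARG _DEV_CONTAINERS_BASE_IMAGE
FROM $_DEV_CONTAINERS_BASE_IMAGE AS dev_containers_target_stage

USER root

# Fix the permissions at copy time so the layer checksum no longer depends on
# the umask (or filesystem) of the machine that produced the build context.
COPY --chmod=755 --from=dev_containers_feature_content_source . /tmp/build-features/
```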

@chrmarti (Contributor) commented Sep 5, 2022

Mac:
It actually works for me on Mac (x64). There the umask is 0022. Make sure all versions match (best to rerun the cache job before testing to make sure it is using the exact same versions of the scripts). Also run docker system prune --all to start from a clean image cache (first remove all containers, as these would hold on to image layers).

Windows:

  • When the workspace folder is on a Windows filesystem the permissions on the files of a feature end up as 755 (also for non-executables) inside the container. The cache-from layers are not used starting at COPY.
  • When the workspace folder is on a Linux filesystem in WSL and the umask is the same as in CI, caching works. (When running the devcontainers CLI in WSL as Remote-Containers does.)

Linux:
When a Linux distro adds a group per user with the same name as the user, it might also set the umask to 0002 (instead of 0022). root always has umask 0022. man pam_umask explains:

       usergroups
           If the user is not root and the username is the same as primary group
           name, the umask group bits are set to be the same as owner bits (examples:
           022 -> 002, 077 -> 007). Note that using this option explicitly is
           discouraged. pam_umask enables this functionality by default if
           /etc/login.defs enables USERGROUPS_ENAB, and the umask is not set
           explicitly in other places than /etc/login.defs (this is compatible with
           login's behaviour without PAM).

I see 0002 on an Ubuntu 22.04 server I log in to with ssh and on a local Ubuntu 20.04 VM (on Parallels). In an Ubuntu 20.04 WSL distro I see 0022. (All have USERGROUPS_ENAB enabled; maybe WSL doesn't use PAM for login.)

Potential fixes:

  • On Linux (including WSL) and Mac the devcontainer CLI could remove group write permission from all of $tmp/vsch-*/container-features/<session-folder> before we run the image build. This will ensure that the checksums (locally and in CI) are the same as with umask 0022.
  • Windows needs more investigation; not sure what can be done there except for using --chmod.
  • When the user's Dockerfile adds files from the workspace, we probably don't want to automatically change permissions on the host fs.
    • We could log a warning when --cache-from is used with a Dockerfile containing COPY or ADD and the umask is 0002 (or when on Windows).
  • --chmod applies the same permissions to all files and folders. For features we might get away with using 700. That should also work for Windows (haven't tried yet).

chrmarti added a commit that referenced this issue Sep 5, 2022
@chrmarti (Contributor) commented Sep 6, 2022

Alternatively we could put the tgz archive files in the context folder for features and untar them in the RUN step. That should have the same cache behavior (as it currently does when it works), and all images seem to have tar and gzip installed (we tar xz the VS Code server in Remote-Containers).
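A rough sketch of that alternative; the archive name is hypothetical, and the stage names again follow the generated Dockerfile discussed in this thread.

```dockerfile
ARG _DEV_CONTAINERS_BASE_IMAGE
FROM $_DEV_CONTAINERS_BASE_IMAGE AS dev_containers_target_stage

USER root

# Copy the archive instead of the expanded file tree and unpack it in a RUN
# step, so the permissions inside the image come from the archive rather than
# from the host checkout.
COPY ./devcontainer-features.tgz /tmp/build-features/
RUN cd /tmp/build-features \
    && tar -xzf devcontainer-features.tgz \
    && rm devcontainer-features.tgz
```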

@chrmarti (Contributor) commented Sep 9, 2022

Looked into copying the archives to the image using COPY --chmod, but still seeing a cache miss. Filed docker/buildx#1311 as it looks like that should actually work.

@davidwallacejackson (Contributor) commented:

I'm looking at the generated Dockerfile and build command, and wondering if an additional Docker stage could be used as a workaround here. I did something similar in my own project to get around a bug where COPY in BuildKit sometimes mangled permissions.

I'm thinking, instead of:

```dockerfile
FROM $_DEV_CONTAINERS_BASE_IMAGE AS dev_containers_target_stage

USER root

COPY --from=dev_containers_feature_content_source . /tmp/build-features/
```

why not do:

```dockerfile
FROM alpine as dev_containers_fix_permissions_stage
COPY --from=dev_containers_feature_content_source . /tmp/build-features/
RUN chmod -R 0600 /tmp/build-features

FROM $_DEV_CONTAINERS_BASE_IMAGE AS dev_containers_target_stage

USER root

COPY --from=dev_containers_fix_permissions_stage /tmp/build-features /tmp/build-features
```

This won't stop the dev_containers_fix_permissions_stage from running on every invocation, but that should be a trivial operation, and by the time you get to the actual expensive work the permissions will be normalized and cause a cache hit, right?

@Chuxel (Member, Author) commented Oct 13, 2022

@davidwallacejackson I think the problem here is that the initial copy has different privs on Windows than anywhere else, so adding another stage still results in a cache miss. Doing the --chmod as part of the copy should line up the cache so that doesn't happen, but it doesn't seem like that's working.

@chrmarti Given that these copied files are temporary, I wonder if using --chmod 755 in all cases might do the trick. That would update the Linux (and macOS) privs to mirror Windows. I'd suspect that the problem here is specifically with NTFS filesystem translation since it doesn't directly map to unix style privs in the other two OS's. We end up deleting the contents, so there's no real reason it has to be 600 privs. Reversing things to have the other two OS's mirror Windows might be the key.

@davidwallacejackson (Contributor) commented:

@Chuxel I might be unclear on how caching works here, but I'd like to learn: I assumed that if two different sets of build steps led to the exact same state, they could then use the same cache thereafter. Or to put it another way, that a cached step could follow an uncached step, as long as the uncached step produced an output that had previously been produced by a cached step. Is that not the case?

@Chuxel (Member, Author) commented Oct 13, 2022

> @Chuxel I might be unclear on how caching works here, but I'd like to learn: I assumed that if two different sets of build steps led to the exact same state, they could then use the same cache thereafter. Or to put it another way, that a cached step could follow an uncached step, as long as the uncached step produced an output that had previously been produced by a cached step. Is that not the case?

@davidwallacejackson Yeah, that should be happening here with the chmod, but it is not. The input files to the COPY on Windows have different privs than on Linux/macOS, so the caching may be assuming that the source files have changed (which invalidates the cache). Both the inputs and outputs and any dependent layers have to be the same before caching kicks in. The fact that this does not happen on macOS or Linux seems to indicate that priv differences are causing the issue... hence the bug filed against moby. It comes down to what caching considers a "change" for a copied file.

@davidwallacejackson (Contributor) commented:

> @Chuxel I might be unclear on how caching works here, but I'd like to learn: I assumed that if two different sets of build steps led to the exact same state, they could then use the same cache thereafter. Or to put it another way, that a cached step could follow an uncached step, as long as the uncached step produced an output that had previously been produced by a cached step. Is that not the case?
>
> @davidwallacejackson Yeah, that should be happening here with the chmod, but is not. The input files to the COPY on windows have different privs than on Linux/macOS, so the caching may be assuming that the source files have changed (which invalidates the cache). Both the inputs and outputs and any dependent layers have to be the same before caching kicks in. The fact this does not happen on macOS or Linux seems to indicate that priv differences are causing the issue... hence the bug to moby. It comes down to what caching considers a "change" for a copied file.

Got it. But if it is possible for different builds to "merge" on a shared state, shouldn't the intermediate step solve that problem? The way I envision it is (apologies for the ASCII -- I really need to learn Mermaid...):

Current

```
Windows permissions -> build
OSX permissions     -> build
Linux permissions   -> build
```

After adding intermediate stage

```
Windows permissions --v
OSX permissions ------+--> normalized Linux permissions (by way of intermediate stage) --> build
Linux permissions ----^
```

So the initial steps -- the COPY from the host filesystem context and the RUN to normalize the permissions -- would potentially be uncached because the host filesystem might mangle permissions. But that step is trivial, since the Features source should be really small, and the output from the intermediate stage should be the same on every platform, even if its inputs were different. So that would mean that once you hit the first step of the actual build stage, all the inputs are exactly the same, right? Or am I still missing something?

@davidwallacejackson (Contributor) commented:

Update: I was curious about this, so I ran a test -- and it looks like I was right. This proves that adding another stage to normalize permissions would allow the work of actually installing the features to be cached, right?

@Chuxel (Member, Author) commented Oct 14, 2022

@davidwallacejackson It sounds like it! @chrmarti ?

@davidwallacejackson (Contributor) commented:

Made a PR with this change and tested it manually -- looks like this does, in fact, solve the problem!

@davidwallacejackson (Contributor) commented:

@Chuxel @chrmarti -- would either of you have time to review the PR? This would make a big difference to my team's workflow if it's mergeable.

@Chuxel (Member, Author) commented Oct 20, 2022

@davidwallacejackson Sorry for the delay! @chrmarti and I have been out of office, but @jkeech mentioned that @edgonmsft, @joshspicer, or @samruddhikhandale could look at #233. @stuartleeks took a quick look as well and it looked good.

chrmarti added a commit that referenced this issue Nov 1, 2022
chrmarti added a commit that referenced this issue Nov 2, 2022
chrmarti added a commit that referenced this issue Nov 2, 2022
@Chuxel (Member, Author) commented Nov 3, 2022

This shipped for the CLI (0.23.2) and also VS Code Dev Containers yesterday. The Codespaces team will pick up the latest update soon as well.

If you are still hitting problems on Windows, I noticed that docker/for-win#12995 seems to be breaking caching all-up. The problem seems to be fetching metadata and timing out. Removing "credsStore": "desktop.exe" from ~/.docker/config.json makes the problem go away, though that doesn't work well if you are using a private image.
