dvc fetch: Files downloaded from remote storage (AWS S3) to the DVC cache should have mtime restored #10347
Comments
DVC's caches/remotes are content-addressable. There is no 1:1 mapping between cache <> workspace or remote <> workspace. We don't always preserve timestamps even on the local cache (see #8602). In DVC, we use checksums rather than timestamps, which is superior to my mind. Unfortunately, I don't have a workaround to suggest here. The same thing would happen if you track with Git.
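For readers unfamiliar with the layout, "content-addressable" means each file is stored under a path derived from its checksum, so nothing in the cache records the original timestamp. A minimal sketch of the idea, assuming the DVC 3.x `files/md5` cache layout (older versions used two-level `.dvc/cache/<xx>/<rest>` paths); this is an illustration, not DVC's actual implementation:

```python
import hashlib
from pathlib import Path

def cache_path(workspace_file: str, cache_root: str = ".dvc/cache/files/md5") -> Path:
    """Return where a file's content would live in a content-addressable cache.

    The path depends only on the file's md5 digest, so identical files share
    one cache entry and no per-file metadata (such as mtime) is kept.
    """
    digest = hashlib.md5(Path(workspace_file).read_bytes()).hexdigest()
    return Path(cache_root) / digest[:2] / digest[2:]

# e.g. cache_path("data/image.png") -> .dvc/cache/files/md5/ab/cdef0123...
```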
But shouldn't there be a 1:1 mapping between the local DVC cache and the remote? The link from workspace to local DVC cache is done in my particular case with link type `symlink`.
With Git, I can use […]
I see two workarounds for my particular use case: […]

This preserves the mtime of objects stored in the remote (which is what I would like). After either of these two steps, […]
Following up on #8602, what we ended up doing was to run a script that takes a snapshot of mtimes before committing. We can then restore mtimes as necessary, based on the snapshot file. We use symlinks for the cache. I still think it would be good if DVC provided robust support for preserving mtimes, but this is how we are hacking around it at the moment.
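The exact command is elided above; purely as an illustration of the snapshot/restore idea (a hypothetical helper, not the commenter's actual tooling), it could look roughly like this:

```python
import json
import os
from pathlib import Path

SNAPSHOT = Path("mtimes.json")

def snapshot(root: str) -> None:
    """Record the mtime of every file under root into a JSON snapshot."""
    times = {str(p): p.stat().st_mtime for p in Path(root).rglob("*") if p.is_file()}
    SNAPSHOT.write_text(json.dumps(times, indent=2))

def restore() -> None:
    """Re-apply recorded mtimes to files that still exist."""
    for path, mtime in json.loads(SNAPSHOT.read_text()).items():
        if os.path.exists(path):
            # Note: with symlink checkouts this stamps the cache object
            # the link resolves to, which is what mtime-based tools stat().
            os.utime(path, (mtime, mtime))

snapshot("data")   # before committing results
# ... clone / dvc pull on another machine ...
restore()
```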
Thanks for sharing, @johnyaku. If I understand right, these two mtime-related issues differ in the following sense: #8602 is about the mtime assigned to outputs during pipeline execution, which may cause mtime-based tools to re-run steps whose outputs they judge outdated, while this issue relates to restoring mtimes between clones of a DVC project.
That's right, although the second issue also plays into the first. Suppose I run a workflow on one platform and track the results via DVC, then I clone to another platform and add a few more samples. Then I suffer from both problems. This is not an edge case; it happens to us all the time. If our workflow managers were content-aware (like DVC), this would be less of an issue. But DVC is still a long way short of being a fully fledged workflow manager (and it isn't clear to me that that is a worthwhile goal), so for now we are left trying to get DVC to play nicely with Snakemake. And mtime is part of that puzzle.
We want to use DVC to store media files of a static page that is built with Jupyter Book (Sphinx doc). However, `dvc fetch`/`dvc pull` sets the mtime of files downloaded from remote storage (AWS S3) into the local DVC cache to the current time instead of the last-modified time of the remote object. This then triggers a complete rebuild of the entire documentation, consisting of >1000 pages. The files are then checked out using `dvc checkout` (or `dvc pull`, but after fetch it won't re-download anything) to the local repository using link type `symlink`. That latter step works to preserve the mtime of the object in the local DVC cache; the download from remote storage to the local cache is the issue.

It would be great if DVC would set the mtime of the files in the cache to the last-modified time of the remote storage object to help avoid the rebuild issue. Otherwise we would need to use the AWS CLI or a custom script to download the remote folder to the local cache directory instead of `dvc fetch`.
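Until then, a custom script along the lines suggested above might look roughly like this. This is a hedged sketch, not DVC functionality: the bucket name, prefix, and cache path are assumptions (default DVC 3.x remote/cache layout), and it only re-stamps objects that `dvc fetch` has already downloaded:

```python
import os
from pathlib import Path

import boto3  # pip install boto3

# Hypothetical names; point these at your own remote and cache.
BUCKET = "my-dvc-remote"
PREFIX = "files/md5/"                      # assumed DVC 3.x remote layout
CACHE = Path(".dvc/cache/files/md5")       # matching local cache layout

s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")

for page in paginator.paginate(Bucket=BUCKET, Prefix=PREFIX):
    for obj in page.get("Contents", []):
        rel = obj["Key"][len(PREFIX):]     # e.g. "ab/cdef..." inside the cache
        if not rel or rel.endswith("/"):
            continue
        local = CACHE / rel
        if local.is_file():                # only touch files dvc fetch brought in
            mtime = obj["LastModified"].timestamp()
            os.utime(local, (mtime, mtime))
```

Run after `dvc fetch`; a subsequent `dvc checkout` with link type `symlink` then exposes the cache object's restored mtime in the workspace, since tools stat() through the link.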