iterative/dvc

dvc fetch: Files downloaded from remote storage (AWS S3) to the DVC cache should have mtime restored

aschuh-hf opened this issue · 7 comments

We want to use DVC to store media files for a static page that is built with Jupyter Book (Sphinx docs). However, dvc fetch / dvc pull sets the mtime of files downloaded from the AWS S3 remote storage into the local DVC cache to the current time instead of the last modified time of the remote object. This triggers a complete rebuild of the entire documentation, which consists of >1000 pages. The files are then checked out from the local DVC cache into the repository with dvc checkout (or dvc pull, which after a fetch won't re-download anything) using link type symlink. That latter step does preserve the mtime of the object in the local DVC cache; the problem is the download from remote storage to the local cache.
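
For context, a minimal sketch of the commands involved (assuming the media files are tracked via a media.dvc file, as in the CI example further below):

# link workspace files into the local cache via symlinks
dvc config cache.type symlink
# download objects from the S3 remote into .dvc/cache
dvc fetch media.dvc
# create the symlinks in the workspace
dvc checkout media.dvc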

It would be great if DVC set the mtime of the files in the cache to the last modified time of the remote storage object to avoid this rebuild issue. Otherwise we would need to use the AWS CLI or a custom script to download the remote folder into the local cache directory instead of dvc fetch.

DVC's caches/remotes are content-addressable. There is no 1:1 mapping between cache <> workspace or remote <> workspace.
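
For illustration, with the DVC 3.x layout an object is stored under its content hash rather than its workspace path (the hash and file name below are hypothetical):

media/logo.png -> .dvc/cache/files/md5/a1/b2c3...          # workspace symlink; the file name exists only here
.dvc/cache/files/md5/a1/b2c3...                            # local cache, addressed by MD5
s3://<bucket>/<prefix>/files/md5/a1/b2c3...                # remote, same content address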

We don't always preserve timestamps even in the local cache (see #8602). In DVC, we use checksums rather than timestamps, which to my mind is superior.

Unfortunately, I don't have a workaround to suggest here. The same thing would happen if you track with Git.

But shouldn't there be a 1:1 mapping between the local DVC cache and the remote? dvc fetch has to download each object from S3 to a local file, so that step knows the S3 bucket and key (and possibly the version ID) and could obtain the last modified timestamp from there and set the mtime of the cached file accordingly, I would expect.
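
Conceptually, I mean something like the following per downloaded object (bucket, prefix, and hash below are placeholders; GNU touch assumed):

# look up the remote object's last modified time ...
mtime=$(aws s3api head-object --bucket <bucket> --key <prefix>/files/md5/a1/b2c3... --query LastModified --output text)
# ... and apply it to the freshly downloaded cache file
touch -d "$mtime" .dvc/cache/files/md5/a1/b2c3...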

In my particular case, the link from the workspace to the local DVC cache uses link type symlink, so file attributes such as mtime do not need to be preserved between cache and workspace; only the remote-to-cache transfer matters.

With Git, I can use git-restore-mtime to set the mtime to the timestamp of the last commit. For DVC, the equivalent would be the timestamp of the last push of a new object to persistent remote storage.
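
For reference, this is the Git side of it (git-restore-mtime from the git-tools project, assuming it is installed and on PATH):

# set each tracked file's mtime to the time of the last commit that touched it
git restore-mtime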

I see two workarounds for my particular use case:

  • Option 1: Use the AWS CLI to populate the local DVC cache
aws s3 sync s3://<bucket>/<prefix>/files .dvc/cache/files

This preserves mtime of objects stored in the remote (which is what I would like dvc fetch to do).

  • Option 2: Use a GitHub Actions cache for the .dvc/cache folder
jobs:
  <name>:
    # ...
    steps:
      - name: Restore media cache
        id: cache_media
        uses: actions/cache@v4
        with:
          # the cache key changes only when .dvc/config (e.g. the remote) changes
          key: media-${{ hashFiles('.dvc/config') }}
          path: .dvc/cache
      - name: Obtain AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: ***
          aws-region: ***
      - name: Update media files
        run: |
          # drop cached objects no longer referenced by the workspace so the
          # GitHub Actions cache does not keep growing
          dvc gc --workspace --force
          # download only the missing objects and link them into the workspace
          dvc pull media.dvc

After either of these two steps, dvc pull or dvc checkout creates the symbolic links in my workspace.