

Work around 10GB limit by changing caching backend? #126

Open
nipunn1313 opened this issue Mar 9, 2023 · 3 comments

Comments

@nipunn1313

We've been running into the 10GB limit from GitHub Actions (we have a few different workflows that each cache large artifacts). Would rust-cache be amenable to having pluggable cache storage backends (e.g. S3)? My main motivation is to work around the 10GB limit.

Sccache has a few pluggable backends, so the interface could take some inspiration from it. There are also a few actions-cache-s3-type actions in the GitHub Marketplace to draw from.

Open to other ideas as well to help with the 10GB limit!

@Swatinem
Owner

Swatinem commented Mar 9, 2023

As you mentioned sccache, I believe that would indeed be a good alternative.
I really wouldn't want to implement alternative backends, as that would add unreasonable complexity to the code. Right now the main caching, file, and download handling is all inherited from @actions/cache essentially for free.
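For reference, a minimal sketch of what the sccache route might look like in a CI step. This is a hypothetical configuration, not part of rust-cache: the bucket name and region are placeholders, while `SCCACHE_BUCKET`, `SCCACHE_REGION`, and `RUSTC_WRAPPER` are the documented knobs for sccache's S3 backend.

```shell
# Hypothetical CI setup: route rustc through sccache backed by S3.
# Bucket name and region below are placeholders -- substitute your own.
export SCCACHE_BUCKET="my-build-cache"   # placeholder: your S3 bucket
export SCCACHE_REGION="us-east-1"        # placeholder: the bucket's region
export RUSTC_WRAPPER="sccache"           # cargo invokes sccache around rustc
# cargo build            # subsequent builds read/write the shared S3 cache
# sccache --show-stats   # inspect cache hit rates after the build
```

Because the cache lives in S3 rather than the GitHub Actions cache service, the 10GB-per-repository limit no longer applies.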

@joroshiba
Contributor

I've opened a PR (#154) which provides an option to use BuildJet, which offers up to 20GB per repo per week, and in my testing is much faster when using self-hosted runners.

YMMV, but this was the lowest-lift way to get things working for me, while still being almost entirely drop-in.

@NobodyXu
Contributor

NobodyXu commented Sep 6, 2023

I think rust-cache can work around this by compressing the cache with a separate algorithm before passing it to actions/cache, though that would also require manually decompressing it on restore.

actions/cache uses zstd with default level settings; simply using `--ultra -22` (zstd levels above 19 require `--ultra`) would make the archive smaller without significantly affecting decompression time.

Switching to xz/lzma would make it smaller still, at the price of much slower compression and decompression compared to zstd's max level, but it could still be worthwhile as long as it's faster than recompiling.
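The pre-compression idea can be sketched in shell. This is a rough illustration, assuming `zstd` is on the PATH; the `demo/` directory full of zeros stands in for the real cache contents (e.g. `target/`), and the resulting `.zst` file is what would be handed to actions/cache:

```shell
# Sketch: pre-compress the cache at zstd's max level before caching,
# then reverse it on restore. demo/ is placeholder cache data.
set -euo pipefail
mkdir -p demo
head -c 1000000 /dev/zero > demo/artifact.bin       # placeholder cache data
tar -cf cache.tar demo                              # bundle the cache directory
zstd --ultra -22 -T0 -q -f cache.tar -o cache.tar.zst  # levels >19 need --ultra
# ...cache.tar.zst would be uploaded via actions/cache...
zstd -d -q -f cache.tar.zst -o restored.tar         # decompression stays cheap
tar -xf restored.tar                                # restore the directory
cmp cache.tar restored.tar                          # round trip is lossless
```

Since actions/cache would then only see an already-compressed blob, its own zstd pass adds little, and the effective size counted against the 10GB limit shrinks.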
