
Smarter caching for kit OCI artifacts #386

Open
arnaldo2792 opened this issue Oct 5, 2024 · 1 comment

Comments

@arnaldo2792
Contributor

What's the problem

Sometimes the caching mechanism used for kits gets in the way of development and validation workflows.

I've noticed that cargo will skip rebuilding a binary even when the compilation environment has changed (e.g. when a different set of flags is used). This is misleading: I assume my new compilation environment took effect, only to learn later that the build actually used a cached layer containing an older binary instead of my new one.

To circumvent this problem, and fully trust that the binaries are compiled with the newly configured environment, I have to delete build/*, .cargo/*, and the docker cache. This experience isn't ideal when the only thing I want to test is a one-line change to the compiler flags. Additionally, nuking the docker cache impacts other workflows, like building the Bottlerocket SDK, since those builds are somewhat expensive.

I haven't been able to reproduce it, but sometimes the wrong layers are used to put together a kit. This has already impacted Bottlerocket releases (e.g. 1.24.0), and again the only remedy is nuking the docker cache, which slows down the development workflow.

It would be nice if twoliter could invalidate the docker cache for only certain layers, so that we don't have to nuke the entire docker cache. Even nicer would be detecting which builds would use artifacts from the cache (e.g. while building a package, Twoliter could decide to use or invalidate the cache depending on the user's inputs).
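As a rough illustration of what selective invalidation could look like (this is a hypothetical sketch, not how twoliter actually builds today): docker only re-runs layers after the point where a build arg's value changes, so threading a cache-busting value derived from the user's inputs into `docker build` would rebuild just the affected layers while leaving earlier ones (e.g. the SDK base) cached.

```rust
// Hypothetical sketch: construct the `docker build` arguments with a
// cache-busting build arg. A Dockerfile that declares `ARG CACHE_BUST`
// after its stable layers keeps everything above that line cached.
fn docker_build_args(context: &str, cache_bust: &str) -> Vec<String> {
    vec![
        "build".to_string(),
        // Only layers after `ARG CACHE_BUST` in the Dockerfile are
        // invalidated when this value changes.
        format!("--build-arg=CACHE_BUST={cache_bust}"),
        context.to_string(),
    ]
}

fn main() {
    // Deriving the value from the compiler flags means a one-line flag
    // change forces a rebuild of only the flag-dependent layers.
    let args = docker_build_args(".", "cflags=-O2 -fstack-protector");
    println!("would run: docker {}", args.join(" "));
}
```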

@bcressey
Contributor

bcressey commented Oct 5, 2024

There are a couple of things that could be going on here.

If the SDK changes, but the version doesn't change, then that doesn't trigger a full rebuild.

This is partly a quality-of-life concession to SDK development; otherwise, any change to the SDK would require everything to be rebuilt. We already do this for external kits, so it's possible, and perhaps even desirable, to give the SDK the same treatment. I won't fight too hard to preserve the existing behavior if it's causing problems.

The other bit - changing CFLAGS not affecting Rust builds - likely stems from the "hidden" cache layer that's available to packages, and used automatically for Rust. For flags that cargo understands, things should be fine, but if a flag changes that affects a build script and the build script doesn't emit a cargo directive for it, then previous artifacts may be reused incorrectly.

That's more of an upstream bug IMO and should ideally be fixed by adding cargo directives to the affected crate.
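For reference, the fix in an affected crate is a one-line directive in its build script. A minimal sketch (the specific variables are illustrative, not from any particular Bottlerocket crate):

```rust
// Hypothetical build.rs sketch: emit `rerun-if-env-changed` directives so
// cargo invalidates cached artifacts when the compilation environment
// (e.g. CFLAGS) changes, instead of silently reusing old binaries.
fn rerun_directives(vars: &[&str]) -> Vec<String> {
    vars.iter()
        .map(|v| format!("cargo:rerun-if-env-changed={v}"))
        .collect()
}

fn main() {
    // Printed on stdout, where cargo reads build-script directives.
    for directive in rerun_directives(&["CFLAGS", "CXXFLAGS", "LDFLAGS"]) {
        println!("{directive}");
    }
}
```

Without such a directive, cargo has no way to know the build script's output depends on those variables, which is exactly the stale-artifact reuse described above.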

However, we could tie the cache layer to the SDK version (or hash) to invalidate it whenever the SDK changes.
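A minimal sketch of that idea, assuming the SDK's image digest is available at build time (DefaultHasher is used only for illustration; a real implementation would want a stable content hash like SHA-256):

```rust
// Hypothetical sketch of tying the hidden cache layer to the SDK: derive
// the cache key from the SDK image digest, so any SDK change produces a
// new key and a stale cached layer can never match.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn cache_key(sdk_digest: &str, package: &str) -> String {
    let mut hasher = DefaultHasher::new();
    sdk_digest.hash(&mut hasher);
    package.hash(&mut hasher);
    format!("cache-{:016x}", hasher.finish())
}

fn main() {
    let old = cache_key("sha256:aaaa", "kernel-6.1");
    let new = cache_key("sha256:bbbb", "kernel-6.1");
    // Different SDK digests yield different keys, forcing a fresh build.
    assert_ne!(old, new);
    println!("{old} -> {new}");
}
```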
