What's the problem

Sometimes the caching mechanism used for kits gets in the way of development and validation workflows.
I've noticed that `cargo` will skip rebuilding a binary even when the compilation environment has changed (e.g. when a different subset of flags is set). This is misleading: I assume my new compilation environment took effect, only to learn later that a cached layer with an older binary was used in the build instead of my new binary.
To work around this, and to fully trust that the binaries are compiled with the newly configured environment, I have to delete `build/*`, `.cargo/*`, and the docker cache. That experience isn't ideal when the only thing I want to test is a one-line change to the compiler flags. Additionally, nuking the docker cache impacts other workflows like building the Bottlerocket SDK, whose builds are somewhat expensive.
I haven't been able to reproduce it, but sometimes the wrong layers are used to put together a kit. This has already impacted Bottlerocket releases (e.g. 1.24.0), and again, nuking the docker cache slows down the development workflow.
It would be nice if `twoliter` could invalidate the docker cache for only certain layers, so that we don't have to nuke the entire cache. Even nicer would be detecting which builds would use artifacts from the cache (e.g. while building a package, Twoliter could decide to use or invalidate the cache depending on the user's inputs).
There are a couple of things that could be going on here.
If the SDK changes but its version doesn't, that alone doesn't trigger a full rebuild.
This is partly a quality-of-life concession to SDK development; it would otherwise require everything to be rebuilt if the SDK changed at all. We do this for external kits currently, so it's possible and perhaps even desirable to give the SDK the same treatment. I won't fight too hard to preserve the existing behavior if it's causing problems.
The other bit - changing `CFLAGS` not affecting Rust builds - likely stems from the "hidden" cache layer that's available to packages, and used automatically for Rust. For flags that `cargo` understands, things should be fine, but if a flag changes that affects a build script and the build script doesn't emit a cargo directive for it, then previous artifacts may be reused incorrectly.
That's more of an upstream bug IMO and should ideally be fixed by adding cargo directives to the affected crate.
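For reference, here's a minimal `build.rs` sketch of that fix, assuming the affected crate's build script consumes `CFLAGS` (the specific variables are illustrative): the `rerun-if-env-changed` directive tells cargo to invalidate the script's output whenever those variables change, instead of silently reusing old artifacts.

```rust
// build.rs - a minimal sketch, assuming this crate's build script compiles
// C sources whose output depends on CFLAGS/CC (hypothetical example).
fn main() {
    // Without these directives, cargo only reruns the build script when the
    // script itself (or files it declares) changes, so a CFLAGS change can
    // leave stale artifacts in place.
    println!("cargo:rerun-if-env-changed=CFLAGS");
    println!("cargo:rerun-if-env-changed=CC");

    // ... invoke the C compiler here (e.g. via the `cc` crate), picking up
    // the environment variables declared above ...
}
```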
However, we could tie the cache layer to the SDK version (or hash) to invalidate it whenever the SDK changes.
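As a rough illustration of that idea (all names here are hypothetical, not twoliter's actual API): derive the BuildKit cache-mount id from the SDK reference, so that a new SDK version or digest naturally keys a fresh cache.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Hypothetical helper: compute a cache-mount id that changes whenever the
/// SDK reference changes. Emitting this into the generated Dockerfile's
/// `RUN --mount=type=cache,id=...` line would tie the hidden cargo cache to
/// the SDK, invalidating it on SDK updates. A real implementation would use
/// a stable hash rather than DefaultHasher, which isn't guaranteed to be
/// stable across Rust releases.
fn cache_mount_id(package: &str, sdk_ref: &str) -> String {
    let mut hasher = DefaultHasher::new();
    sdk_ref.hash(&mut hasher);
    format!("{package}-cache-{:016x}", hasher.finish())
}

fn main() {
    // Same package, different SDK digests -> different cache ids, so
    // BuildKit starts from an empty cache instead of reusing stale artifacts.
    let old = cache_mount_id("my-package", "bottlerocket-sdk@sha256:aaaa");
    let new = cache_mount_id("my-package", "bottlerocket-sdk@sha256:bbbb");
    assert_ne!(old, new);
}
```

Since BuildKit scopes cache mounts by their id, changing the id is effectively starting with an empty cache; the old cache simply ages out under normal pruning, so nothing else in the docker cache has to be nuked.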