Describe the bug
As per the bug reported at git-lfs/git-lfs#3524, it appears that git holds the entire file in memory before writing it out to disk. If a git-lfs object is larger than the memory available on the agent, the checkout process gets OOM-killed.
Steps To Reproduce
Steps to reproduce the behavior:
1. Create a git-lfs repo with a large file (say 700MB)
2. Attempt to check out the previously created repo using an agent with 512MB of memory
Expected behavior
The repository should be successfully checked out.
Actual behaviour
The checkout fails with an OOM error, leaving the repo in an unrecoverable state because the git lock is still held.
Stack parameters (please complete the following information):
AWS Region: ap-southeast-2
Version: v5.11.0
Hi @kbrownlees, is this still an issue for you? Have you had any success with the workarounds in the issue you linked (git-lfs/git-lfs#3524)? We are interested to know whether we can improve the agent's checkout behaviour.
@triarius I worked around this by giving our uploader workers quite a lot more memory. So it is 'working', but it's not ideal, since they are quite a lot more expensive than the EC2 instances I was using before.
I suspect you can do something along the lines of exporting GIT_LFS_SKIP_SMUDGE=1 in an environment hook so that the git checkout skips the LFS files, and then running git lfs pull in a post-checkout hook. It's probably best to make these job lifecycle hooks that live on the agent (see https://buildkite.com/docs/agent/v3/hooks#hook-locations-agent-hooks), so that they are run for every job.
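As a sketch, the two agent hooks might look like this (assuming the default Linux hooks directory /etc/buildkite-agent/hooks; the path varies by install and is configurable via hooks-path):

```shell
# /etc/buildkite-agent/hooks/environment
# Leave LFS pointer files in place during `git checkout`, so git's smudge
# filter never buffers the large objects in memory.
export GIT_LFS_SKIP_SMUDGE=1
```

```shell
# /etc/buildkite-agent/hooks/post-checkout
# Fetch the real LFS objects after the checkout has completed.
git lfs pull
```

Because these are agent-level hooks rather than repository hooks, they apply to every job the agent runs without any pipeline changes.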
However, I have prepared a plugin that does the same thing: https://github.com/triarius/git-lfs-pull-buildkite-plugin. This should be easier to try out, provided you add it to all the steps, including the pipeline upload step, which lives in the pipeline definition in the web UI.
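For example, a step using the plugin might look like the following (the version tag and command are hypothetical; pin whichever release the plugin repo actually publishes):

```yaml
steps:
  - label: "build"
    command: make build
    plugins:
      - triarius/git-lfs-pull#v1.0.0: ~
```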
I've had success with the plugin on a ~1 GB file on a t3a.nano (which has 0.5 GB of RAM).