Replies: 13 comments
-
Are there any updates?
-
I am having this exact same issue, also running RunnerDeployments in EKS. I was just about to post about this as well. @sgandon, this post was 3 months ago; did you figure out a resolution?
-
Related to this upstream issue?
-
I am also stuck on this: it works on some runs and then fails on others. I have increased the memory and CPU hoping things would change, but it still does not work at the moment. Any updates?
-
The GitHub team should pay more attention to this problem. There are many complaints about it, and they are not being resolved.
-
FYI: if it helps, we've seen this issue when nodes run out of disk (on both GitHub-hosted and self-hosted runners). The user experience is that the job is reported as "cancelled", with no helpful error pointing at the disk as the cause.
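One way to surface this failure mode earlier is a small pre-job disk check, so the run fails with an explicit error instead of an opaque cancellation. This is only a sketch, not ARC functionality; the 2 GiB threshold is an arbitrary example value, and `::error::` is the standard GitHub Actions workflow-command annotation.

```shell
# Hedged sketch: fail fast when the runner's root filesystem is nearly full,
# instead of letting the job die later with an unexplained "cancelled".
threshold_kb=$((2 * 1024 * 1024))                # example threshold: 2 GiB
avail_kb=$(df -Pk / | awk 'NR==2 {print $4}')    # available KiB on /
if [ "$avail_kb" -lt "$threshold_kb" ]; then
  echo "::error::Runner low on disk: only ${avail_kb} KiB available"
  exit 1
fi
echo "Disk OK: ${avail_kb} KiB available"
```

Run as an early step in the workflow (or in the runner image's entrypoint) so the cause shows up in the job log.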
-
We are seeing this issue as well. Could it occur if the associated pod exceeds its CPU or memory limit?
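Exceeding the memory limit would show up as the kubelet killing the container, with `OOMKilled` in the container's last state (a CPU limit only throttles, it does not kill). A quick check, assuming `kubectl` access to the cluster; the pod name below is a placeholder:

```shell
# Hypothetical pod name; replace with your actual runner pod.
RUNNER_POD="${RUNNER_POD:-example-runner-abc123}"
if command -v kubectl >/dev/null 2>&1; then
  # "Reason: OOMKilled" under "Last State" confirms a memory-limit kill.
  kubectl describe pod "$RUNNER_POD" | grep -E 'Last State|Reason|Exit Code'
else
  echo "kubectl not available; run this from a machine with cluster access"
fi
```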
-
We are also experiencing this same issue.
-
I am also experiencing this issue.
-
If you are running on spot instances, you will also get this when the nodes are preempted.
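If spot preemption turns out to be the cause, one mitigation is keeping runner pods off spot capacity. A sketch of a RunnerDeployment pinned to on-demand nodes via `nodeSelector`; the names are placeholders, and `eks.amazonaws.com/capacityType` is the label EKS managed node groups apply:

```yaml
# Hedged sketch: schedule runner pods only on on-demand EKS nodes.
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runnerdeployment      # placeholder name
spec:
  replicas: 2
  template:
    spec:
      repository: example-org/example-repo   # placeholder repository
      nodeSelector:
        eks.amazonaws.com/capacityType: ON_DEMAND
```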
-
I've faced a similar issue for a long time. When I dug into it, I found that a runner registration API call made from the pod was failing, and the failure was due to token expiry. I'm using a GitHub App, and after a few hours this token gets refreshed. Could this be the reason?
-
Why isn't GitHub paying attention to this issue? I see a lot of issues related to this, but the only answer given is "you're using an old version of the GitHub runner". This issue is really old, but it also occurs on the latest versions.
-
In silent pods where code should run,
-
Hello,
We are using the actions-runner-controller (ARC) in our EKS cluster and we are having issues with our runners deployed using RunnerDeployment. We are using GitHub cloud (not GitHub Enterprise Server).
Sometimes, and to be honest quite often, our job gets killed somehow.
The last message in the job logs is the following
The message in the job summary is a bit different
And looking at the pod logs, it seems that the pod is receiving a SIGTERM, but we don't know why.
Could anyone help us fix this issue, please?
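A SIGTERM usually means Kubernetes itself is terminating the pod (node eviction, OOM kill, cluster autoscaler scale-down, spot preemption), not the workflow. The cluster events around the time of the kill often name the reason. A diagnostic sketch, assuming `kubectl` access; the pod name and namespace below are placeholders:

```shell
# Hypothetical pod name and namespace; substitute your own.
RUNNER_POD="${RUNNER_POD:-example-runner-abc123}"
NAMESPACE="${NAMESPACE:-default}"
if command -v kubectl >/dev/null 2>&1; then
  # Events such as "Evicted" or "Killing" point at why the pod got SIGTERM.
  kubectl get events -n "$NAMESPACE" \
    --field-selector "involvedObject.name=${RUNNER_POD}" \
    --sort-by=.lastTimestamp
  # The container's last state/reason shows OOM kills and exit codes.
  kubectl describe pod -n "$NAMESPACE" "$RUNNER_POD" | grep -E 'State|Reason'
else
  echo "kubectl not available; run this against the EKS cluster"
fi
```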