High resource usage over time (memory leak?) #3446
Had to restart it again. Could only last 3 days.
s7evink/fetch-auth-events fixed it
nvm lmao, it was fine for like 6 days and now it's not again
No idea if zRAM is a setup that should be supported, but without a memory profile it will be difficult to tell what’s going on.
it is zram, yeah. edit: it's only growing :') Actually, since it's clearly growing a lot over a day, I'm down to set it up tomorrow and report the day after tomorrow :) I'll do that.
You just need a single memory profile captured when the memory usage is high. That should contain enough info and the files are small. Having profiling enabled has next to no runtime cost, so it’s fine to have it switched on for a long time.
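If the profiler is exposed through Go's standard net/http/pprof listener, a single heap snapshot can be pulled from the /debug/pprof/heap endpoint while memory usage is high. A minimal sketch, assuming the listener is on localhost:65432 (adjust to wherever the profiler actually binds):

```go
// Fetch one heap profile from a running Go service that exposes
// net/http/pprof, and save it to disk for later analysis.
package main

import (
	"io"
	"net/http"
	"os"
)

func main() {
	// /debug/pprof/heap returns a snapshot of in-use heap allocations,
	// which is what you want while the memory usage is high.
	resp, err := http.Get("http://localhost:65432/debug/pprof/heap")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, err := os.Create("heap.pprof")
	if err != nil {
		panic(err)
	}
	defer out.Close()

	if _, err := io.Copy(out, resp.Body); err != nil {
		panic(err)
	}
	// Inspect later with: go tool pprof heap.pprof
}
```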
oh nice, okay then, I'll enable the profiler tomorrow since for today I consider myself done and want the rest of the day off. I'll send out a memory profile capture in 1-3 days in this thread :)
ok, since it was just 1 environment variable I enabled it now. Thought maybe it's more complicated, but nope. I'll post the profile when the resource usage is high :)
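For context, an environment-variable-gated pprof listener in a Go service typically looks like the sketch below. The variable name PPROF_LISTEN is hypothetical; the project's actual variable and address may differ.

```go
// Start a pprof HTTP listener only when an environment variable is set.
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof/* handlers on DefaultServeMux
	"os"
)

func main() {
	if addr := os.Getenv("PPROF_LISTEN"); addr != "" {
		go func() {
			// The profiling endpoints run on a separate listener, so they
			// never touch the normal request path; overhead is negligible
			// until a profile is actually requested.
			log.Println("pprof listening on", addr)
			log.Println(http.ListenAndServe(addr, nil))
		}()
	}

	// ... the rest of the service would start here ...
	select {}
}
```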
Background information
go version: go version go1.23.0 linux/amd64
Description
Steps to reproduce
This is its resource usage only 2 days after its most recent restart, and it only creeps up over time until I restart it:
It's weird.