Logging should be enabled by default #1996
It may make sense for the default configuration to even log to disk with a cap (e.g. 100MB of the most recent logs).
This is a good idea. The screen output is currently tied to the log level. Ideally we'd want to output INFO-level logs by default and write debug-level logs to disk. A lot of our users are running in docker containers or systemd scripts which save the logs, but again, by default this is only INFO-level and not sufficient to debug any issues. We should find a way to save debug-level logs to disk separately.
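The "INFO to the terminal, DEBUG to disk" idea can be sketched as one log call fanning out to two sinks with different level thresholds. This is a minimal illustration with invented names, not the `slog` API (in `slog` itself the analogous pieces are `Duplicate` and `LevelFilter` drains):

```rust
use std::io::Write;

// Hypothetical sketch (names invented for illustration, not `slog`):
// each log call fans out to two sinks with independent thresholds, so
// the terminal stays at INFO while the file captures DEBUG and above.
#[derive(Clone, Copy, PartialEq, PartialOrd)]
enum Level {
    Debug,
    Info,
    Warn,
    Error,
}

struct DualSink<T: Write, F: Write> {
    terminal: T,
    file: F,
    terminal_level: Level, // e.g. Level::Info
    file_level: Level,     // e.g. Level::Debug
}

impl<T: Write, F: Write> DualSink<T, F> {
    fn log(&mut self, level: Level, msg: &str) -> std::io::Result<()> {
        if level >= self.terminal_level {
            writeln!(self.terminal, "{msg}")?;
        }
        if level >= self.file_level {
            writeln!(self.file, "{msg}")?;
        }
        Ok(())
    }
}

fn main() {
    // `Vec<u8>` implements `Write`, standing in for stdout and a logfile.
    let mut sink = DualSink {
        terminal: Vec::new(),
        file: Vec::new(),
        terminal_level: Level::Info,
        file_level: Level::Debug,
    };
    sink.log(Level::Debug, "detail").unwrap();
    sink.log(Level::Info, "started").unwrap();
    // Only the INFO line reaches the terminal; the file gets both.
    assert_eq!(sink.terminal, b"started\n");
    assert_eq!(sink.file, b"detail\nstarted\n");
}
```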
Lighthouse-docker might still want to keep this feature disabled. At least, I prefer to configure the logging driver (and disk cap, etc.) in docker.
I've just been looking for logs to diagnose a problem but couldn't find them, and then found this issue. I think having a sensible default logging config, defined and enabled, is definitely required. It's hard to know what's happened when issues arise if I can't look back and easily see the problem.
Hi all, I've been spending some time working on this issue and am interested to hear some feedback. I started looking into how I would go about adding log rotation (cycling through a variable number of log files, where each file is limited to a certain size).
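The rotation scheme described above (a fixed number of size-capped files, oldest dropped first) can be sketched with the standard library alone. The `beacon.log` naming matches the convention described later in this thread, but the function itself is an illustrative stand-in, not Lighthouse code:

```rust
use std::fs;
use std::path::Path;

// Illustrative stand-in (not Lighthouse code): shift `beacon.log.N` to
// `beacon.log.N+1`, drop the oldest file, and free the live
// `beacon.log` name for new writes.
fn rotate(dir: &Path, base: &str, keep: usize) -> std::io::Result<()> {
    // Drop the oldest rotated file, if present.
    let oldest = dir.join(format!("{base}.{keep}"));
    if oldest.exists() {
        fs::remove_file(&oldest)?;
    }
    // Shift the remaining rotated files up by one index, newest-first.
    for n in (1..keep).rev() {
        let from = dir.join(format!("{base}.{n}"));
        if from.exists() {
            fs::rename(&from, dir.join(format!("{base}.{}", n + 1)))?;
        }
    }
    // The live file becomes `.1`, so `.1` is always the newest rotation.
    let live = dir.join(base);
    if live.exists() {
        fs::rename(&live, dir.join(format!("{base}.1")))?;
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("rotation_sketch");
    let _ = fs::remove_dir_all(&dir); // start clean
    fs::create_dir_all(&dir)?;
    fs::write(dir.join("beacon.log"), "newest")?;
    fs::write(dir.join("beacon.log.1"), "older")?;
    rotate(&dir, "beacon.log", 5)?;
    // After rotation, `.1` holds the newest contents and `.2` the older.
    assert_eq!(fs::read_to_string(dir.join("beacon.log.1"))?, "newest");
    assert_eq!(fs::read_to_string(dir.join("beacon.log.2"))?, "older");
    fs::remove_dir_all(&dir)?;
    Ok(())
}
```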
So the main options I see seem to be.
Something interesting to note about option 4 is that the authors of `slog` themselves point users towards `tracing`. It could be an interesting switch to make for the long term, as `tracing` is becoming the more popular logging library (better docs, more active development, etc.). I haven't looked into it in much depth yet. Would be interested to hear others' thoughts on the topic.
If we are ignoring the overhead of the switch, I'm in favour of `tracing`. It integrates much nicer with the traditional `log` crate ecosystem. IIRC we went with `slog` back when the project started. If someone wanted to take on the task of upgrading all our logs, I'd be for that. Otherwise I'd probably suggest option 1 and forgo the log rotation.
Sounds good then! Unless anyone has any objections, I would be happy to take it on and switch everything over to `tracing`.
I'm open to switching to `tracing`. I'm also fine to lose the odd feature along the way. Before we commit to switching, it would be good to know if we can maintain the same (or at least similar-enough) log formatting. I know we generally don't treat the logs as a stable API, but it would be nice to know how much we're going to break it before we're too far down the track. Also, tracing-flame looks really cool!
It's worth noting that we use custom `slog` formatting.
On first glance, it appears that `tracing` doesn't have an equivalent of `slog`'s `crit` log level. We may need to decide if the `crit` logs should simply become `error` logs.
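For context on the level mismatch: `tracing`'s `Level` has only `ERROR`, `WARN`, `INFO`, `DEBUG` and `TRACE`, whereas `slog` also has a `Critical` level. A minimal sketch of the down-mapping a switch would force, using stand-in enums rather than the real crate types:

```rust
// Stand-in enums for illustration; the real types are `slog::Level`
// and `tracing::Level`, which are not imported here.
#[derive(Debug, PartialEq)]
enum SlogLevel { Critical, Error, Warning, Info, Debug, Trace }

#[derive(Debug, PartialEq)]
enum TracingLevel { Error, Warn, Info, Debug, Trace }

// `tracing` has no critical level, so `crit` logs would fold into
// `error` -- the lossy step discussed in this thread.
fn map_level(level: SlogLevel) -> TracingLevel {
    match level {
        SlogLevel::Critical | SlogLevel::Error => TracingLevel::Error,
        SlogLevel::Warning => TracingLevel::Warn,
        SlogLevel::Info => TracingLevel::Info,
        SlogLevel::Debug => TracingLevel::Debug,
        SlogLevel::Trace => TracingLevel::Trace,
    }
}

fn main() {
    // The mapping is total but not invertible: `Critical` disappears.
    assert_eq!(map_level(SlogLevel::Critical), TracingLevel::Error);
    assert_eq!(map_level(SlogLevel::Error), TracingLevel::Error);
    assert_eq!(map_level(SlogLevel::Warning), TracingLevel::Warn);
}
```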
Ouch! That would affect our production setup. That seems reasonable enough to me, though. I can't imagine it would break anyone's log parsing systems; we'd just have to make it very clear to them that they'll never see another `crit`. I'd probably need to sit on this one overnight at least.
I'm erring towards saying we should stick with `slog`. That way we can retain the `crit` level and our custom formatting. Other thoughts spring to mind as well.
Some notes on this: `sloggers` looks like it can handle file logging with rotation on top of our existing `slog` setup. By contrast, switching to `tracing` would be a much larger change. If we do go with `sloggers`, we can leave the current terminal logging untouched.
Following up on my previous comment, I've decided to go with `sloggers`.
## Issue Addressed

Closes #1996

## Proposed Changes

Run a second `Logger` via `sloggers` which logs to a file in the background with:

- separate `debug-level` for background and terminal logging
- the ability to limit log size
- rotation through a customizable number of log files
- an option to compress old log files (`.gz` format)

Add the following new CLI flags:

- `--logfile-debug-level`: the debug level of the log files
- `--logfile-max-size`: the maximum size of each log file
- `--logfile-max-number`: the number of old log files to store
- `--logfile-compress`: whether to compress old log files

By default, background logging uses the `debug` log level and saves logfiles to:

- Beacon Node: `$HOME/.lighthouse/$network/beacon/logs/beacon.log`
- Validator Client: `$HOME/.lighthouse/$network/validators/logs/validator.log`

Or, when using the `--datadir` flag: `$datadir/beacon/logs/beacon.log` and `$datadir/validators/logs/validator.log`.

Once rotated, old logs are stored like so: `beacon.log.1`, `beacon.log.2`, etc.

> Note: `beacon.log.1` is always newer than `beacon.log.2`.

## Additional Info

Currently the default value of `--logfile-max-size` is 200 (MB) and `--logfile-max-number` is 5. This means the maximum storage space the logs will take up by default is 1.2GB (200MB x 5 from old log files + <200MB for the current logfile being written to). Happy to adjust these default values to whatever people think is appropriate.

It's also worth noting that when logging to a file, we lose our custom `slog` formatting. This means the logfile logs look like this:

```
Oct 27 16:02:50.305 INFO Lighthouse started, version: Lighthouse/v2.0.1-8edd9d4+, module: lighthouse:413
Oct 27 16:02:50.305 INFO Configured for network, name: prater, module: lighthouse:414
```
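The 1.2GB figure quoted above follows from a simple worst case: each of the rotated files plus the live logfile can reach `--logfile-max-size`. A sketch of the arithmetic (the function name is invented for illustration, not a Lighthouse API):

```rust
// Worst-case disk usage: `max_number` rotated files of up to
// `max_size_mb` each, plus the live logfile, itself capped at the
// same size. Hypothetical helper, for illustration only.
fn max_log_disk_mb(max_size_mb: u64, max_number: u64) -> u64 {
    max_size_mb * (max_number + 1)
}

fn main() {
    // Defaults from the PR: 200MB x 5 old files + <200MB live file.
    assert_eq!(max_log_disk_mb(200, 5), 1200); // 1.2GB
}
```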
Resolved in #2762
Logging currently has to be manually enabled using the `--logfile` command line parameter. I cannot imagine that anyone would run a beacon node without generating logs in case something needs debugging, so this should really be enabled by default in some sensible location.