
Infinite retries by default = memory leak by default #314

Open · dpmm99 opened this issue Nov 22, 2023 · 1 comment

dpmm99 commented Nov 22, 2023

Describe the problem to solve
The default RetryConfig in MessageTransmitterConfig has Max = -1, i.e., infinite retries. If a message can never succeed, e.g., because it exceeds the maximum UDP datagram size (~64KB), the memory for that message is never freed. I don't feel this is a very sensible default configuration.
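To illustrate the proposed behavior, here is a minimal bounded-retry sketch in Python (this is not the library's actual code; `send_with_retry` and its parameters are hypothetical):

```python
import time

def send_with_retry(send, message, max_retries=5, delay=0.0):
    """Attempt send(message) once, then retry up to max_retries times.

    Unlike retrying forever, a finite cap means a permanently failing
    message is eventually dropped and its memory can be reclaimed.
    """
    for attempt in range(max_retries + 1):  # initial try + retries
        try:
            send(message)
            return True
        except OSError:
            if attempt < max_retries:
                time.sleep(delay)
    return False  # give up; the message is no longer retained
```

With Max = -1 semantics there is no such exit path, so a message that can never be sent is held (and re-queued) indefinitely.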

Describe the enhancement proposed
Make the default number of retries limited. I'd personally go with 5.

Describe alternatives
- Have a fixed duration before expiration on top of the infinite retries.
- Don't use recursion to retry, so the memory wasted by a single infinitely failing message doesn't keep growing.
- Maybe catch SocketException in UdpClient.SendAsync.
- Check the datagram size before trying to send UDP messages.
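The size-check alternative can be sketched like this (Python for illustration only; 65,507 bytes is the maximum UDP payload over IPv4):

```python
# Maximum UDP payload over IPv4: 65,535-byte IPv4 total length
# minus 20 bytes of IPv4 header and 8 bytes of UDP header.
MAX_UDP_PAYLOAD = 65507

def fits_in_udp_datagram(payload: bytes) -> bool:
    """Pre-check a message before handing it to the UDP socket, so an
    oversized message can be rejected, truncated, or split up front
    instead of failing with EMSGSIZE on every retry."""
    return len(payload) <= MAX_UDP_PAYLOAD
```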

Additional context
I'm getting this message on Linux but not on Windows for the same messages being sent to Papertrail: `[Syslog] SendAsync failed Exception: System.Net.Sockets.SocketException (90): Message too long`. My log messages aren't that long, so this might be a separate bug; I haven't tracked down which message is causing it yet.
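For reference, a small Python sketch that reproduces this error class locally: on Linux, sending a datagram larger than UDP can carry fails at sendto() with EMSGSIZE (errno 90, "Message too long"). This confirms where the errno comes from, though it doesn't explain why ~1KB messages would hit it:

```python
import errno
import socket

def try_oversized_send(size=70000):
    """Send a datagram larger than UDP can carry and return the errno.

    The kernel rejects the oversized datagram at sendto() before
    anything reaches the network, so no listener is needed on the
    destination port.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(b"x" * size, ("127.0.0.1", 9999))
        return None  # unexpectedly succeeded
    except OSError as e:
        return e.errno
    finally:
        sock.close()
```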

dpmm99 commented Nov 27, 2023

I can't find the root cause for the SocketException. One of my messages apparently failed to send to Papertrail twice with the "Message too long" error, then succeeded on the third try (appearing later in the log than messages with earlier timestamps), so the failure isn't strictly tied to the message text, though the text may still be a factor.

That's the weird part: it only seems to happen for one instance of our service. Other instances have no problem logging >3KB messages, but their log messages are rarely more than ~1KB. This instance sometimes logs messages up to ~60KB, but most messages are >1KB, and after a fresh restart, I saw it fail with "Message too long" while trying to log a ~1KB message. All the instances of our service use the same logging configuration and start multiple threads with Task.Run(), but they process and log different kinds of data. I also couldn't reproduce the error on my Windows machine by sending the same message that seems to have caused it on the Linux host.

We're using Rfc5424 on .NET 7 with NLog 5.2.5 and NLog.Targets.Syslog 7.0.0. I updated to these from .NET 5, NLog 4.7.9, and NLog.Targets.Syslog 6.0.2 after I originally saw this issue, but the update didn't change the problematic behavior. As in a few of the other open issues in this repo, our service would also stop logging altogether after a while, but I haven't checked for that behavior since updating.

I can provide one of the messages that failed (it was base64-encoded data with no unusual special characters), but again, I couldn't make the failure happen on demand by resending the same message.
