Write to SelfLog when the internal buffer size limit is hit
akilin opened this issue · 5 comments
I was playing around with dockerized Seq and noticed that it only saved a small part of the total logs sent to it, without reporting any issues.
I am not sure whether the issue is with Seq or with the Serilog sink.
A minimal repo with steps to reproduce can be found at the end of this post.
using System;
using Serilog;

Log.Logger = new LoggerConfiguration()
    .WriteTo.File($"log{DateTime.UtcNow:yyyy-MM-dd--HH-mm-ss}.txt")
    .WriteTo.Seq("http://localhost:5341")
    .CreateLogger();

// Route the sink's internal errors to stderr.
Serilog.Debugging.SelfLog.Enable(Console.Error);

int counter = 0;
while (counter < 1_000_000)
{
    // {g} groups the count into blocks of 10,000.
    Log.Information("count is {c} {g}", counter++, counter / 10_000);
}

Log.CloseAndFlush();
When executed, the code above writes logs to both the file and Seq.
Once the app has finished running, the file contains 1,000,000 rows as expected, while Seq only has 12-20% of that:
select count(*) from stream => 185357 (this number varies between runs, but is usually between 120k and 200k).
No errors are thrown/written to console.
Diagnostics info can be found inside the repo, in the file seq-diagnostics-8d79c5e01ed3e68.txt.
Steps to reproduce:
- Clone the repo: https://github.com/akilin/demo-seq-losing-logs
- Start Seq in Docker: docker-compose up -d
- Run the app: dotnet run
Thanks for posting this. The issue here is likely to be the default queue size limit in the Seq sink, configured by passing queueSizeLimit to WriteTo.Seq().
The default value is 100,000, which is a good default for most apps but not enough for the kind of synthetic case being tested here.
Passing a larger queue size, or sleeping for a millisecond or two on each iteration, would resolve it.
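As a minimal sketch of the first suggestion (the value 1_000_000 here is just an assumption sized to this synthetic test; other arguments keep their defaults):

using Serilog;

// Raise the sink's queue limit from the default 100,000 so a burst of
// ~1,000,000 events is not silently dropped.
Log.Logger = new LoggerConfiguration()
    .WriteTo.Seq("http://localhost:5341", queueSizeLimit: 1_000_000)
    .CreateLogger();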
The underlying batching mechanism doesn't currently write to SelfLog when the queue is exhausted.
This is because starting to emit a large number of logs via an infrequently-used channel when the system is under stress might itself lead to an outage. Writing a single warning event in this situation, using some synchronization to ensure it's only emitted once per application run, has been suggested in the past but is so far unimplemented.
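For context, the suggested once-per-run warning might look something like this rough sketch (hypothetical code, not the sink's actual implementation):

using System.Threading;
using Serilog.Debugging;

static class DroppedEventWarning
{
    static int _issued;

    // Hypothetically called whenever the queue rejects an event; the
    // Interlocked flag guarantees the SelfLog line is written at most
    // once per application run, avoiding a flood of output under stress.
    public static void OnQueueFull()
    {
        if (Interlocked.Exchange(ref _issued, 1) == 0)
            SelfLog.WriteLine("Queue size limit reached; further events are being dropped");
    }
}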
Hope this helps!
Nick
Hi, thank you for the quick and detailed response.
> Passing a larger queue size, or sleeping for a millisecond or two on each iteration, would resolve it.
Both of these options work, and I now see the same number of logs in Seq as in the file, so thank you.
I do think some kind of warning mechanism should be in place; the only reason I noticed that logs were missing is that the discrepancy was so large. If someone exceeds the default limit only barely, they will lose some logs without any warning and probably won't even know about it.
That said, my original issue is resolved, so feel free to close the ticket, unless you want to keep it as a reminder to add that warning 😄
Thanks; I'll leave this open for now, as you suggest 👍
Yes! I, too, faced a similar kind of issue.
We have Seq deployed in a Docker container on Azure. One of our apps, deployed on Azure App Service, uses Serilog to send its logs to Seq.
Since I have configured the sink with bufferBaseFilename, I can see that a few logs missing from Seq are present in the corresponding buffer file. The file was created as AppNameHere-20211014_008.clef, with a size of 3937 KB.
Earlier I assumed the logs from this file would eventually get pushed to Seq, but it seems the missed ones never do.
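For reference, a durable configuration along these lines might look like the sketch below (the server URL and buffer path are placeholders, not the actual values used here):

using Serilog;

// Durable mode: with bufferBaseFilename set, events are first written to
// local .clef buffer files and shipped to Seq in the background, so they
// can survive temporary outages.
Log.Logger = new LoggerConfiguration()
    .WriteTo.Seq("https://seq.example.com",
        bufferBaseFilename: @"D:\home\LogFiles\AppNameHere")
    .CreateLogger();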
So:
- Can you please share what my next course of action should be to handle such a case?
- Is it safe to enable SelfLog in production?
- How can I enable SelfLog to write into Azure logs?
The 7.x series resolves this by applying backpressure when the channel fills.
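To illustrate the idea with a generic sketch (System.Threading.Channels here stands in for the sink's internals, which this is not taken from), backpressure means a bounded channel makes producers wait instead of dropping events once it fills:

using System.Threading.Channels;
using System.Threading.Tasks;

class BackpressureDemo
{
    public static async Task Main()
    {
        // A bounded channel with FullMode.Wait: when the buffer is full,
        // WriteAsync suspends the producer until space frees up, so events
        // are delayed rather than silently lost.
        var channel = Channel.CreateBounded<string>(
            new BoundedChannelOptions(100_000) { FullMode = BoundedChannelFullMode.Wait });

        await channel.Writer.WriteAsync("log event"); // waits if the channel is full
    }
}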