transparently streaming log to file
janwilmans opened this issue · 2 comments
Also implement a file-size limit. Since we cannot 'delete' the start of a file, one option is to use two files: when the first grows to the limit, rename it, after deleting any existing file with the destination name.
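A minimal sketch of that two-file rotation, assuming `std::filesystem` is available; the function name and the `.old` naming are just placeholders for illustration:

```cpp
#include <cstdint>
#include <filesystem>
#include <fstream>
#include <string>

namespace fs = std::filesystem;

// Hypothetical sketch: when logPath exceeds maxSize, delete any previous
// rotated file and rename the current file, so writing restarts on a
// fresh primary file. This sidesteps 'deleting' the start of a file.
void RotateIfNeeded(const fs::path& logPath, std::uintmax_t maxSize)
{
    if (fs::exists(logPath) && fs::file_size(logPath) >= maxSize)
    {
        fs::path oldPath = logPath;
        oldPath.replace_extension(".old" + logPath.extension().string());
        fs::remove(oldPath);          // delete existing file with the destination name
        fs::rename(logPath, oldPath); // keep the last full file as history
    }
}
```

At most two files exist at any time, so disk usage stays bounded at roughly twice the limit.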
Open question: do we want a new thread to do this?
It might be useful to offer this for both the LogFile and for views. On the other hand, we don't have to over-complicate the UI. Since the views can be re-created from the LogFile and disk space is not really an issue these days, we could also offer it for the LogFile only.
New proposal:
- no filesize limit
- no 'new logfile' every day
- no history depth
- no differences per logfile/ per view
Instead offer one option: File->Stream to disk
Just transparently stream messages to files, starting a new file every N lines (say 20,000), like:
logfilename_timestamp_block_1.dblog
logfilename_timestamp_block_2.dblog
logfilename_timestamp_block_3.dblog
This way memory consumption will be limited to the indexes for the views only.
The files would be written continuously until the N-line limit is reached.
If a view wants to 'scroll back', it can just open the correct file by looking it up:
block = (linenr / 20,000) + 1
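The lookup above is a one-liner; a sketch with the 20,000-lines-per-block value and the filename pattern from the proposal (both assumptions, not settled):

```cpp
#include <string>

// Lines per block, per the proposal above (an assumed default).
constexpr int linesPerBlock = 20000;

// block = (linenr / 20,000) + 1 -- blocks are numbered from 1.
int BlockForLine(int lineNr)
{
    return (lineNr / linesPerBlock) + 1;
}

// Builds logfilename_timestamp_block_N.dblog for the block holding lineNr.
std::string BlockFileName(const std::string& base, const std::string& timestamp, int lineNr)
{
    return base + "_" + timestamp + "_block_" + std::to_string(BlockForLine(lineNr)) + ".dblog";
}
```

So line 20,010 maps to block 2, matching the example below.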
Problem: sometimes debugview will need to read from and write to the same file, and sometimes it reads from one file while writing to another.
(Suppose you're viewing line 20,010, which is in logfilename_timestamp_block_2.dblog, while new incoming messages are written to logfilename_timestamp_block_3.dblog.)