Metafile integrity
Bringing the discussion of Metadata Streaming #36 to the context of the GNU Radio sink block of gr-sigmf: the final SigMF metafile could be produced in the block's destructor by concatenating two separate JSON files for the capture and annotation segments. A topic worth discussing, though, is the case where an error (e.g. a segmentation fault) occurs during the concatenation.
How should the block handle such a case? An initial idea is to implement an integrity tool that parses the intermediate, invalid JSON file and fills in the missing segments from the two stored JSON files.
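A minimal sketch of such a tool, assuming the capture and annotation segments are stored as standalone JSON files and the global object is available separately (all file names and the function signature below are hypothetical, for illustration only):

```python
import json

def rebuild_metafile(global_path, captures_path, annotations_path, out_path):
    """Hypothetical repair helper: rebuild a SigMF metafile from the
    separately stored global, captures, and annotations JSON files.

    This is a sketch of the proposed integrity tool, not gr-sigmf code.
    """
    with open(global_path) as f:
        global_seg = json.load(f)       # the "global" object
    with open(captures_path) as f:
        captures = json.load(f)         # list of capture segments
    with open(annotations_path) as f:
        annotations = json.load(f)      # list of annotation segments

    # Reassemble the top-level SigMF structure and write it atomically
    # enough for this sketch (a real tool might write to a temp file
    # and rename).
    meta = {
        "global": global_seg,
        "captures": captures,
        "annotations": annotations,
    }
    with open(out_path, "w") as f:
        json.dump(meta, f, indent=2)
```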
Any other thoughts?
@ctriant - Nice thinking about edge cases, and I think your proposal is good.
Another option would be to simply delete the corrupt file and re-create it from your two intermediate files (you would need to pull the global object out of the corrupt file).
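Pulling the global object out of a truncated file could look something like this best-effort sketch; it assumes the `"global"` object was written before the point of corruption, and everything here (function name included) is illustrative rather than an existing gr-sigmf API:

```python
import json

def salvage_global(corrupt_text):
    """Best-effort extraction of the "global" object from a truncated
    SigMF metafile. Returns the object as a dict, or None if the
    "global" key cannot be found or its value is itself incomplete.
    """
    key = '"global"'
    idx = corrupt_text.find(key)
    if idx == -1:
        return None
    try:
        # Skip ahead to the opening brace of the value, then let the
        # decoder consume exactly one complete JSON object from there;
        # any garbage after it is ignored.
        start = corrupt_text.index("{", idx + len(key))
        obj, _ = json.JSONDecoder().raw_decode(corrupt_text, start)
        return obj
    except (ValueError, json.JSONDecodeError):
        return None
```

`raw_decode` is handy here because it stops at the end of the first complete object, so trailing truncated content does not break the parse.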
@bhilburn However, an error may occur even before execution reaches the destructor and the actual concatenation. For example, the probability of hitting a problematic state grows as the recording gets larger. So I suppose the same care should be taken with the separate files as well.
Something that concerns me is the trade-off between the amount of information kept in memory vs. the frequency of disk I/Os.
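One common way to frame that trade-off is a buffer that flushes to disk every N records; the sketch below (class and parameter names are my own, not gr-sigmf's) appends annotations as JSON lines so a crash loses at most the unflushed tail:

```python
import json

class AnnotationBuffer:
    """Sketch of the memory-vs-disk trade-off: buffer annotation dicts
    in RAM and append them to a JSON-lines file every `flush_every`
    records. A larger `flush_every` means fewer disk I/Os but more
    data at risk if the process dies before the next flush.
    """

    def __init__(self, path, flush_every=100):
        self.path = path
        self.flush_every = flush_every
        self.pending = []

    def add(self, annotation):
        self.pending.append(annotation)
        if len(self.pending) >= self.flush_every:
            self.flush()

    def flush(self):
        if not self.pending:
            return
        # Append one JSON object per line; the file stays parseable
        # line-by-line even if a later write is interrupted.
        with open(self.path, "a") as f:
            for ann in self.pending:
                f.write(json.dumps(ann) + "\n")
        self.pending = []
```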
I think this is generally a good idea as well. I wonder where the tool would live? I'm thinking a Python tool in the apps folder of the OOT module. Also, since the primary task should be minimizing the probability of errors in the first place, I would set this at medium priority.