Catalog data file gets broken after compaction
novoj commented
It seems that when the catalog data file exceeds the threshold size (100 MB by default) and is compacted into a new file, the data somehow gets corrupted. The previous file is not deleted but is partially overwritten, and the new file is corrupted as well. This was discovered by analyzing file remnants: the original file was much smaller than the 100 MB it should have been, and its contents were completely corrupted. The likely cause is that different tasks keep writing to different versions of the file. We need to investigate this issue and write a more complex integration test for this scenario.
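
Below is a minimal, framework-agnostic sketch of the kind of check such an integration test should make. It is not evitaDB's actual API or compaction code: the file names, the 1 MB dummy records, and the "compaction" step (a plain copy via a temp file followed by an atomic rename) are purely illustrative stand-ins, used only to show the expected invariants after compaction: the new file is complete and readable, and the old file is removed rather than partially overwritten.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.zip.CRC32;

import static java.nio.file.StandardOpenOption.*;

// Sketch of the compaction scenario: grow a data file past the threshold,
// rewrite it into a new file, and verify the result. All names are illustrative.
public class CompactionScenarioSketch {

    // 100 MB default threshold mentioned in the issue
    static final long THRESHOLD_BYTES = 100L * 1024 * 1024;

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("catalog-compaction");
        Path dataFile = dir.resolve("catalog.dat");

        // 1. Append dummy records until the file exceeds the threshold.
        byte[] record = new byte[1024 * 1024];
        try (FileChannel ch = FileChannel.open(dataFile, CREATE, WRITE, APPEND)) {
            while (ch.size() <= THRESHOLD_BYTES) {
                ch.write(ByteBuffer.wrap(record));
            }
            ch.force(true);
        }
        long checksumBefore = checksum(dataFile);

        // 2. "Compact" into a new file: write to a temp file, fsync, then move
        //    it into place atomically so no reader ever sees a half-written file.
        //    Here compaction is just a full copy; the real one rewrites live records.
        Path compacted = dir.resolve("catalog_1.dat");
        Path tmp = dir.resolve("catalog_1.dat.tmp");
        try (FileChannel in = FileChannel.open(dataFile, READ);
             FileChannel out = FileChannel.open(tmp, CREATE_NEW, WRITE)) {
            long size = in.size();
            long transferred = 0;
            while (transferred < size) {
                transferred += in.transferTo(transferred, size - transferred, out);
            }
            out.force(true);
        }
        Files.move(tmp, compacted, StandardCopyOption.ATOMIC_MOVE);
        Files.delete(dataFile);

        // 3. Verify: the new file carries the expected content and the old file
        //    is gone, not partially overwritten.
        if (checksum(compacted) != checksumBefore) {
            throw new AssertionError("compacted file is corrupted");
        }
        if (Files.exists(dataFile)) {
            throw new AssertionError("original file was not removed");
        }
        System.out.println("compaction scenario passed, size=" + Files.size(compacted));
    }

    private static long checksum(Path file) throws IOException {
        CRC32 crc = new CRC32();
        crc.update(Files.readAllBytes(file));
        return crc.getValue();
    }
}
```

The real integration test would of course drive the scenario through evitaDB sessions (inserting entities until the catalog file passes the threshold, letting the engine trigger compaction, then reopening the catalog), but the assertions should stay the same: checksum/readability of the new file and absence of a partially overwritten predecessor.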