PROBIC/mSWEEP

Reading plaintext alignments consumes a lot of memory (workaround inside)


Reading a plaintext pseudoalignment from Themisto consumes a lot more memory than is necessary because plaintext input disables the internal encoding of the pseudoalignments as a sparse vector.
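
For intuition about where the memory goes, here is a minimal sketch (not the telescope/mSWEEP code; the type aliases and `to_sparse` are made up for illustration) contrasting a dense per-read boolean layout with a sparse layout that stores only the aligned target indices per read:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Dense layout: one boolean per (read, target) pair. At
// 100 000 000 reads x 60 000 targets that is 6e12 cells, i.e.
// on the order of a terabyte even when bit-packed.
using DenseAlignment = std::vector<std::vector<bool>>;

// Sparse layout: per read, only the indices of the targets it
// pseudoaligned to. Memory scales with the number of alignments
// rather than with reads x targets.
using SparseAlignment = std::vector<std::vector<uint32_t>>;

// Illustrative conversion from the dense to the sparse layout.
SparseAlignment to_sparse(const DenseAlignment &dense) {
  SparseAlignment sparse(dense.size());
  for (std::size_t read = 0; read < dense.size(); ++read) {
    for (uint32_t target = 0; target < dense[read].size(); ++target) {
      if (dense[read][target]) {
        sparse[read].push_back(target);
      }
    }
  }
  return sparse;
}
```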

Workaround: use alignment-writer to compact the alignment file, then read the compacted file instead of the plaintext one.

I'm using this workaround, and memory usage went from over 1 TB to only 20 GB. However, mSWEEP is still very slow to read the pseudoalignment file: it has now spent 11 CPU hours reading it with 4 threads. The compacted alignment file is only about 250 MB, so that does not sound right.

The "reading" part also includes deserializing the pseudoalignment into memory, constructing equivalence classes, and assigning the reads to the equivalence classes so it's a bit more than just reading the file, but still this probably needs some design changes to handle large input (100 000 000 reads x 60 000 references in this example) better.

Relevant functions for this issue:

  • Deserializing the file
  • Memory use in plaintext data
  • Equivalence classes
      • telescope::Alignment::collapse converts the deserialized file into equivalence classes, where...
      • telescope::GroupedAlignment::insert constructs the equivalence classes by hashing the pseudoalignment of each read, represented as a boolean vector, and building a hash map that links each pseudoalignment to data about it (see the sketch below).
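
For readers unfamiliar with the idea, the sketch below shows the general technique: reads that pseudoaligned to exactly the same set of targets are grouped under a single hash-map key, so downstream steps only need per-class data instead of per-read data. This is a simplified illustration, not the telescope implementation; `EqClass` and `build_equivalence_classes` are hypothetical names.

```cpp
#include <cstddef>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical per-class data: the alignment pattern and how many
// reads share it. The real implementation stores more than a count.
struct EqClass {
  std::vector<bool> pattern;  // which targets the reads aligned to
  std::size_t n_reads = 0;    // number of reads with this pattern
};

// Group reads by their pseudoalignment pattern. Each read's pattern
// (a boolean vector over the targets) is serialized into a string key
// for the unordered_map; reads that aligned to exactly the same set
// of targets end up in the same equivalence class.
std::unordered_map<std::string, EqClass>
build_equivalence_classes(const std::vector<std::vector<bool>> &reads) {
  std::unordered_map<std::string, EqClass> classes;
  for (const auto &pattern : reads) {
    std::string key(pattern.size(), '0');
    for (std::size_t i = 0; i < pattern.size(); ++i) {
      if (pattern[i]) key[i] = '1';
    }
    EqClass &ec = classes[key];
    if (ec.n_reads == 0) ec.pattern = pattern;
    ++ec.n_reads;
  }
  return classes;
}
```

Since the number of distinct alignment patterns is usually far smaller than the number of reads, iterating over the classes instead of the reads is where the savings from this grouping come from.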

v2.1.0 should contain a fix for this.

I've also implemented a flag to filter out targets that have 0 alignments across all reads; this can reduce memory and CPU use significantly for sparse inputs. Filtering is toggled with --min-hits 1. Using 1 as the threshold should produce the same results for the targets that have more than 0 alignments. Values higher than 1 are also supported but will change the results; however, something like --min-hits 1000 can be hugely beneficial for very large inputs.
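
Conceptually, the filtering counts how many reads hit each target and drops the targets whose count is below the threshold before the equivalence classes are built. The sketch below illustrates that idea on the sparse layout from the earlier example; it is not the actual mSWEEP implementation, and `targets_passing_min_hits` is a hypothetical name.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Return the indices of the targets that at least `min_hits` reads
// pseudoaligned to; targets below the threshold would be dropped
// before building the equivalence classes. Illustrative only.
std::vector<uint32_t>
targets_passing_min_hits(const std::vector<std::vector<uint32_t>> &sparse_alignment,
                         std::size_t n_targets, std::size_t min_hits) {
  std::vector<std::size_t> hit_counts(n_targets, 0);
  for (const auto &aligned_targets : sparse_alignment) {
    for (uint32_t target : aligned_targets) {
      ++hit_counts[target];
    }
  }
  std::vector<uint32_t> kept;
  for (uint32_t target = 0; target < n_targets; ++target) {
    if (hit_counts[target] >= min_hits) {
      kept.push_back(target);
    }
  }
  return kept;
}
```

With --min-hits 1 only targets that no read aligned to are removed, so the results for the remaining targets should be unchanged; higher thresholds also remove targets that do have alignments, which is why they can change the results while shrinking the input considerably.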