soedinglab/plass

Assembling big data

lzaramela opened this issue · 5 comments

Hey,
I have a large dataset (>600M paired-end reads) and I am trying to generate a protein catalog with Plass. I am using version 2.c7e35 on a server with 900 GB of RAM. The run terminates before completion because it exceeds the requested resources. Is it possible to tweak the parameters so that Plass allocates less memory?
Any input will be greatly appreciated.
Thanks,
Livia

Hi Livia,

Could you please post the log of the run? Plass should split up the work so it always fits into the available memory.

Best regards,
Milot

Sure... here is the log file
PLASS_West.txt

I got the following message:
Execution terminated
Exit_status=271
resources_used.cput=46:09:04
resources_used.mem=531170280kb
resources_used.vmem=832592604kb
resources_used.walltime=42:54:33

Thanks a lot! How much memory does your machine have? Normally Plass tries to split the database if it does not fit into memory.

It is a CentOS server; I can use up to 900 GB of RAM.
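
Plass is built on the MMseqs2 framework, so one option worth checking is MMseqs2's --split-memory-limit parameter. Whether this Plass version exposes it is an assumption here (confirm with plass assemble -h); if it does, a minimal sketch of capping memory so the database is split into chunks would look like this. The input paths and the 700G cap are placeholders, not values from this thread:

```
# Sketch: cap per-split memory so the database is divided into chunks that
# fit in RAM. Paths and the 700G limit are placeholders; verify that this
# Plass build actually accepts --split-memory-limit before relying on it.
plass assemble reads_1.fastq.gz reads_2.fastq.gz assembly.fas tmp \
    --split-memory-limit 700G
```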

So it seems that the extractorfs step is hanging, and that step is mostly I/O-bound. Is it possible that the tmp folder is on a slow network share?

One trick to reduce the number of sequences extracted is to increase the minimum ORF length with --min-length (default: 20).
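
Putting both suggestions together, here is a sketch of an invocation. The read paths, the local scratch location, and the cutoff of 45 are illustrative placeholders, not values recommended in this thread; only the command layout and the --min-length flag come from the Plass documentation and the comment above:

```
# Sketch: keep the tmp folder on fast node-local storage instead of a
# network share, and raise the minimum ORF length above the default of 20
# to reduce the number of extracted sequences. Paths and 45 are examples.
plass assemble reads_1.fastq.gz reads_2.fastq.gz assembly.fas /local/scratch/plass_tmp \
    --min-length 45
```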