lbcb-sci/graphmap2

Segmentation fault


Hi!

I was running graphmap2 (v.0.6.01) on nanopore cDNA reads and got a segmentation fault:

14, qname: 1ae9c3be-6d48-4a45-94ab.../var/slurmd-inter/job103354/slurm_script: line 20: 129981 Segmentation fault (core dumped) graphmap2 align -t 20 -K fastq -L sam -x rnaseq --extcigar -r CBAS_MASURCA-2_final.genome.scf.fasta -d /naslx/projects/uk213/di29vos/cbas/data/ONT/cbas_cDNA_polyA-guppy-3.1.5-hac.porechop_100bp_to_20kb_combined.fastq -o CBAS_MASURCA-2_final.genome.scf._ONT_cdna_graphmap2_combined_raw_100bp.sam

I was running on 20 cores with 2000 GB of RAM requested. Before that, I ran on 40 cores with 3000 GB of RAM, which gave the same error.

The SLURM header was this:

#!/bin/bash
#SBATCH --job-name=graphmap2_raw
#SBATCH --error=CBAS_R_graphmap2.log
#SBATCH --clusters=inter
#SBATCH --partition=teramem_inter
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=20
#SBATCH --mem=2000G
#SBATCH --time=48:00:00

How can I prevent segmentation faults?

thanks
Michael

The same problem here! I run out of memory really quickly! (graphmap2 v0.6.01)

@HegedusB can you tell me which dataset and reference you are using? Do you maybe have a link to download them?

Dear jmaricb,

I am trying to map 9,097,076 nanopore cDNA reads to a fungal genome (Coprinopsis cinerea AmutBmut pab1-1, NCBI:txid1132390). The reads were produced by my lab and are not publicly available yet. graphmap2 works really well on a small subset of the reads, but uses all of my memory if I try to use all of them. Is there any way to reduce the memory needs of the program? The results I have gotten so far are really promising.
Thank you for your help,

I ran graphmap2 on a server with a CentOS 7.6.1810 operating system and 192 GB of RAM.

@HegedusB Sorry for not responding sooner. I am looking into the memory issue. GraphMap previously worked with a lot less memory because it printed the alignments as they were calculated. The new version of GraphMap uses all of the generated alignments to improve their correctness, which is why it has to keep them all in memory.

Nevertheless, there may still be memory issues that I can track down and reduce. I will let you know if I find anything.
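Since the current version keeps all alignments in memory, one possible workaround (not an official graphmap2 feature, just a sketch) is to split the input FASTQ into chunks, align each chunk separately, and merge the resulting SAM files afterwards. The chunk size, file names, and `split_fastq` helper below are assumptions to adapt to your data:

```python
# Hypothetical workaround sketch: split a FASTQ into chunks so each
# graphmap2 run holds fewer alignments in memory at once.
from itertools import islice

def split_fastq(path, reads_per_chunk=1_000_000, prefix="chunk"):
    """Write consecutive chunks of a FASTQ file; return the chunk paths.

    Assumes plain (uncompressed) FASTQ with one record per 4 lines.
    """
    paths = []
    with open(path) as fh:
        while True:
            # One FASTQ record = 4 lines (header, sequence, '+', quality).
            lines = list(islice(fh, 4 * reads_per_chunk))
            if not lines:
                break
            out = f"{prefix}_{len(paths):03d}.fastq"
            with open(out, "w") as o:
                o.writelines(lines)
            paths.append(out)
    return paths

# Each chunk can then be aligned independently, e.g.:
#   graphmap2 align -x rnaseq -r ref.fasta -d chunk_000.fastq -o chunk_000.sam
# and the per-chunk SAMs concatenated afterwards, keeping only one header.
```

Note that this changes graphmap2's behavior slightly: the cross-alignment correctness refinement described above would then only see the reads within each chunk, not the whole dataset.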

@HegedusB Also, the memory usage you describe should not happen. I will look into it.