CRG-CNAG/CalliNGS-NF

containerOverrides for AWS Batch

Dharmendra-G-1 opened this issue · 1 comment

Hello,
I am running the CalliNGS workflow on AWS Batch, launched from AWS SageMaker, with data on AWS S3. I am using the following containerOverrides:

containerOverrides = {
    'command': [
        "s3://{0}/{1}".format(workflowBucket, workflowFolderPrefix),
        "--reads", "s3://nextflowdataegenesis1/RNASeq_workflow/payload_9/raw_fastq_test1/1839-{1,2}_R{1,2}_001.fastq.gz",
        "--genome", "s3://nextflowdataegenesis1/RNASeq_workflow/payload_9/reference_test1/Sus_scrofa.Sscrofa11.1.dna.toplevel_with_PL9_full_plus_pBACN.fa",
        "--variants", "s3://ngsexperiments/processed_data/WGS_Payload_9_pigs_05_2019/1839_Huck/1839_PL9_sample_short_reads_raw.snps.indels.vcf",
        "--results", "s3://nextflowdataegenesis/RNASeq_workflow/results_payload_9/output_RNASeq_variants_payload_9/1839_Huck"
    ]
}
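
For reference, this is roughly how I pass that dict to AWS Batch with boto3 (a minimal sketch — the job name, queue, and job definition below are placeholders, not my real values):

import boto3

batch = boto3.client("batch")

# Submit the Nextflow head job. "containerOverrides" is the dict shown
# above; workflowBucket / workflowFolderPrefix point at the S3 location
# holding the workflow scripts.
response = batch.submit_job(
    jobName="callings-nf-head",          # placeholder
    jobQueue="nextflow-head-queue",      # placeholder
    jobDefinition="nextflow-head-job",   # placeholder
    containerOverrides=containerOverrides,
)
print("Submitted head job:", response["jobId"])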

I am getting the following error:

Waiting for head job to start...
Head job is running...
s3://nextflow1/scripts --reads s3://payload_9/raw_fastq_test1/1839-{1,2}_R{1,2}_001.fastq.gz --genome s3://nextflow/RNASeq/payload_9/reference_test1/Sus_scrofa.Sscrofa11.1.dna.toplevel_with_PL9_full_plus_pBACN.fa --variants s3://ngsexperiments/processed_data/WGS_Payload_9_pigs_05_2019/1839_Huck/1839_PL9_sample_short_reads_raw.snps.indels.vcf --results s3://nextflow1/RNASeq_workflow/results_payload_9/output_RNASeq_variants_payload_9/1839_Huck
Transitioning to Nextflow
nextflow run ./main.nf --reads s3://nextflow/RNASeq/payload_9/raw_fastq_test1/1839-{1,2}_R{1,2}_001.fastq.gz --genome s3://nextflow/RNASeq/payload_9/reference_test1/Sus_scrofa.Sscrofa11.1.dna.toplevel_with_PL9_full_plus_pBACN.fa --variants s3://ngsexperiments/processed_data/WGS_Payload_9_pigs_05_2019/1839_Huck/1839_PL9_sample_short_reads_raw.snps.indels.vcf --results s3://nextflow1/RNASeq_workflow/results_payload_9/output_RNASeq_variants_payload_9/1839_Huck
N E X T F L O W  ~  version 19.04.0
Launching `./main.nf` [fervent_shockley] - revision: ee02720434
C A L L I N G S  -  N F    v 1.0 
================================
genome   : s3://nextflow/RNASeq/payload_9/reference_test1/Sus_scrofa.Sscrofa11.1.dna.toplevel_with_PL9_full_plus_pBACN.fa
reads    : s3://nextflow/RNASeq/payload_9/raw_fastq_test1/1839-{1,2}_R{1,2}_001.fastq.gz
variants : s3://ngsexperiments/processed_data/WGS_Payload_9_pigs_05_2019/1839_Huck/1839_PL9_sample_short_reads_raw.snps.indels.vcf
blacklist: /opt/work/aa6904a6-b74e-4350-a1c5-e631aebfa737/1/data/blacklist.bed
results  : s3://nextflow1/RNASeq_workflow/results_payload_9/output_RNASeq_variants_payload_9/1839_Huck
gatk     : /opt/work/aa6904a6-b74e-4350-a1c5-e631aebfa737/1/GenomeAnalysisTK.jar
Uploading local `bin` scripts folder to s3://nextflow1/dharm_nextflow_logs/runs/tmp/49/0dbd091c08849fbb2c2adcdd095920/bin
executor >  awsbatch (4)
[f6/1503b6] process > 1C_prepare_star_genome_index [  0%] 0 of 1
[ff/3f1fef] process > 1B_prepare_genome_picard     [  0%] 0 of 1
[a4/92c1de] process > 1D_prepare_vcf_file          [  0%] 0 of 1
[3f/a4a3d4] process > 1A_prepare_genome_samtools   [  0%] 0 of 1
Head job FAILED
executor >  awsbatch (4)
[f6/1503b6] process > 1C_prepare_star_genome_index [  0%] 0 of 1
[ff/3f1fef] process > 1B_prepare_genome_picard     [100%] 1 of 1, failed: 1 ✘
[a4/92c1de] process > 1D_prepare_vcf_file          [  0%] 0 of 1
[3f/a4a3d4] process > 1A_prepare_genome_samtools   [  0%] 0 of 1
ERROR ~ Error executing process > '1B_prepare_genome_picard (Sus_scrofa.Sscrofa11.1.dna.toplevel_with_PL9_full_plus_pBACN)'
Caused by:
  Process `1B_prepare_genome_picard (Sus_scrofa.Sscrofa11.1.dna.toplevel_with_PL9_full_plus_pBACN)` terminated with an error exit status (137)
Command executed:
  PICARD=`which picard.jar`
  java -jar $PICARD CreateSequenceDictionary R= Sus_scrofa.Sscrofa11.1.dna.toplevel_with_PL9_full_plus_pBACN.fa O= Sus_scrofa.Sscrofa11.1.dna.toplevel_with_PL9_full_plus_pBACN.dict
Command exit status:
  137
Command output:
  (empty)
Command error:
  [Thu May 16 14:32:17 UTC 2019] picard.sam.CreateSequenceDictionary REFERENCE=Sus_scrofa.Sscrofa11.1.dna.toplevel_with_PL9_full_plus_pBACN.fa OUTPUT=Sus_scrofa.Sscrofa11.1.dna.toplevel_with_PL9_full_plus_pBACN.dict    TRUNCATE_NAMES_AT_WHITESPACE=true NUM_SEQUENCES=2147483647 VERBOSITY=INFO QUIET=false VALIDATION_STRINGENCY=STRICT COMPRESSION_LEVEL=5 MAX_RECORDS_IN_RAM=500000 CREATE_INDEX=false CREATE_MD5_FILE=false GA4GH_CLIENT_SECRETS=client_secrets.json
  [Thu May 16 14:32:17 UTC 2019] Executing as root@ip-10-68-96-187 on Linux 4.14.101-75.76.amzn1.x86_64 amd64; Java HotSpot(TM) 64-Bit Server VM 1.8.0_121-b13; Picard version: 2.9.0-1-gf5b9f50-SNAPSHOT
  .command.sh: line 3:   106 Killed                  java -jar $PICARD CreateSequenceDictionary R= Sus_scrofa.Sscrofa11.1.dna.toplevel_with_PL9_full_plus_pBACN.fa O= Sus_scrofa.Sscrofa11.1.dna.toplevel_with_PL9_full_plus_pBACN.dict
Work dir:
  s3://nextflow1/dharm_nextflow_logs/runs/ff/3f1fef1a119d9c598d6dfaddb2bfa7
Tip: you can replicate the issue by changing to the process work dir and entering the command `bash .command.run`
 -- Check '.nextflow.log' file for details
executor >  awsbatch (4)
[f6/1503b6] process > 1C_prepare_star_genome_index [100%] 1 of 1, failed: 1
[ff/3f1fef] process > 1B_prepare_genome_picard     [100%] 1 of 1, failed: 1 ✘
[a4/92c1de] process > 1D_prepare_vcf_file          [100%] 1 of 1, failed: 1
[3f/a4a3d4] process > 1A_prepare_genome_samtools   [100%] 1 of 1, failed: 1
WARN: Killing pending tasks (3)
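
In case it is relevant: exit status 137 is 128 + 9 (SIGKILL), which as far as I know usually means the container was killed — typically by the out-of-memory killer when a task exceeds the memory reserved in its Batch job definition. If that is what is happening here, I imagine something like the following in nextflow.config could give the Picard step more memory (a sketch only — the process name is taken from the log above, and 16 GB is a guess, not a tested value):

process {
    // Guess: raise memory for the task that was killed with 137.
    // With the awsbatch executor this should set the memory
    // reservation of the container for the Batch job.
    withName: '1B_prepare_genome_picard' {
        memory = 16.GB
    }
}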

If you don't mind, could you please let me know whether I am using the right commands and flags in containerOverrides, or whether I am making some other mistake in running this on AWS Batch?

Thanks,

With Regards,
Dharm

I'm not sure what containerOverrides is?