PacificBiosciences/FALCON

[ERROR]Some tasks are recently_done but not satisfied: set([Node(0-rawreads/tan-runs/tan_004), Node(0-rawreads/tan-runs/tan_001)])

laguda-1996 opened this issue · 4 comments

Hi, I downloaded and installed FALCON via https://pb-falcon.readthedocs.io/en/latest/ and was trying to run it on the E. coli data from the tutorial, when the program stopped with:

[ERROR]Task Node(0-rawreads/tan-runs/tan_004) failed with exit-code=1
[ERROR]Task Node(0-rawreads/tan-runs/tan_001) failed with exit-code=1
[ERROR]Some tasks are recently_done but not satisfied: set([Node(0-rawreads/tan-runs/tan_004), Node(0-rawreads/tan-runs/tan_001)])
[ERROR]ready: set([])
	submitted: set([Node(0-rawreads/tan-runs/tan_002), Node(0-rawreads/tan-runs/tan_003), Node(0-rawreads/tan-runs/tan_000)])
[ERROR]Noop. We cannot kill blocked threads. Hopefully, everything will die on SIGTERM.
Traceback (most recent call last):
  File "/home/dsy/miniconda2/bin/fc_run", line 11, in <module>
    load_entry_point('falcon-kit==1.2.4', 'console_scripts', 'fc_run')()
  File "/home/dsy/miniconda2/lib/python2.7/site-packages/falcon_kit/mains/run1.py", line 726, in main
    main1(argv[0], args.config, args.logger)
  File "/home/dsy/miniconda2/lib/python2.7/site-packages/falcon_kit/mains/run1.py", line 76, in main1
    input_fofn_fn=input_fofn_fn,
  File "/home/dsy/miniconda2/lib/python2.7/site-packages/falcon_kit/mains/run1.py", line 242, in run
    dist=Dist(NPROC=4, MB=4000, job_dict=config['job.step.da']),
  File "/home/dsy/miniconda2/lib/python2.7/site-packages/falcon_kit/pype.py", line 194, in gen_parallel_tasks
    wf.refreshTargets()
  File "/home/dsy/miniconda2/lib/python2.7/site-packages/pypeflow/simple_pwatcher_bridge.py", line 278, in refreshTargets
    self._refreshTargets(updateFreq, exitOnFailure)
  File "/home/dsy/miniconda2/lib/python2.7/site-packages/pypeflow/simple_pwatcher_bridge.py", line 362, in _refreshTargets
    raise Exception(msg)
Exception: Some tasks are recently_done but not satisfied: set([Node(0-rawreads/tan-runs/tan_004), Node(0-rawreads/tan-runs/tan_001)])

I looked at the stderr file in the corresponding subfolder (0-rawreads/build):

executable=${PYPEFLOW_JOB_START_SCRIPT}
+ executable=/home/dsy/test_e.coli/0-rawreads/build/run-P8c29bfea2edc95.bash
timeout=${PYPEFLOW_JOB_START_TIMEOUT:-60} # wait 60s by default
+ timeout=60

# Wait up to timeout seconds for the executable to become "executable",
# then exec.
#timeleft = int(timeout)
while [[ ! -x "${executable}" ]]; do
    if [[ "${timeout}" == "0" ]]; then
        echo "timed out waiting for (${executable})"
        exit 77
    fi
    echo "not executable: '${executable}', waiting ${timeout}s"
    sleep 1
    timeout=$((timeout-1))
done
+ [[ ! -x /home/dsy/test_e.coli/0-rawreads/build/run-P8c29bfea2edc95.bash ]]

/bin/bash ${executable}
+ /bin/bash /home/dsy/test_e.coli/0-rawreads/build/run-P8c29bfea2edc95.bash
+ '[' '!' -d /home/dsy/test_e.coli/0-rawreads/build ']'
+ cd /home/dsy/test_e.coli/0-rawreads/build
+ eval '/bin/bash run.sh'
++ /bin/bash run.sh
export PATH=$PATH:/bin
+ export PATH=/home/dsy/Falcon_pacbio/fc_env_181031/bin/bin:/home/dsy/miniconda2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/dsy/miniconda2/bin/minimap2:/usr/bin/samtools:/bin
+ PATH=/home/dsy/Falcon_pacbio/fc_env_181031/bin/bin:/home/dsy/miniconda2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/home/dsy/miniconda2/bin/minimap2:/usr/bin/samtools:/bin
cd /home/dsy/test_e.coli/0-rawreads/build
+ cd /home/dsy/test_e.coli/0-rawreads/build
/bin/bash task.sh
+ /bin/bash task.sh
pypeflow 2.1.1
2018-11-01 10:41:13,653 - root - DEBUG - Running "/home/dsy/miniconda2/lib/python2.7/site-packages/pypeflow/do_task.py /home/dsy/test_e.coli/0-rawreads/build/task.json"
2018-11-01 10:41:13,654 - root - DEBUG - Checking existence of '/home/dsy/test_e.coli/0-rawreads/build/task.json' with timeout=30
2018-11-01 10:41:13,654 - root - DEBUG - Loading JSON from '/home/dsy/test_e.coli/0-rawreads/build/task.json'
2018-11-01 10:41:13,654 - root - DEBUG - {u'bash_template_fn': u'template.sh',
 u'inputs': {u'config': u'../../General_config.json',
             u'input_fofn': u'../../input.fofn'},
 u'outputs': {u'db': u'raw_reads.db', u'length_cutoff': u'length_cutoff'},
 u'parameters': {u'pypeflow_mb': 4000,
                 u'pypeflow_nproc': 1,
                 u'topdir': u'../..'}}
2018-11-01 10:41:13,654 - root - WARNING - CD: '/home/dsy/test_e.coli/0-rawreads/build' <- '/home/dsy/test_e.coli/0-rawreads/build'
2018-11-01 10:41:13,654 - root - DEBUG - Checking existence of u'../../input.fofn' with timeout=30
2018-11-01 10:41:13,654 - root - DEBUG - Checking existence of u'../../General_config.json' with timeout=30
2018-11-01 10:41:13,654 - root - DEBUG - Checking existence of u'template.sh' with timeout=30
2018-11-01 10:41:13,654 - root - WARNING - CD: '/home/dsy/test_e.coli/0-rawreads/build' <- '/home/dsy/test_e.coli/0-rawreads/build'
2018-11-01 10:41:13,655 - root - INFO - $('/bin/bash user_script.sh')
hostname
+ hostname
pwd
+ pwd
date
+ date
# Substitution will be similar to snakemake "shell".
python -m falcon_kit.mains.dazzler --config-fn=../../General_config.json --db-fn=raw_reads.db  build --input-fofn-fn=../../input.fofn --length-cutoff-fn=length_cutoff
+ python -m falcon_kit.mains.dazzler --config-fn=../../General_config.json --db-fn=raw_reads.db build --input-fofn-fn=../../input.fofn --length-cutoff-fn=length_cutoff
falcon-kit 1.2.4
pypeflow 2.1.1
[INFO]Log-level: INFO
[INFO]config('../../General_config.json'):
{u'falcon_sense_greedy': False, u'pa_fasta_filter_option': u'streamed-internal-median', u'seed_coverage': u'30', u'overlap_filtering_setting': u'--max-diff 100 --max-cov 100 --min-cov 20 --bestn 10 --n-core 6', u'length_cutoff_pr': u'12000', u'input_fofn': u'input.fofn', u'pa_HPCdaligner_option': u'-v -B4 -t16 -e.70 -l1000 -s1000', u'ver': u'100', u'bestn': 12, u'pa_HPCTANmask_option': u'-l500', u'input_type': u'raw', u'pa_REPmask_code': u'0,300/0,300/0,300', u'pa_DBsplit_option': u'-x500 -s50', u'fc_ovlp_to_graph_option': u' --min-len 12000', u'genome_size': u'4652500', u'avoid_text_file_busy': True, u'pa_dazcon_option': u'-j 4 -x -l 500', u'skip_checks': False, u'ovlp_daligner_option': u'-v -e.96  -l500 -s1000 -h60', u'target': u'assembly', u'falcon_sense_skip_contained': False, u'pa_daligner_option': u'-v -e.70 -l1000 -s1000 --> -s100', u'length_cutoff': u'-1', u'pa_DBdust_option': u'', u'ovlp_HPCdaligner_option': u'-v -B4 -t32 -h60 -e.96 -l500 -s1000', u'LA4Falcon_preload': False, u'dazcon': False, u'falcon_sense_option': u'--output-multi --min-idt 0.70 --min-cov 4 --max-n-read 200 --n-core 6', u'ovlp_DBsplit_option': u'-x500 -s50'}
[INFO]our subset of config:
{'daligner_opt': u'-v -e.70 -l1000 -s1000 --> -s100 -v -B4 -t16 -e.70 -l1000 -s1000', 'genome_size': 4652500, 'TANmask_opt': u'-v -e.70 -l1000 -s1000 --> -s100 -l500', 'DBdust_opt': u'', 'seed_coverage': 30.0, 'user_length_cutoff': -1, 'fasta_filter_option': u'streamed-internal-median', 'REPmask_opt': u'-v -e.70 -l1000 -s1000 --> -s100 ', 'DBsplit_opt': u'-x500 -s50'}
[INFO]Building rdb from '../../input.fofn', to write 'raw_reads.db'
[INFO]$('bash -vex build_db.sh')
#!/bin/bash
set -vex
+ set -vex
echo "PBFALCON_ERRFILE=$PBFALCON_ERRFILE"
+ echo PBFALCON_ERRFILE=
set -o pipefail
+ set -o pipefail
rm -f raw_reads.db .raw_reads.* # in case of re-run
+ rm -f raw_reads.db '.raw_reads.*'
#fc_fasta2fasta < my.input.fofn >| fc.fofn
while read fn; do  cat  ${fn} | python -m falcon_kit.mains.fasta_filter streamed-internal-median - | fasta2DB -v raw_reads -i${fn##*/}; done < my.input.fofn
+ read fn
+ cat /home/dsy/test_e.coli/ecoli.m140913_050931_42139_c100713652400000001823152404301535_s1_p0.subreads/ecoli.1.fasta
+ python -m falcon_kit.mains.fasta_filter streamed-internal-median -
+ fasta2DB -v raw_reads -iecoli.1.fasta
falcon-kit 1.2.4
pypeflow 2.1.1
INFO:root:Counting bytes from "-"
INFO:root:        #1 count=         24,885 
Adding 'ecoli.1.fasta.fasta' ...
INFO:root:        #4 count=         56,588 
INFO:root:       #10 count=        110,822 
INFO:root:       #17 count=        219,377 
INFO:root:       #33 count=        431,653 
INFO:root:       #71 count=        834,148 
INFO:root:      #141 count=      1,634,922 
INFO:root:      #284 count=      3,234,318 
INFO:root:      #570 count=      6,422,036 
INFO:root:    #1,196 count=     12,806,779 
INFO:root:    #2,513 count=     25,548,890 
INFO:root:    #5,106 count=     51,032,525 
INFO:root:   #10,156 count=    102,004,104 
INFO:root:   #20,808 count=    203,940,368 
+ read fn
+ cat /home/dsy/test_e.coli/ecoli.m140913_050931_42139_c100713652400000001823152404301535_s1_p0.subreads/ecoli.2.fasta
+ python -m falcon_kit.mains.fasta_filter streamed-internal-median -
+ fasta2DB -v raw_reads -iecoli.2.fasta
falcon-kit 1.2.4
pypeflow 2.1.1
INFO:root:Counting bytes from "-"
INFO:root:        #1 count=          8,431 
INFO:root:        #2 count=         18,720 
Adding 'ecoli.2.fasta.fasta' ...
INFO:root:        #5 count=         41,485 
INFO:root:        #8 count=         87,632 
INFO:root:       #12 count=        169,359 
INFO:root:       #25 count=        309,915 
INFO:root:       #50 count=        608,856 
INFO:root:      #110 count=      1,164,505 
INFO:root:      #225 count=      2,245,051 
INFO:root:      #466 count=      4,411,007 
INFO:root:      #933 count=      8,729,414 
INFO:root:    #1,861 count=     17,369,636 
INFO:root:    #3,611 count=     34,641,069 
INFO:root:    #7,153 count=     69,184,371 
INFO:root:   #14,295 count=    138,259,252 
INFO:root:   #28,404 count=    276,393,751 
+ read fn
+ cat /home/dsy/test_e.coli/ecoli.m140913_050931_42139_c100713652400000001823152404301535_s1_p0.subreads/ecoli.3.fasta
+ python -m falcon_kit.mains.fasta_filter streamed-internal-median -
+ fasta2DB -v raw_reads -iecoli.3.fasta
falcon-kit 1.2.4
pypeflow 2.1.1
INFO:root:Counting bytes from "-"
INFO:root:        #1 count=          2,478 
INFO:root:        #2 count=         19,929 
Adding 'ecoli.3.fasta.fasta' ...
INFO:root:        #4 count=         48,600 
INFO:root:        #8 count=         90,141 
INFO:root:       #15 count=        161,374 
INFO:root:       #26 count=        309,296 
INFO:root:       #47 count=        590,609 
INFO:root:      #102 count=      1,153,567 
INFO:root:      #215 count=      2,277,181 
INFO:root:      #443 count=      4,525,924 
INFO:root:      #892 count=      8,994,698 
INFO:root:    #1,802 count=     17,930,324 
INFO:root:    #3,610 count=     35,804,922 
INFO:root:    #7,263 count=     71,551,998 
INFO:root:   #14,522 count=    143,047,521 
INFO:root:   #28,897 count=    286,008,531 
+ read fn
#cat fc.fofn | xargs rm -f
DBdust  raw_reads
+ DBdust raw_reads

DBsplit -f -x500 -s50 raw_reads
+ DBsplit -f -x500 -s50 raw_reads
#LB=$(cat raw_reads.db | LD_LIBRARY_PATH= awk '$1 == "blocks" {print $3}')
#echo -n $LB >| db_block_count

CUTOFF=$(python2.7 -m falcon_kit.mains.calc_cutoff --coverage 30.0 4652500 <(DBstats -b1 raw_reads))
++ python2.7 -m falcon_kit.mains.calc_cutoff --coverage 30.0 4652500 /dev/fd/63
+++ DBstats -b1 raw_reads
falcon-kit 1.2.4
pypeflow 2.1.1
+ CUTOFF=22484
echo -n $CUTOFF >| length_cutoff
+ echo -n 22484
# TODO: Verify that db exists.
#ln -sf length_cutoff length_cutoff

date
+ date
2018-11-01 10:41:41,660 - root - DEBUG - Call '/bin/bash user_script.sh' returned 0.
2018-11-01 10:41:41,660 - root - WARNING - CD: '/home/dsy/test_e.coli/0-rawreads/build' -> '/home/dsy/test_e.coli/0-rawreads/build'
2018-11-01 10:41:41,660 - root - DEBUG - Checking existence of u'length_cutoff' with timeout=30
2018-11-01 10:41:41,660 - root - DEBUG - Checking existence of u'raw_reads.db' with timeout=30
2018-11-01 10:41:41,660 - root - WARNING - CD: '/home/dsy/test_e.coli/0-rawreads/build' -> '/home/dsy/test_e.coli/0-rawreads/build'

real	0m28.098s
user	0m29.034s
sys	0m2.714s
touch /home/dsy/test_e.coli/0-rawreads/build/run.sh.done
+ touch /home/dsy/test_e.coli/0-rawreads/build/run.sh.done
+ finish
+ echo 'finish code: 0'

and my .cfg file is as follows:

[General]
input_fofn = input.fofn
input_type = raw
length_cutoff = -1
genome_size = 4652500
seed_coverage = 30
length_cutoff_pr = 12000

# pa_daligner_option =   -v -e.70 -l1000 -s1000 --> -s100
pa_daligner_option =   -v -e.70 -l1000 -s1000 
ovlp_daligner_option = -v -e.96  -l500 -s1000 -h60

# pa_HPCdaligner_option = -v -B4 -M8
pa_HPCdaligner_option =  -v -B4 -t16 -e.70 -l1000 -s1000
# ovlp_HPCdaligner_option =  -v -B4 -M8
ovlp_HPCdaligner_option = -v -B4 -t32 -h60 -e.96 -l500 -s1000

pa_DBsplit_option = -x500 -s50
ovlp_DBsplit_option = -x500 -s50

falcon_sense_option = --output-multi --min-idt 0.70 --min-cov 4 --max-n-read 200 --n-core 6
overlap_filtering_setting = --max-diff 100 --max-cov 100 --min-cov 20 --bestn 10 --n-core 6

[job.defaults]
job_type = local
njobs = 6
pwatcher_type = blocking
submit = /bin/bash -c "${JOB_SCRIPT}" > "${JOB_STDOUT}" 2> "${JOB_STDERR}"
#JOB_QUEUE = myqueue
#MB = 4000
NPROC = 4

default_concurrent_jobs = 30
pa_concurrent_jobs = 30
ovlp_concurrent_jobs = 30
cns_concurrent_jobs = 30

[job.step.da]
NPROC = 4

[job.step.la]
NPROC = 4

[job.step.pda]
NPROC = 4

[job.step.pla]
NPROC = 4

[job.step.cns]
NPROC = 4 
MB = 2000

[job.step.asm]
NPROC = 4 

I have looked through related issues on GitHub and tried different parameter settings based on other people's answers, but every attempt failed with errors. Although I read the stderr file, I could not understand it. Can you give me some advice? Thank you very much.

It looks like the tandem-repeat (TAN) masking parameters in your fc_run.cfg file may be misconfigured (or not configured at all).
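For reference, those knobs live in the [General] section of fc_run.cfg. A minimal sketch, using the same values your General_config.json already reports (illustrative defaults, not tuned recommendations):

```ini
# Sketch only: TAN/REP masking settings in the [General] section of fc_run.cfg.
# These are the defaults echoed in your General_config.json, not recommendations.
[General]
# options passed through to HPC.TANmask for tandem-repeat masking
pa_HPCTANmask_option = -l500
# repeat-masking rounds as group_size,coverage triples
pa_REPmask_code = 0,300/0,300/0,300
```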

Is there a reason you needed to install via the readthedocs tarball method?

We have simplified installation as well as added new features to the bioconda version of pb-assembly.

See this link for details:
https://github.com/PacificBiosciences/pb-assembly

Look for a stderr file under 0-rawreads/tan-runs/tan_004.
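A quick way to pull up those files: each pypeflow task keeps a stderr file somewhere beneath its task directory, so a small helper like the one below (hypothetical, not part of FALCON) can print the tail of every stderr under a failed task such as 0-rawreads/tan-runs/tan_004:

```shell
#!/bin/bash
# Hypothetical helper: print the tail of every file named "stderr"
# beneath a failed task directory.
show_task_stderr() {
    local task_dir="$1"
    # pypeflow writes a "stderr" file next to each task's run scripts
    find "$task_dir" -name 'stderr' -type f 2>/dev/null | while read -r f; do
        echo "== $f =="
        tail -n 20 "$f"
    done
}

# Usage, from the top of the FALCON run directory:
show_task_stderr 0-rawreads/tan-runs/tan_004
```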

JN-0 commented

Hi guys.

I am getting a similar error at the top level, but my stderr looks different.

I installed FALCON through conda and ran fc_run D01_v01.cfg &> fc_run.log & as my test run.

My directory tree is:
.
├── 0-rawreads
│   └── build
│       ├── build_db.sh
│       ├── my.input.fofn
│       ├── pwatcher.dir -> /gpfs1/scratch/30days/uqswan29/fc/mypwatcher/jobs/Pc9f2e42080f9f1
│       ├── raw_reads.db
│       ├── run.sh
│       ├── task.json
│       ├── task.sh
│       ├── template.sh
│       ├── top.txt
│       └── user_script.sh
├── 1-preads_ovl
├── 2-asm-falcon
├── all.log
├── config.json
├── D01_v01.cfg
├── D01_v01.cfg.original
├── fc_run_arabidopsis.cfg
├── fc_run_human_.cfg
├── fc_run_human.cfg
├── fc_run.log
├── fc_run_plant.cfg
├── fc_unzip.cfg
├── General_config.json
├── input.fofn
├── mypwatcher
│   ├── exits
│   │   └── exit-Pc9f2e42080f9f1
│   ├── heartbeats
│   ├── jobs
│   │   └── Pc9f2e42080f9f1
│   │       ├── stderr
│   │       └── stdout
│   └── wrappers
│       └── run-Pc9f2e42080f9f1.bash
├── nohup.out
└── _.pbs

The stderr file follows:

+ python3 -m pwatcher.mains.fs_heartbeat --directory=/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build --heartbeat-file=/gpfs1/scratch/30days/uqswan29/fc/mypwatcher/heartbeats/heartbeat-Pc9f2e42080f9f1 --exit-file=/gpfs1/scratch/30days/uqswan29/fc/mypwatcher/exits/exit-Pc9f2e42080f9f1 --rate=10.0 /bin/bash run.sh
export PATH=$PATH:/bin
+ export PATH=/home/uqswan29/.conda/envs/conda_PacBio/bin:/sw/RCC/Anaconda/2019.07/condabin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/ganglia/bin:/opt/ganglia/sbin:/usr/java/latest/bin:/opt/pbs/bin:/opt/rocks/bin:/opt/rocks/sbin:/home/uqswan29/.local/bin:/home/uqswan29/bin:/bin
+ PATH=/home/uqswan29/.conda/envs/conda_PacBio/bin:/sw/RCC/Anaconda/2019.07/condabin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/opt/ganglia/bin:/opt/ganglia/sbin:/usr/java/latest/bin:/opt/pbs/bin:/opt/rocks/bin:/opt/rocks/sbin:/home/uqswan29/.local/bin:/home/uqswan29/bin:/bin
cd /gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build
+ cd /gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build
/bin/bash task.sh
+ /bin/bash task.sh
pypeflow 2.3.0
2020-04-13 00:58:34,828 - root - DEBUG - Running "/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/site-packages/pypeflow/do_task.py /gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build/task.json"
2020-04-13 00:58:34,829 - root - DEBUG - Checking existence of '/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build/task.json' with timeout=30
2020-04-13 00:58:34,829 - root - DEBUG - Loading JSON from '/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build/task.json'
2020-04-13 00:58:34,830 - root - DEBUG - {'bash_template_fn': 'template.sh',
 'inputs': {'config': '../../General_config.json',
            'input_fofn': '../../input.fofn'},
 'outputs': {'db': 'raw_reads.db', 'length_cutoff': 'length_cutoff'},
 'parameters': {'pypeflow_mb': 4000, 'pypeflow_nproc': 1, 'topdir': '../..'}}
2020-04-13 00:58:34,830 - root - INFO - CD: '/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build' <- '/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build'
2020-04-13 00:58:34,830 - root - DEBUG - Checking existence of '../../General_config.json' with timeout=30
2020-04-13 00:58:34,830 - root - DEBUG - Checking existence of '../../input.fofn' with timeout=30
2020-04-13 00:58:34,830 - root - DEBUG - Checking existence of 'template.sh' with timeout=30
2020-04-13 00:58:34,830 - root - INFO - CD: '/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build' <- '/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build'
2020-04-13 00:58:34,832 - root - INFO - $('/bin/bash user_script.sh')
hostname
+ hostname
pwd
+ pwd
date
+ date
# Substitution will be similar to snakemake "shell".
python3 -m falcon_kit.mains.dazzler --config-fn=../../General_config.json --db-fn=raw_reads.db  build --input-fofn-fn=../../input.fofn --length-cutoff-fn=length_cutoff
+ python3 -m falcon_kit.mains.dazzler --config-fn=../../General_config.json --db-fn=raw_reads.db build --input-fofn-fn=../../input.fofn --length-cutoff-fn=length_cutoff
falcon-kit 1.8.1 (pip thinks "falcon-kit 1.8.1")
pypeflow 2.3.0
[INFO]Log-level: INFO
[INFO]config('../../General_config.json'):
{'LA4Falcon_preload': False, 'avoid_text_file_busy': True, 'bestn': 12, 'dazcon': False, 'falcon_sense_greedy': False, 'falcon_sense_option': '--output-multi --min-idt 0.70 --min-cov 2 --max-n-read 200 --n-core 8', 'falcon_sense_skip_contained': True, 'fc_ovlp_to_graph_option': ' --min-len 5000', 'genome_size': '3000000000', 'input_fofn': 'input.fofn', 'input_type': 'raw', 'length_cutoff': '-1', 'length_cutoff_pr': '5000', 'overlap_filtering_setting': '--max-diff 100 --max-cov 100 --min-cov 2 --n-core 12', 'ovlp_DBdust_option': '', 'ovlp_DBsplit_option': '-s400', 'ovlp_HPCdaligner_option': '-v -B4 -M32 -h60 -e.96 -l500 -s1000', 'ovlp_daligner_option': '-e.96 -l500 -s1000 -h60', 'pa_DBdust_option': '', 'pa_DBsplit_option': '-a -x500 -s400', 'pa_HPCTANmask_option': '-l500', 'pa_HPCdaligner_option': '-v -B4 -M32 -e.70 -l1000 -s1000', 'pa_REPmask_code': '0,300/0,300/0,300', 'pa_daligner_option': '-e.70 -l1000 -s1000', 'pa_dazcon_option': '-j 4 -x -l 500', 'pa_fasta_filter_option': 'streamed-internal-median', 'pa_subsample_coverage': 0, 'pa_subsample_random_seed': 12345, 'pa_subsample_strategy': 'random', 'seed_coverage': '30', 'skip_checks': False, 'target': 'assembly', 'ver': '100'}
[INFO]our subset of config:
{'genome_size': 3000000000, 'seed_coverage': 30.0, 'DBdust_opt': '', 'DBsplit_opt': '-a -x500 -s400', 'daligner_opt': '-e.70 -l1000 -s1000 -v -B4 -M32 -e.70 -l1000 -s1000', 'TANmask_opt': '-e.70 -l1000 -s1000 -l500', 'REPmask_opt': '-e.70 -l1000 -s1000 ', 'user_length_cutoff': -1, 'fasta_filter_option': 'streamed-internal-median', 'subsample_coverage': 0, 'subsample_random_seed': 12345, 'subsample_strategy': 'random'}
[INFO]Building rdb from '../../input.fofn', to write 'raw_reads.db'
[INFO]$('bash -vex build_db.sh')
module () {  eval $($LMOD_CMD bash "$@") && eval $(${LMOD_SETTARG_CMD:-:} -s sh)
}
ml () {  eval $($LMOD_DIR/ml_cmd "$@")
}
# -*- shell-script -*-
#
########################################################################
# Start Lmod BASHENV
########################################################################

# Remember the XTRACE and VERBOSE states so that they are restored
# at the end of this script.
[[ ${-/x} != $- ]] && XTRACE_STATE="-x"  || XTRACE_STATE="+x"
+ [[ hvB != hvxB ]]
+ XTRACE_STATE=-x
[[ ${-/v} != $- ]] && VERBOSE_STATE="-v" || VERBOSE_STATE="+v"
+ [[ hxB != hvxB ]]
+ VERBOSE_STATE=-v
set +xv     # Turn off tracing for now.
+ set +xv
unset XTRACE_STATE  VERBOSE_STATE
+ unset XTRACE_STATE VERBOSE_STATE

########################################################################
# End Lmod BASHENV
########################################################################
#
# Local Variables:
# mode: shell-script
# indent-tabs-mode: nil
# End:
#!/bin/bash
set -vex
+ set -vex
echo "PBFALCON_ERRFILE=$PBFALCON_ERRFILE"
+ echo PBFALCON_ERRFILE=
set -o pipefail
+ set -o pipefail
rm -f raw_reads.db raw_reads.dam .raw_reads.* # in case of re-run
+ rm -f raw_reads.db raw_reads.dam '.raw_reads.*'
#fc_fasta2fasta < my.input.fofn >| fc.fofn
zmw_whitelist_option=""
+ zmw_whitelist_option=
use_subsampling=0
+ use_subsampling=0
if [[ $use_subsampling -eq 1 ]]; then
    while read fn; do  zcat  ${fn} | python3 -m falcon_kit.mains.zmw_collect; done < my.input.fofn > zmw.all.tsv
    cat zmw.all.tsv | python3 -m falcon_kit.mains.zmw_subsample --coverage "0" --random-seed "12345" --strategy "random" --genome-size "3000000000" zmw.whitelist.json
    zmw_whitelist_option="--zmw-whitelist-fn zmw.whitelist.json"
fi
+ [[ 0 -eq 1 ]]
while read fn; do  zcat  ${fn} | python3 -m falcon_kit.mains.fasta_filter ${zmw_whitelist_option} streamed-internal-median - | fasta2DB -v raw_reads -i${fn##*/}; done < my.input.fofn
+ read fn
+ zcat /gpfs1/scratch/30days/uqswan29/XNUCL_20191212_S64018_PL100147073-1_D01.subreads.fastq.gz
+ python3 -m falcon_kit.mains.fasta_filter streamed-internal-median -
+ fasta2DB -v raw_reads -iXNUCL_20191212_S64018_PL100147073-1_D01.subreads.fastq.gz
falcon-kit 1.8.1 (pip thinks "falcon-kit 1.8.1")
pypeflow 2.3.0
INFO:root:Counting bytes from "None"

gzip: Standard input is empty, terminating!
stdout: Broken pipe
build_db.sh: line 14: 32331 Exit 1                  zcat ${fn}
     32332 Killed                  | python3 -m falcon_kit.mains.fasta_filter ${zmw_whitelist_option} streamed-internal-median -
     32333 Done                    | fasta2DB -v raw_reads -i${fn##*/}
[WARNING]Call 'bash -vex build_db.sh' returned 35072.
Traceback (most recent call last):
  File "/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/site-packages/falcon_kit/mains/dazzler.py", line 1532, in <module>
    main()
  File "/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/site-packages/falcon_kit/mains/dazzler.py", line 1528, in main
    args.func(args)
  File "/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/site-packages/falcon_kit/mains/dazzler.py", line 1050, in cmd_build
    build_db(ours, args.input_fofn_fn, args.db_fn, args.length_cutoff_fn)
  File "/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/site-packages/falcon_kit/mains/dazzler.py", line 169, in build_db
    io.syscall('bash -vex {}'.format(script_fn))
  File "/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/site-packages/pypeflow/io.py", line 27, in syscall
    raise Exception(msg)
Exception: Call 'bash -vex build_db.sh' returned 35072.
2020-04-13 01:00:21,247 - root - WARNING - Call '/bin/bash user_script.sh' returned 256.
2020-04-13 01:00:21,247 - root - INFO - CD: '/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build' -> '/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build'
2020-04-13 01:00:21,247 - root - INFO - CD: '/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build' -> '/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build'
2020-04-13 01:00:21,248 - root - CRITICAL - Error in /home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/site-packages/pypeflow/do_task.py with args="{'json_fn': '/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build/task.json',\n 'timeout': 30,\n 'tmpdir': None}"
Traceback (most recent call last):
  File "/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/site-packages/pypeflow/do_task.py", line 280, in <module>
    main()
  File "/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/site-packages/pypeflow/do_task.py", line 272, in main
    run(**vars(parsed_args))
  File "/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/site-packages/pypeflow/do_task.py", line 266, in run
    run_cfg_in_tmpdir(cfg, tmpdir, '.')
  File "/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/site-packages/pypeflow/do_task.py", line 241, in run_cfg_in_tmpdir
    run_bash(bash_template, myinputs, myoutputs, parameters)
  File "/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/site-packages/pypeflow/do_task.py", line 200, in run_bash
    util.system(cmd)
  File "/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/site-packages/pypeflow/io.py", line 27, in syscall
    raise Exception(msg)
Exception: Call '/bin/bash user_script.sh' returned 256.
+++ pwd
++ echo 'FAILURE. Running top in /gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build (If you see -terminal database is inaccessible- you are using the python bin-wrapper, so you will not get diagnostic info. No big deal. This process is crashing anyway.)'
++ rm -f top.txt
++ which python
++ which top
++ env -u LD_LIBRARY_PATH top -b -n 1
++ env -u LD_LIBRARY_PATH top -b -n 1
++ pstree -apl

real	1m46.859s
user	1m52.798s
sys	0m22.293s
Namespace(command=['/bin/bash', 'run.sh'], directory='/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build', exit_file='/gpfs1/scratch/30days/uqswan29/fc/mypwatcher/exits/exit-Pc9f2e42080f9f1', heartbeat_file='/gpfs1/scratch/30days/uqswan29/fc/mypwatcher/heartbeats/heartbeat-Pc9f2e42080f9f1', rate=10.0)

cwd:'/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build'
hostname=flashlite2.local
heartbeat_fn='/gpfs1/scratch/30days/uqswan29/fc/mypwatcher/heartbeats/heartbeat-Pc9f2e42080f9f1'
exit_fn='/gpfs1/scratch/30days/uqswan29/fc/mypwatcher/exits/exit-Pc9f2e42080f9f1'
sleep_s=10.0
before setpgid: pid=32316 pgid=32304
 after setpgid: pid=32316 pgid=32316
In cwd: /gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build, Blocking call: '/bin/bash run.sh'
 returned: 256
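The "gzip: Standard input is empty" failure in the trace above can be narrowed down before re-running by sanity-checking every entry in the FOFN: does each file exist, and does it decompress to non-empty data? This helper is not part of FALCON, and the file names are only assumptions taken from the log:

```shell
#!/bin/bash
# Hypothetical sanity check: verify that every file listed in a FOFN exists
# and yields at least one byte of data (decompressing .gz entries first).
check_fofn() {
    local fofn="$1"
    local status=0
    while read -r fn; do
        [ -z "$fn" ] && continue
        if [ ! -r "$fn" ]; then
            echo "MISSING: $fn"; status=1; continue
        fi
        local bytes
        case "$fn" in
            *.gz) bytes=$(zcat "$fn" 2>/dev/null | head -c1 | wc -c) ;;
            *)    bytes=$(head -c1 "$fn" | wc -c) ;;
        esac
        if [ "$bytes" -eq 0 ]; then
            echo "EMPTY: $fn"; status=1
        else
            echo "OK: $fn"
        fi
    done < "$fofn"
    return $status
}

# Usage, from the FALCON run directory:
# check_fofn input.fofn
```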

Here is all.log:

2020-04-12 14:58:34,349 - fc_run:549 - INFO - Setup logging from file "None".
2020-04-12 14:58:34,349 - root:41 - INFO - $ lfs setstripe -c 12 /gpfs1/scratch/30days/uqswan29/fc >
2020-04-12 14:58:34,353 - falcon_kit.util.system:52 - INFO - Apparently '/gpfs1/scratch/30days/uqswan29/fc' is not in lustre filesystem, which is fine.
2020-04-12 14:58:34,353 - fc_run:50 - INFO - fc_run started with configuration D01_v01.cfg
2020-04-12 14:58:34,354 - fc_run:55 - INFO - cfg=
{
  "General": {
    "LA4Falcon_preload": false,
    "avoid_text_file_busy": true,
    "bestn": 12,
    "dazcon": false,
    "falcon_sense_greedy": false,
    "falcon_sense_option": "--output-multi --min-idt 0.70 --min-cov 2 --max-n-read 200 --n-core 8",
    "falcon_sense_skip_contained": true,
    "fc_ovlp_to_graph_option": " --min-len 5000",
    "genome_size": "3000000000",
    "input_fofn": "input.fofn",
    "input_type": "raw",
    "length_cutoff": "-1",
    "length_cutoff_pr": "5000",
    "overlap_filtering_setting": "--max-diff 100 --max-cov 100 --min-cov 2 --n-core 12",
    "ovlp_DBdust_option": "",
    "ovlp_DBsplit_option": "-s400",
    "ovlp_HPCdaligner_option": "-v -B4 -M32 -h60 -e.96 -l500 -s1000",
    "ovlp_daligner_option": "-e.96 -l500 -s1000 -h60",
    "pa_DBdust_option": "",
    "pa_DBsplit_option": "-a -x500 -s400",
    "pa_HPCTANmask_option": "-l500",
    "pa_HPCdaligner_option": "-v -B4 -M32 -e.70 -l1000 -s1000",
    "pa_REPmask_code": "0,300/0,300/0,300",
    "pa_daligner_option": "-e.70 -l1000 -s1000",
    "pa_dazcon_option": "-j 4 -x -l 500",
    "pa_fasta_filter_option": "streamed-internal-median",
    "pa_subsample_coverage": 0,
    "pa_subsample_random_seed": 12345,
    "pa_subsample_strategy": "random",
    "seed_coverage": "30",
    "skip_checks": false,
    "target": "assembly"
  },
  "job.defaults": {
    "job_type": "local",
    "njobs": "32",
    "pwatcher_type": "fs_based",
    "stop_all_jobs_on_failure": "False",
    "use_tmpdir": false
  },
  "job.step.asm": {},
  "job.step.cns": {
    "njobs": "8"
  },
  "job.step.da": {},
  "job.step.dust": {},
  "job.step.la": {},
  "job.step.pda": {},
  "job.step.pla": {}
}
2020-04-12 14:58:34,355 - pypeflow.simple_pwatcher_bridge:738 - INFO - In simple_pwatcher_bridge, pwatcher_impl=<module 'pwatcher.fs_based' from '/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/site-packages/pwatcher/fs_based.py'>
2020-04-12 14:58:34,360 - pypeflow.simple_pwatcher_bridge:749 - INFO - job_type='local', (default)job_defaults={'job_type': 'local', 'stop_all_jobs_on_failure': 'False', 'njobs': '32', 'pwatcher_type': 'fs_based', 'use_tmpdir': False}, use_tmpdir=False, squash=False, job_name_style=0
2020-04-12 14:58:34,360 - pypeflow.simple_pwatcher_bridge:377 - INFO - Setting max_jobs to 32; was None
2020-04-12 14:58:34,361 - root:23 - DEBUG - 'Serializing 9 records'
2020-04-12 14:58:34,361 - root:23 - DEBUG - '  Serialized to 0.0MB as json'
2020-04-12 14:58:34,365 - root:23 - DEBUG - "  Opened for write: './config.json'"
2020-04-12 14:58:34,365 - root:23 - DEBUG - "Deserializing from '/gpfs1/scratch/30days/uqswan29/fc/config.json'"
2020-04-12 14:58:34,368 - root:23 - DEBUG - "  Opened for read: '/gpfs1/scratch/30days/uqswan29/fc/config.json'"
2020-04-12 14:58:34,368 - root:23 - DEBUG - '  Read 0.0MB as json'
2020-04-12 14:58:34,368 - root:23 - DEBUG - '  Deserialized 9 records'
2020-04-12 14:58:34,368 - root:23 - DEBUG - 'Serializing 33 records'
2020-04-12 14:58:34,369 - root:23 - DEBUG - '  Serialized to 0.0MB as json'
2020-04-12 14:58:34,378 - root:23 - DEBUG - "  Opened for write: '/gpfs1/scratch/30days/uqswan29/fc/General_config.json'"
2020-04-12 14:58:34,378 - falcon_kit.util.system:95 - DEBUG - mkdir -p '0-rawreads'
2020-04-12 14:58:34,394 - falcon_kit.util.system:95 - DEBUG - mkdir -p '1-preads_ovl'
2020-04-12 14:58:34,396 - falcon_kit.util.system:95 - DEBUG - mkdir -p '2-asm-falcon'
2020-04-12 14:58:34,396 - pypeflow.tasks:45 - DEBUG - gen_task(python3 -m falcon_kit.mains.dazzler --config-fn={input.config} --db-fn={output.db}  build --input-fofn-fn={input.input_fofn} --length-cutoff-fn={output.length_cutoff}
# TODO: Verify that db exists.
#ln -sf {output.length_cutoff} length_cutoff

	inputs={'config': 'General_config.json', 'input_fofn': 'input.fofn'},
	outputs={'length_cutoff': '0-rawreads/build/length_cutoff', 'db': '0-rawreads/build/raw_reads.db'})
2020-04-12 14:58:34,397 - pypeflow.tasks:50 - DEBUG -  parameters={'pypeflow_mb': 4000, 'pypeflow_nproc': 1, 'topdir': '../..'}
2020-04-12 14:58:34,397 - pypeflow.tasks:52 - DEBUG -  dist.job_dict={'MB': 4000, 'NPROC': 1}
2020-04-12 14:58:34,397 - pypeflow.simple_pwatcher_bridge:634 - DEBUG - New PypeTask(wdir=None,
	inputs={'config': 'General_config.json', 'input_fofn': 'input.fofn'},
	outputs={'length_cutoff': '0-rawreads/build/length_cutoff', 'db': '0-rawreads/build/raw_reads.db'})
2020-04-12 14:58:34,398 - pypeflow.simple_pwatcher_bridge:713 - DEBUG - Created PypeTask('0-rawreads/build', '/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build', "{'db': '/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build/raw_reads.db',\n 'length_cutoff': '/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build/length_cutoff'}", "{'config': '/gpfs1/scratch/30days/uqswan29/fc/General_config.json',\n 'input_fofn': '/gpfs1/scratch/30days/uqswan29/fc/input.fofn'}")
2020-04-12 14:58:34,398 - pypeflow.simple_pwatcher_bridge:656 - DEBUG - Added PRODUCERS['0-rawreads/build'] = PypeTask('0-rawreads/build', '/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build', "{'db': '/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build/raw_reads.db',\n 'length_cutoff': '/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build/length_cutoff'}", "{'config': '/gpfs1/scratch/30days/uqswan29/fc/General_config.json',\n 'input_fofn': '/gpfs1/scratch/30days/uqswan29/fc/input.fofn'}")
2020-04-12 14:58:34,398 - pypeflow.simple_pwatcher_bridge:562 - DEBUG - No producer PypeTask for basedir '.' from path '/gpfs1/scratch/30days/uqswan29/fc/General_config.json' -- Pure input?
2020-04-12 14:58:34,398 - pypeflow.simple_pwatcher_bridge:562 - DEBUG - No producer PypeTask for basedir '.' from path '/gpfs1/scratch/30days/uqswan29/fc/input.fofn' -- Pure input?
2020-04-12 14:58:34,398 - pypeflow.simple_pwatcher_bridge:660 - DEBUG - Built PypeTask('0-rawreads/build', '/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build', "{'db': PLF('raw_reads.db', '0-rawreads/build'),\n 'length_cutoff': PLF('length_cutoff', '0-rawreads/build')}", "{'config': PLF('General_config.json', None),\n 'input_fofn': PLF('input.fofn', None)}")
2020-04-12 14:58:34,398 - pypeflow.simple_pwatcher_bridge:260 - DEBUG - New Node(0-rawreads/build) needs set()
2020-04-12 14:58:34,398 - pypeflow.tasks:45 - DEBUG - gen_task(python3 -m falcon_kit.mains.dazzler --config={input.config} --db={input.db}  tan-split --split={output.split} --bash-template={output.bash_template}

	inputs={'config': 'General_config.json', 'db': '0-rawreads/build/raw_reads.db'},
	outputs={'split': '0-rawreads/tan-split/tan-uows.json', 'bash_template': '0-rawreads/tan-split/bash_template.sh'})
2020-04-12 14:58:34,398 - pypeflow.tasks:50 - DEBUG -  parameters={'pypeflow_mb': 4000, 'pypeflow_nproc': 1, 'topdir': '../..'}
2020-04-12 14:58:34,399 - pypeflow.tasks:52 - DEBUG -  dist.job_dict={'MB': 4000, 'NPROC': 1}
2020-04-12 14:58:34,399 - pypeflow.simple_pwatcher_bridge:634 - DEBUG - New PypeTask(wdir=None,
	inputs={'config': 'General_config.json', 'db': '0-rawreads/build/raw_reads.db'},
	outputs={'split': '0-rawreads/tan-split/tan-uows.json', 'bash_template': '0-rawreads/tan-split/bash_template.sh'})
2020-04-12 14:58:34,399 - pypeflow.simple_pwatcher_bridge:713 - DEBUG - Created PypeTask('0-rawreads/tan-split', '/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/tan-split', "{'bash_template': '/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/tan-split/bash_template.sh',\n 'split': '/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/tan-split/tan-uows.json'}", "{'config': '/gpfs1/scratch/30days/uqswan29/fc/General_config.json',\n 'db': '/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build/raw_reads.db'}")
2020-04-12 14:58:34,399 - pypeflow.simple_pwatcher_bridge:656 - DEBUG - Added PRODUCERS['0-rawreads/tan-split'] = PypeTask('0-rawreads/tan-split', '/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/tan-split', "{'bash_template': '/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/tan-split/bash_template.sh',\n 'split': '/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/tan-split/tan-uows.json'}", "{'config': '/gpfs1/scratch/30days/uqswan29/fc/General_config.json',\n 'db': '/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build/raw_reads.db'}")
2020-04-12 14:58:34,399 - pypeflow.simple_pwatcher_bridge:562 - DEBUG - No producer PypeTask for basedir '.' from path '/gpfs1/scratch/30days/uqswan29/fc/General_config.json' -- Pure input?
2020-04-12 14:58:34,400 - pypeflow.simple_pwatcher_bridge:660 - DEBUG - Built PypeTask('0-rawreads/tan-split', '/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/tan-split', "{'bash_template': PLF('bash_template.sh', '0-rawreads/tan-split'),\n 'split': PLF('tan-uows.json', '0-rawreads/tan-split')}", "{'config': PLF('General_config.json', None),\n 'db': PLF('raw_reads.db', '0-rawreads/build')}")
2020-04-12 14:58:34,400 - pypeflow.simple_pwatcher_bridge:260 - DEBUG - New Node(0-rawreads/tan-split) needs {Node(0-rawreads/build)}
2020-04-12 14:58:34,400 - pypeflow.simple_pwatcher_bridge:305 - INFO - Num unsatisfied: 2, graph: 2
2020-04-12 14:58:34,400 - pypeflow.simple_pwatcher_bridge:315 - INFO - About to submit: Node(0-rawreads/build)
2020-04-12 14:58:34,400 - pypeflow.simple_pwatcher_bridge:109 - DEBUG - enque nodes:
{Node(0-rawreads/build)}
2020-04-12 14:58:34,401 - root:15 - DEBUG - mkdir -p "/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build"
2020-04-12 14:58:34,406 - pypeflow.simple_pwatcher_bridge:132 - DEBUG - In rundir='/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build', job_dict={'job_type': 'local', 'stop_all_jobs_on_failure': 'False', 'njobs': '32', 'pwatcher_type': 'fs_based', 'use_tmpdir': False, 'NPROC': 1, 'MB': 4000}
2020-04-12 14:58:34,406 - pwatcher.fs_based:663 - DEBUG - run(jobids=<1>, job_type=local, job_defaults_dict={'job_type': 'local', 'stop_all_jobs_on_failure': 'False', 'njobs': '32', 'pwatcher_type': 'fs_based', 'use_tmpdir': False})
2020-04-12 14:58:34,406 - pwatcher.fs_based:447 - DEBUG - jobs:
{'Pc9f2e42080f9f1': Job(jobid='Pc9f2e42080f9f1', cmd='/bin/bash run.sh', rundir='/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build', options={'job_type': 'local', 'stop_all_jobs_on_failure': 'False', 'njobs': '32', 'pwatcher_type': 'fs_based', 'use_tmpdir': False, 'NPROC': 1, 'MB': 4000})}
2020-04-12 14:58:34,407 - pwatcher.fs_based:213 - DEBUG - Wrapped "python3 -m pwatcher.mains.fs_heartbeat --directory=/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build --heartbeat-file=/gpfs1/scratch/30days/uqswan29/fc/mypwatcher/heartbeats/heartbeat-Pc9f2e42080f9f1 --exit-file=/gpfs1/scratch/30days/uqswan29/fc/mypwatcher/exits/exit-Pc9f2e42080f9f1 --rate=10.0 /bin/bash run.sh || echo 99 >| /gpfs1/scratch/30days/uqswan29/fc/mypwatcher/exits/exit-Pc9f2e42080f9f1"
2020-04-12 14:58:34,407 - pwatcher.fs_based:219 - DEBUG - Writing wrapper "/gpfs1/scratch/30days/uqswan29/fc/mypwatcher/wrappers/run-Pc9f2e42080f9f1.bash"
2020-04-12 14:58:34,407 - pwatcher.fs_based:457 - INFO -  starting job Job(jobid='Pc9f2e42080f9f1', cmd='/bin/bash run.sh', rundir='/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build', options={'job_type': 'local', 'stop_all_jobs_on_failure': 'False', 'njobs': '32', 'pwatcher_type': 'fs_based', 'use_tmpdir': False, 'NPROC': 1, 'MB': 4000}) w/ job_type=LOCAL
2020-04-12 14:58:34,408 - pwatcher.fs_based:61 - DEBUG - CD: '/gpfs1/scratch/30days/uqswan29/fc/mypwatcher/jobs/Pc9f2e42080f9f1' <- '/gpfs1/scratch/30days/uqswan29/fc'
2020-04-12 14:58:34,414 - pwatcher.fs_based:233 - INFO - dir: '/gpfs1/scratch/30days/uqswan29/fc/mypwatcher/jobs/Pc9f2e42080f9f1'
CALL:
 '/bin/bash /gpfs1/scratch/30days/uqswan29/fc/mypwatcher/wrappers/run-Pc9f2e42080f9f1.bash 1>|stdout 2>|stderr & '
2020-04-12 14:58:34,416 - pwatcher.fs_based:236 - INFO - pid=32304 pgid=32304 sub-pid=32314
2020-04-12 14:58:34,416 - pwatcher.fs_based:66 - DEBUG - CD: '/gpfs1/scratch/30days/uqswan29/fc/mypwatcher/jobs/Pc9f2e42080f9f1' -> '/gpfs1/scratch/30days/uqswan29/fc'
2020-04-12 14:58:34,416 - pwatcher.fs_based:122 - INFO - Submitted backgroundjob=MetaJobLocal(MetaJob(job=Job(jobid='Pc9f2e42080f9f1', cmd='/bin/bash run.sh', rundir='/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build', options={'job_type': 'local', 'stop_all_jobs_on_failure': 'False', 'njobs': '32', 'pwatcher_type': 'fs_based', 'use_tmpdir': False, 'NPROC': 1, 'MB': 4000}), lang_exe='/bin/bash'))
2020-04-12 14:58:34,417 - pypeflow.simple_pwatcher_bridge:153 - DEBUG - Result of watcher.run()={'submitted': ['Pc9f2e42080f9f1']}
2020-04-12 14:58:34,417 - pypeflow.simple_pwatcher_bridge:325 - DEBUG - N in queue: 1 (max_jobs=32)
2020-04-12 14:58:34,417 - pwatcher.fs_based:667 - DEBUG - query(which='list', jobids=<1>)
2020-04-12 14:58:34,417 - pwatcher.fs_based:556 - DEBUG - Status RUNNING for heartbeat:heartbeat-Pc9f2e42080f9f1
2020-04-12 14:58:34,417 - pypeflow.simple_pwatcher_bridge:335 - INFO - (slept for another 0.0s -- another 1 loop iterations)
2020-04-12 14:58:34,517 - pypeflow.simple_pwatcher_bridge:325 - DEBUG - N in queue: 1 (max_jobs=32)
2020-04-12 14:58:34,517 - pwatcher.fs_based:667 - DEBUG - query(which='list', jobids=<1>)
2020-04-12 14:58:34,517 - pwatcher.fs_based:556 - DEBUG - Status RUNNING for heartbeat:heartbeat-Pc9f2e42080f9f1
[... ~140 near-identical polling lines trimmed for brevity: the same "N in queue" / "query(which='list')" / "Status RUNNING for heartbeat:heartbeat-Pc9f2e42080f9f1" triplet repeats, interleaved with "(slept for another ...)" messages, from 14:58:34 through 15:00:18 ...]
2020-04-12 15:00:22,682 - pypeflow.simple_pwatcher_bridge:325 - DEBUG - N in queue: 1 (max_jobs=32)
2020-04-12 15:00:22,682 - pwatcher.fs_based:667 - DEBUG - query(which='list', jobids=<1>)
2020-04-12 15:00:22,682 - pwatcher.fs_based:556 - DEBUG - Status EXIT 256 for heartbeat:heartbeat-Pc9f2e42080f9f1
2020-04-12 15:00:22,682 - pypeflow.simple_pwatcher_bridge:94 - ERROR - Task Node(0-rawreads/build) failed with exit-code=256
2020-04-12 15:00:22,683 - pypeflow.simple_pwatcher_bridge:343 - DEBUG - recently_done: [(Node(0-rawreads/build), False)]
2020-04-12 15:00:22,683 - pypeflow.simple_pwatcher_bridge:344 - DEBUG - Num done in this iteration: 1
2020-04-12 15:00:22,683 - pypeflow.simple_pwatcher_bridge:358 - ERROR - Some tasks are recently_done but not satisfied: {Node(0-rawreads/build)}
2020-04-12 15:00:22,683 - pypeflow.simple_pwatcher_bridge:359 - ERROR - ready: set()
	submitted: set()
2020-04-12 15:00:22,683 - pwatcher.fs_based:671 - DEBUG - delete(which='known', jobids=<0>)
2020-04-12 15:00:22,683 - pwatcher.fs_based:633 - DEBUG - Deleting jobs for jobids from known ([])
2020-04-12 15:00:22,683 - pwatcher.fs_based:614 - ERROR - Failed to kill job for heartbeat 'heartbeat-Pc9f2e42080f9f1' (which might mean it was already gone): FileNotFoundError(2, 'No such file or directory')
Traceback (most recent call last):
  File "/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/site-packages/pypeflow/simple_pwatcher_bridge.py", line 278, in refreshTargets
    self._refreshTargets(updateFreq, exitOnFailure)
  File "/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/site-packages/pypeflow/simple_pwatcher_bridge.py", line 362, in _refreshTargets
    raise Exception(msg)
Exception: Some tasks are recently_done but not satisfied: {Node(0-rawreads/build)}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/site-packages/pwatcher/fs_based.py", line 611, in delete_heartbeat
    bjob.kill(state, heartbeat)
  File "/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/site-packages/pwatcher/fs_based.py", line 273, in kill
    with open(heartbeat_fn) as ifs:
FileNotFoundError: [Errno 2] No such file or directory: '/gpfs1/scratch/30days/uqswan29/fc/mypwatcher/heartbeats/heartbeat-Pc9f2e42080f9f1'
2020-04-12 15:00:22,716 - pwatcher.fs_based:621 - DEBUG - Cannot remove heartbeat '/gpfs1/scratch/30days/uqswan29/fc/mypwatcher/heartbeats/heartbeat-Pc9f2e42080f9f1': FileNotFoundError(2, 'No such file or directory')
2020-04-12 15:00:22,716 - pypeflow.simple_pwatcher_bridge:189 - DEBUG - In notifyTerminate(), result of delete:None

fc_run.log:

falcon-kit 1.8.1 (pip thinks "falcon-kit 1.8.1")
pypeflow 2.3.0
[INFO]Setup logging from file "None".
[INFO]$ lfs setstripe -c 12 /gpfs1/scratch/30days/uqswan29/fc >
[INFO]Apparently '/gpfs1/scratch/30days/uqswan29/fc' is not in lustre filesystem, which is fine.
[INFO]fc_run started with configuration D01_v01.cfg
[INFO]cfg=
{
  "General": {
    "LA4Falcon_preload": false,
    "avoid_text_file_busy": true,
    "bestn": 12,
    "dazcon": false,
    "falcon_sense_greedy": false,
    "falcon_sense_option": "--output-multi --min-idt 0.70 --min-cov 2 --max-n-read 200 --n-core 8",
    "falcon_sense_skip_contained": true,
    "fc_ovlp_to_graph_option": " --min-len 5000",
    "genome_size": "3000000000",
    "input_fofn": "input.fofn",
    "input_type": "raw",
    "length_cutoff": "-1",
    "length_cutoff_pr": "5000",
    "overlap_filtering_setting": "--max-diff 100 --max-cov 100 --min-cov 2 --n-core 12",
    "ovlp_DBdust_option": "",
    "ovlp_DBsplit_option": "-s400",
    "ovlp_HPCdaligner_option": "-v -B4 -M32 -h60 -e.96 -l500 -s1000",
    "ovlp_daligner_option": "-e.96 -l500 -s1000 -h60",
    "pa_DBdust_option": "",
    "pa_DBsplit_option": "-a -x500 -s400",
    "pa_HPCTANmask_option": "-l500",
    "pa_HPCdaligner_option": "-v -B4 -M32 -e.70 -l1000 -s1000",
    "pa_REPmask_code": "0,300/0,300/0,300",
    "pa_daligner_option": "-e.70 -l1000 -s1000",
    "pa_dazcon_option": "-j 4 -x -l 500",
    "pa_fasta_filter_option": "streamed-internal-median",
    "pa_subsample_coverage": 0,
    "pa_subsample_random_seed": 12345,
    "pa_subsample_strategy": "random",
    "seed_coverage": "30",
    "skip_checks": false,
    "target": "assembly"
  },
  "job.defaults": {
    "job_type": "local",
    "njobs": "32",
    "pwatcher_type": "fs_based",
    "stop_all_jobs_on_failure": "False",
    "use_tmpdir": false
  },
  "job.step.asm": {},
  "job.step.cns": {
    "njobs": "8"
  },
  "job.step.da": {},
  "job.step.dust": {},
  "job.step.la": {},
  "job.step.pda": {},
  "job.step.pla": {}
}
[INFO]In simple_pwatcher_bridge, pwatcher_impl=<module 'pwatcher.fs_based' from '/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/site-packages/pwatcher/fs_based.py'>
[INFO]job_type='local', (default)job_defaults={'job_type': 'local', 'stop_all_jobs_on_failure': 'False', 'njobs': '32', 'pwatcher_type': 'fs_based', 'use_tmpdir': False}, use_tmpdir=False, squash=False, job_name_style=0
[INFO]Setting max_jobs to 32; was None
[INFO]Num unsatisfied: 2, graph: 2
[INFO]About to submit: Node(0-rawreads/build)
[INFO] starting job Job(jobid='Pc9f2e42080f9f1', cmd='/bin/bash run.sh', rundir='/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build', options={'job_type': 'local', 'stop_all_jobs_on_failure': 'False', 'njobs': '32', 'pwatcher_type': 'fs_based', 'use_tmpdir': False, 'NPROC': 1, 'MB': 4000}) w/ job_type=LOCAL
[INFO]dir: '/gpfs1/scratch/30days/uqswan29/fc/mypwatcher/jobs/Pc9f2e42080f9f1'
CALL:
 '/bin/bash /gpfs1/scratch/30days/uqswan29/fc/mypwatcher/wrappers/run-Pc9f2e42080f9f1.bash 1>|stdout 2>|stderr & '
[INFO]pid=32304 pgid=32304 sub-pid=32314
[INFO]Submitted backgroundjob=MetaJobLocal(MetaJob(job=Job(jobid='Pc9f2e42080f9f1', cmd='/bin/bash run.sh', rundir='/gpfs1/scratch/30days/uqswan29/fc/0-rawreads/build', options={'job_type': 'local', 'stop_all_jobs_on_failure': 'False', 'njobs': '32', 'pwatcher_type': 'fs_based', 'use_tmpdir': False, 'NPROC': 1, 'MB': 4000}), lang_exe='/bin/bash'))
[INFO](slept for another 0.0s -- another 1 loop iterations)
[INFO](slept for another 0.30000000000000004s -- another 2 loop iterations)
[INFO](slept for another 1.2000000000000002s -- another 3 loop iterations)
[INFO](slept for another 2.9999999999999996s -- another 4 loop iterations)
[INFO](slept for another 6.0s -- another 5 loop iterations)
[INFO](slept for another 10.500000000000002s -- another 6 loop iterations)
[INFO](slept for another 16.800000000000004s -- another 7 loop iterations)
[INFO](slept for another 25.20000000000001s -- another 8 loop iterations)
[INFO](slept for another 36.000000000000014s -- another 9 loop iterations)
[ERROR]Task Node(0-rawreads/build) failed with exit-code=256
[ERROR]Some tasks are recently_done but not satisfied: {Node(0-rawreads/build)}
[ERROR]ready: set()
	submitted: set()
[ERROR]Failed to kill job for heartbeat 'heartbeat-Pc9f2e42080f9f1' (which might mean it was already gone): FileNotFoundError(2, 'No such file or directory')
Traceback (most recent call last):
  File "/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/site-packages/pypeflow/simple_pwatcher_bridge.py", line 278, in refreshTargets
    self._refreshTargets(updateFreq, exitOnFailure)
  File "/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/site-packages/pypeflow/simple_pwatcher_bridge.py", line 362, in _refreshTargets
    raise Exception(msg)
Exception: Some tasks are recently_done but not satisfied: {Node(0-rawreads/build)}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/site-packages/pwatcher/fs_based.py", line 611, in delete_heartbeat
    bjob.kill(state, heartbeat)
  File "/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/site-packages/pwatcher/fs_based.py", line 273, in kill
    with open(heartbeat_fn) as ifs:
FileNotFoundError: [Errno 2] No such file or directory: '/gpfs1/scratch/30days/uqswan29/fc/mypwatcher/heartbeats/heartbeat-Pc9f2e42080f9f1'
Traceback (most recent call last):
  File "/home/uqswan29/.conda/envs/conda_PacBio/bin/fc_run", line 11, in <module>
    load_entry_point('falcon-kit==1.8.1', 'console_scripts', 'fc_run')()
  File "/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/site-packages/falcon_kit/mains/run1.py", line 706, in main
    main1(argv[0], args.config, args.logger)
  File "/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/site-packages/falcon_kit/mains/run1.py", line 73, in main1
    input_fofn_fn=input_fofn_fn,
  File "/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/site-packages/falcon_kit/mains/run1.py", line 235, in run
    dist=Dist(NPROC=4, MB=4000, job_dict=config['job.step.da']),
  File "/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/site-packages/falcon_kit/pype.py", line 106, in gen_parallel_tasks
    wf.refreshTargets()
  File "/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/site-packages/pypeflow/simple_pwatcher_bridge.py", line 278, in refreshTargets
    self._refreshTargets(updateFreq, exitOnFailure)
  File "/home/uqswan29/.conda/envs/conda_PacBio/lib/python3.7/site-packages/pypeflow/simple_pwatcher_bridge.py", line 362, in _refreshTargets
    raise Exception(msg)
Exception: Some tasks are recently_done but not satisfied: {Node(0-rawreads/build)}
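A side note on the exit code: assuming pypeflow is reporting the raw wait(2) status word here, exit-code=256 means the wrapped run.sh actually exited with status 1, since the real exit code lives in the high byte. A minimal Python sketch of the decoding:

```python
import os

# 256 is the raw wait(2)-style status word reported in the log above.
raw_status = 256

# os.WEXITSTATUS extracts the child's real exit code from the high byte.
print(os.WEXITSTATUS(raw_status))  # prints 1
```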

My cfg file:

[General]
input_fofn = input.fofn

input_type = raw
#input_type = preads

target = assembly

genome_size = 3000000000
seed_coverage = 30
length_cutoff = -1

length_cutoff_pr = 5000

pa_HPCdaligner_option =  -v -B4 -M32 -e.70 -l1000 -s1000
ovlp_HPCdaligner_option = -v -B4 -M32 -h60 -e.96 -l500 -s1000

pa_daligner_option = -e.70 -l1000 -s1000
ovlp_daligner_option = -e.96 -l500 -s1000 -h60

pa_DBsplit_option = -a -x500 -s400
ovlp_DBsplit_option = -s400

falcon_sense_option=--output-multi --min-idt 0.70 --min-cov 2 --max-n-read 200 --n-core 8
falcon_sense_skip_contained = True

overlap_filtering_setting=--max-diff 100 --max-cov 100 --min-cov 2 --n-core 12


[job.defaults]
job_type=local
stop_all_jobs_on_failure = False
njobs = 32

[job.step.cns]
njobs = 8
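Before rerunning, it may also be worth sanity-checking input.fofn, since a failure as early as 0-rawreads/build is often just a bad path in the FOFN. A quick check (a sketch only; input.fofn is the file named in the config above):

```shell
# Report any entry in input.fofn that does not exist or is not readable.
# A failing 0-rawreads/build step is often caused by a stale or wrong path here.
while IFS= read -r f; do
  [ -r "$f" ] || echo "MISSING or unreadable: $f"
done < input.fofn
```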

I tried the workaround in #687, but I am only allowed to update packages, not the conda base, which was last updated in July last year.

I can't work out how to debug this. Can you tell me what I am doing wrong?

Thanks in advance.

You always have to look at the stderr file, in this case in 0-rawreads/build/. These problems are usually caused by the software installation.
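For example, to surface the real error quickly, you can dump the tail of every stderr file under the run directory (a generic sketch; RUN_DIR is a placeholder for your own working directory):

```shell
# Hypothetical run directory; point this at the FALCON working directory.
RUN_DIR="${RUN_DIR:-.}"

# Each pypeflow task writes stdout/stderr in its own subdirectory;
# print the last lines of every stderr file to find the actual failure.
find "$RUN_DIR" -name stderr -type f | while IFS= read -r f; do
  echo "== $f =="
  tail -n 20 "$f"
done
```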