rule-specific config parsed as a dict, not as a list
nickp60 opened this issue · 3 comments
Thanks for putting together the config docs! It looks like things diverged at some point: the README example gives a rule-specific yaml file that parses as a list, while the code expects a dict. Sorry, I should have saved the error message!
I was getting errors until I looked at the sge profile and saw how they specified their keys. I think the example in the lsf README
__default__:
- "-P project2"
- "-W 1:05"
foo:
- "-P gpu"
- "-gpu 'gpu resources'"
should become something like
__default__:
  P: "project2"
  W: "1:05"
foo:
  P: "gpu"
  gpu: "gpu resources"
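The difference shows up directly if you load the two snippets yourself. A quick sketch, assuming PyYAML is available:

```python
# Sketch of why the two styles behave differently: "- ..." entries
# parse to a Python list, "key: value" pairs parse to a dict.
import yaml

list_style = yaml.safe_load("""
__default__:
- "-P project2"
- "-W 1:05"
""")

dict_style = yaml.safe_load("""
__default__:
  P: "project2"
  W: "1:05"
""")

print(type(list_style["__default__"]))  # <class 'list'>
print(type(dict_style["__default__"]))  # <class 'dict'>
```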
Hi @nickp60, the lsf.yaml file is completely separate from the snakemake cluster config. Changing the config as in your example would not work with this profile. I would really need to see an error log to help figure out what's going wrong here.
Hi @mbhall88 ,
For instance:
docker run --rm -v $PWD:/data/ snakemake/snakemake:v7.1.0 snakemake --jobs 2 --cluster-config /data/lsf.yaml --snakefile /data/test.smk --cluster "bsub"
with two files in your working directory
test.smk
rule all:
    input: "/data/bar.txt"

rule foo:
    output: "/data/bar.txt"
    shell:
        "echo 'bar' > {output}"
and lsf.yaml
__default__:
- "-P project2"
- "-W 1:05"
foo:
- "-P gpu"
- "-gpu 'gpu resources'"
gives
Traceback (most recent call last):
File "/opt/conda/envs/snakemake/lib/python3.10/site-packages/snakemake/__init__.py", line 714, in snakemake
success = workflow.execute(
File "/opt/conda/envs/snakemake/lib/python3.10/site-packages/snakemake/workflow.py", line 1097, in execute
success = self.scheduler.schedule()
File "/opt/conda/envs/snakemake/lib/python3.10/site-packages/snakemake/scheduler.py", line 540, in schedule
self.run(runjobs)
File "/opt/conda/envs/snakemake/lib/python3.10/site-packages/snakemake/scheduler.py", line 588, in run
executor.run_jobs(
File "/opt/conda/envs/snakemake/lib/python3.10/site-packages/snakemake/executors/__init__.py", line 153, in run_jobs
self.run(
File "/opt/conda/envs/snakemake/lib/python3.10/site-packages/snakemake/executors/__init__.py", line 1107, in run
jobscript = self.get_jobscript(job)
File "/opt/conda/envs/snakemake/lib/python3.10/site-packages/snakemake/executors/__init__.py", line 787, in get_jobscript
f = job.format_wildcards(self.jobname, cluster=self.cluster_wildcards(job))
File "/opt/conda/envs/snakemake/lib/python3.10/site-packages/snakemake/executors/__init__.py", line 899, in cluster_wildcards
return Wildcards(fromdict=self.cluster_params(job))
File "/opt/conda/envs/snakemake/lib/python3.10/site-packages/snakemake/executors/__init__.py", line 875, in cluster_params
cluster.update(self.cluster_config.get(job.name, dict()))
AttributeError: 'list' object has no attribute 'update'
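The failure mode can be reproduced without snakemake at all. A minimal sketch (not snakemake's actual code) of the `cluster_params` logic from the traceback, applied to the two config shapes:

```python
# Why a list-valued cluster config triggers the AttributeError above:
# snakemake copies the __default__ entry and overlays the rule-specific
# entry with dict.update(), so __default__ must parse as a dict.
import copy

# Shape produced by the README's "- ..." style YAML:
list_style = {
    "__default__": ["-P project2", "-W 1:05"],
    "foo": ["-P gpu", "-gpu 'gpu resources'"],
}
# Shape the --cluster-config machinery expects:
dict_style = {
    "__default__": {"P": "project2", "W": "1:05"},
    "foo": {"P": "gpu", "gpu": "gpu resources"},
}

def cluster_params(cluster_config, rule_name):
    # Roughly mirrors snakemake/executors/__init__.py: copy __default__,
    # then overlay the rule-specific settings.
    cluster = copy.copy(cluster_config.get("__default__", {}))
    cluster.update(cluster_config.get(rule_name, {}))
    return cluster

print(cluster_params(dict_style, "foo"))
# {'P': 'gpu', 'W': '1:05', 'gpu': 'gpu resources'}

try:
    cluster_params(list_style, "foo")
except AttributeError as err:
    print(err)  # 'list' object has no attribute 'update'
```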
whereas changing the config as I described to
__default__:
  P: "project2"
  W: "1:05"
foo:
  P: "gpu"
  gpu: "gpu resources"
does not raise the error. I'm just pointing out that the way the README describes the cluster config may need to be clarified.
Yes, the problem is that you're using the lsf.yaml in the wrong way. You do not pass it to the --cluster-config option. See this section of the docs for where lsf.yaml must be placed. tl;dr: it must be in the working directory for the pipeline, you also need to have the profile installed as per these instructions, and you invoke it with --profile lsf (if the profile was named lsf).
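For reference, a rough sketch of the intended layout and invocation (file and profile names here are illustrative; the profile itself reads lsf.yaml from the working directory):

```
project/            # working directory for the pipeline
├── Snakefile       # or test.smk, passed via --snakefile
└── lsf.yaml        # per-rule LSF resources, read by the profile, NOT by --cluster-config

$ snakemake --profile lsf --jobs 2
```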