nibabies fails with multiple T2 images
What happened?
nibabies fails when multiple T2 images are present.
What command did you use?
docker run \
--rm \
-it \
-v /home/admin/fs-license.txt:/opt/freesurfer/license.txt:ro \
-v /home/admin/bcp:/data:ro \
-v /home/admin/output:/out \
nipreps/nibabies:23.1.0 \
/data \
/out \
participant \
--participant-label 01 \
--age-months 1 \
--stop-on-first-crash
What version of NiBabies are you using?
23.1.0
Relevant log output
231221-17:06:10,184 nipype.workflow IMPORTANT:
Running nibabies version 23.1.0:
* BIDS dataset path: /data.
* Participant list: [['01', '1mo']].
* Run identifier: 20231221-170600_c895c470-3aad-467d-aa97-7bc0b2ce4d44.
* Output spaces: MNIInfant.
* Pre-run FreeSurfer's SUBJECTS_DIR: /out/sourcedata/freesurfer.
231221-17:06:10,250 cli WARNING:
`--age-months` is deprecated and will be removed in a future release.Please use a `sessions.tsv` or `participants.tsv` file to track participants age.
231221-17:06:11,101 nipype.workflow IMPORTANT:
BOLD series will be slice-timing corrected to an offset of 0.311s.
231221-17:06:11,153 nipype.workflow CRITICAL:
The carpetplot requires CIFTI outputs
[WARNING] Citeproc: citation templateflow not found
231221-17:06:12,556 nipype.workflow IMPORTANT:
nibabies started!
231221-17:06:13,520 nipype.workflow WARNING:
Some nodes demand for more threads than available (4).
231221-17:06:23,536 nipype.workflow ERROR:
Node ds_t2w_ref_xfms failed to run on host 6dc86becf079.
231221-17:06:23,537 nipype.workflow ERROR:
Saving crash info to /out/sub-01/ses-1mo/log/20231221-170600_c895c470-3aad-467d-aa97-7bc0b2ce4d44/crash-20231221-170623-root-ds_t2w_ref_xfms-e50adaa0-22d4-413a-a24b-40762313519a.txt
Traceback (most recent call last):
File "/opt/conda/envs/nibabies/lib/python3.10/site-packages/nipype/pipeline/plugins/multiproc.py", line 292, in _send_procs_to_workers
num_subnodes = self.procs[jobid].num_subnodes()
File "/opt/conda/envs/nibabies/lib/python3.10/site-packages/nipype/pipeline/engine/nodes.py", line 1309, in num_subnodes
self._check_iterfield()
File "/opt/conda/envs/nibabies/lib/python3.10/site-packages/nipype/pipeline/engine/nodes.py", line 1332, in _check_iterfield
raise ValueError(
ValueError: Input in_file was not set but it is listed in iterfields.
When creating this crashfile, the results file corresponding
to the node could not be found.
231221-17:06:23,537 nipype.workflow CRITICAL:
nibabies failed: Traceback (most recent call last):
File "/opt/conda/envs/nibabies/lib/python3.10/site-packages/nipype/pipeline/plugins/multiproc.py", line 292, in _send_procs_to_workers
num_subnodes = self.procs[jobid].num_subnodes()
File "/opt/conda/envs/nibabies/lib/python3.10/site-packages/nipype/pipeline/engine/nodes.py", line 1309, in num_subnodes
self._check_iterfield()
File "/opt/conda/envs/nibabies/lib/python3.10/site-packages/nipype/pipeline/engine/nodes.py", line 1332, in _check_iterfield
raise ValueError(
ValueError: Input in_file was not set but it is listed in iterfields.
When creating this crashfile, the results file corresponding
to the node could not be found.
Traceback (most recent call last):
File "/opt/conda/envs/nibabies/lib/python3.10/site-packages/nipype/pipeline/plugins/multiproc.py", line 292, in _send_procs_to_workers
num_subnodes = self.procs[jobid].num_subnodes()
File "/opt/conda/envs/nibabies/lib/python3.10/site-packages/nipype/pipeline/engine/nodes.py", line 1309, in num_subnodes
self._check_iterfield()
File "/opt/conda/envs/nibabies/lib/python3.10/site-packages/nipype/pipeline/engine/nodes.py", line 1332, in _check_iterfield
raise ValueError(
ValueError: Input in_file was not set but it is listed in iterfields.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/envs/nibabies/bin/nibabies", line 8, in <module>
sys.exit(main())
File "/opt/conda/envs/nibabies/lib/python3.10/site-packages/nibabies/cli/run.py", line 105, in main
nibabies_wf.run(**_plugin)
File "/opt/conda/envs/nibabies/lib/python3.10/site-packages/nipype/pipeline/engine/workflows.py", line 638, in run
runner.run(execgraph, updatehash=updatehash, config=self.config)
File "/opt/conda/envs/nibabies/lib/python3.10/site-packages/nipype/pipeline/plugins/base.py", line 199, in run
self._send_procs_to_workers(updatehash=updatehash, graph=graph)
File "/opt/conda/envs/nibabies/lib/python3.10/site-packages/nipype/pipeline/plugins/multiproc.py", line 295, in _send_procs_to_workers
self._clean_queue(
File "/opt/conda/envs/nibabies/lib/python3.10/site-packages/nipype/pipeline/plugins/base.py", line 256, in _clean_queue
raise RuntimeError("".join(result["traceback"]))
RuntimeError: Traceback (most recent call last):
File "/opt/conda/envs/nibabies/lib/python3.10/site-packages/nipype/pipeline/plugins/multiproc.py", line 292, in _send_procs_to_workers
num_subnodes = self.procs[jobid].num_subnodes()
File "/opt/conda/envs/nibabies/lib/python3.10/site-packages/nipype/pipeline/engine/nodes.py", line 1309, in num_subnodes
self._check_iterfield()
File "/opt/conda/envs/nibabies/lib/python3.10/site-packages/nipype/pipeline/engine/nodes.py", line 1332, in _check_iterfield
raise ValueError(
ValueError: Input in_file was not set but it is listed in iterfields.
When creating this crashfile, the results file corresponding
to the node could not be found.
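For context, the ValueError originates in nipype's MapNode machinery: a MapNode declares `in_file` as an iterfield, but nothing upstream ever set it, so `_check_iterfield` refuses to expand the node into subnodes. The following is a minimal stdlib-only sketch of that check, not nipype's actual class (the real logic lives in `nipype/pipeline/engine/nodes.py`; all names here are illustrative):

```python
# Illustrative sketch of nipype's MapNode iterfield validation; not the real class.

class Undefined:
    """Stand-in for the sentinel nipype uses for unset trait inputs."""
    def __repr__(self):
        return "<undefined>"


class MapNodeSketch:
    def __init__(self, iterfields):
        self.iterfields = list(iterfields)
        # Every iterfield starts out unset until a workflow connection fills it.
        self.inputs = {field: Undefined() for field in self.iterfields}

    def _check_iterfield(self):
        # Mirrors the error in the log: an iterfield that was never connected
        # makes it impossible to determine how many subnodes to spawn.
        for field in self.iterfields:
            if isinstance(self.inputs[field], Undefined):
                raise ValueError(
                    f"Input {field} was not set but it is listed in iterfields."
                )


node = MapNodeSketch(iterfields=["in_file"])
try:
    node._check_iterfield()
except ValueError as err:
    print(err)  # Input in_file was not set but it is listed in iterfields.
```

In the real workflow, this suggests the `ds_t2w_ref_xfms` node is wired up when multiple T2w runs are found, but the transforms it expects are never produced.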
Additional information
To reproduce:
- Install the data from https://gin.g-node.org/nipreps-data/bcp
- Duplicate the run-001 T2w image and its JSON sidecar:
  cp bcp/sub-01/ses-1mo/anat/sub-01_ses-1mo_run-001_T2w.nii.gz bcp/sub-01/ses-1mo/anat/sub-01_ses-1mo_run-002_T2w.nii.gz
  cp bcp/sub-01/ses-1mo/anat/sub-01_ses-1mo_run-001_T2w.json bcp/sub-01/ses-1mo/anat/sub-01_ses-1mo_run-002_T2w.json
- Install fs-license.txt
- Run the command given above.
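The duplication step above can be tried without downloading anything; this sketch fabricates a throwaway directory standing in for the real gin.g-node.org/nipreps-data/bcp layout, then performs the same copies:

```shell
# Sketch of the reproduction's duplication step. A temp directory stands in
# for the real bcp dataset so the commands can be exercised in isolation.
set -euo pipefail

ANAT="$(mktemp -d)/bcp/sub-01/ses-1mo/anat"
mkdir -p "${ANAT}"
# Empty placeholder files in place of the real image + sidecar:
touch "${ANAT}/sub-01_ses-1mo_run-001_T2w.nii.gz" \
      "${ANAT}/sub-01_ses-1mo_run-001_T2w.json"

# Duplicating run-001 as run-002 is enough to trigger the crash:
cp "${ANAT}/sub-01_ses-1mo_run-001_T2w.nii.gz" "${ANAT}/sub-01_ses-1mo_run-002_T2w.nii.gz"
cp "${ANAT}/sub-01_ses-1mo_run-001_T2w.json" "${ANAT}/sub-01_ses-1mo_run-002_T2w.json"

echo "run-002 files: $(ls "${ANAT}" | grep -c run-002)"
```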
I have encountered this bug on Linux (Debian) and macOS, as well as with Singularity on RHEL.
I have removed the start of the output (the BIDS validator notices) from this report.
Thanks for the report! Originally, we aggregated all anatomicals across sessions when constructing the reference T1w/T2w, but now that we have moved to per-session processing, this behavior is something we should revisit. In your case, what is the intention behind collecting multiple runs of anatomicals? Are both used?
Thanks for the fix, @mgxd.
In response to your earlier question: we often acquire more than one scan to acclimatize the baby to the scanner noise, or when we suspect motion in a scan. We can certainly arrange to limit the input to one scan, but it would be useful not to have to, or at least to receive a more constructive error message. :)