bbfrederick/rapidtide

Denoising fails on nilearn dataset

tsalo opened this issue · 5 comments

tsalo commented

Describe the bug
rapidtide crashes with a histogram error (ValueError: supplied range of [nan, nan] is not finite) when run on an example functional file from nilearn in MNI space.

To Reproduce

First, download the example data and create a brain mask, using Python:

import os

from nilearn import datasets, masking

data = datasets.fetch_development_fmri(n_subjects=1, age_group="adult")
# Functional scan is
# /Users/tsalo/nilearn_data/development_fmri/development_fmri/sub-pixar123_task-pixar_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz

# Create a brain mask from the functional scan
os.makedirs("out", exist_ok=True)
mask = "out/brainmask.nii.gz"
mask_img = masking.compute_epi_mask(data.func[0])
mask_img.to_filename(mask)

Then, run rapidtide from the command line:

rapidtide /Users/tsalo/nilearn_data/development_fmri/development_fmri/sub-pixar123_task-pixar_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz out/output --denoising --filterband lfo --noprogressbar --datatstep 2 --globalmeaninclude out/brainmask.nii.gz

This results in the following error:

In /Users/tsalo/anaconda/envs/python3/lib/python3.6/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: 
The savefig.frameon rcparam was deprecated in Matplotlib 3.1 and will be removed in 3.3.
In /Users/tsalo/anaconda/envs/python3/lib/python3.6/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: 
The verbose.level rcparam was deprecated in Matplotlib 3.1 and will be removed in 3.3.
In /Users/tsalo/anaconda/envs/python3/lib/python3.6/site-packages/matplotlib/mpl-data/stylelib/_classic_test.mplstyle: 
The verbose.fileo rcparam was deprecated in Matplotlib 3.1 and will be removed in 3.3.
Traceback (most recent call last):
  File "/Users/tsalo/anaconda/envs/python3/bin/rapidtide", line 7, in <module>
    exec(compile(f.read(), __file__, 'exec'))
  File "/Users/tsalo/Documents/tsalo/rapidtide/rapidtide/scripts/rapidtide", line 32, in <module>
    rapidtide_workflow.rapidtide_main(rapidtide_parser.process_args)
  File "/Users/tsalo/Documents/tsalo/rapidtide/rapidtide/workflows/rapidtide.py", line 1784, in rapidtide_main
    thedict=optiondict)
  File "/Users/tsalo/Documents/tsalo/rapidtide/rapidtide/stats.py", line 453, in makeandsavehistogram
    refine=refine)
  File "/Users/tsalo/Documents/tsalo/rapidtide/rapidtide/stats.py", line 401, in makehistogram
    thehist = np.histogram(indata, thebins, therange)
  File "<__array_function__ internals>", line 6, in histogram
  File "/Users/tsalo/anaconda/envs/python3/lib/python3.6/site-packages/numpy/lib/histograms.py", line 792, in histogram
    bin_edges, uniform_bins = _get_bin_edges(a, bins, range, weights)
  File "/Users/tsalo/anaconda/envs/python3/lib/python3.6/site-packages/numpy/lib/histograms.py", line 426, in _get_bin_edges
    first_edge, last_edge = _get_outer_edges(a, range)
  File "/Users/tsalo/anaconda/envs/python3/lib/python3.6/site-packages/numpy/lib/histograms.py", line 316, in _get_outer_edges
    "supplied range of [{}, {}] is not finite".format(first_edge, last_edge))
ValueError: supplied range of [nan, nan] is not finite

Expected behavior
I expected rapidtide to complete, but it failed during the denoising stage.

Desktop (please complete the following information):

  • OS: OSX
  • Browser: N/A
  • Version: local development install at commit 19563ef (should be the same as the dev branch)

Additional context
I'm trying to mock up a Jupyter notebook with a reproducible example for the documentation, so I'd like to use data that can be fetched easily with nilearn.

bbfrederick commented

The reason this crashes is that the processing mask (called the "corrmask" in the code - I should change this to "procmask") is not being calculated properly, probably because the mean over time has negative values in the MNI-space functional file (this happens routinely in fMRIPrep data). The solution is to set the corrmask explicitly, using "--corrmask out/brainmask.nii.gz" (same mask as you use for the global mean). A better solution is to fix the automatic mask generation code, which I'll look at.
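
Something like this should work (same data and mask paths as in your example; I've left off --filterband lfo since, as noted below, it's the default):

rapidtide /Users/tsalo/nilearn_data/development_fmri/development_fmri/sub-pixar123_task-pixar_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz out/output --denoising --noprogressbar --datatstep 2 --globalmeaninclude out/brainmask.nii.gz --corrmask out/brainmask.nii.gz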

A few other things:

  • --datatstep is usually not needed, since it's already in the NIFTI header. You specify 2, the header says 1 - I assume you did it on purpose, but why is the header wrong?
  • --filterband lfo is the default, so you don't need to specify it.
  • The output maps are strangely quantized. I'm not sure exactly what that means, but I suspect that since the TR in the header is wrong, slice time correction may have been applied incorrectly during fmriprep.

tsalo commented

The solution is to set the corrmask explicitly, using "--corrmask out/brainmask.nii.gz" (same mask as you use for the global mean).

That worked! Thanks for the fix. Since there's a workaround, I could close this issue, unless you want to keep it open until you have a more general fix for the automatic mask generation code?

--datatstep is usually not needed, since it's already in the NIFTI header. You specify 2, the header says 1 - I assume you did it on purpose, but why is the header wrong?

Yes, it looks like the version of fMRIPrep they used overwrote the TR in the header. I had to look up the TR from the nilearn docstring.
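
As an aside, the header TR is easy to check with nibabel (a quick sketch, using the file path from the example above):

import nibabel as nib

func_file = ("/Users/tsalo/nilearn_data/development_fmri/development_fmri/"
             "sub-pixar123_task-pixar_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz")

# For a 4D NIfTI, the last value returned by get_zooms() is the TR stored in the header.
# Here it reports 1.0, whereas the true TR of this dataset is 2.0 seconds.
print(nib.load(func_file).header.get_zooms())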

--filterband lfo is the default, so you don't need to specify it.

Ah, good to know. I'll drop it from the example call.

The output maps are strangely quantized. I'm not sure exactly what that means, but I suspect that since the TR in the header is wrong, slice time correction may have been applied incorrectly during fmriprep.

I think that's unlikely. fMRIPrep reads the repetition time from the metadata sidecar files rather than from the value stored directly in the NIfTI header, so the bad header TR shouldn't have affected slice time correction.

tsalo commented

Do you have any thoughts about the relative benefits of trying to patch up my masking routine vs. just adopting nilearn.masking.compute_epi_mask()? That would add a dependency, but there's no sense reinventing the wheel, especially a presumably well-thought-out wheel.

I was planning to request adding nilearn as a dependency anyway, since a few of its functions could replace custom ones - for example, nilearn.image.smooth_img() could perhaps stand in for rapidtide.filter.ssmooth(). In any case, nilearn is fairly stable, and I think using it where applicable would be a good idea.
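
For example, something along these lines (just a sketch of the idea, not how rapidtide currently does its smoothing; the 4 mm kernel is an arbitrary illustrative value):

from nilearn import image

func_file = ("/Users/tsalo/nilearn_data/development_fmri/development_fmri/"
             "sub-pixar123_task-pixar_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz")

# Apply a 4 mm FWHM Gaussian spatial smoothing kernel to the functional image
smoothed_img = image.smooth_img(func_file, fwhm=4.0)
smoothed_img.to_filename("out/smoothed_bold.nii.gz")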

Also, another suggestion for your example: it's usually helpful to use a little spatial filtering when calculating the delay maps. I use half a voxel width as a rule of thumb, so --spatialfilt 2 in this case. It tends to significantly improve the delay estimation.

Will do. I plan to open a PR with the example fairly soon. I think locally-run examples would be preferable to ones built by Read the Docs, since rapidtide's workflows are too computationally demanding for RTD's servers to run. I believe nilearn uses the same approach, so I can ask one of its devs for details.
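
So the example call would presumably end up as something like this (same paths as above, with the explicit corrmask and the suggested spatial filter added, and the redundant --filterband dropped):

rapidtide /Users/tsalo/nilearn_data/development_fmri/development_fmri/sub-pixar123_task-pixar_space-MNI152NLin2009cAsym_desc-preproc_bold.nii.gz out/output --denoising --noprogressbar --datatstep 2 --globalmeaninclude out/brainmask.nii.gz --corrmask out/brainmask.nii.gz --spatialfilt 2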

bbfrederick commented

As a result of this discussion, I added a feature - if you specify a negative value for GAUSSSIGMA, rapidtide will set it to 0.5 * the mean voxel dimension.
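
The logic is roughly this (just a sketch of the idea, not the actual rapidtide code; the helper name is made up for illustration):

import nibabel as nib
import numpy as np

def resolve_gausssigma(funcfile, gausssigma):
    # A negative sigma means "pick one for me": fall back to half the mean voxel dimension.
    if gausssigma < 0.0:
        xdim, ydim, zdim = nib.load(funcfile).header.get_zooms()[:3]  # spatial dims, in mm
        gausssigma = 0.5 * np.mean([xdim, ydim, zdim])
    return gausssigma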