bbfrederick/rapidtide

Maxcorr map voxel = 1 bug?


I noticed that when I ran rapidtide v2.0.5 for a gas challenge experiment, a few people had isolated voxels where the maxcorr and maxcorrsq maps were equal to 1. I don't remember seeing this in rapidtide 1.9.3. The difference might be the despeckling steps, but the gaussout_info also looks messed up for these voxels, so I'm not sure. Below is one of the voxels where maxcorr/maxcorrsq is 1.

[Screenshot: Screen Shot 2021-08-18 at 1.27.51 PM]

A neighboring voxel looks normal:

[Screenshot: Screen Shot 2021-08-18 at 1.28.59 PM]

And another maxcorr=1 voxel looks like this:

[Screenshot: Screen Shot 2021-08-18 at 1.29.59 PM]
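(For reference, a minimal sketch of how these maxcorr == 1 voxels can be located with nibabel and numpy; the output filename below is an assumption and depends on the rapidtide version and output prefix.)

```python
import nibabel as nib
import numpy as np

# Assumed rapidtide 2.x-style output name; adjust to your actual prefix/version.
img = nib.load("output_desc-maxcorr_map.nii.gz")
maxcorr = img.get_fdata()

# Voxels whose correlation coefficient is exactly (or numerically) 1
hits = np.argwhere(np.isclose(maxcorr, 1.0))
print(f"{len(hits)} voxels with maxcorr == 1")
print(hits[:10])  # first few (i, j, k) indices to inspect
```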

Selected options include: --lagmin=-10.0 --lagmax=30.0 --pickleft --noglm --numnull 0 --regressor=/scratch/probeRegressor/blockTimes225.txt --similaritymetric=correlation --searchrange -30 100 --nofitfilt --nprocs=1 --convergencethresh=0.0005 --oversampfac -1 --spatialfilt 3 --filterfreqs 0.005 0.05

It seems to happen more in people who have pathology, so it could be that their data gives rapidtide some trouble. Their regressor took a few more passes than usual to refine.

[Screenshot: Screen Shot 2021-08-18 at 1.34.23 PM]

Trying to diagnose it myself by running without --nofitfilt or without despeckling to see if either is causing the problem. Should've tried that first. Will post the results here.

After removing the arguments --nofitfilt and --convergencethresh=0.0005 and setting --despecklepasses=0, this error still occurs, so it appears that the despeckling, multiple passes, and --nofitfilt are not causing it.

Most of the time these voxels are near the ventricles and not that important, but I wouldn't think this is the expected behavior. The Gaussian fitting could be getting tripped up by voxels with negative reactivity correlation peaks near the edges of the correlation range:
[Screenshot: Screen Shot 2021-08-18 at 4.27.57 PM]

[Screenshot: Screen Shot 2021-08-18 at 4.28.27 PM]

Ok, that is legitimately very weird. Those correlation functions aren't that strange, but the fit routine is clearly failing by overestimating the width. That may be because the baseline is so low - it sort of assumes a zero baseline (but not that strongly). I'll take a look at it tonight to see if I can put in something to compensate for that.
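(As an aside, a toy sketch of the failure mode being described here; this is not rapidtide's actual fit code. Fitting a single Gaussian to a correlation function that sits on a depressed baseline can distort the fitted width, whereas a model with an explicit baseline term tends to recover it.)

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, mu, sigma):
    return amp * np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

def gauss_offset(x, amp, mu, sigma, base):
    return base + gauss(x, amp, mu, sigma)

rng = np.random.default_rng(0)
lags = np.linspace(-30.0, 100.0, 261)
# Synthetic correlation function: a narrow peak (sigma = 4) on a depressed baseline
y = -0.3 + gauss(lags, 0.5, 10.0, 4.0) + 0.01 * rng.standard_normal(lags.size)

p_nobase, _ = curve_fit(gauss, lags, y, p0=[0.5, 10.0, 5.0])
p_base, _ = curve_fit(gauss_offset, lags, y, p0=[0.5, 10.0, 5.0, 0.0])

print("sigma, zero-baseline model:    ", p_nobase[2])  # can be badly distorted
print("sigma, explicit-baseline model:", p_base[2])    # should come out near 4.0
```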

No rush! Thanks

Can you try setting "--sigmalimit 500" and not setting --nofitfilt? That should at least let rapidtide know that it should zero out super wide peaks like you are seeing. You may have to go down lower, like 100, but start at 500.
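(For anyone reading along, a rough post-hoc sketch of what that sigma limit amounts to: zero out voxels whose fitted peak width is implausibly large. The map filenames are assumptions that depend on the rapidtide version and output prefix, and this is not the internal implementation.)

```python
import nibabel as nib
import numpy as np

# Assumed output names; adjust to your rapidtide version/prefix.
width_img = nib.load("output_desc-maxwidth_map.nii.gz")  # fitted peak width
corr_img = nib.load("output_desc-maxcorr_map.nii.gz")    # fitted peak height

width = width_img.get_fdata()
corr = corr_img.get_fdata()

sigmalimit = 500.0  # start here; drop to ~100 if hot voxels remain
corr_clean = np.where(width > sigmalimit, 0.0, corr)  # zero out super-wide peaks

out = nib.Nifti1Image(corr_clean, corr_img.affine, corr_img.header)
nib.save(out, "output_desc-maxcorr_sigmafiltered.nii.gz")
```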

Not setting --nofitfilt and setting --sigmalimit to 500 or 100 didn't change the output in any meaningful way.

When I ran the data through rapidtide 1.9.1, there don't seem to be any voxels with R2 = 1. Do you think the fit filtering (which should be on when --nofitfilt isn't set) isn't working in the new version?

Hopefully I'm not making a mistake on my end! (because I'm surprised you haven't seen this happen at least once.)

Can you try setting "--similaritymetric hybrid"?

If it's possible for you to share this dataset, I'd be happy to look at it more closely - that might be faster.

Taking this back onto GitHub for archival purposes...

Can you send me the exact commands you run for rapidtide 1.9.1, and for 2.0.x, so I can compare the correlation functions generated by the two? Something looks seriously wrong with the filtering from the runs I've been doing here.

EDIT: Please excuse the accidental closing of the issue. Pushed the wrong button...

Using 1.9.6 now:

singularity exec /scratch/Hudson/rapidtide_1.9.6.sif rapidtide2x "$procImg" "$destination"/output --lagminthresh=-10 --lagmaxthresh=30 --pickleft --noglm -N 0 --regressor="$probe" -r -30,100 --nprocs="$nCores" -O 5 -f 3 -F 0.005,0.05 --passes=3 --pca --corrmask="$maskimg" --refineinclude="$maskimg" --refineoffset

Where $maskimg is generated by nilearn.masking.compute_epi_mask (I also make sure the pixdims of the nilearn-generated mask exactly match the input image, since there are sometimes differences on the order of 0.0001 that cause rapidtide to reject the mask).
And $probe is a text file of the binary gas challenge blocks.
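(A sketch of one way to force the mask geometry to match the functional image exactly; the filenames are placeholders, and this isn't the exact script used here.)

```python
import nibabel as nib
from nilearn.masking import compute_epi_mask

func_img = nib.load("procImg.nii.gz")   # the preprocessed functional image
mask_img = compute_epi_mask(func_img)   # nilearn's EPI mask

# Rebuild the mask on the functional image's affine/header so the pixdims match
# exactly instead of differing by ~1e-4.
mask_fixed = nib.Nifti1Image(
    mask_img.get_fdata().astype("uint8"), func_img.affine, func_img.header
)
mask_fixed.set_data_dtype("uint8")
nib.save(mask_fixed, "maskimg.nii.gz")
```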

For 2.0.7, to reproduce the error more quickly (and stay consistent with 1.9.6), I set passes=3 and despecklepasses=0 (but the error still happens if I use convergencethresh=0.0005 and don't specify despecklepasses). I also set the oversampling factor to 5 for both 1.9.6 and 2.0.7 so they match one another:

singularity exec /scratch/Hudson/rapidtide_2.0.7.sif rapidtide "$procImg" "$destination"/output --lagminthresh=-10.0 --lagmaxthresh=30.0 --pickleft --noglm --numnull 0 --regressor="$probe" --similaritymetric="correlation" --searchrange -30 100 --nofitfilt --nprocs="$nCores" --passes=3 --oversampfac 5 --spatialfilt 3 --filterfreqs 0.005 0.05 --despecklepasses=0

EDIT: Sorry, I left out --lagmaxthresh and --lagminthresh from the second command because I was testing removing them.

Ok, I think I've fixed it (at least I'm not seeing hot pixels any more, after making the change that I thought would stop them!). Give 2.0.9 a try and see if it resolves the issue (I'm having a little bit of an issue syncing Docker images at the moment, so update your source); it's available as source or in an updated Docker container. Also, I changed my pixel size comparison procedure to allow for rounding errors and slop in header fields. If you're up to it, try generating a mask with nilearn again and see if rapidtide takes it unmodified.
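(For reference, a tolerance-based comparison along those lines might look like the sketch below, where hdr_a and hdr_b are nibabel NIfTI headers; this is illustrative only, not the code that actually went into rapidtide.)

```python
import numpy as np

def pixdims_match(hdr_a, hdr_b, atol=1e-3):
    """Return True if the spatial pixdims agree to within atol (mm)."""
    pix_a = np.asarray(hdr_a["pixdim"][1:4], dtype=float)
    pix_b = np.asarray(hdr_b["pixdim"][1:4], dtype=float)
    return np.allclose(pix_a, pix_b, atol=atol)
```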

Looks like that fixed it for me as well. Thanks! Will let you know on the masking

Following up on the masking: it looks like rapidtide now accepts nilearn-generated masks without updated header information, so that's good.

Cool - thanks for the update!