Physiological Noise Correction
poeplau opened this issue · 4 comments
I was wondering whether it makes sense to run refinement with regressors generated from physiological data. It seems counterintuitive to reduce the available information in this case.
Sorry for the long delay in responding. Don't know why I lost track of this...
It depends on what you're trying to do. If you are trying to estimate the direct effect of a particular physiological variable on the brain signal, yes, I agree. Especially if you are trying to make an argument about a mechanism.
If your goal is simply to remove physiological noise from fMRI data, or to estimate hemodynamic delays within the brain, then refinement is warranted. While the physiology measured outside the brain gives you a starting point for estimating the moving regressor in the brain, refinement selects for the signal actually present in the brain itself, after whatever distortions or changes it undergoes in getting there. There isn't necessarily a linear relationship between externally measured physiological quantities and their effect on BOLD data (although it seems to be MOSTLY linear, which is a relief, since that's what makes rapidtide possible with a simple cross-correlation). The correlation between the refined signal and, say, the LFO-band plethysmogram signal in the fingertip is high, but it isn't 1: the signals are clearly related, but not the same, so using the refined signal tends to be more sensitive.
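To make the lagged cross-correlation idea concrete, here is a minimal sketch of estimating a per-voxel delay against an LFO-band probe regressor. It assumes numpy/scipy, the function and variable names are made up for illustration, and it is only the generic approach, not rapidtide's actual implementation.

```python
# Illustrative sketch only (not rapidtide code): estimate the delay between a
# voxel time series and an LFO-band probe regressor by lagged cross-correlation.
import numpy as np
from scipy.signal import correlate, correlation_lags

def estimate_delay(voxel_ts, probe_ts, tr):
    """Return (delay_seconds, peak_correlation) for one voxel.
    Positive delay means the voxel signal arrives later than the probe."""
    # z-score both signals so the correlation peak is (roughly) a Pearson r
    v = (voxel_ts - voxel_ts.mean()) / (voxel_ts.std() * len(voxel_ts))
    p = (probe_ts - probe_ts.mean()) / probe_ts.std()
    xcorr = correlate(v, p, mode="full")
    lags = correlation_lags(len(voxel_ts), len(probe_ts), mode="full")
    best = int(np.argmax(xcorr))
    return lags[best] * tr, xcorr[best]
```

Roughly speaking, running this in every voxel gives you a delay map and a correlation strength map, and the refinement step builds the next probe regressor from the voxels that fit well.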
Okay, thank you.
My Master's thesis revolves around the comparison of different denoising strategies. It sounds like you found dynamic GSR to outperform, e.g., denoising with delayed NIRS regressors. I have to ask, because I couldn't find such a comparison in your publications (admitting, of course, the possibility that I may just have missed it entirely).
Anyways, I have decided to include dynamic GSR and delayed HRV and RVT regressors (as calculated by rapidtide) in my comparison.
Yes, we went over to the dynamic GSR for a few reasons:
- Speed of calculation. The old method used many regressors, fit with fslglm, and took forever; the cross-correlation is VERY fast and, mathematically, is solving pretty much the same problem. It's not exactly the same: if there are multiple significant pools of delayed blood within the same voxel, you won't capture that with the cross-correlation, but in my tests that does not seem to be the case.
- It's much more parsimonious. You typically aren't hurting for degrees of freedom in fMRI, but simultaneously fitting many identical time courses (except for delay) burns way more of your DOF than fitting one optimally delayed one.
- Ease of interpretation. As time went by I became at least as interested in determining blood arrival time as I was in denoising. Using the cross-correlation gives you the delay time directly (and if you oversample and peak fit, you aren't limited by the TR; sub-TR resolution is easily achieved, see the sketch after this list).
- The iterative cross-correlation method performs at least as well as the external NIRS regressor, and has the advantage of not needing any additional hardware - you can do it retrospectively on any existing fMRI dataset, which is pretty sweet. Using the NIRS regressor was important in the early days in order to get acceptance for the technique. It's a lot easier to fend off criticism that we're seeing some neuronal (not hemodynamic) signal if the analysis regressor is a recording in the fingertip. But once we DID establish that, bootstrapping the regressor out of the fMRI data itself is much more convenient and flexible.
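On the sub-TR point above: the usual trick is either to cross-correlate oversampled (interpolated) versions of the time courses, or to fit a small model around the discrete correlation peak. Here is a hedged sketch of the simplest version, a three-point parabolic fit around the peak; the function name and interface are made up for illustration, and this is not rapidtide's actual peak-fitting code.

```python
import numpy as np

def refine_peak(lags_s, xcorr):
    """Refine a discrete cross-correlation peak to sub-TR resolution by
    fitting a parabola through the peak sample and its two neighbours.
    Generic illustration only, not rapidtide's peak-fitting code."""
    i = int(np.argmax(xcorr))
    if i == 0 or i == len(xcorr) - 1:
        return lags_s[i], xcorr[i]        # peak at the edge: nothing to interpolate
    y0, y1, y2 = xcorr[i - 1], xcorr[i], xcorr[i + 1]
    # vertex of the parabola through (-1, y0), (0, y1), (+1, y2), in samples
    offset = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)
    dt = lags_s[1] - lags_s[0]            # assumes uniformly spaced lags
    peak_val = y1 - 0.25 * (y0 - y2) * offset
    return lags_s[i] + offset * dt, peak_val
```

The same idea works if you instead oversample the time courses before correlating; either way, the delay estimate is no longer quantized to whole TRs.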
As you say, I never really wrote that down anywhere - our thinking just kind of evolved over time as we did a lot of these analyses, and the benefits of the new method were obvious (to us). Since I generally assume nobody pays attention to what I'm doing anyway, I didn't think anybody would look for an explanation as to why we changed...
Sorry to be THAT guy. :D
Anyways, thank you very much for the detailed response and putting up with my questions.