
How to tune parameters for lag estimation?


The group I work for is trying to use transfer entropy to compare simulated signals, with the aim of eventually applying the same computational pipeline to real data once it becomes available. Our particular focus is on estimating interaction delays. Across a variety of simple and complex simulated datasets, we've found that TRENTOOL consistently and badly underestimates the delay: it almost always picks the smallest lag in its search range, regardless of the true value.
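For context, we follow the interaction-delay reconstruction workflow from the TRENTOOL3 manual; the sketch below shows the shape of our configuration (the specific values here are illustrative placeholders, not our actual settings):

    % Sketch of our setup, following the TRENTOOL3 manual's interaction-delay
    % reconstruction examples; the values below are placeholders
    cfgTEP.toi                 = [min(data.time{1}), max(data.time{1})];
    cfgTEP.channel             = data.label;
    cfgTEP.predicttimemin_u    = 10;    % delay search range (ms)
    cfgTEP.predicttimemax_u    = 50;
    cfgTEP.predicttimestepsize = 2;
    cfgTEP.optimizemethod      = 'ragwitz';   % embedding optimization
    cfgTEP.ragdim              = 2:8;
    cfgTEP.ragtaurange         = [0.2 0.4];
    cfgTEP.ragtausteps         = 5;
    cfgTEP.repPred             = 100;
    cfgTEP.flagNei             = 'Mass';
    cfgTEP.sizeNei             = 4;
    cfgTEP.actthrvalue         = 100;   % autocorrelation decay time threshold
    cfgTEP.maxlag              = 1000;
    cfgTEP.minnrtrials         = 12;

    cfgTESS.optdimusage   = 'indivdim';
    cfgTESS.surrogatetype = 'trialshuffling';
    cfgTESS.fileidout     = 'sim_';

    % Scans the delay range and picks the u that maximizes TE per channel pair
    TGA_results = InteractionDelayReconstruction_calculate(cfgTEP, cfgTESS, data);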

This may be a problem with our choice of estimator parameters. We're considering running a grid search over the parameter space (a rough skeleton of what we have in mind is sketched after the questions below), but figured we should first ask:

  1. Has this problem been observed before? Do you have any speculation as to what caused it?
  2. For successful uses of delay estimation, what parameters did people choose?
  3. When doing a grid search, are there any parameters that you think might be more important than others?
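For question 3, this is roughly the kind of sweep we have in mind (a skeleton only; `estimated_delay_from` is a hypothetical stand-in for however one reads the winning delay out of the results structure):

    % Hypothetical grid-search skeleton over two candidate parameters
    ragdims   = {2:5, 2:8, 2:12};    % candidate embedding-dimension ranges
    actthrs   = [40 100 200];        % candidate ACT thresholds
    est_delay = nan(numel(ragdims), numel(actthrs));
    for a = 1:numel(ragdims)
        for b = 1:numel(actthrs)
            cfgTEP.ragdim      = ragdims{a};
            cfgTEP.actthrvalue = actthrs(b);
            try
                res = InteractionDelayReconstruction_calculate(cfgTEP, cfgTESS, data);
                est_delay(a, b) = estimated_delay_from(res);   % stand-in helper
            catch
                % leave NaN where the run fails (e.g. ACT check not met)
            end
        end
    end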

We had been using trajectories sampled from a set of stochastic differential equations intended to mimic the kind of dynamics observed in EEG recordings. In the interest of figuring out where we're going wrong, though, I've tried to simplify things.

When simulating the simple coupled process below (channel 1 is an AR(1) process with unit coefficient, i.e. a random walk, and channel 2 follows it with a 30-sample delay; the output is just a channels x time points x trials matrix that is later converted to FieldTrip format, as sketched after the code), I can't even get the analysis to run: it fails the autocorrelation threshold check no matter how much I tinker. Are there parameters you would recommend raising or lowering here? Or is this a time series that simply shouldn't be expected to work, for instance because the random walk is non-stationary?

    % n_time and n_trials are defined earlier in our script;
    % placeholder values are shown here so the snippet runs standalone
    n_time   = 1000;
    n_trials = 50;

    af_output = zeros(2, n_time, n_trials);
    for trial = 1:n_trials
        % Channel 1: random walk driven by uniform noise
        af_output(1, 1, trial) = 5*rand();
        for i = 2:30
            af_output(1, i, trial) = af_output(1, i-1, trial) + (rand() - .5);
        end
        % Channel 2: silent until the 30-sample coupling delay has elapsed
        af_output(2, 1:30, trial) = 0;
        for i = 31:n_time
            af_output(1, i, trial) = af_output(1, i-1, trial) + (rand() - .5);
            % Channel 2 integrates its own past plus channel 1 delayed by 30 samples
            af_output(2, i, trial) = .2*af_output(1, i-30, trial) ...
                + af_output(2, i-1, trial) + (rand() - .5);
        end
    end
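For completeness, the conversion to FieldTrip's raw-data representation is just the following (a sketch; the sampling rate and channel names are placeholders):

    % Wrap the channels x time x trials matrix as a FieldTrip raw data structure
    fsample = 1000;                        % placeholder sampling rate (Hz)
    data = [];
    data.label   = {'source'; 'target'};   % placeholder channel names
    data.fsample = fsample;
    for trial = 1:n_trials
        data.trial{trial} = af_output(:, :, trial);    % chans x samples
        data.time{trial}  = (0:n_time-1) / fsample;    % time axis in seconds
    end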

While this problem is far simpler than the process we were originally using, I think it may help me understand where we're going wrong, and what the limitations of transfer entropy are.

Thanks for your help,

Michael