CerebusOSS/CereLink

Using cbSDK to grab analogue data from a Blackrock NSP

Closed this issue · 5 comments

I’m trying to develop C++ code to grab analogue data from a Blackrock NSP using cbSDK. I can easily grab spikes, but I haven’t been able to get the wideband analogue data (30 kHz). If anyone knows how, please let me know.

Thank you in advance
Tom

Hi Tom, what have you tried so far?

Don't forget that you first have to use Central to turn on continuous data for each channel you want continuous data from. I've been tripped up by this a couple of times.

Edit: It should be possible to turn on continuous data using the cbSdk, but it's much easier to just use Central, unless you specifically need to avoid it.

I don't actually have the NSP for testing. I'm streaming previously recorded data using NPlay.

I made a class that initializes the connection to Central and then gets the data. The wideband handling is in the two functions connection_init_neuro() and get_neuro() (see below). Similar code works fine for extracting spikes (setting uEvents = 1 and uConts = 0 in the init function), but nothing comes through when I try to get wideband data.

```C++
int CBlackrock::connection_init_neuro(void)
{
    cbSdkResult resTest2;

    // SET CONFIG SETUP
    UINT16 uBegChan   = 1;
    UINT32 uBegMask   = 0;
    UINT32 uBegVal    = 0;
    UINT16 uEndChan   = 128;
    UINT32 uEndMask   = 0;
    UINT32 uEndVal    = 0;
    bool   bDouble    = false;
    bool   bAbsolute  = false;
    UINT32 uWaveforms = 0;
    UINT32 uConts     = 0;
    UINT32 uEvents    = 0;
    UINT32 uComments  = 0;
    UINT32 uTrackings = 0;
    UINT32 bWithinTrial = false;

    resTest2 = cbSdkGetTrialConfig(m_blkrckInstance
        , &bWithinTrial
        , &uBegChan
        , &uBegMask
        , &uBegVal
        , &uEndChan
        , &uEndMask
        , &uEndVal
        , &bDouble
        , &uWaveforms
        , &uConts
        , &uEvents
        , &uComments
        , &uTrackings);

    uEvents = 0;
    uConts = cbSdk_CONTINUOUS_DATA_SAMPLES;
    UINT32 bActive = 1; // 0 = leave the buffer intact, 1 = clear the buffer
    uBegChan = 0;
    uEndChan = 0;
    uBegVal = 0;
    uEndVal = 0;
    uBegMask = 0;
    uEndMask = 0;
    resTest2 = cbSdkSetTrialConfig(m_blkrckInstance
        , bActive
        , uBegChan
        , uBegMask
        , uBegVal
        , uEndChan
        , uEndMask
        , uEndVal
        , bDouble
        , uWaveforms
        , uConts
        , uEvents
        , uComments
        , uTrackings
        , bAbsolute); // Configure a data collection trial

    if (resTest2 != CBSDKRESULT_SUCCESS)
    {
        return -1; // trial configuration failed
    }

    // SEND COMMENT
    UINT8 charset = 1;
    std::string commentMessage = "Neuro init";
    resTest2 = cbSdkSetComment(m_blkrckInstance, 50000, charset, commentMessage.c_str());

    // DEACTIVATE ALL CHANNELS (channel 0 with bActive 0 means "all off")
    UINT32 bActive2 = 0; // 1 = activate, 0 = deactivate
    UINT16 dchannel = 0;
    resTest2 = cbSdkSetChannelMask(m_blkrckInstance, dchannel, bActive2);

    // ACTIVATE ONLY THE CHANNELS TO BE USED (channel numbers are 1-based)
    bActive2 = 1;
    for (UINT16 ch = 1; ch <= m_numNeuroChannels; ch++)
    {
        resTest2 = cbSdkSetChannelMask(m_blkrckInstance, ch, bActive2);
    }

    return 0;
}
```


```C++
void CBlackrock::get_neuro(const int* const CHANNELS2USE
                            , INT16** Signals)
{
    cbSdkResult resTest2;

    // INIT TRIAL (EVERY TIME): find out how many samples per channel are waiting
    resTest2 = cbSdkInitTrialData(m_blkrckInstance, NULL, &m_trialCont, NULL, NULL);

    // BUFFER FOR FASTER UPDATING - request a smaller number of samples per fetch
    for (UINT32 chio = 0; chio < m_trialCont.count; chio++)
    {
        m_trialCont.num_samples[chio] = m_neuroBufferSize;
    }

    // GET CONFIG: only the double/INT16 flag is needed here
    bool bTrialDouble = false;
    resTest2 = cbSdkGetTrialConfig(m_blkrckInstance, NULL, NULL, NULL, NULL, NULL, NULL, NULL, &bTrialDouble);

    // ALLOCATE MEMORY for each channel's sample buffer
    UINT32 ch = 0;
    if (bTrialDouble)
    {
        for (ch = 0; ch < m_trialCont.count; ch++)
        {
            m_trialCont.samples[ch] = new double[m_trialCont.num_samples[ch]];
        }
    }
    else
    {
        for (ch = 0; ch < m_trialCont.count; ch++)
        {
            m_trialCont.samples[ch] = new INT16[m_trialCont.num_samples[ch]];
        }
    }

    // GET DATA (bFlushBuffer = true empties the SDK buffer after the copy)
    bool bFlushBuffer = true;
    resTest2 = cbSdkGetTrialData(m_blkrckInstance, bFlushBuffer, NULL, &m_trialCont, NULL, NULL);

    // WAIT UNTIL THE BUFFER HAS REFILLED before the next fetch
    double waitTime = 0;
    if (m_trialCont.sample_rates[0] > 0)
    {
        waitTime = double(m_neuroBufferSize) / m_trialCont.sample_rates[0] * 1000 - 2;
    }
    Sleep(static_cast<DWORD>(waitTime));

    // COPY INTO THE OUTPUT BUFFERS
    for (size_t chan = 0; chan < m_numNeuroChannels; chan++)
    {
        INT16* test = (INT16*)m_trialCont.samples[CHANNELS2USE[chan]];
        for (size_t j = 0; j < m_trialCont.num_samples[CHANNELS2USE[chan]]; j++)
        {
            Signals[chan][j] = test[j];
        }
    }

    // FREE the per-call buffers to avoid leaking on every fetch
    for (ch = 0; ch < m_trialCont.count; ch++)
    {
        if (bTrialDouble)
            delete[] static_cast<double*>(m_trialCont.samples[ch]);
        else
            delete[] static_cast<INT16*>(m_trialCont.samples[ch]);
        m_trialCont.samples[ch] = NULL;
    }
}
```

Can you edit your post and precede your code block with a "triple backtick C++" (```C++), then end it with a triple backtick? It'll make it much easier to read.

I use the Python wrapper mostly, so it's a little different, but I have a few suggestions.

  • Don't bother with the channel masking until after you've successfully retrieved data.
    • I think you can set all the relevant parameters to 0.
  • It might be instructive to look at the Pythonic way to get continuous data here, which is really calling functions defined here.
    • For every data fetch, it first calls cbSdkInitTrialData(nInstance, 0, trialcont, 0, 0), allocates the return buffer, then cbSdkGetTrialData(nInstance, reset, 0, trialcont, 0, 0). In my case I always have reset=True.

@tmile77 ,

I put together a labstreaminglayer app to stream data from the NSP. It isn't exactly the most concise way to demonstrate how to use CereLink, but it might be helpful.

Edit: I ended up abandoning this approach. The cost (extra copying of a lot of data) is a bit too much to justify the benefit (easier sync with other data sources, easier integration into existing LSL ecosystem), given that the implementation was incomplete and the Python wrapper makes it relatively easy to stream the data into a signal processing pipeline. The text below remains for posterity.

I coded-in two different approaches:

  1. Using a callback function
    • Data are returned in 'groups', where each group contains all channels that are sampled at the same rate (500, 1000, 2000, 10000, 30000, RAW).
    • For each group
      • cbSdkGetSampleGroupList to get the number of channels in the group
      • From there you can also get the info for each channel to do with as you please.
    • The callback function is called for each sample.
      • const cbPKT_GROUP *pPkt = reinterpret_cast< const cbPKT_GROUP* >(data);
      • Check the sample's group with pPkt->type
      • Do what you want with the data in pPkt->data. I simply add it to a buffer which I use elsewhere.
  2. Using cbSdkGetTrialData
  • Call cbSdkGetTrialConfig; this gives me zeros and null values in the out params, though.
    • call cbSdkSetTrialConfig with above out params, modified as needed.
    • Then, on every iteration of the main loop
      • Initialize a cbSdkTrialCont
      • Call cbSdkInitTrialData to find out how many samples per channel are waiting
      • Create a buffer for each channel and assign it to cbSdkTrialCont.samples[chanIx]
      • Call cbSdkGetTrialData with your new cbSdkTrialCont. Its .samples will be filled with the data.

The second approach isn't fully implemented because I didn't want to bother keeping a map of which channels belong to which group, and because the callback approach required less data reshaping.

Closing. Use #97