NomisCiri/eeg_manypipes_arc

look at behavior: accuracy, ... ?

We should also analyse the behavioral accuracy and other data -- perhaps some subjects did not perform significantly differently from chance level, and if so, we should exclude them.

Sure. A binomial test against .5?

Sounds good to me. For each choice they either said "seen before" (true/false) or "new" (true/false).
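
For concreteness, a minimal sketch of that per-subject test (the counts here are made up; `binomtest` needs SciPy >= 1.7, older versions call it `binom_test`):

```python
from scipy.stats import binomtest

# Hypothetical subject: 640 correct responses out of 1200 trials.
# Chance level for a binary "seen before" / "new" choice is 0.5.
n_correct, n_trials = 640, 1200
result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(result.pvalue)  # exclude the subject if not significantly above chance
```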

Just thinking that .5 might not be the right probability for this case, at least not without checking whether "seen before" is actually true... is there a reasonable chance baseline for this scenario?

Better would be to calculate some signal detection measure... d'?
And require that to be bigger than .9 or .8?

For our n-back we have been using .8

The experiment comprised 600 different images; 300 images that were presented only once and another
300 images that were presented three times (first presentation as a new image, second and third
presentation as old), resulting in a total of 1200 trials, half of which featured a new image. Image
repetitions occurred after a lag of 10 to 60 intervening trials.

On each trial participants make a binary choice: "seen before" or "new"... somebody clicking randomly would have 50% accuracy in this task.

Somebody always clicking "new" would also have 50% accuracy 🤔 same as somebody always clicking "seen before".

... because there is no distinction between "seen once" and "seen twice", right?

Yes, you are right. That would give us sensitivity, but since we have a signal detection task we also have specificity, and I feel like d' is just the more accurate measure of whether someone performs the task as they should.
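
If we go the d' route, here is a minimal sketch, assuming we count hits ("seen before" on old images), misses, false alarms ("seen before" on new images), and correct rejections per subject; the 0.5 log-linear correction is one common convention, and the counts are made up:

```python
from scipy.stats import norm

def dprime(n_hit, n_miss, n_fa, n_cr):
    """d' from response counts, using the log-linear correction
    (add 0.5 to each cell) so rates of exactly 0 or 1 stay finite."""
    hit_rate = (n_hit + 0.5) / (n_hit + n_miss + 1)
    fa_rate = (n_fa + 0.5) / (n_fa + n_cr + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# The strategy responder from above: always "new" -> 0 hits, 600 misses,
# 0 false alarms, 600 correct rejections. 50% accuracy, but d' is exactly 0.
print(dprime(0, 600, 0, 600))      # 0.0

# A made-up decent subject: d' comes out around 1.5.
print(dprime(480, 120, 150, 450))
```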

But I am actually not sure whether, with this particular task design, it will be a more reasonable measure than the binomial test against .5.
They put some thought into the design :)

Your call, I'd say.

I am curious to see your approach now, so maybe you can give it a shot and make a PR for it :-)

deal

closed by #11