Build a comprehensive benchmark of popular BCI algorithms applied to an extensive list of freely available EEG datasets.
This is an open science project that may evolve depending on the needs of the community.
First and foremost, Welcome! 🎉 Willkommen! 🎊 Bienvenue! 🎈🎈🎈
Thank you for visiting the Mother of all BCI Benchmark repository.
This document is a hub to give you some information about the project. Jump straight to one of the sections below, or just scroll down to find out more.
- What are we doing? (And why?)
- Who are we?
- Get in touch
- Installation
- Running
- Supported datasets
- Documentation
- Architecture and main concepts
- Citing MOABB and related publications
- Reproducible Research in BCI has a long way to go.
- While many BCI datasets are made freely available, researchers often do not publish code, and reproducing the results required to benchmark new algorithms turns out to be trickier than it should be.
- Performance can be significantly impacted by the parameters of the preprocessing steps, the toolboxes used, and implementation “tricks” that are almost never reported in the literature.
As a result, there is no comprehensive benchmark of BCI algorithms, and newcomers spend a tremendous amount of time browsing the literature to find out which algorithm works best and on which dataset.
The Mother of all BCI Benchmarks allows us to:
- Build a comprehensive benchmark of popular BCI algorithms applied to an extensive list of freely available EEG datasets.
- The code is made available on GitHub, serving as a reference point for future algorithmic developments.
- Algorithms can be ranked and promoted on a website, providing a clear picture of the different solutions available in the field.
This project will be successful when we read in an abstract “ … the proposed method obtained a score of 89% on the MOABB (Mother of All BCI Benchmarks), outperforming the state of the art by 5% ...”.
The founders of the Mother of all BCI Benchmarks are Alexandre Barachant and Vinay Jayaram. This project is under the umbrella of NeuroTechX, the international community for NeuroTech enthusiasts. The project is currently maintained by Sylvain Chevallier.
You! In whatever way you can help.
We need expertise in programming, user experience, software sustainability, documentation, technical writing, and project management.
We'd love your feedback along the way.
Our primary goal is to build a comprehensive benchmark of popular BCI algorithms applied to an extensive list of freely available EEG datasets, and we're excited to support the professional development of any and all of our contributors. If you're looking to learn to code, try out working collaboratively, or translate your skills to the digital domain, we're here to help.
If you think you can help in any of the areas listed above (and we bet you can) or in any of the many areas that we haven't yet thought of (and here we're sure you can) then please check out our contributors' guidelines and our roadmap.
Please note that it's very important to us that we maintain a positive and supportive environment for everyone who wants to participate. When you join us we ask that you follow our code of conduct in all interactions both on and offline.
If you want to report a problem or suggest an enhancement we'd love for you to open an issue at this github repository because then we can get right on it.
For a less formal discussion or for exchanging ideas, you can also reach us on the Gitter channel or join our weekly office hours! This is an open video meeting held every Thursday at 18:30 GMT+1; please ask for the link on the Gitter channel. We are also on the NeuroTechX Slack, in the #moabb channel.
Thank you so much (Danke schön! Merci beaucoup!) for visiting the project and we do hope that you'll join us on this amazing journey to build a comprehensive benchmark of popular BCI algorithms applied to an extensive list of freely available EEG datasets.
A PyPI package will be available soon. For now, you need to fork or clone the repository, go to the downloaded directory, and then run the following:
- Install `poetry` (only once per machine):

  ```bash
  curl -sSL https://raw.githubusercontent.com/python-poetry/poetry/master/get-poetry.py | python -
  ```

  or check out the poetry installation instructions, or use the conda-forge version.

- (Optional, skip if not sure) Disable automatic virtual environment creation:

  ```bash
  poetry config virtualenvs.create false
  ```

- Install all dependencies in one command (must be run in the project directory):

  ```bash
  poetry install
  ```

  See the `pyproject.toml` file for the full list of dependencies.
To ensure it is running correctly, you can also run

```bash
python -m unittest moabb.tests
```

once it is installed.
First, you could take a look at our tutorials, which cover the most important concepts and use cases. We also have several examples available.
You might also be interested in the MOABB documentation.
The list of supported datasets can be found here: http://moabb.neurotechx.com/docs/datasets.html
You can submit a new dataset by mentioning it in this issue. The datasets currently on our radar can be seen [here](https://github.com/NeuroTechX/moabb/wiki/Datasets-Support).
There are four main concepts in MOABB: datasets, paradigms, evaluations, and pipelines. In addition, we offer statistical and visualization utilities to simplify the workflow.

A dataset handles and abstracts low-level access to the data. The dataset takes data stored locally, in the format in which they have been downloaded, and converts them into an MNE Raw object. There are options to pool all the different recording sessions per subject or to evaluate them separately.
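As a minimal sketch (using the `BNCI2014001` motor imagery dataset shipped with MOABB as an illustration), accessing a dataset could look like this:

```python
from moabb.datasets import BNCI2014001

# a freely available motor imagery dataset
dataset = BNCI2014001()

# download (if needed) and load the recordings of subject 1 as MNE Raw objects,
# organized as {subject: {session: {run: Raw}}}
data = dataset.get_data(subjects=[1])
```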
A paradigm defines how the raw data will be converted to trials ready to be processed by a decoding algorithm. This is a function of the paradigm used, i.e. in motor imagery one can have two-class, multi-class, or continuous paradigms; similarly, different preprocessing is necessary for ERP vs ERD paradigms.
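For example, a sketch reusing the `dataset` object from above and assuming a left- vs right-hand imagery paradigm:

```python
from moabb.paradigms import LeftRightImagery

# defines how the raw recordings are filtered, epoched and labeled
paradigm = LeftRightImagery()

# X: array of trials, labels: class of each trial, metadata: subject/session information
X, labels, metadata = paradigm.get_data(dataset=dataset, subjects=[1])
```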
An evaluation defines how we go from trials per subject and session to a generalization statistic (AUC score, f-score, accuracy, etc) -- it can be either within-recording-session accuracy, across-session within-subject accuracy, across-subject accuracy, or other transfer learning settings.
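As an illustration, here is a sketch of a within-session evaluation that reuses the `paradigm` and `dataset` objects from above; the CSP+LDA pipeline is just an example algorithm to benchmark:

```python
from mne.decoding import CSP
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA
from sklearn.pipeline import make_pipeline

from moabb.evaluations import WithinSessionEvaluation

# a dictionary of named scikit-learn compatible pipelines to benchmark
pipelines = {"CSP+LDA": make_pipeline(CSP(n_components=8), LDA())}

# cross-validate each pipeline within every recording session of every subject
evaluation = WithinSessionEvaluation(paradigm=paradigm, datasets=[dataset])
results = evaluation.process(pipelines)  # returns a pandas DataFrame of scores
```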
A pipeline defines all steps required by an algorithm to obtain predictions. Pipelines are typically a chain of scikit-learn compatible transformers and end with a scikit-learn compatible estimator. See Pipelines for more info.
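For instance, a sketch of a simple Riemannian pipeline, assuming `pyriemann` (a MOABB dependency) is available:

```python
from pyriemann.classification import MDM
from pyriemann.estimation import Covariances
from sklearn.pipeline import make_pipeline

# a chain of scikit-learn compatible transformers ending with an estimator:
# spatial covariance estimation followed by a minimum-distance-to-mean classifier
pipelines = {"Cov+MDM": make_pipeline(Covariances(estimator="oas"), MDM())}
```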
Once an evaluation has been run, the raw results are returned as a DataFrame. This can be further processed via the following commands to generate some basic visualization and statistical comparisons:
```python
from moabb.analysis import analyze

# run every pipeline through the evaluation; the raw results come back as a DataFrame
results = evaluation.process(pipeline_dict)

# generate basic visualizations and statistical comparisons from the results
analyze(results)
```
To cite MOABB, you could use the following paper:
Vinay Jayaram and Alexandre Barachant. "MOABB: trustworthy algorithm benchmarking for BCIs." Journal of Neural Engineering 15.6 (2018): 066011. DOI
If you publish a paper using MOABB, please contact us on Gitter or open an issue, and we will add your paper to the dedicated wiki page.