
Across-Modalities

Source code reproducing the analyses of the article "A common representation of time across visual and auditory modalities" (Barne et al., 2018).

The contributions of this work are to reproduce the original study and to propose improvements to the decoding process.
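The decoding in the article is based on multivariate pattern analysis (MVPA) of EEG signals. As a rough illustration of that idea only, not this repository's actual implementation, the sketch below trains a classifier at each time point on synthetic data (scikit-learn is assumed; all names, shapes, and data are placeholders):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 50
X = rng.standard_normal((n_trials, n_channels, n_times))  # placeholder EEG epochs
y = rng.integers(0, 2, n_trials)                          # placeholder labels (e.g., modality)

# Time-resolved decoding: fit and cross-validate one classifier per time point
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = [cross_val_score(clf, X[:, :, t], y, cv=5).mean() for t in range(n_times)]
print("peak decoding accuracy: %.2f" % max(scores))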

Abstract

Project Organization


The structure of this project follows the Cookiecutter for reproducibility template.

.
├── LICENSE
├── README.md
├── bin
├── config
├── data
│   ├── external
│   ├── interim
│   ├── processed
│   └── raw
├── docs
├── notebooks
├── reports
│   └── figures
└── src
    ├── data
    ├── external
    ├── models
    ├── tools
    └── visualization

Requirements

This code was developed and tested exclusively on Python 3, so we do not guarantee that it works with other versions.

  • Python >= 3.5
  • TO-DO

Installing the dependencies

Install virtualenv and create a new virtual environment:

pip install virtualenv
virtualenv -p /usr/bin/python3 ./venv
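
Activate the environment so the dependencies are installed into it (on Linux/macOS):

source ./venv/bin/activate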

Install the dependencies:

pip install -r config/requirements.txt

Citation

If you find this code useful for your research, please cite:

@article{Barne:2018,
  title = "A common representation of time across visual and auditory modalities",
  journal = "Neuropsychologia",
  volume = "119",
  pages = "223--232",
  year = "2018",
  issn = "0028-3932",
  doi = "10.1016/j.neuropsychologia.2018.08.014",
  url = "http://www.sciencedirect.com/science/article/pii/S0028393218304913",
  author = "Louise C. Barne and João R. Sato and Raphael Y. de Camargo and Peter M.E. Claessens and Marcelo S. Caetano and André M. Cravo",
  keywords = "Time perception, Multivariate pattern analysis, EEG, Vision, Audition",
}

