Here, we aim to provide a comprehensive collection of projects using hyperdimensional computing. Please let us know if you have any related project (abbas@ee.ethz.ch).
The way the brain works suggests that, rather than computing with the numbers we are used to, it is more efficient to compute with hyperdimensional (HD) vectors, referred to as "hypervectors." Computing with hypervectors offers a general and scalable model of computing as well as a well-defined set of arithmetic operations that enable fast, one-shot learning (no need for backpropagation). Furthermore, it is memory-centric with embarrassingly parallel operations and is extremely robust against most failure mechanisms and noise. Hypervectors are high-dimensional (e.g., 10,000 bits) and (pseudo)random with independent and identically distributed components, leading to a holographic representation (i.e., not microcoded). Hypervectors can use various codings: dense or sparse, bipolar, binary, real, or complex. They can be combined using arithmetic operations such as multiplication, addition, and permutation, and compared for similarity using distance metrics, as sketched below.
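As a minimal illustration of these operations, the sketch below builds dense bipolar hypervectors in NumPy and shows binding, bundling, permutation, and cosine similarity. The dimensionality, coding, and function names are illustrative choices, not a reference implementation.

```python
import numpy as np

D = 10_000                      # hypervector dimensionality
rng = np.random.default_rng(0)

def random_hv():
    """Pseudo-random bipolar hypervector with i.i.d. components."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding: elementwise multiplication; the result is dissimilar to both inputs."""
    return a * b

def bundle(*hvs):
    """Bundling: elementwise addition followed by sign; the result is similar to all inputs."""
    return np.sign(np.sum(hvs, axis=0))

def permute(a, k=1):
    """Permutation: cyclic shift, e.g., to encode position in a sequence."""
    return np.roll(a, k)

def similarity(a, b):
    """Cosine similarity; unrelated random hypervectors score near 0."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

x, y = random_hv(), random_hv()
print(similarity(x, y))             # ~0: random hypervectors are quasi-orthogonal
print(similarity(bundle(x, y), x))  # clearly > 0: the bundle remains similar to x
print(similarity(bind(x, y), x))    # ~0: binding produces a dissimilar vector
```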
- Pentti Kanerva. 2009. Hyperdimensional Computing: An Introduction to Computing in Distributed Representation with High-Dimensional Random Vectors. Cognitive Computation
- Abbas Rahimi et al. 2017. High-Dimensional Computing as a Nanoscalable Paradigm. IEEE Transactions on Circuits and Systems I
- Ross W. Gayler. 2003. Vector Symbolic Architectures Answer Jackendoff's Challenges for Cognitive Neuroscience. In Proceedings of the Joint International Conference on Cognitive Science
- Pentti Kanerva. 1988. Sparse Distributed Memory. MIT Press, Cambridge, MA, USA
- Chris Eliasmith. 2013. How to Build a Brain: A Neural Architecture for Biological Cognition. Oxford University Press, Oxford, UK
- Tony A. Plate. 2003. Holographic Reduced Representation: Distributed Representation for Cognitive Structures. CSLI Publications, Stanford, CA, USA
- Project specification: Designing an efficient algorithm with end-to-end binary operations for both learning and classification of human epileptic seizures from intracranial electroencephalography (iEEG); see the sketch after this list
- Input: 36 to 100 implanted iEEG electrodes
- Output: Binary classification (interictal vs. ictal states)
- Implementation: Matlab and Python
- Remarks: One-/few-shot learning from seizures with higher accuracy than SVM and MLP; linearly scalable to a large number of electrodes; lower memory footprint
- Link to code and FREE dataset
- Link to paper
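The following is a hedged sketch of what an end-to-end binary classifier of this kind can look like: binary hypervectors, XOR for binding, bitwise majority for bundling, and Hamming distance for classification, with a single labeled example per class serving as its prototype (one-shot learning). The feature extraction of the actual iEEG pipeline is not reproduced; `encode()` and the 36-electrode item memory are stand-ins.

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(1)

def rand_bin_hv():
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def majority(hvs):
    """Bitwise majority vote (bundling for binary hypervectors)."""
    return (np.sum(hvs, axis=0) > len(hvs) / 2).astype(np.uint8)

def hamming(a, b):
    return np.count_nonzero(a != b) / D

# Item memory: one random hypervector per electrode (36 electrodes assumed).
electrodes = [rand_bin_hv() for _ in range(36)]

def encode(binary_features):
    """Bind each electrode ID to its (binarized) feature and bundle spatially."""
    bound = [np.bitwise_xor(electrodes[i], f) for i, f in enumerate(binary_features)]
    return majority(bound)

# One-shot learning: a single labeled example per class becomes its prototype.
ictal_example = [rand_bin_hv() for _ in range(36)]        # placeholder data
interictal_example = [rand_bin_hv() for _ in range(36)]   # placeholder data
prototypes = {"ictal": encode(ictal_example),
              "interictal": encode(interictal_example)}

def classify(binary_features):
    q = encode(binary_features)
    return min(prototypes, key=lambda label: hamming(q, prototypes[label]))

print(classify(ictal_example))   # -> "ictal"
```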
- Project specification: Designing an energy-efficient algorithm for long-term iEEG monitoring. The main idea is to combine local binary patterns (LBP) with hyperdimensional (HD) computing, followed by patient-specific postprocessing, to learn and detect seizures from intracranial electroencephalography (iEEG); see the sketch after this list
- Input: 24 to 128 implanted iEEG electrodes
- Output: Binary classification (interictal vs. ictal states)
- Implementation: Python, OpenMP, Verilog
- Remarks: No false alarms over 1357 hours of testing; compared to support vector machine (SVM), convolutional neural network (CNN), and long short-term memory (LSTM) recurrent neural network classifiers on a TX2 embedded device, it reduces undetected seizures, false alarms (to zero), execution time, and energy consumption; fast learning from one or two seizure examples
- Link to code and largest iEEG dataset
- Link to paper
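A hedged sketch of the LBP-plus-HD idea follows: each signal sample is turned into a short local-binary-pattern code from the signs of consecutive differences, each code indexes a random item-memory hypervector, and all codes in a window are bundled into one binary hypervector. The LBP width, window length, and the omission of the patient-specific postprocessing are simplifications.

```python
import numpy as np

D, LBP_BITS = 10_000, 6
rng = np.random.default_rng(2)

# One random binary hypervector per possible LBP code.
item_memory = rng.integers(0, 2, size=(2 ** LBP_BITS, D), dtype=np.uint8)

def lbp_codes(signal):
    """6-bit codes from the signs of 6 consecutive first-order differences."""
    rising = (np.diff(signal) > 0).astype(np.uint8)
    codes = []
    for t in range(len(rising) - LBP_BITS + 1):
        bits = rising[t:t + LBP_BITS]
        codes.append(int(bits @ (1 << np.arange(LBP_BITS))))
    return codes

def encode_window(signal):
    """Bundle the item-memory hypervectors of all LBP codes in the window."""
    hvs = item_memory[lbp_codes(signal)]
    return (hvs.sum(axis=0) > len(hvs) / 2).astype(np.uint8)

window = rng.standard_normal(512)        # placeholder for one iEEG channel window
print(encode_window(window).shape)       # (10000,)
```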
- Project specification: Designing a fast learning algorithm to fuse physiological signals across different modalities (GSR, ECG, and EEG) for emotion recognition. The algorithm maps real-valued features to binary hypervectors using a random nonlinear function, encodes them over time, and fuses them across the three modalities; see the sketch after this list
- Input: 32 GSR features, 77 ECG features, and 105 EEG features
- Output: 2 parallel binary classifiers (high vs. low arousal, and positive vs. negative valence)
- Implementation: Matlab
- Remarks: Compared to GaussianNB, SVM, and XGB, our algorithm achieves higher classification accuracy for both valence (83.2% vs. 80.1%) and arousal (70.1% vs. 68.4%) using only 1/4 of the training data. It also achieves at least 5% higher average accuracy than all the other methods at any point along the learning curve.
- Link to code
- Link to paper
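The sketch below illustrates the projection-and-fusion idea under stated assumptions: real-valued feature vectors are mapped to binary hypervectors by a fixed random projection followed by a sign threshold (one possible random nonlinear map, not necessarily the paper's), and the three modalities are fused by binding each with a modality-key hypervector and taking a bitwise majority. The temporal encoding stage is omitted.

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(3)

def project_to_binary(features, projection):
    """Random nonlinear map: sign of a random projection, kept as {0,1} bits."""
    return (projection @ features > 0).astype(np.uint8)

dims = {"GSR": 32, "ECG": 77, "EEG": 105}                   # feature counts per modality
projections = {m: rng.standard_normal((D, n)) for m, n in dims.items()}
modality_keys = {m: rng.integers(0, 2, size=D, dtype=np.uint8) for m in dims}

def fuse(feature_vectors):
    """Bind each modality's hypervector to its key, then take a bitwise majority."""
    bound = [np.bitwise_xor(modality_keys[m],
                            project_to_binary(feature_vectors[m], projections[m]))
             for m in dims]
    return (np.sum(bound, axis=0) >= 2).astype(np.uint8)    # majority of 3

sample = {m: rng.standard_normal(n) for m, n in dims.items()}  # placeholder features
print(fuse(sample).shape)  # (10000,)
```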
- Project specification: Design and development of an embedded system using a large-area, high-density EMG sensor array for robust hand gesture recognition; see the sketch after this list
- Input: 64 EMG electrodes
- Output: 5 classes corresponding to different hand gestures
- Implementation: Matlab and C
- Remarks: One-shot learning; higher accuracy than SVM
- Link to code and dataset
- System Demo
- Link to paper
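As a rough illustration of how a 64-channel EMG frame can be encoded spatially, the sketch below binds each electrode's random ID hypervector with a hypervector for its quantized amplitude and bundles across electrodes. The number of quantization levels, the independent (rather than correlated) level vectors, and the bipolar coding are assumptions for illustration only.

```python
import numpy as np

D, N_CHANNELS, N_LEVELS = 10_000, 64, 21
rng = np.random.default_rng(4)

channel_ids = rng.choice([-1, 1], size=(N_CHANNELS, D))   # one ID hypervector per electrode
levels = rng.choice([-1, 1], size=(N_LEVELS, D))          # one hypervector per amplitude level

def encode_sample(amplitudes):
    """amplitudes: 64 EMG values, assumed pre-scaled to [0, 1]."""
    idx = np.clip((amplitudes * (N_LEVELS - 1)).astype(int), 0, N_LEVELS - 1)
    bound = channel_ids * levels[idx]          # bind ID with level, per channel
    return np.sign(bound.sum(axis=0))          # bundle across channels

sample = rng.random(N_CHANNELS)                # placeholder EMG frame
print(encode_sample(sample).shape)             # (10000,)
```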
- Project specification: Designing an algorithm for hand gesture recognition from a stream of EMG sensors for a smart prosthetic application; see the sketch after this list
- Input: EMG signals from 4 channels
- Output: 5 classes corresponding to different hand gestures
- Implementation: Matlab
- Remarks: ~3x less training data; higher accuracy than SVM
- Link to code and dataset
- Link to paper
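The sketch below shows one common way to add the temporal dimension for a streaming setup like this: consecutive per-sample hypervectors are combined into n-grams by rotating older samples and binding them, and the n-grams of a labeled gesture are bundled into its prototype. The per-sample encoder is assumed given (e.g., a spatial encoder like the one sketched above), and n = 3 is arbitrary.

```python
import numpy as np

D, N = 10_000, 3
rng = np.random.default_rng(5)

def ngram(sample_hvs):
    """Bind N consecutive sample hypervectors, rotating the t-th one by t."""
    out = np.ones(D)
    for t, hv in enumerate(sample_hvs):
        out *= np.roll(hv, t)
    return out

def gesture_prototype(stream_hvs):
    """Bundle all overlapping n-grams of a labeled gesture recording."""
    grams = [ngram(stream_hvs[i:i + N]) for i in range(len(stream_hvs) - N + 1)]
    return np.sign(np.sum(grams, axis=0))

stream = [rng.choice([-1, 1], size=D) for _ in range(50)]   # placeholder per-sample hypervectors
print(gesture_prototype(stream).shape)                      # (10000,)
```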
- Project specification: Accelerating HD computing on a silicon prototype of the PULPv3 4-core chip (1.5 mm², 2 mW), with optimized memory accesses and operations
- Implementation: C (for ARM Cortex M4 processors) and OpenMP (for multi-core processors)
- Remarks: Simultaneous 3.7× end-to-end speed-up and 2× energy saving compared to its single-core execution
- Link to code
- Demo
- Link to paper
- Project specification: Hardware techniques for optimizing HD computing, packaged as a synthesizable VHDL library, to enable co-located implementation of both learning and classification tasks on only a small portion of an FPGA
- Implementation: VHDL (RTL)
- Remarks: Design space exploration with library modules shows simultaneous 2.39× area and 986× throughput improvements
- Link to code
- Link to paper
- Project specification: Multiclass learning and inference using a motor imagery–based brain–computer interface (MI-BCI) operating on electroencephalography (EEG) signals; see the sketch after this list
- Input: 16 or 22 EEG electrodes
- Output: Multiclass classification (3 or 4 classes)
- Implementation: Python
- Remarks: ~26x faster training time, and ~22x lower energy
- Link to code and dataset
- Link to paper
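For the multiclass step specifically, a minimal associative-memory sketch is shown below: one prototype hypervector is accumulated per motor-imagery class during training, and a trial is assigned to the most similar prototype at inference. The EEG feature extraction and encoding in front of this step are assumed and not shown.

```python
import numpy as np

D, CLASSES = 10_000, 4
rng = np.random.default_rng(6)

class AssociativeMemory:
    def __init__(self):
        self.acc = np.zeros((CLASSES, D))           # running sums per class

    def train(self, trial_hv, label):
        self.acc[label] += trial_hv                 # bundling by accumulation

    def classify(self, trial_hv):
        protos = np.sign(self.acc)                  # bipolarize the prototypes
        return int(np.argmax(protos @ trial_hv))    # most similar prototype wins

am = AssociativeMemory()
trials = {c: [rng.choice([-1, 1], size=D) for _ in range(5)] for c in range(CLASSES)}
for c, hvs in trials.items():                       # placeholder encoded trials
    for hv in hvs:
        am.train(hv, c)
print(am.classify(trials[2][0]))                    # -> 2: a seen trial matches its class
```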
- Project specification: Binary classification of EEG error-related potentials for noninvasive brain–computer interfaces
- Input: 64 EEG electrodes
- Output: Binary classification (correct class vs. error class)
- Implementation: Matlab
- Remarks: ~3x less training data and preprocessing; no domain expert knowledge required for electrode selection
- Link to code
- Link to paper
- Project specification: Exploring tradeoffs in selecting parameters of binary HD representations for pattern recognition tasks. Particular design choices include the density of representations and strategies for mapping data from the original representation; see the sketch after this list
- Implementation: Matlab
- Remarks: For the considered pattern recognition tasks, sparse and dense approaches behave nearly identically; at the same time, implementation peculiarities may favor one approach over the other
- Link to code
- Link to paper
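The toy comparison below illustrates the density design choice: dense binary hypervectors (each bit set with probability 0.5) versus sparse ones (probability 0.02 here, an arbitrary value). In both regimes unrelated random vectors are easy to tell apart, but the natural similarity measure differs (Hamming distance versus overlap of active bits).

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(7)

def dense_hv():
    """Dense binary hypervector: each bit is 1 with probability 0.5."""
    return rng.integers(0, 2, size=D, dtype=np.uint8)

def sparse_hv(p=0.02):
    """Sparse binary hypervector: each bit is 1 with small probability p."""
    return (rng.random(D) < p).astype(np.uint8)

# Dense regime: Hamming distance between unrelated vectors concentrates at 0.5.
a, b = dense_hv(), dense_hv()
print(np.count_nonzero(a != b) / D)                    # ~0.5

# Sparse regime: unrelated vectors share only a tiny fraction of their ~200 active bits.
s, t = sparse_hv(), sparse_hv()
print(np.count_nonzero(s & t), np.count_nonzero(s))    # e.g., ~4 shared vs. ~200 active
```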
- Project specification: Designing an algorithm and memory-centric architecture for European language recognition from letter n-grams; see the sketch after this list
- Input: Streams of characters
- Output: 21 classes corresponding to European languages
- Implementation: Matlab and SystemVerilog (RTL)
- Remarks: ~50% energy saving; 8.8x higher robustness against memory failures
- Link to code
- Link to paper
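A hedged sketch of a letter-trigram encoder in this style: each character gets a random bipolar hypervector, a trigram binds three characters after rotating them by position, and a text's profile bundles all of its trigrams; profiles are then compared by cosine similarity. The alphabet handling and the similarity metric are simplifications of the actual memory-centric design.

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(8)
alphabet = "abcdefghijklmnopqrstuvwxyz "
item_memory = {ch: rng.choice([-1, 1], size=D) for ch in alphabet}

def text_profile(text):
    """Bundle the bound-and-rotated hypervectors of all letter trigrams in the text."""
    text = [c for c in text.lower() if c in item_memory]
    acc = np.zeros(D)
    for i in range(len(text) - 2):
        acc += (np.roll(item_memory[text[i]], 2)
                * np.roll(item_memory[text[i + 1]], 1)
                * item_memory[text[i + 2]])
    return acc

def similarity(a, b):
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

english = text_profile("the quick brown fox jumps over the lazy dog " * 20)
query = text_profile("a lazy dog and a quick fox")
print(similarity(query, english))    # clearly above 0 when trigram statistics overlap
```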
- Project specification: A knowledge-representation architecture allowing a robot to learn arbitrarily complex, hierarchical/symbolic relationships between sensors and actuators; see the sketch after this list
- Implementation: C++
- Remarks: Despite their extreme computational simplicity, these architectures can easily be "programmed" to implement subsumption hierarchies and other rule-like behaviors in the service of interesting tasks, without explicit if/then statements or other traditional symbolic constructs
- Link to code
- Link to paper
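The sketch below shows the role–filler style of symbolic binding that such an architecture builds on: sensor/actuator roles and their values are random hypervectors, a rule-like record is the bundle of role–value bindings, and a role can be queried back out of the record by unbinding. The specific roles and values are made up for illustration.

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(9)
hv = lambda: rng.choice([-1, 1], size=D)

# Hypothetical roles and fillers for a simple mobile robot.
roles = {"bump_sensor": hv(), "light_sensor": hv(), "motor_cmd": hv()}
values = {"pressed": hv(), "bright": hv(), "reverse": hv(), "forward": hv()}

# Encode a rule-like record: bump pressed, light bright, command reverse.
record = np.sign(roles["bump_sensor"] * values["pressed"]
                 + roles["light_sensor"] * values["bright"]
                 + roles["motor_cmd"] * values["reverse"])

# Query: what is the motor command? Unbind the role and find the closest value.
query = record * roles["motor_cmd"]          # bipolar binding is its own inverse
best = max(values, key=lambda name: float(query @ values[name]))
print(best)                                  # -> "reverse"
```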