ZacharyZekaiXu's Stars
TomGeorge1234/STDP-SR
We study the place and grid cells of an RL agent that learns successor representations (SR) in compositional mazes.
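(For context, successor representations are typically learned with a simple TD rule. Below is a minimal tabular sketch of that rule on a toy state space; the variable names and the random-walk environment are illustrative assumptions, not code from this repository.)

```python
# Minimal sketch of tabular successor-representation (SR) TD learning.
# Illustrative only: names and the toy chain of states are assumptions.
import numpy as np

n_states, alpha, gamma = 25, 0.1, 0.95   # toy chain of states, learning rate, discount
M = np.zeros((n_states, n_states))       # SR matrix: M[s, s'] ~ expected discounted visits

def sr_td_update(s, s_next):
    """One TD step: M(s,.) <- M(s,.) + alpha * (onehot(s) + gamma*M(s_next,.) - M(s,.))."""
    target = np.eye(n_states)[s] + gamma * M[s_next]
    M[s] += alpha * (target - M[s])

# Random walk along the chain, purely to exercise the update rule.
s = 0
for _ in range(10_000):
    s_next = int(np.clip(s + np.random.choice([-1, 1]), 0, n_states - 1))
    sr_td_update(s, s_next)
    s = s_next
```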
juliancervos/stdp-nmnist
Repository for the master's thesis "Local Unsupervised Learning of Multimodal Event-Based Data with Spiking Neural Networks" by Julian Lopez Gordillo (MSc in Artificial Intelligence, 2019-2021).
matthewvowels1/Awesome-VAEs
A curated list of awesome work on VAEs, disentanglement, representation learning, and generative models.
daifengwanglab/scMNC
natashamjaques/MultimodalAutoencoder
Code supporting the paper "Multimodal Autoencoder: A Deep Learning Approach to Filling In Missing Sensor Data and Enabling Better Mood Prediction"
jhuebotter/SpikingVAE
Master Thesis Project of Justus Hübotter
BrainsOnBoard/paper_RPEs_in_drosophila_mb
Code and data for the manuscript "Learning with reinforcement prediction errors in a model of the Drosophila mushroom body".
amrzv/awesome-colab-notebooks
Collection of Google Colaboratory notebooks for fast and easy experiments
mehmetfdemirel/PolycrystalGraph
Code base for a graph neural network that predicts properties of polycrystalline microstructures
clear-nus/TactileSGNet
A spiking graph neural network for event-based learning
xingyul/flownet3d
FlowNet3D: Learning Scene Flow in 3D Point Clouds (CVPR 2019)
MaxChanger/awesome-point-cloud-scene-flow
A list of point cloud scene flow papers, codes and datasets.
SMohammadi89/PointView-GCN
The code and dataset will be made available here soon.
ricardodeazambuja/IJCNN2017
Short-Term Plasticity in a Liquid State Machine Biomimetic Robot Arm Controller
ricardodeazambuja/Bee
The Spiking Reservoir (Liquid State Machine - LSM) Simulator
guillaume-chevalier/Spiking-Neural-Network-SNN-with-PyTorch-where-Backpropagation-engenders-STDP
What about coding a Spiking Neural Network using an automatic differentiation framework? In SNNs there is a time axis: the network sees data unfold over time, and activation functions are replaced by spikes emitted once a pre-activation value crosses a threshold. Pre-activation values constantly decay if the neurons aren't excited enough.
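(The mechanism this description sketches, decaying pre-activations plus a spike threshold, is the leaky integrate-and-fire neuron. Here is a minimal PyTorch sketch using a surrogate gradient so backpropagation can pass through the spike; the class and parameter names are assumptions for illustration, not this repository's API.)

```python
# Minimal leaky integrate-and-fire (LIF) step in PyTorch: the membrane
# potential decays each timestep and a spike fires past a threshold.
# Illustrative sketch only, not this repository's code.
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()                       # hard threshold: spike if v > 0
    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out / (1.0 + 10.0 * v.abs())**2  # smooth surrogate for backprop

def lif_step(x, v, beta=0.9, threshold=1.0):
    """One timestep: decay the potential, integrate input, spike, soft-reset."""
    v = beta * v + x                                 # leaky integration (decay + input)
    spike = SurrogateSpike.apply(v - threshold)      # fire once the threshold is crossed
    v = v - spike * threshold                        # soft reset after a spike
    return spike, v

# Drive a toy population with random input current over time.
v = torch.zeros(4)
for t in range(20):
    spikes, v = lif_step(torch.rand(4), v)
```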
ZulunZhu/SpikingGCN
IGITUGraz/MemoryDependentComputation
Code for Limbacher, T., Özdenizci, O., & Legenstein, R. (2022). Memory-enriched computation and learning in spiking neural networks through Hebbian plasticity. arXiv preprint arXiv:2205.11276.
kamata1729/FullySpikingVAE
Official implementation of Fully Spiking Variational Autoencoder [AAAI2022]
google-deepmind/3d-shapes
This repository contains the 3D shapes dataset used in Kim, Hyunjik and Mnih, Andriy, "Disentangling by Factorising," Proceedings of the 35th International Conference on Machine Learning (ICML), 2018, to assess the disentanglement properties of unsupervised learning methods.
rhgao/ObjectFolder
ObjectFolder Dataset
event-driven-robotics/tactile_braille_reading
neural-reckoning/cosyne-tutorial-2022
Cosyne workshop tutorial 2022
comob-project/snn-sound-localization
Training spiking neural networks for sound localization
alberto-antonietti/paper_whisking
Code related to the paper: "Brain-inspired spiking neural network controller for a neurorobotic whisker system": https://doi.org/10.3389/fnbot.2022.817948
clear-nus/VT_SNN
VT-SNN
annkennedy/mushroomBody
Code for simulating and analyzing the spiking mushroom body model
gyyang/nn-brain
Tutorial codes for modeling brains with neural nets
facebookresearch/3D-Vision-and-Touch
To understand the shape of a new object, the most instinctive approach is to pick it up and inspect it with hand and eyes in tandem: touch provides high-fidelity localized information while vision provides complementary global context. In 3D shape reconstruction, however, the complementary fusion of visual and haptic modalities remains largely unexplored. In this paper, we study this problem and present an effective chart-based approach to fusing vision and touch that leverages advances in graph convolutional networks. To do so, we introduce a dataset of simulated touch and vision signals from interactions between a robotic hand and a large array of 3D objects. Our results show that (1) leveraging both vision and touch signals consistently improves over single-modality baselines, especially when the object is occluded by the hand touching it; (2) our approach outperforms alternative modality-fusion methods and benefits strongly from the proposed chart-based structure; (3) reconstruction quality improves with the number of grasps provided; and (4) the touch information not only enhances the reconstruction at the touch site but also extrapolates to its local neighborhood.
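(As a rough illustration of the general fusion pattern the abstract describes, the sketch below broadcasts a global vision feature onto mesh vertices carrying local touch features, then refines them with one graph-convolution-style step. Everything here, names included, is an assumption for illustration and not the paper's actual model.)

```python
# Toy vision-touch fusion on a mesh graph: global image context + local
# touch detail per vertex, refined by simple mean-aggregation message passing.
import torch
import torch.nn as nn

class FusionGraphLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(2 * dim, dim)

    def forward(self, vert_feats, adj):
        # Mean-aggregate neighbor features, then mix them with self features.
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        neigh = adj @ vert_feats / deg
        return torch.relu(self.lin(torch.cat([vert_feats, neigh], -1)))

dim, n_verts = 64, 100
vision_feat = torch.randn(dim)            # global image embedding (e.g. from a CNN)
touch_feats = torch.randn(n_verts, dim)   # local per-vertex touch embeddings
adj = (torch.rand(n_verts, n_verts) < 0.05).float()  # toy mesh adjacency

# Broadcast global vision context onto every vertex, keep local touch detail.
vert_feats = touch_feats + vision_feat
out = FusionGraphLayer(dim)(vert_feats, adj)  # refined per-vertex features
```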
facebookresearch/Active-3D-Vision-and-Touch
A repository for the paper Active 3D Shape Reconstruction from Vision and Touch and robotic touch simulator package.