Pinned Repositories
a-PyTorch-Tutorial-to-Object-Detection
SSD: Single Shot MultiBox Detector | a PyTorch Tutorial to Object Detection
bindsnet
Simulation of spiking neural networks (SNNs) using PyTorch.
cheetah
cs5785-fall-2019
Website link:
ebola-genomics-research-ngs
Determining genetic differences between survivors and victims of Ebola virus disease from RNA-seq data -- a genomics project (RNA alignment: STAR; feature counting; QC reporting; differential expression analysis: DESeq2)
Machine-Learning-Tutorials
Educational Scripts and Notebooks of Machine Learning Methods
peaknet4antfarm-zmq
popvae
genotype dimensionality reduction with a VAE
psocake
Spiking-Neural-Network-SNN-with-PyTorch-where-Backpropagation-engenders-STDP
What about coding a Spiking Neural Network using an automatic differentiation framework? In SNNs there is a time axis: the network sees data unfold over time, and instead of ordinary activation functions, neurons emit spikes once their pre-activation value crosses a threshold. Pre-activation values constantly fade if neurons aren't excited enough.
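The dynamics described above (a decaying pre-activation that fires a spike once it crosses a threshold) can be sketched as a minimal leaky integrate-and-fire neuron. This is an illustrative toy in plain Python, not code from the repository; the constants `decay` and `threshold` are assumptions.

```python
def lif_neuron(inputs, decay=0.5, threshold=1.0):
    """Simulate one leaky integrate-and-fire neuron over a time axis.

    Each step: the membrane potential leaks (multiplied by `decay`),
    then integrates the input current. Crossing `threshold` emits a
    spike (the "activation") and resets the potential.
    """
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = decay * potential + current  # fade, then integrate
        if potential >= threshold:
            spikes.append(1)   # spike raised past the threshold
            potential = 0.0    # reset after firing
        else:
            spikes.append(0)   # sub-threshold: potential keeps fading
    return spikes

# Weak input never accumulates past the threshold; stronger input does.
print(lif_neuron([0.3, 0.3, 0.3, 0.3]))  # no spikes
print(lif_neuron([0.6, 0.6, 0.6, 0.6]))  # spikes once, then resets
```

To make this trainable by backpropagation (as the repository's title suggests), the hard threshold would be replaced or smoothed by a differentiable surrogate.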