manwu1994
Research interests include deep learning and software/hardware implementations of energy-efficient computing.
Pinned Repositories
ramulator-pim
A fast and flexible simulation infrastructure for exploring general-purpose processing-in-memory (PIM) architectures. Ramulator-PIM combines a widely used simulator for out-of-order and in-order processors (ZSim) with Ramulator, a DRAM simulator with memory models for DDRx, LPDDRx, GDDRx, WIOx, HBMx, and HMCx. Ramulator is described in the IEEE CAL 2015 paper by Kim et al. (https://people.inf.ethz.ch/omutlu/pub/ramulator_dram_simulator-ieee-cal15.pdf). Ramulator-PIM is used in the DAC 2019 paper by Singh et al. (https://people.inf.ethz.ch/omutlu/pub/NAPEL-near-memory-computing-performance-prediction-via-ML_dac19.pdf).
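As a quick illustration of how trace-driven DRAM simulators of this kind are fed, here is a minimal sketch that writes a synthetic memory trace. The per-line `<hex-address> <R|W>` layout is an assumption about Ramulator's memory-trace mode, not something this listing confirms; check the README of the version you build.

```python
import random

# Hypothetical trace generator for a trace-driven DRAM simulator.
# The assumed line format is "<hex-addr> <R|W>"; verify against the
# Ramulator README for the build you are using.
def write_trace(path: str, n: int = 10_000, stride: int = 64,
                read_ratio: float = 0.7, seed: int = 0) -> None:
    rng = random.Random(seed)
    addr = 0
    with open(path, "w") as f:
        for _ in range(n):
            op = "R" if rng.random() < read_ratio else "W"
            f.write(f"0x{addr:x} {op}\n")  # assumed "<hex-addr> <R|W>" layout
            addr += stride                 # sequential, cache-line-sized stride
```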
8-bits-LFSR
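No description is given, but the name suggests an 8-bit linear-feedback shift register. As a reference point (not this repo's code, which is likely RTL), a minimal maximal-length 8-bit Fibonacci LFSR in Python:

```python
def lfsr8(seed: int = 0x01):
    """Maximal-length 8-bit Fibonacci LFSR, taps 8,6,5,4
    (x^8 + x^6 + x^5 + x^4 + 1); period 255 for any nonzero seed."""
    state = seed & 0xFF
    while True:
        # Feedback bit = XOR of the tapped bits (bit 8 is the MSB).
        fb = ((state >> 7) ^ (state >> 5) ^ (state >> 4) ^ (state >> 3)) & 1
        state = ((state << 1) | fb) & 0xFF
        yield state

# Sanity check: a maximal LFSR visits all 255 nonzero states before repeating.
gen = lfsr8(0xA5)
seq = [next(gen) for _ in range(255)]
assert len(set(seq)) == 255
```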
awesome-quantization-and-fixed-point-training
Neural Network Quantization & Low-Bit Fixed Point Training For Hardware-Friendly Algorithm Design
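As a minimal sketch of the core idea behind the resources collected here (not code from any listed project), symmetric uniform fake quantization, the quantize-dequantize step used in most low-bit fixed-point training schemes; names and defaults are illustrative:

```python
import torch

def fake_quantize(x: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Symmetric uniform quantize-dequantize ("fake" quantization)."""
    qmax = 2 ** (num_bits - 1) - 1                 # e.g. 127 for 8-bit
    scale = x.abs().max().clamp(min=1e-8) / qmax   # per-tensor scale
    q = (x / scale).round().clamp(-qmax, qmax)     # snap to the integer grid
    return q * scale                               # back to float
```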
caffe-twns
Implementation of Ternary Weight Networks In Caffe
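A minimal sketch of the ternarization rule from the Ternary Weight Networks paper (Li et al., 2016), written in PyTorch rather than this repo's Caffe for brevity; `ternarize` is an illustrative name:

```python
import torch

def ternarize(w: torch.Tensor) -> torch.Tensor:
    # Threshold-based ternarization from the TWN paper:
    # delta ~= 0.7 * E[|w|]; alpha = mean of |w| over weights above delta.
    delta = 0.7 * w.abs().mean()
    mask = (w.abs() > delta).float()
    alpha = (w.abs() * mask).sum() / mask.sum().clamp(min=1.0)
    return alpha * w.sign() * mask          # values in {-alpha, 0, +alpha}
```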
model-compression
Model compression based on PyTorch: (1) quantization to 16/8/4/2 bits (DoReFa; "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and to ternary/binary values (TWN/BNN/XNOR-Net); (2) pruning: normal, regular, and group-convolution channel pruning; (3) group convolution structure; (4) batch-normalization folding for quantization.
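Of the listed features, batch-normalization folding is easy to show compactly. A minimal PyTorch sketch of the inference-time identity (not this repo's implementation):

```python
import torch
from torch import nn

@torch.no_grad()
def fold_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    # y = gamma * (conv(x) - mean) / sqrt(var + eps) + beta
    # folds into:  w' = w * gamma/std,  b' = (b - mean) * gamma/std + beta
    std = (bn.running_var + bn.eps).sqrt()
    scale = bn.weight / std                       # gamma / std, per channel
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      conv.stride, conv.padding, conv.dilation, conv.groups,
                      bias=True)
    fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
    b = conv.bias if conv.bias is not None else torch.zeros_like(bn.running_mean)
    fused.bias.copy_((b - bn.running_mean) * scale + bn.bias)
    return fused
```

Folding matters for quantization because the BN scale otherwise shifts the weight distribution seen by the quantizer at inference time.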
Ternary-spiking-neural-network
Ternary spiking neural network
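A minimal sketch of the underlying neuron model, a leaky integrate-and-fire (LIF) layer driven through ternary weights; all names and constants here are illustrative, not taken from this repo:

```python
import numpy as np

def lif_forward(spikes_in: np.ndarray, w: np.ndarray,
                tau: float = 0.9, v_th: float = 1.0) -> np.ndarray:
    """One LIF layer over T timesteps.
    spikes_in: (T, n_in) binary spikes; w: (n_in, n_out), e.g. ternary."""
    T, _ = spikes_in.shape
    v = np.zeros(w.shape[1])                 # membrane potentials
    out = np.zeros((T, w.shape[1]))
    for t in range(T):
        v = tau * v + spikes_in[t] @ w       # leaky membrane integration
        fired = v >= v_th
        out[t] = fired                       # emit binary spikes
        v[fired] = 0.0                       # hard reset on spike
    return out
```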
wuman
xxxx_phd.github.io