Pinned Repositories
AEC
Acoustic echo cancellation with adaptive LMS/RLS filters
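The core of an LMS-based echo canceller can be sketched in a few lines. This is a minimal illustrative sketch, not code from the repository: an adaptive FIR filter estimates the echo of the far-end signal in the microphone signal and subtracts it. The function name and parameters are hypothetical.

```python
import numpy as np

def lms_echo_canceller(far_end, mic, filter_len=64, mu=0.01):
    """Cancel the echo of far_end from mic with an LMS adaptive FIR filter.

    Illustrative sketch only: integer-sample alignment, no double-talk
    detection or step-size normalization.
    """
    w = np.zeros(filter_len)            # adaptive filter taps
    out = np.zeros(len(mic))            # residual = echo-cancelled output
    for n in range(filter_len - 1, len(mic)):
        # most recent far-end samples first: far[n], far[n-1], ...
        x = far_end[n - filter_len + 1:n + 1][::-1]
        echo_est = w @ x                # estimated echo at time n
        e = mic[n] - echo_est           # residual (near-end speech + error)
        w += 2 * mu * e * x             # LMS tap update
        out[n] = e
    return out
```

After convergence the taps `w` approximate the echo path, so the residual contains mainly the near-end signal. RLS replaces the gradient update with a recursive least-squares gain for faster convergence at higher cost.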
android-dev-com
Information on some well-known Android developers. WeChat official account: codekk, website:
awesome-quantization-and-fixed-point-training
Neural Network Quantization & Low-Bit Fixed Point Training For Hardware-Friendly Algorithm Design
CGMM_BF
CGMM-based beamforming for ASR
cJSON
I did not write this code, but I like it.
cmockery
Fork of Google's C unit testing framework.
De-reverberation
Using an artificial neural network
Libevent
Nick's public libevent repository. The official repository is at git://levent.git.sourceforge.net/gitroot/levent/libevent
SpeechEnhancement-1
About speech enhancement
Tinyhttpd
tinyhttpd is an ultra-lightweight HTTP server in fewer than 500 lines of code. It is excellent for learning and helps you truly understand how a server program works.
WXB506's Repositories
WXB506/awesome-quantization-and-fixed-point-training
Neural Network Quantization & Low-Bit Fixed Point Training For Hardware-Friendly Algorithm Design
WXB506/admm-pruning
Prune DNN using Alternating Direction Method of Multipliers (ADMM)
WXB506/ASR_Kaldi_DNN_Chinese
Small-vocabulary Mandarin speech recognition based on Kaldi, trained with a DNN
WXB506/ASR_Theory
Speech recognition theory, papers, and slides
WXB506/awesome-semantic-segmentation
:metal: awesome-semantic-segmentation
WXB506/Beamforming-for-speech-enhancement
Simple delay-and-sum, MVDR, and CGMM-MVDR beamformers
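Delay-and-sum is the simplest of the beamformers listed above: each channel is time-shifted to align the target source across microphones, then the channels are averaged so the target adds coherently while noise adds incoherently. A minimal sketch with integer-sample delays (the function name and arguments are hypothetical, not from the repository):

```python
import numpy as np

def delay_and_sum(signals, delays, fs):
    """Average multi-channel signals after aligning them to a reference.

    signals: array of shape (channels, samples)
    delays:  per-channel arrival delay in seconds relative to the reference
    fs:      sampling rate in Hz

    Sketch only: uses integer-sample circular shifts, no fractional delays.
    """
    out = np.zeros(signals.shape[1])
    for ch, d in zip(signals, delays):
        shift = int(round(d * fs))      # delay in whole samples
        out += np.roll(ch, -shift)      # advance the channel to align it
    return out / len(signals)
```

MVDR generalizes this by choosing per-frequency weights that minimize output noise power subject to a distortionless constraint toward the target direction.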
WXB506/BERT-pytorch
PyTorch implementation of Google AI's 2018 BERT
WXB506/CTCDecoder
Connectionist Temporal Classification (CTC) decoding algorithms: best path, prefix search, beam search and token passing. Implemented in Python and OpenCL.
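Of the CTC decoding algorithms listed, best-path (greedy) decoding is the simplest: take the argmax symbol at each time step, collapse consecutive repeats, and drop blanks. A minimal sketch under those assumptions (the function name is hypothetical, not the repository's API):

```python
import numpy as np

def ctc_best_path(probs, blank=0):
    """Greedy CTC decoding.

    probs: array of shape (time_steps, num_symbols) with per-frame
           symbol probabilities; `blank` is the CTC blank index.
    Returns the decoded label sequence as a list of symbol indices.
    """
    best = np.argmax(probs, axis=1)     # most likely symbol per frame
    decoded, prev = [], None
    for s in best:
        if s != prev and s != blank:    # collapse repeats, skip blanks
            decoded.append(int(s))
        prev = s
    return decoded
```

Prefix search and beam search instead keep multiple candidate label sequences and sum probability over the alignments that map to each, which usually yields better transcriptions than the single best path.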
WXB506/dabnn
dabnn is an accelerated binary neural network inference framework for mobile platforms
WXB506/e6870
Assignments for the E6870 ASR class
WXB506/elasticsearch-spark-recommender
Use Jupyter notebooks to demonstrate how to build a recommender with Apache Spark & Elasticsearch
WXB506/FrontEnd-AEC
Acoustic echo cancellation (AEC) is a core algorithm in the processing pipeline of acoustic devices with KWS or ASR. FNLMS is used here.
WXB506/google-access-helper
Cracked version of Google Access Helper
WXB506/kaldi-decoders
Custom decoders for Kaldi
WXB506/list_of_recommender_systems
A List of Recommender Systems and Resources
WXB506/model-compression
Model compression based on PyTorch: (1) quantization: 16/8/4/2-bit (DoReFa / "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference"), ternary/binary weights (TWN/BNN/XNOR-Net); (2) pruning: normal, regular, and group convolutional channel pruning; (3) group convolution structure; (4) batch-normalization folding for quantization
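The low-bit quantization schemes above all build on uniform quantization: map floating-point weights onto a small integer grid, then scale back. A minimal symmetric per-tensor sketch in NumPy (illustrative only, not the repository's implementation):

```python
import numpy as np

def quantize_dequantize(w, bits=8):
    """Symmetric uniform fake-quantization of a weight tensor.

    Maps w onto a signed integer grid with 2**(bits-1)-1 positive levels,
    then rescales, so the result shows the rounding error a `bits`-bit
    model would incur. Sketch only: per-tensor scale, no zero-point.
    """
    qmax = 2 ** (bits - 1) - 1          # e.g. 127 for 8-bit
    scale = np.max(np.abs(w)) / qmax    # one scale for the whole tensor
    q = np.clip(np.round(w / scale), -qmax, qmax)   # integer weights
    return q * scale                    # dequantized approximation
```

Quantization-aware training inserts exactly this round-trip into the forward pass (with a straight-through gradient), so the network learns weights that survive the rounding; batch-norm folding merges BN scale/shift into the preceding convolution before quantizing.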
WXB506/neural-networks-quantization-notes
WXB506/Normalized-Quantized-LSTM
Implementation of NeurIPS 2019 paper "Normalization Helps Training of Quantized LSTM"
WXB506/professional-programming
A collection of full-stack resources for programmers.
WXB506/python-speech-enhancement
A Python library for speech enhancement
WXB506/pytorch-beginner
PyTorch tutorial for beginners
WXB506/pytorch-distributed-example
WXB506/QNNPACK
Quantized Neural Network PACKage - mobile-optimized implementation of quantized neural network operators
WXB506/rnnoise-nu
Recurrent neural network for audio noise reduction, slightly improved for general use
WXB506/seq2seq
A general-purpose encoder-decoder framework for TensorFlow
WXB506/Simple_KWS
GRU-based keyword spotting for clean environments
WXB506/SpecAugment
An implementation of SpecAugment in TensorFlow & PyTorch, introduced by Google Brain
WXB506/Speech-Enhancement
Multi-channel front-end processing for speech enhancement
WXB506/stanfordnlp
Official Stanford NLP Python Library for Many Human Languages
WXB506/TensorFlow-speech-enhancement-Chinese
Deep-learning-based speech enhancement and de-reverberation