xbm626's Stars
ohmyzsh/ohmyzsh
🙃 A delightful community-driven (with 2,400+ contributors) framework for managing your zsh configuration. Includes 300+ optional plugins (rails, git, macOS, hub, docker, homebrew, node, php, python, etc), 140+ themes to spice up your morning, and an auto-update tool that makes it easy to keep up with the latest updates from the community.
labuladong/fucking-algorithm
Solving algorithm problems is all about patterns; labuladong is all you need! English version supported! Crack LeetCode, not only how, but also why.
openai/whisper
Robust Speech Recognition via Large-Scale Weak Supervision
Olshansk/interview
Everything you need to prepare for your technical interview
microsoft/Swin-Transformer
This is an official implementation for "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows".
wenet-e2e/wenet
Production First and Production Ready End-to-End Speech Recognition Toolkit
astooke/rlpyt
Reinforcement Learning in PyTorch
cgpotts/cs224u
Code for Stanford CS224u
mit-han-lab/once-for-all
[ICLR 2020] Once for All: Train One Network and Specialize it for Efficient Deployment
mit-han-lab/proxylessnas
[ICLR 2019] ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware
wyharveychen/CloserLookFewShot
Source code for the ICLR'19 paper "A Closer Look at Few-shot Classification"
k2-fsa/icefall
huggingface/naacl_transfer_learning_tutorial
Repository of code for the tutorial on Transfer Learning in NLP held at NAACL 2019 in Minneapolis, MN, USA
google-research/nasbench
NASBench: A Neural Architecture Search Dataset and Benchmark
mit-han-lab/lite-transformer
[ICLR 2020] Lite Transformer with Long-Short Range Attention
hirofumi0810/neural_sp
End-to-end ASR/LM implementation with PyTorch
zhijian-liu/torchprofile
A general and accurate MACs / FLOPs profiler for PyTorch models
microsoft/Focal-Transformer
[NeurIPS 2021 Spotlight] Official code for "Focal Self-attention for Local-Global Interactions in Vision Transformers"
mit-han-lab/amc
[ECCV 2018] AMC: AutoML for Model Compression and Acceleration on Mobile Devices
google-research/l2p
Learning to Prompt (L2P) for Continual Learning @ CVPR22 and DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning @ ECCV22
mit-han-lab/haq
[CVPR 2019, Oral] HAQ: Hardware-Aware Automated Quantization with Mixed Precision
mit-han-lab/hardware-aware-transformers
[ACL'20] HAT: Hardware-Aware Transformers for Efficient Natural Language Processing
kssteven418/Squeezeformer
[NeurIPS'22] Squeezeformer: An Efficient Transformer for Automatic Speech Recognition
renqianluo/NAO_pytorch
PyTorch implementation of Neural Architecture Optimization
j-min/MoChA-pytorch
PyTorch Implementation of "Monotonic Chunkwise Attention" (ICLR 2018)
asappresearch/sew
kalviny/IMTA
igolan/bgd
Implementation of Bayesian Gradient Descent
Cyril9227/Keras_AttentiveNormalization
Unofficial Keras implementation of the paper "Attentive Normalization"
nmasse/Context-Dependent-Gating
Algorithm to alleviate catastrophic forgetting in neural networks by gating hidden units