yuechen91's Stars
facebookresearch/metaseq
Repo for external large-scale work
hugo2046/QuantsPlaybook
Quantitative research: replications of sell-side (brokerage) financial-engineering research reports
facioquo/stock-indicators-python
Stock Indicators for Python. Maintained by @LeeDongGeon1996
LastAncientOne/Stock_Analysis_For_Quant
Various Types of Stock Analysis in Excel, Matlab, Power BI, Python, R, and Tableau
tradytics/surpriver
Find big moving stocks before they move using machine learning and anomaly detection
facebookresearch/pytorch_GAN_zoo
A mix of GAN implementations including progressive growing
Lornatang/tf-gans
TensorFlow implementations of the most basic (vanilla) GANs
lukemelas/pytorch-pretrained-gans
Pretrained GANs in PyTorch: StyleGAN2, BigGAN, BigBiGAN, SAGAN, SNGAN, SelfCondGAN, and more
VITA-Group/TransGAN
[NeurIPS 2021] "TransGAN: Two Pure Transformers Can Make One Strong GAN, and That Can Scale Up", Yifan Jiang, Shiyu Chang, Zhangyang Wang
SHI-Labs/Compact-Transformers
Escaping the Big Data Paradigm with Compact Transformers, 2021 (Train your Vision Transformers in 30 mins on CIFAR-10 with a single GPU!)
huyvnphan/PyTorch_CIFAR10
Pretrained TorchVision models on CIFAR10 dataset (with weights)
leimao/PyTorch-Pruning-Example
PyTorch Pruning Example
Ceyron/machine-learning-and-simulation
All the handwritten notes 📝 and source code files 🖥️ used in my YouTube Videos on Machine Learning & Simulation (https://www.youtube.com/channel/UCh0P7KwJhuQ4vrzc3IRuw4Q)
PengchaoHan/Adaptive_Gradient_Sparsification_FL
Adaptive gradient sparsification for efficient federated learning: an online learning approach
kiddyboots216/CommEfficient
PyTorch for benchmarking communication-efficient distributed SGD optimization algorithms
jiangyuang/PruneFL
Code repository for the paper "Model Pruning Enables Efficient Federated Learning on Edge Devices"
chuanqi305/LeNet5
Pure numpy implementation of LeNet5, to help you understand how CNN works.
karakusc/horovod
Distributed training framework for TensorFlow, Keras, and PyTorch.
fengbintu/Neural-Networks-on-Silicon
This is originally a collection of papers on neural network accelerators. Now it's more like my selection of research on deep learning and computer architecture.
charleslipku/LotteryFL
LotteryFL: Empower Edge Intelligence with Personalized and Communication-Efficient Federated Learning (2021 IEEE/ACM Symposium on Edge Computing)
FedML-AI/FedML
FEDML - The unified and scalable ML library for large-scale distributed training, model serving, and federated learning. FEDML Launch, a cross-cloud scheduler, further enables running any AI jobs on any GPU cloud or on-premise cluster. Built on this library, TensorOpera AI (https://TensorOpera.ai) is your generative AI platform at scale.
lucfra/LDS-GNN
Learning Discrete Structures for Graph Neural Networks (TensorFlow implementation)
microsoft/tf-gnn-samples
TensorFlow implementations of Graph Neural Networks
VITA-Group/L2-GCN
[CVPR 2020] L2-GCN: Layer-Wise and Learned Efficient Training of Graph Convolutional Networks
diwu1990/uSystolic-Sim
A systolic array simulator for multi-cycle MACs and varying-byte words, with the paper accepted to HPCA 2022.
Mayank-Parasar/gem5-network-topologies
adamsolomou/SC-DNN
Stochastic Computing for Deep Neural Networks
leimao/PyTorch-Static-Quantization
PyTorch Static Quantization Example
Sanjana7395/static_quantization
Post-training static quantization using ResNet18 architecture
ZihengWang-2/CNN-cifar10