rog93's Stars
dongzhuoyao/admm_nn
Python implementation of "Training Neural Networks Without Gradients: A Scalable ADMM Approach"
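For context, the core idea of the paper is to replace gradient steps with closed-form subproblem solves. A minimal sketch of the layer-wise weight update, assuming NumPy and illustrative variable names (`z`, `a_prev`) that are not taken from the repo:

```python
# Minimal sketch (not the repo's code): with the previous layer's activations
# a_prev held fixed, the ADMM weight subproblem for a layer reduces to an
# ordinary least-squares fit z ~ W @ a_prev, solved in closed form.
import numpy as np

def admm_weight_update(z, a_prev):
    """Solve min_W ||z - W @ a_prev||_F^2 via the Moore-Penrose pseudoinverse.

    z:      (n_out, n_samples) target pre-activations for this layer
    a_prev: (n_in,  n_samples) activations of the previous layer
    """
    return z @ np.linalg.pinv(a_prev)
```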
numpy/numpy
The fundamental package for scientific computing with Python.
frankxu2004/gpt-neox
An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library.
VHellendoorn/Code-LMs
Guide to using pre-trained large language models of source code
NVIDIA/Megatron-LM
Ongoing research training transformer models at scale
facebookresearch/fairseq
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
microsoft/SPACH
volcengine/veGiantModel
chengyangfu/retinamask
RetinaMask
hpcaitech/ColossalAI
Making large AI models cheaper, faster and more accessible
Oneflow-Inc/oneflow
OneFlow is a deep learning framework designed to be user-friendly, scalable and efficient.
OpenPPL/ppq
PPL Quantization Tool (PPQ) is a powerful offline neural network quantization toolkit.
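As a reminder of what offline (post-training) quantization boils down to, here is a generic symmetric per-tensor INT8 quantize/dequantize sketch in PyTorch; it is not PPQ's API, and the function names are made up for illustration:

```python
import torch

def quantize_int8(x: torch.Tensor):
    """Symmetric per-tensor INT8 quantization (illustrative, not PPQ's API)."""
    scale = x.abs().max() / 127.0
    q = torch.clamp(torch.round(x / scale), -128, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Map the INT8 tensor back to float using the stored scale."""
    return q.to(torch.float32) * scale
```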
njuhope/cuda_sgemm
zhilin007/FFA-Net
FFA-Net: Feature Fusion Attention Network for Single Image Dehazing
facebookresearch/ConvNeXt
Code release for ConvNeXt model
yunxiaoshi/Neural-IMage-Assessment
A PyTorch Implementation of Neural IMage Assessment
truskovskiyk/nima.pytorch
NIMA: Neural IMage Assessment
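Both NIMA implementations train against a distribution of human scores rather than a single label; the paper uses a normalized Earth Mover's Distance loss. A minimal PyTorch sketch of that loss (not copied from either repo):

```python
import torch

def emd_loss(p: torch.Tensor, q: torch.Tensor, r: int = 2) -> torch.Tensor:
    """Normalized Earth Mover's Distance between score distributions.

    p, q: (batch, num_buckets) probability distributions over score buckets
          (e.g. the 1-10 rating histogram used by NIMA).
    """
    cdf_p = torch.cumsum(p, dim=-1)
    cdf_q = torch.cumsum(q, dim=-1)
    emd = (cdf_p - cdf_q).abs().pow(r).mean(dim=-1).pow(1.0 / r)
    return emd.mean()
```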
onnx/onnx-tensorrt
ONNX-TensorRT: TensorRT backend for ONNX
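The parser this project provides is what lets TensorRT consume ONNX graphs; a typical build flow with the standard TensorRT Python API (TensorRT 8+; the model path and settings are placeholders) looks roughly like this:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# "model.onnx" is a placeholder path.
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))

config = builder.create_builder_config()
serialized_engine = builder.build_serialized_network(network, config)
```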
shouxieai/tensorRT_Pro
C++ inference library built on TensorRT integration
open-mmlab/mmdeploy
OpenMMLab Model Deployment Framework
pengzhiliang/MAE-pytorch
Unofficial PyTorch implementation of "Masked Autoencoders Are Scalable Vision Learners"
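The heart of MAE is random patch masking before the encoder; a condensed sketch of that step (simplified from the reference logic, tensor names are illustrative):

```python
import torch

def random_masking(x: torch.Tensor, mask_ratio: float = 0.75):
    """Keep a random subset of patch tokens, as in MAE.

    x: (batch, num_patches, dim) patch embeddings.
    Returns the kept tokens and the shuffle indices needed to restore order.
    """
    B, N, D = x.shape
    len_keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N, device=x.device)      # one random score per patch
    ids_shuffle = torch.argsort(noise, dim=1)      # random permutation per sample
    ids_keep = ids_shuffle[:, :len_keep]
    x_kept = torch.gather(x, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    return x_kept, ids_shuffle
```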
open-mmlab/mmgeneration
MMGeneration is a powerful toolkit for generative models, based on PyTorch and MMCV.
SwinTransformer/Video-Swin-Transformer
Official implementation of "Video Swin Transformer".
bes-dev/pytorch_clip_guided_loss
A simple library that implements CLIP guided loss in PyTorch.
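The underlying idea is small: embed the image and a text prompt with CLIP and penalize their cosine distance. A sketch using the openai/CLIP package (not this library's API; the model name and prompt are placeholders):

```python
import torch
import clip  # pip install git+https://github.com/openai/CLIP.git

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

def clip_guided_loss(image: torch.Tensor, prompt: str) -> torch.Tensor:
    """1 - cosine similarity between CLIP image and text embeddings.

    image: CLIP-preprocessed image batch of shape (B, 3, 224, 224).
    """
    text = clip.tokenize([prompt]).to(device)
    img_emb = model.encode_image(image)
    txt_emb = model.encode_text(text)
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    return 1.0 - (img_emb * txt_emb).sum(dim=-1).mean()
```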
DanceTrack/DanceTrack
[CVPR2022] DanceTrack: Multiple Object Tracking in Uniform Appearance and Diverse Motion
ray-project/ray
Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
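The core Ray primitive is turning a Python function into a remote task; the canonical minimal example:

```python
import ray

ray.init()  # starts a local Ray runtime if no cluster address is given

@ray.remote
def square(x):
    return x * x

# Launch tasks in parallel and collect the results.
futures = [square.remote(i) for i in range(8)]
print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]
```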
PeterL1n/RobustVideoMatting
Robust Video Matting in PyTorch, TensorFlow, TensorFlow.js, ONNX, CoreML!
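The model is recurrent, so inference carries hidden state from frame to frame. A hedged usage sketch based on the project README (the torch.hub entrypoint name and keyword arguments are assumptions and may differ between releases):

```python
import torch

# Entrypoint name taken from the README; treat it as an assumption.
model = torch.hub.load("PeterL1n/RobustVideoMatting", "mobilenetv3").eval()

rec = [None] * 4                            # initial recurrent states
src = torch.rand(1, 3, 1080, 1920)          # one RGB frame in [0, 1]
with torch.no_grad():
    fgr, pha, *rec = model(src, *rec, downsample_ratio=0.25)
# fgr: foreground prediction, pha: alpha matte; pass `rec` into the next frame.
```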
PeterL1n/BackgroundMattingV2
Real-Time High-Resolution Background Matting
microsoft/NUWA
A unified 3D Transformer Pipeline for visual synthesis
666DZY666/micronet
micronet: a model compression and deployment library. Compression: (1) quantization: quantization-aware training (QAT) with high-bit (>2b) methods (DoReFa; "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference") and low-bit (≤2b) ternary/binary methods (TWN, BNN, XNOR-Net), plus 8-bit post-training quantization (PTQ) via TensorRT; (2) pruning: normal, regular, and group-convolution channel pruning; (3) group convolution structure; (4) batch-normalization fusion for quantization. Deployment: TensorRT with FP32/FP16/INT8 (PTQ calibration), op adaptation (upsample), and dynamic shape support.
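One of the listed features, batch-normalization fusion, is easy to show in isolation; a generic PyTorch sketch (not micronet's code) that folds a BatchNorm2d into the preceding Conv2d for inference:

```python
import torch
import torch.nn as nn

def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold a BatchNorm2d into the preceding Conv2d (inference only)."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels, conv.kernel_size,
                      conv.stride, conv.padding, conv.dilation, conv.groups,
                      bias=True)
    w = conv.weight.clone()
    b = conv.bias.clone() if conv.bias is not None else torch.zeros(conv.out_channels)
    with torch.no_grad():
        std = torch.sqrt(bn.running_var + bn.eps)
        # y = gamma * (conv(x) - mean) / std + beta  ==  conv'(x) with rescaled
        # weights and a shifted bias, so BN disappears at inference time.
        fused.weight.copy_(w * (bn.weight / std).reshape(-1, 1, 1, 1))
        fused.bias.copy_((b - bn.running_mean) / std * bn.weight + bn.bias)
    return fused
```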