Pinned Repositories
annotated_deep_learning_paper_implementations
🧑🏫 60 implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), gans (cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠
cpl
CPL: Weakly Supervised Temporal Sentence Grounding with Gaussian-based Contrastive Proposal Learning
CudaSteps
A CUDA learning journey based on the book *CUDA Programming: Fundamentals and Practice* (by Fan Zheyong, 樊哲勇).
DeepLearningSystem
Deep Learning System core principles introduction.
DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
docker-pytorch
A Docker image for PyTorch
FasterTransformer
Transformer related optimization, including BERT, GPT
generative-models
Generative Models by Stability AI
GRiT
GRiT: A Generative Region-to-text Transformer for Object Understanding (https://arxiv.org/abs/2212.00280)
JCAR-Competition
Kinova robot grasp for JCAR Competition
freeman-1995's Repositories
freeman-1995/JCAR-Competition
Kinova robot grasp for JCAR Competition
freeman-1995/annotated_deep_learning_paper_implementations
🧑🏫 60 implementations/tutorials of deep learning papers with side-by-side notes 📝; including transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, sophia, ...), gans (cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠
freeman-1995/cpl
CPL: Weakly Supervised Temporal Sentence Grounding with Gaussian-based Contrastive Proposal Learning
freeman-1995/CudaSteps
A CUDA learning journey based on the book *CUDA Programming: Fundamentals and Practice* (by Fan Zheyong, 樊哲勇).
freeman-1995/DeepLearningSystem
Deep Learning System core principles introduction.
freeman-1995/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
freeman-1995/docker-pytorch
A Docker image for PyTorch
freeman-1995/FasterTransformer
Transformer related optimization, including BERT, GPT
freeman-1995/generative-models
Generative Models by Stability AI
freeman-1995/GRiT
GRiT: A Generative Region-to-text Transformer for Object Understanding (https://arxiv.org/abs/2212.00280)
freeman-1995/GroundingDINO
The official implementation of "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"
freeman-1995/LAVIS
LAVIS - A One-stop Library for Language-Vision Intelligence
freeman-1995/LLaMA-Adapter
Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
freeman-1995/LLMSurvey
The official GitHub page for the survey paper "A Survey of Large Language Models".
freeman-1995/Megatron-DeepSpeed
Ongoing research training transformer language models at scale, including: BERT & GPT-2
freeman-1995/ru-dalle
Generate images from texts. In Russian
freeman-1995/scenic
Scenic: A Jax Library for Computer Vision Research and Beyond
freeman-1995/segment-anything
The repository provides code for running inference with the SegmentAnything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
freeman-1995/shikra
freeman-1995/stablediffusion
High-Resolution Image Synthesis with Latent Diffusion Models
freeman-1995/transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
freeman-1995/tuning_playbook
A playbook for systematically maximizing the performance of deep learning models.
freeman-1995/xtuner
XTuner is a toolkit for efficiently fine-tuning LLM