Pinned Repositories
Emu
Emu Series: Generative Multimodal Models from BAAI
EVA
EVA Series: Visual Representation Fantasies from BAAI
Emu
Emu: An Open Multimodal Generalist
EVA
Exploring the Limits of Masked Visual Representation Learning at Scale (https://arxiv.org/abs/2211.07636)
FlagAI
FlagAI (Fast LArge-scale General AI models) is a fast, easy-to-use, and extensible toolkit for large-scale models.
LOVEU_TRACK1_TOP3_SUBMISSION
TOP3 Submission to LOVEU Challenge 2021 Track1
Megatron-LLaMA
Best practice for training LLaMA models in Megatron-LM
Open-Sora
Building your own video generation model like OpenAI's Sora
stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
TADPOLE-ECE5970
machine learning with biomedical data
Quan-Sun's Repositories
Quan-Sun/TADPOLE-ECE5970
machine learning with biomedical data
Quan-Sun/Open-Sora
Building your own video generation model like OpenAI's Sora
Quan-Sun/stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
Quan-Sun/Emu
Emu: An Open Multimodal Generalist
Quan-Sun/EVA
Exploring the Limits of Masked Visual Representation Learning at Scale (https://arxiv.org/abs/2211.07636)
Quan-Sun/FlagAI
FlagAI (Fast LArge-scale General AI models) is a fast, easy-to-use, and extensible toolkit for large-scale models.
Quan-Sun/LOVEU_TRACK1_TOP3_SUBMISSION
TOP3 Submission to LOVEU Challenge 2021 Track1
Quan-Sun/Megatron-LLaMA
Best practice for training LLaMA models in Megatron-LM
Quan-Sun/minGPT
A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training
Quan-Sun/Open-Assistant
Quan-Sun/open_clip
An open source implementation of CLIP.
Quan-Sun/OpenBilibli
Quan-Sun/Quan-Sun
Quan-Sun/RAM-multiprocess-dataloader
Demystify RAM Usage in Multi-Process Data Loaders
Quan-Sun/transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
Quan-Sun/vit-pytorch
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch