Pinned Repositories
awesome-detection-transformer
A collection of papers on Transformers for detection and segmentation. Awesome Detection Transformer for Computer Vision (CV)
ControlNet
Let us control diffusion models
deep-learning-for-image-processing
Deep learning for image processing, including classification, object detection, etc.
detr
End-to-End Object Detection with Transformers
detrex
detrex is a research platform for Transformer-based Instance Recognition algorithms including DETR (ECCV 2020), Deformable-DETR (ICLR 2021), Conditional-DETR (ICCV 2021), DAB-DETR (ICLR 2022), DN-DETR (CVPR 2022), DINO (ICLR 2023), H-DETR (CVPR 2023), MaskDINO (CVPR 2023), etc.
evals
Evals is a framework for evaluating OpenAI models and an open-source registry of benchmarks.
fairseq
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
flash-attention
Fast and memory-efficient exact attention
torchscale
Transformers at any scale
unilm
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
ustcwhy's Repositories
ustcwhy/unilm
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
ustcwhy/awesome-detection-transformer
A collection of papers on Transformers for detection and segmentation. Awesome Detection Transformer for Computer Vision (CV)
ustcwhy/ControlNet
Let us control diffusion models
ustcwhy/deep-learning-for-image-processing
Deep learning for image processing, including classification, object detection, etc.
ustcwhy/detr
End-to-End Object Detection with Transformers
ustcwhy/detrex
detrex is a research platform for Transformer-based Instance Recognition algorithms including DETR (ECCV 2020), Deformable-DETR (ICLR 2021), Conditional-DETR (ICCV 2021), DAB-DETR (ICLR 2022), DN-DETR (CVPR 2022), DINO (ICLR 2023), H-DETR (CVPR 2023), MaskDINO (CVPR 2023), etc.
ustcwhy/evals
Evals is a framework for evaluating OpenAI models and an open-source registry of benchmarks.
ustcwhy/fairseq
Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
ustcwhy/flash-attention
Fast and memory-efficient exact attention
ustcwhy/FlexGen
Running large language models such as OPT-175B/GPT-3 on a single GPU, with a focus on high-throughput large-batch generation.
ustcwhy/LAVIS
LAVIS - A One-stop Library for Language-Vision Intelligence
ustcwhy/mae
PyTorch implementation of MAE https://arxiv.org/abs/2111.06377
ustcwhy/MT-Reading-List
A machine translation reading list maintained by Tsinghua Natural Language Processing Group
ustcwhy/Scene-Graph-Benchmark.pytorch
A new codebase for popular Scene Graph Generation methods (2020). Visualization & scene graph extraction on custom images/datasets are provided. It is also a PyTorch implementation of the paper "Unbiased Scene Graph Generation from Biased Training" (CVPR 2020)
ustcwhy/torchscale
Transformers at any scale
ustcwhy/gpt-fast
Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python.
ustcwhy/Grounded-Segment-Anything
Marrying Grounding DINO with Segment Anything & Stable Diffusion - Detect, Segment, and Generate Anything with Text Inputs
ustcwhy/JARVIS
JARVIS, a system to connect LLMs with the ML community
ustcwhy/llama
Inference code for LLaMA models
ustcwhy/LMFlow
An extensible toolkit for finetuning and inference of large foundation models. Large Model for All.
ustcwhy/LMOps
General technology for enabling AI capabilities with LLMs and MLLMs
ustcwhy/Metaworld
Collections of robotics environments geared towards benchmarking multi-task and meta reinforcement learning
ustcwhy/nebullvm
Plug-and-play modules to optimize the performance of your AI systems 🚀
ustcwhy/OpenChatKit
ustcwhy/taichi
Productive & portable high-performance programming in Python.
ustcwhy/TransformerEngine
A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization in both training and inference.
ustcwhy/transformers
🤗 Transformers: State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX.
ustcwhy/ustcwhy.github.io
GitHub Pages template for academic personal websites, forked from mmistakes/minimal-mistakes
ustcwhy/VideoMAE
[NeurIPS 2022 Spotlight] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training
ustcwhy/WorkingTime