0uMuMu0's Stars
vllm-project/vllm
A high-throughput and memory-efficient inference and serving engine for LLMs
ray-project/ray
Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
openai/gym
A toolkit for developing and comparing reinforcement learning algorithms.
huggingface/peft
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
IDEA-Research/Grounded-Segment-Anything
Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment, and Generate Anything
mlfoundations/open_clip
An open source implementation of CLIP.
artidoro/qlora
QLoRA: Efficient Finetuning of Quantized LLMs
huggingface/text-generation-inference
Large Language Model Text Generation Inference
NVIDIA/TensorRT-LLM
TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorRT-LLM also contains components to create Python and C++ runtimes that execute those TensorRT engines.
OpenGVLab/InternVL
[CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model approaching GPT-4o's performance.
bitsandbytes-foundation/bitsandbytes
Accessible large language models via k-bit quantization for PyTorch.
facontidavide/PlotJuggler
The Time Series Visualization Tool that you deserve.
OpenDriveLab/UniAD
[CVPR 2023 Best Paper Award] Planning-oriented Autonomous Driving
LLaVA-VL/LLaVA-NeXT
xinyu1205/recognize-anything
Open-source and strong foundation image recognition models.
mit-han-lab/llm-awq
[MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration
google-research/big_vision
Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more.
danijar/dreamerv3
Mastering Diverse Domains through World Models
Computer-Vision-in-the-Wild/CVinW_Readings
A collection of papers on the topic of "Computer Vision in the Wild (CVinW)"
OpenGVLab/OmniQuant
[ICLR2024 spotlight] OmniQuant is a simple and powerful quantization technique for LLMs.
SqueezeAILab/SqueezeLLM
[ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization
opendilab/InterFuser
[CoRL 2022] InterFuser: Safety-Enhanced Autonomous Driving Using Interpretable Sensor Fusion Transformer
mit-han-lab/qserve
QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving
NM512/dreamerv3-torch
Implementation of DreamerV3 in PyTorch.
WisconsinAIVision/ViP-LLaVA
[CVPR2024] ViP-LLaVA: Making Large Multimodal Models Understand Arbitrary Visual Prompts
ikostrikov/rlpd
j96w/MimicPlay
"MimicPlay: Long-Horizon Imitation Learning by Watching Human Play" code repository
RupertLuo/Valley
The official repository of "Video assistant towards large language model makes everything easy"
OpenDriveLab/DriveAdapter
[ICCV 2023 Oral] A New Paradigm for End-to-end Autonomous Driving to Alleviate Causal Confusion
SJTU-ReArch-Group/Paper-Reading-List