yunocchi-love's Stars
Timothyxxx/WorldModelPapers
A collection of papers tracing the line of work that started from World Models.
chaytonmin/Awesome-Papers-World-Models-Autonomous-Driving
Awesome Papers about World Models in Autonomous Driving
opendilab/awesome-model-based-RL
A curated list of awesome model-based RL resources (continually updated)
NVIDIA/Cosmos
Cosmos is a world-model development platform consisting of world foundation models, tokenizers, and a video-processing pipeline, built to accelerate Physical AI development at robotics and AV labs. The repository lets end users run the Cosmos models, run inference scripts, and generate videos.
Genesis-Embodied-AI/genesis-doc
turingmotors/ACT-Bench
ACT-Bench: evaluating the action fidelity of world models for autonomous driving
Genesis-Embodied-AI/Genesis
A generative world for general-purpose robotics & embodied AI learning.
Hajime-Y/reasoning-model
facebookresearch/flow_matching
A PyTorch library for implementing flow matching algorithms, featuring continuous and discrete flow matching implementations. It includes practical examples for both text and image modalities.
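The library ships its own path and solver abstractions; as a quick orientation, here is a minimal plain-PyTorch sketch of the continuous flow matching objective itself (linear interpolation path, velocity target x1 − x0). The model class and names are illustrative, not the library's own API.

```python
import torch
import torch.nn as nn

# Minimal continuous flow matching training step (illustrative, not the
# flow_matching library's API): sample t ~ U(0, 1), interpolate
# x_t = (1 - t) * x0 + t * x1, and regress the model onto the velocity x1 - x0.
class VelocityNet(nn.Module):  # toy stand-in for a real velocity field
    def __init__(self, dim: int = 2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 128), nn.SiLU(), nn.Linear(128, dim))

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x_t, t], dim=-1))

model = VelocityNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x1 = torch.randn(256, 2)          # data samples (toy)
x0 = torch.randn_like(x1)         # noise samples
t = torch.rand(x1.size(0), 1)     # random times in [0, 1]
x_t = (1 - t) * x0 + t * x1       # linear interpolation path
target = x1 - x0                  # constant velocity along that path

loss = ((model(x_t, t) - target) ** 2).mean()
loss.backward()
optimizer.step()
```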
autowarefoundation/autoware
Autoware - the world's leading open-source software project for autonomous driving
FoundationVision/VAR
[NeurIPS 2024 Best Paper][GPT beats diffusion🔥] [scaling laws in visual generation📈] Official impl. of "Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction". An *ultra-simple, user-friendly yet state-of-the-art* codebase for autoregressive image generation!
IDEA-FinAI/LLM-as-a-Judge
pytorch/vision
Datasets, Transforms and Models specific to Computer Vision
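A typical torchvision usage pattern: load a pretrained classifier together with the preprocessing transforms that match its training recipe, using the weights-enum API of recent releases.

```python
import torch
from torchvision import models

# Pretrained ResNet-50 plus its matching preprocessing (torchvision >= 0.13).
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

# Classify a dummy image tensor of shape (C, H, W) with values in [0, 1].
image = torch.rand(3, 224, 224)
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))
print(weights.meta["categories"][logits.argmax().item()])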
pytorch/torchtune
PyTorch native post-training library
pytorch/torchtitan
A PyTorch native library for large model training
pytorch/ao
PyTorch native quantization and sparsity for training and inference
databricks/megablocks
byeongjun-park/Switch-DiT
[ECCV 2024] Official PyTorch implementation of "Switch Diffusion Transformer: Synergizing Denoising Tasks with Sparse Mixture-of-Experts"
kyegomez/SwitchTransformers
Implementation of Switch Transformers from the paper: "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity"
lucidrains/mixture-of-experts
A PyTorch implementation of Sparsely-Gated Mixture of Experts, for massively increasing the parameter count of language models
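The three mixture-of-experts repositories above share one core idea. As a compact illustration (not the API of any of them), here is a top-1 "switch" routing layer in plain PyTorch: a router picks one expert per token, and the output is the chosen expert's output scaled by the router probability.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative top-1 ("switch") mixture-of-experts layer in plain PyTorch.
# Not the API of Switch-DiT, SwitchTransformers, or lucidrains' package --
# just the core routing idea they build on.
class SwitchFFN(nn.Module):
    def __init__(self, dim: int, hidden: int, num_experts: int):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, dim). Route each token to its top-1 expert.
        probs = F.softmax(self.router(x), dim=-1)   # (tokens, experts)
        top_p, top_idx = probs.max(dim=-1)          # (tokens,)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = top_idx == e
            if mask.any():
                # Scale by the router probability so the gate receives gradient.
                out[mask] = expert(x[mask]) * top_p[mask].unsqueeze(-1)
        return out

layer = SwitchFFN(dim=64, hidden=256, num_experts=4)
tokens = torch.randn(10, 64)
print(layer(tokens).shape)  # torch.Size([10, 64])
```

Real implementations add an auxiliary load-balancing loss and per-expert capacity limits; those are omitted here for brevity.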
ameliawong1996/From_News_to_Forecast
Repository for the NeurIPS 2024 paper "From News to Forecast: Integrating Event Analysis in LLM-based Time Series Forecasting with Reflection"
huggingface/accelerate
🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, with automatic mixed precision (including fp8) and easy-to-configure FSDP and DeepSpeed support
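The core pattern is a few-line change to an existing training loop, using the documented Accelerator API: wrap model, optimizer, and dataloader with prepare(), then replace loss.backward() with accelerator.backward(loss).

```python
import torch
from accelerate import Accelerator

# Device placement, DDP/FSDP, and mixed precision follow the configuration
# chosen via `accelerate config` / `accelerate launch`.
accelerator = Accelerator()

model = torch.nn.Linear(10, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataset = torch.utils.data.TensorDataset(torch.randn(64, 10), torch.randn(64, 1))
dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)

model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, targets in dataloader:
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(inputs), targets)
    accelerator.backward(loss)  # replaces loss.backward()
    optimizer.step()
```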
NVIDIA/apex
A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch
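For reference, the classic apex.amp mixed-precision pattern looks like the sketch below (the project has since pointed users toward the native torch AMP APIs, but this is the interface Apex is best known for). Requires Apex built with CUDA extensions.

```python
import torch
from apex import amp  # requires NVIDIA Apex installed with CUDA extensions

# Classic apex.amp pattern: initialize model/optimizer at an opt_level,
# then scale the loss before backward.
model = torch.nn.Linear(10, 1).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
model, optimizer = amp.initialize(model, optimizer, opt_level="O1")

inputs = torch.randn(8, 10, device="cuda")
targets = torch.randn(8, 1, device="cuda")

loss = torch.nn.functional.mse_loss(model(inputs), targets)
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()
optimizer.step()
```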
microsoft/unilm
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
iShohei220/adopt
Official Implementation of "ADOPT: Modified Adam Can Converge with Any β2 with the Optimal Rate"
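Assuming the official repo exposes an ADOPT class following the standard torch.optim.Optimizer constructor convention (an assumption; check the README), usage would be a drop-in swap for Adam:

```python
import torch
from adopt import ADOPT  # assumed import path from the official repo

# Hypothetical drop-in replacement for torch.optim.Adam, assuming a
# standard torch.optim-style constructor.
model = torch.nn.Linear(10, 1)
optimizer = ADOPT(model.parameters(), lr=1e-3)

inputs, targets = torch.randn(8, 10), torch.randn(8, 1)
loss = torch.nn.functional.mse_loss(model(inputs), targets)
loss.backward()
optimizer.step()
```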
facebookresearch/lingua
Meta Lingua: a lean, efficient, and easy-to-hack codebase to research LLMs.
jdeschena/sdtt
SDTT: a simple and effective distillation method for discrete diffusion models
siliconflow/onediff
OneDiff: An out-of-the-box acceleration library for diffusion models.
pytorch-labs/gpt-fast
Simple and efficient PyTorch-native transformer text generation in <1000 lines of Python.
NVIDIA-Merlin/dataloader
The Merlin dataloader lets you rapidly load tabular data for training deep learning models with TensorFlow, PyTorch, or JAX