yuxuanfanOrion's Stars
shap/shap
A game theoretic approach to explain the output of any machine learning model.
OpenGVLab/LLaMA-Adapter
[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
sz3/libcimbar
Optimized implementation for color-icon-matrix barcodes
microsoft/LMOps
General technology for enabling AI capabilities w/ LLMs and MLLMs
HarderThenHarder/transformers_tasks
⭐️ NLP Algorithms with transformers lib. Supporting Text-Classification, Text-Generation, Information-Extraction, Text-Matching, RLHF, SFT etc.
hzwer/WritingAIPaper
Writing AI Conference Papers: A Handbook for Beginners
atong01/conditional-flow-matching
TorchCFM: a Conditional Flow Matching library
ActiveVisionLab/Awesome-LLM-3D
Awesome-LLM-3D: a curated list of resources on Multi-modal Large Language Models in the 3D world
lxtGH/OMG-Seg
OMG-LLaVA and OMG-Seg codebase [CVPR-24 and NeurIPS-24]
stepjam/RLBench
A large-scale benchmark and learning environment.
zjunlp/KnowledgeEditingPapers
Must-read Papers on Knowledge Editing for Large Language Models.
shivammehta25/Matcha-TTS
[ICASSP 2024] 🍵 Matcha-TTS: A fast TTS architecture with conditional flow matching
david-cortes/contextualbandits
Python implementations of contextual bandits algorithms
OpenBMB/IoA
An open-source framework for collaborative AI agents, enabling diverse, distributed agents to team up and tackle complex tasks through internet-like connectivity.
huangwl18/ReKep
ReKep: Spatio-Temporal Reasoning of Relational Keypoint Constraints for Robotic Manipulation
RoboFlamingo/RoboFlamingo
Code for RoboFlamingo
ruizheliUOA/Awesome-Interpretability-in-Large-Language-Models
This repository collects all relevant resources about interpretability in LLMs
RayYoh/Awesome-Robot-Learning
This repo contains a curated list of robot learning (mainly for manipulation) resources.
zyc00/Point-SAM
Point-SAM: This is the official repository of "Point-SAM: Promptable 3D Segmentation Model for Point Clouds". We provide code for running our demo and links to download checkpoints.
real-stanford/cow
[CVPR 2023] CoWs on Pasture: Baselines and Benchmarks for Language-Driven Zero-Shot Object Navigation
changhaonan/A3VLM
[CoRL2024] Official repo of `A3VLM: Actionable Articulation-Aware Vision Language Model`
SiyuanHuang95/ManipVQA
[IROS24 Oral] ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models
yufeiwang63/RL-VLM-F
Code for Reinforcement Learning from Vision Language Foundation Model Feedback
kyegomez/awesome-robotic-foundation-models
A vast array of Multi-Modal Embodied Robotic Foundation Models!
White65534/BHSD
This is the official project webpage of BHSD (MLMI 2023).
steve-zeyu-zhang/Awesome-3D-Medical-Imaging-Segmentation
3D Medical Imaging Segmentation: A Comprehensive Survey
Lingkai-Kong/RE-Control
kyegomez/NeoCortex
A Multi-Modality Foundation Model for Humanoid robots
kyegomez/Awesome-LLM-Robotics
A comprehensive list of papers using large language/multi-modal models for Robotics/RL, including papers, codes, and related websites
yuxuanfanOrion/Awesome-Robotics-with-Foundation-Models
This repository compiles a comprehensive collection of papers that leverage foundation models (such as Large Language Models and Vision-Language Models) in the field of robotics.