yuxuanfanOrion's Stars
Sense-X/Co-DETR
[ICCV 2023] DETRs with Collaborative Hybrid Assignments Training
PKU-MARL/DexterousHands
This is a library that provides bimanual (dual dexterous hand) manipulation tasks built on Isaac Gym
yamadashy/repomix
📦 Repomix (formerly Repopack) is a powerful tool that packs your entire repository into a single, AI-friendly file. Perfect for when you need to feed your codebase to Large Language Models (LLMs) or other AI tools like Claude, ChatGPT, and Gemini.
notFoundThisPerson/RoboCAS-v0
cwchenwang/awesome-4d-generation
List of papers on 4D Generation.
syt2/zotero-addons
Install add-ons directly in Zotero | Zotero Add-on Market
dinggh0817/4D_Radar_MOT
Code for reproducing the experimental results of the conference paper "Which Framework is Suitable for Online 3D Multi-Object Tracking for Autonomous Driving with Automotive 4D Imaging Radar?" at the 35th IEEE Intelligent Vehicles Symposium (IV 2024)
aialt/awesome-mobile-agents
✨✨Latest Papers and Datasets on Mobile and PC GUI Agents
CleanDiffuserTeam/CleanDiffuser
CleanDiffuser: An Easy-to-use Modularized Library for Diffusion Models in Decision Making
zjunlp/KnowledgeEditingPapers
Must-read Papers on Knowledge Editing for Large Language Models.
kyegomez/Awesome-LLM-Robotics
A comprehensive list of papers using large language/multi-modal models for Robotics/RL, including papers, codes, and related websites
kyegomez/NeoCortex
A Multi-Modality Foundation Model for Humanoid Robots
kyegomez/awesome-robotic-foundation-models
A vast array of Multi-Modal Embodied Robotic Foundation Models!
ActiveVisionLab/Awesome-LLM-3D
Awesome-LLM-3D: a curated list of resources on Multi-modal Large Language Models in the 3D world
microsoft/LMOps
General technology for enabling AI capabilities w/ LLMs and MLLMs
RoboFlamingo/RoboFlamingo
Code for RoboFlamingo
OpenBMB/IoA
An open-source framework for collaborative AI agents, enabling diverse, distributed agents to team up and tackle complex tasks through internet-like connectivity.
shap/shap
A game theoretic approach to explain the output of any machine learning model.
sz3/libcimbar
Optimized implementation for color-icon-matrix barcodes
stepjam/RLBench
A large-scale benchmark and learning environment.
ruizheliUOA/Awesome-Interpretability-in-Large-Language-Models
This repository collects all relevant resources about interpretability in LLMs
yuxuanfanOrion/Awesome-Robotics-with-Foundation-Models
This repository compiles a comprehensive collection of papers that leverage foundation models (such as Large Language Models and Vision-Language Models) in the field of robotics.
SiyuanHuang95/ManipVQA
[IROS24 Oral] ManipVQA: Injecting Robotic Affordance and Physically Grounded Information into Multi-Modal Large Language Models
changhaonan/A3VLM
[CoRL 2024] Official repo of `A3VLM: Actionable Articulation-Aware Vision Language Model`
OpenGVLab/LLaMA-Adapter
[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
White65534/BHSD
This is the official project webpage of BHSD (MLMI 2023).
real-stanford/cow
[CVPR 2023] CoWs on Pasture: Baselines and Benchmarks for Language-Driven Zero-Shot Object Navigation
david-cortes/contextualbandits
Python implementations of contextual bandits algorithms
zyc00/Point-SAM
The official repository of "Point-SAM: Promptable 3D Segmentation Model for Point Clouds". We provide code for running our demo and links to download checkpoints.
lxtGH/OMG-Seg
OMG-LLaVA and OMG-Seg codebase [CVPR 2024 and NeurIPS 2024]