dongjieHuo's Stars
LYX0501/InstructNav
nilseuropa/realsense_ros_gazebo
Intel Realsense Tracking and Depth camera simulations
IDEA-Research/Grounded-Segment-Anything
Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and Generate Anything
IDEA-Research/GroundingDINO
[ECCV 2024] Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"
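A hedged sketch of open-set detection with this repo's inference helpers, following the pattern shown in its README; the config/checkpoint paths, image name, and thresholds below are placeholders, not values from this listing.

```python
# Sketch: open-set detection with GroundingDINO (paths, prompt, and thresholds are placeholders).
from groundingdino.util.inference import load_model, load_image, predict

model = load_model(
    "groundingdino/config/GroundingDINO_SwinT_OGC.py",  # assumed config path
    "weights/groundingdino_swint_ogc.pth",              # assumed checkpoint path
)
image_source, image = load_image("example.jpg")

# The text prompt lists the open-vocabulary categories to ground in the image.
boxes, logits, phrases = predict(
    model=model,
    image=image,
    caption="chair . person . dog .",
    box_threshold=0.35,
    text_threshold=0.25,
)
```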
lucidrains/vit-pytorch
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in PyTorch
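A minimal usage sketch of the library's ViT class; the hyperparameters below are illustrative, not recommendations from the repository.

```python
# Sketch: classify a dummy image with lucidrains/vit-pytorch (illustrative hyperparameters).
import torch
from vit_pytorch import ViT

model = ViT(
    image_size=224,    # input resolution
    patch_size=16,     # image split into 16x16 patches
    num_classes=1000,  # ImageNet-style classification head
    dim=768,           # token embedding dimension
    depth=12,          # number of transformer encoder blocks
    heads=12,          # attention heads per block
    mlp_dim=3072,      # hidden size of the feed-forward layers
)

img = torch.randn(1, 3, 224, 224)  # dummy batch of one RGB image
logits = model(img)                # shape: (1, 1000)
```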
huggingface/pytorch-image-models
The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (ViT), MobileNetV4, MobileNet-V3 & V2, RegNet, DPN, CSPNet, Swin Transformer, MaxViT, CoAtNet, ConvNeXt, and more
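A short example of pulling a pretrained backbone from timm; the model name is just one of many available via timm.list_models().

```python
# Sketch: load a pretrained timm backbone and run inference on a dummy image.
import torch
import timm

model = timm.create_model("resnet50", pretrained=True)  # any name from timm.list_models() works
model.eval()

x = torch.randn(1, 3, 224, 224)   # dummy input at the model's default resolution
with torch.no_grad():
    logits = model(x)             # (1, num_classes) classification logits
```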
UT-ADL/milrem_visual_offroad_navigation
Vision-based off-road navigation with geographical hints
RobotecAI/visualnav-transformer-ros2
ROS 2 port of visualnav-transformer: code and checkpoints for the mobile robot foundation models GNM, ViNT, and NoMaD.
Michael-Equi/lfg-nav
Official code release for "Navigation with Large Language Models: Semantic Guesswork as a Heuristic for Planning"
d2l-ai/d2l-zh
Dive into Deep Learning (《动手学深度学习》): written for Chinese readers, runnable, and open for discussion. The Chinese and English editions are used for teaching at over 500 universities in more than 70 countries.
Auromix/ROS-LLM
ROS-LLM is a framework for embodied intelligence applications in ROS. It enables natural language interaction and uses Large Language Models (LLMs) for decision-making and robot control. Configuration is straightforward, so a robot can be operating with the framework in as little as ten minutes.
zchoi/Awesome-Embodied-Agent-with-LLMs
This is a curated list of "Embodied AI or robot with Large Language Models" research. Watch this repository for the latest updates! 🔥
AGI-Edgerunners/LLM-Planning-Papers
Must-read Papers on Large Language Model (LLM) Planning.
robodhruv/visualnav-transformer
Official code and checkpoint release for mobile robot foundation models: GNM, ViNT, and NoMaD.
yuanzhongqiao/awesome-robotic-tooling-cn
Tooling for professional robotic development in C++ and Python with a touch of ROS, autonomous driving and aerospace.
linorobot/linorobot2
Autonomous mobile robots (2WD, 4WD, Mecanum Drive)
LihanChen2004/pb_rm_simulation
ROS 2 Gazebo simulation package leveraging the Mid360 LiDAR and FAST-LIO for navigation. Contact me (QQ): 757003373
concept-fusion/concept-fusion
Code release for ConceptFusion [RSS 2023]
Chenjq-99/Motion-plan
Motion planning course materials from ShenLan Academy (深蓝学院).
symao/minimum_snap_trajectory_generation
easy sample code for minimum snap trajectory planning in MATLAB
zm0612/Minimum-Snap
A C++ implementation of the Minimum Snap algorithm; the result exceeds the computation speed reported in the paper. Implements both 2D and 3D Minimum Snap trajectory generation.
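As background for the two minimum-snap entries above: each trajectory segment is modeled as a polynomial, and the cost penalizes the integral of squared snap (the fourth derivative), which reduces to a quadratic program in the stacked coefficient vector. A standard formulation, in generic notation not taken from either repository:

```latex
\min_{\mathbf{c}}\; J
  = \sum_{i=1}^{M}\int_{t_{i-1}}^{t_i}\left\| p_i^{(4)}(t)\right\|^2\,dt
  = \mathbf{c}^{\top} Q\,\mathbf{c}
\quad\text{s.t.}\quad
  p_i(t_i) = w_i,\qquad
  p_i^{(k)}(t_i) = p_{i+1}^{(k)}(t_i),\;\; k = 1,\dots,3,
```

where each segment p_i(t) is a polynomial with coefficients stacked in c, the w_i are waypoints, and Q is positive semidefinite, so the problem is a QP (or solvable in closed form once the equality constraints are eliminated).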
ZJU-FAST-Lab/uneven_planner
An Efficient Trajectory Planner for Car-like Robots on Uneven Terrain
sunmiaozju/learn
An organized collection of notes on various topics.
daohu527/dig-into-apollo
Apollo learning notes for beginners.
Zhefan-Xu/CERLAB-UAV-Autonomy
[CMU] A Versatile and Modular Framework Designed for Autonomous Unmanned Aerial Vehicles [UAVs] (C++/ROS/PX4)
robotics-upo/dynamic_obstacle_detector
A simple (but effective) detector of dynamic obstacles in laser scans.
janedipan/Demo-Acbf2312
Sets up a ROS-Gazebo simulation environment to test how well MPC-CBF avoids obstacles in dynamic environments.
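For reference, MPC-CBF augments a standard MPC horizon with a discrete-time control barrier function constraint at every step; a common form, in generic notation not taken from this repository:

```latex
h(x_{k+1}) \;\ge\; (1-\gamma)\,h(x_k), \qquad 0 < \gamma \le 1,
\qquad h(x) = \lVert p - p_{\mathrm{obs}} \rVert^{2} - r_{\mathrm{safe}}^{2},
```

where p is the robot position, p_obs the (predicted) obstacle position, and r_safe a safety radius; enforcing the inequality along the horizon keeps the closed-loop state inside the safe set h(x) >= 0.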
rikonaka/translator-rs
A real-time PDF paper translator written in Rust (not limited to PDFs; other formats can be translated as well).
mit-acl/faster
3D Trajectory Planner in Unknown Environments
KailinTong/Motion-Planning-for-Mobile-Robots