SSZ1
Ph.D. candidate at LIESMARS, Wuhan University. Research interests: computer vision and deep learning, UAVs, and intelligent image and point cloud processing.
LIESMARS, Wuhan University, Wuhan, China
SSZ1's Stars
facebookresearch/segment-anything
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
PaddlePaddle/Paddle
PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (the PaddlePaddle core framework: high-performance single-machine and distributed training and cross-platform deployment for deep learning and machine learning)
heartexlabs/label-studio
Label Studio is a multi-type data labeling and annotation tool with standardized output format
PaddlePaddle/PaddleSeg
Easy-to-use image segmentation library with an awesome pre-trained model zoo, supporting a wide range of practical tasks in semantic segmentation, interactive segmentation, panoptic segmentation, image matting, 3D segmentation, etc.
PX4/PX4-Autopilot
PX4 Autopilot Software
facebookresearch/ImageBind
ImageBind One Embedding Space to Bind Them All
CASIA-IVA-Lab/FastSAM
Fast Segment Anything
Deci-AI/super-gradients
Easily train or fine-tune SOTA computer vision models with one open source training library. The home of Yolo-NAS.
UX-Decoder/Segment-Everything-Everywhere-All-At-Once
[NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once"
z-x-yang/Segment-and-Track-Anything
An open-source project dedicated to tracking and segmenting arbitrary objects in videos, either automatically or interactively. The core algorithms are the Segment Anything Model (SAM) for key-frame segmentation and Associating Objects with Transformers (AOT) for efficient tracking and propagation.
HKUST-Aerial-Robotics/A-LOAM
Advanced implementation of LOAM
microsoft/PromptCraft-Robotics
Community for applying LLMs to robotics and a robot simulator with ChatGPT integration
KumarRobotics/msckf_vio
Robust Stereo Visual Inertial Odometry for Fast Autonomous Flight
MIT-SPARK/Kimera-VIO
Visual Inertial Odometry with SLAM capabilities and 3D Mesh generation.
TommyZihao/zihao_course
Open courses by 同济子豪兄 (TommyZihao)
ZJU-FAST-Lab/ego-planner
ethz-asl/rotors_simulator
RotorS is a UAV gazebo simulator
ethz-asl/rovio
cuitaixiang/LOAM_NOTED
LOAM code annotated in Chinese
microsoft/VideoX
VideoX: a collection of video cross-modal models
hyye/lio-mapping
Implementation of Tightly Coupled 3D Lidar Inertial Odometry and Mapping (LIO-mapping)
VladyslavUsenko/basalt-mirror
Mirror of the Basalt repository. All pull requests and issues should be sent to https://gitlab.com/VladyslavUsenko/basalt
smilefacehh/LIO-SAM-DetailedNote
Detailed source-code annotations for LIO-SAM: 3D SLAM fusing LiDAR, IMU, and GPS
ZJU-FAST-Lab/EGO-Planner-v2
Swarm Playground, the codebase of the paper "Swarm of micro flying robots in the wild"
Livox-SDK/livox_horizon_loam
Livox Horizon port of LOAM
thien94/vision_to_mavros
A collection of ROS and non-ROS (Python) code that converts data from vision-based systems (external localization sources such as fiducial tags, VIO, SLAM, or depth images) into the corresponding mavros topics or MAVLink messages consumed by a flight control stack, with working, tested examples for ArduPilot.
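To illustrate the kind of coordinate-frame conversion such a bridge performs, here is a minimal plain-Python sketch (not the package's actual code): converting a position from the ENU convention common in vision/SLAM outputs to the NED convention used in many MAVLink messages. The function name and the choice of ENU→NED as the example are assumptions for illustration only.

```python
import numpy as np

def enu_to_ned(p_enu: np.ndarray) -> np.ndarray:
    """Convert a position from ENU (East-North-Up), common in vision
    and SLAM outputs, to NED (North-East-Down), used in many MAVLink
    messages. Axis mapping: (e, n, u) -> (n, e, -u).
    Illustrative sketch only, not vision_to_mavros code.
    """
    e, n, u = p_enu
    return np.array([n, e, -u])

# A point 1 m east, 2 m north, 3 m up maps to
# 2 m north, 1 m east, 3 m down (i.e. [2, 1, -3] in NED).
print(enu_to_ned(np.array([1.0, 2.0, 3.0])))
```

A real bridge must also transform orientations (quaternions) and account for the sensor's mounting frame, which is where most of the package's configuration effort goes.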
ryanbgriffiths/ICRA2023PaperList
ICRA2023 Paper List
argonne-lcf/dlio_benchmark
An I/O benchmark for deep learning applications
ZZY-Zhou/DSEC-MOS
pedrogasg/VIO_bridge
Interface PX4 with the Intel RealSense T265