xiaodongww's Stars
facebookresearch/omni3d
Code release for "Omni3D: A Large Benchmark and Model for 3D Object Detection in the Wild"
introlab/rtabmap
RTAB-Map library and standalone application
youquanl/Segment-Any-Point-Cloud
[NeurIPS'23 Spotlight] Segment Any Point Cloud Sequences by Distilling Vision Foundation Models
IDEA-Research/OpenSeeD
[ICCV 2023] Official implementation of the paper "A Simple Framework for Open-Vocabulary Segmentation and Detection"
pengsongyou/openscene
[CVPR'23] OpenScene: 3D Scene Understanding with Open Vocabularies
CVMI-Lab/PLA
(CVPR 2023) PLA: Language-Driven Open-Vocabulary 3D Scene Understanding & (CVPR 2024) RegionPLC: Regional Point-Language Contrastive Learning for Open-World 3D Scene Understanding
xiaodongww/IPCA
amusi/Deep-Learning-Interview-Book
Deep learning interview handbook (covering mathematics, machine learning, deep learning, computer vision, natural language processing, SLAM, and more)
XingangPan/DragGAN
Official Code for DragGAN (SIGGRAPH 2023)
fudan-zvg/Semantic-Segment-Anything
Automated dense category annotation engine that serves as the initial semantic labeling for the Segment Anything dataset (SA-1B).
CASIA-IVA-Lab/FastSAM
Fast Segment Anything
ChaoningZhang/MobileSAM
This is the official code for MobileSAM project that makes SAM lightweight for mobile applications and beyond!
facebookresearch/ov-seg
This is the official PyTorch implementation of the paper Open-Vocabulary Semantic Segmentation with Mask-adapted CLIP.
LAION-AI/laion-3d
Collect large 3d dataset and build models
IDEA-Research/Grounded-Segment-Anything
Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment, and Generate Anything
Pointcept/Pointcept
Pointcept: a codebase for point cloud perception research. Latest works: PTv3 (CVPR'24 Oral), PPT (CVPR'24), OA-CNNs (CVPR'24), MSC (CVPR'23)
Pointcept/SegmentAnything3D
[ICCV'23 Workshop] SAM3D: Segment Anything in 3D Scenes
haochenheheda/segment-anything-annotator
We developed a Python UI based on labelme and segment-anything for pixel-level annotation. It supports generating multiple masks with SAM (box/point prompts), efficient polygon modification, and category recording. We will add more features (such as incorporating CLIP-based methods for category proposals and VOS methods for video datasets).
changgyhub/leetcode_101
LeetCode 101: Solving LeetCode problems with ease, together with you (C++)
Anything-of-anything/Anything-3D
Segment-Anything + 3D. Let's lift anything to 3D.
mohuangrui/ucasproposal
LaTeX Proposal Template for the University of Chinese Academy of Sciences
YanjieZe/Virtual-Multi-View-Fusion
An elegant PyTorch implementation of ECCV 2020: Virtual Multi-view Fusion for 3D Semantic Segmentation.
runnanchen/CLIP2Scene
MasterBin-IIAU/UNINEXT
[CVPR'23] Universal Instance Perception as Object Discovery and Retrieval
liyunsheng13/BDL
UX-Decoder/Segment-Everything-Everywhere-All-At-Once
[NeurIPS 2023] Official implementation of the paper "Segment Everything Everywhere All at Once"
Stability-AI/StableLM
StableLM: Stability AI Language Models
yangyangyang127/PointCLIP_V2
[ICCV 2023] PointCLIP V2: Prompting CLIP and GPT for Powerful 3D Open-world Learning
yzhuoning/Awesome-CLIP
Awesome list for research on CLIP (Contrastive Language-Image Pre-Training).
facebookresearch/segment-anything
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.