April-Yz's Stars
3DTopia/LGM
[ECCV 2024 Oral] LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation.
nerfstudio-project/gsplat
CUDA accelerated rasterization of gaussian splatting
g-truc/glm
OpenGL Mathematics (GLM)
YanjieZe/GNFactor
[CoRL 2023 Oral] GNFactor: Multi-Task Real Robot Learning with Generalizable Neural Feature Fields
yanmin-wu/OpenGaussian
[NeurIPS 2024] OpenGaussian: Towards Point-Level 3D Gaussian-based Open Vocabulary Understanding
buaacyw/GaussianEditor
[CVPR 2024] GaussianEditor: Swift and Controllable 3D Editing with Gaussian Splatting
Lee-JaeWon/2024-Arxiv-Paper-List-Gaussian-Splatting
2024 Gaussian Splatting Paper List (arXiv)
stepjam/ARM
Q-attention (within the ARM system) and coarse-to-fine Q-attention (within the C2F-ARM system).
MrSecant/GaussianGrasper
[RA-L 2024] GaussianGrasper: 3D Language Gaussian Splatting for Open-vocabulary Robotic Grasping
April-Yz/SAGS
Studying SAGA's differentiable rasterization, extended with depth and alpha outputs. The official implementation of SAGS (Segment Anything in 3D Gaussians)
ashawkey/diff-gaussian-rasterization
Harry-Zhi/semantic_nerf
The implementation of "In-Place Scene Labelling and Understanding with Implicit Scene Representation" [ICCV 2021].
IDEA-Research/GroundingDINO
[ECCV 2024] Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"
XuHu0529/SAGS
The official implementation of SAGS (Segment Anything in 3D Gaussians)
szymanowiczs/splatter-image
Official implementation of "Splatter Image: Ultra-Fast Single-View 3D Reconstruction" (CVPR 2024)
liuff19/ReconX
ReconX: Reconstruct Any Scene from Sparse Views with Video Diffusion Model
dvlab-research/3D-Box-Segment-Anything
We extend Segment Anything to 3D perception by combining it with VoxelNeXt.
IDEA-Research/Grounded-Segment-Anything
Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment and Generate Anything
peract/peract
Perceiver-Actor: A Multi-Task Transformer for Robotic Manipulation
Forairaaaaa/Monica
DIY AMOLED-screen watch
zubair-irshad/Awesome-Robotics-3D
A curated list of 3D Vision papers related to the robotics domain in the era of large models (LLMs/VLMs), inspired by awesome-computer-vision; includes papers, code, and related websites
NVlabs/ODISE
Official PyTorch implementation of ODISE: Open-Vocabulary Panoptic Segmentation with Text-to-Image Diffusion Models [CVPR 2023 Highlight]
markusgrotz/PyRep
A toolkit for robot learning research.
Tengbo-Yu/peract_bimanual
markusgrotz/RLBench
A large-scale benchmark and learning environment.
facebookresearch/habitat-sim
A flexible, high-performance 3D simulator for Embodied AI research.
city-super/Scaffold-GS
[CVPR 2024 Highlight] Scaffold-GS: Structured 3D Gaussians for View-Adaptive Rendering
markusgrotz/peract_bimanual
PKU-MARL/DexterousHands
This is a library that provides dual dexterous hand manipulation tasks through Isaac Gym
ARISE-Initiative/robosuite
robosuite: A Modular Simulation Framework and Benchmark for Robot Learning