Zoe-Yan123's Stars
rlabbe/Kalman-and-Bayesian-Filters-in-Python
Kalman Filter book using Jupyter Notebook. Focuses on building intuition and experience, not formal proofs. Includes Kalman filters, extended Kalman filters, unscented Kalman filters, particle filters, and more. All exercises include solutions.
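The core loop the book builds intuition for is the linear Kalman filter's predict/update cycle. A minimal sketch (not code from the book; matrix names `F`, `H`, `Q`, `R` follow standard Kalman notation):

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict: propagate state and covariance through the motion model
    x = F @ x
    P = F @ P @ F.T + Q
    # Update: fuse measurement z weighted by the Kalman gain
    y = z - H @ x                    # innovation (measurement residual)
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy usage: estimate a constant 1D position from noisy measurements.
F = np.array([[1.0]]); H = np.array([[1.0]])
Q = np.array([[1e-4]]); R = np.array([[0.1]])
x = np.array([0.0]); P = np.array([[1.0]])
for z in [0.9, 1.1, 1.0, 0.95, 1.05]:
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
# x converges toward the true position (~1.0) and P shrinks.
```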
mgaoling/mpl_calibration_toolbox
An easy-to-use calibration toolbox for the VECtor Benchmark
OpenRobotLab/EmbodiedScan
[CVPR 2024 & NeurIPS 2024] EmbodiedScan: A Holistic Multi-Modal 3D Perception Suite Towards Embodied AI
zdhNarsil/Awesome-GFlowNets
A curated list of resources about generative flow networks (GFlowNets).
zdhNarsil/GFlowNet-CombOpt
PyTorch implementation for our NeurIPS 2023 spotlight paper "Let the Flows Tell: Solving Graph Combinatorial Optimization Problems with GFlowNets".
ling-pan/GAFN
GFNOrg/gflownet
Generative Flow Networks
diff-usion/Awesome-Diffusion-Models
A collection of resources and papers on Diffusion Models
uzh-rpg/rpg_esim
ESIM: an Open Event Camera Simulator
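Event camera simulators like ESIM emit an event whenever a pixel's log intensity changes by a fixed contrast threshold since its last event. A toy single-pixel sketch of that principle (my own illustration, not ESIM's implementation; `C` is the contrast threshold):

```python
import numpy as np

def generate_events(log_intensity, timestamps, C=0.25):
    """Toy single-pixel event generator: emit (t, polarity) each time the
    log intensity moves by the contrast threshold C since the last event."""
    events = []
    ref = log_intensity[0]  # reference level at the last emitted event
    for t, L in zip(timestamps[1:], log_intensity[1:]):
        while L - ref >= C:       # brightness rose by one threshold -> ON event
            ref += C
            events.append((t, +1))
        while ref - L >= C:       # brightness fell by one threshold -> OFF event
            ref -= C
            events.append((t, -1))
    return events

# A linear brightness ramp from 0 to 1 crosses C=0.25 four times,
# so it yields four evenly spaced positive events.
t = np.linspace(0.0, 1.0, 101)
L = 1.0 * t
evs = generate_events(L, t, C=0.25)
```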
wengflow/robust-e-nerf
Source code for "Robust e-NeRF: NeRF from Sparse & Noisy Events under Non-Uniform Motion" (ICCV 2023)
DmitryRyumin/ICCV-2023-Papers
ICCV 2023 Papers: discover cutting-edge research from ICCV 2023, a leading computer vision conference. Stay updated on the latest in computer vision and deep learning, with code included.
robomaster-oss/rmoss_core
A foundational project in RoboMaster OSS providing common functional module packages for RoboMaster, such as a camera module, a projectile ballistics module, and more.
uzh-rpg/event-based_vision_resources
timothybrooks/instruct-pix2pix
FangyunWei/SLRT
binbinjiang/SL_Papers
Latest AI Sign Language Papers & Survey & Review
enhuiz/phoenix-datasets
PyTorch dataset wrappers for PHOENIX 2014 & PHOENIX-2014-T sign language datasets.
binbinjiang/CVT-SLR
Official code of CVPR 2023 Highlight paper CVT-SLR
kerrj/lerf
Code for LERF: Language Embedded Radiance Fields
ayaanzhaque/instruct-nerf2nerf
Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions (ICCV 2023)
IDEA-Research/GroundingDINO
[ECCV 2024] Official implementation of the paper "Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection"
Harry-Zhi/semantic_nerf
The implementation of "In-Place Scene Labelling and Understanding with Implicit Scene Representation" [ICCV 2021].
nv-tlabs/editGAN_release
CompVis/latent-diffusion
High-Resolution Image Synthesis with Latent Diffusion Models
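Latent diffusion trains the denoiser in a compressed latent space, but the underlying forward process is the standard DDPM one: noise is added in closed form according to a variance schedule. A minimal numpy sketch of that forward step (my own illustration under a standard linear beta schedule, not code from this repo):

```python
import numpy as np

rng = np.random.default_rng(0)

# DDPM-style linear variance schedule over T steps.
T = 1000
betas = np.linspace(1e-4, 2e-2, T)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)   # cumulative signal-retention factor

def q_sample(x0, t, rng):
    """Closed-form forward diffusion:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x0 = np.ones(4)                    # a toy "latent" vector
x_early = q_sample(x0, 10, rng)    # early step: still close to x0
x_late = q_sample(x0, T - 1, rng)  # final step: nearly pure Gaussian noise
```

The denoising network is then trained to predict `eps` from `x_t` and `t`; sampling runs the process in reverse.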
CompVis/stable-diffusion
A latent text-to-image diffusion model
threestudio-project/threestudio
A unified framework for 3D content generation.
IDEA-Research/Grounded-Segment-Anything
Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything. Automatically detect, segment, and generate anything.
gaomingqi/Track-Anything
Track-Anything is a flexible and interactive tool for video object tracking and segmentation, based on Segment Anything, XMem, and E2FGVI.
Totoro97/f2-nerf
Fast neural radiance field training with free camera trajectories
facebookresearch/segment-anything
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.