Pinned Repositories
CornerNet
CornerNet-Lite
DeepV2D
DPVO
Deep Patch Visual Odometry/SLAM
DROID-SLAM
infinigen
Infinite Photorealistic Worlds using Procedural Generation
lietorch (see the usage sketch after this list)
pose-hg-train
Training and experimentation code used for "Stacked Hourglass Networks for Human Pose Estimation"
RAFT
RAFT-Stereo
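Some of these pinned projects are importable PyTorch libraries rather than end-to-end training pipelines. As an illustration, lietorch implements Lie-group types (SO3, SE3) with backpropagation through the tangent space. The following is a minimal sketch, assuming the exp/log/inv/act interface described in the lietorch README; exact signatures may differ between versions.

    import torch
    from lietorch import SO3, SE3  # assumes lietorch is installed from princeton-vl/lietorch

    # Tangent-space parameters: 3 DoF for rotations, 6 DoF for rigid motions.
    phi = torch.randn(8, 3, requires_grad=True)
    xi = torch.randn(8, 6, requires_grad=True)

    R = SO3.exp(phi)  # exponential map: tangent vector -> group element
    T = SE3.exp(xi)

    # Group operations compose on the manifold; gradients flow through the
    # tangent space instead of a raw rotation-matrix parameterization.
    residual = (R * R.inv()).log()  # composition with the inverse, ~zero vector
    points = torch.randn(8, 3)
    warped = T.act(points)          # apply rigid-body transforms to 3D points
    warped.sum().backward()         # gradients w.r.t. xi via tangent-space backprop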
Princeton Vision & Learning Lab's Repositories
princeton-vl/infinigen
Infinite Photorealistic Worlds using Procedural Generation
princeton-vl/RAFT (see the optical-flow sketch at the end of this list)
princeton-vl/DROID-SLAM
princeton-vl/lietorch
princeton-vl/DPVO
Deep Patch Visual Odometry/SLAM
princeton-vl/pytorch_stacked_hourglass
PyTorch implementation of the ECCV 2016 paper "Stacked Hourglass Networks for Human Pose Estimation"
princeton-vl/CoqGym
A Learning Environment for Theorem Proving with the Coq proof assistant
princeton-vl/SEA-RAFT
[ECCV 2024 Oral, Best Paper Award Candidate] SEA-RAFT: Simple, Efficient, Accurate RAFT for Optical Flow
princeton-vl/SimpleView
Official Code for ICML 2021 paper "Revisiting Point Cloud Shape Classification with a Simple and Effective Baseline"
princeton-vl/CER-MVS
princeton-vl/MultiSlam_DiffPose
princeton-vl/SpatialSense
An Adversarially Crowdsourced Benchmark for Spatial Relation Recognition
princeton-vl/selfstudy
Code for reproducing experiments in "How Useful is Self-Supervised Pretraining for Visual Tasks?"
princeton-vl/PackIt
Code for reproducing results in ICML 2020 paper "PackIt: A Virtual Environment for Geometric Planning"
princeton-vl/OGNI-DC
[ECCV 2024] Official code for "OGNI-DC: Robust Depth Completion with Optimization-Guided Neural Iterations"
princeton-vl/Oriented1D
Official code for ICCV 2023 paper "Convolutional Networks with Oriented 1D Kernels"
princeton-vl/OcMesher
princeton-vl/attach-juxtapose-parser
Code for the paper "Strongly Incremental Constituency Parsing with Graph Neural Networks"
princeton-vl/LayeredFlow
[ECCV 2024] LayeredFlow: A Real-World Benchmark for Non-Lambertian Multi-Layer Optical Flow
princeton-vl/Rel3D
Official code for NeurIPS 2020 paper "Rel3D: A Minimally Contrastive Benchmark for Grounding Spatial Relations in 3D"
princeton-vl/OMNI-DC
princeton-vl/FetchBench-CORL2024
princeton-vl/MetaQNL
Learning Symbolic Rules for Reasoning in Quasi-Natural Language: https://arxiv.org/abs/2111.12038
princeton-vl/PackIt_Extra
Code for generating data in ICML 2020 paper "PackIt: A Virtual Environment for Geometric Planning"
princeton-vl/UniQA-3D
[3DV 2025] Official code for "Towards Foundation Models for 3D Vision: How Close Are We?"
princeton-vl/Rel3D_Render
Code for rendering images for NeurIPS 2020 paper "Rel3D: A Minimally Contrastive Benchmark for Grounding Spatial Relations in 3D"
princeton-vl/RAFT-fork
princeton-vl/infinigen_gpl
princeton-vl/FetchBench-Imit
princeton-vl/infinigen_cos426
Infinite Photorealistic Worlds using Procedural Generation
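To experiment with the optical-flow line of work (RAFT and its ECCV 2024 successor SEA-RAFT) without cloning a repository, note that torchvision ships a re-implementation of RAFT. The sketch below uses torchvision's API, not the interface of princeton-vl/RAFT itself; RAFT requires input heights and widths divisible by 8.

    import torch
    from torchvision.models.optical_flow import Raft_Large_Weights, raft_large

    # torchvision's re-implementation of RAFT with pretrained weights;
    # princeton-vl/RAFT exposes its own, different demo interface.
    weights = Raft_Large_Weights.DEFAULT
    model = raft_large(weights=weights).eval()

    # Two consecutive RGB frames in [0, 1]; H and W must be divisible by 8.
    frame1 = torch.rand(1, 3, 440, 1024)
    frame2 = torch.rand(1, 3, 440, 1024)
    frame1, frame2 = weights.transforms()(frame1, frame2)  # normalize as the weights expect

    with torch.no_grad():
        flow_list = model(frame1, frame2)  # one flow field per recurrent refinement
    flow = flow_list[-1]                   # final (1, 2, H, W) flow estimate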