Pinned Repositories
CityWalker
[CVPR2025] CityWalker: Learning Embodied Urban Navigation from Web-Scale Videos
DeepMapping
[CVPR2019 Oral] Self-supervised Point Cloud Map Estimation
DeepMapping2
[CVPR2023] DeepMapping2: Self-Supervised Large-Scale LiDAR Map Optimization
DiscoNet
[NeurIPS2021] Learning Distilled Collaboration Graph for Multi-Agent Perception
GARF
[ICCV2025] GARF: Learning Generalizable 3D Reassembly for Real-World Fractures
insta360_ros_driver
A ROS driver for Insta360 cameras, enabling real-time image capture, processing, and publishing in ROS environments.
MSG
[NeurIPS2024] Multiview Scene Graph (topologically representing a scene from unposed images by interconnected place and object nodes)
Occ4cast
Occ4cast: LiDAR-based 4D Occupancy Completion and Forecasting
peac
[ICRA2014] Fast Plane Extraction Using Agglomerative Hierarchical Clustering (AHC)
SSCBench
[IROS2024] SSCBench: A Large-Scale 3D Semantic Scene Completion Benchmark for Autonomous Driving
AI4CE Lab @ NYU's Repositories
ai4ce/SSCBench
[IROS2024] SSCBench: A Large-Scale 3D Semantic Scene Completion Benchmark for Autonomous Driving
ai4ce/Occ4cast
Occ4cast: LiDAR-based 4D Occupancy Completion and Forecasting
ai4ce/CityWalker
[CVPR2025] CityWalker: Learning Embodied Urban Navigation from Web-Scale Videos
ai4ce/V2X-Sim
[RA-L2022] V2X-Sim Dataset and Benchmark
ai4ce/insta360_ros_driver
A ROS driver for Insta360 cameras, enabling real-time image capture, processing, and publishing in ROS environments.
ai4ce/MSG
[NeurIPS2024] Multiview Scene Graph (topologically representing a scene from unposed images by interconnected place and object nodes)
ai4ce/GARF
[ICCV2025] GARF: Learning Generalizable 3D Reassembly for Real-World Fractures
ai4ce/SeeDo
[IROS2025] Human Demo Videos to Robot Action Plans
ai4ce/EUVS-Benchmark
[ICCV2025] Extrapolated Urban View Synthesis Benchmark
ai4ce/FusionSense
[ICRA2025] Integrates vision, touch, and common-sense information from foundation models, customized to the agent's perceptual needs.
ai4ce/RAP
[ICCV2025] Adversarial Exploitation of Data Diversity Improves Visual Localization
ai4ce/NYU-VPR
[IROS2021] NYU-VPR: Long-Term Visual Place Recognition Benchmark with View Direction and Data Anonymization Influences
ai4ce/INT-ACT
Official repo for "From Intention to Execution: Probing the Generalization Boundaries of Vision-Language-Action Models"
ai4ce/NYC-Event-VPR
ai4ce/TF-VPR
Self-supervised place recognition by exploring temporal and feature neighborhoods
ai4ce/NYC-Indoor-VPR
ai4ce/EgoPAT3Dv2
[ICRA2024] Official Implementation of EgoPAT3Dv2: Predicting 3D Action Target from 2D Egocentric Vision for Human-Robot Interaction
ai4ce/vis_nav_player
[ROB-GY 6203] Example Visual Navigation Player Code for Course Project
ai4ce/LUWA
[CVPR2024 Highlight] The first benchmark for lithic use-wear analysis leveraging SOTA vision and vision-language models (DINOv2, GPT-4V), demonstrating AI performance surpassing that of expert archaeologists.
ai4ce/LoQI-VPR
Implementation of the ICCV workshop paper LoQI-VPR. See the project website for details.
ai4ce/UNav-Server
ai4ce/SeeUnsafe
Integrate language and vision for traffic accident identification, reasoning, and visual grounding
ai4ce/ai4ce_sensor_ROS2_interfaces
This repo contains all the ROS2 packages developed at the AI4CE lab for interfacing with various specialized sensors.
ai4ce/ai4ce_project_website_template
A project website template
ai4ce/CoVISION
Co-VisiON: Co-Visibility ReasONing on Sparse Image Sets of Indoor Scenes
ai4ce/DPVO
DPVO accompanying the CityWalker repository
ai4ce/folder2hdf5
A sample codebase for converting a folder-based dataset to an HDF5 dataset
ai4ce/gelsight_ROS2_interface
ai4ce/joy_hand_eye_ROS2
This minimal package can be used to perform hand-eye calibration in a ROS2 environment with a joystick
ai4ce/vis_nav_game_public
Public version of vis_nav_game