Pinned Repositories
3D-Box-Segment-Anything
We extend Segment Anything to 3D perception by combining it with VoxelNeXt.
3D-PointCloud
Papers and Datasets about Point Cloud.
3DTrans
An open-source codebase for exploring continual-learning- and pre-training-oriented autonomous driving tasks
AeDet
AeDet: Azimuth-invariant Multi-view 3D Object Detection, CVPR2023
Aigc_chatgpt_projects
A collection of AIGC and ChatGPT-based projects and applications, curating and sharing valuable and interesting ChatGPT-derived projects
apollo
An open autonomous driving platform
Auto_pruning
Automatic iterative pruning based on Caffe
awesome-aigc
A list of awesome AIGC works
Awesome-Autonomous-Driving
awesome-autonomous-driving
awesome-end-to-end-autonomous-driving
A curated list of awesome End-to-End Autonomous Driving resources (continually updated)
hulaifeng's Repositories
hulaifeng/3D-Box-Segment-Anything
We extend Segment Anything to 3D perception by combining it with VoxelNeXt.
hulaifeng/3DTrans
An open-source codebase for exploring continual-learning- and pre-training-oriented autonomous driving tasks
hulaifeng/AeDet
AeDet: Azimuth-invariant Multi-view 3D Object Detection, CVPR2023
hulaifeng/Awesome-Autonomous-Driving
awesome-autonomous-driving
hulaifeng/awesome-end-to-end-autonomous-driving
A curated list of awesome End-to-End Autonomous Driving resources (continually updated)
hulaifeng/BEVFusion
Official PyTorch implementation of "BEVFusion: A Simple and Robust LiDAR-Camera Fusion Framework"
hulaifeng/Cosmos
Cosmos is a world model development platform consisting of world foundation models, tokenizers, and a video processing pipeline to accelerate the development of Physical AI at robotics and AV labs. Purpose-built for physical AI, the Cosmos repository enables end users to run the Cosmos models, run inference scripts, and generate videos.
hulaifeng/CV-CUDA
CV-CUDA™ is an open-source, GPU accelerated library for cloud-scale image processing and computer vision.
hulaifeng/CVPR-2023-Papers
hulaifeng/CVPR2023-Papers-with-Code
A collection of CVPR 2023 papers and open-source projects
hulaifeng/DetZero
DetZero
hulaifeng/End-to-End-Autonomous-Driving
A collection of recent resources on End-to-End Autonomous Driving [survey accepted in IEEE TIV]
hulaifeng/FasterTransformer
Transformer-related optimizations, including BERT and GPT
hulaifeng/Forge_VFM4AD
A comprehensive survey of forging vision foundation models for autonomous driving, including challenges, methodologies, and opportunities.
hulaifeng/GeoMAE
This is the official implementation of the paper - GeoMAE: Masked Geometric Target Prediction for Self-supervised Point Cloud Pre-Training
hulaifeng/HoP
[ICCV 2023] Temporal Enhanced Training of Multi-view 3D Object Detector via Historical Object Prediction
hulaifeng/JAD
hulaifeng/LoGoNet
[CVPR2023] LoGoNet: Towards Accurate 3D Object Detection with Local-to-Global Cross-Modal Fusion.
hulaifeng/MapTR
[ICLR'23 Spotlight] MapTR: Structured Modeling and Learning for Online Vectorized HD Map Construction
hulaifeng/MMAR_2023
Repository for submodules containing code for MMAR 2023 "Detection-segmentation convolutional neural network for autonomous vehicle perception" paper
hulaifeng/Robo3D
Robo3D: Towards Robust and Reliable 3D Perception against Corruptions
hulaifeng/RT-DETR
[CVPR 2024] Official RT-DETR (RTDETR paddle pytorch), Real-Time DEtection TRansformer, DETRs Beat YOLOs on Real-time Object Detection. 🔥 🔥 🔥
hulaifeng/Segment-Any-Point-Cloud
Segment Any Point Cloud Sequences by Distilling Vision Foundation Models
hulaifeng/segment-anything
The repository provides code for running inference with the Segment Anything Model (SAM), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
hulaifeng/SegmentAnything3D
SAM3D: Segment Anything in 3D Scenes
hulaifeng/SST
Code for "Fully Sparse 3D Object Detection" and "Embracing Single Stride 3D Object Detector with Sparse Transformer"
hulaifeng/tensorrtx
Implementation of popular deep learning networks with TensorRT network definition API
hulaifeng/World-Models-Autonomous-Driving-Latest-Survey
A curated list of world models for autonomous driving. Keep updated.
hulaifeng/xtreme1
Xtreme1: the next-gen platform for multimodal training data. Supports 3D annotation, 3D segmentation, LiDAR-camera fusion annotation, image annotation, and RLHF tools.
hulaifeng/zod
Software Development Kit for the latest Zenseact Open Dataset (ZOD)