danxuhk
A Computer Vision and Multimedia research group led by Prof. Dan Xu in the CSE Department at HKUST.
CSE, HKUST · Clear Water Bay, Kowloon, Hong Kong
danxuhk's Stars
harlanhong/CVPR2022-DaGAN
Official code for CVPR2022 paper: Depth-Aware Generative Adversarial Network for Talking Head Video Generation
Ha0Tang/AttentionGAN
AttentionGAN for Unpaired Image-to-Image Translation & Multi-Domain Image-to-Image Translation
prismformore/Multi-Task-Transformer
Code of ICLR2023 paper "TaskPrompter: Spatial-Channel Multi-Task Prompting for Dense Scene Understanding" and ECCV2022 paper "Inverted Pyramid Multi-task Transformer for Dense Scene Understanding"
harlanhong/ICCV2023-MCNET
The official code of our ICCV 2023 work: Implicit Identity Representation Conditioned Memory Compensation Network for Talking Head Video Generation
MiZhenxing/Switch-NeRF
Code for Switch-NeRF (ICLR 2023)
yangcaoai/CoDA_NeurIPS2023
Official code for NeurIPS2023 paper: CoDA: Collaborative Novel Box Discovery and Cross-modal Alignment for Open-vocabulary 3D Object Detection
interactive-3d/interactive3d
[CVPR'24] Interactive3D: Create What You Want by Interactive 3D Generation
BiDiff/bidiff
[CVPR'24] Text-to-3D Generation with Bidirectional Diffusion using both 2D and 3D priors
xinzhuma/monodle
Delving into Localization Errors for Monocular 3D Object Detection, CVPR'2021
xulianuwa/MCTformer
Code for CVPR 2022 paper "Multi-Class Token Transformer for Weakly Supervised Semantic Segmentation"
andrea-pilzer/unsup-stereo-depthGAN
Code for "Unsupervised Adversarial Depth Estimation using Cycled Generative Networks", 3DV 2018
MiZhenxing/GBi-Net
Code for GBi-Net (CVPR 2022)
danxuhk/StructuredAttentionDepthEstimation
Structured Attention Guided Convolutional Neural Fields for Monocular Depth Estimation in CVPR 2018 (Spotlight)
yanchi-3dv/diff-gaussian-rasterization-for-gsslam
lzrobots/dgmn
prismformore/DiffusionMTL
Code of our CVPR2024 paper - DiffusionMTL: Learning Multi-Task Denoising Diffusion Model from Partially Annotated Data
Holistic-Motion2D/Tender
The official code for Tender
W-Ted/GScream
Official code for ECCV2024 paper: GScream: Learning 3D Geometry and Feature Consistent Gaussian Splatting for Object Removal
Gorilla-Lab-SCUT/LPDC-Net
CVPR 2021 paper "Learning Parallel Dense Correspondence from Spatio-Temporal Descriptors for Efficient and Robust 4D Reconstruction"
xulianuwa/AuxSegNet
W-Ted/UDC-NeRF
Official code for ICCV2023 paper: Learning Unified Decompositional and Compositional NeRF for Editable Novel View Synthesis
andrea-pilzer/PFN-depth
Code for "Progressive Fusion for Unsupervised Binocular Depth Estimation using Cycled Networks" TPAMI 2019
qwang666/RoomTex-
[ECCV24] Official code for RoomTex: Texturing Compositional Indoor Scenes via Iterative Inpainting
YinminZhang/MonoGeo
ygjwd12345/VISTA-Net
The code release for "Variational Structured Attention Networks for Visual Dense Representation Learning"
MiZhenxing/alpha_visualizer
Visualizing point clouds with transparency in Switch-NeRF (ICLR2023)
zhongyingji/CVT-xRF
CVT-xRF: Contrastive In-Voxel Transformer for 3D Consistent Radiance Fields from Sparse Inputs (CVPR 2024)
Holistic-Motion2D/Holistic-Motion2D
Data release of Holistic-Motion2D
prismformore/Boundary-Detection-Evaluation-Tools
A user-friendly evaluation tool that encompasses all necessary components for boundary detection on PASCAL-Context and NYUD-v2 datasets.
stevejaehyeok/MoCo-NeRF