Ysz2022
PhD Student @ Peking University, Computer Vision / Machine Learning
Peking University · Shenzhen, China
Ysz2022's Stars
facebookresearch/segment-anything-2
The repository provides code for running inference with the Meta Segment Anything Model 2 (SAM 2), links for downloading the trained model checkpoints, and example notebooks that show how to use the model.
FoundationVision/VAR
[NeurIPS 2024 Oral][GPT beats diffusion🔥] [scaling laws in visual generation📈] Official impl. of "Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction". An *ultra-simple, user-friendly yet state-of-the-art* codebase for autoregressive image generation!
naver/dust3r
DUSt3R: Geometric 3D Vision Made Easy
facebookresearch/sapiens
High-resolution models for human tasks.
naver/mast3r
Grounding Image Matching in 3D with MASt3R
Drexubery/ViewCrafter
Official implementation of "ViewCrafter: Taming Video Diffusion Models for High-fidelity Novel View Synthesis"
Tencent/DepthCrafter
DepthCrafter: Generating Consistent Long Depth Sequences for Open-world Videos
3DTopia/3DTopia-XL
3DTopia-XL: High-Quality 3D PBR Asset Generation via Primitive Diffusion
CLAY-3D/OpenCLAY
CLAY: A Controllable Large-scale Generative Model for Creating High-quality 3D Assets
liuff19/ReconX
ReconX: Reconstruct Any Scene from Sparse Views with Video Diffusion Model
YingqingHe/Awesome-LLMs-meet-Multimodal-Generation
🔥🔥🔥 A curated list of papers on LLMs-based multimodal generation (image, video, 3D and audio).
autonomousvision/LaRa
[ECCV 2024] Efficient Large-Baseline Radiance Fields, a feed-forward 2DGS model
VDIGPKU/GALA3D
[ICML 2024] GALA3D: Towards Text-to-3D Complex Scene Generation via Layout-guided Generative Gaussian Splatting
PKU-YuanGroup/Cycle3D
[AAAI 2025🔥] Official implementation of Cycle3D: High-quality and Consistent Image-to-3D Generation via Generation-Reconstruction Cycle
ingra14m/Awesome-Inverse-Rendering
A collection of papers on neural field-based inverse rendering.
zhengzhang01/Pixel-GS
[ECCV 2024] Pixel-GS Density Control with Pixel-aware Gradient for 3D Gaussian Splatting
yanqinJiang/Animate3D
[NeurIPS 2024] Animate3D: Animating Any 3D Model with Multi-view Video Diffusion
alibaba-yuanjing-aigclab/GeoLRM
[NeurIPS 2024] Geometry-Aware Large Reconstruction Model for Efficient and High-Quality 3D Generation
lyndonzheng/Free3D
[CVPR'24] Consistent Novel View Synthesis without 3D Representation
coltonstearns/dynamic-gaussian-marbles
iSEE-Laboratory/DiffUIR
[CVPR 2024] Official implementation of Selective Hourglass Mapping for Universal Image Restoration Based on Diffusion Model
huanngzh/EpiDiff
[CVPR 2024] EpiDiff: Enhancing Multi-View Synthesis via Localized Epipolar-Constrained Diffusion
silent-chen/DGE
[ECCV 2024] DGE: Direct Gaussian 3D Editing by Consistent Multi-view Editing
Guaishou74851/SCNet
(IJCV 2024) Self-Supervised Scalable Deep Compressed Sensing [PyTorch]
ashawkey/vscode-mesh-viewer
A 3D mesh viewer for vscode
TingtingLiao/unique3d-diffusion
MyNiuuu/RS-NeRF
[ECCV 2024] RS-NeRF: Neural Radiance Fields from Rolling Shutter Images
Guaishou74851/DCCM
(Nature Communications Engineering 2024) Compressive Confocal Microscopy Imaging at the Single-Photon Level with Ultra-Low Sampling Ratios [PyTorch]
lwq20020127/ResVR
[ACM MM 2024 Oral] ResVR: Joint Rescaling and Viewport Rendering of Omnidirectional Images
Guaishou74851/DPC-DUN
Dynamic Path-Controllable Deep Unfolding Network for Compressive Sensing (IEEE TIP 2023)