3d-generation
There are 74 repositories under the 3d-generation topic.
xxlong0/Wonder3D
Single Image to 3D using Cross-Domain Diffusion for 3D Generation
deepseek-ai/DreamCraft3D
[ICLR 2024] Official implementation of DreamCraft3D: Hierarchical 3D Generation with Bootstrapped Diffusion Prior
junshutang/Make-It-3D
[ICCV 2023] Make-It-3D: High-Fidelity 3D Creation from A Single Image with Diffusion Prior
One-2-3-45/One-2-3-45
[NeurIPS 2023] Official code of "One-2-3-45: Any Single Image to 3D Mesh in 45 Seconds without Per-Shape Optimization"
guochengqian/Magic123
[ICLR 2024] Official PyTorch Implementation of Magic123: One Image to High-Quality 3D Object Generation Using Both 2D and 3D Diffusion Priors
OpenMotionLab/MotionGPT
[NeurIPS 2023] MotionGPT: Human Motion as a Foreign Language, a unified motion-language generation model using LLMs
hongfz16/AvatarCLIP
[SIGGRAPH 2022 Journal Track] AvatarCLIP: Zero-Shot Text-Driven Generation and Animation of 3D Avatars
lukasHoel/text2room
[ICCV 2023] Text2Room generates textured 3D meshes from a given text prompt using 2D text-to-image models
mingyuan-zhang/MotionDiffuse
MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model
EricGuo5513/momask-codes
Official implementation of "MoMask: Generative Masked Modeling of 3D Human Motions (CVPR2024)"
hzxie/CityDreamer
The official implementation of "CityDreamer: Compositional Generative Model of Unbounded 3D Cities". (Xie et al., CVPR 2024)
FrozenBurning/Text2Light
[SIGGRAPH Asia 2022] Text2Light: Zero-Shot Text-Driven HDR Panorama Generation
baaivision/GeoDream
GeoDream: Disentangling 2D and Geometric Priors for High-Fidelity and Consistent 3D Generation
ChenFengYe/motion-latent-diffusion
[CVPR 2023] Executing your Commands via Motion Diffusion in Latent Space, a fast and high-quality motion diffusion model
buaacyw/MeshAnythingV2
From anything to meshes like those made by human artists. Official implementation of "MeshAnything V2: Artist-Created Mesh Generation With Adjacent Mesh Tokenization"
pals-ttic/sjc
Score Jacobian Chaining: Lifting Pretrained 2D Diffusion Models for 3D Generation (CVPR 2023)
shengyu-meng/dreamfields-3D
A Colab-friendly toolkit that generates 3D mesh models, videos, NeRF instances, and multi-view images of colourful 3D objects from text and image prompts, based on DreamFields.
menyifang/En3D
Official implementation of "En3D: An Enhanced Generative Model for Sculpting 3D Humans from 2D Synthetic Data"
Kobaayyy/Awesome-CVPR2024-ECCV2024-AIGC
A Collection of Papers and Codes for CVPR2024/ECCV2024 AIGC
wyysf-98/CraftsMan
CraftsMan: High-fidelity Mesh Generation with 3D Native Diffusion and Interactive Geometry Refiner
zhizdev/sparsefusion
[CVPR 2023] SparseFusion: Distilling View-conditioned Diffusion for 3D Reconstruction
Pointcept/GPT4Point
[CVPR 2024 Highlight] GPT4Point: A Unified Framework for Point-Language Understanding and Generation.
GaussianCube/GaussianCube
GaussianCube: A Structured and Explicit Radiance Representation for 3D Generative Modeling
mdyao/Awesome-3D-AIGC
A curated list of papers and open-source resources focused on 3D AIGC.
kxhit/EscherNet
[CVPR 2024 Oral] EscherNet: A Generative Model for Scalable View Synthesis
nv-tlabs/XCube
[CVPR 2024 Highlight] XCube: Large-Scale 3D Generative Modeling using Sparse Voxel Hierarchies
zj-dong/AG3D
Official code release for ICCV2023 paper AG3D: Learning to Generate 3D Avatars from 2D Image Collections
yanqinJiang/Consistent4D
[ICLR 2024] Official Implementation of Consistent4D: Consistent 360° Dynamic Object Generation from Monocular Video
lzzcd001/GShell
Official implementation of "Ghost on the Shell: An Expressive Representation of General 3D Shapes" (ICLR 2024 Oral)
OpenMeshLab/MeshXL
MeshXL: Neural Coordinate Field for Generative 3D Foundation Models, a 3D foundation model for mesh generation
WU-CVGL/MVControl
Official implementation of "Controllable Text-to-3D Generation via Surface-Aligned Gaussian Splatting"
Amir-Arsalan/Synthesize3DviaDepthOrSil
[CVPR 2017] Generation and reconstruction of 3D shapes via modeling multi-view depth maps or silhouettes
NIRVANALAN/LN3Diff
[ECCV 2024] LN3Diff creates high-quality 3D object meshes from text within 8 V100 seconds.
Sin3DM/Sin3DM
A diffusion model trained on a single 3D shape
zxhuang1698/ZeroShape
Code repository for "ZeroShape: Regression-based Zero-shot Shape Reconstruction".
FrozenBurning/PrimDiffusion
[NeurIPS 2023] PrimDiffusion: Volumetric Primitives Diffusion for 3D Human Generation