An extensive node suite that enables ComfyUI to process 3D inputs (mesh, UV texture, etc.) using cutting-edge algorithms (3DGS, NeRF, differentiable rendering, SDS/VSD optimization, etc.)
Features — Roadmap — Install — Run — Tips
For use cases, please check the Example Workflows. [Last update: 11/02/2024]
- Note: you need to put the Example Inputs Files & Folders under the ComfyUI Root Directory\ComfyUI\input folder before you can run the example workflows
-
Large Multiview Gaussian Model: 3DTopia/LGM
-
Generates a 3D Gaussian from a single image in less than 30 seconds on an RTX 3080 GPU; the 3D Gaussian can then be converted to a mesh
-
Triplane Gaussian Transformers: VAST-AI-Research/TriplaneGaussian
-
Generates a 3D Gaussian from a single image in less than 10 seconds on an RTX 3080 GPU; the 3D Gaussian can then be converted to a mesh
-
Preview 3DGS and 3D Mesh: 3D visualization inside ComfyUI
-
Stack Orbit Camera Poses: automatically generate a full range of camera pose combinations
-
You can use it to condition StableZero123 (you need to download the checkpoint first) with a full range of camera poses in one prompt pass
-
You can use it to generate orbit camera poses and feed them directly into other 3D processing nodes (e.g., GaussianSplatting and BakeTextureToMesh)
-
Example usage:
-
Coordinate system:
- Azimuth: in the top view, rotating 360 degrees from angle 0 with step -90 gives (0, -90, -180/180, 90, 0); in this case the camera rotates clockwise, and vice versa.
- Elevation: 0 when the camera points horizontally forward; pointing down towards the ground is a negative angle, and vice versa. (A pose-stacking sketch follows below.)
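Below is a minimal sketch (not the node's actual implementation) of how such azimuth/elevation combinations could be enumerated; the function name and its parameters are illustrative assumptions:

```python
# Hypothetical helper illustrating the pose-stacking idea described above.
def stack_orbit_poses(azim_start=0.0, azim_step=-90.0, azim_count=4,
                      elev_start=0.0, elev_step=0.0, elev_count=1):
    """Return a list of (azimuth, elevation) pairs in degrees.

    With azim_step=-90 and azim_count=4 the azimuths become
    0, -90, -180, -270 -> wrapped to (0, -90, 180, 90), i.e. a clockwise
    orbit in the top view, matching the convention above.
    """
    poses = []
    for i in range(elev_count):
        elev = elev_start + i * elev_step
        for j in range(azim_count):
            azim = azim_start + j * azim_step
            azim = ((azim + 180.0) % 360.0) - 180.0   # wrap into [-180, 180)
            if azim == -180.0:
                azim = 180.0                          # report -180 as 180
            poses.append((azim, elev))
    return poses

print(stack_orbit_poses())  # [(0.0, 0.0), (-90.0, 0.0), (180.0, 0.0), (90.0, 0.0)]
```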
-
3D Gaussian Splatting
- Improved Differential Gaussian Rasterization
- Better compactness-based densification method from Gsgen
- Supports initializing Gaussians from a given 3D mesh (optional)
- Supports mini-batch optimization
- Multi-view images as inputs
- Supports export to the standard 3DGS .ply format (see the loading sketch below)
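As a rough illustration of what the exported file contains, here is a minimal sketch that loads such a .ply with the plyfile package. The property names follow the original 3D Gaussian Splatting reference implementation; the helper name and the example file name are hypothetical:

```python
import numpy as np
from plyfile import PlyData

def load_gaussian_ply(path):
    # Read the vertex element of a standard 3DGS .ply export
    vertex = PlyData.read(path)['vertex']
    xyz = np.stack([vertex['x'], vertex['y'], vertex['z']], axis=-1)       # Gaussian centers
    opacity = np.asarray(vertex['opacity'])                                # pre-sigmoid logits
    scales = np.stack([vertex[f'scale_{i}'] for i in range(3)], axis=-1)   # log scales
    rots = np.stack([vertex[f'rot_{i}'] for i in range(4)], axis=-1)       # quaternions
    sh_dc = np.stack([vertex[f'f_dc_{i}'] for i in range(3)], axis=-1)     # base SH color
    return xyz, opacity, scales, rots, sh_dc

# Usage (hypothetical file name):
# xyz, opacity, scales, rots, sh_dc = load_gaussian_ply("exported_3dgs.ply")
```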
-
Gaussian Splatting Orbit Renderer
- Renders a 3DGS file to an image sequence or video, given camera poses generated by the Stack Orbit Camera Poses node
-
Bake multi-view images into the UV texture of a given 3D mesh using Nvdiffrast (see the sketch below); supports:
- Export to .obj, .ply, .glb
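For context, here is a minimal sketch of the general idea behind baking a texture with differentiable rasterization; it is not the node's actual implementation, and the tensor names, shapes, and hyperparameters are illustrative assumptions:

```python
import torch
import nvdiffrast.torch as dr

def bake_texture(verts_clip, tris, uvs, uv_idx, target_views,
                 tex_res=1024, img_res=512, iters=500, lr=1e-2):
    """verts_clip: [B, V, 4] float32 clip-space vertex positions (one per view),
    tris / uv_idx: [F, 3] int32 triangle indices, uvs: [1, V', 2] UV coordinates,
    target_views: [B, img_res, img_res, 3] multi-view target images (all on CUDA)."""
    glctx = dr.RasterizeCudaContext()
    texture = torch.full((1, tex_res, tex_res, 3), 0.5, device='cuda', requires_grad=True)
    optimizer = torch.optim.Adam([texture], lr=lr)
    for _ in range(iters):
        rast, _ = dr.rasterize(glctx, verts_clip, tris, resolution=[img_res, img_res])
        texc, _ = dr.interpolate(uvs, rast, uv_idx)               # per-pixel UV coordinates
        color = dr.texture(texture, texc, filter_mode='linear')   # sample the learnable texture
        mask = torch.clamp(rast[..., 3:], 0, 1)                   # 0 where no triangle was hit
        loss = torch.nn.functional.mse_loss(color * mask, target_views * mask)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return texture.detach()
```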
-
Deep Marching Tetrahedra
- Allows converting a 3DGS .ply file to a 3D mesh
Note: I haven't spent time tuning the hyperparameters yet; the results will improve in the future!
-
Save & Load 3D file
- .obj, .ply, .glb for 3D Mesh
- .ply for 3DGS
-
Switch Axis for 3DGS & 3D Mesh
- Different algorithms often use different coordinate systems, so the ability to remap coordinate axes is crucial for passing generated results between different nodes (see the sketch below).
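A minimal sketch of the idea (not the package's actual node); the function and its default arguments are illustrative:

```python
import numpy as np

def switch_axes(points, order=(0, 2, 1), flip=(1.0, -1.0, 1.0)):
    """points: (N, 3) array; 'order' permutes the axes and 'flip' negates them.

    The defaults map (x, y, z) -> (x, -z, y), i.e. a Y-up point cloud or mesh
    becomes Z-up."""
    points = np.asarray(points, dtype=np.float32)
    return points[:, list(order)] * np.asarray(flip, dtype=np.float32)

print(switch_axes([[1.0, 2.0, 3.0]]))  # [[ 1. -3.  2.]]
```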
-
Add DMTet algorithm to allow conversion from point clouds (Gaussian/.ply) to meshes (.obj, .ply, .glb)
-
Add an interactive 3D UI inside ComfyUI to visualize training and generated results for 3D representations
-
Add a new node to generate rendered image sequences given 3D Gaussians and orbit camera poses (so we can later feed them to the differentiable renderer to bake them onto a given mesh)
-
Integrate LGM: Large Multi-View Gaussian Model for High-Resolution 3D Content Creation
-
Add a general SDS/ISM optimization algorithm to allow training 3D representations with diffusion models. The real fun starts here ;)
- Need to do some in-depth research on Interval Score Matching (ISM), since the math behind it makes perfect sense and there are many ways we could improve upon the results obtained from LucidDreamer
-
Improve 3DGS-to-mesh conversion algorithms:
- Support training DMTet with images (RGB, alpha, normal map)
- Find better methods to convert 3DGS or point clouds to meshes (normal map reconstruction maybe?)
-
Add Structure-from-Motion (SfM) initialization for 3DGS (better first guess -> faster convergence & better results)
-
Add a few of the best NeRF algorithms (no idea which yet; instant-ngp maybe?)
[IMPORTANT!!!]
Currently this package has only been tested in the following setups:
- Windows 10/11 (Tested on my laptop)
- Ubuntu 23.10 (Tested by @watsieboi)
- ComfyUI python_embed/Miniconda/Conda Python 3.11.x
- Torch version >= 2.1.2+cu121 (you can verify your environment with the quick check below)
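A quick way to confirm the Python environment ComfyUI uses matches the tested setup above (run inside that environment):

```python
import torch

print(torch.__version__)          # expect >= 2.1.2+cu121
print(torch.version.cuda)         # expect 12.1
print(torch.cuda.is_available())  # expect True
```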
This assumes you have already downloaded ComfyUI and configured your CUDA environment.
Note: I've only built the packages with windows_python3.11_cuda12.1, which is ComfyUI Windows Portable's default setup.
First install Visual Studio Build Tools 2022/2019 with the workload "Desktop development with C++" (there are a few JIT torch C++ extensions that are built at runtime).
-
Alternatively, according to @doctorpangloss, you can set up the C++/CUDA build environment on Windows using Chocolatey with the following commands:
# using git bash for the sake of simplicity
# enable developer mode
# google this: allow os.symlink on windows by adding your username to the local security policy entry for it.
# you will have to restart your computer
# install chocolatey using powershell, then install the prereqs for compilation on Windows
choco install -y visualstudio2022buildtools
choco install -y visualstudio2022-workload-vctools --package-parameters "--add Microsoft.VisualStudio.Component.VC.Llvm.ClangToolset --add Microsoft.VisualStudio.Component.VC.Llvm.Clang"
Go to the Comfy3D root directory: ComfyUI Root Directory\ComfyUI\custom_nodes\ComfyUI-3D-Pack and run:
install_windows_portable_win_py311_cu121.bat
Note: in some edge cases where Miniconda fails, Anaconda could fix the issue.
First download Miniconda (one of the best ways to manage clean, separated Python environments).
Then run the following commands to set up the Miniconda environment for ComfyUI:
# Go to your ComfyUI root directory; for my example:
cd C:\Users\reall\Softwares\ComfyUI_windows_portable
conda create -p ./python_miniconda_env/ComfyUI python=3.11
# conda will tell what command to use to activate the env
conda activate C:\Users\reall\Softwares\ComfyUI_windows_portable\python_miniconda_env\ComfyUI
# update pip
python -m pip install --upgrade pip
# You can use the following command to install CUDA only in the Miniconda environment you just created, if you don't want to download and install it manually & globally:
# conda install -c "nvidia/label/cuda-12.1.0" cuda-toolkit
# Install the main packages
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu121
pip install -r ./ComfyUI/requirements.txt
# Then go to the ComfyUI-3D-Pack directory under ComfyUI Root Directory\ComfyUI\custom_nodes; for my example it is:
cd C:\Users\reall\Softwares\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-3D-Pack
- Alternatively, you can check this tutorial: Installing ComfyUI with Miniconda on Windows and Mac
Go to the Comfy3D root directory: ComfyUI Root Directory\ComfyUI\custom_nodes\ComfyUI-3D-Pack and run:
install_miniconda.bat
In case install_miniconda.bat does not work on your OS, you can also run the following commands under the same directory (works with Linux & macOS):
pip install -r requirements.txt
pip install -r requirements_post.txt
Plus:
- For those who want to run it inside Google Colab, you can check the install instructions from @lovisdotio
Copy the files inside the __New_ComfyUI_Bats folder to your ComfyUI root directory, and double-click run_nvidia_gpu_miniconda.bat to start ComfyUI!
- Alternatively, you can activate the Conda env python_miniconda_env\ComfyUI, go to your ComfyUI root directory, and run the command python ./ComfyUI/main.py
- The world & camera coordinate system is the same as OpenGL:
World                Camera

     +y              up  target
     |               |  /
     |               | /
     |______+x       |/______right
    /                /
   /                /
  /                /
 +z               forward
elevation: in (-90, 90), from +y to -y is (-90, 90)
azimuth: in (-180, 180), from +z to +x is (0, 90)
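Under this convention, here is a minimal sketch (not the package's own camera code) of turning elevation/azimuth into a camera position on an orbit around the origin, assuming the angles describe the direction from the target to the camera:

```python
import math

def orbit_camera_position(elevation_deg, azimuth_deg, radius=2.0):
    """Azimuth 0 places the camera on +z, azimuth 90 on +x; a negative
    elevation lifts the camera above the target (+y), per the convention above."""
    elev = math.radians(elevation_deg)
    azim = math.radians(azimuth_deg)
    x = radius * math.cos(elev) * math.sin(azim)
    y = -radius * math.sin(elev)
    z = radius * math.cos(elev) * math.cos(azim)
    return (x, y, z)

print(orbit_camera_position(0, 0))    # (0.0, 0.0, 2.0)  -> camera on +z
print(orbit_camera_position(-90, 0))  # (0.0, 2.0, ~0.0) -> camera directly above
```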
- If you encounter OpenGL errors (e.g., [F glutil.cpp:338] eglInitialize() failed), then set force_cuda_rasterize to true on the corresponding node.
- If, after installation, your ComfyUI gets stuck while starting or running, you can follow the instructions in this link to solve the problem: Code Hangs Indefinitely When Evaluating Neuron Models on GPU