ComfyUI nodes to use InstanceDiffusion.
Original research repo: https://github.com/frank-xwang/InstanceDiffusion
Clone or download this repo into your ComfyUI/custom_nodes/ directory.
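For reference, the expected layout after installation looks like the sketch below (the folder name depends on how you clone or extract the repo; ComfyUI-InstanceDiffusion is assumed here):

```
ComfyUI/
└── custom_nodes/
    └── ComfyUI-InstanceDiffusion/   # this repo
```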
There are no Python package requirements outside of the standard ComfyUI requirements at this time.
The original models were trained by frank-xwang baked into a Stable Diffusion 1.5 checkpoint. They have been spliced out into individual model files so they can be used with other SD 1.5 checkpoints.
Download each of these checkpoints and place it into its Installation Directory (listed below) under the ComfyUI/models/ directory.
| Model Name | URL | Installation Directory |
|---|---|---|
| fusers.ckpt | huggingface | instance_models/fuser_models/ |
| positionnet.ckpt | huggingface | instance_models/positionnet_models/ |
| scaleu.ckpt | huggingface | instance_models/scaleu_models/ |
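If the nodes fail to find the models, a quick sanity check is to verify the files landed in the right place. A minimal sketch, run from the directory containing ComfyUI, with paths taken from the table above:

```python
import os

# Expected InstanceDiffusion model locations, per the table above.
MODEL_PATHS = [
    "ComfyUI/models/instance_models/fuser_models/fusers.ckpt",
    "ComfyUI/models/instance_models/positionnet_models/positionnet.ckpt",
    "ComfyUI/models/instance_models/scaleu_models/scaleu.ckpt",
]

for path in MODEL_PATHS:
    status = "found" if os.path.isfile(path) else "MISSING"
    print(f"{status}: {path}")
```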
Text2Vid example using Kijai's Spline Editor
Example workflows can be found in the example_workflows/ directory.
(Video demos: fourpeople.mp4, combined.mp4)
InstanceDiffusion supports a wide range of inputs. The following input types do not yet have nodes to convert them into InstanceDiffusion conditioning:
- Scribbles
- Points
- Segments
- Masks
Support for points, segments, and masks is planned once proper tracking for these input types is implemented in ComfyUI.
Thanks to:
- frank-xwang for creating the original repo, training the models, etc.
- Kosinkadink for creating AnimateDiff-Evolved and providing support on integration
- Kijai for improving the speed and adding tracking nodes