xdit-project/xDiT
xDiT: A Scalable Inference Engine for Diffusion Transformers (DiTs) with Massive Parallelism
Python · Apache-2.0
Issues
[Support] Request for supporting SD3.5 Medium
#372 opened by pprp - 6
Does xDiT have plans to support TeaCache?
#435 opened by DefTruth - 1
question about training pipeline
#440 opened by bendanzzc - 1
HTTP server crashes on "paralleling scheduler."
#441 opened by Aktsvigun - 1
huggingface-cli models
#433 opened by kazakovaanastasia - 1
quantization support
#436 opened by Andy0422 - 1
!bash examples/run_consisid_usp.sh error: cannot import name 'ConsisIDPipeline' from 'diffusers' (Python version/env maybe?)
#432 opened by kazakovaanastasia - 1
Any plans to support AMD GPUs?
#437 opened by rtlinux - 3
How to use xDiT to speed up cogvideox-5B-v1.5-I2V?
#385 opened by wen020 - 2
How to configure or replace sample
#425 opened by lcax200000 - 1
PixArt model torchrun error
#430 opened by kazakovaanastasia - 0
Run always gets stuck on INFO 01-12 20:17:39 [base_pipeline.py:343] Scheduler found, paralleling scheduler...
#431 opened by kazakovaanastasia - 3
Cogvideo run
#429 opened by kazakovaanastasia - 4
How to add DiTFastAttn to our custom model? Do I need to write a pipeline, or will it work if I just swap out some attention layers/processors?
#420 opened by asahni04 - 9
When height and width change, the inference speed will significantly slow down.
#423 opened by serend1p1ty - 3
Add Support for LTXV
#421 opened by eranlevinlt - 1
Plans for ComfyUI multi-GPU parallelism support and ways to collaborate
#419 opened by joeshow79 - 11
CogVideo import error
#417 opened by ZichengDuan - 3
USP latency test
#416 opened by eppane - 1
FLUX.1-dev multi-node run error
#409 opened by etersin - 2
Do you have plans to support SANA?
#393 opened by xieenze - 1
xDiT supports EasyAnimate
#406 opened by paradiseHIT - 3
Failed to load flux.1-dev with enable_sequential_cpu_offload and use_fp8_t5_encoder (4090)
#407 opened by WeiboXu - 5
Error occurred when setting up the environment
#388 opened by TTTanger - 3
Error when running Flux1.0 dev and HunyuanDiT-v1.2-Diffusers with multiple prompts
#398 opened by henryhe4004 - 3
Flux.1 Hopper Performance
#378 opened by thomasbtnfr - 0
[feature] apply USP+FSDP to reduce memory overhead
#400 opened by feifeibear - 2
Reasons for the inference speed difference vs. diffusers on a single GPU without parallelism?
#392 opened by xyyan0123 - 1
Add support for ConsisID
#389 opened by SHYuanBest - 2
max_sequence_length leads to an error
#380 opened by fy1214 - 4
Can comfyui-xdit run on multiple servers?
#350 opened by VincentXWD - 2
host.py gets an error
#373 opened by fy1214 - 4
Time compute
#370 opened by thomasbtnfr - 3
RuntimeError: CUDA error
#374 opened by algorithmconquer - 12
flux_example.py gets stuck
#363 opened by fy1214 - 5
Some questions about PipeFusion
#360 opened by ictzyqq - 4
flux-dev run error
#354 opened by algorithmconquer - 3
Why does CogVideo only support 4-GPU DiT parallelism?
#358 opened by Bensong0506 - 2
TypeError: FluxPipeline.__call__() got an unexpected keyword argument 'negative_prompt'
#347 opened by ictzyqq - 4
How about quantized models?
#344 opened by wxsms - 1
Stable Diffusion 3: enabling parallelization during inference starts multiple processes on GPU 0; is this normal?
#340 opened by westnight