YangLing0818/RPG-DiffusionMaster

Why does this support larger images than the base model was trained on?


I read through the paper, and it doesn't mention why this library can generate images larger than the usual size without fear of double heads or other artifacts.

Is it because of the regional prompting the algorithm performs? What if one of those regions is itself larger than 512x512 when using SD 1.5?
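
To make the concern concrete, here is a minimal sketch (hypothetical helper, not RPG's actual API) of splitting a wide canvas into regions by ratio, roughly the way regional prompting partitions the image, and flagging any region that still exceeds SD 1.5's 512px training resolution:

```python
# Minimal sketch (hypothetical; not RPG's actual API).
BASE_RES = 512  # SD 1.5 was trained at 512x512

def region_widths(canvas_width: int, ratios: list[float]) -> list[int]:
    """Split the canvas horizontally into regions proportional to ratios."""
    total = sum(ratios)
    return [round(canvas_width * r / total) for r in ratios]

canvas_w, canvas_h = 1536, 1024  # a "larger than usual" target size
for w in region_widths(canvas_w, [1, 2, 1]):
    oversized = w > BASE_RES or canvas_h > BASE_RES
    print(f"region {w}x{canvas_h} -> exceeds {BASE_RES}px? {oversized}")
```

In this example the middle region alone is 768x1024, beyond the training resolution in both dimensions, so I don't see how the regional split by itself would prevent duplication artifacts within that region.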