Is there a specific setting for Lora or model to work with the dynamic thresholding extension?
quantumscience opened this issue · 1 comment
How do you create a LoRA or a model that works with dynamic thresholding? Does it need to be a specific type or setting? Some LoRAs work but some overbake quickly; is there an underlying reason for that? If there is, can you please add info to the main repository readme? Great extension btw.
Nothing specific is needed to make it work.
The key thing is that bumping up the CFG Scale amplifies everything, and this extension counteracts that by rebalancing.
I.e., if a model/LoRA is overtrained to the point that it slightly overbakes normally, a higher CFG scale will overbake very strongly, and there's less the extension can do to counteract it.
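To illustrate the rebalancing idea, here's a rough sketch in the style of Imagen's dynamic thresholding applied to the CFG output. This is not the extension's actual code (the extension operates on latents and has more settings, like mimicking a lower CFG scale); the tensor names, the percentile value, and the assumption that values should sit roughly in [-1, 1] are all illustrative.

```python
import torch

def cfg_with_dynamic_threshold(uncond, cond, cfg_scale, percentile=0.995):
    # Classifier-free guidance: a higher cfg_scale amplifies the (cond - uncond)
    # direction, which also amplifies any "overbaked" tendencies of the model/LoRA.
    x = uncond + cfg_scale * (cond - uncond)

    # Dynamic thresholding: pick a per-sample threshold s at a high percentile
    # of |x|, clamp to [-s, s], and rescale back toward the expected range.
    flat = x.flatten(1).abs()
    s = torch.quantile(flat, percentile, dim=1)
    s = torch.clamp(s, min=1.0).view(-1, *([1] * (x.dim() - 1)))
    return x.clamp(-s, s) / s
```

The point of the sketch is just that the clamp-and-rescale step can only do so much once cfg_scale has pushed the output far out of range, which is why an overtrained LoRA still overbakes at high CFG even with the extension active.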
In the case of LoRAs, you might try just reducing the LoRA strength a bit, e.g. <lora:myloraname:0.5> instead of the default 1.0.
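For example, in the WebUI prompt box (the LoRA name here is just a placeholder, as above):

```
a painting of a castle, detailed, high quality, <lora:myloraname:0.5>
```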
If you're training the models yourself, try a lower learning rate, fewer steps, or more unique training images. None of that's specific to use with DynThresh, but anything that reduces model corruption naturally helps in all cases.