Extraltodeus/ComfyUI-AutomaticCFG

Generation freezes with DirectML (AMD GPUs)

patientx opened this issue · 12 comments

Hi there. I tried your extension thinking it might somehow improve using SDXL with Lightning LoRAs, since with those the maximum usable CFG is 2 (normally 1 is recommended); with a normal CFG like 5-6 the output is garbled and unusable. And yes, with your extension I was able to use a CFG of 5.0 and the output was not garbled, maybe actually better than the normal setup (SDXL model with a 4-step Lightning LoRA at CFG 1). BUT there is a problem: I was able to make it work only once, and now I think I know why:

Your extension is all about sigma values, yes? There is a problem with DirectML, as you can see here with the IPAdapter Plus extension for ComfyUI (cubiq/ComfyUI_IPAdapter_plus#109).

Until a few days ago we were just disabling some lines to make it work with AMD GPUs, and now the author has solved it (hopefully); maybe you can see what they did and solve it here the same way.

Thanks.

I am really unsure what the source of your issue is.

With an AMD GPU (RX 6600) using DirectML on Windows, generation stops at the second or third step if I use your node, regardless of the model etc. used. Before this, it only occurred with the IPAdapter extension, and that was solved recently.

AMD here too (RX 580). I don't think the problem is in this plugin; I just downloaded and tested it (SDXL + Lightning) and it worked right away. Is your ComfyUI updated?

It seems I spoke too soon: it freezes (randomly) on me too. I narrowed it down to the line:

max_val = torch.mean(max_values).item()

I changed it similarly to the fix for IPAdapter:

max_val = torch.mean(max_values).detach().cpu().numpy()

and it seems to not be freezing anymore, but since it only froze sometimes, more tests are required to be sure.
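
For context, here is a minimal, self-contained sketch of the pattern (the tensor here is a stand-in, not the extension's actual data): under torch-directml, pulling a scalar out of a device tensor with .item() can hang, while an explicit detach-and-copy to the CPU avoids the stall.

import torch

# Stand-in for the per-step tensor the node averages; on an AMD card this
# would live on the DirectML device rather than the CPU.
max_values = torch.rand(16)

# Original line -- .item() forces a device-to-host sync that can freeze
# under torch-directml:
# max_val = torch.mean(max_values).item()

# Workaround, same shape as the IPAdapter fix -- detach, copy to the CPU
# explicitly, then read the value out of the resulting 0-d numpy array:
max_val = torch.mean(max_values).detach().cpu().numpy()
print(float(max_val))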

Yes! This works, and combined with this it actually makes the 4-step gens with Lightning models much more detailed.

Is it possible to add it to the original code so we don't need to change it every time?

Well, the problem is that it transfers the value to the CPU, so I'll check if it would be possible to use the comfy torch device instead.

If you add this import at the beginning of the script:

import comfy.model_management as model_management

and use this instead:

max_val = torch.mean(max_values).to(device=model_management.get_torch_device())

Does it work?

I made these changes, and after two gens it was the same situation: generation just stops. So this didn't solve it. The one "MythicalChu" suggested works, though; I've used it a lot of times.
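
One plausible reading of why the .to(device) variant doesn't help (my interpretation, not something confirmed in the thread): the mean is already on the sampling device, so the transfer is a no-op and the eventual device-to-host read still stalls. A hypothetical device-aware helper, sketched under that assumption, would only take the CPU round-trip on backends where .item() is unreliable:

import torch

def mean_as_float(t: torch.Tensor) -> float:
    # Hypothetical helper, not part of the extension's code.
    m = torch.mean(t)
    if m.device.type in ("cpu", "cuda"):
        # Fast path for backends where .item() is known to be safe.
        return m.item()
    # torch-directml registers as the "privateuseone" device type; take the
    # explicit host copy there to avoid the hang.
    return float(m.detach().cpu().numpy())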

Also, not related to this, but there is a warning which shows up at every generation step. It isn't doing anything bad, it just keeps showing up:

"WARNING: The comfy.samplers.calc_cond_uncond_batch function is deprecated please use the calc_cond_batch one instead."

"WARNING: The comfy.samplers.calc_cond_uncond_batch function is deprecated please use the calc_cond_batch one instead."

You need to update to the latest version.

torch.mean(max_values).detach().cpu().numpy()

That solved it. I just saw the patch notes on ComfyUI as well; they were warning about the change too.
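
For anyone hitting the same warning: as far as I can tell from ComfyUI's deprecation shim at the time, the new function takes a list of conds instead of separate cond/uncond arguments. The exact signature below is my reading of the code, not something stated in this thread.

import comfy.samplers  # only importable inside a ComfyUI environment

def cfg_pair(model, cond, uncond, x_in, timestep, model_options):
    # Old, deprecated call -- prints the warning above on every step:
    # return comfy.samplers.calc_cond_uncond_batch(
    #     model, cond, uncond, x_in, timestep, model_options)

    # New call -- pass a list of conds, get predictions back in the same order:
    cond_pred, uncond_pred = comfy.samplers.calc_cond_batch(
        model, [cond, uncond], x_in, timestep, model_options)
    return cond_pred, uncond_pred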

Is this all good now, or is the problem still there?

Solved, I think. I haven't used DirectML for a while now; I'm on ZLUDA these days.

Nice!!! :D