ExponentialML/ComfyUI_Native_DynamiCrafter

CrossAttention.efficient_forward() got an unexpected keyword argument 'value'

Opened this issue · 10 comments

Error occurred when executing KSampler //Inspire:

CrossAttention.efficient_forward() got an unexpected keyword argument 'value'

File "E:\Blender_ComfyUI\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\Blender_ComfyUI\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\Blender_ComfyUI\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "E:\Blender_ComfyUI\ComfyUI\custom_nodes\ComfyUI-Inspire-Pack\inspire\a1111_compat.py", line 77, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise, noise_mode, incremental_seed_mode=batch_seed_mode, variation_seed=variation_seed, variation_strength=variation_strength)
File "E:\Blender_ComfyUI\ComfyUI\custom_nodes\ComfyUI-Inspire-Pack\inspire\a1111_compat.py", line 42, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "E:\Blender_ComfyUI\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
File "E:\Blender_ComfyUI\ComfyUI\custom_nodes\comfyui-diffusion-cg\recenter.py", line 29, in sample_center
return SAMPLE(*args, **kwargs)
File "E:\Blender_ComfyUI\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 267, in motion_sample
return orig_comfy_sample(model, noise, *args, **kwargs)
File "E:\Blender_ComfyUI\ComfyUI\comfy\sample.py", line 100, in sample
samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "E:\Blender_ComfyUI\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 1380, in KSampler_sample
return _KSampler_sample(*args, **kwargs)
File "E:\Blender_ComfyUI\ComfyUI\comfy\samplers.py", line 705, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "E:\Blender_ComfyUI\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 1399, in sample
return _sample(*args, **kwargs)
File "E:\Blender_ComfyUI\ComfyUI\comfy\samplers.py", line 610, in sample
samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "E:\Blender_ComfyUI\ComfyUI\comfy\samplers.py", line 548, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
File "E:\Blender_ComfyUI\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "E:\Blender_ComfyUI\ComfyUI\comfy\k_diffusion\sampling.py", line 137, in sample_euler
denoised = model(x, sigma_hat * s_in, **extra_args)
File "E:\Blender_ComfyUI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in call_impl
return forward_call(*args, **kwargs)
File "E:\Blender_ComfyUI\ComfyUI\comfy\samplers.py", line 286, in forward
out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
File "E:\Blender_ComfyUI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in call_impl
return forward_call(*args, **kwargs)
File "E:\Blender_ComfyUI\ComfyUI\comfy\samplers.py", line 273, in forward
return self.apply_model(*args, **kwargs)
File "E:\Blender_ComfyUI\ComfyUI\custom_nodes\ComfyUI_smZNodes\smZNodes.py", line 1012, in apply_model
out = super().apply_model(*args, **kwargs)
File "E:\Blender_ComfyUI\ComfyUI\comfy\samplers.py", line 270, in apply_model
out = sampling_function(self.inner_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
File "E:\Blender_ComfyUI\ComfyUI\comfy\samplers.py", line 250, in sampling_function
cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond, x, timestep, model_options)
File "E:\Blender_ComfyUI\ComfyUI\comfy\samplers.py", line 222, in calc_cond_uncond_batch
output = model_options['model_function_wrapper'](model.apply_model, {"input": input_x, "timestep": timestep, "c": c, "cond_or_uncond": cond_or_uncond}).chunk(batch_chunks)
File "E:\Blender_ComfyUI\ComfyUI\custom_nodes\ComfyUI_Native_DynamiCrafter\nodes.py", line 128, in _forward
x_out = apply_model(
File "E:\Blender_ComfyUI\ComfyUI\comfy\model_base.py", line 96, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
File "E:\Blender_ComfyUI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\Blender_ComfyUI\ComfyUI\custom_nodes\ComfyUI_Native_DynamiCrafter\lvdm\modules\networks\openaimodel3d.py", line 751, in forward
h = forward_timestep_embed(
File "E:\Blender_ComfyUI\ComfyUI\custom_nodes\ComfyUI_Native_DynamiCrafter\lvdm\modules\networks\openaimodel3d.py", line 38, in forward_timestep_embed
x = layer(x, context)
File "E:\Blender_ComfyUI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\Blender_ComfyUI\ComfyUI\custom_nodes\ComfyUI_Native_DynamiCrafter\lvdm\modules\attention.py", line 554, in forward
x = block(x, context=context, **kwargs)
File "E:\Blender_ComfyUI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\Blender_ComfyUI\ComfyUI\custom_nodes\ComfyUI_Native_DynamiCrafter\lvdm\modules\attention.py", line 364, in forward
return checkpoint(self._forward, input_tuple, self.parameters(), self.checkpoint)
File "E:\Blender_ComfyUI\ComfyUI\custom_nodes\ComfyUI_Native_DynamiCrafter\lvdm\common.py", line 94, in checkpoint
return func(*inputs)
File "E:\Blender_ComfyUI\ComfyUI\custom_nodes\ComfyUI_Native_DynamiCrafter\lvdm\modules\attention.py", line 425, in _forward
n = self.attn1(n, context=context_attn1, value=value_attn1)
File "E:\Blender_ComfyUI\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)

You have an incorrect torch CUDA version installed.

What version of CUDA do I need to install? I installed 12.1 and get the same error.

I have the same problem.
Error occurred when executing KSamplerAdvanced:

CrossAttention.efficient_forward() got an unexpected keyword argument 'value'

File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1403, in sample
return common_ksampler(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise)
File "C:\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1339, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\sampling.py", line 267, in motion_sample
return orig_comfy_sample(model, noise, *args, **kwargs)
File "C:\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 100, in sample
samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "C:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 705, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
File "C:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 610, in sample
samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
File "C:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 548, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
File "C:\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\ComfyUI_windows_portable\ComfyUI\comfy\k_diffusion\sampling.py", line 137, in sample_euler
denoised = model(x, sigma_hat * s_in, **extra_args)
File "C:\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 286, in forward
out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
File "C:\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in call_impl
return forward_call(*args, **kwargs)
File "C:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 273, in forward
return self.apply_model(*args, **kwargs)
File "C:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 270, in apply_model
out = sampling_function(self.inner_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
File "C:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 250, in sampling_function
cond_pred, uncond_pred = calc_cond_uncond_batch(model, cond, uncond, x, timestep, model_options)
File "C:\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 222, in calc_cond_uncond_batch
output = model_options['model_function_wrapper'](model.apply_model, {"input": input_x, "timestep": timestep, "c": c, "cond_or_uncond": cond_or_uncond}).chunk(batch_chunks)
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Native_DynamiCrafter\nodes.py", line 128, in _forward
x_out = apply_model(
File "C:\ComfyUI_windows_portable\ComfyUI\comfy\model_base.py", line 96, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
File "C:\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Native_DynamiCrafter\lvdm\modules\networks\openaimodel3d.py", line 751, in forward
h = forward_timestep_embed(
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Native_DynamiCrafter\lvdm\modules\networks\openaimodel3d.py", line 38, in forward_timestep_embed
x = layer(x, context)
File "C:\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Native_DynamiCrafter\lvdm\modules\attention.py", line 554, in forward
x = block(x, context=context, **kwargs)
File "C:\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Native_DynamiCrafter\lvdm\modules\attention.py", line 364, in forward
return checkpoint(self._forward, input_tuple, self.parameters(), self.checkpoint)
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Native_DynamiCrafter\lvdm\common.py", line 94, in checkpoint
return func(*inputs)
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_Native_DynamiCrafter\lvdm\modules\attention.py", line 425, in _forward
n = self.attn1(n, context=context_attn1, value=value_attn1)
File "C:\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "C:\ComfyUI_windows_portable\python_embeded\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)

I tried reinstalling torch with this one:
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121
but it still didn't work. How can I solve this problem? Thanks.

@hyongqi

  • check that all your ComfyUI nodes are up to date

then:

  1. check which CUDA version is supported by your NVIDIA graphics card/driver
  2. install the torch build that matches it, e.g. if you have CUDA 11.8 you need pip install --upgrade torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118 - see the PyTorch site for the available CUDA builds
  3. make sure you install it into the ComfyUI Python venv and not your global Python environment

The error should go away. A friend and I both had this happen after updating ComfyUI nodes - a process that installed a torch CUDA build that was not compatible with our system CUDA install.
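
If it helps, you can confirm what your driver supports and which torch build ComfyUI is actually running before reinstalling anything. From the folder that contains python_embeded (per the tracebacks above; adjust the path if your install is laid out differently), a quick check is:

  nvidia-smi
  .\python_embeded\python.exe -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"

nvidia-smi reports the highest CUDA version your driver supports, and the second command prints the torch version, the CUDA version that torch build was compiled against, and whether torch can see the GPU at all.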

Same issue. I installed the same CUDA version as torch, but the error still occurs.

Torch version: 2.1.1+cu121
xformers version: 0.0.23
Cuda compilation tools, release 12.1, V12.1.66
Build cuda_12.1.r12.1/compiler.32415258_0

Using PyTorch cross attention instead of xformers can bypass this issue. Edit 'run_nvidia_gpu.bat', add ' --use-pytorch-cross-attention' to the first line, then save and run.

So, it seems something is wrong with xformers.
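
For reference, a default portable install's 'run_nvidia_gpu.bat' is typically just a launch line followed by pause, so after the edit it should look roughly like this (your file may differ slightly; keep whatever other flags are already there):

  .\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --use-pytorch-cross-attention
  pause

With that flag ComfyUI uses its PyTorch cross attention instead of xformers, which appears to be why the efficient_forward() call that the traceback shows rejecting the 'value' keyword is no longer hit.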

@hyongqi

  • check that all your ComfyUI nodes are up to date

then:

  1. check which CUDA version is supported by your NVIDIA graphics card/driver
  2. install the torch build that matches it, e.g. if you have CUDA 11.8 you need pip install --upgrade torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118 - see the PyTorch site for the available CUDA builds
  3. make sure you install it into the ComfyUI Python venv and not your global Python environment

The error should go away. A friend and I both had this happen after updating ComfyUI nodes - a process that installed a torch CUDA build that was not compatible with our system CUDA install.

My CUDA version is 12.1 and xformers is 0.0.24.
torch 2.2.0, torchvision 0.17.0, and torchaudio 2.2.0 were installed successfully, but the error persists.

Using PyTorch cross attention instead of xformers can bypass this issue. Edit 'run_nvidia_gpu.bat', add ' --use-pytorch-cross-attention' to the first line, then save and run.

So, it seems something is wrong with xformers.

You're right. I tried your method and it worked.

Using PyTorch cross attention instead of xformers can bypass this issue. Edit 'run_nvidia_gpu.bat', add ' --use-pytorch-cross-attention' to the first line, then save and run.
So, it seems something is wrong with xformers.

You're right. I tried your method and it worked.

The error persists for me on CUDA 11.8.

Using PyTorch cross attention instead of xformers can bypass this issue. Edit 'run_nvidia_gpu.bat', add ' --use-pytorch-cross-attention' to the first line, then save and run.

So, it seems something is wrong with xformers.

Thank you! Your method solved this problem for me. My current environment is:
Total VRAM 12282 MB, total RAM 32581 MB
xformers version 0.0.21
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 4070 : cudaMallocAsync
VAE dtype: torch.bfloat16
Torch version: 2.0.1+cu118
I hope this helps others who run into this error.