bloc97/CrossAttentionControl

Can't run the notebook in Google Colab, some issues with versions.

WannaFIy opened this issue · 2 comments

Some issues with LMSDiscreteScheduler and new_attention: the patched function still requires sequence_length and dim,
def new_attention(self, query, key, value, sequence_length, dim):
but diffusers/models/attention.py now calls it with only three arguments,
hidden_states = self._attention(query, key, value)

/usr/local/lib/python3.7/dist-packages/diffusers/models/attention.py in forward(self, hidden_states, context)
    196     def forward(self, hidden_states, context=None):
    197         hidden_states = hidden_states.contiguous() if hidden_states.device.type == "mps" else hidden_states
--> 198         hidden_states = self.attn1(self.norm1(hidden_states)) + hidden_states
    199         hidden_states = self.attn2(self.norm2(hidden_states), context=context) + hidden_states
    200         hidden_states = self.ff(self.norm3(hidden_states)) + hidden_states

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1128         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1129                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130             return forward_call(*input, **kwargs)
   1131         # Do not call functions when jit is used
   1132         full_backward_hooks, non_full_backward_hooks = [], []

/usr/local/lib/python3.7/dist-packages/diffusers/models/attention.py in forward(self, hidden_states, context, mask)
    268 
    269         if self._slice_size is None or query.shape[0] // self._slice_size == 1:
--> 270             hidden_states = self._attention(query, key, value)
    271         else:
    272             hidden_states = self._sliced_attention(query, key, value, sequence_length, dim)

TypeError: new_attention() missing 2 required positional arguments: 'sequence_length' and 'dim'
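For context, one way to bridge the two call patterns is to give the extra arguments defaults and reconstruct them from the query tensor when they are not supplied. The helper below is only an illustrative sketch, not part of this repository; the shape assumptions (query laid out as (batch * heads, seq_len, head_dim) and CrossAttention exposing self.heads) reflect how diffusers 0.3.x/0.4.x appear to organize attention and may need adjusting.

```python
def make_signature_compatible(attn_fn):
    """Wrap an attention override written for the old
    _attention(query, key, value, sequence_length, dim) signature so that the
    newer _attention(query, key, value) call also works.
    (Hypothetical helper, not from the notebook.)"""
    def wrapper(self, query, key, value, sequence_length=None, dim=None):
        if sequence_length is None:
            # Assumption: query is (batch * heads, seq_len, head_dim),
            # so the sequence length is the second dimension.
            sequence_length = query.shape[1]
        if dim is None:
            # Assumption: `dim` was the full inner dimension,
            # i.e. head_dim * number of heads.
            dim = query.shape[-1] * self.heads
        return attn_fn(self, query, key, value, sequence_length, dim)
    return wrapper

# Usage sketch: monkey-patch the wrapped function instead of new_attention directly,
# e.g. CrossAttention._attention = make_signature_compatible(new_attention)
```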

I'll check what changed; for now you can try reverting to diffusers==0.3.0. It seems that v0.4.0 made significant changes to the model architecture.
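In a Colab cell, pinning the older release (and then restarting the runtime so the already-imported package is reloaded) would look like:

```python
!pip install diffusers==0.3.0
```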

The code has been updated for diffusers==0.4.1. Any new errors/bugs can be discussed in a new issue.
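To run the updated notebook, installing the matching release and checking the reported version is a reasonable sanity check (Colab cell):

```python
!pip install diffusers==0.4.1

import diffusers
print(diffusers.__version__)  # should print 0.4.1
```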