bes-dev/pytorch_clip_guided_loss

Transformers version? TypeError: _build_causal_attention_mask() missing 1 required positional argument: 'dtype'

ImneCurline opened this issue · 0 comments

python == 3.9
torch == 2.0.0

When I run the sample code, I get the error shown in the title.
I found the corresponding position (a method in the transformers library) and added `dtype="double"`.

Then I get the following error:
File "/root/anaconda3/envs/point_e_env/lib/python3.9/site-packages/transformers/models/clip/modeling_clip.py", line 758, in _build_causal_attention_mask
mask = torch.empty(bsz, seq_len, seq_len, dtype=dtype)
TypeError: empty() received an invalid combination of arguments - got (int, int, int, dtype=str), but expected one of:

  • (tuple of ints size, *, tuple of names names, torch.memory_format memory_format, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)
  • (tuple of ints size, *, torch.memory_format memory_format, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)

So is this caused by the transformers version? And how can I fix it?
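The second traceback points at the immediate problem: `torch.empty` expects a `torch.dtype` object for its `dtype` keyword, not the string `"double"`. A minimal check (assuming only that PyTorch is installed) reproduces both behaviors; `torch.float32` below is an illustrative choice, not necessarily the dtype the model itself uses:

```python
import torch

bsz, seq_len = 2, 4

# A string is rejected: the dtype keyword must be a torch.dtype object,
# which is exactly the TypeError in the traceback above.
try:
    torch.empty(bsz, seq_len, seq_len, dtype="double")
except TypeError:
    print("string dtype rejected")

# An actual torch.dtype works; in practice the surrounding model's
# own dtype should be passed through instead of a hard-coded one.
mask = torch.empty(bsz, seq_len, seq_len, dtype=torch.float32)
print(mask.dtype)  # torch.float32
```

So the immediate workaround is to pass a real `torch.dtype` (e.g. `torch.float32`, or better, the text model's own dtype) instead of the string `"double"`. The original error suggests the installed transformers release changed the `_build_causal_attention_mask` signature after this repo's sample code was written, so pinning transformers to the version the repo was developed against should also avoid patching anything.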