ryudrigo/my-gen-clip

please help me

Opened this issue · 2 comments

clip_transformer.py, line 115, in decode_to_img
    quant_z = self.first_stage_model.quantize.get_codebook_entry(
quantize.py, line 325, in get_codebook_entry
    z_q = self.embedding(indices)
module.py, line 1130, in _call_impl
    return forward_call(*input, **kwargs)
sparse.py, line 158, in forward
    return F.embedding(
functional.py, line 2199, in embedding
    return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)

RuntimeError: Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got torch.cuda.FloatTensor instead (while checking arguments for embedding)

The problem seems to be here in decode_to_img:

new_x = decode_to_img(logits, x.shape)
How should decode_to_img be written here? Calling the earlier one is clearly not right. I would really appreciate any help.
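
My guess is that the raw float logits need to be turned into integer codebook indices before decode_to_img is called, since get_codebook_entry feeds its input straight into an nn.Embedding lookup, which only accepts Long/Int tensors. Here is a rough sketch of what I have in mind; the logits shape (batch, sequence_len, codebook_size) and the argmax/sampling choice are my assumptions, not taken from this repo:

```python
import torch

# logits: float scores over the VQ codebook, assumed shape (batch, seq_len, codebook_size).
# get_codebook_entry (called inside decode_to_img) needs integer token indices,
# which is why passing the float logits directly raises the dtype error above.

# Option 1: greedy decoding -- pick the most likely codebook entry per position.
indices = torch.argmax(logits, dim=-1)  # dtype: torch.long

# Option 2: stochastic decoding -- sample from the softmax distribution instead.
# probs = torch.softmax(logits, dim=-1)
# indices = torch.multinomial(probs.reshape(-1, probs.shape[-1]), num_samples=1)
# indices = indices.reshape(probs.shape[0], probs.shape[1])

new_x = decode_to_img(indices, x.shape)
```

Is this the right way to get the indices, or should they come from somewhere else?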