cisnlp/simalign

Inputs not converted to cuda tensors when -device is cuda

Closed this issue · 1 comment

In the simalign.py file, the inputs aren't converted to CUDA tensors when the -device flag is set to cuda.

	def get_embed_list(self, sent_pair: List[List[str]]) -> torch.Tensor:
		if self.emb_model is not None:
			inputs = self.tokenizer(sent_pair, is_pretokenized=True, padding=True, truncation=True, return_tensors="pt")
			outputs = self.emb_model(**inputs)[2][self.layer]

			return outputs[:, 1:-1, :]
		else:
			return None
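
For illustration, a minimal repro of the symptom; the constructor arguments below are assumptions based on the device flag named above, and the exact error wording may differ:

	from simalign import SentenceAligner

	# Assumed constructor arguments for illustration; the point is only that
	# the model is placed on cuda while the tokenized inputs stay on cpu.
	aligner = SentenceAligner(model="bert", device="cuda")

	src = ["this", "is", "a", "test"]
	trg = ["das", "ist", "ein", "Test"]

	# Before the fix this raises a RuntimeError complaining that tensors are
	# on cpu while the model weights are on cuda.
	alignments = aligner.get_word_aligns(src, trg)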

This can be fixed by adding the line inputs = inputs.to(self.device) before the inputs are passed to self.emb_model.
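
For reference, a sketch of the patched method with that one line added, assuming self.device already holds the torch device the embedding model was moved to:

	def get_embed_list(self, sent_pair: List[List[str]]) -> torch.Tensor:
		if self.emb_model is not None:
			inputs = self.tokenizer(sent_pair, is_pretokenized=True, padding=True, truncation=True, return_tensors="pt")
			# Move the tokenized batch onto the same device as the embedding model.
			inputs = inputs.to(self.device)
			outputs = self.emb_model(**inputs)[2][self.layer]
			# Drop the special tokens at the start and end of each sequence.
			return outputs[:, 1:-1, :]
		else:
			return None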

I fixed this in the new commit. Thank you.