KimMeen/Time-LLM

Question about error

Closed this issue · 4 comments

My torch version is 1.12.1 and my CUDA version is 11.3.
Exception occurred (发生异常): RuntimeError
File "/home/ljf/ljf/TSwithLLM/Time-LLM-main/run_main.py", line 213, in
outputs = model(batch_x, batch_x_mark, dec_inp, batch_y_mark)
File "/home/ljf/ljf/TSwithLLM/Time-LLM-main/models/TimeLLM.py", line 200, in forward
dec_out = self.forecast(x_enc, x_mark_enc, x_dec, x_mark_dec)
File "/home/ljf/ljf/TSwithLLM/Time-LLM-main/models/TimeLLM.py", line 244, in forecast
enc_out, n_vars = self.patch_embedding(x_enc.to(torch.bfloat16))
File "/home/ljf/ljf/TSwithLLM/Time-LLM-main/layers/Embed.py", line 185, in forward
x = self.value_embedding(x)
File "/home/ljf/ljf/TSwithLLM/Time-LLM-main/layers/Embed.py", line 43, in forward
x = self.tokenConv(x).transpose(1, 2)
RuntimeError: expected scalar type BFloat16 but found Float

I get this error. Has anyone else run into it?

I have confirmed that x's type is BFloat16.

Hi, it seems that the issue is due to a mismatch between the BFloat16 data and the model type. Could you please let me know which script you were running when the problem occurred?

Thanks for your reply! The error occurred when I ran TimeLLM_ETTh1.sh. Specifically, the arg "LLM" is set to "GPT2".

It looks like the model's dtype and the input dtype for GPT-2 are inconsistent. You may need to check whether you have set the model itself to use BFloat16 precision, not just the input.
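A minimal sketch of the mismatch and the fix, assuming (as the traceback suggests) that the Conv1d inside the token embedding is left in float32 while forecast() casts the input to bfloat16. The layer name mirrors tokenConv in layers/Embed.py, but the channel counts and sequence length here are made up for illustration:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for TokenEmbedding.tokenConv from layers/Embed.py:
# a Conv1d whose weights default to float32.
token_conv = nn.Conv1d(in_channels=7, out_channels=16,
                       kernel_size=3, padding=1, bias=False)

# The forecast() path casts the input to bfloat16 ...
x = torch.randn(2, 7, 96).to(torch.bfloat16)

# ... so the module's weights must be cast to the same dtype, otherwise
# the convolution raises
# "RuntimeError: expected scalar type BFloat16 but found Float".
token_conv = token_conv.to(torch.bfloat16)

out = token_conv(x)
print(out.dtype)  # torch.bfloat16
```

Equivalently, you can keep both the model and the input in float32; the key point is that the two dtypes must agree.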