ValueError: Mixed precision training with AMP or APEX (`--fp16`) can only be used on CUDA devices.
Thank you so much for your project. I want to train the model on my own corpus. I followed the README but ran into a problem when running `./run_unsup_example.sh`:
```
$ ./run_unsup_example.sh
02/15/2024 12:49:10 - INFO - __main__ - PyTorch: setting up devices
Traceback (most recent call last):
  File "D:\TASK\SimCSE-main\train.py", line 591, in <module>
    main()
  File "D:\TASK\SimCSE-main\train.py", line 263, in main
    model_args, data_args, training_args = parser.parse_args_into_dataclasses()
  File "D:\Anaconda3\lib\site-packages\transformers\hf_argparser.py", line 157, in parse_args_into_dataclasses
    obj = dtype(**inputs)
  File "<string>", line 57, in __init__
  File "D:\Anaconda3\lib\site-packages\transformers\training_args.py", line 428, in __post_init__
    raise ValueError("Mixed precision training with AMP or APEX (`--fp16`) can only be used on CUDA devices.")
ValueError: Mixed precision training with AMP or APEX (`--fp16`) can only be used on CUDA devices.
```
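If I read the check in `transformers`' `training_args.py` correctly, `--fp16` is rejected whenever PyTorch itself does not detect a usable CUDA device, regardless of what is installed system-wide. This is the sanity check I assume is relevant (plain PyTorch calls, nothing SimCSE-specific):

```python
import torch

# --fp16 can only work if PyTorch itself sees a CUDA device.
print(torch.cuda.is_available())   # must be True for mixed precision
print(torch.cuda.device_count())   # should be >= 1
if torch.cuda.is_available():
    # Confirm it is the GPU I expect, not some fallback device.
    print(torch.cuda.get_device_name(0))
```

If `torch.cuda.is_available()` returns False here, I assume the PyTorch wheel or the NVIDIA driver is the problem rather than SimCSE itself.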
I've checked that my CUDA and PyTorch versions match my GPU, and I've also tried other versions, but I still get the same error.
- OS: Windows
- CUDA: 11.0 according to `torch.version.cuda`, 11.8 according to `nvcc -V`
- PyTorch: 1.7.1+cu110
- all other packages match requirements.txt
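As far as I understand (please correct me if I'm wrong), `torch.version.cuda` reports the CUDA toolkit the PyTorch wheel was built against, while `nvcc -V` reports the toolkit installed system-wide, so the 11.0 vs. 11.8 mismatch above may be expected rather than the cause. A small sketch of what each value means:

```python
import torch

print(torch.__version__)   # 1.7.1+cu110 -> wheel built for CUDA 11.0
print(torch.version.cuda)  # toolkit the wheel was compiled with (11.0 here)
# `nvcc -V` reports the system toolkit (11.8 here). The two need not match,
# because the pip/conda wheel ships its own CUDA runtime; what must be new
# enough is the NVIDIA driver that exposes the GPU to that runtime.
```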
Thanks!