rstrivedi/Melting-Pot-Contest-2023

Issue with running training code with torch


Hello,

I have run setup.py and ray_patch.sh, but I still get the following error when running python baselines/train/run_ray_train.py --framework torch:

Traceback (most recent call last):
  File "/ccn2/u/ziyxiang/Melting-Pot-Contest-2023/baselines/train/run_ray_train.py", line 173, in <module>
    ).fit()
  File "/data/ziyxiang/anaconda3/envs/mpc_main/lib/python3.10/site-packages/ray/tune/tuner.py", line 347, in fit
    return self._local_tuner.fit()
  File "/data/ziyxiang/anaconda3/envs/mpc_main/lib/python3.10/site-packages/ray/tune/impl/tuner_internal.py", line 588, in fit
    analysis = self._fit_internal(trainable, param_space)
  File "/data/ziyxiang/anaconda3/envs/mpc_main/lib/python3.10/site-packages/ray/tune/impl/tuner_internal.py", line 703, in _fit_internal
    analysis = run(
  File "/data/ziyxiang/anaconda3/envs/mpc_main/lib/python3.10/site-packages/ray/tune/tune.py", line 1107, in run
    runner.step()
  File "/data/ziyxiang/anaconda3/envs/mpc_main/lib/python3.10/site-packages/ray/tune/execution/tune_controller.py", line 280, in step
    self._maybe_update_trial_queue()
  File "/data/ziyxiang/anaconda3/envs/mpc_main/lib/python3.10/site-packages/ray/tune/execution/tune_controller.py", line 411, in _maybe_update_trial_queue
    if not self._update_trial_queue(blocking=not dont_wait_for_trial):
  File "/data/ziyxiang/anaconda3/envs/mpc_main/lib/python3.10/site-packages/ray/tune/execution/trial_runner.py", line 1112, in _update_trial_queue
    self.add_trial(trial)
  File "/data/ziyxiang/anaconda3/envs/mpc_main/lib/python3.10/site-packages/ray/tune/execution/tune_controller.py", line 383, in add_trial
    super().add_trial(trial)
  File "/data/ziyxiang/anaconda3/envs/mpc_main/lib/python3.10/site-packages/ray/tune/execution/trial_runner.py", line 597, in add_trial
    trial.create_placement_group_factory()
  File "/data/ziyxiang/anaconda3/envs/mpc_main/lib/python3.10/site-packages/ray/tune/experiment/trial.py", line 553, in create_placement_group_factory
    default_resources = trainable_cls.default_resource_request(self.config)
  File "/data/ziyxiang/anaconda3/envs/mpc_main/lib/python3.10/site-packages/ray/rllib/algorithms/algorithm.py", line 2193, in default_resource_request
    cf.validate()
  File "/data/ziyxiang/anaconda3/envs/mpc_main/lib/python3.10/site-packages/ray/rllib/algorithms/ppo/ppo.py", line 315, in validate
    super().validate()
  File "/data/ziyxiang/anaconda3/envs/mpc_main/lib/python3.10/site-packages/ray/rllib/algorithms/pg/pg.py", line 100, in validate
    super().validate()
  File "/data/ziyxiang/anaconda3/envs/mpc_main/lib/python3.10/site-packages/ray/rllib/algorithms/algorithm_config.py", line 773, in validate
    self._check_if_correct_nn_framework_installed(_tf1, _tf, _torch)
  File "/data/ziyxiang/anaconda3/envs/mpc_main/lib/python3.10/site-packages/ray/rllib/algorithms/algorithm_config.py", line 3623, in _check_if_correct_nn_framework_installed
    raise ImportError(
ImportError: PyTorch was specified as the framework to use (via `config.framework('torch')`)! However, no installation was found. You can install PyTorch via `pip install torch`.

I double-checked that torch is indeed installed, and here is the output from nvidia-smi:

+-----------------------------------------------------------------------------+
| NVIDIA-SMI 495.44       Driver Version: 495.44       CUDA Version: 11.5     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |

Are you working in a virtual environment? Could you run python on the CLI where you are trying to run the baseline, do import torch, and see whether it can find torch?

If so, there is a path problem somewhere, which is why the program is not recognizing torch. You could also check the torch installation right before you start your training run in run_ray_train.py.

Also, it could be that the ray you are using is not installed in the same environment as your torch.
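
For example, a quick check from the same shell where the training command fails (a minimal sketch; the mpc_main env name is just the one from the traceback):

conda activate mpc_main
python -c "import sys, torch, ray; print(sys.executable); print(torch.__version__, torch.__file__); print(ray.__version__, ray.__file__)"

If the import fails here, or torch and ray resolve to different environments, that is the path problem described above.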

Solved by adding site-packages/nvidia/cuda_runtime/lib to LD_LIBRARY_PATH.

The reason is that when ray tries to import torch, it cannot find the CUDA runtime library on the path, so the import fails and RLlib reports torch as missing.
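
Concretely, something like this before launching the run (a minimal sketch; the site-packages path is the one from the traceback above and needs to match your own environment):

export LD_LIBRARY_PATH="/data/ziyxiang/anaconda3/envs/mpc_main/lib/python3.10/site-packages/nvidia/cuda_runtime/lib:${LD_LIBRARY_PATH}"
python baselines/train/run_ray_train.py --framework torch

Once the CUDA runtime is on the library path, import torch loads cleanly and RLlib's framework check passes.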