Please add a `--use_gpu` recommendation to the README
antonkulaga opened this issue · 1 comment
antonkulaga commented
When I first ran the program, I got a runtime error. After adding the `--use_gpu` flag, it went away. This might be worth mentioning in the docs.
```
python predict.py data/sample_files/4h0h.fasta --decoys 5 --renumber
PyRosetta-4 2021 [Rosetta PyRosetta4.MinSizeRel.python38.ubuntu 2021.33+release.21c4761a87a1193dca5c6c2e1047681a200715d4 2021-08-14T17:47:22] retrieved from: http://www.pyrosetta.org
(C) Copyright Rosetta Commons Member Institutions. Created in JHU by Sergey Lyskov and PyRosetta Team.
**************************************************
Generating constraints
**************************************************
Traceback (most recent call last):
  File "predict.py", line 190, in <module>
    _cli()
  File "predict.py", line 166, in _cli
    cst_file = get_cst_file(model,
  File "/data/sources/DeepAb/deepab/build_fv/build_cen_fa.py", line 68, in get_cst_file
    residue_pairs = get_constraint_residue_pairs(model,
  File "/data/sources/DeepAb/deepab/constraints/write_constraints.py", line 74, in get_constraint_residue_pairs
    logits = get_logits_from_model(model, fasta_file, device=device)
  File "/data/sources/DeepAb/deepab/util/model_out.py", line 56, in get_logits_from_model
    out = model(seq)
  File "/opt/micromamba/envs/DeepAb/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/data/sources/DeepAb/deepab/models/ModelEnsemble.py", line 32, in forward
    out = [model(x) for model in self.models]
  File "/data/sources/DeepAb/deepab/models/ModelEnsemble.py", line 32, in <listcomp>
    out = [model(x) for model in self.models]
  File "/opt/micromamba/envs/DeepAb/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/data/sources/DeepAb/deepab/models/AbResNet/AbResNet.py", line 188, in forward
    lstm_enc = self.get_lstm_encoding(x)
  File "/data/sources/DeepAb/deepab/models/AbResNet/AbResNet.py", line 141, in get_lstm_encoding
    enc = self.lstm_model.encoder(src=lstm_input)[0].detach()
  File "/opt/micromamba/envs/DeepAb/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/data/sources/DeepAb/deepab/models/PairedSeqLSTM/PairedSeqLSTM.py", line 25, in forward
    outputs, (hidden, cell) = self.rnn(src.float())
  File "/opt/micromamba/envs/DeepAb/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/opt/micromamba/envs/DeepAb/lib/python3.8/site-packages/torch/nn/modules/rnn.py", line 581, in forward
    result = _VF.lstm(input, hx, self._flat_weights, self.bias, self.num_layers,
RuntimeError: Input and parameter tensors are not at the same device, found input tensor at cuda:0 and parameter tensor at cpu
```
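For reference, the same command with the flag added ran without the error:

```
python predict.py data/sample_files/4h0h.fasta --decoys 5 --renumber --use_gpu
```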
jeffreyruffolo commented
Hello, thank you for creating this issue! I expect to have an update to the code released in the next 10 days addressing this issue and other device errors.
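In the meantime, the error itself is the standard PyTorch device mismatch: the input tensor ends up on `cuda:0` while the LSTM weights are still on the CPU. A minimal sketch of the usual remedy, purely illustrative and not the DeepAb code itself (`run_on_model_device` is a hypothetical helper):

```python
import torch

def run_on_model_device(model: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    # Find out where the model's parameters live (cpu or cuda:N)...
    device = next(model.parameters()).device
    # ...and move the input there before the forward pass, so input and
    # parameter tensors are on the same device.
    return model(x.to(device))
```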