Is it possible to use openmm-torch to apply a force to only a few atoms or CVs?
JinyinZha opened this issue · 5 comments
I am trying to use openmm-torch to add a force on several dihedrals, i.e. a force of the form F(φ1, φ2, ..., φn, ψ1, ψ2, ..., ψn). However, openmm-torch was made for a force of the form F(x1, x2, ..., xn), where the x are atomic coordinates. I attempted to prepend a step to the network that converts x into dihedrals and then feeds the dihedrals into my network. However, it turns out that the gradient computation fails:
Traceback (most recent call last):
File "/home/jyzha/project/enhanced_sampling/networks/test_openmm2.py", line 94, in
simulation.minimizeEnergy()
File "/home/jyzha/software/anaconda3/envs/openmm/lib/python3.9/site-packages/openmm/app/simulation.py", line 137, in minimizeEnergy
mm.LocalEnergyMinimizer.minimize(self.context, tolerance, maxIterations)
File "/home/jyzha/software/anaconda3/envs/openmm/lib/python3.9/site-packages/openmm/openmm.py", line 17208, in minimize
return _openmm.LocalEnergyMinimizer_minimize(context, tolerance, maxIterations)
openmm.OpenMMException: grad can be implicitly created only for scalar outputs
Exception raised from _make_grads at /home/conda/feedstock_root/build_artifacts/pytorch-recipe_1670027390539/work/torch/csrc/autograd/autograd.cpp:57 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) + 0x68 (0x7fc88ffb8d28 in /home/jyzha/software/anaconda3/envs/openmm/lib/python3.9/site-packages/torch/lib/libc10.so)
frame #1: c10::detail::torchCheckFail(char const*, char const*, unsigned int, char const*) + 0xe8 (0x7fc88ff7eb58 in /home/jyzha/software/anaconda3/envs/openmm/lib/python3.9/site-packages/torch/lib/libc10.so)
frame #2: + 0x3c1577d (0x7fc8c6a6e77d in /home/jyzha/software/anaconda3/envs/openmm/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so)
frame #3: torch::autograd::backward(std::vector<at::Tensor, std::allocator<at::Tensor> > const&, std::vector<at::Tensor, std::allocator<at::Tensor> > const&, c10::optional<bool>, bool, std::vector<at::Tensor, std::allocator<at::Tensor> > const&) + 0x42 (0x7fc8c6a6fd42 in /home/jyzha/software/anaconda3/envs/openmm/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so)
frame #4: + 0x3c62ca1 (0x7fc8c6abbca1 in /home/jyzha/software/anaconda3/envs/openmm/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so)
frame #5: at::Tensor::_backward(c10::ArrayRef<at::Tensor>, c10::optional<at::Tensor> const&, c10::optional<bool>, bool) const + 0x49 (0x7fc8c3f26359 in /home/jyzha/software/anaconda3/envs/openmm/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so)
frame #6: TorchPlugin::CudaCalcTorchForceKernel::execute(OpenMM::ContextImpl&, bool, bool) + 0xed2 (0x7fc8ca9a6622 in /home/jyzha/software/anaconda3/envs/openmm/lib/plugins/libOpenMMTorchCUDA.so)
frame #7: OpenMM::ContextImpl::calcForcesAndEnergy(bool, bool, int) + 0xc9 (0x7fc93a770789 in /home/jyzha/software/anaconda3/envs/openmm/lib/python3.9/site-packages/openmm/../../../libOpenMM.so.8.0)
frame #8: OpenMM::Context::getState(int, bool, int) const + 0x15e (0x7fc93a76de5e in /home/jyzha/software/anaconda3/envs/openmm/lib/python3.9/site-packages/openmm/../../../libOpenMM.so.8.0)
frame #9: + 0x17bb54 (0x7fc93a7d7b54 in /home/jyzha/software/anaconda3/envs/openmm/lib/python3.9/site-packages/openmm/../../../libOpenMM.so.8.0)
frame #10: + 0x17c31a (0x7fc93a7d831a in /home/jyzha/software/anaconda3/envs/openmm/lib/python3.9/site-packages/openmm/../../../libOpenMM.so.8.0)
frame #11: lbfgs + 0x58c (0x7fc93a83606c in /home/jyzha/software/anaconda3/envs/openmm/lib/python3.9/site-packages/openmm/../../../libOpenMM.so.8.0)
frame #12: OpenMM::LocalEnergyMinimizer::minimize(OpenMM::Context&, double, int) + 0x75d (0x7fc93a7d8ddd in /home/jyzha/software/anaconda3/envs/openmm/lib/python3.9/site-packages/openmm/../../../libOpenMM.so.8.0)
frame #13: + 0x118d86 (0x7fc93ab6dd86 in /home/jyzha/software/anaconda3/envs/openmm/lib/python3.9/site-packages/openmm/_openmm.cpython-39-x86_64-linux-gnu.so)
If the conversion step is changed to simply "return x" (i.e. the input is passed through unchanged), the code runs fine (indeed, I changed my network to fit the input dimension). I found that any computation on the input of the forward function causes this error.
I am wondering whether what I am attempting can be realized, and if so, how. I am looking forward to your answer. Thank you~
Could you please provide a minimal reproducer?
As long as your model takes positions as input and outputs an energy (and optionally forces), it should be fine.
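For reference, a minimal model of that shape might look like the sketch below (the quadratic toy energy and the names ForceModule / model.pt are just illustrative):

import torch

class ForceModule(torch.nn.Module):
    def forward(self, positions):
        # openmm-torch passes positions (natoms, 3) in nm; the return value
        # must be a single scalar, the potential energy in kJ/mol.
        return torch.sum(positions ** 2)

# Serialize with TorchScript and attach to the System:
module = torch.jit.script(ForceModule())
module.save('model.pt')
# from openmmtorch import TorchForce
# system.addForce(TorchForce('model.pt'))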
import torch
import torch.nn as nn

class Packed_Encoder(nn.Module):
    def __init__(self, encoder, angle_ids):
        super(Packed_Encoder, self).__init__()
        self.encoder = encoder  # an MLP that outputs a 1x2 tensor
        self.angle_ids = angle_ids  # (n_dihedrals, 4) tensor of atom indices

    def forward(self, x):
        x = self.get_angle(x)
        return self.encoder(x)

    def get_angle(self, x):
        # return x  # debug bypass: with this line active the model runs fine
        angle_ids = self.angle_ids
        if len(x.shape) == 2:
            x = x.reshape(1, -1, 3)
        # return x[0, 0, 0:2]  # debug: even this slice triggers the error
        # Bond vectors for each dihedral, shape (batch, n_dihedrals, 3)
        a1 = (x[:, angle_ids[:, 1], :] - x[:, angle_ids[:, 0], :]).reshape(-1, len(angle_ids[:, 0]), 3)
        a2 = (x[:, angle_ids[:, 2], :] - x[:, angle_ids[:, 1], :]).reshape(-1, len(angle_ids[:, 0]), 3)
        a3 = (x[:, angle_ids[:, 3], :] - x[:, angle_ids[:, 2], :]).reshape(-1, len(angle_ids[:, 0]), 3)
        # Normals to the two planes spanned by consecutive bond vectors
        v1 = torch.cross(a1, a2, dim=2)
        v2 = torch.cross(a2, a3, dim=2)
        # Sign of the dihedral from the orientation of v1 relative to a3
        sign = torch.sign(torch.sum(v1 * a3, dim=2))
        porm = -sign * sign + sign + 1  # maps sign 0 -> 1, keeps +/-1 otherwise
        cos = torch.sum(v1 * v2, dim=2) / (
            torch.sum(v1 * v1, dim=2) ** 0.5 * torch.sum(v2 * v2, dim=2) ** 0.5
        )
        cos = torch.clamp(cos, -1.0, 1.0)  # guard against rounding outside [-1, 1]
        return torch.arccos(cos) * porm
Looking at the particular error, it seems your model does not return forces explicitly, which makes openmm-torch compute them automatically by calling backward() on your model (a sketch of the explicit alternative follows the list below).
The error you posted indicates that your model cannot be backpropagated.
I can think of a number of reasons why this would happen:
- The input to the network is not the positions
- The output is not the energy (a tensor with a single number)
- You have non-autograd-compatible code in your forward
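If backpropagating through the model is not an option, a model can instead return the forces itself, in which case openmm-torch skips the backward() call; this is switched on with TorchForce.setOutputsForces. A toy sketch (the harmonic energy is illustrative):

import torch

class EnergyAndForces(torch.nn.Module):
    def forward(self, positions):
        energy = torch.sum(positions ** 2)  # toy harmonic energy
        forces = -2.0 * positions           # analytic -dE/dx for this energy
        return energy, forces

# torch_force = TorchForce('model.pt')
# torch_force.setOutputsForces(True)  # model outputs forces itself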
I would say the error in your network stems from this:
self.encoder = encoder  # an MLP that outputs a 1x2 tensor
Your network is supposed to return only one number, the energy.
This is consistent with the PyTorch error: grad expects a scalar, but you have a 1x2 tensor, if the comment is to be trusted.
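One possible fix (a hypothetical sketch; summing the two outputs would work just as well) is to collapse the encoder's output to a scalar before returning it:

import torch.nn as nn

class ScalarEnergyEncoder(nn.Module):
    def __init__(self, encoder):
        super().__init__()
        self.encoder = encoder       # the existing MLP with a (1, 2) output
        self.head = nn.Linear(2, 1)  # learnable reduction to one number

    def forward(self, x):
        # backward() needs a 0-dim scalar, so squeeze the (1, 1) result
        return self.head(self.encoder(x)).squeeze()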
You are right! I have successfully fixed the problem. Thanks a lot~