BrooksResearchGroup-UM/pyCHARMM-Workshop

Error building pyCHARMM on Colab

Closed this issue · 5 comments

I am trying to build pyCHARMM on Google Colab (CUDA version 12.2). However, the `make install` step fails with an error.
This is the output from the configure command (`../configure --as-library -p /content/charmm/build_charmm`):

=====
library build selected
=====
user specified install prefix
    /content/charmm/build_charmm
=====
configuration using cmake continues using
env /usr/local/bin/cmake -Das_library=On -DCMAKE_INSTALL_PREFIX=/content/charmm/build_charmm /content/charmm

after configuration run make in
    /content/charmm/build_charmm
to compile and link the charmm executable
=====
CMake Deprecation Warning at CMakeLists.txt:2 (cmake_minimum_required):
  Compatibility with CMake < 3.5 will be removed from a future version of
  CMake.

  Update the VERSION argument <min> value or use a ...<max> suffix to tell
  CMake that the project does not need compatibility with older versions.


-- The C compiler identification is GNU 12.2.0
-- The CXX compiler identification is GNU 12.2.0
-- The Fortran compiler identification is GNU 12.2.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/local/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/local/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Detecting Fortran compiler ABI info
-- Detecting Fortran compiler ABI info - done
-- Check for working Fortran compiler: /usr/local/bin/gfortran - skipped
-- Found FFTW: /usr/local/include  
-- Found FFTWF: /usr/local/lib/libfftw3f.so  
-- Found MPI_C: /usr/local/lib/libmpi.so (found version "3.1") 
-- Found MPI_CXX: /usr/local/lib/libmpi.so (found version "3.1") 
-- Found MPI_Fortran: /usr/local/lib/libmpi_usempif08.so (found version "3.1") 
-- Found MPI: TRUE (found version "3.1")  
-- Found OpenMP_C: -fopenmp (found version "4.5") 
-- Found OpenMP_CXX: -fopenmp (found version "4.5") 
-- Found OpenMP_Fortran: -fopenmp (found version "4.5") 
-- Found OpenMP: TRUE (found version "4.5")  
-- Found OpenMP_Fortran: -fopenmp  
-- OpenMM : FOUND OPENMM_INCLUDE_DIRS ->/usr/local/include<-
-- OpenMM : FOUND OPENMM_LIBRARIES ->/usr/local/lib/libOpenMM.so<-
-- OpenMM : FOUND OPENMM_PLUGIN_DIR ->/usr/local/lib/plugins<-
-- OpenMM : FOUND OPENMM_CPU_PLUGIN ->/usr/local/lib/plugins/libOpenMMCPU.so<-
-- OpenMM : FOUND OPENMM_CUDA_PLUGIN ->/usr/local/lib/plugins/libOpenMMCUDA.so<-
-- OpenMM : FOUND OPENMM_OPENCL_PLUGIN ->/usr/local/lib/plugins/libOpenMMOpenCL.so<-
-- Found OpenMM: /usr/local/include  
-- Could NOT find ExaFMM (missing: ExaFMM_LIBRARY) 
CMake Warning (dev) at CMakeLists.txt:365 (find_package):
  Policy CMP0146 is not set: The FindCUDA module is removed.  Run "cmake
  --help-policy CMP0146" for policy details.  Use the cmake_policy command to
  set the policy and suppress this warning.

This warning is for project developers.  Use -Wno-dev to suppress it.

-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - found
-- Found Threads: TRUE  
-- Found CUDA: /usr/local/cuda (found version "12.2") 
-- found OPENMM flags ->  <-
-- OpenMM version detected ->81<-
-- OpenCL include dir found: /usr/local/include
-- OpenCL library found: /usr/local/lib/libOpenCL.so
-- full build chosen
-- Configuring done (5.5s)
-- Generating done (0.2s)
-- Build files have been written to: /content/charmm/build_charmm
=====
you are now ready to run make in
    /content/charmm/build_charmm
=====

After that I ran `make -j4 install`, which produced this error:

Building Fortran object CMakeFiles/charmm.dir/blade_ctrl.F90.o
Building Fortran object CMakeFiles/charmm.dir/omm_main.F90.o
/content/charmm/build_charmm/omm_main.F90:1100:17:

 1100 |           nsteps)
      |                 1
Error: Missing actual argument for argument 'reporter' at (1)
make[2]: *** [CMakeFiles/charmm.dir/build.make:10883: CMakeFiles/charmm.dir/omm_main.F90.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:318: CMakeFiles/charmm.dir/all] Error 2
make: *** [Makefile:136: all] Error 2

This is the link to the Colab notebook: https://colab.research.google.com/drive/1N82QhDe14WeO1DFJ_P0xs69yRbbsygDG?usp=sharing

Yes, thank you. It installed correctly now. However, when I tried to run the example docking from the CDOCKER tutorial, it ended with this error:

NBONDA>>  Maximum group spatial extent (12A) exceeded.
   Size is       15.06 Angstroms and starts with atom:       1
   Please check group boundary definitions.
sh: 1: cluster.pl: not found
(the line above repeated 16 times)
/usr/local/lib/python3.10/site-packages/pycharmm/cdocker.py:1299: UserWarning: loadtxt: input contained no data: "tmpcluster"
  cluster_size = np.loadtxt('tmpcluster', dtype = int)
Traceback (most recent call last):
  File "/content/play.py", line 41, in <module>
    clusterResult, dockResult = Rigid_CDOCKER(xcen = 12.33, ycen = 33.48, zcen = 19.70,
  File "/usr/local/lib/python3.10/site-packages/pycharmm/cdocker.py", line 1803, in Rigid_CDOCKER
    radius = scan_cluster_radius(name = conformerName)
  File "/usr/local/lib/python3.10/site-packages/pycharmm/cdocker.py", line 1300, in scan_cluster_radius
    radius = radii[cluster_size == np.amax(cluster_size)][0]
  File "/usr/local/lib/python3.10/site-packages/numpy/core/fromnumeric.py", line 2827, in amax
    return _wrapreduction(a, np.maximum, 'max', axis, None, out,
  File "/usr/local/lib/python3.10/site-packages/numpy/core/fromnumeric.py", line 88, in _wrapreduction
    return ufunc.reduce(obj, axis, dtype, out, **passkwargs)
ValueError: zero-size array to reduction operation maximum which has no identity
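The `ValueError` follows directly from the missing `cluster.pl`: since the clustering script never runs, `tmpcluster` is left empty, `np.loadtxt` returns a zero-size array (hence the `UserWarning`), and `np.amax` on an empty array raises. A minimal sketch of the failure and a guard, assuming only the behavior visible in the traceback (the helper `load_cluster_sizes` is hypothetical, not pycharmm code):

```python
import os
import tempfile
import warnings

import numpy as np

def load_cluster_sizes(path):
    """Load cluster sizes from a CDOCKER-style results file, guarding
    against the empty file left behind when cluster.pl is absent.
    (This guard is an illustration, not part of pycharmm.)"""
    with warnings.catch_warnings():
        warnings.simplefilter("ignore")  # silence "input contained no data"
        sizes = np.loadtxt(path, dtype=int)
    if sizes.size == 0:
        raise RuntimeError(
            "tmpcluster is empty -- is cluster.pl (MMTSB Tool Set) on your PATH?")
    return np.atleast_1d(sizes)

# Reproduce: an empty tmpcluster-like file, as left when cluster.pl is not found
fd, path = tempfile.mkstemp()
os.close(fd)
try:
    load_cluster_sizes(path)
except RuntimeError as err:
    print(err)
finally:
    os.remove(path)
```

So the real fix is making `cluster.pl` reachable; the guard just turns the cryptic NumPy error into an actionable one.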

This is the code I used:

import os
os.chdir('/content/pyCHARMM-Workshop/7CDOCKER_Tutorial/rigid')
os.environ['CHARMM_LIB_DIR'] = '/content/charmm/build_charmm/lib'
## Import module
import numpy as np
import pycharmm
import pycharmm.lib as lib
import pycharmm.read as read
import pycharmm.lingo as lingo
import pycharmm.settings as settings
from pycharmm.cdocker import Rigid_CDOCKER

################################################################
##
##              Begin pyCHARMM Rigid CDOCKER
##
################################################################

## Topology and parameter files
settings.set_bomb_level(-1)
#base = "/content/pyCHARMM-Workshop/7CDOCKER_Tutorial/Toppar/"
read.rtf('"../Toppar/top_all36_prot.rtf"')
read.rtf('"../Toppar/top_all36_cgenff.rtf"', append = True)
read.prm('"../Toppar/par_all36m_prot.prm"', flex = True)
read.prm('"../Toppar/par_all36_cgenff.prm"', append = True, flex = True)
settings.set_bomb_level(0)
base2 = "/content/pyCHARMM-Workshop/7CDOCKER_Tutorial/rigid/"
read.stream('ligandrtf')


## File name and pathway
ligPDB = "ligand.pdb"
ligandrtf = "ligandrtf"
confDir = "/content/conformer/"
receptorPDB = "protein.pdb"
receptorPSF = "protein.psf"

## Rigid CDOCKER standard docking protocol
clusterResult, dockResult = Rigid_CDOCKER(
    xcen = 12.33, ycen = 33.48, zcen = 19.70,
    softGridFile = '../jupyter_lab/grid/rigid/grid-emax-0.6-mine--0.4-maxe-0.4.bin',
    hardGridFile = '../jupyter_lab/grid/rigid/grid-emax-3-mine--30-maxe-30.bin',
    nativeGridFile = '../jupyter_lab/grid/rigid/grid-emax-100-mine--100-maxe-100.bin',
    maxlen = 25.762, ligPDB = ligPDB, receptorPDB = receptorPDB,
    receptorPSF = receptorPSF, confDir = confDir, flag_grid = True,
    flag_delete_conformer = False, numPlace = 5)

print(clusterResult)
print(dockResult)
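The repeated `sh: 1: cluster.pl: not found` lines come from CDOCKER shelling out to `cluster.pl`, which ships with the MMTSB Tool Set and is not installed on Colab. One can check for it before running the docking; a sketch, where the MMTSB install location is an assumption to adjust for your setup:

```python
import os
import shutil

# cluster.pl ships with the MMTSB Tool Set (https://feig.bch.msu.edu/mmtsb/);
# Rigid_CDOCKER invokes it through "sh", so it must be on PATH.
mmtsb_perl = "/content/mmtsb/perl"  # assumed install location -- adjust to yours
if shutil.which("cluster.pl") is None:
    os.environ["PATH"] = mmtsb_perl + os.pathsep + os.environ.get("PATH", "")
print("cluster.pl found:", shutil.which("cluster.pl") is not None)
```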

Okay, thank you. When I looked at this site (https://feig.bch.msu.edu/mmtsb/Installation), I found this:

Integration with other packages
The MMTSB Tool Set benefits from the availability of other software packages, in particular CHARMM, Amber, Modeller, DSSP, SCWRL, NCBI-BLAST, PSIPRED.

The following environment variables are related to these packages and should be set accordingly:

 CHARMMEXEC    CHARMM executable
 CHARMMDATA    CHARMM data files

For CHARMM, I only built it using the `--as-library` flag. Do I need to set these two variables? And where can I find the paths they should point to?
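For what it's worth, those variables matter only when MMTSB scripts launch CHARMM themselves; pyCHARMM's CDOCKER calls `cluster.pl` directly. If they were needed, they could be set from Python before invoking MMTSB; the paths below are assumptions for illustration (a `--as-library` build does not produce a standalone `charmm` executable, so `CHARMMEXEC` would require a separate non-library build):

```python
import os

# Hypothetical paths -- a standalone CHARMM build and its topology/parameter data.
os.environ["CHARMMEXEC"] = "/content/charmm/build_charmm/bin/charmm"
os.environ["CHARMMDATA"] = "/content/charmm/toppar"

print(os.environ["CHARMMEXEC"])
```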

I tried it now without these two variables, and the docking worked without any error.