luancarvalhomartins/PyAutoFEP

Error analyzing results using tutorial.tgz


In the workdir/tutorial directory, I successfully ran 'bash runall.sh' and got tutorial.tgz.
However, when running 'analyze_results.py --input tutorial.tgz --units kcal --output_uncompress_directory tutorial_temp --center_molecule=FXR_12', the following error appeared. Please help me solve it:

###########################################
****** PyMBAR will use 64-bit JAX! *******
* JAX is currently set to 32-bit bitsize *
* which is its default.                  *
*                                        *
* PyMBAR requires 64-bit mode and WILL   *
* enable JAX's 64-bit mode when called.  *
*                                        *
* This MAY cause problems with other     *
* Uses of JAX in the same code.          *
******************************************

All available analysis will be run
================== Pairwise ΔΔG ==================
         Perturbation           protein    water

=================== STACK INFO ===================
  File "/home/shanghai/RationalDesign/mAB/PyAutoFEP/workdir/tutorial/../../analyze_results.py", line 1820, in <module>
    ddg_to_center = ddg_to_center_ddg(saved_data['perturbation_map'], center=arguments.center_molecule,
  File "/home/shanghai/RationalDesign/mAB/PyAutoFEP/workdir/tutorial/../../analyze_results.py", line 1034, in ddg_to_center_ddg
    os_util.local_print('Could not find a path to connect nodes {} and {}. Cannot go on. Check the '
  File "/home/shanghai/RationalDesign/mAB/PyAutoFEP/os_util.py", line 307, in local_print
    formatted_string = '\n{:=^50}\n{}{:=^50}\n'.format(' STACK INFO ', ''.join(traceback.format_stack()),
=================== STACK INFO ===================
[ERROR] Could not find a path to connect nodes FXR_74 and FXR_12. Cannot go on. Check the  perturbations and your input graph. Alternatively, rerun with no_checks to force execution and ignore this error.
Traceback (most recent call last):
  File "/home/shanghai/RationalDesign/mAB/PyAutoFEP/workdir/tutorial/../../analyze_results.py", line 1024, in ddg_to_center_ddg
    ddg_to_center[each_node] = sum_path(ddg_graph, each_path, ddg_key=ddg_key)
  File "/home/shanghai/RationalDesign/mAB/PyAutoFEP/workdir/tutorial/../../analyze_results.py", line 986, in sum_path
    raise networkx.exception.NetworkXNoPath('Edge between {} {} not found in graph {}'.format(node_i, node_j, g0))
networkx.exception.NetworkXNoPath: Edge between FXR_12 FXR_74 not found in graph DiGraph with 6 nodes and 5 edges

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/shanghai/RationalDesign/mAB/PyAutoFEP/workdir/tutorial/../../analyze_results.py", line 1820, in <module>
    ddg_to_center = ddg_to_center_ddg(saved_data['perturbation_map'], center=arguments.center_molecule,
  File "/home/shanghai/RationalDesign/mAB/PyAutoFEP/workdir/tutorial/../../analyze_results.py", line 1040, in ddg_to_center_ddg
    raise networkx.exception.NetworkXNoPath(error)
networkx.exception.NetworkXNoPath: Edge between FXR_12 FXR_74 not found in graph DiGraph with 6 nodes and 5 edges
###########################################
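
For context, the exception above is a plain graph-connectivity check: as the traceback shows, analyze_results.py stores the perturbation map as a networkx DiGraph and needs a path from the --center_molecule to every other node, so a single missing leg can leave a node unreachable. A minimal sketch of the same check, with only FXR_12 and FXR_74 taken from the log above (the remaining node names and the edge list are placeholders, not the actual tutorial map):

```python
import networkx

# Illustrative map with 6 nodes and 5 edges, matching the counts in the error
# message. Only FXR_12 and FXR_74 are real names from the log; the others are
# placeholders. The point: once the FXR_12 -> FXR_74 leg is missing, FXR_74
# is unreachable from the center molecule.
ddg_graph = networkx.DiGraph()
ddg_graph.add_edges_from([
    ('FXR_12', 'FXR_A'), ('FXR_12', 'FXR_B'),
    ('FXR_12', 'FXR_C'), ('FXR_12', 'FXR_D'),
    ('FXR_D', 'FXR_C'),
])
ddg_graph.add_node('FXR_74')  # present in the map, but with no incident edges

center = 'FXR_12'
for node in ddg_graph.nodes:
    if node != center and not networkx.has_path(ddg_graph, center, node):
        print(f'No path from {center} to {node}: the corresponding '
              f'perturbation leg probably failed and was dropped.')
```

Running this prints the same kind of "no path" diagnosis for FXR_74 that the script raises above.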

It is likely that one of the legs (or both) of the FXR_12→FXR_74 perturbation did not run. Please take a look at the log files in the md folders.
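
One quick way to do that is to scan every log under the md folders for GROMACS fatal errors. A minimal sketch, assuming the default PyAutoFEP layout with per-leg md directories (point run_dir at your run tree):

```python
import pathlib

run_dir = pathlib.Path('.')  # point this at the directory holding the perturbation folders

# Flag any GROMACS log inside an md folder that reports a fatal error.
for logfile in run_dir.rglob('*.log'):
    if 'md' not in logfile.parts:
        continue
    text = logfile.read_text(errors='replace')
    if 'Fatal error' in text or 'Feature not implemented' in text:
        print(f'Possible failed step: {logfile}')
```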

Thanks for your suggestion!
I configured the CUDA build of gmx, not gmx_mpi. Could this be the reason for the error?
I checked the log files in the md folders of both the protein and water legs and found:

Program: gmx mdrun, version 2023.1-conda_forge
Source file: src/gromacs/mdrunutility/multisim.cpp (line 66)
Function: std::unique_ptr<gmx_multisim_t> buildMultiSimulation(MPI_Comm, gmx::ArrayRef<const std::__cxx11::basic_string >)

Feature not implemented:
Multi-simulations are only supported when GROMACS has been configured with a
proper external MPI library.

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors


Primary job terminated normally, but 1 process returned
a non-zero exit code.. Per user-direction, the job has been aborted.


Program: gmx mdrun, version 2023.1-conda_forge
Source file: src/gromacs/mdrunutility/multisim.cpp (line 66)
Function: std::unique_ptr<gmx_multisim_t> buildMultiSimulation(MPI_Comm, gmx::ArrayRef<const std::__cxx11::basic_string >)

Yes, that is exactly the problem. PyAutoFEP requires an MPI-enabled GROMACS build to run FEP. prepare_dual_topology.py should even have warned you about that.
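
For anyone hitting the same wall: the "MPI library" line of the gmx version banner tells you which build you have. A value of "none" or "thread_mpi" cannot run the multi-simulation (-multidir) jobs that triggered the buildMultiSimulation error above; you need an external-MPI build, often installed as gmx_mpi. A minimal sketch of the check (swap in your actual binary name):

```python
import subprocess

# Print the 'MPI library' line from the GROMACS version banner.
# 'none' or 'thread_mpi' means no external MPI; an external-MPI build is
# what PyAutoFEP's multi-simulation runs require.
out = subprocess.run(['gmx', '--version'], capture_output=True, text=True).stdout
for line in out.splitlines():
    if line.startswith('MPI library'):
        print(line.strip())
```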