luancarvalhomartins/PyAutoFEP

running with GPU

xy21hb opened this issue · 6 comments

Dear all,

To run FEP with GPU, I set in step2.ini,

gmx_bin_run = "/home/usr/miniconda3/bin/gmx -nb gpu -gpu_id 1 -ntmpi 1 -ntomp 20 -v"
gmx_bin_local = "/home/usr/miniconda3/bin/gmx"

PyAutoFEP calls GROMACS like,
gmx -nb gpu -gpu_id 1 -ntmpi 1 -ntomp 20 -v grompp -c FXR_12-FXR_74/protein/lambda0/min01/../FullSystem.pdb
which is obviously wrong.

I wonder how I could add those gmx mdrun extra flags like -gpu, -ntmpi, etc. in step2.ini, since the scripts only add "mdrun" after the "gmx_bin_run" executable.

Thank you for your message. Sorry it took a while to answer; I'm working in industry now and my free time is really limited.

You don't have to pass any options to gmx mdrun to use the available GPU; that is the default. Passing options via gmx_bin_run won't work, for the reason you observed.

In case you are running under SLURM or PBS, make sure you are submitting to a partition with GPUs and that the scheduler configuration is correct. The job will use the CPUs and GPU assigned by the scheduler. Selecting a GPU is also the default when using a job scheduler. See the documentation for further info on partitions and the number of GPUs.
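
For illustration, a minimal SLURM header requesting a GPU could look like the lines below (the partition name, GPU count, and CPU count are placeholders for your cluster; gmx mdrun will pick up whatever GPU the scheduler assigns):

#SBATCH --partition=gpu
#SBATCH --gres=gpu:1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=20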

In case you are running with bash (i.e., no scheduler), each perturbation will be run serially and will use the entire node. This works, but using a scheduler is often better.

Have you tried to run and observed errors?

I also wonder how I could add extra gmx mdrun flags like -ntomp, because I found that without this GROMACS would not work. Thanks. @luancarvalhomartins @xy21hb

I modified line 2940 of prepare_dual_topology.py to add the -ntomp flag. In principle, any available flag can be added this way.

gmx mdrun is then invoked with:

input_list = ["mdrun", "-ntomp 20", "-deffnm", os.path.basename(each_dir)]
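
Note that, depending on how PyAutoFEP executes this list (if it is passed straight to subprocess rather than joined into a single shell command), the flag and its value may need to be separate elements, i.e. "-ntomp", "20", so that GROMACS receives them as two arguments.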

In general, I would say that passing flags directly to GROMACS is risky. For instance, should a flag be applied during minimization? What about equilibration? Should it be passed to mdrun -rerun? You added it only for equilibration, but how could PyAutoFEP guess that? For consistency, any flag should, in principle, be passed to all instances, but this may lead to problems with -rerun and -multidir.

I believe the ideal solution for this problem is using gmx_mpi mdrun -multidir for the equilibration step, which would also speed up the execution. I therefore suggest closing this issue and opening another one for using -multidir for equilibration.
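
For reference, a multi-directory equilibration with the MPI build of GROMACS would be launched roughly like the line below (the rank count, directory pattern, and -deffnm base name are only illustrative, and the number of MPI ranks must be a multiple of the number of directories):

mpirun -np 12 gmx_mpi mdrun -multidir lambda*/ -deffnm eq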

Should you think otherwise, please comment here and I will add a way to pass options to mdrun.

Please follow the parallelization of equilibration in #104.

Modifying the gmx mdrun flags will not work. SLURM uses the GPU by default if the gres type is configured correctly.