QM/MM excited state dynamics using CMS-PDFT with Molcas/Tinker interface
Opened this issue · 5 comments
Hi all,
I'm trying to run single-trajectory excited-state dynamics on a biological system using QM/MM with the CMS-PDFT method in Molcas. I know this module is flagged as experimental within SHARC, so I used the test system included in the MOLCAS_TINKER_qmmm folder to test CMS-PDFT with a system that I know works with CASSCF. My input for sharc.x is:
geomfile "geom"
veloc random 0.1
nstates 2 0 1
actstates 2 0 1
state 2 mch
coeff auto
rngseed 26933
ezero -93.95
tmax 3.0
stepsize 0.5
nsubsteps 25
surf sharc
coupling overlap
nogradcorrect
ekincorrect parallel_vel
decoherence_scheme edc
grad_select
nac_select
eselect 0.5
select_directly
atommask external
atommaskfile "atommask"
and my MOLCAS.template file is:
basis cc-pvdz
ras2 3
nactel 4
inactive 6
roots 4 0 3
method CMS-PDFT
functional t:PBE
qmmm
The first task in QM.in is submitted and ends correctly. However, when the GRAD routine starts, none of the submitted inputs can find the Do_Rotate.1.txt file in the savedir folder. My intuition is that something is missing in the SHARC_MOLCAS.py code. Please let me know whether I need to provide more information or files to solve this issue!
Hi gustavms93,
I think we have never tested this combination of CMS-PDFT and QM/MM. I would agree that there is probably something missing in the interface, but I cannot really figure it out right now. Could you provide the input and output files in the scratch directory, plus the QM.in and QM.out?
On a related note, we are in the process of redesigning the MOLCAS interface in SHARC and recently also had some trouble with gradients from CMS-PDFT (even without QM/MM). There might be recent MOLCAS versions where the CMS-PDFT gradients do not work correctly (or where we did not find up-to-date documentation of the required keywords).
Best,
Sebastian
Hi Sebastian,
I've added a ZIP file with all the requested input and output files. In the QM.log file, you may notice an error code 1 in two gradient calculations. Sometimes a node crashes; however, I am able to produce all the inputs and outputs, with error code 113 at the end, as this QM.log shows for gradient_1_1.
One bug I found is that the keyword CMMI does not work with Molcas. It probably should be the CMMA keyword instead.
Please let me know if I can provide more information!
Best,
Gustavo.
Hi Gustavo,
thanks for the files. It seems there is a bug in the SHARC-MOLCAS interface: the Do_Rotate.txt file is copied from the savedir in several places, but it is never copied to the savedir. This is the immediate cause of the errors you are seeing.
In the sharc3preview branch (commit 77b6ed2), I have added two lines that copy this file to the scratchdir. Could you please check whether this helps? Unfortunately, I am not very well versed in how to use MC-PDFT, so there might still be something wrong with the code in that respect. If you think that it should be CMMA instead of CMMI (neither is documented in the manual), I would be glad if you could try that out.
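For reference, the missing step amounts to something like the following sketch. The function name, directory layout, and file naming are assumptions for illustration, not the actual SHARC_MOLCAS.py code; only the Do_Rotate.&lt;mult&gt;.txt pattern comes from this issue.

```python
import shutil
from pathlib import Path

def save_do_rotate(scratchdir, savedir, imult=1):
    """Copy the CMS-PDFT rotation-matrix file from the master-job scratch
    directory into the savedir, so later gradient jobs can pick it up.
    Paths and the Do_Rotate.<imult>.txt naming are assumed for this sketch."""
    src = Path(scratchdir) / "Do_Rotate.txt"
    dst = Path(savedir) / f"Do_Rotate.{imult}.txt"
    if src.is_file():
        shutil.copy2(src, dst)   # preserves timestamps as well as contents
        return dst
    return None                  # nothing to copy; caller decides how to react
```

The key point is simply that the copy into savedir has to happen after the master calculation, before any gradient job tries to read the file back out.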
Best,
Sebastian
Hi again Sebastian,
I tried it in two ways: copying the Do_Rotate.txt file to the savedir, which works, but also commenting out the lines that copy the Do_Rotate.txt file from the savedir, which also works. Since generating the Do_Rotate.txt file is not an expensive task for CMS-PDFT, one can skip that step. I also double-checked that the Do_Rotate.txt files generated by the consecutive gradient calculations and the copied ones are exactly the same.
On the other hand, the CMMI and CMMA keywords that the SHARC-MOLCAS interface writes are part of CMS-PDFT, but they are not reported in the manual! After a deep search, I found that the CMMI and CMMA keywords set the minimum and maximum number of cycles for optimizing the intermediate states. However, these usually converge within the default of 100 cycles.
Another thing I noticed is that MOLCAS calculates the gradients for CMS-PDFT without including the MM gradients in the calculation. This does not occur with CASSCF, and it cannot be checked with CASPT2, as that method does not allow analytical gradients. I understand that this is a Molcas issue, which I am also deeply interested in solving. I am attaching the new input and output files, including the QM.in and QM.out files, and I am running a separate test to check the QM/MM gradients with CMS-PDFT.
We can keep discussing this issue, for either QM or QM/MM calculations using CMS-PDFT!
Best,
Gustavo.
new-CMSPDFT.tar.gz
Hi Gustavo,
many thanks, that indeed clarifies a few things! I think copying the Do_Rotate.txt for the gradient jobs does make sense: in each gradient job, the wave function should be restarted as much as possible, to avoid reconverging a wave function that has already converged in the main calculation. For smaller calculations, however, this will not be strongly noticeable in the computation time. Regarding CMMI and CMMA, it sounds like the SHARC interface can probably skip them and just go with the defaults.
About gradients for QM/MM, we recently did some research, as we are currently rewriting the MOLCAS interface (and all other interfaces) for SHARC 4. The issue is that in the "standard" Molcas-Tinker approach, the gradients are only computed approximately. Basically, the QM electron density is computed, and then the electric field at each point-charge position is obtained; the gradient on a point charge is then its charge times the electric field. This is why (internally) every point charge is given twice in the input file: once to add it to the one-electron Hamiltonian (XFIELD keyword in GATEWAY), so that the point charges affect the wave function, and once to compute the electric field at the point charges (EFLD in GATEWAY), so that the MM gradient can be obtained. However, this approach neglects some contributions to the exact gradient. It also makes it impossible to compute NACs on the point charges, even though formally they exist (moving a point charge can lead to mixing of two adiabatic states).
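The approximate scheme above can be sketched in a few lines. This is only an illustration in atomic units: the QM charge distribution is crudely represented here by point charges, whereas Molcas evaluates the field from the actual electron density and nuclei; the function and its signature are inventions for this sketch.

```python
import numpy as np

def point_charge_gradients(mm_charges, mm_pos, qm_charges, qm_pos):
    """Approximate MM gradients in the 'charge times field' scheme:
    the energy gradient on MM point charge i is -q_i * E(r_i), where
    E(r_i) is the electric field produced by the QM system at r_i.
    Here the QM system is modeled as point charges (illustration only)."""
    grads = np.zeros_like(mm_pos, dtype=float)
    for i, (q, r) in enumerate(zip(mm_charges, mm_pos)):
        d = r - qm_pos                              # vectors from each QM source
        r3 = np.linalg.norm(d, axis=1) ** 3
        E = np.sum(qm_charges[:, None] * d / r3[:, None], axis=0)
        grads[i] = -q * E                           # force on charge is q*E
    return grads
```

Note what is missing by construction: any response of the wave function to moving the point charge beyond the frozen-density field, which is exactly the neglected contribution mentioned above.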
For these reasons, in the new interface, we will treat MM atoms as dummy atoms without basis functions and with a nuclear charge equal to the MM charge. In this way, they are treated by ALASKA on an equal footing, and no extra keywords are needed whatsoever. The disadvantages are that the number of point charges is quite limited (it can be increased somewhat by recompiling OpenMolcas with the right modifications), that the generated input file is very large due to the complicated dummy-atom syntax, and that one has to take care to properly separate the QM and MM gradients afterwards. You can find a short discussion of this topic in the Molcas forum: https://molcasforum.univie.ac.at/viewtopic.php?id=932. Note that this change in the SHARC interface is only possible because we are switching to a modular approach, where we do not use the Molcas-Tinker interface but take care of the QM/MM and MM steps ourselves, separately from Molcas.
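The bookkeeping for the last point, separating the gradients after ALASKA returns one array over all centers, is simple if one fixes an ordering convention. The convention assumed here (QM atoms first, then the MM dummy atoms) is an assumption for this sketch, not a documented Molcas or SHARC convention.

```python
import numpy as np

def split_gradient(full_grad, n_qm):
    """Split a combined (n_qm + n_mm, 3) gradient array into its QM and MM
    parts, assuming the QM atoms were listed first in the Molcas input and
    the MM point charges (entered as dummy atoms) follow them."""
    full_grad = np.asarray(full_grad, dtype=float)
    return full_grad[:n_qm], full_grad[n_qm:]
```

Whatever ordering the input generator uses, the same ordering has to be applied here, since ALASKA reports gradients in input order.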
Best,
Sebastian