hfp/xconfigure

neb.x in Quantum ESPRESSO fails to compile with no error message

sweitzner opened this issue · 4 comments

Using the configure scripts for Sandy Bridge processors, we compiled both ELPA and Quantum ESPRESSO v6.0, and pw.x builds perfectly well. However, neb.x fails to build even though the console reports zero warnings (to our knowledge) and no apparent error message. We also see an empty Fortran source file named "init_us_1.f90" appear in the NEB/src directory. Any ideas about what could be causing this?

hfp commented

I am trying to reproduce this now using the Intel Compiler (2017.0.098) and ./configure-qe-snb.sh (rather than ./configure-qe-snb-omp.sh). If it does not reproduce, I will ask you for more details (compiler version, etc.).

hfp commented

The problem is now resolved. You need to reconfigure QE using the updated configure wrapper script(s). Please note that "wget" may not overwrite the existing scripts, so you may want to delete them first.
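For example, refreshing the wrappers might look like this (the download URLs are an assumption based on the usual xconfigure repository layout — check the QE recipe for the exact paths):

```shell
# Remove the stale wrapper scripts so wget does not keep the old copies
rm -f configure-qe-snb.sh configure-qe-snb-omp.sh

# Re-download the updated wrappers (URL pattern assumed; see the QE recipe)
wget https://github.com/hfp/xconfigure/raw/master/config/qe/configure-qe-snb.sh
wget https://github.com/hfp/xconfigure/raw/master/config/qe/configure-qe-snb-omp.sh
chmod +x configure-qe-snb.sh configure-qe-snb-omp.sh
```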

Prior to the actual fix, I also did some cleanup, which renamed the "tagname" of the default ELPA build from "2017" to "default", along with the expected tagname in the QE recipe. The tagname is reflected in the directory name you find under your "elpa" directory (after make install of the ELPA recipe). I recommend renaming 'elpa/2017-snb-omp' (and 'elpa/2017-snb') to 'elpa/default-snb-omp' (and 'elpa/default-snb'). Alternatively, you can configure QE using ./configure-qe-snb-omp.sh 2017 (please note the "2017" argument, which is the tagname). By the way, the tagname is now documented for both the ELPA and QE recipes; it simply allows building multiple variants of ELPA and selecting a variant when building QE.
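To illustrate, either of the following should work (directory names taken from the comment above; adjust them to your actual install prefix):

```shell
# Option 1: rename the existing ELPA install directories to the new default tagname
mv elpa/2017-snb-omp elpa/default-snb-omp
mv elpa/2017-snb     elpa/default-snb

# Option 2: keep the old directories and pass the old tagname when configuring QE
./configure-qe-snb-omp.sh 2017
```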

As a general note (ahead of writing my "How to Run QE" guide), I recommend building the OpenMP version of QE (and ELPA). When you scale out to more nodes, it helps you scale a bit further, and some parallelization levels in QE now really start to benefit from OpenMP (more than in the past). On the other hand, you then need to set OMP_NUM_THREADS and deal with "hybrid parallelism" (MPI+OpenMP). I hope to come up with a reasonable "How to Run" recipe soon. In general, you will probably want to orchestrate the NPOOL, NDIAG, and NTG command-line arguments.
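As a sketch of what such a hybrid MPI+OpenMP launch might look like (rank, thread, and pool counts below are placeholders, not a tuned recommendation; -npool, -ndiag, and -ntg are QE's command-line options for k-point pools, diagonalization groups, and task groups):

```shell
# Hypothetical example: 16 MPI ranks with 4 OpenMP threads each
export OMP_NUM_THREADS=4
mpirun -np 16 ./bin/pw.x -npool 4 -ndiag 4 -ntg 2 -i pw.in > pw.out
```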

Thank you, this seems to have fixed the issues we were having.

hfp commented

Thank you! I am closing the issue now.