mzjb/DeepH-pack

A user warning during the inference step

Closed this issue · 3 comments

Hi there,
When I run the inference part, all steps before the inference step finish normally, but the program gives me the warning below. Is this a warning we can safely ignore? If not, how can we avoid (remove) it?

/share/home/zhangtao/anaconda3/envs/ZT-py39/lib/python3.9/site-packages/deeph/kernel.py:53: UserWarning: Unable to copy scripts
warnings.warn("Unable to copy scripts")

Here I also list my inference settings:

[basic]
work_dir = /work/deeph-test/workdir/inference3
OLP_dir = /work/deeph-test/workdir/olp
interface = openmx
structure_file_name = POSCAR
trained_model_dir = /work/deeph-test/workdir/trained_model/2023-04-19_11-29-45
task = [1, 2, 3, 4, 5]
sparse_calc_config = /work/deeph-test/workdir/inference3/band.json
dense_calc = True
disable_cuda = False
device = cuda:0
huge_structure = True

gen_rc_idx = False
gen_rc_by_idx =
with_grad = False

[interpreter]
julia_interpreter = ***/software/julia-1.6.6/bin/julia

[graph]
radius = -1.0
create_from_DFT = True

The band.json settings are:
{
"calc_job": "band",
"which_k": 0,
"fermi_level": 0,
"lowest_band": -10.3,
"max_iter": 300,
"num_band": 100,
"k_data": ["46 0.3333333333333333 0.6666666666666667 0 0 0 0 K Γ", "28 0 0 0 0.5 0.5 0 Γ M", "54 0.5 0.5 0 0.6666666666666667 0.3333333333333333 0 M K'"]
}

One more question:

The program seems to be stuck at step 3 (get_pred_Hamiltonian): the output files in the working directory are no longer being updated. The last update was at 12:47 on 26/04/2023; since then the files have not changed, but the program is still running now (17:47, 26/04/2023).


Many thanks for your kind help.

Best regards,

mzjb commented

Hi.

For the first issue, DeepH-pack will copy the code to the $(work_dir) for backup purposes. The warning you are seeing is usually caused by the presence of files in the $(work_dir)/pred_ham_std/src directory, which prevents the copying process.
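
If you want to clear that leftover copy before re-running, something like the following should work. This is a minimal sketch, not part of DeepH-pack itself; the work_dir path is taken from your ini and the backup location from the directory named above, so adjust both if yours differ.

import shutil
from pathlib import Path

# Assumed paths: work_dir from the ini above, backup dir as named in this reply.
work_dir = Path("/work/deeph-test/workdir/inference3")
backup_src = work_dir / "pred_ham_std" / "src"

if backup_src.exists():
    # Remove the stale backup so DeepH-pack can copy its scripts again.
    shutil.rmtree(backup_src)
    print(f"Removed stale backup: {backup_src}")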

As for the second issue, I'm not entirely sure what's causing it, but you could try adding restore_blocks_py = False under the [basic] section of the inference ini configuration file to see if that helps.
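
That is, something like this (a sketch showing only the added line; the rest of your [basic] section stays as it is):

[basic]
restore_blocks_py = False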

mzjb commented

I had not noticed that you were already performing the fifth step, the band structure calculation, which means your issue is not related to restore_blocks_py.

Regarding the number of atoms in your material, if it exceeds 100, you should use sparse diagonalization instead of dense diagonalization. To do so, you should set dense_calc to False.
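
In your posted ini that means changing one line in the [basic] section (sketch of the relevant line only):

[basic]
dense_calc = False

With dense_calc = False, the sparse solver should then pick up the band.json you set via sparse_calc_config.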

I suggest that you avoid replying to GitHub issues via email, as this will display additional irrelevant content on the GitHub web page.