jewettaij/moltemplate

Tests fail to run: no tests ran in 0.09 seconds

yurivict opened this issue · 7 comments

setup.py says that pytest is required for tests, but pytest does not find any tests to run:

==================================================================================== test session starts ====================================================================================
platform freebsd13 -- Python 3.8.12, pytest-4.6.11, py-1.9.0, pluggy-0.13.1
rootdir: /disk-samsung/freebsd-ports/science/py-moltemplate/work-py38/moltemplate-2.20.1
plugins: forked-1.0.2, hypothesis-6.32.1, cov-2.9.0, xdist-1.32.0, rerunfailures-10.1, timeout-1.4.2, mock-1.10.4
collected 0 items                                                                                                                                                                           

=============================================================================== no tests ran in 0.09 seconds ================================================================================
*** Error code 5

Version: 2.20.1
Python-3.8
pytest-4.6.11
FreeBSD 13

Hi Yuri
It's true that I'm not using pytest. In the latest commit (327d839), I removed references to pytest from setup.py. Does this resolve the issue?
-Andrew

Are there instructions on how to run the tests?

Hi Yuri

I did not expect anyone else besides myself to run these tests. The instructions are in the .circleci/config.yml file, although they might seem a little bit confusing because of the way I wrote moltemplate. Here are the details:

First, some background: Although most of moltemplate is written in python, its python scripts are typically invoked one after another by a bash script named "moltemplate.sh". Each python script creates temporary files which the next python script reads, so the output of one script becomes the input of the next. I wrote it this way because I originally ran into problems with running out of memory when I tried to do everything from within python. (I didn't know about the "gc" module or python's __slots__ at the time.)
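
To make the data flow concrete, here is a rough sketch of that pattern. The script names and temporary file names below ("stage1.py", "tmp_stage1.txt", and so on) are hypothetical stand-ins, not the actual scripts that moltemplate.sh runs:

#!/usr/bin/env bash
set -e                                             # stop at the first stage that fails
python stage1.py system.lt      > tmp_stage1.txt   # each stage writes temporary files...
python stage2.py tmp_stage1.txt > tmp_stage2.txt   # ...which the next stage reads
python stage3.py tmp_stage2.txt > system.data      # the last stage writes the final output
rm -f tmp_stage1.txt tmp_stage2.txt                # clean up the intermediate files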

As a result, I don't use python (or pytest) to test moltemplate. Instead I run the "moltemplate.sh" script multiple times, under different conditions, and check that the output files it generates are correct each time. (I also test several other scripts, such as "ltemplify.py", which are included with moltemplate.)

First you will have to install moltemplate using pip. To do this safely without messing up your existing python environment, I use a virtual environment and install moltemplate there:

python -m venv ~/venv_moltemplate
source ~/venv_moltemplate/bin/activate
git clone https://github.com/jewettaij/moltemplate ~/moltemplate
cd ~/moltemplate
pip install .  # (install moltemplate into the ~/venv_moltemplate virtual environment; do not add --user inside a venv, pip will refuse it)

Then run these commands to test moltemplate.sh. (Note: I use "shunit2" to exit with a non-zero exit code if any of the "assertTrue" statements contained in the various .sh files fail. A rough sketch of what those .sh files look like follows the command list below.)

git clone https://github.com/kward/shunit2 shunit2
bash tests/test_read_coords_pdb.sh
bash tests/test_ltemplify.sh
bash tests/test_oplsaa.sh
bash tests/test_compass.sh
python tests/test_genpoly_lt.py
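
Here is a rough sketch of the structure those .sh files follow. Everything in it is illustrative (the example directory and the specific checks are hypothetical), but the overall shape (one or more functions whose names start with "test" containing "assertTrue" checks, followed by sourcing shunit2) is the usual way shunit2-based test scripts are written:

#!/usr/bin/env bash
# Hypothetical example of a shunit2-style test for moltemplate.sh

test_example_builds_data_file() {
  cd some_example_directory                  # hypothetical example to build
  moltemplate.sh system.lt                   # run the whole moltemplate pipeline
  assertTrue "system.data was not created" "[ -s system.data ]"
  assertTrue "system.data is missing an Angles section" \
             "grep -q '^Angles' system.data"
  cd ..
}

. ./shunit2/shunit2   # source shunit2 (adjust the path to wherever it was cloned)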

Do you have any suggestions regarding how to implement these tests?

I will close this issue. Feel free to reopen it if I have failed to address your concerns.

Could you perhaps create a single shell script that runs these test commands (something like the sketch below)?

bash tests/test_read_coords_pdb.sh
bash tests/test_ltemplify.sh
bash tests/test_oplsaa.sh
bash tests/test_compass.sh
python tests/test_genpoly_lt.py
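
For example, a minimal wrapper along the following lines (the name "run_tests.sh" is just a suggestion) would run every suite and exit non-zero if any of them fail:

#!/usr/bin/env bash
# run_tests.sh (hypothetical): run every test suite and report a combined exit status
status=0
for t in tests/test_read_coords_pdb.sh \
         tests/test_ltemplify.sh \
         tests/test_oplsaa.sh \
         tests/test_compass.sh
do
  bash "$t" || status=1                      # keep going even if one suite fails
done
python tests/test_genpoly_lt.py || status=1
exit "$status"

Then packagers (and CI) would only need to invoke a single command.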

Also, one of the tests fails:

(python version 3.9.13 (main, Sep  3 2022, 01:12:14) 
[Clang 13.0.0 (git@github.com:llvm/llvm-project.git llvmorg-13.0.0-0-gd7b669b3a)
########################################################
##            WARNING: atom_style unspecified         ##
## --> "Data Atoms" column data has an unknown format ##
##              Assuming atom_style = "full"          ##
########################################################
parsing the class definitions... done
looking up classes... done
looking up @variables... done
constructing the tree of class definitions... done

class_def_tree = (88, 89, 90, 91, type253, type261, type272, type273)

constructing the instance tree...
 done
sorting variables...
  sorting variables in category: @/atom:
  sorting variables in category: @/bond:
  sorting variables in category: $/atom:
  sorting variables in category: $/mol:
  sorting variables in category: $/bond:
 done
 done
building templates... done
writing templates... done
building and rendering templates... done
writing rendered templates...
 done
writing "ttree_assignments.txt" file... done

expanding wildcards in "_coeff" commands

lttree_postprocess.py v0.6.2 2021-4-20
lttree_postprocess.py: -- No errors detected. --

WARNING: no angle coeffs have been set!
WARNING: no dihedral coeffs have been set!
WARNING: no improper coeffs have been set!
postprocessing file "system.in.settings"

-------------------------------------------------------------
If this software is useful in your research, please cite
Jewett et al. J.Mol.Biol. (2021) (https://doi.org/10.1016/j.jmb.2021.166841)
-------------------------------------------------------------
~/moltemplate-2.20.14/tests/ethylene+benzene
ASSERT:cleanup_moltemplate.sh failed: system.data missing impropers

Ran 1 test.

FAILED (failures=2)
*** Error code 1

Hi Yuri
Thanks for the suggestion, and also for letting me know about the failed test. Unfortunately you reported this at a time when my life is upside down. I just moved far away and started a new job.
It would be possible to put all of the tests in a single .sh file, but I would have to invoke shunit2 a different way, and I don't have a lot of time to tinker with this right now. What would be the advantage? Incidentally, I did modify the .circleci/config.yml file slightly. (The "shunit2" directory is now stored in the "tests" directory.)
Also: I was not able to reproduce the failed test (either at home or automatically using circleci). When I run the tests, they complete without error.
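
Regarding the single-file idea: one possible way to combine them (just a sketch, not necessarily how I would actually do it) is a top-level file in which each test function runs one of the existing scripts and asserts on its exit code, with shunit2 sourced once at the bottom. The file name and the path to shunit2 below are placeholders:

#!/usr/bin/env bash
# all_tests.sh (hypothetical): a single shunit2 suite that wraps the existing test scripts

test_read_coords_pdb() { bash tests/test_read_coords_pdb.sh; assertTrue "read_coords_pdb tests failed" $?; }
test_ltemplify()       { bash tests/test_ltemplify.sh;       assertTrue "ltemplify tests failed" $?; }
test_oplsaa()          { bash tests/test_oplsaa.sh;          assertTrue "oplsaa tests failed" $?; }
test_compass()         { bash tests/test_compass.sh;         assertTrue "compass tests failed" $?; }
test_genpoly_lt()      { python tests/test_genpoly_lt.py;    assertTrue "genpoly_lt test failed" $?; }

. tests/shunit2/shunit2   # source shunit2 once (adjust the path to wherever it lives)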