passaH2O/dorado

Build failing on Ubuntu (Python 3.8+)

wrightky opened this issue · 1 comment

The last two build attempts have failed on ubuntu-latest with Python 3.8 and 3.9 (all other builds are passing). Specifically, we're getting assertion errors in three of the tests in tests/test_examplecases.py::TestRCM. Here's the failure message:

=========================== short test summary info ============================
FAILED tests/test_examplecases.py::TestRCM::test_few_steps_RCM - assert [0, 700759344...472986.069204] == [0, 700759344...473384.876724]
  At index 1 diff: 7007593448.337233 != 7007593448.337235
  Full diff:
  - [0, 7007593448.337235, 10439964733.462337, 13698473384.876724]
  ?                     ^           ^^^^ ^^^^         ^^ ^ ^^ ^
  + [0, 7007593448.337233, 10439964528.877066, 13698472986.069204]
  ?                     ^          +++++ ^^ ^         ^^ ^ ^ ^ +
FAILED tests/test_examplecases.py::TestRCM::test_set_time_RCM_previousdata - assert [0, 7007593448.337233] == [0, 7007593448.337235]
  At index 1 diff: 7007593448.337233 != 7007593448.337235
  Full diff:
  - [0, 7007593448.337235]
  ?                     ^
  + [0, 7007593448.337233]
  ?                     ^
FAILED tests/test_examplecases.py::TestRCM::test_set_time_RCM - assert [0, 7007593448.337233] == [0, 7007593448.337235]
  At index 1 diff: 7007593448.337233 != 7007593448.337235
  Full diff:
  - [0, 7007593448.337235]
  ?                     ^
  + [0, 7007593448.337233]
  ?                     ^
========================= 3 failed, 88 passed in 4.55s =========================

It looks to me like our random seed isn't guaranteeing bit-identical travel times on newer versions of Python, because the error shows up quite far down the list of sig-figs: the relative difference is on the order of 10^-8 (see the quick check below). I'm thinking an acceptable fix would be to change the assertion to check whether the results are within some tolerance of each other, maybe a relative tolerance of 10^-6? Let me know if that works for you @elbeejay, happy to work on this later.
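For reference, here's a quick sanity check using the values copied from the failure output above; it confirms the mismatch is relative rather than absolute:

# Values copied from the test_few_steps_RCM failure output above.
expected = [0, 7007593448.337235, 10439964733.462337, 13698473384.876724]
observed = [0, 7007593448.337233, 10439964528.877066, 13698472986.069204]

for exp, obs in zip(expected[1:], observed[1:]):
    print(f"abs diff: {abs(exp - obs):.6g}, rel diff: {abs(exp - obs) / exp:.2e}")

# Prints relative differences between roughly 3e-16 and 3e-8, so a 1e-6
# relative tolerance would comfortably cover the drift.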

That sounds good to me @wrightky - maybe pytest.approx would be the most appropriate fix here, with a relative tolerance of a hundredth or something?
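Something along these lines (just a sketch; the variable names are illustrative, not the actual ones in tests/test_examplecases.py):

import pytest

# Illustrative only -- the real test builds this list from the model's
# travel times; names here are placeholders.
expected_times = [0, 7007593448.337235]
computed_times = [0, 7007593448.337233]

# pytest.approx compares the sequences element-wise with a relative
# tolerance, so small platform-dependent floating-point drift no longer
# fails the test.
assert computed_times == pytest.approx(expected_times, rel=1e-6)

A rel of 1e-6 matches the threshold you floated above; loosening it to 1e-2 would also work if we want extra headroom.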