status-im/nim-stew

sorted set tests broken when run individually


nim c -r test_sorted_set "Delete items"

[Suite] SortedSet: Sorted list based on red-black tree
    /home/arnetheduck/status/nimbus-eth2/vendor/nim-stew/tests/test_sorted_set.nim(133, 22): Check failed: data.isOk == canDeleteOk
    data.isOk was false
    canDeleteOk was true
/home/arnetheduck/status/nimbus-eth2/vendor/nim-unittest2/unittest2.nim(848) unittest2
/home/arnetheduck/status/nimbus-eth2/vendor/nim-stew/tests/test_sorted_set.nim(136) runTestX60gensym287
/home/arnetheduck/status/nimbus-eth2/vendor/nim-stew/stew/results.nim(378) get
/home/arnetheduck/status/nimbus-eth2/vendor/nim-stew/stew/results.nim(368) raiseResultDefect

    Unhandled exception: Trying to access value with err Result: rbEmptyTree [ResultDefect]

this is because a global variable is used here:

https://github.com/status-im/nim-stew/blob/master/tests/test_sorted_set.nim#L76

suite variables should use setup: to ensure that a fresh variable is used for every test
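For concreteness, a minimal sketch of that pattern with unittest2; the seq and its values are hypothetical placeholders standing in for the real SortedSet state:

```nim
import unittest2

suite "SortedSet: Sorted list based on red-black tree":
  # hypothetical stand-in for the global `sl` defined at
  # tests/test_sorted_set.nim#L76
  var data: seq[int]

  setup:
    # runs before every test, so each test starts from a freshly
    # initialized value rather than whatever an earlier test left behind
    data = @[1, 2, 3]

  test "Delete items":
    # passes whether run alone or after the other tests
    data.setLen(data.len - 1)
    check data.len == 2
```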

mjfh commented

> suite variables should use setup: to ensure that a fresh variable is used for every test

this is exactly what is not used here; the tests are dependent on each other. Nevertheless, the output can be improved when running a single test (and it need not crash).

> this is exactly what is not used here; the tests are dependent on each other

tests should generally be independent unless there's a good reason not to be, i.e. if you want tests to share data, they should do so explicitly via an initialization that gets called on every test start, not accidentally by relying on the order in which the test runner happens to run them

mjfh commented

Understood. Nevertheless, there is some merit in writing dependent tests that report as a single section. What I can do here is skip the last test if running it is not feasible.

Gluing all the dependent tests together into a single one would need its own output with some sort of subsections, IMHO. Is there such a feature?

> there is some merit in writing dependent tests that report as a single section

the right technique for doing that is to move the initialization code into a separate function and call that function from both tests (or indeed put them under one test).
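A sketch of that shape; the freshData helper and its contents are made up for illustration:

```nim
import unittest2

# hypothetical helper: every test builds its input through this proc
# instead of inheriting state from a previously run test
proc freshData(): seq[int] =
  @[10, 20, 30]

suite "SortedSet":
  test "Insert items":
    var data = freshData()
    data.add 40
    check data.len == 4

  test "Delete items":
    # independent of "Insert items": it rebuilds its input itself
    var data = freshData()
    data.setLen(data.len - 1)
    check data.len == 2
```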

tests are not guaranteed to be run in order by the test runner: for example, parallel test runners run them in random order, and so do tools that automatically re-run failed tests. Introducing order dependence between tests is essentially undefined behaviour.