nextest-rs/nextest

nextest chooses arbitrarily not to run some tests it's found

Closed this issue · 7 comments

OS: W10
Rust: 1.76
nextest: 0.9.67

This has suddenly cropped up and I have no explanation: "29 tests" are announced at the beginning of the run, which is the correct number.

At the end of the run I get this:

Summary [ 0.599s] 25/29 tests run: 22 passed, 3 failed, 0 skipped

Even weirder, if I use the option --run-ignored all, sometimes all 29 tests are run, sometimes "27/29", "26/29", etc.

More details can be found here.

As illustrated there, starting from a "stable" run situation (all discovered tests are run), adding a totally trivial new test, such as:

#[test]
fn do_test() -> Result<()> {
    assert!(false); // intentionally failing
    Ok(())
}

... then seems to spark off this unstable running.

Is this a known phenomenon/issue? (I searched).

Hi there -- I think this may be running into the default --fail-fast behavior.

I think we should make this clearer in the UI.

Thanks. I'll try that if this sort of thing crops up in future... but in fact I just managed (hopefully) to solve the problem: I just learnt that nextest apparently executes tests in parallel. Two of my tests involved creating some files and directories in a temporary directory tree, before cleaning up at teardown. Rust has a function std::env::temp_dir() which returns a PathBuf for, seemingly, this kind of purpose. It doesn't actually create the directories involved.

Anyway I realised that the making of these files and directories was clashing. I just went out of my way to ensure that these 2 tests couldn't confuse the paths of their temporary files... and this appears to have resulted in stability. No more 26/29 ... just "29 tests run" ...

Rust has a function std::env::temp_dir() which returns a PathBuf for, seemingly, this kind of purpose. It doesn't actually create the directories involved.

Ah, that only returns the system temp dir. To create new ones most folks use https://docs.rs/tempfile/. If you're using camino (maintained by myself as well), then I wrote a camino-tempfile wrapper around tempfile.

But yes, nextest does run tests in parallel. In your case you can just create separate tempdirs. But in general, if you run into this in a way that can't be worked around, check out https://nexte.st/book/test-groups which lets you specify subsets of tests to apply mutual exclusion or rate-limiting on.
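For reference, a test-groups setup along the lines of the docs linked above looks roughly like this in `.config/nextest.toml` (the group name and the filter expression here are placeholders, not taken from this project):

```toml
# .config/nextest.toml -- sketch only; substitute your own group name
# and a filter matching the tests that share the temp directory.
[test-groups]
temp-dir-tests = { max-threads = 1 }

[[profile.default.overrides]]
filter = 'test(uses_temp_dir)'
test-group = 'temp-dir-tests'
```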

Thanks, great suggestions. I've included camino and camino-tempfile in my dependencies for this project.

I'm presuming that camino-tempfile::tempdir() is thread-safe, i.e. that you can safely use it in multiple tests (including rstest parameterised tests?) and not have to worry about using conflicting paths. But I shall find this out in due course no doubt.

Unfortunately ...

I'm having to re-open this issue. This intermittent/inconsistent non-running of tests phenomenon has returned, and the circumstances are fairly specific: as a result of a new test, and resultant tweak to the app code, several (about 9) old tests have (as I anticipated) started to fail. This large number of fails appears to have caused this intermittent/inconsistent non-running of tests to return.

I haven't yet switched to camino-tempfile, but I had properly separated the temp dirs, so I really doubt whether that will cure things.

By the way, I can see that you seem to be the "guiding mind" behind nextest. Great package, thanks.

I tried searching on "default --fail-fast behaviour nextest" but not much came up. As a general thing I wouldn't want the test run to exit on first fail.

Would be interested in what you have to say.

Later: um... right, this seems to be associated with tests which, when they fail, produce quite a bit of console output. In fact very long strings, 1000s of characters long, which I think might be more appropriately output to .txt files for examination, probably. Will double-check and see whether this can cure it in due course...

I'm presuming that camino-tempfile::tempdir() is thread-safe, i.e. that you can safely use it in multiple tests (including rstest parameterised tests?) and not have to worry about using conflicting paths. But I shall find this out in due course no doubt.

Yes, tempfile will create new directories in a race-free manner.

I'm having to re-open this issue. This intermittent/inconsistent non-running of tests phenomenon has returned, and the circumstances are fairly specific: as a result of a new test, and resultant tweak to the app code, several (about 9) old tests have (as I anticipated) started to fail. This large number of fails appears to have caused this intermittent/inconsistent non-running of tests to return.

Try out --no-fail-fast. You can also set it by default via .config/nextest.toml, as documented at https://nexte.st/book/configuration?highlight=fail-fast#configuration.
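Per the configuration docs linked above, setting that default is a one-line config fragment:

```toml
# .config/nextest.toml
[profile.default]
fail-fast = false
```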

By the way, I can see that you seem to be the "guiding mind" behind nextest. Great package, thanks.

Thank you for the kind words! Yeah, I'm the primary author and maintainer of nextest.

As a general thing I wouldn't want the test run to exit on first fail.

I definitely understand that viewpoint! Most people generally prefer that, though, and for those that don't it's always configurable.

Just to confirm, "--no-fail-fast" does appear to stop these intermittent/inconsistent results, even without trying to cut down the console output.

The latest versions of cargo-nextest now print out a clearer warning if not all tests were run, e.g.

warning: 116/268 tests were not run due to test failure (use --no-fail-fast to run all tests)

or

warning: 116/268 tests were not run due to interrupt