feat: Allow passing `stats` object to `pass` and `fail` functions
mrazauskas opened this issue · 6 comments
It would be useful to allow passing a `stats` object to the `pass` and `fail` functions.
Use case: some runners (for instance, `jest-runner-tsd`) count passing / failing tests per file. If I get it right, at the moment `create-jest-runner` assumes that the whole file passed or failed. Hence, `jest-runner-tsd` implements custom `pass`, `fail` and `toTestResult` functions. This looks like unnecessary boilerplate code.
Alternatively, it would be possible to expose `toTestResult` from `create-jest-runner`. Not sure about this. I think it is better to extend `pass` and `fail` to allow passing `stats` (with the defaults as they are set now):
```ts
interface Stats {
  failures: number;
  passes: number;
  pending: number;
  todo: number;
}
```
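To make the suggestion concrete, a call under the proposed extension could look roughly like this. This is a sketch only: `stats` is the proposed addition, `pass` is re-declared locally so the snippet stands alone, and the file path and counts are made up.

```ts
// Hypothetical sketch of the proposed call shape (not part of the current API).
// `pass` is declared here only so the snippet type-checks on its own.
declare function pass(options: {
  start: number;
  end: number;
  test: { path: string; title?: string };
  stats?: { failures: number; passes: number; pending: number; todo: number };
}): unknown;

// A runner that counts results per file could then report them in one call:
pass({
  start: Date.now(),
  end: Date.now(),
  test: { path: '/project/__tests__/types.test.ts' },
  stats: { passes: 3, failures: 0, pending: 0, todo: 1 },
});
```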
Glad to work on this. Just wanted to agree on the directions first.
Yeah, seems sensible
Hi @mrazauskas, here are some directions for you to get started.
A jest runner function returns a test result object, which is of type `import('@jest/test-result').TestResult` and can be created using the `toTestResult` utility function. The `pass`, `fail`, etc. functions are just utilities that wrap `toTestResult` to provide more ergonomic usage to consumers of this package.
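Roughly, the wrapping pattern looks like the sketch below. The option names handed to `toTestResult` are assumptions for illustration, not a copy of the library's source.

```ts
import type { TestResult } from '@jest/test-result';

// Illustrative only: the shape of the options passed to `toTestResult` below
// is an assumption, not the library's verbatim source.
declare function toTestResult(options: {
  stats: { failures: number; pending: number; passes: number; todo: number };
  tests: Array<{ duration: number; testPath: string; errorMessage: string | null }>;
  jestTestPath: string;
}): TestResult;

// A helper like `pass` just fills in the "one passing test" defaults.
function pass({ start, end, test }: { start: number; end: number; test: { path: string } }): TestResult {
  return toTestResult({
    stats: { failures: 0, pending: 0, passes: 1, todo: 0 },
    tests: [{ duration: end - start, testPath: test.path, errorMessage: null }],
    jestTestPath: test.path,
  });
}
```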
So, taking the `pass` function for instance, you could change the interface of its `Options` to this:
```ts
interface Options {
  start: number;
  end: number;
  test: TestDetail;
  numPassed?: number; // <-- added
}
```
such that it takes an optional property denoting the number of passing tests, then add a default value of 1 here, like:
```ts
function pass({ start, end, test, numPassed = 1 }: Options) { /*...*/ }
```
similar to what `jest-runner-tsd` did here.
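A runner's run file could then use the modified helper along these lines. This is a sketch under the assumptions above: `numPassed` is the proposed option (not in the published types), and `countPassedAssertions` is a made-up placeholder for however the runner actually counts results, so both are declared locally rather than imported.

```ts
import type { TestResult } from '@jest/test-result';

// Declared here only so the sketch stands alone; see the assumptions above.
declare function pass(options: {
  start: number;
  end: number;
  test: { path: string };
  numPassed?: number;
}): TestResult;
declare function countPassedAssertions(testPath: string): number; // hypothetical

const run = ({ testPath }: { testPath: string }): TestResult => {
  const start = Date.now();
  const numPassed = countPassedAssertions(testPath);
  const end = Date.now();
  return pass({ start, end, test: { path: testPath }, numPassed });
};

export default run;
```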
This package is tested using jest, by spawning another jest process inside each test run and snapshotting the console output after stripping off timestamps. For example see here.
After making modifications to the `pass` and `fail` functions, add some tests to the `integrationTests/__fixtures__` folder to cover this piece of functionality.
Feel free to tag me if you have any problems :)
@lokshunhung Thanks for these friendly guidelines.
Apologies, I forgot to post an update here. I was playing with the code and was trying to implement this idea in different ways. Have to admit – the idea was wrong (;
Initially I wanted to have something like this: `pass({ start, end, test, { numPassed: 3, numTodo: 1, numFailed: 1 } })`. Wait... Why should `pass` be able to report something failing? That's odd.
The next attempt was more or less what you propose: `pass({ start, end, test, 5 })`. Looks better. All is good if all five tests pass all the time, but what if two of them fail? I would somehow have to call `pass({ start, end, test, 3 })` and `fail({ start, end, test, 2 })` at the same time. This does not work.
Here I realised that `pass`, `fail`, `todo`, and `pending` are sort of constants. They are meant to report the result of just one test. They are like shortcuts for this simple use case. Anything more complex needs its own implementation. That makes sense, right?
At the moment I am working on a more complex runner. Must admit that a custom implementation instead of the helpers was the better option in this case.
In the end I could see that my initial idea was wrong, but I had a chance to understand the use of these helper functions. They are not documented in the Readme. So I kept the issue open just to remember to document them.
Let's start with the syntax errors in your reply (Please skip this if those are typos).
For `pass({ start, end, test, { numPassed: 3, numTodo: 1, numFailed: 1 } })`, I think you mean something like `pass({ start, end, test, stats: { numPassed: 3, numTodo: 1, numFailed: 1 } })`, where the argument passed to the `pass` function is an object with the keys `start`, `end`, `test`, and `stats`, and the value corresponding to the key `stats` is an object of value `{ "numPassed": 3, "numTodo": 1, "numFailed": 1 }`. The keys `start`, `end`, `test` correspond to identifiers of the same name that are in scope.
Take `pass({ start, end, test, numPassed: 5 })` as an example; it is different from `pass(start, end, test, 5)`. The first one passes an object of value `{ "start": start, "end": end, "test": test, "numPassed": 5 }` as the first argument to the function `pass`, while the second one passes `start`, `end`, `test`, `5` as four separate arguments.
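Spelled out as a standalone snippet (with made-up function names), the difference is plain object-literal shorthand versus positional arguments:

```ts
// Hypothetical functions, named only to show the two calling conventions.
declare function takesOptionsObject(options: { start: number; end: number; numPassed: number }): void;
declare function takesPositionalArgs(start: number, end: number, numPassed: number): void;

const start = Date.now();
const end = Date.now();

// Object-literal shorthand: ONE argument, an object whose `start` and `end`
// keys pick up the identifiers of the same name from the surrounding scope.
takesOptionsObject({ start, end, numPassed: 5 });
// equivalent to: takesOptionsObject({ start: start, end: end, numPassed: 5 });

// Positional style: THREE separate arguments, no object involved.
takesPositionalArgs(start, end, 5);
```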
> Here I realised that `pass`, `fail`, `todo`, and `pending` are sort of constants.
I don't think it's correct to say they are constants. They are functions that return an object of type `import('@jest/test-result').TestResult`.
You can copy and paste this snippet inside your IDE of choice and hover over it to see the actual type definition:

```ts
import type { TestResult } from '@jest/test-result';

type _ = TestResult; // <-- hover over me
```
> They are meant to report the result of just one test.
A little bit off here. They are meant to represent the result of one execution of the test runner.
`createJestRunner` accepts a file path, which should be the file that contains the test runner function (see this as an example). This function performs its magic and reports the results back to jest. In one invocation of this function, there can be multiple "tests" executed. What counts as one "test", and how many "tests" there are in each invocation, is up to the implementation of this function to decide.
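For illustration, a minimal run file could look roughly like the sketch below; the file layout, the `checkFileSomehow` helper and the exact `pass`/`fail` option shapes are assumptions, only the overall pattern matters.

```ts
// run.ts — a sketch; the entry point would be created with something like
// `createJestRunner(require.resolve('./run'))`. One invocation of this
// function handles one file; how many "tests" it reports for that file is
// entirely up to this implementation.
import { pass, fail } from 'create-jest-runner';

declare function checkFileSomehow(testPath: string): string[]; // hypothetical per-file check

const run = ({ testPath }: { testPath: string }) => {
  const start = Date.now();
  const problems = checkFileSomehow(testPath);
  const end = Date.now();
  return problems.length === 0
    ? pass({ start, end, test: { path: testPath } })
    : fail({ start, end, test: { path: testPath, errorMessage: problems.join('\n') } });
};

export default run;
```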
> They are not documented in the Readme.
Unfortunately you are correct. The documentation in the readme is not very good. When I first tried to use this library, I ended up reading the source code, then later contributing to it, but ultimately didn't update the readme for better documentation (oops). I hope I cleared some of your doubts. If not, please tag me with follow-up questions :)
I think I have a half-baked runner implemented somewhere on my computer; maybe you could use it as a reference for your runner. I'll try digging tomorrow and see if I can find it.
In the meantime, may I recommend taking a look at these for reference:
Those are just typos, of course.
Just "sort of constants", not constants in the strict sense.
Indeed I was reading the source code too. This helped to shape the runner I am developing. All is clear and all is working.
@mrazauskas I found it, here you go (the basic spell-checking functionality sort of works). I think none of the runners that I listed in my last message are written in TypeScript.
Just "sort of constants", not constants in the strict sense.
I guess that's one way to put it. You can treat each one as a template for a possible outcome of the test result.