Report progress from console runner (nunit3-console.exe)
Closed this issue · 14 comments
@shift-evgeny commented on Fri Aug 12 2016
When running many tests using nunit3-console.exe it would be useful to see how many tests have been run and, ideally, how many remain to be run, to get a rough idea of how far along the testing process is. It would also be useful to show the currently running test, so that tests that freeze or are particularly slow can be easily identified. #1139 and #1226 are useful if you have custom output that you want your tests to write, but this is about a general progress indicator for all tests. I believe NUnit 2 used to do something like this.
@rprouse commented on Fri Aug 12 2016
I would like to see progress reported also, but to do so, we need to load all of the assemblies at the start and count all the tests. We just added changes to only load assemblies as we run them because of the high memory usage otherwise. Maybe we could add progress for each test assembly?
A workaround for seeing tests as they are run is to add the --labels=All command-line option, which will cause the test name to be output after each test finishes.
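For example (assembly name is illustrative):

```
nunit3-console MyTests.dll --labels=All
```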
@CharliePoole commented on Fri Aug 12 2016
NUnit 2 showed progress in the console runner by displaying a single char for each test as it finished. Normally it would be '.' but an 'F' was displayed for failures. There was no total count of tests.
@shift-evgeny Can you suggest a particular console implementation that you'd like to see? Are you suggesting we take over some lines of the console display and keep updating it?
As @rprouse points out, we just removed the requirement to load and count all tests before running any of them. Folks who have 50 or so assemblies will benefit from this, so we don't want to remove it. We could show a count of assemblies and a separate count of tests within each assembly.
@shift-evgeny commented on Fri Aug 12 2016
Yes, I think showing the progress per assembly would be a good compromise. (After running a set of tests once or twice a programmer has a pretty good idea of which assemblies are the slow ones.)
Yes, I think it would be nice to keep updating the same lines - though this would have to be tested to see how it behaves in TeamCity. I imagine something like this:
MyFirstTestAssembly [......... ] (090/200)
CurrentlyRunningTestInMyFirstTestAssembly
MySecondTestAssembly [....F...........E............ ] (140/192)
CurrentlyRunningTestInMySecondTestAssembly
The F is for failure as in NUnit 2, and something else for error (E or whatever NUnit 2 used to show).
Thanks for the --labels=All tip, that could be a useful workaround. What is the difference between --labels:On and --labels:Off, though? Neither of them shows anything and the documentation doesn't explain this at all.
@rprouse commented on Fri Aug 12 2016
I liked the way MbUnit did progress, they took over the last line of the console and created a progress bar and % complete that constantly updated as the tests ran. We could modify that and take over two lines, the first could be a progress for the assemblies and the second for the tests? That might be hard when we are running multiple assemblies in parallel though. We also need to handle test output going above the progress bar in the output.
It also needs to work with captured output in CI systems, so it should probably be opt-in with a command line argument.
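For what it's worth, a minimal sketch of that take-over-the-last-line technique (a standalone illustration, not NUnit code, and assuming a single thread writes to the console):

```csharp
using System;
using System.Threading;

class ProgressLine
{
    static void Main()
    {
        const int total = 200;
        for (int done = 1; done <= total; done++)
        {
            Thread.Sleep(10); // simulate a test finishing

            int percent = done * 100 / total;
            int filled = percent * 30 / 100;
            string bar = new string('.', filled).PadRight(30);

            // '\r' returns the cursor to the start of the line, so the
            // bar repaints in place instead of scrolling.
            Console.Write($"\r[{bar}] {percent,3}% ({done}/{total})");
        }
        Console.WriteLine();
    }
}
```

Handling test output would mean erasing the bar, printing the output above it, and redrawing, which is where it gets fiddly.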
@rprouse commented on Fri Aug 12 2016
Correct me if I am wrong @CharliePoole, this is off the top of my head. --labels=On will print out labels for tests that have test output, --labels=All will print out labels for all of your tests, and Off turns them off. If the docs confused you, we should probably update them.
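In other words (invocations are illustrative):

```
nunit3-console MyTests.dll --labels=Off   # no labels
nunit3-console MyTests.dll --labels=On    # label only tests that produce output
nunit3-console MyTests.dll --labels=All   # label every test
```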
@CharliePoole commented on Fri Aug 12 2016
@rprouse Exactly. There may be a current bug in that All is printing the labels at the end of the test. I originally designed it to print at the start of the test so you knew what test was running when it hung.
Another option would be a pop-up progress window. I've seen some great download windows on Linux that handle as many threads of execution as you have, with a separate bar for each one.
@shift-evgeny commented on Fri Aug 12 2016
No, please, no pop-up windows from a console app!
@CharliePoole commented on Fri Aug 12 2016
Never? Not even if you typed nunit3-console my.dll --popup?
Seriously, I'd say it's a matter of taste. Some folks think that a curses-style screen update is nasty.
@CharliePoole commented on Fri Aug 12 2016
We could do it with one line per agent, but you could have more agents than lines and then you wouldn't see anything. For a one- or two-line implementation, we could get the counts from the first agents right away, from the start message for each assembly. So it would work as expected so long as the number of assemblies was less than or equal to the number of agents specified. For more assemblies, you would appear to lose ground in the progress bar as each new assembly was loaded. Not perfect, but not the end of the world either.
Also, if we used a numeric format rather than a visual bar we could probably fit six to eight assembly counters on a single line.
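For illustration (invented names and counts), such a line might look like:

```
Asm1 090/200  Asm2 140/192  Asm3 000/075  Asm4 012/050
```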
Just throwing out ideas for the moment. :-)
@shift-evgeny commented on Fri Aug 12 2016
Pop-up windows are a horrible interruption to whatever the user is doing, sometimes even stealing keystrokes or mouse clicks. They would be especially bad for something like NUnit console, which a user would often want to run in a background window and maybe check on once in a while. I'd certainly never enable it.
I'm not sure how to deal with too many agents/assemblies. Is it likely that a user would have more than 30 or so agents? If they did, wouldn't they see the last 30, rather than nothing? I guess they could scroll the window up to see more. Not ideal, but I can't think of anything else at the moment.
@CharliePoole commented on Fri Aug 12 2016
In this case, however, the window would be up for the duration of the run. The user could move it elsewhere to keep the progress in view, or minimize it. It could even show a summary progress state when minimized. Granted, not everyone's cup of tea.
Regarding agents, the current default is to not limit the number of agents. You get one for each assembly. I've been amazed at the number of test assemblies some users want to run together. Now that we allow loading of the assemblies as-needed, there is more motivation for the user to specify the number of agents as something like the number of cores, but that's brand new and not something users will discover easily - unless they actually read the release notes, that is!
@dybs commented on Fri Aug 12 2016
What about an option to switch between updating the progress per assembly (as discussed above, maybe --progress=summary), or showing the pass/fail/error status of a test once it completes like NUnit 2 (perhaps --progress=detailed)? Or maybe this could just be an option added to --labels=All (such as --labels=Result)? In my case, I have a couple thousand test cases I'm running through and I'd like to see in real-time which ones fail so I can investigate those cases while the remaining tests continue to run. Even though I can see which tests have executed using --labels=All, I have no idea if they passed or failed.
@CharliePoole commented on Fri Aug 12 2016
For implementation purposes, having a progress display (curses-style or windows dialog) is fundamentally different from changing the labels option. Changing what labels does is relatively minor: we would tweak some existing code. The progress display requires a few new classes. Obviously, neither is rocket science.
I like the idea of --labels=result or alternatively --labels=after. We could support Before and After, letting On be a synonym for Before. It seems to be a separate issue from this one, however, and it definitely requires different code.
@dybs commented on Fri Aug 12 2016
OK, I wasn't sure if it would be separate or not since it's still somewhat related to reporting progress. Should I create a separate issue for the --labels=result idea?
@CharliePoole commented on Fri Aug 12 2016
@dybs I think that would be best, since this solution doesn't seem to do anything for @shift-evgeny, who created this issue. :-) Also, your idea is what we usually label as "easyfix" to encourage newcomers to submit PRs.
@dybs commented on Fri Aug 12 2016
@shift-evgeny Sorry for hijacking your issue.
@CharliePoole commented on Fri Aug 12 2016
It helps us to keep things separated, but it also helps the folks who ask for fixes or features. Mixed issues usually get handled and prioritized based on the hardest, vaguest, most uncertain piece of work they contain, and can only be assigned to somebody with a level of skill such that they can do every part. So if you have something more or less trivial, you want it to be by itself as an issue.
@nunit/contributors Thoughts on this? Should it be in our backlog? Do we need to do some design?
I think full test progress would require a large refactor, and I think that the changes to labels are probably good enough.
Could we print out labels for assemblies too? We use the console to run >100 assemblies, and occasionally a test gets added that can deadlock, causing the console to sit there forever, and it's difficult to trace which assembly is hung. In NUnit 2 we wrote a listener that logged assembly start/finish events to console.
I'm not sure if this is the correct place to discuss this, I'm happy to make an issue for it if you'd like me to.
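For what it's worth, in NUnit 3 something similar could probably be done as an engine extension; here is a rough sketch using the ITestEventListener extension point (the exact XML event names are from memory and worth verifying against the engine docs):

```csharp
using System;
using NUnit.Engine;
using NUnit.Engine.Extensibility;

// Sketch of an engine extension that echoes suite start/finish
// events so a hung assembly can be identified from the console.
[Extension]
public class SuiteProgressListener : ITestEventListener
{
    public void OnTestEvent(string report)
    {
        // Engine events arrive as XML fragments, e.g. <start-suite ...>
        // when a suite (including an assembly) begins and
        // <test-suite ...> when it finishes; echo just those.
        if (report.StartsWith("<start-suite", StringComparison.Ordinal) ||
            report.StartsWith("<test-suite", StringComparison.Ordinal))
        {
            Console.WriteLine(report);
        }
    }
}
```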
I'd like to chime in an additional use-case for this (or something/anything like it):
The Travis CI platform (and perhaps others too) has a timeout rule whereby it cancels a build if no output has been made to the console for 10 minutes (seems like the intention is to detect & cancel hanging builds). Travis provide a workaround for this but it's not elegant.
When using NUnit to drive actual unit tests, this 10 minute timeout isn't a problem for me - they complete far quicker than 10 mins. However when NUnit is driving BDD/Browser Automation tests they can take quite a bit longer than 10 mins for any decent-sized suite.
So - from my perspective I'd like it if nunit3-console provided a way (in all honesty, any way) of "keeping the console alive" while it's running tests. If that means it's reporting progress then wonderful because it's useful information, but right now I'd take the old NUnit 2 behaviour of printing period characters to the console.
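For the record, the workaround Travis provides is, I believe, the travis_wait wrapper (timeout and assembly name illustrative):

```
travis_wait 30 nunit3-console MyTests.dll
```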
@craigfowler assuming none of the individual tests take more than 10 minutes, you could use the labels option for the NUnit console to output the test names as they are being run.
Wow, @rprouse. I can't believe I pored over the help text of nunit3-console so many times and every single time failed to spot the option that pretty clearly does exactly what I wanted.
D'oh. Yes - this does do exactly what I want for this scenario. Thanks.
I'll now go get my eyes tested.
@craigfowler it isn't just you. We have so many options that the help text has become a wall of text. Even my eyes glaze over and I can never remember all the options we have.
If there is quite a lot of console or TestContext.Write output, which is useful for failures, the successes end up making a lot of noise. To reduce the noise, the output can be directed to a file (with --out); the labels then don't show on the console window, as they go to the output file as well. I think in that case, a '.'-style progress indicator or something like it would be very helpful.
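For example (file and assembly names illustrative):

```
nunit3-console MyTests.dll --labels=All --out=TestOutput.log
```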
This issue has been inactive for a long time. Closing it now. Comments may still be made and we will respond to them.
@CharliePoole
I just ran into the same use case of wanting to keep track of a long test execution. The labels work, but it might be worth adding an option for separating the labels from the output of the tests themselves. That way, I can redirect test output to a file (or null if I don't care about it) and be able just to see test progress.
@amaltinsky
I'm having trouble getting your meaning. That may be because "labels" were originally created as a way to label the output from tests, so you would know which test produced which line of output. Back then, the only options were On and Off. Now things are more complicated, of course.
Output created by the test itself goes to two places... the XML result file and the runner, which is nunit3-console in this case. The runner can decide what to do with that output. For example, in a multi-threaded test run, the console runner only displays a label if the line of output comes from a different test from the preceding line. When it determines it should produce a label, it writes it to the console. As a result, the label stays with the output, even if it is re-directed. If we did anything else, the redirected file wouldn't be intelligible.
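A simplified sketch of that rule (hypothetical types and names, not the runner's actual code):

```csharp
using System;

class LabelingWriter
{
    private string _lastTest;

    // Writes a line of test output, prefixing a label only when the
    // producing test differs from the test that wrote the previous line.
    public void WriteOutput(string testName, string line)
    {
        if (testName != _lastTest)
        {
            Console.WriteLine($"=> {testName}");
            _lastTest = testName;
        }
        Console.WriteLine(line);
    }
}
```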
Could you explain in more detail what you would like to see happen, keeping the above comments in mind?
@CharliePoole
To avoid running into the XY problem I'll start by describing the use case: I'm interested in tracking the progress of some very long test executions containing many thousands of tests. Ideally, I want the test output to go (only) to the xml result file while being able to get some sense of the progress made by looking at the runner's output during the run.
Since nunit-console doesn't have a progress bar I tried (mis-)using labels to get a sense of the number of executed/remaining tests. However, the labels drown in a sea of test output. That's why I was interested in redirecting the test output to a file, and just having a label printed after every test.
@amaltinsky
Appreciate that! :-) I'll create a new issue that starts with your use case and tries to solve the problem.
Basically, I think we want to send something that looks like a label to StdErr, which the console displays immediately.
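A minimal sketch of the idea (illustrative only, not a committed design): progress goes to stderr, so redirecting stdout to a file leaves the progress visible on the console.

```csharp
using System;

class StdErrProgress
{
    static void Main()
    {
        for (int i = 1; i <= 3; i++)
        {
            // Test output -> stdout; can be redirected to a file.
            Console.Out.WriteLine($"output from Test{i}");

            // Progress label -> stderr; still shows on the console
            // even when stdout is redirected.
            Console.Error.WriteLine($"=> Test{i} finished");
        }
    }
}
```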
Of course, we can only tell you that your test run is progressing. That is, we can't tell you anything about a long-running individual test. The test would have to create that output itself.
Understood.
In my case individual test cases take no longer than a few minutes each. The problem is that there's a LOT of them. So just knowing that the test is progressing (plus the name of the current test or number of executed/remaining tests) should be plenty.