krakenjs/nemo

Improvements (suggestions)

danielsoner opened this issue · 3 comments

Hi,
I've been working with Nemo for a while and really enjoy it. Here is some feedback:

  • JavaScript configuration instead of JSON, so it is easier to compute values dynamically and avoid repetition across JSON config files (rough sketch after this list)
  • The textEquals assertion shows only one of the actual and expected values instead of both, so failures are harder to debug
  • When I run tests in parallel by "file" and one test has it.only, I would expect only that single test to run. Currently it runs all files
  • When running tests in parallel, it creates a report file for each test file. It would be easier to just open a single report file with all results
  • I use async / await heavily, and when a test fails the stack trace usually doesn't show where the failure happened, which makes debugging much harder. I typically see this:
    [screenshot: a stack trace that doesn't point back to the failing test]
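To illustrate the config point, something along these lines is what I have in mind. This is only a rough sketch; the property names (driver, data) are illustrative and may not match Nemo's actual config schema:

```js
// nemo.config.js - rough sketch only; property names are illustrative
// and may not match Nemo's actual config schema.
const baseUrl = process.env.TEST_BASE_URL || 'https://www.example.com';

module.exports = {
  driver: {
    browser: process.env.TEST_BROWSER || 'chrome'
  },
  data: {
    baseUrl,
    // values can be derived instead of copy/pasted into every JSON profile
    loginUrl: `${baseUrl}/login`
  }
};
```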

+1 to JavaScript config over JSON

grawk commented

Regarding JS config, please see #49. Feel free to install 4.9.0-alpha.1 and let me know if you see any issues.

grawk commented

Taking all of these and commenting:

  • JavaScript configuration instead of JSON, so it is easier to compute values dynamically and avoid repetition across JSON config files
  • The textEquals assertion shows only one of the actual and expected values instead of both, so failures are harder to debug
    • textEquals is a WebDriver method, so this would probably have to be resolved upstream in the Selenium project (a workaround sketch follows this list)
  • When I run tests in parallel by "file" and one test has it.only, I would expect only that single test to run. Currently it runs all files
    • It is not clear how to resolve this, since the parallel processes are separate mocha instances. I recommend just removing the -F option when you want to run a single test. That will work.
  • When running tests in parallel, it creates a report file for each test file. It would be easier to just open a single report file with all results
    • This could potentially be resolved, but it would be a significant effort. Makes sense to file a new issue to discuss/track (a rough merge-script sketch follows this list)
  • I use async / await heavily, and when a test fails the stack trace usually doesn't show where it happened, which makes debugging much harder. I typically see this (see the screen grab above)
    • This could potentially be resolved. Investigation needed. Makes sense to file a new issue to discuss/track (a rough workaround sketch follows this list)
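In the meantime, one workaround for the textEquals point is to read the element text directly and compare it with a plain assertion, so the failure message carries both values. A rough sketch; assertTextEquals is a hypothetical helper, not part of Nemo or WebDriver:

```js
const assert = require('assert');

// Hypothetical helper: read the element's text via the selenium-webdriver
// WebElement API, then compare with assert so both values show on failure.
async function assertTextEquals(element, expected) {
  const actual = await element.getText();
  assert.strictEqual(actual, expected,
    `expected text "${expected}" but got "${actual}"`);
}

// usage in a test (assumes nemo.driver is the selenium-webdriver instance):
// const el = await nemo.driver.findElement({ css: '.welcome' });
// await assertTextEquals(el, 'Welcome back');
```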
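For the single-report request, until something lands in Nemo itself, a small post-processing step could stitch the per-file reports together. The sketch below assumes the parallel runs write mocha JSON-reporter output into a reports/ directory; the directory name and report shape are assumptions:

```js
// merge-reports.js: combine per-file mocha JSON reports into one file.
const fs = require('fs');
const path = require('path');

const dir = process.argv[2] || 'reports';
const merged = { stats: { tests: 0, passes: 0, failures: 0 }, tests: [] };

fs.readdirSync(dir)
  .filter(f => f.endsWith('.json') && f !== 'combined.json')
  .forEach(f => {
    const report = JSON.parse(fs.readFileSync(path.join(dir, f), 'utf8'));
    merged.stats.tests += report.stats.tests;
    merged.stats.passes += report.stats.passes;
    merged.stats.failures += report.stats.failures;
    merged.tests = merged.tests.concat(report.tests);
  });

fs.writeFileSync(path.join(dir, 'combined.json'), JSON.stringify(merged, null, 2));
console.log(`merged ${merged.stats.tests} tests into ${dir}/combined.json`);
```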
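For the async / await stack traces, a workaround until this is investigated could be to capture a stack trace in the test file before awaiting a driver call and attach it to any error that comes back, so the failure points at the calling line. A rough sketch; step is a hypothetical helper:

```js
// Hypothetical helper: record where the step was called from, and append
// that location to any error thrown while awaiting the driver promise.
async function step(promise) {
  const marker = new Error('step origin'); // stack points at the test line
  try {
    return await promise;
  } catch (err) {
    err.stack += '\nCalled from:\n' + marker.stack;
    throw err;
  }
}

// usage in a test (assumes nemo.driver is the selenium-webdriver instance):
// await step(nemo.driver.get('https://www.example.com'));
// await step(nemo.driver.findElement({ css: '.welcome' }));
```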