kiwigrid/gherkin-testcafe

Pretty Progress Printout

Opened this issue · 5 comments

It would be really nice to get a printout of the progress of the tests, like how cucumber prints it. I'm finding myself constantly trying to figure out which step is broken when using gherkin-testcafe.

e.g.

 Running tests in:
 - Chrome 78.0.3904 / Mac OS X 10.14.0

 fixture
..F-..

Failures:

1) Scenario: Signing up a new user # features/example.feature:5
   ✔ Before # features/support/hooks.js:49
   ✔ Given I open the sample app # features/step_definitions/example.js:8
   ✖ When I signup # features/step_definitions/example.js:12

Looks like this is where cucumber-js' implementation is. How hard would this be to add?

Hey @anisjonischkeit
I agree, this is very painful.

I investigated this some time back. I figured the best option would be to extend testcafe's reporter API. This would then also solve #7. The problem there is that the reporter API is very separate from the runner, and it's very hard to enhance it with additional data (like information about which steps were successful).

So if you're feeling lucky, I would be very happy about a PR in this direction!

Not sure if this is a new thing, but to me it looks pretty straightforward to add metadata to the tests (unless I'm missing something here).

Looking at the reporter-plugin docs though, it looks like I wouldn't be able to print anything between when a Cucumber Scenario starts and finishes (i.e. before/after a step). What do you think about adding additional reportStepStart/reportStepEnd methods (or maybe a single reportStepChange is better, since we might get race conditions if we fire these asynchronously) to the reporter object that we can call around each step?
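To make the proposal concrete, here is a minimal sketch of what such step-level hooks could look like. The method names (reportStepStart/reportStepEnd) and the shape of the step object are assumptions from this thread, not part of testcafe's actual reporter API:

```javascript
// Hypothetical sketch of the proposed step-level reporter hooks.
// Nothing here is testcafe's real API; names and shapes are assumed.
class StepAwareReporter {
  constructor(stream = process.stdout) {
    this.stream = stream;
  }

  // Would be called by the runner right before a step definition executes.
  reportStepStart(step) {
    this.stream.write(`  ▸ ${step.keyword} ${step.text}\n`);
  }

  // Would be called once the step settles; `error` is set on failure.
  reportStepEnd(step, error) {
    const mark = error ? '✖' : '✔';
    this.stream.write(`  ${mark} ${step.keyword} ${step.text}\n`);
  }
}
```

With hooks like these, the reporter could produce the cucumber-style per-step output shown at the top of this issue.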

Wow, you really gave this some thought. Thank you!

You are right: Out of the box, it is not possible to use the existing reporter API.
The additional methods you mentioned would be the direction I would want to go. But we have a problem there:
As I said, the reporter is very decoupled from the test runner. It actually communicates via an event emitter that is unreachable from the runner, so we have no (simple) means to reach it from there.

My idea was to

  • emit events when a step starts and when it ends (I would use two events here, as this makes the API more flexible), making sure meaningful metadata is passed to the handlers
  • extend the reporter class to handle those events (see how it's done for fixture start)
  • implement the reporter to print the result

Additional thoughts to your comment:

  1. In your first sentence, you mentioned putting metadata on the test. I would not do that. It uses the test object in a way it is not intended to be used. We are building tooling around tests, so we should be very careful about what we pass to the consumers of the APIs.
  2. Steps do not run in parallel. We always await a step before running the next one. So using reportStepStart and reportStepEnd is fine.

Alright, I've made a little start on this.

Regarding 1. are you just talking about passing specific logging data rather than the full step (so that the reporter can't mess with the original step object)? That sounds very sensible to me.

Regarding 2. I understand that the tests run sequentially; however, in the docs for the reporter, the handlers are all async (e.g. async reportTestStart), so I guess printing can be done asynchronously, so that the printing doesn't slow down the runner. As a first implementation, though, we could just do the printing synchronously (and thinking about it a little more, I don't think synchronising the async tasks will actually be a problem with a promise chain).
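The promise-chain idea could look something like this minimal sketch (generic JavaScript, not tied to testcafe's internals): each async reporter call is appended to one chain, so output stays in step order even though the handlers are asynchronous.

```javascript
// Serialize async reporter calls with a single promise chain.
function createSerializer() {
  let chain = Promise.resolve();
  return task => {
    // Chain on both success and failure so one bad write
    // does not block every later one.
    chain = chain.then(task, task);
    return chain;
  };
}

const enqueue = createSerializer();
const printed = [];

// Even if individual writes resolve at different speeds,
// they are flushed strictly in the order they were enqueued.
enqueue(() => new Promise(resolve => setTimeout(() => {
  printed.push('step 1 done');
  resolve();
}, 20)));
enqueue(() => new Promise(resolve => setTimeout(() => {
  printed.push('step 2 done');
  resolve();
}, 1)));
```

Here the second write would normally finish first (its timer is shorter), but the chain forces it to wait for the first.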

I think I'm starting to understand what you mean regarding the reporter being decoupled from the runner. It looks to me like the reporter is initialised somewhere in testcafe's codebase. Do you know if there is any way to get the specific instance that testcafe initialises, so that we get this (the object that has all the functions to write with)? I've started by making it so that initialising the reporter returns a singleton with our reportStepStart and reportStepEnd methods, but I'm not sure whether this will cause a problem anywhere else, i.e. whether anything might require multiple instances of a reporter (possibly running tests in parallel). Ideally, I'd like to just get the instance initialised in testcafe.
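For what it's worth, the singleton approach described here could be sketched as follows. All names are illustrative; nothing in this snippet is testcafe's real API:

```javascript
// Hypothetical reporter whose module always hands back the same instance,
// so runner-side code can call the step hooks on the exact object
// that was handed to testcafe.
class GherkinReporter {
  constructor() {
    this.steps = [];
  }
  reportStepStart(step) {
    this.steps.push({ step, status: 'running' });
  }
  reportStepEnd(step, error) {
    this.steps.push({ step, status: error ? 'failed' : 'passed' });
  }
}

let instance = null;

// Whoever asks for the reporter (testcafe or our runner) gets
// the same instance, so step state is shared between them.
function getReporter() {
  if (!instance) {
    instance = new GherkinReporter();
  }
  return instance;
}
```

The open question raised above still applies: if anything ever needs multiple reporter instances (e.g. parallel runs), a singleton like this would break down.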

Regarding 1. are you just talking about passing specific logging data rather than the full step (so that the reporter can't mess with the original step object)? That sounds very sensible to me.

No, I meant that I would not add the cucumber steps to the test or fixture metadata. I would be fine with passing the cucumber step to the reporter.

Regarding 2. I understand that the tests run sequentially; however, in the docs for the reporter, the handlers are all async (e.g. async reportTestStart), so I guess printing can be done asynchronously, so that the printing doesn't slow down the runner. As a first implementation, though, we could just do the printing synchronously (and thinking about it a little more, I don't think synchronising the async tasks will actually be a problem with a promise chain).

Very true. But this is something that the reporter needs to handle.

I think I'm starting to understand what you mean regarding the reporter being decoupled from the runner. It looks to me like the reporter is initialised somewhere in testcafe's codebase. Do you know if there is any way to get the specific instance that testcafe initialises, so that we get this (the object that has all the functions to write with)? I've started by making it so that initialising the reporter returns a singleton with our reportStepStart and reportStepEnd methods, but I'm not sure whether this will cause a problem anywhere else, i.e. whether anything might require multiple instances of a reporter (possibly running tests in parallel). Ideally, I'd like to just get the instance initialised in testcafe.

The reporters are initialized in the runner. We replace the runner in runner.js, so we do have direct access to the reporters.

Maybe we need to change the whole logic of how the compiler exposes the tests and how the runner consumes them to make this work nicely.
But if I'm being honest, I don't know the internal logic well enough (i.e. where the test is created, where it is consumed, where events are emitted, where they are consumed, ...) to point you in the right direction here. This would need some serious digging on my side.