sglanzer-deprecated/ember-cli-blanket

v0.8.0 Only selectively generates lcov reports

Closed this issue · 37 comments

For my project, blanket only generates the lcov.dat file about half of the time I run it. It looks like testem is closing the browser session before blanket has a chance to send test results back to the coverage server.

Anyone else seeing this?

I'm seeing something fairly similar (not sure it's identical, so I can move my comments to a different issue if it turns out this is different).

Background:
Ember 2.3
Ember-CLI 2.3.0-beta.1

When running on the command line (not via the browser) to mimic our CI server, coverage runs properly and I get results back when filtering my tests (if I run few enough tests):

ember t --filter=service

However, running a normal ember test results in this:

$ ember t

<much snippage>

ok 989 PhantomJS 1.9 - JSHint - unit/transforms: unit/transforms/utc-test.js should pass jshint
Error: request aborted
    at IncomingMessage.onAborted (/home/ubuntu/ember-client/node_modules/ember-cli-blanket/node_modules/body-parser/node_modules/raw-body/index.js:269:10)
    at IncomingMessage.emit (events.js:104:17)
    at abortIncoming (_http_server.js:279:11)
    at Socket.socketOnEnd (_http_server.js:393:7)
    at Socket.emit (events.js:129:20)
    at _stream_readable.js:908:16
    at process._tickCallback (node.js:355:11)
Error: request aborted
    at IncomingMessage.onAborted (/home/ubuntu/ember-client/node_modules/ember-cli-blanket/node_modules/body-parser/node_modules/raw-body/index.js:269:10)
    at IncomingMessage.emit (events.js:104:17)
    at abortIncoming (_http_server.js:279:11)
    at Socket.socketOnEnd (_http_server.js:393:7)
    at Socket.emit (events.js:129:20)
    at _stream_readable.js:908:16
    at process._tickCallback (node.js:355:11)
Error: request aborted
    at IncomingMessage.onAborted (/home/ubuntu/ember-client/node_modules/ember-cli-blanket/node_modules/body-parser/node_modules/raw-body/index.js:269:10)
    at IncomingMessage.emit (events.js:104:17)
    at abortIncoming (_http_server.js:279:11)
    at Socket.socketOnEnd (_http_server.js:393:7)
    at Socket.emit (events.js:129:20)
    at _stream_readable.js:908:16
    at process._tickCallback (node.js:355:11)

Update:

Unsure how to read this (possibly related to whether a container exists behind the scenes?). If I run just unit tests, I'm seeing failures. But if I run acceptance tests and unit tests, or integration tests and unit tests, then things seem to work fine.

@arjansingh Does this lead to any more data on your end?

ember t --filter='/(acceptance|unit)/i'

Note: found out that filtering with regex worked here: qunitjs/qunit#906

I have a similar issue - when I log the whole error https://github.com/sglanzer/ember-cli-blanket/blob/master/index.js#L14 rather than just the stack, I get:

{ [Error: request aborted]
  code: 'ECONNABORTED',
  expected: 660743,
  length: 660743,
  received: 524288,
  message: 'request aborted',
  status: 400,
  statusCode: 400,
  type: 'request.aborted' }
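A side note on those numbers (my own arithmetic, not from the addon): as I understand raw-body's error fields, `expected` is what the Content-Length header promised and `received` is what actually arrived - and `received` is exactly 512 KiB, which suggests the socket was cut off at a buffer boundary partway through the body.

```javascript
// Quick check on the error fields above (plain Node, no dependencies).
const expected = 660743; // bytes promised by Content-Length (per raw-body)
const received = 524288; // bytes actually read before the abort

console.log(received === 512 * 1024); // exactly 512 KiB arrived
console.log(expected - received);     // 136455 bytes never made it
```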

My guess is that when the coverage file gets above a certain size, the express server is shut down before the coverage middleware (more specifically bodyParser) completes parsing the coverage POST request data. It probably happens intermittently because it's a race condition, and it would only hit your project once it gets above a certain size (our app has >2,500 tests).

Am investigating, but currently don't have any idea how to fix the issue.

My guess is that when the coverage file gets above a certain size the express server is shut down before the coverage middleware (more specifically bodyParser), completes ..

I added some more logging, and testem stops the express server after that error - so that theory is wrong.

I suspect that the 50mb limit set in our middleware is being superseded by another bodyParser in the middleware stack. Are you using any other middleware addons?

No other middlewares (I verified by putting some logging in https://github.com/ember-cli/ember-cli/blob/8a5fc39aef990b32382f260c935a90e95d72359c/lib/tasks/test.js#L33)

I also logged the limit in bodyParser and it was set to 50mb

@adamjmcgrath Any chance you could run your tests in Chrome and grab some stats from the network tab on /write-blanket-coverage? One thing I noticed today when debugging a qunit test (we use mocha here) is that it looked like we're posting multiple times.

We use mocha as well and there's only one POST to /write-blanket-coverage

Does it show the post size? E.g. is size even an issue, or a red herring?

The size is 2.4mb. The error from bodyParser is confusing:

expected: 660743,
length: 660743,
received: 524288,

Seems to suggest the file is ~660kb. I'll keep digging and let you know if I come up with anything

FWIW, I was seeing three POSTs when I ran into this, I'll check back with more details when I work on that some more ...

That is what I was seeing as well (not the size issue) - it looks like we're listening for QUnit.done(), but that's being called three times.

Not sure if something's changed in qunit, but it's weird.

So Qunit is unreliable? That would explain a lot regarding the intermittency. I'll also try and dig a little deeper in the next couple weeks.

I just bumped into this issue as well while trying to upgrade ember-cli and ember-cli-blanket.
I also see three aborted requests using qunit.

"ember-cli": "v2.4.3"
"ember-cli-blanket": "0.9.1",
"testem": "1.6.0"

Looking at the logs I can see a couple of things:

  • If I run tests with a filter (~50 tests), coverage runs before tests.
  • If I run the full test suite (~2000 tests) coverage runs after tests and testem closes the session before the POST finishes.

Logs here:
https://gist.github.com/calderas/3374f43fde46f00005b2d2c45da78057

@calderas et al, I'm looking at a change that reworks how we end the tests - will advise with a PR that you can try before we release

@adamjmcgrath if you run ember test --server (which by definition will keep the server running) does it complete?

My thoughts last night didn't pan out - so adding browser-side reporting for error conditions when the ajax call is fired is our best hope for improved diagnostics.

ember test --server works fine for me.

Ok, cool - I'm prepping a PR now with some additional debug as well as the fix for uncovered/non-imported modules.

@adamjmcgrath The PR in #128 includes a new debug flag as well as some diagnostics about the post length, error handling, etc. It would be ideal if you could run against that branch and turn on the debugCLI: true flag in blanket-options.

If that reveals some additional info (or even gives a sense of how big the post is), that would be helpful.

Sure, I can run it and share the output - I'll try to do it today, but it may be early next week.

Incidentally I fixed the issue I was having by changing the ajax request that posts the coverage to being synchronous (added the async: false option to https://github.com/sglanzer/ember-cli-blanket/blob/aa77828bfcebb9eed738c5589ef9ea01bb118652/vendor/start.js#L6)

This resolved the issue for me. My hunch was that if the request was synchronous, the server wouldn't shut down until it was complete, which appeared to work. Apologies - I didn't share it with this thread because I thought my issue was separate from the one that qunit users were reporting here. Also, my company uses a fork of this project which has diverged quite a bit from the original.
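To make the change above concrete, here is a sketch of the option being flipped (assumption: the coverage POST goes through jQuery.ajax, as the linked start.js suggests; the object below is illustrative and shown as plain data so it stands alone):

```javascript
// Illustrative jQuery.ajax-style options for the coverage POST
// (not the addon's actual code).
const coverageRequest = {
  type: 'POST',
  url: '/write-blanket-coverage',
  contentType: 'application/json',
  data: '{}',   // the serialized coverage payload would go here
  async: false, // the fix: jQuery defaults to true, which lets the browser
                // declare the run finished while the upload is in flight
};

console.log(coverageRequest.async); // false
```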

@adamjmcgrath - wow, that's super helpful! If I can get a reproducible case of it not writing to completion, we can test async: false.

I'd be interested in your fork.

Confirmed async:false does the trick !

I’ll update my PR.

Did you happen to see the debug line with the size of the posted data? Curious to try and replicate.

Yes, this is what I get:

[ember-cli-blanket]:data size: 1475849
[ember-cli-blanket]:done writing coverage
[ember-cli-blanket]:data size: 1475849
[ember-cli-blanket]:done writing coverage
[ember-cli-blanket]:data size: 1475849
[ember-cli-blanket]:done writing coverage

Thanks - that helps.

We need to add a guard against the triple write - that doesn’t happen with mocha AFAIK - so I’ll have to fire up a qunit example to debug that.
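Such a guard could be as simple as a latch around the POST (an illustrative sketch, not the addon's actual code; the names are made up):

```javascript
// Ensure the coverage POST fires at most once, even if the test
// framework's done() callback fires several times.
let posted = false;

function postCoverageOnce(post) {
  if (posted) return false; // later done() callbacks are ignored
  posted = true;
  post();                   // send /write-blanket-coverage exactly once
  return true;
}

// Simulating QUnit.done() firing three times:
let posts = 0;
postCoverageOnce(() => posts++);
postCoverageOnce(() => posts++);
postCoverageOnce(() => posts++);
console.log(posts); // 1
```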

The PR #128 now includes async: false - I think it’s ready to go.

-jeff

I finally dug into this tonight. The root cause, I believe, is testem, specifically: testem/testem#616

Basically, on test completion testem indiscriminately kills every open socket to its server, which explains why large coverage reports (which might still be transmitting) are often cut off mid-transfer.

I tested locally and also agree that the solution of using blocking Ajax calls (async: false) should do the trick. Transmitting the results synchronously effectively prevents the browser from declaring the test run done until the report is fully transmitted.

Great find @adamjmcgrath!

Confirmed, #128 worked on our end as well. I believe this issue can be closed! 👍

Well... Let's wait for the PR to be merged. 😊

Arjan

Believe it was 4 days ago ;-)

Let's wait until 0.9.2 is published on npm then? I just want to make sure this bad boy is live.

Waiting on @sglanzer to cut a new build.

-jeff

Sorry, I'll handle that tomorrow - we need to figure out the NPM publishing permissions so that I'm not a bottleneck on this

Published 0.9.2

Thanks @sglanzer

👍

Arjan

Any objections to me closing this (since I filed it)?

Thanks. That would be awesome

-jeff
