Servers are sometimes not being closed properly
Closed this issue · 5 comments
Error: listen EADDRINUSE: address already in use :::3000
at Server.setupListenHandle [as _listen2] (net.js:1279:14)
at listenInCluster (net.js:1327:12)
at Server.listen (net.js:1414:7)
at LdfResponseMocker.<anonymous> (/home/travis/build/comunica/comunica/node_modules/rdf-test-suite-ldf/lib/testcase/ldf/mock/LdfResponseMocker.js:31:62)
Required for merging comunica/comunica#501
Is it a possibility that the different test manifests run in parallel? That could cause this error. Maybe we could assign a separate port to each manifest. As far as I know, all servers should be torn down and their ports released.
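As a hypothetical sketch of that separate-port idea (none of these names exist in rdf-test-suite-ldf; a base port plus the manifest's index would keep suites from colliding even if they did run in parallel):

```javascript
// Illustrative only: derive a distinct port per manifest from a base port.
const BASE_PORT = 3000;

function portForManifest(manifestIndex) {
  return BASE_PORT + manifestIndex;
}

// Each manifest's mock server would then bind its own port.
const manifests = ['sparql-manifest.ttl', 'tpf-manifest.ttl'];
manifests.forEach((m, i) => {
  console.log(`${m} -> port ${portForManifest(i)}`);
});
```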
Is it a possibility that the different test manifests run parallel?
That's unlikely. Travis executes all commands in sequence.
It's the first time I've seen this error pop up, so it's probably a rare edge case somewhere, which unfortunately makes it harder to fix.
I had another look at the output, and I saw the following output from the integration tests that were run before:
yarn run v1.15.2
$ rdf-test-suite-ldf spec/sparql-engine.js https://comunica.github.io/manifest-ldf-tests/sparql/sparql-manifest.ttl -d 200000 -c ../../.rdf-test-suite-ldf-cache/
info: Caching enabled in /home/travis/build/comunica/comunica/.rdf-test-suite-ldf-cache/
info: Loading objectloader for manifest creation
info: Importing manifest
info: Running manifest
info: Run test: https://comunica.github.io/manifest-ldf-tests/sparql/sparql-manifest.ttl#directors01
info: Run test: https://comunica.github.io/manifest-ldf-tests/sparql/sparql-manifest.ttl#software02
info: Run test: https://comunica.github.io/manifest-ldf-tests/sparql/sparql-manifest.ttl#simple03
info: Run test: https://comunica.github.io/manifest-ldf-tests/sparql/sparql-manifest.ttl#common04
info: Run test: https://comunica.github.io/manifest-ldf-tests/sparql/sparql-manifest.ttl#missing05
info: Run test: https://comunica.github.io/manifest-ldf-tests/sparql/sparql-manifest.ttl#schrodinger06
info: Run test: https://comunica.github.io/manifest-ldf-tests/sparql/sparql-manifest.ttl#extends07
info: Run test: https://comunica.github.io/manifest-ldf-tests/sparql/sparql-manifest.ttl#belgium08
✔ SELECT - DBpedia TPF (https://comunica.github.io/manifest-ldf-tests/sparql/sparql-manifest.ttl#directors01)
✔ SELECT - DBpedia TPF (https://comunica.github.io/manifest-ldf-tests/sparql/sparql-manifest.ttl#software02)
✔ SELECT - DBPedia TPF & SPARQL (https://comunica.github.io/manifest-ldf-tests/sparql/sparql-manifest.ttl#simple03)
✔ SELECT - DBPedia TPF & Ruben(s)' FILE (https://comunica.github.io/manifest-ldf-tests/sparql/sparql-manifest.ttl#common04)
✔ SELECT - DBPedia TPF (https://comunica.github.io/manifest-ldf-tests/sparql/sparql-manifest.ttl#missing05)
✔ ASK - DBPedia TPF (https://comunica.github.io/manifest-ldf-tests/sparql/sparql-manifest.ttl#schrodinger06)
✔ SELECT - LOV TPF (https://comunica.github.io/manifest-ldf-tests/sparql/sparql-manifest.ttl#extends07)
✔ SELECT, OPTIONAL - Geonames TPF (https://comunica.github.io/manifest-ldf-tests/sparql/sparql-manifest.ttl#belgium08)
✔ 8 / 8 tests succeeded!
Done in 46.06s.
lerna ERR! yarn run integration exited 1 in '@comunica/actor-init-sparql-rdfjs'
lerna ERR! yarn run integration stdout:
So this means that even though these tests passed, the process exited with status code 1, which means some kind of error was thrown (but not shown).
This may be related to a server shutdown that failed.
Ignore my last comment, I was misinterpreting the Travis logs.
The previous integration execution exited properly with code 0.
If I understand the logic in LdfResponseMockerFactory correctly, then this situation should not even be possible, right @ManuDeBuck? Because tcp-port-used is used to wait until that port is free, and it throws an error otherwise. But the error here is thrown because LdfResponseMocker cannot bind to the very port that should already have been checked as free.
So parallel execution does indeed sound like the only possible cause (which shouldn't be happening), unless some other process was also started on port 3000 (which also shouldn't be the case).
I'll look into this further next week.
Ok, I just discovered that lerna does run some things in parallel, even though its documentation suggests it shouldn't.
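If lerna's parallelism turns out to be the culprit, its `--concurrency` flag can force package scripts to run one at a time, so two integration suites never compete for port 3000 simultaneously:

```shell
# Run the integration script in one package at a time.
lerna run integration --concurrency 1
```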