kensho-technologies/graphql-compiler

Make test builds fail if warnings are encountered during tests

Opened this issue · 8 comments

The test build should be completely clean. Warnings don't block merges, so we don't notice them when they pop up. If they did block merging, we'd notice them and fix them (ideal case) or at least intentionally make the decision to suppress them (last resort).

Logging errors should maybe also raise.

Yes, but we may need to be a bit more discriminating there. I definitely agree logger.error() from graphql_compiler or a submodule should raise unless explicitly suppressed. It's less clear-cut with dependency libraries — e.g. if sqlalchemy decided to retry a request because a connection broke, I don't think our tests should fail.
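One way to sketch that behavior (the RaiseOnErrorHandler name and the idea of attaching it only to the graphql_compiler logger are my own assumptions, not anything the repo currently does) is a test-only logging handler that raises. Because it is installed on our package's logger rather than the root logger, error-level records from dependencies like sqlalchemy are unaffected:

```python
import logging

class RaiseOnErrorHandler(logging.Handler):
    """Hypothetical test-only handler: turn logger.error() calls into failures."""

    def emit(self, record):
        # Exceptions raised in emit() propagate out of the logging call,
        # so the test that triggered the log line fails.
        if record.levelno >= logging.ERROR:
            raise AssertionError(f"logger.error called: {record.getMessage()}")

# Attach only to our package's logger, not the root logger.
logger = logging.getLogger("graphql_compiler")
logger.addHandler(RaiseOnErrorHandler())

try:
    logger.error("something went wrong")
    raised = False
except AssertionError:
    raised = True

print(raised)  # True
```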

@bojanserafimov perhaps open a separate issue for it? I suspect the two will be addressed separately, since different mechanisms are in play.

I closed the pull request above because it doesn't seem worth the trouble to fix this at the moment.

I am sharing what I learned in case someone else wants to address this.

  1. There are two ways that I know of to turn warnings into errors: pytest -Werror or python -Werror -m pytest. I am not sure what the difference is, but I think that the latter command also errors on warnings raised while pytest itself starts up, while the former only applies once the tests begin.
  2. Python by default hides certain warnings, so you might get an error from the commands above even if you don't see any warning output when running pytest normally. To see all the warnings, run pytest -W "default".
  3. When warnings are treated as errors, the resulting exceptions can be caught by existing try/except blocks. (This can lead to some very unexpected output when running pytest -Werror: it might not be obvious at all that a failure was caused by a warning that got turned into an exception.)
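To illustrate point 3, here is a minimal sketch (deprecated_helper is a made-up function) of how an existing except block can silently swallow a warning once warnings are promoted to errors, the way -Werror does:

```python
import warnings

def deprecated_helper():
    warnings.warn("deprecated_helper is going away", DeprecationWarning)
    return 42

with warnings.catch_warnings():
    # Promote every warning to an exception, as -Werror does.
    warnings.simplefilter("error")
    try:
        result = deprecated_helper()
    except Exception:
        # The DeprecationWarning now arrives here as an exception, so the
        # fallback path runs and no warning is ever reported.
        result = None

print(result)  # None
```

The caller's output changes (None instead of 42) with no visible warning anywhere, which is exactly the kind of confusing failure described above.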

There are currently two warnings raised when one runs pytest -W default. One of them is an ImportWarning coming (I think) from somewhere in the neo4j_client, and the other is a ResourceWarning raised because a socket somehow related to OrientDB is not closed.
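For context on the second warning: in CPython, a socket that is dropped without being closed reliably emits a ResourceWarning when the object is finalized. A minimal reproduction (leak_socket is a made-up function):

```python
import gc
import socket
import warnings

def leak_socket():
    s = socket.socket()  # opened but never closed

with warnings.catch_warnings(record=True) as seen:
    warnings.simplefilter("always")
    leak_socket()  # the socket is finalized when the function returns...
    gc.collect()   # ...or at the latest here

print(any(issubclass(w.category, ResourceWarning) for w in seen))  # True
```

Running the suite under python -X tracemalloc=25 makes such warnings include the traceback where the socket was allocated, which helps track down where the leak actually is.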

Can you post the warnings that appear? It's impossible to make informed decisions on this without knowing what they are.

Assuming the warnings are harmless, why not just suppress them in particular and make everything else into an error? Both pytest and python support per-warning-type rules, as far as I can tell.
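A quick demonstration of those per-warning-type rules using the stdlib warnings module (the same filter model underlies both python -W and pytest's warning options); the specific choice of letting ResourceWarning through is just an example:

```python
import warnings

with warnings.catch_warnings():
    # Error on everything by default...
    warnings.simplefilter("error")
    # ...but merely show ResourceWarning. Filters are matched front-to-back
    # and filterwarnings() inserts at the front, so this more specific
    # rule wins over the blanket "error" rule.
    warnings.filterwarnings("default", category=ResourceWarning)

    try:
        warnings.warn("unclosed socket", ResourceWarning)
        resource_raised = False
    except ResourceWarning:
        resource_raised = True

    try:
        warnings.warn("something else", UserWarning)
        user_raised = False
    except UserWarning:
        user_raised = True

print(resource_raised, user_raised)  # False True
```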

Closing and abandoning the PR overall feels a bit like throwing out the baby with the bathwater right now.

I think that the first error is related to neo4j because of the following output:
[screenshot omitted: Screenshot from 2020-03-16 21-04-58]

The error does not look like it is caused by treating warnings as errors, but it is: just running pytest raises no errors.

Ok, the ResourceWarning is legit and we should track it down and fix it, and the other two seem reasonable to suppress. If I remember correctly, you can specify multiple -W options with different handling for different subtrees of the Warning inheritance hierarchy.
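For reference, a sketch of what that invocation could look like (the specific warning categories are assumptions based on the ones discussed above); with multiple -W options, the last matching one takes precedence:

```shell
# Escalate all warnings to errors, except show (rather than fail on) the
# known ImportWarning and ResourceWarning while they are investigated.
python -W error -W default::ImportWarning -W default::ResourceWarning -m pytest
```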

Also, if you have cycles, I'd suggest reporting the neo4j issue because it seems like a legitimate bug. I'm not sure what's going on with the first one, and it's probably not super worth looking into right now.