sensu-plugins/sensu-plugin-python

Unit Testing SensuPluginCheck Implementations is Difficult

runewake2 opened this issue · 12 comments

The current implementation of this plugin uses the constructor to create and run the Sensu check. This makes it very difficult to write unit tests against parts of your health check: it does not appear possible to test a single method of your subclass in isolation.

I would like to be able to instantiate and test a SensuPluginCheck without executing my entire health check.

For example, consider this health check:

from sensu_plugin import SensuPluginCheck

class SensuHealthCheckExample(SensuPluginCheck):
    def collect_data(self):
        return "this is some data from my health check"

    def verify_data(self, data):
        return True

    def setup(self):
        self.some_argument = True

    def run(self):
        data = self.collect_data()
        if self.verify_data(data):
            self.ok("Everything is going to be alright")
        else:
            self.critical("Everything is NOT alright")

With the current implementation of this plugin, calling __init__ on SensuHealthCheckExample not only creates the instance but also executes the full check. This makes it very difficult, for example, to put my verify_data method under test: I cannot isolate the test down to a single unit.
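To make that concrete, here is the sort of test I would like to write but currently cannot (a rough sketch, assuming pytest):

def test_verify_data():
    # Merely constructing the check runs everything: __init__ parses
    # sys.argv, calls run() and ultimately exits the process, so the
    # assert below never runs as an isolated unit test.
    check = SensuHealthCheckExample()
    assert check.verify_data("this is some data from my health check") is True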

What is the best way to add unit tests to these checks, so that the more complex ones can be tested and maintained more easily?

I agree, and I was facing the same issue when writing tests for the handler implementation.

I personally think the best thing to do is to split the main logic off into a separate method, possibly standardized across all of the different classes, e.g. run()?

I'm not up to speed on all parts of the Python implementations yet, but I'll be trying to catch up over the next few days. I know we've been striving for feature parity with the Ruby counterparts, even to the point of keeping the code bases reading very similarly, but I think it's best that we do what makes the Python implementation easiest to use and then we can get them to follow suit ;)

Obviously this is a breaking change, so I'll have to see what the best course of action is, besides thoroughly documenting it. @barryorourke

There is already a run() method in the SensuPlugin class, which is called from __init__.

Removing the call to self.run() from __init__ would create a breaking change, but at the same time I think it makes a lot more sense and will make testing a lot easier.

As I've said on Slack, this seems too easy, and my head is slightly mushy today. I'll have a proper think about this tonight after the kids are asleep.

There is more than just run that might cause problems for testing. As an example, __init__ also uses argparse to extract arguments; ideally there would be an option to inject those arguments instead. That will likely complicate this.
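For comparison, the usual argparse pattern that would make injection possible is to thread an optional argv through to parse_args (a general sketch of the pattern, not this plugin's current code):

import argparse

def parse_args(argv=None):
    # argparse falls back to sys.argv[1:] when argv is None, so tests
    # can pass their own argument list without touching the real CLI.
    parser = argparse.ArgumentParser()
    parser.add_argument('--warn', type=int, default=80)  # illustrative option
    return parser.parse_args(argv)

# Production: parse_args() reads sys.argv.
# Tests: parse_args(['--warn', '50']) needs no command line at all.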

I also do not know yet how atexit works in a testing environment.

The trick seems to be drawing a clean line between a plugin executed from the terminal or as a check and one included in another module/test. I haven't been able to figure out where to draw that line yet though.

I've created #42, as I've realised that the codebase's intentions need to be clearer before we'll have something that's easy to test.

Yeah, you can't pass arguments to the object or to the run method the way this module is written. And it's difficult to mock the module's exit methods themselves.

I mean, you can kinda avoid passing arguments with global vars... ew, but that's fine. It's mocking the exit methods (ok/critical/warning) before instantiation (as the run method is called on instantiation) that proves really difficult. I suppose you could null out the run method entirely so your check function is never called, then instantiate and mock the check class... THEN call the now-detached method that performs the check.

This is ugly.
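In code, that dance looks roughly like this (a sketch using unittest.mock against the example class at the top of this issue; it assumes __init__ can still get through its argument parsing in the test environment):

from unittest import mock

# Neutralise run() so instantiation no longer executes the check...
with mock.patch.object(SensuHealthCheckExample, 'run', lambda self: None):
    check = SensuHealthCheckExample()

# ...then exercise the logic directly, detached from the plugin machinery.
assert check.verify_data(check.collect_data()) is True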

So I am certainly not very well versed on the Python side. Assuming (we know what they say about people making assumptions... including that one) that it's implemented anything like the Ruby one with at_exit, you should be just fine and there should be no issues invoking it via the CLI or from other testing code. This holds as long as you never call exit(), warn(), critical(), ok(), unknown(), or similar in your library functions, and always return a status plus a message. Let the actual check, mutator, handler, etc. decide how it wants to execute and handle errors. Libraries should never exit, only return; in other words, run or handler functions are the only functions that should ever exit.
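For illustration, that split looks something like this (a generic sketch: check_disk_usage and its thresholds are made up for the example, and the status numbers follow Sensu's usual exit-code convention):

# Library-style helper: computes a result and returns it, never exits.
def check_disk_usage(usage_pct, warn_at=80, crit_at=90):
    message = "disk is {0}% full".format(usage_pct)
    if usage_pct >= crit_at:
        return 2, message   # 2 == critical
    if usage_pct >= warn_at:
        return 1, message   # 1 == warning
    return 0, message       # 0 == ok

# Only the check's run() maps that tuple onto the exiting helpers
# (self.ok / self.warning / self.critical), so tests can call
# check_disk_usage directly and assert on the return value.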

I had a similar discussion here: sensu-plugins/sensu-plugins-aws#266

https://gist.github.com/openfly/ec5c6e9d670e5386cb9b54df4d3f26d0

This is the best I could do; it sorta kinda works.

Yeah, the sys.exit calls can be captured by asserts, so that's not so bad. It roughs up root-cause analysis since the check still exits prematurely, but it works.
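For example, with pytest and the example class from the top of this issue (assuming the check exits with status 0 when it passes):

import pytest

def test_check_exits_ok():
    # The exit helpers end in sys.exit, so catch the SystemExit
    # raised during instantiation and assert on its status code.
    with pytest.raises(SystemExit) as excinfo:
        SensuHealthCheckExample()
    assert excinfo.value.code == 0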

Biggest issue I have now is argument passing into the run method. I am using a global at the moment and I dislike that immensely.

Sorry for dragging my heels, there's been a bit of a hectic change-up at work.

If I can remember back to before Christmas: I have a branch on my laptop that emulates the Ruby behaviour, allowing you to set a class property before instantiation that stops .run() auto-running, just for testing purposes. I was holding off on it until I had better coverage overall, as the current codebase could be fragile without tests. But I could add a PR for just this and tests covering it.

> But I could add a PR for just this and tests covering it.

I'd be happy to see that go in now and add the remaining work later when you get a chance.

I've pushed a release to PyPI which includes a lot of improvements to testing.

It is now possible to instantiate both Plugins and Handlers without them running the full check/handler; you can do this by passing autorun=False to the class.
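Against the example class at the top of this issue, a unit test can now look roughly like this (a sketch; autorun=False is the only new API used, the rest is the example's own code):

# Instantiation no longer executes the check...
check = SensuHealthCheckExample(autorun=False)

# ...so individual methods can finally be tested in isolation.
assert check.verify_data(check.collect_data()) is True

# The full check can still be driven explicitly when wanted:
# check.run()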

See https://github.com/sensu-plugins/sensu-plugin-python/blob/master/sensu_plugin/tests/test_check.py for how I am testing.

It's worth mentioning that I've only really got my head around pytest this week, so if you spot anything that could be improved, please let me know or create a PR.