ombulabs/failure_machine

Add automated testing and improve test files to have better data sets


Testing this library isn't super fun right now.

The 2 main issues are:

  1. It's manual
  2. The sample JSON output file is quite limited

We need a way to automatically verify that the classified output the library produces for a given failure input file is correct.

I think testing individual modules should be fairly easy, but the end-to-end test of this main functionality will require some thinking.
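For example, a module-level test could look something like the sketch below. `FailureMachine.Parser` and the fields it returns are just placeholder names for whatever the real parsing code exposes, not the current API:

```elixir
defmodule FailureMachine.ParserTest do
  use ExUnit.Case, async: true

  # FailureMachine.Parser and the fields asserted below are placeholders for
  # whichever module turns raw RSpec JSON into failure data; swap in the real API.
  test "pulls the exception class and message out of a failed example" do
    json = """
    {"examples": [{"status": "failed",
                   "exception": {"class": "NoMethodError",
                                 "message": "undefined method `name' for nil"}}]}
    """

    [failure] = FailureMachine.Parser.parse(json)

    assert failure.class == "NoMethodError"
    assert failure.message =~ "undefined method"
  end
end
```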

For one, we're working specifically with RSpec output. I do want to expand this lib to handle output from other testing frameworks (Elixir ones in particular, for obvious reasons), but for now I think we can focus on RSpec.
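For reference, a failure entry in RSpec's built-in JSON formatter output (`rspec --format json`) looks roughly like this. This is trimmed down and the values are made up, but the field names follow the formatter:

```json
{
  "examples": [
    {
      "id": "./spec/models/user_spec.rb[1:1]",
      "description": "returns the display name",
      "status": "failed",
      "file_path": "./spec/models/user_spec.rb",
      "line_number": 12,
      "exception": {
        "class": "NoMethodError",
        "message": "undefined method `name' for nil:NilClass",
        "backtrace": ["./app/models/user.rb:10:in `display_name'"]
      }
    }
  ],
  "summary_line": "1 example, 1 failure"
}
```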

We want the lib to read test output files and group the errors by their root cause. Right now we actually group them by message similarity, which isn't great, but I think the first steps are to:

  1. Come up with some mock test output file, or a few files
  2. Write out what we want the library to output
  3. Have our test framework compare the actual output with the expected output (roughly as sketched after this list)
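Something like this could cover step 3. `FailureMachine.classify/1` and the `*.expected.exs` convention are assumptions, not the current API; adjust to whatever names we settle on:

```elixir
defmodule FailureMachineTest do
  use ExUnit.Case, async: true

  @fixtures Path.expand("fixtures", __DIR__)

  # Assumptions: FailureMachine.classify/1 is the library's entry point, and each
  # input fixture ("*.json") has a sibling "*.expected.exs" holding the expected
  # classification as a plain Elixir term.
  test "classifies the sample RSpec failure file as expected" do
    input = File.read!(Path.join(@fixtures, "rspec_failures.json"))
    {expected, _binding} = Code.eval_file(Path.join(@fixtures, "rspec_failures.expected.exs"))

    assert FailureMachine.classify(input) == expected
  end
end
```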

Maybe there's a better way. Maybe we don't necessarily want to test the formatted output, but just test how the data is grouped. That way we can avoid string comparisons and leave output testing for when we decide to test our formatting options.
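If we go that route, the assertions could target the grouped data directly, something like the sketch below. Again, this assumes a `classify/1` that returns groups carrying a `failures` list, and the counts depend entirely on what we put in the fixture:

```elixir
defmodule FailureMachine.GroupingTest do
  use ExUnit.Case, async: true

  # Sketch only: asserts on the shape of the grouping, not on any formatted strings.
  test "failures with the same root cause land in the same group" do
    input = File.read!(Path.expand("fixtures/rspec_failures.json", __DIR__))

    groups = FailureMachine.classify(input)
    sizes = groups |> Enum.map(&length(&1.failures)) |> Enum.sort()

    assert sizes == [1, 3]
  end
end
```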