tiny-pilot/tinypilot

Spike project: unit testing for bash scripts

jotaen4tinypilot opened this issue · 1 comment

In the spirit of potential innovation (a topic we discussed in a recent dev meeting, related to us introducing hurl in the same spirit), I wanted to bring up the idea of writing automated tests for bash scripts.

I’ve used a bash testing utility called bats in one of my side-projects (more on that below), and found that it works somewhat nicely. (Well, as nice as it gets on bash.)

Tests with bats are basically shell script files that follow a specific notation/structure. Say your tool under test is called my-cmd, and you want to test what invoking my-cmd --some-flag yields; you'd write:

#!/usr/bin/env bats

check_my_cmd() { # @test
	run my-cmd --some-flag
	[[ "${status}" -eq 0 ]]
	[[ "${output}" == 'Hello World' ]]
}

For executing the tests, you'd store that snippet in a file (e.g., tests.bats) and run bats tests.bats on the terminal.

bats in a nutshell:

  • Tests are stored in regular bash script files (despite the .bats extension)
  • Test cases are regular bash functions that carry a # @test annotation comment
    • You can also use the fancier @test "description" { … } syntax, though then you break out of the regular bash realm, so e.g. you can no longer run shellcheck on the file.
  • Inside the test function you invoke your command-under-test via run. E.g., for a command/script my-cmd, you’d do run my-cmd.
  • The output and status are captured in $output and $status variables, which you can assert on via regular shell conditionals.
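To illustrate the mechanism: under the hood, run essentially just executes the command and captures its combined output and exit code into global variables. Here's a minimal sketch in plain bash (no bats needed; this run is a simplified stand-in for illustration, not the real implementation):

```shell
#!/usr/bin/env bash
# Simplified stand-in for bats' `run` helper: execute the given command,
# capture its combined stdout/stderr in $output and its exit code in
# $status (as global variables, which is also how bats exposes them).
run() {
	output="$("$@" 2>&1)"
	status="$?"
}

run echo 'Hello World'

# Assertions via regular shell conditionals, just like in a bats test:
[ "${status}" -eq 0 ] && echo 'status ok'
[ "${output}" = 'Hello World' ] && echo 'output ok'
```

The real run helper does more (e.g., it also tolerates non-zero exit codes without failing the test immediately), but the $output/$status pattern is the same.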

The side-project in which I used bats is a task runner CLI tool implemented in bash. The full test-suite is in this folder. For isolation, I run the entire suite inside a disposable docker container, so I can safely mess with the file system. Note that the tool under test is also called run (like the bats built-in for invoking the command under test), which might cause confusion; that's why my tool is aliased as main in my tests.
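For reference, the bats-core project also publishes a Docker image, so a disposable-container setup can look roughly like this (a sketch, assuming Docker is installed and the suite lives in tests.bats in the current directory):

```shell
# Run the whole suite inside a throwaway container, so the tests can
# mess with the (container's) file system without touching the host.
# --rm discards the container afterwards; --volume mounts the project
# directory so the container can see the .bats files.
docker run \
	--rm \
	--volume "${PWD}:/code" \
	bats/bats:latest \
	/code/tests.bats
```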

Discussion

If we are interested in exploring this, I thought that #1710 could be a good opportunity, as it's relatively simple and self-contained. We have other such scripts, though, so we could also add tests for them retroactively.

One downside I see is that it would increase the complexity of our toolchain.

There might be other bash testing utilities as well, though I haven't looked into them.

Sure, this sounds good.

It's hard for me to say no to more tests! And as we write more bash scripts, it would be nice to have more automated testing of them.

It looks pretty lightweight and straightforward, and doing it in conjunction with #1710 sounds like a great test of it.