Dependable Test Suite

Goal

If tests pass, configs work.

Motivation

One of the obstacles to making changes to core right now is that we need to test each individual config to ensure nothing broke. That's not good: the configs themselves are not guaranteed to have a test suite, which means we're left testing a handful of things at random, like publishing something, loading something, maybe creating something. That doesn't scale. It discourages change and makes both core and configs too fragile.

On the other hand, we can't create tests for every possible permutation of core. Not only would that take too long to write and maintain, it would also kill innovation and experimentation.

Instead, what we could have is a test suite that captures the supported features provided by core, of api.py specifically. That way, a config can use a tested feature and rely on it continuing to work for as long as GitHub runs automated CI on pull requests, and can experiment with anything not tested (such as new code) with an explicit awareness that it is in fact not tested.

One of the initial hurdles with building this kind of test suite was that Avalon relies heavily on a database, and databases complicate unit testing. But it's been done before; virtually every test suite for a dynamic web site does it, and there are plenty of examples to learn from.
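As a minimal sketch of one common approach, the mongomock library provides an in-memory stand-in for MongoDB (which core uses today), so tests never need a running server. The database and collection names below are made up for illustration:

```python
import mongomock


def test_can_fake_the_database():
    # mongomock mimics pymongo's MongoClient entirely in memory,
    # so no MongoDB server needs to run during tests.
    client = mongomock.MongoClient()
    projects = client["avalon"]["projects"]

    projects.insert_one({"name": "hulk", "type": "project"})
    assert projects.find_one({"name": "hulk"})["type"] == "project"
```

A fixture could then inject a client like this into avalon.io for the duration of a test; where exactly it plugs in is an implementation detail of the suite.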

Another hurdle is host integrations. Some hosts are difficult to automate, in particular those without a licence-free Python frontend like mayapy and Blender. But if we start with api.py, odds are a solution for hosts will become apparent.
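For hosts that do offer such a frontend, automation could be as simple as launching it headless from the test runner. A sketch; `--background` and `--python` are Blender's standard headless flags, while the test script name is made up:

```python
# Launch Blender without a GUI and have it execute a test script,
# propagating a non-zero exit code on failure.
import subprocess

subprocess.check_call(
    ["blender", "--background", "--python", "run_host_tests.py"]
)
```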

Implementation

Configs depend on two entrypoints from Core.

  • api.py for the Avalon API
  • avalon.<host> for the Avalon Host Integration, e.g. avalon.maya, where __init__.py is the integration API (see the example below).
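For reference, this is roughly what those two entrypoints look like from a config's point of view; the pairing of api.install with a host module follows the project README, so treat the exact call as illustrative:

```python
# A config imports the public API and one host integration,
# then installs that host into the current session.
from avalon import api, maya

api.install(maya)
```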

That's great, as it means we'll only have to test what's in api.py and the __init__.py of each host. Here's how I would approach this issue.

  • Document members of api.py
  • Document use of members of api.py, with examples
  • Write tests for those uses, preferably running the examples themselves (see the sketch below)
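A sketch of that third point, assuming the documented examples live in api.py docstrings as doctests; pytest can also collect these directly via --doctest-modules:

```python
# Run the usage examples embedded in api.py docstrings as tests,
# so documentation and test suite cannot drift apart.
import doctest

from avalon import api

if __name__ == "__main__":
    results = doctest.testmod(api, verbose=True)
    raise SystemExit(results.failed)
```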

Our docs are already capable of executing the code blocks they contain and including the results in the rendered markdown. We could leverage this to build a visual test suite.
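As a sketch of what a minimal runner for that could look like, this pulls the fenced Python blocks out of a markdown page and executes each one; the docs path and function name are made up:

```python
# Extract fenced Python blocks from a markdown file and execute
# each one, failing loudly if any example raises.
import re


def run_markdown_examples(path):
    with open(path) as f:
        text = f.read()

    for index, block in enumerate(
            re.findall(r"```python\n(.*?)```", text, re.DOTALL)):
        # Compile with a descriptive filename for readable tracebacks.
        code = compile(block, "%s[block %d]" % (path, index), "exec")
        exec(code, {})


run_markdown_examples("docs/api.md")
```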