Improve course and lesson self-tests
alexander-bauer opened this issue · 4 comments
A couple of things still need to happen before we can say self-tests are working, although the infrastructure is now in place:
- Courses need to self-test for metadata
- The `validate()` tool needs to be more robust
- All default questions need to include automatic self-tests
To add a self-test to a Question class, define `validate(self)`, which returns an error message if there is a problem, and `None` otherwise.
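A minimal sketch of what such a method might look like. The `Question` base class and its `output`/`answer` attributes here are hypothetical, just to illustrate the return convention:

```python
class Question:
    """Illustrative base class; attribute names are assumptions, not swirlypy's API."""

    def __init__(self, output=None, answer=None):
        self.output = output  # the prompt shown to the user
        self.answer = answer  # the correct answer used to check responses

    def validate(self):
        """Return an error message string, or None if the question is valid."""
        if not self.output:
            return "question has no output text"
        if self.answer is None:
            return "question has no correct answer"
        return None
```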
Something we'll have to figure out is what level to test Questions at. We could simulate user input, or just pass things directly to `test_response()`. This doesn't necessarily have to be standard across Questions, of course, particularly due to the limitations of some Question types.
The main difference here is that simulating user input will make the test more rigorous for the actual Question code, whereas passing directly is more a test of the metadata. We definitely want to ensure that the question is answerable, but I don't know whether it would be sufficient to skip the user input step.
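To make the two levels concrete, here is a sketch under assumed names: a toy `TextQuestion` with a `test_response()` method, plus one validator per level. None of this is swirlypy's actual API:

```python
import builtins


class TextQuestion:
    """Toy question; test_response() is assumed to take the raw response string."""

    def __init__(self, answer):
        self.answer = answer

    def test_response(self, response):
        return response.strip() == self.answer


def validate_metadata(q):
    """Level 1: feed the stored answer straight to test_response().
    Verifies the metadata is answerable, but skips all input handling."""
    return None if q.test_response(q.answer) else "stored answer fails its own check"


def validate_via_input(q, simulated):
    """Level 2: simulate user input through the same code path a real
    session would use, so prompting/parsing code is exercised too."""
    orig_input = builtins.input
    builtins.input = lambda prompt="": simulated  # stand in for the user
    try:
        response = input()  # in a real Question this call sits inside its ask loop
        return None if q.test_response(response) else "simulated input fails"
    finally:
        builtins.input = orig_input
```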
What are your thoughts, @WilCrofter and @reginaastri?
Since it's been a busy week, and since my Python skills are 20 years old at best, I should by all means catch up before offering opinions. But, since next week promises to be busy as well, I'll try to say something.
I agree that you have fully capable testing infrastructure in place. The swirlypy design as a whole, having the advantage of (talent and) prior art, is clean and efficient.
I'm not sure what you mean by "self-test for metadata," but I'm taking metadata to mean anything to do with a correct result which is needed to test a user response. If so, I agree that its existence needs to be ensured and, due to the cumulative nature of this kind of tutorial, ensured as if in the context of a running tutorial.
Your question, of course, concerns the level: full simulation of a tutorial session? Passing things directly to `test_response()`? Relegating the job and the choice to Question classes themselves by requiring a `validate(self)` method?
I vote for the last, since it is most general and can take advantage of your plug-in capability (combined with Python's support for multiple inheritance and abstract methods; appropriate abstract classes could be distributed with swirlypy for use as mixins). But since instances of a Question class are used in many contexts, the class can't know a priori the context of its invocation in a lesson. To support self-testing of instances in context, swirlypy itself would have to provide the context during testing, tracking the results of earlier tests for the sake of later ones. As I say, I have to catch up, but my impression is that an appropriate dictionary is already there and may require only a simple API.
It seems natural to me that `Lesson.validate()` methods, when present, would call `Question.validate(context)` methods, which in turn would modify the context argument, a dictionary, and return it. Does this seem reasonable?
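The pattern described above might look like the following sketch. `StubQuestion` and `lesson_validate` are hypothetical names standing in for Question classes and `Lesson.validate()`; the only point is the threading of one shared context dict through the questions in order:

```python
class StubQuestion:
    """Illustrative question whose validity can depend on earlier results."""

    def __init__(self, key, answer):
        self.key = key
        self.answer = answer

    def validate(self, context):
        """Return an error message or None; record results for later questions."""
        if self.answer is None:
            return "question has no correct answer"
        context[self.key] = self.answer  # later questions can read this
        return None


def lesson_validate(questions):
    """Sketch of a Lesson-level validate(): call each question's validate()
    in order, passing one context dict so later tests see earlier results."""
    context = {}
    errors = []
    for i, q in enumerate(questions, start=1):
        err = q.validate(context)
        if err is not None:
            errors.append("question %d: %s" % (i, err))
    return errors, context
```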
With recent commits, the self-tests for questions are almost complete.