Improve tests of filters
Closed this issue · 3 comments
In the test_boolean_filtering() method, the filterset built for testing has all of its classifier values set to True:
import axelrod as axl

# Inside test_boolean_filtering(); strategies is the list of strategy
# classes under test.
classifiers = [
    "stochastic",
    "long_run_time",
    "manipulates_state",
    "manipulates_source",
    "inspects_source",
]

# Intersect the strategies that pass each individual filter, while
# building a filterset that requires every classifier to be True.
comprehension, filterset = strategies, {}
for classifier in classifiers:
    comprehension = set(
        filter(axl.Classifiers[classifier], strategies)
    ) & set(comprehension)
    filterset[classifier] = True

filtered = set(
    axl.filtered_strategies(filterset, strategies=strategies)
)
self.assertEqual(comprehension, filtered)
Currently, no strategy meets all of these criteria, so the comprehension is always the empty set and the test reduces to a comparison of two empty sets.
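As a quick sanity check (a sketch, assuming the list under test is axl.strategies), the same intersection can be reproduced outside the test to show that it is empty:

import axelrod as axl

# Intersect the survivors of each boolean filter; nothing is left,
# because no strategy is True for all five classifiers at once.
survivors = set(axl.strategies)
for classifier in [
    "stochastic",
    "long_run_time",
    "manipulates_state",
    "manipulates_source",
    "inspects_source",
]:
    survivors &= set(filter(axl.Classifiers[classifier], axl.strategies))

print(survivors)  # set(), so the assertEqual compares two empty sets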
I was thinking of creating a Hypothesis decorator that returns a list of classifiers to feed this test, so that the filtered set would not always be empty. The core of the decorator could look like this:
classifier_list = draw(
    lists(sampled_from(classifiers), min_size=min_size, max_size=max_size, unique=True)
)
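A fuller, self-contained sketch of what that could look like, assuming Hypothesis's composite decorator; the name classifier_lists and the min_size/max_size defaults here are placeholders, not a settled API:

from hypothesis.strategies import composite, lists, sampled_from

CLASSIFIERS = [
    "stochastic",
    "long_run_time",
    "manipulates_state",
    "manipulates_source",
    "inspects_source",
]

@composite
def classifier_lists(draw, min_size=1, max_size=len(CLASSIFIERS)):
    # Draw a duplicate-free, non-empty subset of the classifier names.
    return draw(
        lists(
            sampled_from(CLASSIFIERS),
            min_size=min_size,
            max_size=max_size,
            unique=True,
        )
    )

The test would then be decorated with @given(classifier_list=classifier_lists()), letting Hypothesis explore subsets of the filters instead of always demanding all five at once.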
Using Hypothesis to explore the space of all possible classifiers sounds good to me!
One immediate question that comes to mind is whether we need a new Hypothesis decorator for this, as I don't see much potential for reuse; then again, that might not be a good reason, and it can be discussed on the PR :)
Go for it!
The suggested changes have been made in #1377. Can this be closed?
Yep! Next time you can say "closes #1377" in the PR description (the text on GitHub when you open the PR) and it will close the issue automatically on merge.