sebastianbergmann/phpunit

Refactor how tests are run

sebastianbergmann opened this issue · 10 comments

  • Move test execution logic out of TestResult
    • Use TestResult just as a Collecting Parameter to collect test results
  • Before the test execution starts
    • Use a TestSuiteIterator to get all tests that are to be run from the top-level TestSuite
      • Use FilterIterator implementations to filter tests based on --filter, --group, and --exclude-group, for instance (see the sketch after this list)
      • This allows the number of tests that are to be run to be calculated correctly before the tests are actually run
    • Sort the tests according to @depends annotations
  • Delay creation of TestCase objects and clean them up immediately after a test has been run
    • Allows for more elegant implementation of test execution in isolated PHP processes
    • Allows for running the tests in random order
    • Reduces memory footprint
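
The FilterIterator bullet above could look roughly like the following. This is a minimal sketch, assuming a `$suiteIterator` that yields test objects and a hypothetical `getGroups()` method on them; `GroupFilterIterator` is illustrative, not PHPUnit's actual API.

```php
<?php

// Illustrative only: GroupFilterIterator, $suiteIterator and getGroups()
// are assumptions, not PHPUnit's actual API. The point is that filtering
// happens on the iterator, before any test is executed.

final class GroupFilterIterator extends FilterIterator
{
    /** @var string[] */
    private $groups;

    public function __construct(Iterator $tests, array $groups)
    {
        parent::__construct($tests);

        $this->groups = $groups;
    }

    public function accept(): bool
    {
        $test = $this->getInnerIterator()->current();

        // Keep the test when it is tagged with at least one requested group
        return count(array_intersect($test->getGroups(), $this->groups)) > 0;
    }
}

// Because filtering happens before execution, the number of tests that will
// actually run can be counted up front:
$filtered      = new GroupFilterIterator($suiteIterator, ['database']);
$numberOfTests = iterator_count($filtered);
```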

I'm very excited to see this! We've had to increase our memory limit every so often to deal with test cases sticking around to the end. One thing you might not have considered yet is that by default any exceptions thrown during a test will have a reference to the test case itself in the backtrace.
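
For what it's worth, the backtrace point can be illustrated with plain PHP: an exception's trace captures the arguments of the frames on the call stack where it was created, so keeping the exception (for example in a collected result) keeps the test case reachable as well. This is a rough sketch, not PHPUnit code; on PHP 7.4+ the behaviour also depends on the zend.exception_ignore_args setting.

```php
<?php

// A thrown exception records the call stack, including the arguments passed
// to each frame. As long as the exception object is kept, anything referenced
// from those arguments (here: the test case stand-in) stays reachable.

function runTest($testCase)
{
    try {
        throw new RuntimeException('failure inside the test');
    } catch (RuntimeException $e) {
        return $e;
    }
}

$testCase  = new stdClass();   // stand-in for the real TestCase object
$exception = runTest($testCase);

// The frame for the runTest() call still lists the test case among its args:
var_dump($exception->getTrace()[0]['args'][0] === $testCase);
// bool(true) while trace args are captured (see zend.exception_ignore_args on PHP 7.4+)
```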

As I've started working on a new Java project at my job, I'm picking up the new JUnit features. One neat feature is the ability to add a TestDecorator to any test case using an annotation. This is how jMock checks its mocks at the end of each test method, and a similar docblock annotation could be helpful for PHP tool integrators.
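
To make the suggestion concrete, something along these lines might work in PHP: a docblock tag that names decorator classes, read via reflection before the test runs. The `@decorateWith` tag and the `TestDecorator` interface below are made up for illustration and are not part of PHPUnit.

```php
<?php

// Hypothetical sketch: a docblock tag names decorator classes that get a
// hook before and after each test method (e.g. to verify mock expectations).

interface TestDecorator
{
    public function before($testCase);
    public function after($testCase);   // e.g. verify mock expectations here
}

function decoratorsFor($testCase, $method)
{
    $docBlock   = (new ReflectionMethod($testCase, $method))->getDocComment() ?: '';
    $decorators = [];

    if (preg_match_all('/@decorateWith\s+(\S+)/', $docBlock, $matches)) {
        foreach ($matches[1] as $class) {
            $decorators[] = new $class();   // each class implements TestDecorator
        }
    }

    return $decorators;
}
```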

Sebastian, this looks like a good move. It also looks like it will resolve the issues I reported in #261.

Can you confirm that the behaviour I mention there is not intentional and won't be somehow preserved by your refactor?

I just did a quick hack where I have a single TestResult and use it to run a loop of TestSuites, each corresponding to a single file, so that I can deallocate the suite after each test file has been executed. In my test suite I saw memory usage drop from 643 MB to 431 MB, so I'm very excited to see this implemented properly.
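
For reference, the hack looks roughly like this (PHPUnit 3.x-era class names, written from memory, so treat the details as approximate):

```php
<?php

// One TestResult collects everything; each file gets its own short-lived
// TestSuite that can be garbage-collected as soon as its tests have run.
// Assumes the PHPUnit 3.x classes are already autoloaded.

$result = new PHPUnit_Framework_TestResult();

foreach (glob('tests/*Test.php') as $file) {
    $suite = new PHPUnit_Framework_TestSuite();
    $suite->addTestFile($file);   // one suite per test file

    $suite->run($result);         // results accumulate in the single TestResult

    unset($suite);                // drop the suite and its TestCase objects
}

printf("Tests: %d, Failures: %d\n", count($result), $result->failureCount());
```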

Hello @sebastianbergmann, could you please add more details to this issue?
It would be nice if you could create some kind of document listing the features/requirements planned for the 4.0 release, for example:

  • long_class_names -> namespaces
  • parallel test execution
  • etc., etc.

Basically, I think the new 4.0 branch is a good moment to solve old problems and implement existing functionality in a cleaner way, and I hope I'll be able to help with coding it.

Before I report this as a bug (it seems to be a design issue): running tests with --filter causes all data providers to run. For some reason a test of mine was failing when run by file name rather than with a filter, because it was influenced by a data provider.
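
A contrived illustration of the interaction being described, with made-up class names: a provider with a side effect runs whenever the suite is built with --filter, even if none of its tests are selected, so another test that depends on that side effect behaves differently when its file is run directly.

```php
<?php

// tests/ProviderTest.php
class ProviderTest extends PHPUnit_Framework_TestCase
{
    public static function valuesProvider()
    {
        putenv('PROVIDER_RAN=1');   // side effect while the suite is being built

        return [[1], [2]];
    }

    /**
     * @dataProvider valuesProvider
     */
    public function testValues($value)
    {
        $this->assertGreaterThan(0, $value);
    }
}

// tests/InfluencedTest.php
class InfluencedTest extends PHPUnit_Framework_TestCase
{
    public function testDependsOnProviderSideEffect()
    {
        // Given the behaviour described above, this passes under
        // `phpunit --filter testDependsOnProviderSideEffect` (all providers
        // run while the suite is built) but fails under
        // `phpunit tests/InfluencedTest.php`, where the other provider never runs.
        $this->assertSame('1', getenv('PROVIDER_RAN'));
    }
}
```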

CC007 commented
  • Sort the tests according to @depends annotations

What is the status of this?

Wouldn't it make sense to also delay the creation of DataProviderTestSuites when doing this? With the proposed changes they'd still need to collect all the data beforehand, keeping them rather large.

This would obviously require the data providers either to be run twice (i.e. once for counting, once later for the data) or to allow the maximum test-case count to change while the provider is being processed.
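
The counting problem can be seen with a generator-based ("lazy") provider sketch: the data sets only exist once the generator is consumed, and a consumed generator cannot be rewound, so either the provider runs twice or the reported total has to change while it is being processed.

```php
<?php

// A lazy provider produces data sets one at a time instead of building the
// whole array up front, which is exactly what keeps DataProviderTestSuites small.

function lazyProvider()
{
    foreach (range(1, 1000) as $i) {
        yield 'case #' . $i => [$i];
    }
}

$provider = lazyProvider();

// Counting the data sets consumes the generator ...
$count = iterator_count($provider);

// ... and PHP refuses to rewind a generator that has already run, so a second
// pass (for the actual test execution) means calling lazyProvider() again:
try {
    $provider->rewind();
} catch (Exception $e) {
    echo 'Cannot rewind: ', $e->getMessage(), "\n";
}
```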

Data providers should be run once. If that means that counting no longer works and we need to change how progress is displayed, so be it.

Data providers can also provide a label for each case, which one can use as a value for the --filter option to select the tests they want to run; this feature should not be broken by accident.
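
As a reminder of what this refers to: string keys returned by a provider become part of the reported test name and can be matched by --filter. The exact --filter syntax for named data sets has varied between PHPUnit versions, so treat the command line below as approximate.

```php
<?php

class AdditionTest extends PHPUnit_Framework_TestCase
{
    public static function additionProvider()
    {
        return [
            'positive numbers' => [1, 2, 3],     // the keys are the data set labels
            'negative numbers' => [-1, -2, -3],
        ];
    }

    /**
     * @dataProvider additionProvider
     */
    public function testAdd($a, $b, $expected)
    {
        $this->assertSame($expected, $a + $b);
    }
}

// Runs only the labelled case, e.g.:
//   phpunit --filter 'testAdd with data set "negative numbers"' tests/AdditionTest.php
```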

Superseded by #3213.