Visual improvement of test reports
Background and Motivation:
The way test reports are currently generated can give the visual impression of a negative outcome even when only a small number of tests have failed (i.e. the overall outcome is positive), e.g. in the following example: [screenshot]
Proposed change
The test outcome can be visually changed and improved through e.g. the following measures:
- avoid marking the status of the whole test as "failed"
- instead, display a performance bar from 0 to 100 (see the screenshots in the comments below) to give an immediate idea of how good the outcome is. The colour of the bar depends on the outcome (e.g. red for a negative outcome, orange for an average outcome, green for a positive outcome).
- enrich or replace the table (showing the total count and the number of skipped/failed/warning/manual test suites, test cases and assertions) with a histogram (see the screenshots in the comments below and the sketch after this list), which gives an immediate idea of how many checks passed, failed or passed with a warning.
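As a minimal sketch (assuming the report page already loads a charting library such as Chart.js, which a later comment proposes), the histogram could be rendered like this; the canvas id and the counts are purely illustrative:

```html
<canvas id="statusHistogram"></canvas>
<script>
  // Sketch of the proposed histogram: one bar per status, coloured by
  // outcome. The counts are placeholder values, not real report data.
  new Chart(document.getElementById('statusHistogram'), {
    type: 'bar',
    data: {
      labels: ['Passed', 'Passed with warning', 'Failed'],
      datasets: [{
        label: 'Test assertions',
        data: [120, 14, 8],
        backgroundColor: ['#4caf50', '#ff9800', '#f44336']
      }]
    },
    options: { plugins: { legend: { display: false } } }
  });
</script>
```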
Alternatives
Alternatively, for a more detailed display of the results, the same histogram representation can be replicated for each conformance class.
Funding
Funding is provided by the JRC.
I have no objections to the third point and think that it is a nice improvement.
But I am not sure that I understand what the "performance bar" should express. That a service is 47 % conformant? If this is a performance bar for managers, then it is very deceptive. The result requires a technical interpretation, also with regard to the severity of the errors: is a service with a 0 % performance bar indicator absolutely non-conformant, or just down?
I propose a change in ETF to support the customization of reports. These changes could then be maintained in a separate report stylesheet.
+1
> avoid marking the status of the whole test as "failed"

Just to be clear: in a customized HTML report it may be presented this way, but in the XML/JSON reports the result still has to be "failed", as that is the actual result.
> But I am not sure that I understand what the "performance bar" should express. That a service is 47 % conformant?
The idea is to summarize the test result in a single number (say, from 0 to 100) and/or a graphic. It is not meant as an indication of performance, but as a way to understand the percentage of failed (or successful) tests.
Here are some mockups to make the idea clearer. The colour of the filled part of the bar can differ depending on the overall share of successful tests, for instance (a small code sketch follows this list):
- green only if the overall test result is positive (let's say if % of successful tests is higher than 70%)
- orange for average values (e.g. if % of successful tests is between 40% and 70%)
- red for negative results (e.g. if % of successful tests is lower than 40%)
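Expressed as code, the colour choice could look like the following sketch; the 40 %/70 % cut-offs are just the example values above, and the element id is hypothetical:

```js
// Pick the bar colour from the share of successful tests, using the
// example thresholds proposed above.
function barColor(successPct) {
  if (successPct > 70) return '#4caf50';  // green: positive overall result
  if (successPct >= 40) return '#ff9800'; // orange: average result
  return '#f44336';                       // red: negative result
}

// Fill a simple HTML bar accordingly, e.g. for 47 of 100 successful tests.
// 'performance-bar-fill' is a hypothetical element inside the bar's track.
const successPct = 47;
const fill = document.getElementById('performance-bar-fill');
fill.style.width = successPct + '%';
fill.style.backgroundColor = barColor(successPct);
fill.textContent = successPct + ' / 100';
```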
When looking at the stylesheet, also check the responsiveness of the page.
In order to develop this improvement, the `TestRun2Default.xsl` stylesheet in the `etf-bsxds` component would need to be extended, in the same style as the CSS and JavaScript already included with the reports. It would be possible to do a conditional import of a template containing the scripts that create the graphs. It is still not clear to us where the condition should be stored: in the ETF configuration file, or by looking for templates in a certain folder.
As additional feedback, in line with the recommendation to adhere to the 4+1 architectural model, we would like to make a remark on the process view. If the graphs are added statically to the report stylesheet, the process workflow for obtaining a test report is not altered. However, if the customization of the reports is configurable, a new configuration parameter holding the path to the stylesheet files needs to be added and connected to the transformers. The workflow for generating a test report should not be altered either way.
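As a sketch of what the embedded script part could do, the graph data might simply be scraped from the summary table that the stylesheet already emits; the CSS selectors below are hypothetical placeholders, since the actual report markup may differ:

```js
// Read a count from the existing summary table; the selectors are
// placeholders for whatever the stylesheet actually emits.
function countFrom(selector) {
  const cell = document.querySelector(selector);
  return cell ? (parseInt(cell.textContent, 10) || 0) : 0;
}

const counts = {
  passed:   countFrom('.assertions .passed'),
  warnings: countFrom('.assertions .warnings'),
  failed:   countFrom('.assertions .failed')
};
// 'counts' can then be fed into a Chart.js configuration like the one
// sketched in the proposal above.
```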
OK. Please create and use a new property `etf.reportstyles.dir` in the configuration. If the configured directory is not found or empty, the application should only output a warning and use the default stylesheets as a fallback.
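For illustration only (ETF itself is a Java application, so this Node.js sketch merely mirrors the intended behaviour), the lookup with fallback could work like this:

```js
const fs = require('fs');

// Resolve the stylesheet directory configured via etf.reportstyles.dir,
// falling back to the default stylesheets with a warning if the directory
// is missing, unreadable or empty.
function resolveReportStylesDir(configuredDir, defaultDir) {
  try {
    if (fs.readdirSync(configuredDir).length > 0) {
      return configuredDir; // use the customized stylesheets
    }
  } catch (err) {
    // directory does not exist or cannot be read
  }
  console.warn(`etf.reportstyles.dir "${configuredDir}" not found or empty, ` +
      'using the default stylesheets');
  return defaultDir;
}
```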
We are developing this EIP using the graphics library Chart.js. This JavaScript library can be included via a CDN, so in the alternative XSLT template for generating the reports with the charts (selected using `etf.reportstyles.dir`) we are adding a link in the `<head>` element to refer to this library. We will integrate the scripts that render the graphs, and the controls for interacting with them, into the HTML report along with the other report controls.
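Putting these pieces together, a minimal skeleton of a generated report page could look as follows; the jsDelivr URL is a common public CDN for Chart.js, and the chart data is again placeholder only:

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Chart.js pulled from a public CDN, referenced from the <head> element -->
  <script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
</head>
<body>
  <canvas id="statusHistogram"></canvas>
  <script>
    // Render the histogram, as sketched in the earlier comments.
    new Chart(document.getElementById('statusHistogram'), {
      type: 'bar',
      data: {
        labels: ['Passed', 'Passed with warning', 'Failed'],
        datasets: [{ data: [120, 14, 8], // placeholder counts
                     backgroundColor: ['#4caf50', '#ff9800', '#f44336'] }]
      },
      options: { plugins: { legend: { display: false } } }
    });
  </script>
</body>
</html>
```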
With the development of the new INSPIRE UI, this proposal is no longer relevant for us, at least for the moment. Unless there are different views, we suggest closing it.