Option for comparing coverage reports
lucavallin opened this issue · 13 comments
At this moment, tparse shows the current coverage percentage. It would be useful to show, next to the current percentage, the difference from another given test report, for example one containing the coverage information for the main branch.
I am thinking of something like:
go test -json -cover -v ./... | tparse -all -format-markdown -compare=main-coverage.json
Which results in a table like:
STATUS | ELAPSED | PACKAGE | COVER | PASS | FAIL | SKIP |
---|---|---|---|---|---|---|
🟢 PASS | 0.02s | github.com/user/package1 | 100.0% (+20.3%) | 3 | 0 | 0 |
🟢 PASS | 1.16s | github.com/user/package2 | 35.2% (-3.2%) | 1 | 0 | 0 |
🟢 PASS | 0.20s | github.com/user/package3 | 70.5% (-) | 4 | 1 | 0 |
🟢 PASS | 0.78s | github.com/user/package4 | 83.0% (+13.8%) | 6 | 2 | 0 |
🟢 PASS | 0.25s | github.com/user/package5 | 48.8% (-) | 2 | 0 | 1 |
This would make it easy and quick to see if a pull request is increasing or decreasing a project's test coverage.
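To make the proposal concrete, here is a minimal sketch of how the delta column could be rendered. formatDelta is a hypothetical helper invented for illustration, not part of tparse:

```go
package main

import "fmt"

// formatDelta renders the difference between current and baseline coverage
// in the style proposed above: "(+20.3%)", "(-3.2%)", or "(-)" when the
// coverage is unchanged.
func formatDelta(current, baseline float64) string {
	diff := current - baseline
	switch {
	case diff > 0:
		return fmt.Sprintf("(+%.1f%%)", diff)
	case diff < 0:
		return fmt.Sprintf("(%.1f%%)", diff)
	default:
		return "(-)"
	}
}

func main() {
	fmt.Println(formatDelta(100.0, 79.7)) // (+20.3%)
	fmt.Println(formatDelta(35.2, 38.4))  // (-3.2%)
	fmt.Println(formatDelta(70.5, 70.5))  // (-)
}
```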
Yes, this is a great idea. I've also wanted something similar for elapsed time, to understand whether there's a trend over time where tests get slower/faster.
Something I haven't thought deeply about is where things like "main-coverage.json" come from. Are they checked in?
Glad to hear! My first guess would be to have "main-coverage.json" checked in, that file could for example be automatically created and pushed to the main branch by Actions. A possible alternative would be to store and retrieve it from, for example, a storage bucket, or any other place that can provide a static URL to the file.
However, I think "where does the file come from?" shouldn't be a concern of tparse.
Curious, when tparse reads -compare=main-coverage.json .. what are the contents?
Sorry, I might be missing something 😅, but is this a file tparse creates, or would this be a JSON blob from go test ..?
I ask because I'm failing to recall what .json file the go toolchain produces.
EDIT: IIRC there are tools like https://github.com/axw/gocov that can produce a .json file
The first option that comes to mind is writing the output of go test to a file, as in go test -json -cover -v ./... > test-output.json. This would be handy, given that tparse can already understand that format. I am not particularly fond of tools like gocov, since that is an extra dependency that does something extremely specific.
The most idiomatic way of doing it would be to parse the coverage file that you can get with -coverprofile, as in go test -json -cover -coverprofile=coverage.out -v ./.... This would require some extra work, but personally I think tparse would benefit from being able to parse coverprofiles anyway: sometimes I'd like to just run Go tests, and then use tparse to present them in a human-friendly way (that also removes the need for the set -o pipefail hack).
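For context, computing total coverage from a coverprofile is a modest amount of parsing. A rough sketch, assuming the standard profile line format and a hypothetical coverageFromProfile helper (not tparse's actual code):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// coverageFromProfile computes total statement coverage from the contents of
// a file produced by `go test -coverprofile=coverage.out`. After the "mode:"
// header, each line looks like:
//
//	pkg/file.go:startLine.startCol,endLine.endCol numStatements hitCount
//
// Coverage is the percentage of statements with a non-zero hit count.
func coverageFromProfile(profile string) (float64, error) {
	var covered, total int
	for _, line := range strings.Split(profile, "\n") {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "mode:") {
			continue
		}
		fields := strings.Fields(line)
		if len(fields) != 3 {
			return 0, fmt.Errorf("malformed line: %q", line)
		}
		stmts, err := strconv.Atoi(fields[1])
		if err != nil {
			return 0, err
		}
		count, err := strconv.Atoi(fields[2])
		if err != nil {
			return 0, err
		}
		total += stmts
		if count > 0 {
			covered += stmts
		}
	}
	if total == 0 {
		return 0, nil
	}
	return 100 * float64(covered) / float64(total), nil
}

func main() {
	profile := "mode: set\npkg/a.go:3.2,5.4 2 1\npkg/a.go:7.2,9.4 2 0\n"
	cov, err := coverageFromProfile(profile)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%.1f%%\n", cov) // 50.0%
}
```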
I have been searching for a tool for presenting test results, and I picked tparse because no other tool does all of the following:
- Present results in a human-friendly way, especially markdown for comments in pull requests and (Actions) job summaries
- Show coverage percentage (junit-based tools somehow don't pick that up)
- Show the coverage diff with previous runs of the tests (this feature we're talking about in the issue)
- Bonus points: have a GitHub Action in the marketplace to go with the tool, maybe it could even annotate PRs
This is great feedback!
At one point I thought about adding a tparse-specific JSON blob that could be checked in but could also be shipped to a service. I started working on https://gotest.io to track Go tests over time but paused those efforts due to a full-time job.
I know I've been saying this for a while, but I do intend to find some time this summer to get this tool to a stable v1 and address a few of the open issues. (I still write a lot of Go code, and use this tool quite a bit).
@mfridman Great to hear. I'll be happy to help if/when I have some free time, but no promises. This issue is the most interesting for me, so if we agree on the (rough) design, I will give it a try.
I had a look at the source code; reading and parsing a file to compare against would be quite simple. Besides adding the -compare flag (or something similar), I added the following to app.go at line 57:
if option.Compare != "" {
	compareReader, err := os.Open(option.Compare)
	if err != nil {
		return 1, err
	}
	defer compareReader.Close()
	compare, err := parse.Process(compareReader)
	if err != nil {
		return 1, err
	}
	// Just to print out the contents of compare
	// (spew is github.com/davecgh/go-spew/spew).
	spew.Dump(compare)
}
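From there, computing per-package deltas could look something like this sketch. diffCoverage and the map[string]float64 shape are assumptions for illustration; tparse's parse.Process returns richer package summaries:

```go
package main

import "fmt"

// diffCoverage pairs per-package coverage from the current run with a
// baseline run and returns the delta for packages present in both runs.
func diffCoverage(current, baseline map[string]float64) map[string]float64 {
	deltas := make(map[string]float64)
	for pkg, cov := range current {
		base, ok := baseline[pkg]
		if !ok {
			continue // new package, nothing to compare against
		}
		deltas[pkg] = cov - base
	}
	return deltas
}

func main() {
	current := map[string]float64{"strings": 97.9, "sort": 58.7}
	baseline := map[string]float64{"strings": 98.0, "sort": 60.8}
	for pkg, d := range diffCoverage(current, baseline) {
		fmt.Printf("%s: %+.1f%%\n", pkg, d)
	}
}
```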
Was hacking on tparse this weekend and was thinking about this issue.
Presumably, when running tparse in CI with a -compare flag, you'd need an "against" file (main-coverage.json). Would this be something that's already checked into vcs? Or would the against file be stored in a cache or some blob storage? I guess this doesn't matter, since it's just a bag of bytes we read; more a curiosity.
There are at least two ways we can go: have tparse store some custom .json format (maybe in a specific directory with some timestamp), or have the caller store the raw go test output (which tparse -file already knows how to parse).
Fwiw I really like this idea, and with a bit of guidance can help get something in place.
Hey @mfridman! Thanks for looking into this.
I think the easiest would be to have the file in git, but if the -compare flag could take any URI, then it would be easy to point it at either a local file, a cache, or a file in blob storage.
My preference for the format would be the raw test output. It is not ideal, because Go already has a coverage format that is more "semantically" correct; however, tparse does not understand it yet. Having a custom format instead would take the tool further away from Go's defaults... if that makes sense.
Alright, made some progress on this today.
happy path
# go<version> test -count=1 fmt strings bytes bufio crypto log mime sort time -json -cover > go<version>.json
$ tparse -file go1.20.json -compare go1.17.json
┌────────┬─────────┬─────────┬───────────────┬──────┬──────┬──────┐
│ STATUS │ ELAPSED │ PACKAGE │ COVER         │ PASS │ FAIL │ SKIP │
├────────┼─────────┼─────────┼───────────────┼──────┼──────┼──────┤
│ PASS   │ 0.62s   │ bufio   │ 93.4% (+0.8%) │ 87   │ 0    │ 0    │
│ PASS   │ 1.56s   │ bytes   │ 95.5% (+0.2%) │ 138  │ 0    │ 0    │
│ PASS   │ 0.75s   │ crypto  │ 5.9% (+5.9%)  │ 5    │ 0    │ 0    │
│ PASS   │ 0.69s   │ fmt     │ 95.2% (0.0%)  │ 79   │ 0    │ 1    │
│ PASS   │ 0.87s   │ log     │ 68.0% (+1.1%) │ 10   │ 0    │ 0    │
│ PASS   │ 1.02s   │ mime    │ 94.0% (+0.3%) │ 24   │ 0    │ 0    │
│ PASS   │ 1.72s   │ sort    │ 58.7% (-2.1%) │ 76   │ 0    │ 1    │
│ PASS   │ 1.27s   │ strings │ 97.9% (-0.1%) │ 117  │ 0    │ 0    │
│ PASS   │ 7.80s   │ time    │ 92.5% (+0.9%) │ 349  │ 0    │ 1    │
└────────┴─────────┴─────────┴───────────────┴──────┴──────┴──────┘
error state
Still display the package summary, but output a warning below the table if there is an error opening the compare file or parsing its contents.
This might be an opportunity to add a -fail-on-any-error flag, but I think tparse should strive to ALWAYS print the summary table unless something catastrophic happens, since users are trusting this tool with their piped test output.
go run main.go -file go1.20-strings.json -compare invalid.json
┌────────┬─────────┬─────────┬───────┬──────┬──────┬──────┐
│ STATUS │ ELAPSED │ PACKAGE │ COVER │ PASS │ FAIL │ SKIP │
├────────┼─────────┼─────────┼───────┼──────┼──────┼──────┤
│ PASS   │ 0.94s   │ strings │ 97.9% │ 117  │ 0    │ 0    │
└────────┴─────────┴─────────┴───────┴──────┴──────┴──────┘
warning: failed to open compare file: invalid.json
Questions
Is it confusing to show a green cell in Cover even though the overall coverage went down when comparing it against another test output?
The colours for the Cover column in the package summary are based on some arbitrary logic from many years ago:
- >0 && <= 50 .. red
- > 50 && < 80 .. yellow
- >= 80 .. green
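For reference, those thresholds amount to something like the following. coverColor is a made-up name for illustration, not the actual function in the codebase:

```go
package main

import "fmt"

// coverColor maps a coverage percentage to a display colour using the
// thresholds listed above: red up to 50%, yellow above 50% and below 80%,
// green at 80% and above.
func coverColor(pct float64) string {
	switch {
	case pct >= 80:
		return "green"
	case pct > 50:
		return "yellow"
	default:
		return "red"
	}
}

func main() {
	for _, pct := range []float64{35.2, 58.7, 97.9} {
		fmt.Printf("%.1f%% -> %s\n", pct, coverColor(pct))
	}
}
```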
Hey @mfridman, awesome work! The happy path looks great, and I agree with you on having tparse always print the summary unless a cosmic cataclysm hits us (or an alien invasion). It's useful to give users the possibility to decide the behaviour, but I don't think this is something that needs to happen right away.
About the question: I would like a decrease in coverage to be obvious at a glance. Do you think it'd be possible to have the (-0.1%) part in orange/yellow if negative, and maybe a different shade of green if positive? I wouldn't make 97.9% yellow; it's still a great level of coverage to have!
Merged a crude implementation in #101
-compare Compare against a previous test output file. (experimental)
Thanks for the great suggestion. There's still a bit of cleanup to do, but overall I think it's a good first step.
🎉 Great work, glad I could be of help!