Show detailed total lines
ooggss opened this issue · 7 comments
Describe the feature
I would like to use this tool to measure the coverage of certain functions when they are exercised by the test cases of their own project (my functions depend on complex context and cannot be run independently). Currently the tool directly outputs the uncovered lines, but it does not report the total lines or the tested lines, so I am not sure which lines are counted in either set. If this feature is already implemented, please let me know how to use it.
Alternatively, could you explain the criteria the tool uses to determine whether a line is counted as part of the total lines? I tried removing blank lines, lines with only comments, and lines with only '}', but the total lines I calculated still exceed the displayed value.
So the detailed line counts are in the different report types like XML, HTML, etc. For reporting coverage deltas, tarpaulin does output a JSON of its end coverage state in your target directory as well.
Thanks for your reply!
Also, would it be possible to explain the criteria tarpaulin uses to determine whether a line is counted as part of the total lines?
Look in the tarpaulin folder in target; it will have a file like ${PROJECT}-coverage.json.
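If you want to poke at that file programmatically, here's a minimal sketch (not part of tarpaulin itself) that just loads the report and prints its top-level keys so you can see where the per-file line data lives. It assumes serde_json as a dependency, and the file name is a placeholder for your own project; the exact schema is best discovered by inspecting the output rather than assumed.

```rust
// Minimal sketch: peek inside tarpaulin's JSON report without assuming its schema.
// Assumes `serde_json = "1"` in Cargo.toml; the file name below is a placeholder.
use std::fs;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let raw = fs::read_to_string("target/tarpaulin/my-project-coverage.json")?;
    let report: serde_json::Value = serde_json::from_str(&raw)?;

    // Print the top-level keys to find where the per-file line data lives.
    if let Some(obj) = report.as_object() {
        for key in obj.keys() {
            println!("{key}");
        }
    } else {
        println!("{report}");
    }
    Ok(())
}
```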
And it depends on which coverage engine you're using: ptrace (default on Linux) or llvm (default everywhere else). Ptrace uses the debug information from the binary and then filters out lines based on some analysis via syn, as debug info often over-reports how many lines in a file are source lines. The llvm engine might use some of that info (I'd have to check), but it definitely supplements it with the source locations that LLVM's coverage instrumentation reports.
fwiw a lot of users get better results with --engine llvm
but it's highly project specific
I got it. I do get better and faster results with --engine llvm in my project.

What's more, can we use cargo nextest with tarpaulin now? I'm not looking for faster test execution but rather an isolated execution environment for each test case. Some of my test cases fail under cargo test but pass successfully with cargo nextest run.
Supporting nextest is a lot of work on my side which I don't necessarily have the bandwidth for. If your tests fail under cargo test but pass with nextest run then, depending on the application, I'd say that sounds like an issue in your tests that you should ideally address.
Personally, for tarpaulin's tests I used to have to run the tests like cargo test -- --test-threads 1, because the ptrace and wait syscalls that tarpaulin uses can't be isolated to one test instance and so would interfere with each other. But now I use rusty_fork so each test runs in its own process (a similar solution to using nextest to fix your tests). If you're dealing with syscalls like that it's understandable, but otherwise I'd generally assume something has to be fixed in the tests 👀
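For reference, the rusty_fork pattern looks roughly like the sketch below (this isn't taken from tarpaulin's test suite; it assumes rusty-fork as a dev-dependency and the test body is just a placeholder):

```rust
// Each test wrapped in this macro runs in its own child process, so
// process-global state (ptrace, env vars, etc.) can't leak between tests.
// Assumes `rusty-fork` as a dev-dependency; the test body is a placeholder.
use rusty_fork::rusty_fork_test;

rusty_fork_test! {
    #[test]
    fn runs_in_its_own_process() {
        // Anything touching process-wide state goes here.
        assert_eq!(2 + 2, 4);
    }
}
```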
Thanks for your detailed response and suggestions. Actually, the project I'm testing is not one that I developed. When I encountered test errors while using cargo test and reported them to the project's developers, they informed me that I need to use cargo nextest to ensure the tests run correctly.