Misses in coverage
Opened this issue · 83 comments
Mention any false positives or false negatives in code coverage in this issue with an example where it occurred and it will be added to the list!
- An `else` only line https://codecov.io/gh/Smithay/wayland-rs/src/master/wayland-commons/src/socket.rs#L33 (see the sketch after this list)
- Single logical lines split across multiple ones, see #270 for past discussion
- match statement with enum or `*self` #258
- macros in function bodies https://codecov.io/gh/simd-lite/simdjson-rs/src/master/src/stage2.rs#L133
- static values can be uncovered https://codecov.io/gh/simd-lite/simdjson-rs/src/master/src/stage2.rs#L320
- const values can be uncovered https://codecov.io/gh/simd-lite/simdjson-rs/src/master/src/stage2.rs#L320
- unsafe blocks marked as uncovered https://codecov.io/gh/simd-lite/simdjson-rs/src/master/src/lib.rs#L285
- `const` initialisations aren't covered
- general chained method calls (sometimes can differ in free functions versus methods, see #238)
- `PhantomData`
- Inconsistent coverage with macros like `assert_eq!`
- Not sure how well inline `asm!` works
- Code disabled by feature configs included
- struct fields in matches with enum structs: in `Foo::X{ref bar} => {}`, `bar` can register as uncovered
- trait methods without a default impl are ignored
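As a minimal sketch of two of these patterns (hypothetical code, not from any of the linked reports), an `else`-only line and a chained call split across lines often look like this:

```rust
fn describe(values: &[i32]) -> String {
    let summary = if values.is_empty() {
        "empty".to_string()
    } else { // an `else`-only line, often flagged as a miss
        values
            .iter() // chained calls split across lines can be
            .map(|v| v.to_string()) // partially reported even when executed
            .collect::<Vec<_>>()
            .join(",")
    };
    summary
}

#[test]
fn covers_both_branches() {
    assert_eq!(describe(&[]), "empty");
    assert_eq!(describe(&[1, 2]), "1,2");
}
```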
It looks like my coverage report is counting the lines in tests as part of coverage (including one miss, somehow). See e.g. here if you scroll down to the lines 500+. Not sure whether that's intentional or not but seems a bit odd (and the miss is a false negative I guess)
There's some weirdness with macros, I need to figure out the best way to handle them. Also, if you want to exclude test code you can use `--ignore-tests` and all lines in anything with the `#[test]` attribute or inside `#[cfg(test)]` will be removed from the results.
Just curious, why is `--ignore-tests` not the default? Are people really writing tests for their tests? I can't imagine a situation where I'd want my test code to be counted as part of the coverable source code.
Honestly, just because as a feature it came later and no one's asked for the default to be switched around; I've got no opposition to switching the default behaviour. I also definitely don't think people are writing tests for their tests (or at least I hope not...)
The main use I can see is for integration tests that involve things like file or network IO: some people might want to make sure their tests are actually executing the code as thoroughly as they expect.
I think this is another instance of the "split logical lines", but the attached report for serde.rs
shows some more uncovered chain calls and unexpected (inlined?) not-to-be-covered lines at the very bottom.
A variable declaration with the left-hand expression on a different line is counted as untested, see this report and this one. Additionally, conditionally compiled code that is not part of the current feature set seems to flag a false negative as well, as shown here
Oh, the feature flag is something I've definitely missed! Although I wonder if that should be filtered out by default, as ideally you should test all your features thoroughly, and it is possible to organise different runs in a batch with different feature settings... Maybe I'll add a flag to add/remove them.
Also, for the assign expressions I'm thinking of something that maps to logical lines of code, to solve that class of issues once and for all.
It looks like the inline, associated functions returning `&'static str` in this report only have the function signature marked and not the body that returns the string.
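A hypothetical reduction of that shape (made-up code, not the code in the report):

```rust
struct Status;

impl Status {
    #[inline]
    fn label() -> &'static str {
        // The line returning the string literal may be reported as uncovered
        // even though the function is clearly called from the test below.
        "ok"
    }
}

#[test]
fn label_is_ok() {
    assert_eq!(Status::label(), "ok");
}
```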
Also, thanks for tarpaulin, it's really handy and easy to set up :)
Are people really writing tests for their tests?
I also definitely don't think people are writing tests for their tests (or at least I hope not...)
I can think of two examples:
- Implementations of Quickcheck / Proptest test case generation. These implementations are sometimes complicated enough to warrant some testing.
- Reference functions. Sometimes I write a slow-but-simple textbook test version of an algorithm (for example an FFT). I then use this to compare with the fast-but-complicated real implementation. But I first add a few tests to make sure the reference is correct.
But I don't care much about coverage in both cases; `--ignore-tests` seems like a good default.
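As a made-up sketch of the reference-function case (all names are invented):

```rust
/// Slow but obviously-correct reference implementation, kept in the test module.
fn sum_of_squares_reference(xs: &[f64]) -> f64 {
    let mut total = 0.0;
    for &x in xs {
        total += x * x;
    }
    total
}

/// Fast implementation that the library actually ships.
fn sum_of_squares_fast(xs: &[f64]) -> f64 {
    xs.iter().map(|x| x * x).sum()
}

#[test]
fn reference_is_sane() {
    // A small "test for the test helper" itself.
    assert_eq!(sum_of_squares_reference(&[3.0, 4.0]), 25.0);
}

#[test]
fn fast_matches_reference() {
    let xs = [1.5, -2.0, 0.25];
    assert!((sum_of_squares_fast(&xs) - sum_of_squares_reference(&xs)).abs() < 1e-12);
}
```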
Here's another coverage result that seems to have some odd lines marked as not covered:
https://codecov.io/gh/jonhoo/openssh-rs/src/9c82a1b11668033e2361d04f821786a41ce46615/src/lib.rs
The repository is over here: https://github.com/jonhoo/openssh-rs/
I believe all of the misses here are erroneous:
https://codecov.io/gh/jonhoo/cliff/src/811bace3d7b1d7c64bdd316ab59d4e355e3d163c/src/lib.rs
GitHub API: Forbidden
^ I think that's just GitHub having issues today
I have 2 covered lines containing doc comments using tarpaulin 0.11.1:
https://codecov.io/gh/FeuRenard/mygpoclient-rs/src/c5e4d61f21b548f2efd03429a4d35f62441c42c2/src/episode.rs
False negatives here.
I have set up tarpaulin recently and tested it on my code and uploaded test coverage to Codecov. This is the actual report on Codecov (no custom formatting).
I have found that:
- some `write!( ... )` macro arguments, when split over multiple lines, are flagged as misses
- some `assert!( ... )` macro arguments, when split over multiple lines, are flagged as misses
- some right-hand sides of `let ...` assignments, when on other lines (even if `self` is on the same line as the `let ...`), are flagged as misses (see the sketch below)
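A hypothetical minimal shape of the three cases (not the code from the linked report):

```rust
use std::fmt::{self, Display};

struct Point { x: i32, y: i32 }

impl Display for Point {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // write! arguments split over multiple lines
        write!(
            f,
            "({}, {})",
            self.x,
            self.y,
        )
    }
}

#[test]
fn multiline_cases() {
    let p = Point { x: 1, y: 2 };
    // assert! arguments split over multiple lines
    assert!(
        p.to_string()
            == "(1, 2)"
    );
    // let with the right-hand side on the following line
    let doubled =
        Point { x: p.x * 2, y: p.y * 2 };
    assert_eq!(doubled.x, 2);
}
```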
Here's my travis-ci build configuration, if needed:
```yaml
language: rust
sudo: required # required for some configurations
# tarpaulin has only been tested on bionic and trusty, other distros may have issues
dist: bionic
addons:
  apt:
    packages:
      - libssl-dev
rust:
  - stable
  - beta
  - nightly
jobs:
  allow_failures:
    - rust: nightly
  fast_finish: true
cache: cargo
before_script: |
  if [[ "$TRAVIS_RUST_VERSION" == stable ]]; then
    cargo install cargo-tarpaulin
  fi
after_success: |
  if [[ "$TRAVIS_RUST_VERSION" == stable ]]; then
    # Get coverage report and upload it for codecov.io.
    cargo tarpaulin --out Xml
    bash <(curl -s https://codecov.io/bash)
  fi
```
I have a whole bunch of `match self.blah { <newline> }`s where the line containing the `match` itself is marked not-covered.
In a `fn foo(&mut self, ...) -> &mut Self { ...; self }`, the final line returning `self` is always listed as uncovered.
It seems to be any trailing expression that is just a borrow from `self`: report
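A hypothetical reduction of those shapes (my own toy example, not the linked report):

```rust
#[derive(Default)]
struct Builder {
    parts: Vec<String>,
}

impl Builder {
    fn push(&mut self, part: &str) -> &mut Self {
        self.parts.push(part.to_string());
        self // trailing `self` return, reported as uncovered in the cases above
    }

    fn parts_mut(&mut self) -> &mut Vec<String> {
        &mut self.parts // trailing borrow from `self`, same symptom
    }
}

#[test]
fn builder_runs() {
    let mut b = Builder::default();
    b.push("a").push("b");
    assert_eq!(b.parts_mut().len(), 2);
}
```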
The item headers for `impl`s in statement position are incorrectly marked as uncovered. Lifting the `impl` blocks to item position makes those lines correctly not counted as uncovered. [example]
@CAD97 That link is behind a login wall. Not everyone has already given access to codecov.io.
@mathstuf ah, sorry, I was unaware that that link required a login. (It should be public, the repository in question is public.) (If it was just that the second link was blank, it's because I had forgotten to pin it to a specific commit, which is fixed now.) Here's the linked snippets inline:
```diff
+ pub fn builder(&mut self) -> &mut Builder {
-     &mut self.cache
  }

+ pub fn add(&mut self, element: impl Into<NodeOrToken<Arc<Node>, Arc<Token>>>) -> &mut Self {
+     self.children.push(element.into());
-     self
  }

+ fn visit_seq<Seq>(self, mut seq: Seq) -> Result<Self::Value, Seq::Error>
  where
      Seq: SeqAccess<'de>,
  {
+     if seq.size_hint().is_some() {
          struct SeqAccessExactSizeIterator<'a, 'de, Seq: SeqAccess<'de>>(
              &'a mut Builder,
              Seq,
              PhantomData<&'de ()>,
          );

-         impl<'de, Seq: SeqAccess<'de>> Iterator for SeqAccessExactSizeIterator<'_, 'de, Seq> {
-             type Item = Result<NodeOrToken<Arc<Node>, Arc<Token>>, Seq::Error>;

+             fn next(&mut self) -> Option<Self::Item> {
+                 self.1.next_element_seed(ElementSeed(self.0)).transpose()
              }
```
These that I'm seeing seem almost arbitrary; the function signature itself is marked as missing coverage, even though the function itself clearly is covered[1], and I don't have the same problem for the exact same function implemented elsewhere[2] and which is tested in the exact same manner; a similar issue can be seen with [3] vs [4].
Pretty sure I found a false negative: the latest released `cargo-tarpaulin` (0.13.3) marks two `wrapping_mul()`'s as unexecuted in the middle of a bunch of other code that is green:
- https://hg.sr.ht/~icefox/oorandom/browse/src/lib.rs?rev=default#L91
- https://hg.sr.ht/~icefox/oorandom/browse/src/lib.rs?rev=default#L214
Report: https://alopex.li/temp/tarpaulin-oorandom.html
Thank you!
Simply, what I noticed is that once I get the warning "Instrumentation address clash, ignoring 0x40001234", the lines around that "address clash" are marked as uncovered. So a simple `new` method with a straightforward implementation is marked as uncovered, and therefore the coverage is reported with a lower percentage.
I'll try to handle that instrumentation clash warning better; it happens because instrumentation points to lines are a many-to-many relationship and I'd simplified it by making it many-to-one (many traces to one line).
Also @fadeevab, do you have a public project with the instrumentation clash issue I can test with? Just to save time hunting down or creating a project where that happens.
@xd009642 Yeap, https://github.com/fadeevab/cocoon
Thanks!
Hmm, it didn't happen on my machine... I've pushed a branch called `permissive_clash_handling` if you want to try it out yourself. In the meantime I'll keep looking to recreate the clash issue with a project on my machine.
Not sure whether it helps, but this is my toolchain:

```
Default host: x86_64-unknown-linux-gnu
stable-x86_64-unknown-linux-gnu (default)
rustc 1.44.1 (c7087fe00 2020-06-17)
```
I filed #496 which appears to be another case of `match` and function arguments split across multiple lines, as described here. rsgit is a public project; feel free to submit a PR (similar to my rust-git/rsgit#56) if you wish to use that for testing.
Yeah, I've got a plan for the chained method calls and some of the match ones. Basically falling back on the logical lines technique, I just need to figure out the edge cases in the language etc.
All these issues really come around because there's a lot of noise and instrumentation points that aren't reachable (you need to instrument somewhere else). It would be nice to figure out how to tackle the noise in the DWARF tables and get around it, but logical-lines-based grouping isn't insurmountable to implement.
Looks like Rust 1.45.0 added some new cases. :-( Please see rust-git/rsgit#59 for an example.
Several false negatives here:
All of these examples have tests directly testing them at the bottom of the file.
I noticed some strange behavior in the coverage of a diff https://codecov.io/gh/wasmerio/wasmer/pull/1566/diff, specifically:
- constant in an if/else chain marked as not hit
- unconditional call to `clone` marked as not hit
- use of `type { field, ..type::default() }` marked as not hit
- part of an `if let` binding marked as not hit
- creation of a variable marked as not hit
- use of that variable marked as not hit
Perhaps this is a configuration issue on my side; a lot of these issues look like they were caused by optimization.
Thanks!
Looks like one of the largest instances of false negatives in the code for time is a parameter or the initialization of a struct (typically when a variable with that name already exists). Another situation that sometimes isn't covered is when an expression is split over multiple lines.
```rust
impl Distribution<Duration> for Standard {
    fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> Duration {
        let seconds = Standard.sample(rng);
        Duration::new(
            seconds, // This line alone isn't covered.
            seconds.signum() as i32 * rng.gen_range(0, 1_000_000_000),
        )
    }
}
```

```rust
pub const fn try_from_hms(
    hour: u8,
    minute: u8,
    second: u8,
) -> Result<Self, error::ComponentRange> {
    ensure_value_in_range!(hour in 0 => 23);
    ensure_value_in_range!(minute in 0 => 59);
    ensure_value_in_range!(second in 0 => 59);
    Ok(Self {
        hour, // this
        minute, // this
        second, // and this line are not covered
        nanosecond: 0,
    })
}
```

```rust
pub(crate) fn from_iso_ywd_unchecked(year: i32, week: u8, weekday: Weekday) -> crate::Date {
    let ordinal = week as u16 * 7 + weekday.iso_weekday_number() as u16
        - (Self::from_yo_unchecked(year, 4)
            .weekday() // this
            .iso_weekday_number() as u16 // this
            + 3); // and this line are uncovered, despite the first chunk of the expression being covered

    if ordinal < 1 {
        return Self::from_yo_unchecked(year - 1, ordinal + days_in_year(year - 1));
    }

    let days_in_cur_year = days_in_year(year);
    if ordinal > days_in_cur_year {
        Self::from_yo_unchecked(year + 1, ordinal - days_in_cur_year)
    } else {
        Self::from_yo_unchecked(year, ordinal)
    }
}
```
Having no idea how tarpaulin actually works aside from it putting "markers" to detect when certain code is run, I think marking an entire expression as covered is what would be necessary. The struct initialization is almost certainly optimized out, even in debug mode (especially in the example provided), so any marker would be futile.
Here's an interesting one! Should a branch of a match arm that consists solely of `unreachable!()` be considered covered? In my opinion it should be ignored.
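For concreteness, a hypothetical arm of that shape (made-up example):

```rust
fn parity_name(n: u32) -> &'static str {
    match n % 2 {
        0 => "even",
        1 => "odd",
        // Never executed; arguably coverage should ignore this arm entirely.
        _ => unreachable!(),
    }
}

#[test]
fn parity_names() {
    assert_eq!(parity_name(2), "even");
    assert_eq!(parity_name(3), "odd");
}
```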
I don't seem to get any coverage on traits even when directly testing the trait methods
Is there a straightforward way for these misses/false positives be fixed? I'd be willing to contribute to this repo fixing some, though I have no clue where to start.
@MonliH so the misses in coverage typically come down to two classes:

1. The instructions show up in DWARF for lines that aren't executable, i.e. a line of just `} else {`; this can be caught in the source_analysis module.
2. The instruction is something that should be executed, i.e. a method call in a chain of calls, but for whatever reason the instructions provided in the debug info aren't good enough.
So issues that fall under 1 are definitely the easiest (rough sketch of the idea below). The source analysis module will load the contents of a file and use syn to walk over every expression and statement, and it's generally well tested. Because of the vast number of things to consider, like potential attributes on every expression, all the ways expressions can be composed in Rust, and all the ways people can write code that could break the assumptions, it can sometimes take some thinking on how to do things properly.
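To illustrate the idea behind class 1 (this is just a rough sketch, not tarpaulin's actual source_analysis code; it assumes syn with the `full` and `visit` features plus proc-macro2's `span-locations` feature):

```rust
use std::collections::HashSet;

use syn::{spanned::Spanned, visit::Visit};

/// Collect the starting line of every expression in a file; any line that
/// never appears here (e.g. a bare `} else {`) can be dropped from the
/// set of coverable lines even if DWARF attributes instructions to it.
#[derive(Default)]
struct CoverableLines {
    lines: HashSet<usize>,
}

impl<'ast> Visit<'ast> for CoverableLines {
    fn visit_expr(&mut self, expr: &'ast syn::Expr) {
        self.lines.insert(expr.span().start().line);
        // Keep walking so nested expressions are recorded too.
        syn::visit::visit_expr(self, expr);
    }
}

fn coverable_lines(source: &str) -> syn::Result<HashSet<usize>> {
    let file = syn::parse_file(source)?;
    let mut visitor = CoverableLines::default();
    visitor.visit_file(&file);
    Ok(visitor.lines)
}
```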
Issues that fall under 2 are harder. I've typically looked at things like the GDB source code, addr2line implementations and objdump output from test binaries containing relevant DWARF sections to improve things here.
Also #549 should fix all of these issues but progress on that is now mainly working through the LLVM internals and I'm not sure how easy it will be to have someone else assist on it
I'm getting a lot of uncovered implicit return statements in functions that clearly execute
Honestly, just because as a feature it came later and no one's asked for the default to be switched around; I've got no opposition to switching the default behaviour. I also definitely don't think people are writing tests for their tests (or at least I hope not...)
The main use I can see is for integration tests that involve things like file or network IO: some people might want to make sure their tests are actually executing the code as thoroughly as they expect.
Is there any way that we can get an annotation similar to #[cfg(not(tarpaulin))] to ignore the tests from coverage consideration, but still run the test? I have a few tests where checks are necessary for IO, but don't want to include those statements in my coverage results.
Is there any way that we can get an annotation similar to #[cfg(not(tarpaulin))] to ignore the tests from coverage consideration
`cfg` would compile the code out completely. I think you're looking for the `cargo tarpaulin --ignore-tests` flag.
@cwlittle the readme covers this https://github.com/xd009642/tarpaulin#ignoring-code-in-files: you can use `#[cfg(not(tarpaulin_include))]`. I don't set `tarpaulin_include` in the rustflags, so as long as you don't add it as a cfg the code will be compiled in, but not added to the coverage results.
Using `#[inline(always)]` on a function will result in its first line being reported as uncovered.
To reproduce (tested on v0.18.0):
```rust
#[inline(always)]
fn foo() {
    println!()
}

#[test]
fn test_foo() {
    foo();
}
```
Line 2 (`fn foo() {`) will be reported as not covered, while the body is correctly reported as covered.
Previously reported in #79 and #136 (comment).
```rust
_ => write!(
    f,
    "{}{}",
    self.call_wahtever_function(),
    self.call_another_function()
)
```

In this `write!` inside a `Display` trait implementation, split over multiple lines by rustfmt, the `f` line is uncovered.
Also there seem to be problems with async-trait
I'm getting a lot of uncovered implicit return statements in functions that clearly execute
I'm also seeing this behaviour. Also break statements and enum match cases are uncovered.
I tried tarpaulin on the code base of kodiak-taxonomy, a library I published recently and got fairly decent results. Thank you so much for this tool!
Sometimes tarpaulin missed "None" match arms, some of them being empty ( None => {} ), some of them having code in the None block ( None => { ... } ). I was wondering why and tried to create a minimalist example. At first I didn't succeed. In my example code tarpaulin reported the None match arm as covered.
It turned out that tarpaulin behaves differently as soon as any generics are involved. When I replace K with String in the following example the tarpaulin report is correct: all covered.
Anything I can do about this? Am I missing something? Thanks.
use std::fmt::Display;
pub struct User<K> {
name: K,
}
impl<K> User<K>
where
K: Display + Clone,
{
pub fn set_name(&mut self, user: Option<User<K>>) {
match user {
None => {}
Some(user) => self.name = user.name,
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn set_name_some() {
let mut me = User {
name: "Tobias".to_string(),
};
let new_name = User {
name: "Valentin".to_string(),
};
me.set_name(Some(new_name));
}
#[test]
fn set_name_none() {
let mut me = User {
name: "Tobias".to_string(),
};
me.set_name(None);
}
}
I also attach the tarpaulin report here.
@tokcum, if you're using the ptrace engine (default on Linux) it's just an unfortunate side-effect of instructions being in different locations and it being more sensitive to that. One potential solution is using `--engine llvm` and seeing if that improves things.
@xd009642, thank you. Changing the engine to LLVM did the trick. Great! I was not aware of the implications of changing the engine.
I have a similar issue as @tokcum, but in my case `--engine llvm` doesn't help. Basically it is a generic function with a match where the blocks are multiline; minimal example:
```rust
pub fn is_some<T>(option: Option<T>) -> bool {
    match option {
        Some(_) => {
            return true;
        }
        None => {
            return false;
        }
    }
}

#[cfg(test)]
mod test {
    use super::*;

    #[test]
    fn test_none() {
        assert_eq!(is_some::<()>(None), false);
    }

    #[test]
    fn test_some() {
        assert_eq!(is_some(Some(())), true);
    }
}
```
If I run tarpaulin on this (regardless of whether I use `--engine llvm`), I get only 4 / 6 matched lines, and the non-matching lines are the `Some` and the `None` lines:
Here are a few inaccuracies in one screenshot (code that's being shown there):
- multi-line match arms can be partially covered
- field access / method chains without any divergence partially covered
- `return` not covered despite the preceding statement, which doesn't diverge, being covered
Is there a flag or config setting to ignore lines with normal, non-doc comments? I'm getting a large number of false hits on lines with `//` comments.
It seems arbitrary as well: I have comments containing plain text getting false hits, while I also have comments containing multiple "punctuation" type symbols which are ignored.
False hit:

```rust
// Pull incrementor ID
```

Ignored:

```rust
//BUCKETS.save(deps.storage, (sender.clone(), &bucket_id), &new_bucket)?;
```
Not sure if this is one of the cases mentioned above:
https://app.codecov.io/gh/boa-dev/boa/pull/2885/blob/boa_unicode/src/lib.rs
The `matches!()` is sometimes covered, sometimes not; the function call (first argument) is covered, but the pattern matching is not.
I noticed a little bug in a small crate (~100 lines) I'm working on. Comments without a newline before them are getting counted towards coverage. Adding a newline gave me +2.88% coverage. This is the command used:
```
cargo tarpaulin -v --release --engine ptrace --all-features --out html
```
This is an excerpt from the html report showing with and without the newline:
Not sure if this has been brought up before.
Found this yesterday, not sure it has already been reported, but I couldn't find any reference to it:
Resulting coverage:
https://app.codecov.io/gh/clechasseur/mini_exercism/commit/c4d0075a4dad0b6c9d1eb5efdcd10f36d560d7fa/blob/src/api/detail.rs#L117
This is inside a `macro_rules!` as well as inside a `paste!` block.
It also comes up when invoking the macro, because the macro forwards the attributes:
Resulting coverage:
https://app.codecov.io/gh/clechasseur/mini_exercism/commit/c4d0075a4dad0b6c9d1eb5efdcd10f36d560d7fa/blob/src/api/v1.rs#L21
Seems like `tarpaulin` believes the `#[derive(Debug)]` is a line to be covered.
EDIT: I tried adding a test that actually uses the implementation of `Debug` being generated and it does solve the issue (i.e. the line is now considered covered). It's weird however because I have lots of other `#[derive(Debug)]` on custom types not defined through macros that do not need to be covered.
Fails to record coverage properly when functions are annotated with `tracing::instrument` from tokio's tracing crate.
and
For tracing, we found that it helps a lot to set `RUST_LOG=trace` (assuming `tracing-subscriber`'s `EnvFilter` is used), otherwise expressions inside of `event!` / `trace!` / `info!` and so on don't get evaluated, and tarpaulin correctly shows them as not covered.
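A hypothetical example of that effect (names are made up): the field expression below is only evaluated when the `info` level is enabled by a subscriber/filter, so without `RUST_LOG=trace` (or another filter enabling it) the line is genuinely never executed:

```rust
fn checksum(data: &[u8]) -> u64 {
    data.iter().map(|&b| b as u64).sum()
}

fn process(data: &[u8]) {
    // If no subscriber enables this event, `checksum(data)` is never evaluated,
    // so tarpaulin correctly reports the line as not covered.
    tracing::info!(checksum = checksum(data), "processing chunk");
}
```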
I'm also seeing this behaviour. Also break statements and enum match cases are uncovered.
@eloff It might be a bit late, but was this in generic code by any chance? I have both break statements and match cases not covered in generic code.
@jplatte Is your example from generic code?
I concur with @tokcum and @blacktemplar and I think that all these might have the same root cause. Maybe discussion about this case should be regrouped in #1078.
I included a link with my example. It's in an impl block with lifetime generics, no type generics are involved.
I included a link with my example. It's in an impl block with lifetime generics, no type generics are involved.
Sorry, I had missed that. It seems like lifetime generics are enough to trigger the behavior I have observed:
```rust
struct S1 { v: Option<i32>, }

fn f1<'a>() {
    let s = S1 { v: Some(0) };
    Box::new(S1 {
        v: s
            .v
            .map(|v| 42),
    });
}

#[test]
fn test1() { f1(); }

struct S2 { u: i32, }

fn f2<'a>() {
    Box::new(S2 {
        u: 0,
    });
}

#[test]
fn test2() { f2(); }

fn f3<'a>() {
    Some(0)
        .map(
            |
            v
            |
            42
        );
}

#[test]
fn test3() { f3(); }
```
gets me
One silly, yet super important gotcha regarding accuracy: make sure you have not added

```toml
[lib]
test = false
```

to your `Cargo.toml` (and forgotten about it). I had accidentally added this and forgotten about it, and it resulted in my code coverage dropping from 90% to 60% (in a multi-crate repo).
@xd009642 I use UniFFI and it is heavily macro based; a lot of its macro usages result in Tarpaulin flagging the line where I use the macro as uncovered (they are covered, but I guess Tarpaulin does not know about it).
Also, when I have declared my own macros, the `Eq` derivation is often missed:
https://app.codecov.io/gh/radixdlt/sargon/blob/develop/src%2Fprofile%2Fv100%2Fnetworks%2Fnetwork%2Fauthorized_dapp%2Fshared_with_dapp.rs#L16
I seem to get missing coverage when a chain of methods is called: maybe because they're being optimised out before the final code? https://app.codecov.io/gh/KGrewal1/optimisers/blob/master/src%2Flbfgs.rs
The missing coverage on struct field instantiation on line 146 definitely doesn't seem correct, however.
Also some issues where every line is covered bar a final call to `collect`: https://app.codecov.io/gh/KGrewal1/optimisers/blob/master/src%2Fadam.rs
Hi, nice work with this project!
I believe there's missing coverage when using `include-tests` on `#[tokio::test]`. One example of it not being covered is here, in which our GHA workflow uses `include-tests = true` here.
The `#[test]`s are covered fine, it just appears to be `#[tokio::test]` that's not being included.
Edit: resolved #1503. tyty!
I went through the codebase and it looks like this line https://github.com/xd009642/tarpaulin/blob/develop/src/source_analysis/items.rs#L106 doesn't return true for `#[tokio::test]`, since it's not considered an ident when the number of path segments is greater than 1.
I'm getting a lot of seemingly random misses in match statements:
CodeCov Link: https://app.codecov.io/gh/AnthonyMichaelTDM/mecomp/commit/0ce9868c0a0eae2458b5202e506c1250e32956b8/blob/one-or-many/src/lib.rs
Screenshot of (some of the) coverage results of interest:
The test(s) that should result in those lines being covered:
```rust
#[cfg(test)]
mod tests {
    use super::*;
    use pretty_assertions::{assert_eq, assert_ne};
    use rstest::rstest;

    ...

    #[rstest]
    #[case::none(OneOrMany::<usize>::None, OneOrMany::<usize>::None, Some(std::cmp::Ordering::Equal))]
    #[case::none(OneOrMany::<usize>::None, OneOrMany::One(1), Some(std::cmp::Ordering::Less))]
    #[case::none(OneOrMany::<usize>::None, OneOrMany::Many(vec![1, 2, 3]), Some(std::cmp::Ordering::Less))]
    #[case::one(OneOrMany::One(1), OneOrMany::<usize>::None, Some(std::cmp::Ordering::Greater))]
    #[case::one(OneOrMany::One(1), OneOrMany::One(1), Some(std::cmp::Ordering::Equal))]
    #[case::one(OneOrMany::One(1), OneOrMany::One(2), Some(std::cmp::Ordering::Less))]
    #[case::one(
        OneOrMany::One(1),
        OneOrMany::One(0),
        Some(std::cmp::Ordering::Greater)
    )]
    #[case::one(OneOrMany::One(1), OneOrMany::Many(vec![1, 2, 3]), Some(std::cmp::Ordering::Less))]
    #[case::many(OneOrMany::Many(vec![1, 2, 3]), OneOrMany::<usize>::None, Some(std::cmp::Ordering::Greater))]
    #[case::many(OneOrMany::Many(vec![1, 2, 3]), OneOrMany::One(1), Some(std::cmp::Ordering::Greater))]
    #[case::many(OneOrMany::Many(vec![1, 2, 3]), OneOrMany::Many(vec![1, 2, 3]), Some(std::cmp::Ordering::Equal))]
    #[case::many(OneOrMany::Many(vec![1, 2, 3]), OneOrMany::Many(vec![2, 3]), Some(std::cmp::Ordering::Less))]
    #[case::many(OneOrMany::Many(vec![1, 2, 3]), OneOrMany::Many(vec![1, 2, 3, 4]), Some(std::cmp::Ordering::Less))]
    fn test_partial_cmp<T>(
        #[case] input: OneOrMany<T>,
        #[case] other: OneOrMany<T>,
        #[case] expected: Option<std::cmp::Ordering>,
    ) where
        T: std::fmt::Debug + PartialOrd,
    {
        let actual = input.partial_cmp(&other);
        assert_eq!(actual, expected);
    }

    #[rstest]
    #[case::none(OneOrMany::<usize>::None, OneOrMany::<usize>::None, std::cmp::Ordering::Equal)]
    #[case::none(OneOrMany::<usize>::None, OneOrMany::One(1), std::cmp::Ordering::Less)]
    #[case::none(OneOrMany::<usize>::None, OneOrMany::Many(vec![1, 2, 3]), std::cmp::Ordering::Less)]
    #[case::one(OneOrMany::One(1), OneOrMany::<usize>::None, std::cmp::Ordering::Greater)]
    #[case::one(OneOrMany::One(1), OneOrMany::One(1), std::cmp::Ordering::Equal)]
    #[case::one(OneOrMany::One(1), OneOrMany::One(2), std::cmp::Ordering::Less)]
    #[case::one(OneOrMany::One(1), OneOrMany::One(0), std::cmp::Ordering::Greater)]
    #[case::one(OneOrMany::One(1), OneOrMany::Many(vec![1, 2, 3]), std::cmp::Ordering::Less)]
    #[case::many(OneOrMany::Many(vec![1, 2, 3]), OneOrMany::<usize>::None, std::cmp::Ordering::Greater)]
    #[case::many(OneOrMany::Many(vec![1, 2, 3]), OneOrMany::One(1), std::cmp::Ordering::Greater)]
    #[case::many(OneOrMany::Many(vec![1, 2, 3]), OneOrMany::Many(vec![1, 2, 3]), std::cmp::Ordering::Equal)]
    #[case::many(OneOrMany::Many(vec![1, 2, 3]), OneOrMany::Many(vec![2, 3]), std::cmp::Ordering::Less)]
    #[case::many(OneOrMany::Many(vec![1, 2, 3]), OneOrMany::Many(vec![1, 2, 3, 4]), std::cmp::Ordering::Less)]
    fn test_cmp<T>(
        #[case] input: OneOrMany<T>,
        #[case] other: OneOrMany<T>,
        #[case] expected: std::cmp::Ordering,
    ) where
        T: std::fmt::Debug + Ord,
    {
        let actual = input.cmp(&other);
        assert_eq!(actual, expected);
    }

    ...
}
```
some additional context:
```rust
#[derive(Clone, Debug, PartialEq, Eq, Default)]
pub enum OneOrMany<T> {
    One(T),
    Many(Vec<T>),
    #[default]
    None,
}
```
huh, changing my compilation profile to use opt-level 0 instead of opt-level 1 seems to have fixed it. nevermind i guess?
Is there more to getting the tracing `event!` macro to be covered than setting `RUST_LOG=trace`? I have done that and I am not getting the `event!` macro lines covered.
You need an actual tracing subscriber registered; I typically use the `tracing_test` crate to do this.
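For instance (assuming `tracing_test` is a dev-dependency; this is a sketch of its usual usage, not code from this thread):

```rust
use tracing_test::traced_test;

// `#[traced_test]` registers a capturing subscriber for the duration of the test,
// so event arguments are actually evaluated and can show up as covered.
#[traced_test]
#[test]
fn emits_events() {
    tracing::info!("this event now has a subscriber, so its arguments are evaluated");
}
```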
I think I was confusing logging and tracing - I switched to a simpler logging interface using env_logger and I achieved all of the coverage that I required. Thanks for the help.
It seems that `const fn`s are flagged as not covered if they are only used to initialise constants or statics.
```rust
static GLOBAL_ONE_PLUS_ONE_STATIC: u64 = add(1, 1);
const GLOBAL_ONE_PLUS_ONE_CONST: u64 = add(1, 1);

pub const fn add(left: u64, right: u64) -> u64 {
    const _UNUSED: u64 = 70;
    left + right
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn it_works() {
        assert_eq!(GLOBAL_ONE_PLUS_ONE_STATIC, 2);
        assert_eq!(GLOBAL_ONE_PLUS_ONE_CONST, 2);

        static ONE_PLUS_ONE_STATIC: u64 = add(1, 1);
        assert_eq!(ONE_PLUS_ONE_STATIC, 2);

        const ONE_PLUS_ONE_CONST: u64 = add(1, 1);
        assert_eq!(ONE_PLUS_ONE_CONST, 2);
    }
}
```
```
|| Uncovered Lines:
|| src/lib.rs: 4, 6
|| Tested/Total Lines:
|| src/lib.rs: 0/2 +0.00%
||
0.00% coverage, 0/2 lines covered, +0.00% change in coverage
```
Appending `let _unused = add(1, 1);` brings the coverage to 100%.
that's probably because the values would be computed at compile time
Yeah, values computed at compile time aren't evaluated at test time, so they're not covered. `const fn`s can be runtime or compile time, so I can't just filter them out.
Hmmm, I'd imagine the way it would ideally work then is to have the compiler increment the counters. Otherwise it would be difficult to filter these constants depending on their call sites.
`const` evaluation happens around MIR time, long before (IIUC) `tarpaulin` adds any counters to the pipeline (closer to the LLVM-IR stage).