
html report differs from lcov report #1103

Open
jmfrank63 opened this issue Aug 31, 2023 · 2 comments

Comments

jmfrank63 commented Aug 31, 2023

First, thank you for this great piece of software.

I am currently diving deeper into code coverage and set up grcov for a sample project, which can be found here:

inst-decoding-8086

GitHub Actions works fine with codecov.io, and locally lcov gave me all the hints I needed to achieve 100% coverage.
I call grcov like this:

cargo fmt
cargo clean
export RUSTFLAGS="-Cinstrument-coverage"
export LLVM_PROFILE_FILE="inst-decoding-8086-%p-%m.profraw"
cargo test
export CARGO_INCREMENTAL=0
export RUSTFLAGS="-Zprofile -Ccodegen-units=1 -Copt-level=0 -Clink-dead-code -Coverflow-checks=off -Zpanic_abort_tests -Cpanic=abort"
export RUSTDOCFLAGS="-Cpanic=abort"
grcov . --binary-path ./target/debug/ -s . -t html --branch --llvm --ignore-not-existing --ignore "/*" --ignore "../**" -o ./target/debug/coverage/
grcov . --binary-path ./target/debug/ -s . -t lcov --branch --llvm --ignore-not-existing --ignore "/*" --ignore "../**" -o ./target/debug/lcov.info
genhtml -o ./target/debug/lcov --show-details --highlight --ignore-errors source  --ignore-errors unmapped,unmapped --legend ./target/debug/lcov.info

However, despite lcov reporting 100%, the html report is still missing function calls. I dug deeper into this and present one of the examples here; I am not certain about it, but it seems strange. This is the code in question:

#[derive(Debug, PartialEq)]
pub enum X86InstructionError {
    InvalidInstruction,
    InvalidRegister,
}

impl fmt::Display for X86InstructionError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result { // <-- line 20 (see below)
        write!(f, "{:?}", self)
    }
}

impl Error for X86InstructionError {}

impl From<X86InstructionError> for io::Error {
    fn from(error: X86InstructionError) -> Self {
        io::Error::new(io::ErrorKind::Other, error)
    }
}

Though the following tests are not useful, as they test something that has already been tested, they provide some good insights:

#[test]
fn test_from_trait_implementation() {
    let x86_error = X86InstructionError::InvalidInstruction;
    let io_error = io::Error::from(x86_error);
    assert_eq!(io_error.kind(), io::ErrorKind::Other);
    let inner = io_error.get_ref().unwrap();
    let inner_downcasted = inner.downcast_ref::<X86InstructionError>().unwrap();
    assert_eq!(inner_downcasted, &X86InstructionError::InvalidInstruction);
}

#[test]
fn test_into_trait_implementation() {
    let x86_error = X86InstructionError::InvalidRegister;
    let io_error: io::Error = x86_error.into();
    assert_eq!(io_error.kind(), io::ErrorKind::Other);
    let inner = io_error.get_ref().unwrap();
    let inner_downcasted = inner.downcast_ref::<X86InstructionError>().unwrap();
    assert_eq!(inner_downcasted, &X86InstructionError::InvalidRegister);
}

Normally, into() should suffice, but to be absolutely sure, I used both into() and from().
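The reason into() suffices once From is implemented is the blanket impl in core, `impl<T, U> Into<U> for T where U: From<T>`. A minimal, self-contained sketch (reusing the error type from above) that implements only From and then calls .into():

```rust
use std::error::Error;
use std::fmt;
use std::io;

#[derive(Debug, PartialEq)]
pub enum X86InstructionError {
    InvalidInstruction,
    InvalidRegister,
}

impl fmt::Display for X86InstructionError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "{:?}", self)
    }
}

impl Error for X86InstructionError {}

// Only From is implemented here; Into<io::Error> comes for free via the
// blanket `impl<T, U> Into<U> for T where U: From<T>` in core.
impl From<X86InstructionError> for io::Error {
    fn from(error: X86InstructionError) -> Self {
        io::Error::new(io::ErrorKind::Other, error)
    }
}

fn main() {
    // .into() resolves to the blanket impl, which forwards to From::from,
    // so both tests above exercise exactly the same code path.
    let io_error: io::Error = X86InstructionError::InvalidRegister.into();
    assert_eq!(io_error.kind(), io::ErrorKind::Other);
    println!("into() went through From::from");
}
```

So from a coverage perspective, the two tests can only ever hit the single From::from implementation.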

I forked the project and modified html.rs to dump the content of result.functions (fn get_stats, line 182) in order to get the entry for line 20 (see above). It contains

"_RNvXs0_NtNtCs2CSrhgX8DPy_18inst_decoding_808615instruction_set6errorsNtNtNtCs6hP5VPn6DF_3std2io5error5ErrorINtNtCs98dpJn7v43e_4core7convert4FromNtB5_19X86InstructionErrorE4from": Function { start: 20, executed: false }

but also

"_RNvXs0_NtNtCs4HZ39rGBqnJ_18inst_decoding_808615instruction_set6errorsNtNtNtCs6hP5VPn6DF_3std2io5error5ErrorINtNtCs98dpJn7v43e_4core7convert4FromNtB5_19X86InstructionErrorE4from": Function { start: 20, executed: true }

Since I used both of my error variants, I can rule out monomorphization as the cause.

Before I dive into demangling and look deeper into this, I'd like some guidance from folks with more experience than my one hour of code reading and debugging.

The questions that arise here:

Why does the html report differ from the lcov report? Is there a natural explanation, or is this simply a bug?

Having already put some work in here, I also wanted to ask if there would be interest in making the html report a little more interactive, like lcov. I think it might be helpful if the numbers of hit and missed functions were links pointing to the source lines of the functions in question.

If I tried to implement this, would such a change be considered for merging (provided the general guidelines are followed)?
Also, if I missed any information, feel free to ask for more.

@jmfrank63 changed the title from "html report differs from clov report" to "html report differs from lcov report" on Aug 31, 2023
jmfrank63 commented Sep 2, 2023

I did more research into this, and it seems to be a bug. Handling profdata directly produces a different result than going through lcov. Also, if both profdata and an lcov file are present, grcov seems to produce a mixture of the two, which differs from both of the other reports.
Finally, the workaround is to produce an lcov report, delete the profdata (*.profraw) files, and then create the html report; this produces the same result as lcov. However, this is rather pointless, as the genhtml report is so much more detailed.
IMHO, the goal should be to produce the same data as lcov, with the same level of detail, by looking only at the profdata, without lcov. I might look into this, but first I'll wait for some feedback on this issue.

@busticated

I just ran into this one as well. A number of the workarounds suggested over in #556 involve invoking lcov directly, which works, but sadly produces an inaccurate summary.
