
Chart throughput on parameterized benchmark page by default? #9

Open
antifuchs opened this issue May 28, 2018 · 4 comments · May be fixed by bheisler/criterion.rs#646

Comments

antifuchs commented May 28, 2018

I am using Criterion to benchmark a function that operates on a vector of elements, using something like:

const ALL: &'static [Width; 4] = &[Width::U8, Width::U16, Width::U32, Width::U64];
const ELTS: usize = 20;

fn encode_multiple_benchmark(c: &mut Criterion) {
    let id = "encode_multiple";
    let bm = ParameterizedBenchmark::new(
        id,
        |b, ref n| {
            b.iter(|| {
                let v = vec![n.sample() as u64; ELTS];
                v.fib_encode().expect("should be encodable")
            })
        },
        ALL,
    ).throughput(|_s| Throughput::Elements(ELTS as u32));
    c.bench(id, bm);
}

The benchmark overview page shows the duration that each iteration took, but that number is a bit useless on its own. The thing I'm really interested in over time is the throughput of that function, which is only given under "Additional Statistics" on the details page.

It would be really nice if the benchmark was configurable to show that throughput on the parameterized benchmark's overview page by default.

antifuchs changed the title from "C" to "Chart throughput on benchmark page by default?" May 28, 2018
antifuchs changed the title from "Chart throughput on benchmark page by default?" to "Chart throughput on parameterized benchmark page by default?" May 28, 2018
bheisler (Owner) commented Jun 1, 2018

Hey, thanks for trying Criterion.rs, and thanks for the suggestion.

Yeah, that's reasonable. We could probably skip adding configuration (at least for now) and assume that anyone who configures a throughput metric on their benchmarks at all is probably more interested in the throughput than the execution time.

I probably won't get around to implementing this right away, but pull requests would be welcome.

gz commented Aug 9, 2019

I'd be willing to try to tackle this, since I'd like to have this feature too. Could @bheisler give me a pointer to the relevant function(s) I need to modify for this?

bheisler (Owner)

I think you'd need to modify more than one function...

For now, let's scope this to just adding throughput charts to the per-benchmark reports. Reporting throughput on the summary reports raises a lot of complicated questions and edge cases (Would you want to have both throughput and execution time on the summaries? Should the violin plots show throughput instead of execution time? What if some of the benchmarks in a group have no throughput? What if they have different kinds of throughput?).

src/html/mod.rs is the entry point for generating the HTML reports. You probably want to edit measurement_complete to have it generate one or more new plots if there is a throughput (if measurements.throughput.is_some()). You'll need to convert the average iteration times in measurements.avg_times to throughput numbers and define the new plots in src/plot, probably in a new file. Depending on how you want to display these plots, you might also need to update the benchmark_report.html.tt template.
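The conversion step described above boils down to dividing the work done per iteration by the average iteration time. A minimal standalone sketch of that arithmetic (the `Throughput` enum mimics the shape of Criterion.rs's type, but everything else here is illustrative, not the library's actual code):

```rust
// Standalone sketch: convert an average iteration time into a per-second
// throughput figure suitable for plotting. The `Throughput` enum mimics
// Criterion.rs's type; the function itself is illustrative, not library code.

#[derive(Clone, Copy)]
enum Throughput {
    Bytes(u64),
    Elements(u64),
}

/// Units processed per second, given the average time of one iteration
/// in nanoseconds and the configured per-iteration throughput.
fn throughput_per_second(avg_time_ns: f64, tput: Throughput) -> f64 {
    let per_iter = match tput {
        Throughput::Bytes(n) => n as f64,
        Throughput::Elements(n) => n as f64,
    };
    // units per iteration / seconds per iteration
    per_iter / (avg_time_ns * 1e-9)
}

fn main() {
    // 20 elements per iteration at 500 ns/iter -> 40 million elements/sec.
    let eps = throughput_per_second(500.0, Throughput::Elements(20));
    assert!((eps - 4.0e7).abs() < 1e-3);
    println!("{} elements/sec", eps);
}
```

Applying this over each sample in `measurements.avg_times` would yield the series the new plots need.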

This will need to work with the custom measurements feature I've added to 0.3.0, so you'll need to build on top of the v0.3.0-dev branch. That will probably require a breaking change to the ValueFormatter trait, so it would have to be done before I release 0.3.0 or wait for 0.4.0.
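As a hedged sketch of what that breaking change could look like, here is a standalone `ValueFormatter`-style trait where `format_throughput` gets a default implementation in terms of a bulk `scale_throughputs` method (those two method names appear in the commit note later in this thread; the signatures and the `WallTime` impl below are assumptions for illustration, not Criterion.rs's actual API):

```rust
// Hedged sketch of a ValueFormatter-style trait in which format_throughput
// is implemented in terms of scale_throughputs. Signatures and the WallTime
// impl are assumptions for illustration, not Criterion.rs's actual API.

enum Throughput {
    Bytes(u64),
    Elements(u64),
}

trait ValueFormatter {
    /// Scale raw per-second values in place and return the unit label.
    fn scale_throughputs(
        &self,
        typical: f64,
        tput: &Throughput,
        values: &mut [f64],
    ) -> &'static str;

    /// Format a single throughput value via scale_throughputs.
    fn format_throughput(&self, tput: &Throughput, value: f64) -> String {
        let mut values = [value];
        let unit = self.scale_throughputs(value, tput, &mut values);
        format!("{:.2} {}", values[0], unit)
    }
}

struct WallTime;

impl ValueFormatter for WallTime {
    fn scale_throughputs(
        &self,
        _typical: f64,
        tput: &Throughput,
        values: &mut [f64],
    ) -> &'static str {
        // Scale to mega-units per second for readability.
        for v in values.iter_mut() {
            *v /= 1e6;
        }
        match tput {
            Throughput::Bytes(_) => "MB/s",
            Throughput::Elements(_) => "Melem/s",
        }
    }
}

fn main() {
    let f = WallTime;
    let s = f.format_throughput(&Throughput::Elements(20), 4.0e7);
    assert_eq!(s, "40.00 Melem/s");
    println!("{}", s);
}
```

A bulk method like this lets the plotting code scale a whole series once and pick a single consistent unit, which is what per-benchmark throughput charts would need.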

Yeah, this isn't a trivial feature to add, partly because Criterion.rs' internal code isn't as clean as I'd like and partly because it interacts with some other features currently in development.

gz commented Aug 12, 2019

Thanks, that's very helpful. I'll look into these pointers and see what I can come up with on the 0.3 branch.

bheisler referenced this issue in bheisler/criterion.rs Aug 17, 2019
This is mostly future-proofing for #149, but it does allow format_throughput to be implemented in terms of scale_throughputs.
gz referenced this issue in gz/criterion.rs Aug 26, 2019
gz referenced this issue in gz/criterion.rs Sep 23, 2019
@bheisler bheisler transferred this issue from bheisler/criterion.rs Jul 6, 2020