
Track-caller panics #975

Merged
merged 1 commit into rust-ndarray:master on Mar 10, 2024

Conversation

@xd009642 (Contributor) commented Apr 7, 2021

Add `#[track_caller]` to any function, method, or trait item that has a documented "Panics" section. I also ran cargo fmt to tidy up my semi-automatic track_caller labelling, so there may be other changes, but they'll just be formatting.

Still need to assess the performance impact of all these track callers!

Fixes #972
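For context, a minimal standalone sketch (a hypothetical example, not ndarray's actual code) of what `#[track_caller]` does: a panic inside the annotated function is attributed to the caller's call site rather than to the library internals.

```rust
// Hypothetical example (not ndarray code): #[track_caller] makes a
// panic inside this function point at the *caller's* line, which is
// what you want for public APIs with a documented "Panics" section.
#[track_caller]
fn checked_get(data: &[i32], i: usize) -> i32 {
    // Panics: if `i` is out of bounds.
    if i >= data.len() {
        panic!("index {} out of bounds (len {})", i, data.len());
    }
    data[i]
}

fn main() {
    let v = [1, 2, 3];
    // A panicking call here would be reported at this line in main,
    // not at the panic! inside checked_get.
    println!("{}", checked_get(&v, 1)); // prints "2"
}
```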

@xd009642 (Contributor, Author) commented Apr 7, 2021

So I tried the benchmarks in the repo just to see if there were any glaring issues, but the results seemed very noisy: an even mix of faster than master and slower than master. I'll create some more directed benchmarks and try to figure out what the impact is.

Review comments on src/arraytraits.rs and ndarray-rand/src/lib.rs (outdated, resolved).
@bluss (Member) commented Apr 7, 2021

If we can find a non-microbenchmark, like the one mentioned in #972.

Review comment on src/arraytraits.rs (outdated, resolved).
@xd009642 (Contributor, Author) commented Apr 7, 2021

Okay so I created this microbenchmark of slicing:

use criterion::{black_box, criterion_group, criterion_main, Criterion};
use ndarray::{s, Array3};
use ndarray_rand::RandomExt;
use ndarray_rand::rand_distr::Uniform;

fn slice_benchmark(c: &mut Criterion) {
    let data: Array3<f64> = Array3::random((64, 64, 64), Uniform::new(0., 10.));

    c.bench_function("slice", |b| {
        b.iter(|| {
            data.slice(black_box(s![.., 24..50, ..])).shape();
        })
    });
}

criterion_group!(benches, slice_benchmark);
criterion_main!(benches);

Results:

0.15.0
slice
time: [34.281 ns 34.723 ns 35.189 ns]
Found 9 outliers among 100 measurements (9.00%)
4 (4.00%) high mild
5 (5.00%) high severe

This PR
slice
time: [32.815 ns 33.108 ns 33.458 ns]
change: [-8.6198% -5.7554% -3.0651%] (p = 0.00 < 0.05)
Performance has improved.
Found 5 outliers among 100 measurements (5.00%)
3 (3.00%) high mild
2 (2.00%) high severe

Results folder here:

criterion_results.zip

@xd009642 (Contributor, Author) commented Apr 7, 2021

Personally, I reckon the shift in results between the two runs is probably down to something else scheduled by my OS taking some cycles away; just from this benchmark, the two versions can be considered equivalent in performance.

Whether this microbenchmark is good enough to check the impact of track caller is down to someone more familiar with ndarray internals 🙂

@xd009642 xd009642 marked this pull request as ready for review April 7, 2021 17:59
@xd009642 (Contributor, Author):

Any thoughts about the benchmark, @bluss?

@bluss (Member) commented Apr 12, 2021

The maintainers have asked for judicious use of the track_caller attribute, and applying it everywhere is funny to me 🙂 because it's basically the dictionary opposite of judicious.

Maintainers have asked for a holistic benchmark - maybe a real program that uses ndarray for a lot of varied operations.

I hope that explains what I mean. If the contributor doesn't want to be judicious, then the maintainer instead has to judge line by line whether the additions make sense, and I don't have the investment to do that at the moment, sorry. In short, this PR is stalling a bit, since it seems to go against every instruction I have given to try to guide its resolution.

@bluss (Member) commented Apr 12, 2021

A nice GitHub feature in general: please add a line like "Fixes #972" to the PR description when making a change that closes an issue; the issue is then closed automatically on merge, which is the nice thing. PR titles and descriptions should also be updated to reflect the current state of the PR before merge. (That might be moot here, but using this tip makes you a better PR author overall.)

@xd009642 (Contributor, Author):

So I didn't apply it everywhere; the PR title is a bit inaccurate in that sense. I only applied it to public API functions, figuring that only panics which can be caused directly by user code should be tracked. But I'll go through them and aim to reduce the scope further.

Oh, I misread "non-microbenchmark" as "microbenchmark"; sorry, that one's on me. I was looking at projects like linfa to see if I could find bigger example projects that use some of the methods that panic, but I didn't find anything that looked useful. I'll go back to that.
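On scoping it to the public API: one reason that can work is that `#[track_caller]` propagates through chains of annotated functions, while `std::panic::Location::caller()` resolves to the nearest un-annotated frame. A small sketch with hypothetical functions (not ndarray code):

```rust
use std::panic::Location;

// Hypothetical helpers, not ndarray code.
#[track_caller]
fn report_site() -> &'static Location<'static> {
    // With #[track_caller], this is the caller's location, not this line.
    Location::caller()
}

#[track_caller]
fn public_entry() -> &'static Location<'static> {
    // Because public_entry is also annotated, the location propagates
    // another level up, to whoever called public_entry.
    report_site()
}

fn main() {
    let loc = public_entry();
    println!("a panic here would be attributed to {}:{}", loc.file(), loc.line());
}
```

So annotating the public entry points (and any annotated internal helpers they call) is enough to attribute a panic to user code; purely internal functions don't need the attribute for that purpose.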

@xd009642 xd009642 changed the title Track-caller all the panics Track-caller panics Apr 13, 2021
@bluss bluss added this to the 0.16.0 milestone Mar 10, 2024
@bluss (Member) commented Mar 10, 2024

Updated to master and conflicts resolved, I hope.

@bluss bluss added this pull request to the merge queue Mar 10, 2024
@bluss bluss removed this pull request from the merge queue due to a manual request Mar 10, 2024
@bluss (Member) commented Mar 10, 2024

Squashed to clean up history. Current merge queue config will create a merge commit at the end too.

@bluss (Member) commented Mar 10, 2024

Thanks, sorry for the delay.

@bluss bluss enabled auto-merge March 10, 2024 14:05
@bluss bluss added this pull request to the merge queue Mar 10, 2024
Merged via the queue into rust-ndarray:master with commit 5465bc4 Mar 10, 2024
8 checks passed
Successfully merging this pull request may close these issues.

Add #[track_caller] judiciously