
MIRI detects leaks even when using temporary pool #1072

Open

AngelicosPhosphoros opened this issue Jul 9, 2023 · 5 comments

Comments

@AngelicosPhosphoros

rayon version

rayon = "1.7.0"

Rust version

rustc +nightly --version --verbose
rustc 1.72.0-nightly (871b59520 2023-05-31)
binary: rustc
commit-hash: 871b5952023139738f72eba235063575062bc2e9
commit-date: 2023-05-31
host: x86_64-pc-windows-msvc
release: 1.72.0-nightly
LLVM version: 16.0.4

Code

use rayon::prelude::*;

/// Uses a temporary thread pool to ensure that rayon stops its threads,
/// so MIRI doesn't think there are leaks.
pub fn run_in_rayon<F>(op: F)
where
    F: Send + FnOnce(),
{
    rayon::ThreadPoolBuilder::new()
        // build_scoped is the only way to ensure that the rayon worker threads have finished
        .build_scoped(rayon::ThreadBuilder::run, |pool| pool.install(op))
        .unwrap();
}

#[test]
fn test_1() {
    let data = vec![1; 50000];
    run_in_rayon(|| {
        let s: i32 = data.par_chunks(500)
            .map(|x| -> i32 { x.iter().sum() })
            .sum();
        assert_eq!(s, 50000);
    });
}

When running it with the command cargo +nightly miri test, MIRI reports memory leaks somewhere in rayon or crossbeam; see the attached log.

I previously had similar trouble with the global thread pool, so I created the run_in_rayon wrapper to ensure that the threads created by rayon aren't leaked and don't cause issues with MIRI, but it no longer works.

@AngelicosPhosphoros
Author

Here is the log from the MIRI execution.
out.txt

@AngelicosPhosphoros
Author

@cuviper
Member

cuviper commented Jul 11, 2023

I previously had similar trouble with the global thread pool, so I created the run_in_rayon wrapper to ensure that the threads created by rayon aren't leaked and don't cause issues with MIRI, but it no longer works.

The allocations all look like crossbeam's own epoch-tracking stuff. Did the versions in your lockfile change from when it used to work?
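
As a general way to answer that question (this invocation is not from the thread, but is standard cargo), the reverse-dependency view shows which crates pull in crossbeam-epoch and which version the lockfile resolved:

cargo tree -i crossbeam-epoch

Comparing that output between the old and new Cargo.lock should show whether the crossbeam crates moved along with the rayon update.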

@AngelicosPhosphoros
Author

Well, I updated my Cargo.toml, so it did.

This wrapper works when using rayon 1.6 and fails when using rayon 1.7.
You can test it even using playground: https://play.integer32.com/?version=nightly&mode=debug&edition=2021&gist=3afeedbcaf62138518fa6d3b327595dd

It could be a bug in crossbeam, but I don't use it directly (it is only a transitive dependency via rayon), so I don't know where to dig further.
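
A quick way to reproduce that comparison locally is to pin the dependency in Cargo.toml to the older line (a sketch, assuming 1.6.1 is the latest 1.6.x release):

rayon = "=1.6.1"

and then re-run cargo +nightly miri test to confirm the leak report only appears with the 1.7 line.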

@cuviper
Member

cuviper commented Jul 18, 2023

The version comment on the playground does nothing -- it's always whatever is prebuilt in the playground sysroot, which will be the latest release (possibly lagging brand new releases). I ran miri at your link, and it timed out the first time, then reported leaks the second time.

When I run your reproducer locally under valgrind, it says the memory is still reachable -- I suspect it's the crossbeam-epoch HANDLE in TLS. That should be running drops though, so I'm not sure.
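
A sketch of that valgrind check (the test binary name and hash below are placeholders, and this assumes a Linux host with valgrind installed): build the tests without running them, then invoke the test binary directly:

cargo test --no-run
valgrind --leak-check=full target/debug/deps/<crate_name>-<hash> test_1

This should reproduce the "still reachable" report described above.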
