Fix SB UB (without breaking MSRV) #80

Closed · wants to merge 5 commits
Changes from 4 commits
3 changes: 1 addition & 2 deletions .github/workflows/test.yaml
@@ -231,8 +231,7 @@ jobs:
- name: Run miri
env:
PROPTEST_CASES: "10"
-          # TODO: Do something with the stacked borrows. Figure out what it means.
-          MIRIFLAGS: "-Zmiri-disable-isolation -Zmiri-disable-stacked-borrows"
+          MIRIFLAGS: "-Zmiri-disable-isolation -Zmiri-permissive-provenance"
run: cargo miri test --all-features

thread_sanitizer-MacOS:
13 changes: 10 additions & 3 deletions src/lib.rs
@@ -883,7 +883,7 @@ macro_rules! t {
const WRITER_CNT: usize = 2;
const READER_CNT: usize = 3;
#[cfg(miri)]
-        const ITERATIONS: usize = 10;
+        const ITERATIONS: usize = 5;
#[cfg(not(miri))]
const ITERATIONS: usize = 100;
const SEQ: usize = 50;
@@ -1160,6 +1160,9 @@ macro_rules! t {
#[test]
/// Make sure the reference count and compare_and_swap works as expected.
fn cas_ref_cnt() {
+            #[cfg(miri)]
+            const ITERATIONS: usize = 10;
+            #[cfg(not(miri))]
const ITERATIONS: usize = 50;
let shared = ArcSwap::from(Arc::new(0));
for i in 0..ITERATIONS {
@@ -1173,7 +1176,7 @@ macro_rules! t {
// Fill up the slots sometimes
let fillup = || {
if i % 2 == 0 {
-                    Some((0..50).map(|_| shared.load()).collect::<Vec<_>>())
+                    Some((0..ITERATIONS).map(|_| shared.load()).collect::<Vec<_>>())
} else {
None
}
@@ -1272,10 +1275,14 @@ mod tests {
/// created, but contain full Arcs.
#[test]
fn lease_overflow() {
+        #[cfg(miri)]
+        const GUARD_COUNT: usize = 100;
+        #[cfg(not(miri))]
+        const GUARD_COUNT: usize = 1000;
let a = Arc::new(0);
let shared = ArcSwap::from(Arc::clone(&a));
assert_eq!(2, Arc::strong_count(&a));
-        let mut guards = (0..1000).map(|_| shared.load()).collect::<Vec<_>>();
+        let mut guards = (0..GUARD_COUNT).map(|_| shared.load()).collect::<Vec<_>>();
let count = Arc::strong_count(&a);
assert!(count > 2);
let guard = shared.load();
14 changes: 12 additions & 2 deletions src/ref_cnt.rs
@@ -92,7 +92,12 @@ unsafe impl<T> RefCnt for Arc<T> {
Arc::into_raw(me) as *mut T
}
fn as_ptr(me: &Arc<T>) -> *mut T {
-        me as &T as *const T as *mut T
+        // SAFETY: The refcount will be incremented then decremented
+        // so we force the compiler to remove the overflow check
Owner:
I don't believe this is actually true/safe.

  1. Ever reaching unreachable_unchecked is explicitly UB. So if the strong_count is at the max value, this simply invokes UB.
  2. Even if it "just" turned off the check inside Arc, this would not be safe: the count could temporarily overflow to 0 and, in some other thread, be incremented twice, decremented once, reach 0 again there, and get deallocated.

We could use the approach without that optimization and measure it to see whether there's any actual change in speed. That would be the more elegant way, and I suspect there may be no measurable difference.

If it turns out to be expensive, we could use a different trick:

  1. Make a "hollow" copy of the Arc, one that does not own its ref count. It could be done with e.g. std::ptr::read.
  2. Call Arc::into_raw on that one.
  3. Turn it back into the hollow copy of the Arc with Arc::from_raw.
  4. mem::forget it, so we get rid of it without calling its destructor.
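
The four steps above could be sketched roughly as follows (as_ptr_hollow is a hypothetical name for illustration; this mirrors the reviewer's fallback idea, not the merged code):

```rust
use std::sync::Arc;

// Hypothetical sketch of the "hollow copy" trick from the steps above.
// The refcount is never touched: the bitwise copy does not own a reference,
// and the into_raw/from_raw pair plus forget keeps the API contract balanced.
fn as_ptr_hollow<T>(me: &Arc<T>) -> *mut T {
    // 1. Bitwise copy that does not own its ref count.
    let hollow = unsafe { std::ptr::read(me) };
    // 2. Extract the raw pointer; into_raw consumes the copy without dropping it.
    let ptr = Arc::into_raw(hollow);
    // 3. Pair the into_raw with a from_raw, as the documentation requires...
    let hollow = unsafe { Arc::from_raw(ptr) };
    // 4. ...and forget the hollow copy so its destructor never runs.
    std::mem::forget(hollow);
    ptr as *mut T
}

fn main() {
    let a = Arc::new(42);
    let p = as_ptr_hollow(&a);
    // The owning Arc is untouched: still exactly one strong reference.
    assert_eq!(Arc::strong_count(&a), 1);
    assert_eq!(unsafe { *p }, 42);
}
```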

Contributor Author:
That's very true - I must admit I only considered Rc and pasted the working code for Arc. I believe

fn as_ptr(me: &Arc<T>) -> *mut T {
    let ptr = Arc::into_raw(unsafe { std::ptr::read(me) } );
    ptr as *mut T
}

should be safe and work for both variants. Neither ptr::read nor into_raw will affect the refcount (ptr::read is a bitwise copy and into_raw calls mem::forget to avoid running the destructor).
The safety is much easier to verify here: we are calling ptr::read on a *const T derived from an &T, which is guaranteed valid.

Owner:
The previous version wasn't safe even for Rc, because you could reach the unreachable_unchecked. The first point applied to Rc as well.

I know that read is just a bitwise copy and that into_raw currently calls forget. In the current version, these are enough. Nevertheless, that relies on knowledge of the internal implementation, which may change; these properties are not guaranteed. The documented contract is only that into_raw shall be paired with from_raw, and the read shall then be paired with forget. So it's about future-proofing against a possible change in the standard library.

+        if Arc::strong_count(me) == usize::max_value() { unsafe { std::hint::unreachable_unchecked(); } }
+        let ptr = Arc::into_raw(Arc::clone(me));
+        let _ = unsafe { Arc::from_raw(ptr) };
+        ptr as *mut T
}
unsafe fn from_ptr(ptr: *const T) -> Arc<T> {
Arc::from_raw(ptr)
@@ -105,7 +110,12 @@ unsafe impl<T> RefCnt for Rc<T> {
Rc::into_raw(me) as *mut T
}
fn as_ptr(me: &Rc<T>) -> *mut T {
-        me as &T as *const T as *mut T
+        // SAFETY: The refcount will be incremented then decremented
+        // so we force the compiler to remove the overflow check
+        if Rc::strong_count(me) == usize::max_value() { unsafe { std::hint::unreachable_unchecked(); } }
+        let ptr = Rc::into_raw(Rc::clone(me));
+        let _ = unsafe { Rc::from_raw(ptr) };
+        ptr as *mut T
}
unsafe fn from_ptr(ptr: *const T) -> Rc<T> {
Rc::from_raw(ptr)
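
As a standalone sanity check (editor's sketch, not part of the PR), the clone/into_raw/from_raw dance used in the new as_ptr can be verified to be refcount-neutral; the contested unreachable_unchecked hint is left out here:

```rust
use std::sync::Arc;

// Sanity check: the clone temporarily bumps the strong count, and the
// Arc reconstructed by from_raw is immediately dropped, decrementing it
// again, so the net change observed by the caller is zero.
fn as_ptr_arc<T>(me: &Arc<T>) -> *mut T {
    let ptr = Arc::into_raw(Arc::clone(me));
    let _ = unsafe { Arc::from_raw(ptr) };
    ptr as *mut T
}

fn main() {
    let a = Arc::new(7);
    let before = Arc::strong_count(&a);
    let p = as_ptr_arc(&a);
    assert_eq!(Arc::strong_count(&a), before); // net refcount change is zero
    assert_eq!(unsafe { *p }, 7);              // pointer targets the payload
}
```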