
Fix unsoundness in <BlockRng64 as RngCore>::next_u32 #1159

Closed · wants to merge 1 commit
51 changes: 37 additions & 14 deletions rand_core/src/block.rs
@@ -57,8 +57,7 @@ use crate::impls::{fill_via_u32_chunks, fill_via_u64_chunks};
 use crate::{CryptoRng, Error, RngCore, SeedableRng};
 use core::convert::AsRef;
 use core::fmt;
-#[cfg(feature = "serde1")]
-use serde::{Deserialize, Serialize};
+#[cfg(feature = "serde1")] use serde::{Deserialize, Serialize};

 /// A trait for RNGs which do not generate random numbers individually, but in
 /// blocks (typically `[u32; N]`). This technique is commonly used by
@@ -71,6 +70,12 @@ pub trait BlockRngCore {

     /// Results type. This is the 'block' an RNG implementing `BlockRngCore`
     /// generates, which will usually be an array like `[u32; 16]`.
+    ///
+    /// # Panics
+    ///
+    /// Code which uses this type is allowed to assume that `Results` is a type
+    /// with at least one element. It may panic or exhibit incorrect behavior
+    /// if this is not the case. It may *not* exhibit undefined behavior.
Member: This goes without saying, usually..

     type Results: AsRef<[Self::Item]> + AsMut<[Self::Item]> + Default;

     /// Generate a new block of results.
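For orientation, here is a minimal sketch of a conforming implementation (the `DemoCore` name and the LCG constants are invented for illustration; this is not a real or cryptographically secure generator). Its `Results` type is a non-empty array, as the new documentation requires:

```rust
use rand_core::block::{BlockRng, BlockRngCore};
use rand_core::RngCore;

// Invented block generator: fills a block with LCG outputs.
struct DemoCore {
    state: u64,
}

impl BlockRngCore for DemoCore {
    type Item = u32;
    // Non-empty, as the `Results` documentation above requires.
    type Results = [u32; 16];

    fn generate(&mut self, results: &mut Self::Results) {
        for word in results.iter_mut() {
            // Standard 64-bit LCG step; the constants are illustrative.
            self.state = self
                .state
                .wrapping_mul(6364136223846793005)
                .wrapping_add(1442695040888963407);
            *word = (self.state >> 32) as u32;
        }
    }
}

fn main() {
    // `BlockRng` serves u32s from the cached block and calls
    // `generate` again only when the block is exhausted.
    let mut rng = BlockRng::new(DemoCore { state: 1 });
    println!("{} {}", rng.next_u32(), rng.next_u32());
}
```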
@@ -179,8 +184,7 @@ impl<R: BlockRngCore> BlockRng<R> {
 }

 impl<R: BlockRngCore<Item = u32>> RngCore for BlockRng<R>
-where
-    <R as BlockRngCore>::Results: AsRef<[u32]> + AsMut<[u32]>,
+where <R as BlockRngCore>::Results: AsRef<[u32]> + AsMut<[u32]>
 {
     #[inline]
     fn next_u32(&mut self) -> u32 {
@@ -347,12 +351,11 @@ impl<R: BlockRngCore> BlockRng64<R> {
 }

 impl<R: BlockRngCore<Item = u64>> RngCore for BlockRng64<R>
-where
-    <R as BlockRngCore>::Results: AsRef<[u64]> + AsMut<[u64]>,
+where <R as BlockRngCore>::Results: AsRef<[u64]> + AsMut<[u64]>
 {
     #[inline]
     fn next_u32(&mut self) -> u32 {
-        let mut index = self.index * 2 - self.half_used as usize;
+        let mut index = self.index * 2 - usize::from(self.half_used);
         if index >= self.results.as_ref().len() * 2 {
             self.core.generate(&mut self.results);
             self.index = 0;
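To make the index arithmetic concrete (a worked example, not code from the PR): `self.index` counts `u64` words while `half_used` flags a half-consumed word, so successive calls visit consecutive `u32` halves of the block:

```rust
// Mirrors `self.index * 2 - usize::from(self.half_used)` above.
fn u32_pos(index: usize, half_used: bool) -> usize {
    index * 2 - usize::from(half_used)
}

fn main() {
    // Starting fresh (index = 0, half_used = false), the state updates
    // that follow the subtraction yield positions 0, 1, 2, 3, ...
    assert_eq!(u32_pos(0, false), 0); // first half of word 0
    assert_eq!(u32_pos(1, true), 1);  // second half of word 0
    assert_eq!(u32_pos(1, false), 2); // first half of word 1
    assert_eq!(u32_pos(2, true), 3);  // second half of word 1
}
```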
@@ -364,14 +367,34 @@ where
         self.half_used = !self.half_used;
         self.index += self.half_used as usize;

+        // Index as if this is a u32 slice.
+        let index = if cfg!(target_endian = "little") {
+            index
+        } else {
+            index ^ 1
+        };
+
+        let results = self.results.as_ref();
+        // Convert the `&[u64]` into an equivalent `&[u32]`, which has twice as
+        // many elements. The conversion is sound because `u64` never has a
+        // smaller alignment requirement than `u32`.
+        let ptr = results.as_ptr() as *const u32;
+        // We are guaranteed that this multiplication can't overflow. If it did
+        // overflow, that would mean that there were at least 2^(word size)
+        // 32-bit values in the slice, and thus 4 * 2^(word size) bytes. There
+        // can only ever be at most 2^(word size) - 1 addressable bytes in
+        // memory at a time, so we could never be in this situation.
+        let len = results.len() * 2;
+
         unsafe {
-            let results = &*(self.results.as_ref() as *const [u64] as *const [u32]);
-            if cfg!(target_endian = "little") {
-                *results.get_unchecked(index)
-            } else {
-                *results.get_unchecked(index ^ 1)
-            }
+            let results = core::slice::from_raw_parts(ptr, len);
+            // Note that this bounds check is required for soundness. There's no
+            // guarantee that `<<R as BlockRngCore>::Results as AsRef>::as_ref`,
+            // which is supplied by code outside of this crate, returns a slice
+            // of any particular length, or even that two subsequent calls to it
+            // return slices of the same length. Thus, it would be unsound to
+            // assume that this index is in-bounds and to elide the bounds
+            // check.
+            results[index]
Collaborator: Could we make all of the code safe at this point?

Member: Yes, we can by getting a u64 from the cached block, shifting it right by 32 bits if one half was already used, and casting it to u32.

Collaborator: I think this is the best approach. Maybe let's just convert index to an index into the u64 slice and shift by 32 * self.half_used bits.

Collaborator: I opened #1160 for this.
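A minimal sketch of that suggestion (a hypothetical helper, not the text of #1160): index the `u64` block directly and shift by `32 * half_used` bits, so the ordinary bounds check provides the soundness guarantee and no pointer cast is needed:

```rust
// Safe alternative sketched from the discussion above: read the u64 and
// select a 32-bit half by shifting. The normal bounds check on
// `results[index]` replaces `get_unchecked` and the slice cast entirely.
fn next_u32_from(results: &[u64], index: usize, half_used: bool) -> u32 {
    (results[index] >> (32 * u64::from(half_used))) as u32
}
```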

         }
     }
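The hazard named in the bounds-check comment above is concrete. Here is a hypothetical `Results`-like type (invented for illustration) that legally returns slices of different lengths on successive `as_ref` calls; the old code called `as_ref` once for the length check and again for the unchecked read, so this was enough to reach undefined behavior:

```rust
use std::cell::Cell;

// Invented adversarial type: nothing in the `AsRef` contract forbids this.
#[derive(Default)]
struct Flaky {
    buf: [u64; 8],
    calls: Cell<u32>,
}

impl AsRef<[u64]> for Flaky {
    fn as_ref(&self) -> &[u64] {
        let n = self.calls.get();
        self.calls.set(n + 1);
        // Full block on the first call, empty slice afterwards. An index
        // validated against the first slice and used with `get_unchecked`
        // on the second is out of bounds -- UB. `results[index]` turns the
        // same situation into a panic instead.
        if n == 0 { &self.buf } else { &self.buf[..0] }
    }
}
```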

9 changes: 5 additions & 4 deletions rand_core/src/error.rs
@@ -50,9 +50,7 @@ impl Error {
     #[cfg_attr(doc_cfg, doc(cfg(feature = "std")))]
     #[inline]
     pub fn new<E>(err: E) -> Self
-    where
-        E: Into<Box<dyn std::error::Error + Send + Sync + 'static>>,
-    {
+    where E: Into<Box<dyn std::error::Error + Send + Sync + 'static>> {
         Error { inner: err.into() }
     }

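For reference, a hypothetical call site (requires the `std` feature; `&str` converts into the boxed error type through the standard `From` impl):

```rust
fn main() {
    // Invented example message; any E: Into<Box<dyn Error + Send + Sync>> works.
    let err = rand_core::Error::new("entropy source unavailable");
    println!("{}", err);
}
```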
@@ -223,6 +221,9 @@ mod test {
     fn test_error_codes() {
         // Make sure the values are the same as in `getrandom`.
         assert_eq!(super::Error::CUSTOM_START, getrandom::Error::CUSTOM_START);
-        assert_eq!(super::Error::INTERNAL_START, getrandom::Error::INTERNAL_START);
+        assert_eq!(
+            super::Error::INTERNAL_START,
+            getrandom::Error::INTERNAL_START
+        );
     }
 }
8 changes: 3 additions & 5 deletions rand_core/src/impls.rs
@@ -65,17 +65,15 @@ macro_rules! fill_via_chunks {
                 core::ptr::copy_nonoverlapping(
                     $src.as_ptr() as *const u8,
                     $dst.as_mut_ptr(),
-                    chunk_size_u8);
+                    chunk_size_u8,
+                );
             }
         } else {
             for (&n, chunk) in $src.iter().zip($dst.chunks_mut(SIZE)) {
                 let tmp = n.to_le();
                 let src_ptr = &tmp as *const $ty as *const u8;
                 unsafe {
-                    core::ptr::copy_nonoverlapping(
-                        src_ptr,
-                        chunk.as_mut_ptr(),
-                        chunk.len());
+                    core::ptr::copy_nonoverlapping(src_ptr, chunk.as_mut_ptr(), chunk.len());
                 }
             }
         }
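As a plain-Rust sketch of what this macro accomplishes for `u32` words (illustrative only: the helper name and the `(words consumed, bytes filled)` return convention are assumptions here, and the real macro uses `copy_nonoverlapping` on little-endian targets for speed):

```rust
// Serialize each word little-endian into the byte buffer, stopping as
// soon as either the source words or the destination bytes run out.
fn fill_via_u32_chunks_sketch(src: &[u32], dst: &mut [u8]) -> (usize, usize) {
    let mut consumed = 0;
    let mut filled = 0;
    for (word, chunk) in src.iter().zip(dst.chunks_mut(4)) {
        let bytes = word.to_le_bytes();
        // The final chunk may be shorter than 4 bytes; copy what fits.
        chunk.copy_from_slice(&bytes[..chunk.len()]);
        consumed += 1;
        filled += chunk.len();
    }
    (consumed, filled)
}
```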
2 changes: 1 addition & 1 deletion rand_core/src/lib.rs
@@ -41,8 +41,8 @@
 use core::convert::AsMut;
 use core::default::Default;

-#[cfg(feature = "std")] extern crate std;
 #[cfg(feature = "alloc")] extern crate alloc;
+#[cfg(feature = "std")] extern crate std;
 #[cfg(feature = "alloc")] use alloc::boxed::Box;

 pub use error::Error;