Merge #574
574: Fixed a few typos r=taiki-e a=regexident



Co-authored-by: Vincent Esche <regexident@gmail.com>
bors[bot] and regexident committed Oct 7, 2020
2 parents 7cc8377 + 619f7db commit 2444749
Showing 5 changed files with 12 additions and 12 deletions.
8 changes: 4 additions & 4 deletions crossbeam-epoch/src/atomic.rs
@@ -749,7 +749,7 @@ pub trait Pointer<T: ?Sized + Pointable> {
/// # Safety
///
/// The given `data` should have been created by `Pointer::into_usize()`, and one `data` should
/// not be converted back by `Pointer::from_usize()` mutliple times.
/// not be converted back by `Pointer::from_usize()` multiple times.
unsafe fn from_usize(data: usize) -> Self;
}
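Not part of the commit, but the single-conversion contract in this doc comment can be sketched with plain `Box` from std (the `into_usize`/`from_usize` names below mirror the trait but are illustrative, not crossbeam-epoch's actual implementation):

```rust
// Sketch of the Pointer::into_usize / Pointer::from_usize contract using a
// plain Box instead of crossbeam-epoch's Pointable machinery.
fn into_usize(b: Box<i32>) -> usize {
    Box::into_raw(b) as usize
}

// Unsafe for the same reason as Pointer::from_usize: calling this twice on
// the same `data` would create two owning Boxes and lead to a double free.
unsafe fn from_usize(data: usize) -> Box<i32> {
    unsafe { Box::from_raw(data as *mut i32) }
}

fn main() {
    let data = into_usize(Box::new(41));
    // Convert back exactly once, as the safety contract requires.
    let b = unsafe { from_usize(data) };
    assert_eq!(*b, 41);
}
```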

@@ -801,7 +801,7 @@ impl<T> Owned<T> {
/// # Safety
///
/// The given `raw` should have been derived from `Owned`, and one `raw` should not be converted
/// back by `Owned::from_raw()` mutliple times.
/// back by `Owned::from_raw()` multiple times.
///
/// # Examples
///
@@ -1108,7 +1108,7 @@ impl<'g, T: ?Sized + Pointable> Shared<'g, T> {
///
/// Dereferencing a pointer is unsafe because it could be pointing to invalid memory.
///
/// Another concern is the possiblity of data races due to lack of proper synchronization.
/// Another concern is the possibility of data races due to lack of proper synchronization.
/// For example, consider the following scenario:
///
/// 1. A thread creates a new object: `a.store(Owned::new(10), Relaxed)`
@@ -1188,7 +1188,7 @@ impl<'g, T: ?Sized + Pointable> Shared<'g, T> {
///
/// Dereferencing a pointer is unsafe because it could be pointing to invalid memory.
///
/// Another concern is the possiblity of data races due to lack of proper synchronization.
/// Another concern is the possibility of data races due to lack of proper synchronization.
/// For example, consider the following scenario:
///
/// 1. A thread creates a new object: `a.store(Owned::new(10), Relaxed)`
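The synchronization concern these doc comments describe can be sketched outside crossbeam-epoch with std's `AtomicPtr` (a hypothetical publish-then-read pair, not the crate's API): a `Release` store paired with an `Acquire` load guarantees the reader sees the pointee fully initialized, which `Relaxed` on both sides would not.

```rust
use std::sync::atomic::{AtomicPtr, Ordering};
use std::{ptr, thread};

fn main() {
    // Shared slot, initially null.
    static SLOT: AtomicPtr<i32> = AtomicPtr::new(ptr::null_mut());

    let writer = thread::spawn(|| {
        let p = Box::into_raw(Box::new(10));
        // Publish with Release so the write of `10` is visible to any
        // thread that observes `p` via an Acquire load.
        SLOT.store(p, Ordering::Release);
    });
    writer.join().unwrap();

    let p = SLOT.load(Ordering::Acquire);
    assert!(!p.is_null());
    unsafe {
        // Safe here: the Acquire load synchronizes with the Release store.
        assert_eq!(*p, 10);
        drop(Box::from_raw(p)); // reclaim the heap allocation
    }
}
```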
4 changes: 2 additions & 2 deletions crossbeam-epoch/src/internal.rs
@@ -29,7 +29,7 @@
//! Whenever a bag is pushed into a queue, the objects in some bags in the queue are collected and
//! destroyed along the way. This design reduces contention on data structures. The global queue
//! cannot be explicitly accessed: the only way to interact with it is by calling functions
//! `defer()` that adds an object tothe thread-local bag, or `collect()` that manually triggers
//! `defer()` that adds an object to the thread-local bag, or `collect()` that manually triggers
//! garbage collection.
//!
//! Ideally each instance of concurrent data structure may have its own queue that gets fully
@@ -368,7 +368,7 @@ pub struct Local {

/// Total number of pinnings performed.
///
/// This is just an auxilliary counter that sometimes kicks off collection.
/// This is just an auxiliary counter that sometimes kicks off collection.
pin_count: Cell<Wrapping<usize>>,
}
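The `defer()`/`collect()` flow described in this module doc can be sketched with a thread-local bag of boxed destructors (a minimal illustration; the names match the doc but none of the real internals, epochs, or the global queue are modeled):

```rust
use std::cell::RefCell;

thread_local! {
    // The thread-local "bag": deferred destructors waiting to run.
    static BAG: RefCell<Vec<Box<dyn FnOnce()>>> = RefCell::new(Vec::new());
}

// Adds a deferred destructor to the thread-local bag.
fn defer(f: impl FnOnce() + 'static) {
    BAG.with(|b| b.borrow_mut().push(Box::new(f)));
}

// Manually triggers "garbage collection": runs everything in the bag.
fn collect() {
    let drained: Vec<_> = BAG.with(|b| b.borrow_mut().drain(..).collect());
    for f in drained {
        f(); // run each destructor outside the RefCell borrow
    }
}

fn main() {
    use std::cell::Cell;
    use std::rc::Rc;

    let freed = Rc::new(Cell::new(0));
    let f = freed.clone();
    defer(move || f.set(f.get() + 1));
    assert_eq!(freed.get(), 0); // deferred, not yet run
    collect();
    assert_eq!(freed.get(), 1); // the deferred "destructor" ran
}
```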

8 changes: 4 additions & 4 deletions crossbeam-skiplist/src/base.rs
@@ -226,7 +226,7 @@ impl<K, V> Node<K, V> {
}
}

/// Decrements the reference count of a node, pinning the thread and destoying the node
/// Decrements the reference count of a node, pinning the thread and destroying the node
/// if the count become zero.
#[inline]
unsafe fn decrement_with_pin<F>(&self, parent: &SkipList<K, V>, pin: F)
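The "destroy when the count becomes zero" pattern behind `decrement_with_pin` can be sketched with a bare `AtomicUsize` (illustrative only; the epoch pinning and the `SkipList` parent are omitted):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

struct Node {
    refs: AtomicUsize,
}

impl Node {
    // Returns true when this call dropped the last reference, i.e. the
    // caller is now responsible for destroying the node. Real
    // implementations pair the Release fetch_sub with an Acquire fence
    // before destruction.
    fn decrement(&self) -> bool {
        self.refs.fetch_sub(1, Ordering::Release) == 1
    }
}

fn main() {
    let n = Node { refs: AtomicUsize::new(2) };
    assert!(!n.decrement()); // one reference still alive
    assert!(n.decrement()); // count became zero: destroy now
}
```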
@@ -1157,7 +1157,7 @@ where

loop {
{
// Search for the first entry in order to unlink all the preceeding entries
// Search for the first entry in order to unlink all the preceding entries
// we have removed.
//
// By unlinking nodes in batches we make sure that the final search doesn't
@@ -1933,7 +1933,7 @@ where
pub struct IntoIter<K, V> {
/// The current node.
///
/// All preceeding nods have already been destroyed.
/// All preceding nods have already been destroyed.
node: *mut Node<K, V>,
}

@@ -1946,7 +1946,7 @@ impl<K, V> Drop for IntoIter<K, V> {
// the skip list.
let next = (*self.node).tower[0].load(Ordering::Relaxed, epoch::unprotected());

// We can safely do this without defering because references to
// We can safely do this without deferring because references to
// keys & values that we give out never outlive the SkipList.
Node::finalize(self.node);
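The destroy-without-deferring loop in this `Drop` impl follows the classic walk-and-free shape, sketched below for a plain singly linked list (hypothetical `Node`/`destroy_all` names, no epoch machinery): read `next` before freeing the current node.

```rust
struct Node {
    next: *mut Node,
}

// Walks the chain from `node`, freeing each node; returns how many were
// destroyed. Reading `next` before the free mirrors the Drop impl above.
fn destroy_all(mut node: *mut Node) -> usize {
    let mut destroyed = 0;
    while !node.is_null() {
        unsafe {
            let next = (*node).next; // grab the link before freeing
            drop(Box::from_raw(node));
            node = next;
        }
        destroyed += 1;
    }
    destroyed
}

fn main() {
    let tail = Box::into_raw(Box::new(Node { next: std::ptr::null_mut() }));
    let head = Box::into_raw(Box::new(Node { next: tail }));
    assert_eq!(destroy_all(head), 2);
}
```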

2 changes: 1 addition & 1 deletion crossbeam-utils/src/cache_padded.rs
@@ -4,7 +4,7 @@ use core::ops::{Deref, DerefMut};
/// Pads and aligns a value to the length of a cache line.
///
/// In concurrent programming, sometimes it is desirable to make sure commonly accessed pieces of
/// data are not placed into the same cache line. Updating an atomic value invalides the whole
/// data are not placed into the same cache line. Updating an atomic value invalidates the whole
/// data are not placed into the same cache line. Updating an atomic value invalidates the whole
/// cache line it belongs to, which makes the next access to the same cache line slower for other
/// CPU cores. Use `CachePadded` to ensure updating one piece of data doesn't invalidate other
/// cached data.
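What `CachePadded` does can be sketched with a hard-coded 64-byte alignment (the real crate picks the alignment per target architecture, so `Padded` here is an illustrative stand-in, not the actual type):

```rust
// Force each wrapped value onto its own 64-byte-aligned slot, so two
// adjacent values cannot share a cache line.
#[repr(align(64))]
struct Padded<T>(T);

fn main() {
    use std::mem::{align_of, size_of};
    // A u64 alone is 8 bytes; padded, it occupies a full 64-byte line.
    assert_eq!(align_of::<Padded<u64>>(), 64);
    assert_eq!(size_of::<Padded<u64>>(), 64);
}
```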
2 changes: 1 addition & 1 deletion crossbeam-utils/src/thread.rs
@@ -442,7 +442,7 @@ impl<'scope, 'env> ScopedThreadBuilder<'scope, 'env> {
*result.lock().unwrap() = Some(res);
};

// Allocate `clsoure` on the heap and erase the `'env` bound.
// Allocate `closure` on the heap and erase the `'env` bound.
let closure: Box<dyn FnOnce() + Send + 'env> = Box::new(closure);
let closure: Box<dyn FnOnce() + Send + 'static> =
unsafe { mem::transmute(closure) };
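The lifetime-erasing `transmute` here is what lets crossbeam's scoped threads borrow from the caller's stack; the same guarantee is visible from safe code in std's `thread::scope` (Rust 1.63+), sketched below as a user-level illustration rather than crossbeam's own API:

```rust
use std::thread;

fn main() {
    let mut counter = 0;
    thread::scope(|s| {
        s.spawn(|| {
            counter += 1; // borrows `counter` from the enclosing stack frame
        });
    }); // every scoped thread is joined here, ending the borrow
    counter += 1; // safe to touch `counter` again after the scope
    assert_eq!(counter, 2);
}
```

The scope guarantees all spawned threads finish before it returns, which is exactly why erasing `'env` down to `'static` in the boxed closure above is sound: the closure can never outlive the data it borrows.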
