chore: fix typos #3907

Merged
merged 1 commit into from Jun 30, 2021
2 changes: 1 addition & 1 deletion benches/rt_multi_threaded.rs
@@ -1,4 +1,4 @@
-//! Benchmark implementation details of the theaded scheduler. These benches are
+//! Benchmark implementation details of the threaded scheduler. These benches are
//! intended to be used as a form of regression testing and not as a general
//! purpose benchmark demonstrating real-world performance.

2 changes: 1 addition & 1 deletion tokio-stream/src/stream_ext.rs
@@ -515,7 +515,7 @@ pub trait StreamExt: Stream {
/// Skip elements from the underlying stream while the provided predicate
/// resolves to `true`.
///
-/// This function, like [`Iterator::skip_while`], will ignore elemets from the
+/// This function, like [`Iterator::skip_while`], will ignore elements from the
/// stream until the predicate `f` resolves to `false`. Once one element
/// returns false, the rest of the elements will be yielded.
///
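To illustrate the `skip_while` behavior documented above, here is a minimal usage sketch adapted from the trait docs; it assumes `tokio-stream` with default features and a runtime started via `#[tokio::main]`:

```rust
use tokio_stream::{self as stream, StreamExt};

#[tokio::main]
async fn main() {
    // Elements are ignored while the predicate is true; once it returns
    // false, everything after that point is yielded, even the later `1`.
    let mut st = stream::iter(vec![1, 2, 3, 4, 1]).skip_while(|&x| x < 3);

    assert_eq!(st.next().await, Some(3));
    assert_eq!(st.next().await, Some(4));
    assert_eq!(st.next().await, Some(1));
    assert_eq!(st.next().await, None);
}
```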
2 changes: 1 addition & 1 deletion tokio-stream/src/wrappers/watch.rs
@@ -11,7 +11,7 @@ use tokio::sync::watch::error::RecvError;
/// A wrapper around [`tokio::sync::watch::Receiver`] that implements [`Stream`].
///
/// This stream will always start by yielding the current value when the WatchStream is polled,
-/// regardles of whether it was the initial value or sent afterwards.
+/// regardless of whether it was the initial value or sent afterwards.
///
/// # Examples
///
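A small sketch of the behavior described above (the first item is whatever value the channel currently holds); it assumes tokio's `sync` feature and tokio-stream's `sync` feature are enabled:

```rust
use tokio::sync::watch;
use tokio_stream::{wrappers::WatchStream, StreamExt};

#[tokio::main]
async fn main() {
    let (tx, rx) = watch::channel("hello");
    let mut stream = WatchStream::new(rx);

    // The current value is yielded first, even though nothing was sent
    // after the stream was created.
    assert_eq!(stream.next().await, Some("hello"));

    tx.send("world").unwrap();
    assert_eq!(stream.next().await, Some("world"));
}
```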
2 changes: 1 addition & 1 deletion tokio-test/src/lib.rs
@@ -10,7 +10,7 @@
attr(deny(warnings, rust_2018_idioms), allow(dead_code, unused_variables))
))]

-//! Tokio and Futures based testing utilites
+//! Tokio and Futures based testing utilities

pub mod io;

2 changes: 1 addition & 1 deletion tokio-test/src/task.rs
@@ -180,7 +180,7 @@ impl ThreadWaker {
}
}

-/// Clears any previously received wakes, avoiding potential spurrious
+/// Clears any previously received wakes, avoiding potential spurious
/// wake notifications. This should only be called immediately before running the
/// task.
fn clear(&self) {
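For context on the wake tracking that `clear` supports, a minimal sketch of driving a future with `tokio_test::task::spawn`, which records wake notifications through a mock waker (assumes `tokio-test` plus tokio's `sync` feature for the oneshot channel):

```rust
use tokio::sync::oneshot;
use tokio_test::{assert_pending, assert_ready_eq, task};

#[test]
fn wakes_after_send() {
    let (tx, rx) = oneshot::channel::<u32>();
    let mut fut = task::spawn(async move { rx.await.unwrap() });

    // Nothing sent yet: the poll is pending and the mock waker is stored.
    assert_pending!(fut.poll());

    tx.send(7).unwrap();
    // The send triggers a wake, which the test harness records.
    assert!(fut.is_woken());
    assert_ready_eq!(fut.poll(), 7);
}
```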
2 changes: 1 addition & 1 deletion tokio-util/src/codec/any_delimiter_codec.rs
@@ -234,7 +234,7 @@ impl Default for AnyDelimiterCodec {
}
}

-/// An error occured while encoding or decoding a chunk.
+/// An error occurred while encoding or decoding a chunk.
#[derive(Debug)]
pub enum AnyDelimiterCodecError {
/// The maximum chunk length was exceeded.
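As a usage sketch for the codec whose error type is touched above: `AnyDelimiterCodec::new` takes the set of delimiters to split on and the sequence written between encoded chunks (assumes `tokio-util` with the `codec` feature and the `bytes` crate):

```rust
use bytes::BytesMut;
use tokio_util::codec::{AnyDelimiterCodec, Decoder};

fn main() {
    // Split on ',' or ';' when decoding; join chunks with ',' when encoding.
    let mut codec = AnyDelimiterCodec::new(b",;".to_vec(), b",".to_vec());
    let mut buf = BytesMut::from(&b"one;two,"[..]);

    assert_eq!(codec.decode(&mut buf).unwrap().unwrap(), "one");
    assert_eq!(codec.decode(&mut buf).unwrap().unwrap(), "two");
}
```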
2 changes: 1 addition & 1 deletion tokio-util/src/codec/decoder.rs
@@ -28,7 +28,7 @@ use std::io;
/// It is up to the Decoder to keep track of a restart after an EOF,
/// and to decide how to handle such an event by, for example,
/// allowing frames to cross EOF boundaries, re-emitting opening frames, or
-/// reseting the entire internal state.
+/// resetting the entire internal state.
///
/// [`Framed`]: crate::codec::Framed
/// [`FramedRead`]: crate::codec::FramedRead
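To make the EOF discussion above concrete, here is a hypothetical `Decoder` for fixed 4-byte frames; the `decode_eof` override shows one way to handle leftover bytes at EOF instead of treating them as an error (the type name and framing scheme are illustrative, not from the tokio-util sources):

```rust
use bytes::BytesMut;
use tokio_util::codec::Decoder;

struct FourByteFrames;

impl Decoder for FourByteFrames {
    type Item = Vec<u8>;
    type Error = std::io::Error;

    fn decode(&mut self, src: &mut BytesMut) -> Result<Option<Vec<u8>>, Self::Error> {
        if src.len() < 4 {
            return Ok(None); // wait for more bytes
        }
        Ok(Some(src.split_to(4).to_vec()))
    }

    fn decode_eof(&mut self, src: &mut BytesMut) -> Result<Option<Vec<u8>>, Self::Error> {
        match self.decode(src)? {
            Some(frame) => Ok(Some(frame)),
            None if src.is_empty() => Ok(None),
            // Emit whatever remains as a short closing frame rather than erroring.
            None => Ok(Some(src.split_to(src.len()).to_vec())),
        }
    }
}
```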
6 changes: 3 additions & 3 deletions tokio-util/src/codec/framed_impl.rs
@@ -124,7 +124,7 @@ where
// to a combination of the `is_readable` and `eof` flags. States persist across
// loop entries and most state transitions occur with a return.
//
-// The intitial state is `reading`.
+// The initial state is `reading`.
//
// | state | eof | is_readable |
// |---------|-------|-------------|
@@ -155,10 +155,10 @@ where
// Both signal that there is no such data by returning `None`.
//
// If `decode` couldn't read a frame and the upstream source has returned eof,
-// `decode_eof` will attemp to decode the remaining bytes as closing frames.
+// `decode_eof` will attempt to decode the remaining bytes as closing frames.
//
// If the underlying AsyncRead is resumable, we may continue after an EOF,
-// but must finish emmiting all of it's associated `decode_eof` frames.
+// but must finish emitting all of it's associated `decode_eof` frames.
// Furthermore, we don't want to emit any `decode_eof` frames on retried
// reads after an EOF unless we've actually read more data.
if state.is_readable {
2 changes: 1 addition & 1 deletion tokio-util/src/codec/length_delimited.rs
@@ -486,7 +486,7 @@ impl LengthDelimitedCodec {
// Skip the required bytes
src.advance(self.builder.length_field_offset);

-// match endianess
+// match endianness
let n = if self.builder.length_field_is_big_endian {
src.get_uint(field_len)
} else {
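The endianness match above is driven by a builder flag; a minimal configuration sketch (assumes `tokio-util` with the `codec` feature):

```rust
use tokio_util::codec::LengthDelimitedCodec;

fn main() {
    // A 4-byte little-endian length field; `decode` reads the field
    // according to this flag when it matches the endianness.
    let _codec = LengthDelimitedCodec::builder()
        .length_field_length(4)
        .little_endian()
        .new_codec();
}
```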
4 changes: 2 additions & 2 deletions tokio-util/src/codec/lines_codec.rs
@@ -203,12 +203,12 @@ impl Default for LinesCodec {
}
}

-/// An error occured while encoding or decoding a line.
+/// An error occurred while encoding or decoding a line.
#[derive(Debug)]
pub enum LinesCodecError {
/// The maximum line length was exceeded.
MaxLineLengthExceeded,
-/// An IO error occured.
+/// An IO error occurred.
Io(io::Error),
}

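A sketch of where `LinesCodecError::MaxLineLengthExceeded` shows up in practice, using a deliberately small limit (assumes `tokio-util` with the `codec` feature, plus `tokio-stream` for `StreamExt::next`):

```rust
use tokio_stream::StreamExt;
use tokio_util::codec::{FramedRead, LinesCodec, LinesCodecError};

#[tokio::main]
async fn main() {
    let input: &[u8] = b"short\na very long line that overflows\n";
    // Hypothetical 8-byte limit chosen to force the error variant.
    let mut lines = FramedRead::new(input, LinesCodec::new_with_max_length(8));

    assert_eq!(lines.next().await.unwrap().unwrap(), "short");
    match lines.next().await.unwrap() {
        Err(LinesCodecError::MaxLineLengthExceeded) => {}
        other => panic!("expected MaxLineLengthExceeded, got {:?}", other),
    }
}
```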
2 changes: 1 addition & 1 deletion tokio-util/src/either.rs
@@ -67,7 +67,7 @@ pub enum Either<L, R> {
}

/// A small helper macro which reduces amount of boilerplate in the actual trait method implementation.
-/// It takes an invokation of method as an argument (e.g. `self.poll(cx)`), and redirects it to either
+/// It takes an invocation of method as an argument (e.g. `self.poll(cx)`), and redirects it to either
/// enum variant held in `self`.
macro_rules! delegate_call {
($self:ident.$method:ident($($args:ident),+)) => {
4 changes: 2 additions & 2 deletions tokio-util/src/sync/cancellation_token.rs
@@ -775,8 +775,8 @@ impl CancellationTokenState {
return Poll::Ready(());
}

-// So far the token is not cancelled. However it could be cancelld before
-// we get the chance to store the `Waker`. Therfore we need to check
+// So far the token is not cancelled. However it could be cancelled before
+// we get the chance to store the `Waker`. Therefore we need to check
// for cancellation again inside the mutex.
let mut guard = self.synchronized.lock().unwrap();
if guard.is_cancelled {
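The comment above concerns the internal poll path; from the outside, the token is used roughly like this (a sketch assuming tokio's `macros` and `time` features and a tokio-util build that exposes the `sync` types):

```rust
use std::time::Duration;
use tokio_util::sync::CancellationToken;

#[tokio::main]
async fn main() {
    let token = CancellationToken::new();
    let child = token.child_token();

    let task = tokio::spawn(async move {
        tokio::select! {
            _ = child.cancelled() => "cancelled",
            _ = tokio::time::sleep(Duration::from_secs(10)) => "timed out",
        }
    });

    // Cancelling the parent also cancels the child token.
    token.cancel();
    assert_eq!(task.await.unwrap(), "cancelled");
}
```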
2 changes: 1 addition & 1 deletion tokio-util/src/sync/intrusive_double_linked_list.rs
@@ -222,7 +222,7 @@ impl<T> LinkedList<T> {
}
}

-/// Returns whether the linked list doesn not contain any node
+/// Returns whether the linked list does not contain any node
pub fn is_empty(&self) -> bool {
if self.head.is_some() {
return false;
6 changes: 3 additions & 3 deletions tokio/docs/reactor-refactor.md
@@ -228,7 +228,7 @@ It is only possible to implement `AsyncRead` and `AsyncWrite` for resource types
themselves and not for `&Resource`. Implementing the traits for `&Resource`
would permit concurrent operations to the resource. Because only a single waker
is stored per direction, any concurrent usage would result in deadlocks. An
-alterate implementation would call for a `Vec<Waker>` but this would result in
+alternate implementation would call for a `Vec<Waker>` but this would result in
memory leaks.

## Enabling reads and writes for `&TcpStream`
@@ -268,9 +268,9 @@ select! {
}
```

-It is also possible to sotre a `TcpStream` in an `Arc`.
+It is also possible to store a `TcpStream` in an `Arc`.

```rust
let arc_stream = Arc::new(my_tcp_stream);
let n = arc_stream.by_ref().read(buf).await?;
```
2 changes: 1 addition & 1 deletion tokio/src/io/driver/interest.rs
@@ -58,7 +58,7 @@ impl Interest {
self.0.is_writable()
}

-/// Add together two `Interst` values.
+/// Add together two `Interest` values.
///
/// This function works from a `const` context.
///
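Since `add` works in a `const` context, combined interests can be declared as constants; a short sketch (assumes tokio's `net` feature, which enables `tokio::io::Interest`):

```rust
use tokio::io::Interest;

// `add` is a `const fn`, so the combination can live in a constant.
const READ_WRITE: Interest = Interest::READABLE.add(Interest::WRITABLE);

fn main() {
    assert!(READ_WRITE.is_readable());
    assert!(READ_WRITE.is_writable());
}
```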
2 changes: 1 addition & 1 deletion tokio/src/io/driver/mod.rs
@@ -96,7 +96,7 @@ const ADDRESS: bit::Pack = bit::Pack::least_significant(24);
//
// The generation prevents a race condition where a slab slot is reused for a
// new socket while the I/O driver is about to apply a readiness event. The
-// generaton value is checked when setting new readiness. If the generation do
+// generation value is checked when setting new readiness. If the generation do
// not match, then the readiness event is discarded.
const GENERATION: bit::Pack = ADDRESS.then(7);

6 changes: 3 additions & 3 deletions tokio/src/io/driver/scheduled_io.rs
@@ -84,9 +84,9 @@ cfg_io_readiness! {

// The `ScheduledIo::readiness` (`AtomicUsize`) is packed full of goodness.
//
-// | reserved | generation | driver tick | readinesss |
-// |----------+------------+--------------+------------|
-// | 1 bit | 7 bits + 8 bits + 16 bits |
+// | reserved | generation | driver tick | readiness |
+// |----------+------------+--------------+-----------|
+// | 1 bit | 7 bits + 8 bits + 16 bits |

const READINESS: bit::Pack = bit::Pack::least_significant(16);

6 changes: 3 additions & 3 deletions tokio/src/io/read_buf.rs
@@ -45,7 +45,7 @@ impl<'a> ReadBuf<'a> {

/// Creates a new `ReadBuf` from a fully uninitialized buffer.
///
-/// Use `assume_init` if part of the buffer is known to be already inintialized.
+/// Use `assume_init` if part of the buffer is known to be already initialized.
#[inline]
pub fn uninit(buf: &'a mut [MaybeUninit<u8>]) -> ReadBuf<'a> {
ReadBuf {
@@ -85,7 +85,7 @@ impl<'a> ReadBuf<'a> {
#[inline]
pub fn take(&mut self, n: usize) -> ReadBuf<'_> {
let max = std::cmp::min(self.remaining(), n);
-// Saftey: We don't set any of the `unfilled_mut` with `MaybeUninit::uninit`.
+// Safety: We don't set any of the `unfilled_mut` with `MaybeUninit::uninit`.
unsafe { ReadBuf::uninit(&mut self.unfilled_mut()[..max]) }
}

@@ -217,7 +217,7 @@ impl<'a> ReadBuf<'a> {
///
/// # Panics
///
-/// Panics if the filled region of the buffer would become larger than the intialized region.
+/// Panics if the filled region of the buffer would become larger than the initialized region.
#[inline]
pub fn set_filled(&mut self, n: usize) {
assert!(
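A sketch of the `uninit`/`take` pair discussed above; nothing here initializes the backing storage, which is the point of `ReadBuf::uninit`:

```rust
use std::mem::MaybeUninit;
use tokio::io::ReadBuf;

fn main() {
    // Fully uninitialized backing storage, wrapped without touching it.
    let mut storage = [MaybeUninit::<u8>::uninit(); 64];
    let mut buf = ReadBuf::uninit(&mut storage);

    assert_eq!(buf.remaining(), 64);
    assert!(buf.filled().is_empty());

    // `take` borrows at most `n` bytes of the unfilled region as a new ReadBuf.
    let sub = buf.take(16);
    assert_eq!(sub.remaining(), 16);
}
```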
4 changes: 2 additions & 2 deletions tokio/src/io/stdio_common.rs
@@ -52,10 +52,10 @@ where

buf = &buf[..crate::io::blocking::MAX_BUF];

-// Now there are two possibilites.
+// Now there are two possibilities.
// If caller gave is binary buffer, we **should not** shrink it
// anymore, because excessive shrinking hits performance.
-// If caller gave as binary buffer, we **must** additionaly
+// If caller gave as binary buffer, we **must** additionally
// shrink it to strip incomplete char at the end of buffer.
// that's why check we will perform now is allowed to have
// false-positive.
14 changes: 7 additions & 7 deletions tokio/src/io/util/read_until.rs
@@ -10,12 +10,12 @@ use std::task::{Context, Poll};

pin_project! {
/// Future for the [`read_until`](crate::io::AsyncBufReadExt::read_until) method.
-/// The delimeter is included in the resulting vector.
+/// The delimiter is included in the resulting vector.
#[derive(Debug)]
#[must_use = "futures do nothing unless you `.await` or poll them"]
pub struct ReadUntil<'a, R: ?Sized> {
reader: &'a mut R,
-delimeter: u8,
+delimiter: u8,
buf: &'a mut Vec<u8>,
// The number of bytes appended to buf. This can be less than buf.len() if
// the buffer was not empty when the operation was started.
@@ -28,15 +28,15 @@ pin_project! {

pub(crate) fn read_until<'a, R>(
reader: &'a mut R,
-delimeter: u8,
+delimiter: u8,
buf: &'a mut Vec<u8>,
) -> ReadUntil<'a, R>
where
R: AsyncBufRead + ?Sized + Unpin,
{
ReadUntil {
reader,
-delimeter,
+delimiter,
buf,
read: 0,
_pin: PhantomPinned,
@@ -46,14 +46,14 @@ where
pub(super) fn read_until_internal<R: AsyncBufRead + ?Sized>(
mut reader: Pin<&mut R>,
cx: &mut Context<'_>,
-delimeter: u8,
+delimiter: u8,
buf: &mut Vec<u8>,
read: &mut usize,
) -> Poll<io::Result<usize>> {
loop {
let (done, used) = {
let available = ready!(reader.as_mut().poll_fill_buf(cx))?;
-if let Some(i) = memchr::memchr(delimeter, available) {
+if let Some(i) = memchr::memchr(delimiter, available) {
buf.extend_from_slice(&available[..=i]);
(true, i + 1)
} else {
@@ -74,6 +74,6 @@ impl<R: AsyncBufRead + ?Sized + Unpin> Future for ReadUntil<'_, R> {

fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
let me = self.project();
-read_until_internal(Pin::new(*me.reader), cx, *me.delimeter, me.buf, me.read)
+read_until_internal(Pin::new(*me.reader), cx, *me.delimiter, me.buf, me.read)
}
}
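A usage sketch for the future defined above, via `AsyncBufReadExt::read_until`; note that the delimiter is included in the output (assumes tokio's `io-util` feature; `&[u8]` implements `AsyncBufRead`):

```rust
use tokio::io::AsyncBufReadExt;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let mut reader: &[u8] = b"foo,bar";
    let mut buf = Vec::new();

    // Reads up to and including the first ',' and appends it to `buf`.
    let n = reader.read_until(b',', &mut buf).await?;
    assert_eq!(n, 4);
    assert_eq!(buf, b"foo,");
    Ok(())
}
```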
2 changes: 1 addition & 1 deletion tokio/src/io/util/split.rs
@@ -95,7 +95,7 @@ where
let n = ready!(read_until_internal(
me.reader, cx, *me.delim, me.buf, me.read,
))?;
-// read_until_internal resets me.read to zero once it finds the delimeter
+// read_until_internal resets me.read to zero once it finds the delimiter
debug_assert_eq!(*me.read, 0);

if n == 0 && me.buf.is_empty() {
4 changes: 2 additions & 2 deletions tokio/src/macros/select.rs
@@ -289,7 +289,7 @@
/// loop {
/// tokio::select! {
/// // If you run this example without `biased;`, the polling order is
-/// // psuedo-random, and the assertions on the value of count will
+/// // pseudo-random, and the assertions on the value of count will
/// // (probably) fail.
/// biased;
///
@@ -467,7 +467,7 @@ macro_rules! select {
let mut is_pending = false;

// Choose a starting index to begin polling the futures at. In
-// practice, this will either be a psuedo-randomly generrated
+// practice, this will either be a pseudo-randomly generated
// number by default, or the constant 0 if `biased;` is
// supplied.
let start = $start;
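A runnable sketch of the `biased;` mode mentioned above: with `biased;` the branches are polled top to bottom instead of in pseudo-random order (assumes `tokio-stream` for the example streams):

```rust
use tokio_stream::{self as stream, StreamExt};

#[tokio::main]
async fn main() {
    let mut a = stream::iter(vec![1, 2]);
    let mut b = stream::iter(vec![10, 20]);
    let mut order = Vec::new();

    loop {
        tokio::select! {
            // Poll top to bottom; without `biased;` the starting branch is
            // chosen pseudo-randomly on every iteration.
            biased;
            Some(x) = a.next() => order.push(x),
            Some(x) = b.next() => order.push(x),
            else => break,
        }
    }

    // `a` is always drained before `b` because of the fixed polling order.
    assert_eq!(order, vec![1, 2, 10, 20]);
}
```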
6 changes: 3 additions & 3 deletions tokio/src/net/tcp/split.rs
@@ -30,7 +30,7 @@ pub struct ReadHalf<'a>(&'a TcpStream);

/// Borrowed write half of a [`TcpStream`], created by [`split`].
///
-/// Note that in the [`AsyncWrite`] implemenation of this type, [`poll_shutdown`] will
+/// Note that in the [`AsyncWrite`] implementation of this type, [`poll_shutdown`] will
/// shut down the TCP stream in the write direction.
///
/// Writing to an `WriteHalf` is usually done using the convenience methods found
@@ -57,7 +57,7 @@ impl ReadHalf<'_> {
/// `Waker` from the `Context` passed to the most recent call is scheduled
/// to receive a wakeup.
///
-/// See the [`TcpStream::poll_peek`] level documenation for more details.
+/// See the [`TcpStream::poll_peek`] level documentation for more details.
///
/// # Examples
///
@@ -95,7 +95,7 @@ impl ReadHalf<'_> {
/// connected, without removing that data from the queue. On success,
/// returns the number of bytes peeked.
///
-/// See the [`TcpStream::peek`] level documenation for more details.
+/// See the [`TcpStream::peek`] level documentation for more details.
///
/// [`TcpStream::peek`]: TcpStream::peek
///
4 changes: 2 additions & 2 deletions tokio/src/net/tcp/split_owned.rs
@@ -112,7 +112,7 @@ impl OwnedReadHalf {
/// `Waker` from the `Context` passed to the most recent call is scheduled
/// to receive a wakeup.
///
-/// See the [`TcpStream::poll_peek`] level documenation for more details.
+/// See the [`TcpStream::poll_peek`] level documentation for more details.
///
/// # Examples
///
@@ -150,7 +150,7 @@ impl OwnedReadHalf {
/// connected, without removing that data from the queue. On success,
/// returns the number of bytes peeked.
///
-/// See the [`TcpStream::peek`] level documenation for more details.
+/// See the [`TcpStream::peek`] level documentation for more details.
///
/// [`TcpStream::peek`]: TcpStream::peek
///
8 changes: 4 additions & 4 deletions tokio/src/net/udp.rs
@@ -699,7 +699,7 @@ impl UdpSocket {
/// [`connect`]: method@Self::connect
pub fn poll_recv(&self, cx: &mut Context<'_>, buf: &mut ReadBuf<'_>) -> Poll<io::Result<()>> {
let n = ready!(self.io.registration().poll_read_io(cx, || {
-// Safety: will not read the maybe uinitialized bytes.
+// Safety: will not read the maybe uninitialized bytes.
let b = unsafe {
&mut *(buf.unfilled_mut() as *mut [std::mem::MaybeUninit<u8>] as *mut [u8])
};
@@ -985,7 +985,7 @@ impl UdpSocket {
///
/// # Returns
///
-/// If successfull, returns the number of bytes sent
+/// If successful, returns the number of bytes sent
///
/// Users should ensure that when the remote cannot receive, the
/// [`ErrorKind::WouldBlock`] is properly handled. An error can also occur
@@ -1100,7 +1100,7 @@ impl UdpSocket {
buf: &mut ReadBuf<'_>,
) -> Poll<io::Result<SocketAddr>> {
let (n, addr) = ready!(self.io.registration().poll_read_io(cx, || {
-// Safety: will not read the maybe uinitialized bytes.
+// Safety: will not read the maybe uninitialized bytes.
let b = unsafe {
&mut *(buf.unfilled_mut() as *mut [std::mem::MaybeUninit<u8>] as *mut [u8])
};
@@ -1239,7 +1239,7 @@ impl UdpSocket {
buf: &mut ReadBuf<'_>,
) -> Poll<io::Result<SocketAddr>> {
let (n, addr) = ready!(self.io.registration().poll_read_io(cx, || {
-// Safety: will not read the maybe uinitialized bytes.
+// Safety: will not read the maybe uninitialized bytes.
let b = unsafe {
&mut *(buf.unfilled_mut() as *mut [std::mem::MaybeUninit<u8>] as *mut [u8])
};
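For completeness, a small send/receive sketch for the `UdpSocket` methods whose docs are touched above; it binds to ephemeral localhost ports, so no fixed addresses are needed (assumes tokio's `net` and `macros` features):

```rust
use tokio::net::UdpSocket;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let sock = UdpSocket::bind("127.0.0.1:0").await?;
    let peer = UdpSocket::bind("127.0.0.1:0").await?;
    sock.connect(peer.local_addr()?).await?;

    // On success, `send` returns the number of bytes sent.
    let n = sock.send(b"ping").await?;
    assert_eq!(n, 4);

    let mut buf = [0u8; 16];
    let (len, addr) = peer.recv_from(&mut buf).await?;
    assert_eq!(&buf[..len], b"ping");
    assert_eq!(addr, sock.local_addr()?);
    Ok(())
}
```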