
[docs] Improve Mutex FIFO explanation #3615

Merged
merged 1 commit on Mar 12, 2021
11 changes: 6 additions & 5 deletions tokio/src/sync/mutex.rs
@@ -71,13 +71,13 @@ use std::sync::Arc;
 /// async fn main() {
 ///     let count = Arc::new(Mutex::new(0));
 ///
-///     for _ in 0..5 {
+///     for i in 0..5 {
 ///         let my_count = Arc::clone(&count);
 ///         tokio::spawn(async move {
-///             for _ in 0..10 {
+///             for j in 0..10 {
 ///                 let mut lock = my_count.lock().await;
 ///                 *lock += 1;
-///                 println!("{}", lock);
+///                 println!("{} {} {}", i, j, lock);
 ///             }
 ///         });
 ///     }
@@ -100,9 +100,10 @@ use std::sync::Arc;
 /// Tokio's Mutex works in a simple FIFO (first in, first out) style where all
 /// calls to [`lock`] complete in the order they were performed. In that way the
 /// Mutex is "fair" and predictable in how it distributes the locks to inner
-/// data. This is why the output of the program above is an in-order count to
-/// 50. Locks are released and reacquired after every iteration, so basically,
+/// data. Locks are released and reacquired after every iteration, so basically,
 /// each thread goes to the back of the line after it increments the value once.
+/// Note that there's some unpredictability to the timing between when the
+/// threads are started, but once they are going they alternate predictably.
Comment on lines 104 to +106
Contributor
Strictly speaking this is not enforced by the above, since a thread may for some reason spend a lot of time going from the unlock to starting the next lock, but I think this is fine.
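
To make the reviewer's point concrete, here is a minimal sketch (not part of the PR; the two-task setup and the 1 ms sleep are illustrative assumptions) of a task that spends time between releasing the lock and making its next lock call. Its next call to lock simply joins the back of whatever queue exists at that moment, so FIFO ordering of lock calls alone does not guarantee strict alternation:

    use std::sync::Arc;
    use std::time::Duration;
    use tokio::sync::Mutex;

    #[tokio::main]
    async fn main() {
        let count = Arc::new(Mutex::new(0));
        let mut handles = Vec::new();

        for i in 0..2 {
            let my_count = Arc::clone(&count);
            handles.push(tokio::spawn(async move {
                for j in 0..5 {
                    {
                        let mut lock = my_count.lock().await;
                        *lock += 1;
                        println!("task {} iteration {} count {}", i, j, lock);
                    } // the lock is released here, before the pause below
                    // Pausing between the unlock and the next `lock` call
                    // means this task rejoins the back of whatever queue
                    // exists when it asks again, so other tasks' pending
                    // `lock` calls are served first.
                    tokio::time::sleep(Duration::from_millis(1)).await;
                }
            }));
        }

        for handle in handles {
            handle.await.unwrap();
        }
    }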

 /// Finally, since there is only a single valid lock at any given time, there is
 /// no possibility of a race condition when mutating the inner value.
 ///
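For reference, a complete runnable version of the updated doc example (a sketch: collecting the JoinHandles and the final print are conveniences added here, not part of the doc comment, and it assumes tokio with the "full" feature enabled):

    use std::sync::Arc;
    use tokio::sync::Mutex;

    #[tokio::main]
    async fn main() {
        let count = Arc::new(Mutex::new(0));
        let mut handles = Vec::new();

        for i in 0..5 {
            let my_count = Arc::clone(&count);
            handles.push(tokio::spawn(async move {
                for j in 0..10 {
                    let mut lock = my_count.lock().await;
                    *lock += 1;
                    // Prints the task index, the iteration, and the current
                    // count; with FIFO lock hand-off the count column climbs
                    // in order even though i and j interleave.
                    println!("{} {} {}", i, j, lock);
                }
            }));
        }

        // Wait for every task so all 50 increments are observed.
        for handle in handles {
            handle.await.unwrap();
        }
        println!("Count hit {}.", *count.lock().await);
    }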