
Guarantee that File::write_all writes all data (or at least tries) #4316

Merged
merged 18 commits into tokio-rs:master on Jan 25, 2022

Conversation

BraulioVM
Contributor

@BraulioVM commented Dec 12, 2021

Motivation

Ref: #4296
#4296 (comment)

In some cases, one could time successfully awaited calls to write_all (or write) with a shutdown of the runtime,
and have the write not even be attempted. This can be a bit surprising.

The purpose of this PR is to find a way (if possible) to fix that. There would be no guarantee that the write actually
succeeds (any OS error could be hit at the time the write actually gets executed), but at least it would be attempted.
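
For context, a minimal repro sketch of the race (based on the description in #4296; the file path and setup here are illustrative, not the exact reproduction from the issue):

use tokio::fs::File;
use tokio::io::AsyncWriteExt;

fn main() -> std::io::Result<()> {
    let rt = tokio::runtime::Runtime::new()?;
    rt.block_on(async {
        let mut file = File::create("foo.txt").await?;
        // The await below can complete successfully even though the actual
        // write is deferred to a spawn_blocking task.
        file.write_all(b"some bytes").await?;
        Ok::<_, std::io::Error>(())
    })?;
    // Before this PR, dropping the runtime here could discard the queued
    // blocking task, so "foo.txt" could end up empty despite the successful await.
    drop(rt);
    Ok(())
}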

Solution

I have found a sequence of events that leads to spawn_blocking tasks being "ignored". I've written a note about it
in a comment. I'm not sure if it's intentional that we won't try draining the queue of blocking tasks before shutting down.
Couldn't we tweak the shutdown logic to execute all tasks that were scheduled before the call to shutdown?
If users are concerned about shutting down the runtime taking a long time because of blocking tasks, they can call
https://docs.rs/tokio/latest/tokio/runtime/struct.Runtime.html#method.shutdown_timeout or shutdown_background.

The documentation in https://docs.rs/tokio/latest/tokio/runtime/struct.Runtime.html#shutdown says:

The current thread will block until the shut down operation has completed.
- Drain any scheduled work queues.

So it should already be expected that shutting down a runtime could block to some extent?

Do you think it would make sense to change the shutdown logic to execute all pending tasks? If so, I can figure out how to
do the code change.

@github-actions bot added the R-loom (Run loom tests on this PR) label Dec 12, 2021
@BraulioVM
Contributor Author

No need to run CI on this one because I haven't changed any code yet, but I don't think I can disable it myself

@Darksonn added the A-tokio (Area: The main tokio crate), M-runtime (Module: tokio/runtime), M-task (Module: tokio/task), and M-fs (Module: tokio/fs) labels Dec 12, 2021
@Darksonn
Contributor

I'm not so sure about making this change for all spawn_blocking tasks. Certainly any tasks spawned after shutdown should not run. It's also unclear what should happen to tasks still in the queue if every thread up to the limit is currently taken.

On the other hand, it would be fine to always run tasks spawned just before shutdown while the pool isn't full. That behavior can already happen under the current implementation, so it would certainly not be a breaking change.

@BraulioVM
Contributor Author

Certainly any tasks spawned after shutdown should not run

I agree. I wouldn't change this bit of code, which handles that

pub(crate) fn spawn(&self, task: Task, rt: &Handle) -> Result<(), ()> {
    let shutdown_tx = {
        let mut shared = self.inner.shared.lock();
        if shared.shutdown {
            // Shutdown the task
            task.shutdown();
            // no need to even push this task; it would never get picked up
            return Err(());
        }
        shared.queue.push_back(task);

It's also unclear what should happen to tasks still in the queue if every thread up to the limit is currently taken.

So this would mean something like: once shutdown is called, every thread that is currently idle will get a chance to execute exactly one task before completely terminating. Is that more or less it?

Note that even in the current implementation, it could be the case that once shutdown is called, a thread in the blocking pool could still take on and execute an arbitrary number of blocking tasks before terminating. Say for example you have just one thread in the blocking pool, and you spawn_blocking a task that takes 30s to complete, then, before that blocking task has finished, you schedule 500 more blocking tasks, and finally you call shutdown. The thread in the blocking pool would probably be executing

while let Some(task) = shared.queue.pop_front() {
    drop(shared);
    task.run();
    shared = self.shared.lock();
}

and won't notice the shutdown until it has finished going through all the tasks in the queue. (I haven't actually tested this; it's just my understanding of the code.)

In that sense, it's already possible that all the blocking tasks in the queue get completed before the shutdown finishes, even if all threads were taken at the time shutdown was called.

@Darksonn
Contributor

Interesting.

@BraulioVM
Contributor Author

(I haven't actually tested this, it's just my understanding of the code).

I have confirmed this is the case. The following code:

#[test]
fn blocking_tasks_block_shutdown() {
    let rt = tokio::runtime::Builder::new_multi_thread()
        .worker_threads(1)
        .max_blocking_threads(1)
        .build()
        .unwrap();
    rt.block_on(async {
        for i in 0..10 {
            println!(
                "TT: {:?} Scheduling task {}",
                std::time::Instant::now(),
                i);
            let j = i;
            rt.spawn_blocking(move || {
                println!(
                    "TT: {:?} Initiating task {}",
                    std::time::Instant::now(),
                    j
                );
                std::thread::sleep(
                    std::time::Duration::from_secs(3)
                );
            });
        }

    });
    println!(
        "TT: {:?} done scheduling tasks",
        std::time::Instant::now(),
    );
    drop(rt);
}

produces the following output:

$ cargo test --all-features blocking_tasks_block -- --nocapture |& grep TT
TT: Instant { tv_sec: 500504, tv_nsec: 186272702 } Scheduling task 0
TT: Instant { tv_sec: 500504, tv_nsec: 186324929 } Scheduling task 1
TT: Instant { tv_sec: 500504, tv_nsec: 186330571 } Scheduling task 2
TT: Instant { tv_sec: 500504, tv_nsec: 186333739 } Scheduling task 3
TT: Instant { tv_sec: 500504, tv_nsec: 186335307 } Scheduling task 4
TT: Instant { tv_sec: 500504, tv_nsec: 186336593 } Scheduling task 5
TT: Instant { tv_sec: 500504, tv_nsec: 186339816 } Scheduling task 6
TT: Instant { tv_sec: 500504, tv_nsec: 186341120 } Scheduling task 7
TT: Instant { tv_sec: 500504, tv_nsec: 186345392 } Scheduling task 8
TT: Instant { tv_sec: 500504, tv_nsec: 186346793 } Scheduling task 9
TT: Instant { tv_sec: 500504, tv_nsec: 186351871 } done scheduling tasks
TT: Instant { tv_sec: 500504, tv_nsec: 186394403 } Initiating task 0
TT: Instant { tv_sec: 500504, tv_nsec: 186411780 } Initiating task 1
TT: Instant { tv_sec: 500507, tv_nsec: 186532582 } Initiating task 2
TT: Instant { tv_sec: 500507, tv_nsec: 186533490 } Initiating task 3
TT: Instant { tv_sec: 500510, tv_nsec: 186726888 } Initiating task 4
TT: Instant { tv_sec: 500510, tv_nsec: 186795241 } Initiating task 5
TT: Instant { tv_sec: 500513, tv_nsec: 187019986 } Initiating task 6
TT: Instant { tv_sec: 500513, tv_nsec: 187021385 } Initiating task 7
TT: Instant { tv_sec: 500516, tv_nsec: 187261531 } Initiating task 8
TT: Instant { tv_sec: 500516, tv_nsec: 187261759 } Initiating task 9

Note that all tasks get executed even 10 seconds after calling drop on the runtime. I'm actually a bit surprised that tasks get executed in groups of two, but anyway.

@BraulioVM
Contributor Author

As for ways forward, I can think of three alternatives:

  1. We accept and embrace the fact that shutting down a runtime will block on completion of all blocking tasks that were scheduled before the shutdown. In this sense, the fact that, in some very specific circumstances, blocking tasks that were scheduled before shutdown do not get executed should be considered a bug. Users that do not want this behaviour can (and should) call shutdown_timeout or shutdown_background. I don't think that embracing this aspect of shutting down would negatively affect any existing users, because in many cases their applications were probably already blocking on shutdown, as evidenced by my example above.
  2. We treat the blocking nature of shutdown as a bug and attempt to fix it. I don't think this perspective is backed by any of the tokio documentation. I believe this could break existing users, but I don't really have any data to back this claim (other than the issue referenced above, where users were surprised that some tasks wouldn't get executed).
  3. We go for something in the middle, where we try something like #4316 (comment). I believe this could be a bit confusing and hard to explain to users ("depending on the timing of operations between threads, the runtime may either block until all blocking tasks have been executed, or until each idle thread in the blocking pool has executed one of the pending tasks in the pool. Some tasks may get discarded after one task has been assigned to each idle thread."). I'm not sure if I'm mischaracterising what @Darksonn meant in that comment though.

In my opinion as a tokio noob, option 1 makes more sense. I want to hear what the experts think though.

Any thoughts?

@carllerche
Member

Thoughts:

  • Runtime shutdown is intended to not be graceful, favoring cancellation when possible, deferring graceful shutdown to the user.
  • write_all losing data is a bug

@BraulioVM
Contributor Author

Leaving the blocking nature of shutdown aside for a moment, couldn't we fix the original issue by doing a flush at the end of write_all? As @Darksonn pointed out, we cannot make write wait for the data to be written (#4296 (comment)), but I don't see why we couldn't change write_all to do a flush at the end. In fact, the documentation for write_all says:

This method will not return until the entire buffer has been successfully written or such an error occurs

which we know is not true - the method may return before the last batch of data has actually been written to the file. Also, errors originating from the last write will not be returned to the user. The documentation also says that write_all is not cancellation-safe, which was the reason we couldn't change write, so maybe we can afford the change here?

If you agree, I could make a change in https://github.com/tokio-rs/tokio/blob/master/tokio/src/io/util/write_all.rs#L40 and then we could deal with runtime shutdown and blocking tasks in a separate PR

@Darksonn
Contributor

For one, this would affect all IO resources - they are not customizable. For another, I still think it's a bug for write as well.

@BraulioVM
Contributor Author

For one, this would affect all IO resources - they are not customizable.

But is that a problem? I see that flush is a no-op on UNIX domain sockets and TCP streams. Not sure about other IO resources though.

For another, I still think it's a bug for write as well.

As in, write should wait for the content to actually be written? You pointed out that changing that could be a breaking change. Are you saying the breaking change is worth it because it's preferable to have write wait for the writing result?

Or are you saying that write may not wait for the data to be written but that we should make sure the associated blocking task gets executed?

@Darksonn
Contributor

Darksonn commented Dec 14, 2021

But is that a problem? I see that flush is a no-op on UNIX domain sockets and TCP streams. Not sure about other IO resources though.

The write_all method should not include a call to flush. E.g. if someone wraps their tcp stream in a BufWriter, then they should be able to call write_all with many small segments and flush afterwards.
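
For illustration, a small sketch of the pattern being described (the function name and TCP setup are just examples, not anything from the PR):

use tokio::io::{AsyncWriteExt, BufWriter};
use tokio::net::TcpStream;

async fn send_segments(stream: TcpStream, segments: &[&[u8]]) -> std::io::Result<()> {
    let mut writer = BufWriter::new(stream);
    for &segment in segments {
        // Each write_all only fills the in-memory buffer; if write_all flushed
        // internally, the buffering would be defeated.
        writer.write_all(segment).await?;
    }
    // One explicit flush at the end pushes everything to the underlying socket.
    writer.flush().await?;
    Ok(())
}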

For another, I still think it's a bug for write as well.

As in, write should wait for the content to actually be written? You pointed out that changing that could be a breaking change. Are you saying the breaking change is worth it because it's preferable to have write wait for the writing result?

Or are you saying that write may not wait for the data to be written but that we should make sure the associated blocking task gets executed?

I mean that the runtime should wait for the spawn_blocking task associated with the write call when shutting down. It's not possible to wait for the task in the write call itself.

@BraulioVM
Contributor Author

The write_all method should not include a call to flush. E.g. if someone wraps their tcp stream in a BufWriter, then they should be able to call write_all with many small segments and flush afterwards.

Got it, that makes sense.

@BraulioVM
Contributor Author

I mean that the runtime should wait for the spawn_blocking task associated with the write call when shutting down. It's not possible to wait for the task in the write call itself.

Ok makes sense. What if:

  1. We extend the UnownedTask struct to also contain (or to expose from an inner field, like the RawTask? No idea where this would live yet) an is_mandatory: bool field.
  2. The value of is_mandatory for tasks spawned with spawn_blocking will be false.
  3. We will add a new method spawn_mandatory_blocking that will make sure to set is_mandatory: true in the created tasks.
  4. File will spawn its file-writing tasks using this new spawn_mandatory_blocking function.
  5. On shutdown, threads in the blocking pool will work through the queue of blocking tasks checking the is_mandatory field. If the field is false, the task will be cancelled. If the field is true, the task will be executed. (We may deal with the race in the shutdown logic in a separate PR).

Does this sound like a promising approach? I have also thought about extending the runtime to know about Files specifically, and the operations associated with them, but it seems to me that leveraging the BlockingPool for all the file IO is a good idea. It feels a bit weird though to have a general-sounding concept ("mandatory blocking task") to achieve a very concrete goal (writes on Files do not get lost) and nothing else.
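
To make step 5 concrete, here is a self-contained, hypothetical sketch of that drain logic; QueuedTask and the field names are illustrative and do not mirror tokio's actual blocking-pool types:

use std::collections::VecDeque;

struct QueuedTask {
    is_mandatory: bool,
    run: Box<dyn FnOnce() + Send>,
}

// What a blocking-pool worker could do with the remaining queue once shutdown
// has been signaled: run the tasks that must not be lost, cancel the rest.
fn drain_after_shutdown(queue: &mut VecDeque<QueuedTask>) {
    while let Some(task) = queue.pop_front() {
        if task.is_mandatory {
            // e.g. a queued file write spawned via spawn_mandatory_blocking
            (task.run)();
        } else {
            // ordinary spawn_blocking work is cancelled on shutdown
            drop(task.run);
        }
    }
}

fn main() {
    let mut queue: VecDeque<QueuedTask> = VecDeque::new();
    queue.push_back(QueuedTask {
        is_mandatory: true,
        run: Box::new(|| println!("performing the queued file write")),
    });
    queue.push_back(QueuedTask {
        is_mandatory: false,
        run: Box::new(|| println!("never runs: cancelled on shutdown")),
    });
    drain_after_shutdown(&mut queue);
}

The real change would of course live inside the blocking pool's shutdown path rather than in a free function like this.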

There would still be some open questions:

  • Better names for all of these concepts (is_mandatory, spawn_mandatory_blocking)
  • What to do about shutdown_background and shutdown_timeout? Is it fine to lose the writes there?
  • Do we expose these new APIs (spawn_mandatory_blocking, basically) to the users? Or do we keep them internal to the library? It makes sense to me that we would want to keep them private. As @carllerche says, users should handle graceful shutdown on their own and not rely on the runtime shutdown to do it for them.

@Darksonn
Contributor

This is more or less what I was thinking the solution would be myself, so it makes sense.

Better names for all of these concepts (is_mandatory, spawn_mandatory_blocking)

The names are ok for now.

What to do about shutdown_background and shutdown_timeout? Is it fine to lose the writes there?

The operations will continue in the background if you use them. If someone exits the process before that, then we lose the writes, but there's not anything we can do about that.

Do we expose these new APIs (spawn_mandatory_blocking, basically) to the users?

No.

@BraulioVM
Contributor Author

Cool! I'll look into it

@BraulioVM
Contributor Author

This is still a work in progress.

I've clumsily written what I think is a fix. I have no idea whether I put the is_mandatory field in the right type because I'm still figuring out how the task-adjacent types relate to each other.

I am thinking about how to test the fix. I managed to test it manually using the code in the original issue. Without the fix, that test fails within at most a few hundred iterations. With the fix, I had to ctrl-c the script after 60K iterations, as it wasn't failing anymore. However, this is not a very good test for the automated test suite.

I was thinking about using loom to prove the fix is actually a fix, but I have seen it's only used for testing the concurrency components, and not to test higher-level components like the runtime, so maybe loom is not a good candidate for the test that I want to write? The test would just be something like (pseudocode):

loom::model(|| {
    runtime.block_on(async {
        let mut file = File::create("foo.txt").await?;

        file.write_all(b"some bytes").await?;
        Ok(())
    });
    drop(runtime);
    // check file contains "some bytes"
});

but maybe it's not possible to do this...

@Darksonn
Contributor

You can probably write literally the loom test you posted here. It would go in this directory. Though we might want the test to just use spawn_blocking_mandatory directly so we don't need to actually involve the file system.

@BraulioVM
Contributor Author

BraulioVM commented Dec 27, 2021

I tried writing a loom test but couldn't manage to write one that would fail when using spawn_blocking in the way that I would expect. Maybe it would have, but if so, it would have taken >30 minutes. As a sanity check, I've written a test that doesn't use loom but uses good-ol' looping, trying the same thing many times and hoping for the best. The test is called mandatory_blocking_tasks_get_executed, and you can find it in my last commit. The goal of the test is:

  1. To fail quickly when using spawn_blocking instead of spawn_mandatory_blocking. This shows that the test is effective at covering what it's intended to cover.
  2. To succeed when using spawn_mandatory_blocking, to show the fix is actually a fix.

The approach is not great for some reasons:

  1. It's very coupled to the implementation of the blocking pool. For example, the test makes sure to spawn a first blocking task before spawning the one that is supposed to mutate the atomic boolean. The test has to tickle the blocking pool in a very specific way to get a failure when using spawn_blocking.
  2. It's non-deterministic. If there is a test failure, that's deterministic evidence of an error in our implementation (assuming the test is implemented correctly). However, the absence of a test failure is not evidence that the implementation is correct. Classic concurrency. I would love to have got the loom test working.

On the other hand, when using spawn_blocking instead of spawn_mandatory_blocking the test fails relatively quickly (see here; the numbers represent how many iterations it took the test code to fail). That's good in my opinion.
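
For reference, a rough, hypothetical reconstruction of that looping approach (not the PR's actual mandatory_blocking_tasks_get_executed test; plain spawn_blocking is used here, which is exactly the variant described above as failing within a few hundred iterations):

use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

#[test]
fn blocking_task_runs_despite_shutdown_sketch() {
    for _ in 0..1_000 {
        let rt = tokio::runtime::Builder::new_multi_thread()
            .worker_threads(1)
            .max_blocking_threads(1)
            .build()
            .unwrap();

        // Run and await one blocking task first so the single blocking thread
        // is already up and idle before the interesting task is spawned.
        rt.block_on(rt.spawn_blocking(|| {})).unwrap();

        let did_run = Arc::new(AtomicBool::new(false));
        let flag = did_run.clone();
        // With plain spawn_blocking this task is sometimes discarded during
        // shutdown; the PR's internal spawn_mandatory_blocking is meant to make
        // the assertion below always hold.
        let _ = rt.spawn_blocking(move || flag.store(true, Ordering::SeqCst));

        drop(rt);
        assert!(did_run.load(Ordering::SeqCst));
    }
}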

What do you think of the testing approach? Is it acceptable? Or should I keep trying to get a loom test that showcases the issue?

Once we agree on the testing approach I will proceed to clean-up the implementation (of both the tests and the fix).

@Darksonn
Contributor

Darksonn commented Dec 28, 2021

This loom test fails, and if your PR is correct, then it should succeed if changed to use spawn_blocking_mandatory. (I didn't try it with your PR.)

use crate::runtime::tests::loom_oneshot;

#[test]
fn spawn_blocking_should_always_run() {
    loom::model(|| {
        let rt = Builder::new_current_thread().build().unwrap();

        let (tx, rx) = loom_oneshot::channel();
        rt.spawn_blocking(|| {});
        rt.spawn_blocking(move || {
            let _ = tx.send(());
        });

        drop(rt);

        // This call will deadlock if `spawn_blocking` doesn't run.
        let () = rx.recv();
    });
}

Note that loom will catch the deadlock and panic, so the above test doesn't actually hang. You can put the test in one of the files in src/runtime/tests or make a new file in that directory.

@BraulioVM
Contributor Author

Thank you very much for helping out @Darksonn! That test fails as expected with spawn_blocking and succeeds with spawn_mandatory_blocking (at least using LOOM_MAX_PREEMPTIONS=5). I will remove the previous test I wrote, include the one you suggested, and clean up the implementation of the fix

@BraulioVM
Contributor Author

Well there are a few builds failing that I could put more time into fixing, but I wanted to ask first whether the approach seems reasonable. I'm most interested in whether adding the is_mandatory: bool field to UnownedTask was the right thing to do. I personally have no idea. On the one hand, it currently is just a reference-counted handle to a task, and maybe it's very important to keep it small. On the other hand, it's the only type that applies specifically to blocking tasks, which the change is concerned with.

Is it fine to put the is_mandatory field in the UnownedTask? Do you have suggestions about other places where to store this information?

Comment on lines 30 to 31
F: FnOnce() -> R + Send + 'static,
R: Send + 'static,
Contributor

Rustfmt doesn't look inside macro invocations.

Suggested change (indentation only):
F: FnOnce() -> R + Send + 'static,
R: Send + 'static,

Comment on lines 204 to 208
/// This type holds two ref-counts.
pub(crate) struct UnownedTask<S: 'static> {
    raw: RawTask,
    is_mandatory: bool,
    _p: PhantomData<S>,
}
Contributor

I suppose it makes sense to put it here, but maybe we should either rename the struct to something like BlockingTask or put the boolean in a wrapper struct?

Contributor Author

I've moved things around a bit. I took your advice and defined a wrapper struct that contains an UnownedTask and an is_mandatory field. The wrapper struct is defined in pool.rs, next to where it's used. Unit tests that use unowned can keep using it without caring about the is_mandatory field.

(I've also changed is_mandatory: bool to a mandatory: Mandatory field, where Mandatory is a new enum. I did this to make call sites that pass this parameter easier to read.)
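
Roughly, the shape described above looks like this (illustrative names only, not tokio's exact definitions):

enum Mandatory {
    Mandatory,
    NonMandatory,
}

// The blocking pool wraps the task handle together with the flag, so the
// task types themselves stay unchanged.
struct BlockingTask<T> {
    task: T, // stands in for UnownedTask
    mandatory: Mandatory,
}

impl<T> BlockingTask<T> {
    fn is_mandatory(&self) -> bool {
        matches!(self.mandatory, Mandatory::Mandatory)
    }
}

At a call site, passing Mandatory::NonMandatory reads better than a bare false.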

@Darksonn
Contributor

I do think that the approach makes sense.

@Darksonn
Contributor

Ah, no, I had just missed the notification that you updated the PR.

Comment on lines +61 to +65
let (tx, rx) = loom_oneshot::channel();
let handle = runtime::spawn_mandatory_blocking(move || {
    let _ = tx.send(());
});
Contributor

Should this test not do an extra spawn_blocking first like the other test?

Contributor Author

Each test is covering a different race related to shutting down:

  1. The first one makes sure that, if a thread in the blocking pool gets woken up after shutdown has already been signaled, all mandatory tasks still get executed. Reaching this scenario requires the initial spawn_blocking task; otherwise the thread in the blocking pool will not get to check the shutdown flag before executing the mandatory task.
  2. The second one makes sure that, if the runtime has already shut down by the time the spawning thread tries to spawn the mandatory task, an error is communicated to the caller. (Or the contrapositive: if calling spawn_mandatory_blocking doesn't err, the task will be executed.)

I've checked that both tests are useful by changing the implementations slightly and seeing them fail.

I could also write a test that covers the combination of both races if you want, but it wouldn't add a lot of value IIUC because both races are independent

@Darksonn
Contributor

Darksonn commented Jan 21, 2022

Besides my comment on the error, I am fine with this.

Commit messages from the commits pushed to the PR:

The `UnownedTask` type will have an extra field called `is_mandatory`
which will determine whether the tasks get executed at shutdown. Maybe
the `RawTask` is a better place to put this field, but I don't know my
way around these types yet.

I have verified that this code change fixes the problem by running the
code in the original issue a hundred thousand times. Without the fix,
that code consistently misses a write in the first few hundred
executions. With the change, I've never seen it miss the write.

I think we might be able to use `loom` to test this. Will try to do so.

This is just a prototype to communicate that I didn't get the loom test
to work and to see if this testing approach would be accepted. It's not
great, because the test implementation is very coupled to the blocking
pool implementation (e.g., spawning a previous blocking task and awaiting
on it before launching the offending task). However, it takes 150ms
to run on my machine when it succeeds, and fails in the first few
attempts when using `spawn_blocking` instead of
`spawn_mandatory_blocking`, which is a good sign.

The previous approach added it to the `UnownedTask` type. However, the
type is also used in contexts where the concept of mandatory doesn't
apply (unit tests).

This new approach adds the information about "mandatory-ness" right
where it makes sense: the blocking pool.

The purpose of this enum is to be just like the `is_mandatory: bool`,
only that it makes the call site easier to understand.

I have also introduced some methods in the `pool::Task` struct to make
the code a bit nicer.

There are only two pieces of code that use `spawn_blocking_inner`:

1. `tokio::fs::File`, which uses a mock of `spawn_blocking_inner` in
    the tests.
2. The loom test that covers it.

All other test builds result in the function not being used, hence the `allow(dead_code)` attribute.

When running `cargo test` on `<repo>/tokio`, you would expect that
`rustc` only gets executed against `tokio` using the `--test` flag,
which would make the `cfg_attr(test, ...)` conditional attribute apply.
That's not true though. When running `cargo test` on `tokio`, the tokio
crate gets built *twice*, once without `--test` and once with (this can
be verified using strace). This is because, e.g., tokio specifies a
dev-dependency on `tokio-test`, which in turn depends on
`tokio`.

So when running a regular test build, tokio will first get built without
`--test`. We will not get dead code errors there because for that build,
`tokio::fs::File` uses the new `spawn_mandatory_blocking`. For the next
build, the one with `--test`, we will not get dead code errors because,
even though `tokio::fs::File` uses a mock `spawn_mandatory_blocking`, the
`cfg_attr(test, ...)` is working as expected.

Things are different for loom builds. We will first get a build of tokio
without `--test` but with the `--cfg loom` flag. The fact that
`tokio::fs::File` uses `spawn_mandatory_blocking` won't save us here,
because `tokio::fs` doesn't get compiled in loom builds.

The solution I can think of is extending the `cfg_attr(test, ...)` to
`cfg_attr(any(test, loom), ...)`, which is a bit unfortunate because
`spawn_mandatory_blocking` *is* used in the loom tests, only not in the
regular loom build of the crate, which is triggered by the cyclical
dependency between tokio and tokio-test.
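
For concreteness, the attribute change being described looks roughly like this (the `allow(dead_code)` target is inferred from the commit message; the function name is a stand-in):

// Before: the unused-function warning was only suppressed for `--test` builds.
// #[cfg_attr(test, allow(dead_code))]

// After: `--cfg loom` builds (compiled without `--test`) are covered as well.
#[cfg_attr(any(test, loom), allow(dead_code))]
fn only_used_by_fs_and_loom_tests() {}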
On a previous iteration of this PR, there could be a race between
calling `spawn_mandatory_blocking` and dropping the runtime. This could
result in, e.g., a call to `File::write` returning successfully but the
actual write never getting scheduled.

Now `spawn_mandatory_blocking` will keep track of whether the task was
actually scheduled. If not, the `File::write` method will return an
error, letting users know that the write did not, and will not, happen.

I have also added a loom test that checks that the return value of
`spawn_mandatory_blocking` can be trusted, even when shutting down from
a different thread.
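
A hedged sketch of the caller-side behavior described above (hypothetical signatures; not tokio's internal API):

use std::io;

// Stand-in for the internal spawn_mandatory_blocking. A real implementation
// would queue `f` on the blocking pool and return None if the runtime is
// already shutting down; here we just run it inline.
fn spawn_mandatory_blocking<F: FnOnce() + Send + 'static>(f: F) -> Option<()> {
    f();
    Some(())
}

fn queue_file_write(buf: Vec<u8>) -> io::Result<()> {
    match spawn_mandatory_blocking(move || {
        // ...perform the actual std::fs write of `buf` here...
        drop(buf);
    }) {
        Some(()) => Ok(()),
        // The runtime is shutting down: tell the caller the write will not happen.
        None => Err(io::Error::new(
            io::ErrorKind::Other,
            "background task failed: runtime shutting down",
        )),
    }
}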
This way it is consistent with `asyncify`
@BraulioVM
Contributor Author

Besides my comment on the error, I am fine with this.

I believe this is done then?

Contributor

@Darksonn left a comment

Yeah, seems good.

@Darksonn merged commit 7aad428 into tokio-rs:master Jan 25, 2022
@Darksonn
Contributor

Thanks!

@carllerche
Member

I missed this, but I just wanted to say thanks to all and this solution is simpler than anything I had thought of yet. Well done 👍
