
The tokio::io::empty combinator does not participate in coop #4291

Closed
francis-starlab opened this issue Dec 1, 2021 · 5 comments · Fixed by #4300
Labels
A-tokio (Area: The main tokio crate), C-bug (Category: This is a bug), M-coop (Module: tokio/coop), M-io (Module: tokio/io)

Comments

francis-starlab commented Dec 1, 2021

Version
List the versions of all tokio crates you are using. The easiest way to get this information is using the cargo tree subcommand:

cargo tree | grep tokio

└── tokio v1.14.0
    └── tokio-macros v1.6.0 (proc-macro)

Platform
The output of uname -a (UNIX), or version and 32 or 64-bit (Windows)
Linux francis-debian 5.10.0-9-amd64 #1 SMP Debian 5.10.70-1 (2021-09-30) x86_64 GNU/Linux

Description

I tried this code:

use std::time::Duration;
use tokio::io::AsyncReadExt;
use tokio::time::timeout;

#[tokio::main]
async fn main() {
    if timeout(Duration::from_millis(1000), async {
        loop {
            let mut buf = [0u8; 4096];
            let _ = tokio::io::empty().read(&mut buf).await;
        }
    }).await.is_err() {
        println!("timedout");
    };
}

I expected to see this happen:

I expected a timeout to occur after 1 second and for the process to exit. Instead, it hangs forever.

francis-starlab added A-tokio (Area: The main tokio crate) and C-bug (Category: This is a bug) labels Dec 1, 2021
hawkw (Member) commented Dec 1, 2021

Reading from io::empty always returns immediately, so the loop will never yield. The timeout cannot fire unless the main task yields, because it's blocking the thread.

If you spawn the async block inside the timeout as a separate task, you should see it time out as expected: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=8c1de2e3ce490e8d57ff848f32ae95db
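
For reference, a minimal sketch of that workaround (the linked playground may differ in details): spawn the busy loop as its own task and apply the timeout to its JoinHandle, so the main task stays free to be woken by the timer.

use std::time::Duration;
use tokio::io::AsyncReadExt;
use tokio::time::timeout;

#[tokio::main]
async fn main() {
    // Run the busy loop on its own task (multi-threaded runtime), so it no
    // longer blocks the task that owns the timeout.
    let busy = tokio::spawn(async {
        loop {
            let mut buf = [0u8; 4096];
            let _ = tokio::io::empty().read(&mut buf).await;
        }
    });

    if timeout(Duration::from_millis(1000), busy).await.is_err() {
        println!("timedout");
    }

    // The detached loop never yields, so end the process explicitly instead
    // of waiting for runtime shutdown.
    std::process::exit(0);
}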

hawkw (Member) commented Dec 1, 2021

It occurs to me that io::empty should maybe participate in cooperative yielding to prevent busy-loops like this from hanging the entire runtime...
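
Until something like that lands, one workaround (not suggested in this thread, just a common pattern) is to yield manually inside such loops. A minimal sketch based on the original reproduction:

use std::time::Duration;
use tokio::io::AsyncReadExt;
use tokio::time::timeout;

#[tokio::main]
async fn main() {
    if timeout(Duration::from_millis(1000), async {
        loop {
            let mut buf = [0u8; 4096];
            let _ = tokio::io::empty().read(&mut buf).await;
            // Returning Pending here hands control back to the scheduler and
            // gives the timeout a chance to poll its timer.
            tokio::task::yield_now().await;
        }
    }).await.is_err() {
        println!("timedout");
    };
}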

Darksonn changed the title from "Timeout not timing out." to "The tokio::io::empty combinator does not participate in coop" Dec 2, 2021
Darksonn added M-coop (Module: tokio/coop) and M-io (Module: tokio/io) labels Dec 2, 2021
Darksonn (Contributor) commented Dec 2, 2021

Thanks for reporting this. We should make sure that io::empty and friends participate in coop.

francis-starlab (Author) commented Dec 2, 2021

Possibly related/same thing:

use std::process::Stdio;
use std::time::Duration;
use tokio::io::AsyncReadExt;
use tokio::process::Command;
use tokio::time::timeout;

#[tokio::main]
async fn main() {
    let mut command = Command::new("yes");
    command.stdout(Stdio::piped());
    let mut spawned = command.spawn().expect("error spawning");
    let mut stdout = spawned.stdout.take().expect("no stdout?");
    if timeout(Duration::from_millis(1000), async {
        loop {
            let mut buf = [0u8; 1];
            let _ = stdout.read(&mut buf).await;
        }
    }).await.is_err() {
        println!("timedout");
    };
}

Assuming you have a fast enough yes on your system, the above also doesn't time out when it should.

BraulioVM added a commit to BraulioVM/tokio that referenced this issue Dec 5, 2021
Reads and buffered reads from a `tokio::io::empty` were always marked
as ready. That makes sense, given that there is nothing to wait for.
However, doing repeated reads on the `empty` could stall the event
loop and prevent other tasks from making progress.

This change makes reads on empty objects cooperative. One of every two
reads will return `Poll::Pending`, which will give the executor a
chance to keep making progress elsewhere.

Fixes: tokio-rs#4291
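
A rough sketch of that idea (not the actual tokio patch; the Empty type below is a hypothetical stand-in for tokio::io::empty): return Poll::Pending on every other read and wake the task, so a tight read loop passes through the scheduler instead of spinning.

use std::io;
use std::pin::Pin;
use std::task::{Context, Poll};
use tokio::io::{AsyncRead, AsyncReadExt, ReadBuf};

// Hypothetical stand-in illustrating the alternating Ready/Pending behaviour
// described in the commit message above.
struct Empty {
    should_yield: bool,
}

impl AsyncRead for Empty {
    fn poll_read(
        mut self: Pin<&mut Self>,
        cx: &mut Context<'_>,
        _buf: &mut ReadBuf<'_>,
    ) -> Poll<io::Result<()>> {
        if self.should_yield {
            // Yield on alternating reads so a busy loop cannot monopolize the
            // worker thread; wake immediately so the task is rescheduled.
            self.should_yield = false;
            cx.waker().wake_by_ref();
            return Poll::Pending;
        }
        self.should_yield = true;
        // Nothing to read: leave the buffer untouched, i.e. immediate EOF.
        Poll::Ready(Ok(()))
    }
}

#[tokio::main]
async fn main() {
    let mut reader = Empty { should_yield: false };
    let mut buf = [0u8; 16];
    for _ in 0..4 {
        // Each read still reports 0 bytes (EOF), but every other poll now
        // goes back through the scheduler first.
        let n = reader.read(&mut buf).await.unwrap();
        println!("read {} bytes", n);
    }
}
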
BraulioVM added a commit to BraulioVM/tokio that referenced this issue Dec 6, 2021
Reads and buffered reads from a `tokio::io::empty` were always marked
as ready. That makes sense, given that there is nothing to wait for.
However, doing repeated reads on the `empty` could stall the event
loop and prevent other tasks from making progress.

This change uses tokio's coop system to yield control back to the
executor when appropriate.

Note that the issue that originally triggered this PR is not fixed
yet, because the `timeout` function will not poll the timer after
empty::read runs out of budget. A different change will be needed to
address that.

Refs: tokio-rs#4291
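
As a side note, not part of the linked commits: later tokio releases also expose tokio::task::consume_budget(), which lets user code make its own always-ready loops cooperative in the same way. A minimal sketch, assuming a tokio version that provides it (and the timeout coop fix discussed below):

use std::time::Duration;
use tokio::time::timeout;

#[tokio::main]
async fn main() {
    let timed_out = timeout(Duration::from_millis(1000), async {
        let mut n: u64 = 0;
        loop {
            n = n.wrapping_add(1);
            // Charge each iteration against the task's coop budget; once the
            // budget is exhausted, this yields back to the scheduler so the
            // timeout can be observed.
            tokio::task::consume_budget().await;
        }
    })
    .await
    .is_err();

    if timed_out {
        println!("timedout");
    }
}
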
Darksonn pushed a commit that referenced this issue Dec 10, 2021
BraulioVM (Contributor) commented Dec 11, 2021

Actually the original issue isn't fixed yet because of #4300 (comment). I should have updated the PR body. Anyway, I'm working on a PR that finally fixes this problem.

tokio::io::empty does participate in coop now, which is what the current title of the issue refers to, so I guess it could make sense to keep it closed.

BraulioVM added a commit to BraulioVM/tokio that referenced this issue Dec 11, 2021
Up until now, if the future that we were applying a timeout to
consistently depleted the coop budget, the timeout never got a chance to
be evaluated. In the next call to `poll`, the underlying future would be
polled and it would once again deplete the budget. In those circumstances,
timeouts would not be respected. This can be surprising to people, and
in fact it was in tokio-rs#4291.

The solution is to make a budget exception with `timeout` if it was the
underlying future that depleted the budget.
BraulioVM added a commit to BraulioVM/tokio that referenced this issue Dec 11, 2021
Refs: tokio-rs#4291, tokio-rs#4300
GongLG added a commit to GongLG/tokio that referenced this issue Feb 7, 2022
Add coop checks on pipe poll_read and poll_write.

Fixes: tokio-rs#4470
Refs: tokio-rs#4291, tokio-rs#4300