
tokio::fs never waked up when ran by tokio::spawn #1851

Closed
jean-airoldie opened this issue Nov 28, 2019 · 4 comments · Fixed by #1861
Labels
C-bug Category: This is a bug.

Comments

@jean-airoldie
Contributor

Version

[dependencies]
tokio = { version = "0.2", features = ["rt-core", "time", "fs", "macros"] }
futures = "0.3"
tempfile = "3.1.0"

with tokio 0.2.1 being selected.

Platform

Linux 5.0.0-36-generic 18.04.1-Ubuntu x86_64

Description

The tokio::fs operations I tested are never woken up when driven by tokio::spawn. The following tests block forever:

#[cfg(test)]
mod tests {
    use {
        tokio::{fs, time::{timeout, Duration}},
        tempfile::tempdir,
        futures::future::FutureExt,
    };

    #[tokio::test]
    async fn test_read() {
        let temp = tempdir().unwrap();
        let dir = temp.path();

        let (driver, handle) = fs::read(dir.join("bar")).remote_handle();
        tokio::spawn(driver);

        // The task is never waked up.
        handle.await.unwrap();
    }

    #[tokio::test]
    async fn test_write() {
        let temp = tempdir().unwrap();
        let dir = temp.path();

        let (driver, handle) = fs::write(dir.join("bar"), b"bytes").remote_handle();
        tokio::spawn(driver);

        // The task is never waked up.
        handle.await.unwrap();
    }
}

Note that this doesn't seem to be caused by the #[tokio::test] macro, since I can replicate the behavior using #[tokio::main]. It doesn't seem to be caused by the remote_handle method either, since I observed the same behavior in one of my applications that doesn't use it. It seems that tokio::fs operations simply do not work when run as a task inside tokio::spawn.

@kbleeke
Contributor

kbleeke commented Nov 28, 2019

I've run into a similar issue with the dns feature, where a SocketAddr never gets resolved. This leads me to believe that this issue is related to the blocking feature.
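
For example, something along these lines (an untested sketch; it additionally assumes the tcp and dns features are enabled) hangs the same way, because the hostname lookup runs on the blocking pool:

#[tokio::main]
async fn main() {
    tokio::spawn(async {
        // Resolving "localhost" goes through the blocking pool and
        // is never woken up on the basic scheduler (tokio 0.2.1).
        let _ = tokio::net::TcpStream::connect("localhost:8080").await;
    })
    .await
    .unwrap();
}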

What I found interesting is that

#[tokio::main]
async fn main() {
    tokio::spawn(async move {
        tokio::fs::read("./Cargo.toml").await.expect("read");
    }).await.unwrap();
}

does not terminate while

#[tokio::main]
async fn main() {
    tokio::fs::read("./Cargo.toml").await.expect("read");
}

does.

Features are tokio = { version = "0.2.1", features = ["fs", "rt-core", "macros"] }.
Both examples work when adding the rt-threaded feature.
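
That is, the following dependency works as a workaround:

[dependencies]
tokio = { version = "0.2.1", features = ["fs", "rt-core", "rt-threaded", "macros"] }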

@zonyitoo
Contributor

Same problem when running futures with tokio::spawn_blocking under the basic_scheduler. They are definitely related.
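
A rough sketch of the hang (a hypothetical repro, assuming the blocking feature is also enabled; the scheduler is selected explicitly via the runtime builder):

use tokio::{runtime, task};

fn main() {
    let mut rt = runtime::Builder::new()
        .basic_scheduler()
        .enable_all()
        .build()
        .unwrap();

    rt.block_on(async {
        // On 0.2.1 this join handle is never woken up.
        let two = task::spawn_blocking(|| 1 + 1).await.unwrap();
        assert_eq!(two, 2);
    });
}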

@carllerche carllerche added the C-bug Category: This is a bug. label Nov 29, 2019
carllerche added a commit that referenced this issue Nov 29, 2019
The "global executor" thread-local is to track where to spawn new tasks,
**not** which scheduler is active on the current thread. This fixes a
bug with scheduling tasks on the basic_scheduler by tracking the
currently active basic_scheduler with a dedicated thread-local variable.

Fixes: #1851
@carllerche
Member

Fix here: #1861
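
Roughly, the shape of the change (a hand-wavy sketch; the names are illustrative, not the actual internals):

use std::cell::Cell;

// Illustrative only: the real state lives in tokio's scheduler internals.
enum SpawnTarget {
    Basic,
    ThreadPool,
}

thread_local! {
    // Pre-fix: this single slot answered both "where should
    // tokio::spawn send new tasks?" and "is a scheduler active
    // on this thread?", conflating the two questions.
    static SPAWN_TARGET: Cell<Option<SpawnTarget>> = Cell::new(None);

    // Post-fix: a dedicated flag for "a basic_scheduler is currently
    // polling on this thread", so blocking-pool threads that inherit
    // a spawn target are no longer mistaken for the scheduler itself.
    static BASIC_SCHEDULER_ACTIVE: Cell<bool> = Cell::new(false);
}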

@kbleeke
Contributor

kbleeke commented Nov 29, 2019

Works for me. Thank you! 👍
