net: possible race during I/O driver shutdown #2924

Closed
carllerche opened this issue Oct 8, 2020 · 5 comments
Labels
A-tokio Area: The main tokio crate C-bug Category: This is a bug. M-runtime Module: tokio/runtime

Comments

@carllerche
Member

When shutting down the I/O driver, all existing I/O resources should be notified and all subsequent operations error. Any attempt to create a new I/O resource should fail.

There is currently some logic to handle this, but after a quick skim, I believe it may be racy. When polling an I/O resource, the handle is first checked to see whether a strong reference to the I/O driver still exists. If one does not, the I/O resource assumes the I/O driver is gone. When the I/O driver shuts down, it wakes all associated I/O resources.

I wonder whether an I/O resource added during the shutdown process will be notified of the shutdown or will hang indefinitely. I also wonder whether the Arc is correctly dropped before the I/O resources are notified.

Ref: https://github.com/tokio-rs/tokio/pull/2919/files#r501433182
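The aliveness check described above can be sketched in miniature. This is a hypothetical simplification, not tokio's actual types: `Inner` stands in for the driver's shared state, and `Handle` holds the `Weak` reference that resources upgrade when polled. The race window is that the upgrade can succeed just before the driver drops its `Arc` and fires shutdown notifications, so a resource registering in that window may never be woken.

```rust
use std::sync::{Arc, Weak};

// Stand-in for the I/O driver's shared state (illustrative only).
struct Inner;

// Resources hold a Weak reference to the driver and check it on poll.
struct Handle {
    inner: Weak<Inner>,
}

impl Handle {
    // If the upgrade fails, the resource assumes the driver is gone.
    // The race: this can return true, then the driver drops its Arc and
    // wakes all *registered* resources, and only afterwards does this
    // resource register -- so it misses the shutdown notification.
    fn driver_alive(&self) -> bool {
        self.inner.upgrade().is_some()
    }
}

fn main() {
    let driver = Arc::new(Inner);
    let handle = Handle { inner: Arc::downgrade(&driver) };
    assert!(handle.driver_alive());

    drop(driver); // driver shutdown: strong count hits zero
    assert!(!handle.driver_alive()); // later polls observe the shutdown
    println!("ok");
}
```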

@carllerche carllerche added C-bug Category: This is a bug. A-tokio Area: The main tokio crate labels Oct 8, 2020
@zaharidichev
Contributor

Is someone taking a look at this? Can I?

@Darksonn
Contributor

Please do!

zaharidichev added a commit to zaharidichev/tokio that referenced this issue Oct 17, 2020
zaharidichev added a commit to zaharidichev/tokio that referenced this issue Oct 19, 2020
To avoid I/O resources being registered while the driver
is shutting down, a `state: AtomicUsize` field is introduced.

Fixes: tokio-rs#2924
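The approach in this commit can be sketched as an atomic state word on the driver that registration checks before proceeding. This is a hedged illustration; the constant names, `DriverState` type, and `try_register` method are invented for the sketch and are not tokio's API.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Illustrative states; tokio's actual encoding differs.
const RUNNING: usize = 0;
const SHUTDOWN: usize = 1;

struct DriverState {
    state: AtomicUsize,
}

impl DriverState {
    fn new() -> Self {
        DriverState { state: AtomicUsize::new(RUNNING) }
    }

    // Called once when the driver begins shutting down.
    fn shutdown(&self) {
        self.state.store(SHUTDOWN, Ordering::SeqCst);
    }

    // Registration of a new I/O resource fails fast during shutdown
    // instead of racing with the driver's teardown.
    fn try_register(&self) -> Result<(), &'static str> {
        if self.state.load(Ordering::SeqCst) == SHUTDOWN {
            Err("driver is shutting down")
        } else {
            Ok(())
        }
    }
}

fn main() {
    let d = DriverState::new();
    assert!(d.try_register().is_ok());
    d.shutdown();
    assert!(d.try_register().is_err());
    println!("ok");
}
```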
@bdonlan
Contributor

bdonlan commented Oct 20, 2020

I independently discovered this as part of work in #2903; there's also another related race where a readiness future can be created after notifications fire off (but before the Weak<Inner> becomes non-upgradable). I'll have a different fix for both in my AsyncFd branch shortly.

@bdonlan
Contributor

bdonlan commented Oct 20, 2020

@zaharidichev FYI

bdonlan pushed a commit to bdonlan/tokio that referenced this issue Oct 20, 2020
Previously, there was a race window in which an IO driver shutting down could
fail to notify ScheduledIo instances of this state; in particular, notification
of outstanding ScheduledIo registrations was driven by `Driver::drop`, but
registrations bypass `Driver` and go directly to a `Weak<Inner>`. The `Driver`
holds the `Arc<Inner>` keeping `Inner` alive, but it's possible that a new
handle could be registered (or a new readiness future created for an existing
handle) after the `Driver::drop` handler runs and prior to `Inner` being
dropped.

This change fixes this in two parts: First, notification of outstanding
ScheduledIo handles is pushed down into the drop method of `Inner` instead,
and, second, we add state to ScheduledIo to ensure that we remember that the IO
driver we're bound to has shut down after the initial shutdown notification, so
that subsequent readiness future registrations can immediately return (instead
of potentially blocking indefinitely).

Fixes: tokio-rs#2924
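The two parts of the fix described in this commit message can be sketched as follows. This is a simplified, assumed model, not tokio's real code: `ScheduledIo` uses a `Mutex<bool>` where tokio packs state into an atomic word, and waker storage is elided. The key points are that notification runs in `Inner`'s `Drop` impl (so it cannot race with registrations that go through `Weak<Inner>`), and that each `ScheduledIo` remembers the shutdown so later readiness polls return an error immediately.

```rust
use std::sync::{Arc, Mutex, Weak};

#[derive(Default)]
struct ScheduledIo {
    // Tokio packs this into an atomic readiness word; a Mutex'd bool
    // keeps the sketch simple.
    shutdown: Mutex<bool>,
}

impl ScheduledIo {
    // Part two of the fix: remember the shutdown permanently.
    fn wake_on_shutdown(&self) {
        *self.shutdown.lock().unwrap() = true;
        // ...wake any stored wakers here...
    }

    // A readiness registration arriving after shutdown errors out
    // immediately instead of parking forever.
    fn poll_readiness(&self) -> Result<(), &'static str> {
        if *self.shutdown.lock().unwrap() {
            Err("driver shut down")
        } else {
            Ok(())
        }
    }
}

struct Inner {
    resources: Mutex<Vec<Arc<ScheduledIo>>>,
}

impl Drop for Inner {
    // Part one of the fix: notification happens when Inner itself is
    // dropped, not in Driver::drop, so registrations reaching Inner via
    // Weak<Inner> cannot arrive after the notification has already fired.
    fn drop(&mut self) {
        for io in self.resources.lock().unwrap().iter() {
            io.wake_on_shutdown();
        }
    }
}

fn main() {
    let io = Arc::new(ScheduledIo::default());
    let inner = Arc::new(Inner { resources: Mutex::new(vec![io.clone()]) });
    let weak: Weak<Inner> = Arc::downgrade(&inner);

    assert!(io.poll_readiness().is_ok());
    drop(inner); // Inner::drop notifies outstanding ScheduledIo handles
    assert!(weak.upgrade().is_none());
    assert!(io.poll_readiness().is_err()); // late registration fails fast
    println!("ok");
}
```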
@zaharidichev
Contributor

@bdonlan Makes sense. I will close my PR in favor of yours, then.
