rt: Allow concurrent block_on's with basic_scheduler #2804
Conversation
Force-pushed from 6d0e01b to db92967
This allows us to concurrently call `Runtime::block_on` with the basic_scheduler and allows other threads to steal the dedicated parker.
Force-pushed from db92967 to 144b6ac
At one point when we talked about this, I thought you said that the solution was to just force the initial
tokio/src/runtime/basic_scheduler.rs (outdated)
// TODO: Consider using an atomic load here instead of locking
// the mutex.
I think this shouldn't be too tricky if we just ensure that the state is only changed while inside a lock.
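A minimal std-only sketch of that idea (illustrative names, not tokio's actual code): all state transitions happen while holding a mutex, which makes it safe for readers to use a plain atomic load on the fast path instead of locking.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Mutex;

// Hypothetical sketch: the state is mirrored in an atomic so readers can
// skip the mutex, while all writes still happen under the lock so state
// transitions stay serialized.
struct SchedulerState {
    // Guards state transitions; the () payload is just a lock token here.
    lock: Mutex<()>,
    // 0 = idle, 1 = running (illustrative encoding).
    state: AtomicUsize,
}

impl SchedulerState {
    fn new() -> Self {
        SchedulerState {
            lock: Mutex::new(()),
            state: AtomicUsize::new(0),
        }
    }

    // Fast path: an atomic load instead of locking the mutex.
    fn is_running(&self) -> bool {
        self.state.load(Ordering::Acquire) == 1
    }

    // Writes only happen while the mutex is held.
    fn set_running(&self, running: bool) {
        let _guard = self.lock.lock().unwrap();
        self.state.store(if running { 1 } else { 0 }, Ordering::Release);
    }
}

fn main() {
    let s = SchedulerState::new();
    assert!(!s.is_running());
    s.set_running(true);
    assert!(s.is_running());
}
```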
I would like to punt this to the next PR.
sure, that's fine, i was just mentioning this for whenever you get to it.
Not for this PR, but when we do this, we probably want to define some "cell" type in src/util.
No, we wanted to actually move the driver between threads. I think it would be odd to have the block_on run longer than expected? That feels like an implicit dependency that would be pretty surprising.
@carllerche and I had a chat last week; I have some tweaks I am going to make to this. I want to see if we can get away with a
Yeah, I think this is better, I just wasn't sure if I remembered our chat from earlier. Carry on!
Okay, I've refactored this to use a
i think we need to rework the way we're handling mutex poisoning here. i think we just want to always put everything back regardless of whether the mutex is poisoned. we can use `PoisonError::into_inner` to get a usable guard out of a poisoned mutex, ignoring the poison. i think we should do that here.
otherwise, this all seems fine to me.
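A small sketch of the suggestion above (std only; the `recover_after_poison` helper is hypothetical): even if another thread panicked while holding the lock, `PoisonError::into_inner` recovers a usable guard so state can always be put back.

```rust
use std::sync::{Arc, Mutex, PoisonError};
use std::thread;

fn recover_after_poison() -> i32 {
    let lock = Arc::new(Mutex::new(0_i32));

    // Poison the mutex by panicking while holding the guard.
    let l2 = Arc::clone(&lock);
    let _ = thread::spawn(move || {
        let _guard = l2.lock().unwrap();
        panic!("poison the mutex");
    })
    .join();

    // lock() now returns Err(PoisonError), but we can still get a usable
    // guard out of it, ignoring the poison, and restore the shared state.
    let mut guard = lock.lock().unwrap_or_else(PoisonError::into_inner);
    *guard = 42;
    *guard
}

fn main() {
    assert_eq!(recover_after_poison(), 42);
}
```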
Thanks for the work. I left comments.
I mentioned in the original issue that I thought this could be done mostly by working with a `Park` layer, but we can punt that to another PR.
Looking good. I think we are almost there. Left some notes.
Thanks 👍
This PR changes how `Runtime::block_on` works when the `basic_scheduler` is selected. Before this change, we used a mutex to guard entering a `block_on` with a `basic_scheduler`, which meant that we could never call `block_on` concurrently even though this was totally possible. This PR changes that to allow us to call `block_on` concurrently. This also allows multiple `block_on`'s to "steal" the driver/parker. This works by sharing the `ParkThread` condvar, letting us notify a waiting thread to check whether the parker is available so it can move forward.

The stealing works as follows: the first thread to call `Runtime::block_on` will acquire the driver. Every other call to `block_on` while the first `block_on` is still running will not acquire the driver, but will instead enter the runtime context and attempt to poll the passed-in future. The parker is slightly modified for this case to allow us to provide our own `Condvar` that we will use, once the initial `block_on` finishes, to notify the second/other threads that they can now steal the driver.

Related to #2720