Add OwnedTasks #3909
Conversation
/// The caller must ensure that if the provided task is stored in a
/// linked list, then it is in this linked list.
pub(crate) unsafe fn remove(&self, task: &Task<S>) -> Option<Task<S>> {
    self.list.lock().remove(task.header().into())
}
We had discussed introducing a field in the task header to remember which OwnedTasks it is in, to make these operations safe. However, this seems to be somewhat difficult: if you concurrently insert a task into two OwnedTasks structures, you have a race condition on the field that remembers the container.
Right, which would be solved if the only way to get a Task is via the OwnedTasks value:

    owned_tasks.insert(async { ... }) -> Task<_>;

By doing this, the pointer to the OwnedTasks in the task header is set on creation and never changed. This can be done in a follow-up API though; we can keep remove unsafe for now.
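A minimal sketch of that follow-up idea, with simplified stand-in types (the owner_id field, the Arc-based list, and all names here are illustrative assumptions, not the actual tokio task internals): the owner is recorded exactly once, when the task is created through insert, so remove can verify ownership at runtime instead of relying on an unsafe caller contract.

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::{Arc, Mutex};

// Simplified stand-ins for the real task types.
struct Header {
    owner_id: u64, // written exactly once, when the task is created
}

struct Task {
    header: Arc<Header>,
}

struct OwnedTasks {
    id: u64,
    list: Mutex<Vec<Arc<Header>>>, // stand-in for the intrusive list
}

static NEXT_ID: AtomicU64 = AtomicU64::new(1);

impl OwnedTasks {
    fn new() -> Self {
        OwnedTasks {
            id: NEXT_ID.fetch_add(1, Ordering::Relaxed),
            list: Mutex::new(Vec::new()),
        }
    }

    // The only way to obtain a Task is through insert, so the owner id is
    // fixed at creation and there is no race on the field afterwards.
    fn insert(&self) -> Task {
        let header = Arc::new(Header { owner_id: self.id });
        self.list.lock().unwrap().push(header.clone());
        Task { header }
    }

    // With the owner recorded in the header, remove can check ownership at
    // runtime instead of relying on an unsafe caller contract.
    fn remove(&self, task: &Task) {
        assert_eq!(task.header.owner_id, self.id, "task not owned by this list");
        self.list
            .lock()
            .unwrap()
            .retain(|h| !Arc::ptr_eq(h, &task.header));
    }
}

fn main() {
    let owned = OwnedTasks::new();
    let task = owned.insert();
    owned.remove(&task);
}
```

Because insert is the only way to create a task in this sketch, no other code path ever writes the owner field, which avoids the race described above.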
  fn pre_shutdown(&mut self, worker: &Worker) {
      // Signal to all tasks to shut down.
-     for header in self.tasks.iter() {
+     while let Some(header) = worker.shared.owned.pop_back() {
          header.shutdown();
      }
Here all the threads will clean up tasks in parallel. I'm not sure what is best here.
It is the shutdown process and not performance-sensitive. This seems fine to me.
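For illustration, a rough sketch of the shutdown pattern under discussion, using placeholder types (TaskHeader, Owned) rather than the real scheduler structures: each worker thread repeatedly pops a task from the shared owned list and shuts it down, so the cleanup work is split across whichever threads get there first.

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Mutex};
use std::thread;

// Placeholder task header; the real one cancels the task's future.
struct TaskHeader;

impl TaskHeader {
    fn shutdown(&self) {
        // Cancel the task and release its resources.
    }
}

// Placeholder for the shared list of owned tasks.
struct Owned {
    list: Mutex<VecDeque<TaskHeader>>,
}

impl Owned {
    fn pop_back(&self) -> Option<TaskHeader> {
        self.list.lock().unwrap().pop_back()
    }
}

fn main() {
    let owned = Arc::new(Owned {
        list: Mutex::new((0..100).map(|_| TaskHeader).collect()),
    });

    // Every worker drains the same shared list; whichever thread pops a
    // task is the one that shuts it down, so cleanup runs in parallel.
    let workers: Vec<_> = (0..4)
        .map(|_| {
            let owned = Arc::clone(&owned);
            thread::spawn(move || {
                while let Some(header) = owned.pop_back() {
                    header.shutdown();
                }
            })
        })
        .collect();

    for worker in workers {
        worker.join().unwrap();
    }
}
```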
};

// Track the task to be released by the worker that owns it
// TODO: Is this still necessary?
Is it still necessary?
That is a good question. I don't think so. This is the waiting bit, which you removed. I would remove it and see if loom is happy.
cc @udoprog as you have also touched this code.
enum RemoteMsg {
    /// A remote thread wants to spawn a task.
    Schedule(task::Notified<Arc<Shared>>),
Was the intention behind leaving this an enum just to keep the diff smaller? It seems like it could be changed to a Schedule struct, making the run queue a VecDeque<Schedule> ...
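A small self-contained sketch of what that suggestion might look like, using a placeholder Notified type in place of task::Notified<Arc<Shared>> (this is an assumed shape, not the actual basic_scheduler code):

```rust
use std::collections::VecDeque;

// Placeholder for task::Notified<Arc<Shared>> in the real code.
struct Notified;

// Current shape: a single-variant enum.
enum RemoteMsg {
    Schedule(Notified),
}

// Suggested shape: a plain newtype, so the queue becomes VecDeque<Schedule>
// and the match on the enum variant goes away.
struct Schedule(Notified);

fn main() {
    let mut as_enum: VecDeque<RemoteMsg> = VecDeque::new();
    as_enum.push_back(RemoteMsg::Schedule(Notified));
    if let Some(RemoteMsg::Schedule(_task)) = as_enum.pop_front() {
        // run or schedule the task
    }

    let mut as_struct: VecDeque<Schedule> = VecDeque::new();
    as_struct.push_back(Schedule(Notified));
    if let Some(Schedule(_task)) = as_struct.pop_front() {
        // run or schedule the task
    }
}
```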
Looks great to me. I left some comments. I would be interested in seeing how this change impacts benchmarks.
Looks great to me. I would check that there is no significant perf regression, but otherwise it looks fine.
Running the benchmark in #3927 on this PR yields these results:

Before:

After:
This introduces an OwnedTasks structure that contains all of the tasks on the runtime. It eliminates the need for message passing when tasks are to be removed. This is a step towards isolating unsafety related to tasks in the runtime::task module.
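For readers new to this code, a much-simplified sketch of the idea (a mutex-protected Vec stands in for the real intrusive linked list of task headers, and all names are illustrative): every task lives in one shared structure, so any thread can remove a finished task directly instead of sending a release message back to the worker that owns it.

```rust
use std::sync::{Arc, Mutex};

// Illustrative stand-in for a task header; the real structure stores
// reference-counted task headers in an intrusive linked list.
struct Task {
    id: u64,
}

// All tasks on the runtime live in one shared structure, so any thread can
// remove a finished task directly instead of messaging the owning worker.
struct OwnedTasks {
    list: Mutex<Vec<Task>>,
}

impl OwnedTasks {
    fn insert(&self, task: Task) {
        self.list.lock().unwrap().push(task);
    }

    fn remove(&self, id: u64) -> Option<Task> {
        let mut list = self.list.lock().unwrap();
        let pos = list.iter().position(|t| t.id == id)?;
        Some(list.swap_remove(pos))
    }
}

fn main() {
    let owned = Arc::new(OwnedTasks {
        list: Mutex::new(Vec::new()),
    });
    owned.insert(Task { id: 1 });

    // Any thread holding a clone of the Arc can remove the task directly.
    let other_thread_view = Arc::clone(&owned);
    assert!(other_thread_view.remove(1).is_some());
}
```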