
Add OwnedTasks #3909

Merged
merged 8 commits into from Jul 7, 2021

Conversation

@Darksonn (Contributor) commented Jul 1, 2021

This introduces an OwnedTasks structure that contains all of the tasks on the runtime. It eliminates the need for message passing when tasks are to be removed. This is a step towards isolating unsafety related to tasks in the runtime::task module.
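
A minimal, self-contained sketch of the idea (the types below are simplified stand-ins, not the actual tokio internals): every spawned task is inserted into a shared, mutex-guarded collection, so removing a task becomes a direct operation on that collection instead of a message sent to the worker that owns it.

```rust
// Sketch only: `Task`, the u64 id, and the HashMap are illustrative
// stand-ins for tokio's real task and intrusive-list types.
use std::collections::HashMap;
use std::sync::Mutex;

struct Task {
    id: u64,
    // a real task also carries its future and state
}

struct OwnedTasks {
    list: Mutex<HashMap<u64, Task>>,
}

impl OwnedTasks {
    fn new() -> Self {
        OwnedTasks { list: Mutex::new(HashMap::new()) }
    }

    // Called when a task is spawned.
    fn insert(&self, task: Task) {
        self.list.lock().unwrap().insert(task.id, task);
    }

    // Called when a task finishes: no cross-thread message, just a removal.
    fn remove(&self, id: u64) -> Option<Task> {
        self.list.lock().unwrap().remove(&id)
    }
}

fn main() {
    let owned = OwnedTasks::new();
    owned.insert(Task { id: 1 });
    assert!(owned.remove(1).is_some());
}
```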

@Darksonn Darksonn added A-tokio Area: The main tokio crate M-runtime Module: tokio/runtime labels Jul 1, 2021
@Darksonn Darksonn requested a review from carllerche July 1, 2021 11:14
Comment on lines +24 to +28
/// The caller must ensure that if the provided task is stored in a
/// linked list, then it is in this linked list.
pub(crate) unsafe fn remove(&self, task: &Task<S>) -> Option<Task<S>> {
    self.list.lock().remove(task.header().into())
}
@Darksonn (Contributor Author)

We had discussed introducing a field in the task header that remembers which OwnedTasks the task is in, to make these operations safe. However, this seems to be somewhat difficult: for example, if a task is concurrently inserted into two OwnedTasks structures, there is a race condition on the field that remembers the container.

@carllerche (Member) Jul 2, 2021

Right, which would be solved if the only way to get a Task structure is via the OwnedTasks value:

owned_tasks.insert(async { ... }) -> Task<_>;

By doing this, the pointer to the OwnedTasks in the task header is set on creation and never changed.

This can be done in a follow-up API, though; we can keep remove unsafe for now.
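
A rough sketch of what that follow-up API could look like (hypothetical and heavily simplified, not the actual tokio code): the only way to obtain a task handle is through insertion into an OwnedTasks, so the back-pointer to the owning list is fixed at construction and removal no longer needs an unsafe contract.

```rust
// Hypothetical sketch: a Task can only be created via OwnedTasks::insert,
// so its owner pointer is set exactly once and never changes.
use std::sync::{Arc, Mutex, Weak};

struct OwnedTasks {
    inner: Arc<Mutex<Vec<u64>>>, // simplified: the list just stores task ids
    next_id: Mutex<u64>,
}

struct Task {
    id: u64,
    owner: Weak<Mutex<Vec<u64>>>, // set on creation, never changed
}

impl OwnedTasks {
    fn new() -> Self {
        OwnedTasks {
            inner: Arc::new(Mutex::new(Vec::new())),
            next_id: Mutex::new(0),
        }
    }

    // The only way to get a Task is to insert it here first.
    fn insert(&self) -> Task {
        let mut next = self.next_id.lock().unwrap();
        let id = *next;
        *next += 1;
        self.inner.lock().unwrap().push(id);
        Task { id, owner: Arc::downgrade(&self.inner) }
    }
}

impl Task {
    // Removal is safe: the task always knows which list owns it.
    fn remove(self) {
        if let Some(list) = self.owner.upgrade() {
            list.lock().unwrap().retain(|&id| id != self.id);
        }
    }
}

fn main() {
    let owned = OwnedTasks::new();
    let task = owned.insert();
    task.remove();
}
```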

Comment on lines 548 to 552
 fn pre_shutdown(&mut self, worker: &Worker) {
     // Signal to all tasks to shut down.
-    for header in self.tasks.iter() {
+    while let Some(header) = worker.shared.owned.pop_back() {
         header.shutdown();
     }
@Darksonn (Contributor Author)

Here all the threads will clean up tasks in parallel. I'm not sure what is best here.

Member

It is the shutdown process and not performance-sensitive. This seems fine to me.

};

// Track the task to be released by the worker that owns it
// TODO: Is this still necessary?
@Darksonn (Contributor Author)

Is it still necessary?

Member

That is a good question. I don't think so. This is the waiting bit, which you removed. I would remove it and see if loom is happy.

@carllerche (Member)

cc @udoprog as you have also touched this code.

Comment on lines +67 to 69
enum RemoteMsg {
    /// A remote thread wants to spawn a task.
    Schedule(task::Notified<Arc<Shared>>),
Member

Was the intention behind leaving this as an enum just to keep the diff smaller? It seems like it could be changed to a Schedule struct, making the run queue a VecDeque<Schedule>...
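
For reference, a tiny sketch of that suggested shape (hypothetical, with stand-in types for the scheduler internals): the single-variant enum flattens into a struct and the remote run queue holds it directly.

```rust
// Stand-ins for the scheduler's Shared state and task::Notified; only the
// shape of the suggested refactor is shown.
use std::collections::VecDeque;
use std::sync::Arc;

struct Shared;
struct Notified<T>(T);

// The former single-variant enum, now a plain struct.
struct Schedule(Notified<Arc<Shared>>);

fn main() {
    // The remote run queue becomes a plain VecDeque<Schedule>.
    let mut queue: VecDeque<Schedule> = VecDeque::new();
    queue.push_back(Schedule(Notified(Arc::new(Shared))));
    assert_eq!(queue.len(), 1);
}
```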

tokio/src/runtime/task/stack.rs (review thread resolved)
@carllerche (Member)

Looks great to me. I left some comments. I would be interested in seeing how this change impacts benchmarks.

@carllerche (Member) left a comment

Looks great to me. I would check that there is no significant perf regression, but otherwise it looks fine.

@Darksonn (Contributor Author) commented Jul 6, 2021

Running the benchmark in #3927 on this PR yields these results:

Before

running 4 tests
test basic_scheduler_spawn      ... bench:         466 ns/iter (+/- 66)
test basic_scheduler_spawn10    ... bench:       2,555 ns/iter (+/- 156)
test threaded_scheduler_spawn   ... bench:       3,466 ns/iter (+/- 984)
test threaded_scheduler_spawn10 ... bench:       8,359 ns/iter (+/- 2,946)

After

running 4 tests
test basic_scheduler_spawn      ... bench:         491 ns/iter (+/- 14)
test basic_scheduler_spawn10    ... bench:       2,593 ns/iter (+/- 46)
test threaded_scheduler_spawn   ... bench:       3,542 ns/iter (+/- 215)
test threaded_scheduler_spawn10 ... bench:       7,895 ns/iter (+/- 1,029)

@Darksonn Darksonn merged commit e2589a0 into master Jul 7, 2021
@Darksonn Darksonn deleted the owned-tasks branch July 7, 2021 08:53
@Darksonn Darksonn mentioned this pull request Jul 16, 2021