
BlockingPool num_idle_threads is wrongly double-increased when shutting down #6439

Open
liuq19 opened this issue Mar 29, 2024 · 0 comments · May be fixed by #6440
Labels
A-tokio Area: The main tokio crate C-bug Category: This is a bug. M-metrics Module: tokio/runtime/metrics M-runtime Module: tokio/runtime

Comments


liuq19 commented Mar 29, 2024

Version
Tokio v1.37.0

Description
The BlockingPool's num_idle_threads metric is doubly increased when the pool shuts down before the condvar wait_timeout returns.

The relevant code is:

self.metrics.inc_num_idle_threads();
while !shared.shutdown {
    let lock_result = self.condvar.wait_timeout(shared, self.keep_alive).unwrap();
    shared = lock_result.0;
    let timeout_result = lock_result.1;

    if shared.num_notify != 0 {
        // We have received a legitimate wakeup,
        // acknowledge it by decrementing the counter
        // and transition to the BUSY state.
        shared.num_notify -= 1;
        break;
    }

    // Even if the condvar "timed out", if the pool is entering the
    // shutdown phase, we want to perform the cleanup logic.
    if !shared.shutdown && timeout_result.timed_out() {
        // We'll join the prior timed-out thread's JoinHandle after dropping the lock.
        // This isn't done when shutting down, because the thread calling shutdown will
        // handle joining everything.
        let my_handle = shared.worker_threads.remove(&worker_thread_id);
        join_on_thread = std::mem::replace(&mut shared.last_exiting_thread, my_handle);

        break 'main;
    }

    // Spurious wakeup detected, go back to sleep.
}

if shared.shutdown {
    // Drain the queue
    while let Some(task) = shared.queue.pop_front() {
        self.metrics.dec_queue_depth();
        drop(shared);
        task.shutdown_or_run_if_mandatory();
        shared = self.shared.lock();
    }

    // Work was produced, and we "took" it (by decrementing num_notify).
    // This means that num_idle was decremented once for our wakeup.
    // But, since we are exiting, we need to "undo" that, as we'll stay idle.
    self.metrics.inc_num_idle_threads();

When the BlockingPool is shutting down, num_idle_threads is first increased on entry to the idle wait loop:

self.metrics.inc_num_idle_threads();

Because shared.shutdown is already set, the wait loop exits without ever decrementing the counter, and it is then increased a second time in the shutdown branch:

self.metrics.inc_num_idle_threads();
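The accounting error can be illustrated with a minimal, hypothetical sketch (not the actual tokio code): a plain usize stands in for the num_idle_threads metric and a bool for the shutdown flag, with the condvar wait elided.

```rust
// Hypothetical sketch of one worker iteration, simplified from the
// pool's idle loop: mark idle, wait (elided), then run the shutdown path.
fn worker_iteration(shutdown: bool, num_idle_threads: &mut usize) {
    // Entering the idle wait loop increments the counter once.
    *num_idle_threads += 1;

    // When `shutdown` is already set, the wait loop body never runs,
    // so nothing here decrements the counter.
    while !shutdown {
        break; // condvar wait_timeout elided in this sketch
    }

    if shutdown {
        // The shutdown branch "undoes" a decrement that never happened,
        // leaving the counter one too high.
        *num_idle_threads += 1;
    }
}

fn main() {
    let mut num_idle_threads = 0usize;
    worker_iteration(true, &mut num_idle_threads);
    // One idle worker is reported as two.
    assert_eq!(num_idle_threads, 2);
    println!("num_idle_threads = {}", num_idle_threads); // prints 2
}
```

In this sketch the second increment only over-counts because the first one was never matched by a decrement; a wakeup that consumed num_notify would have decremented the counter before the shutdown branch ran.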

@liuq19 liuq19 added A-tokio Area: The main tokio crate C-bug Category: This is a bug. labels Mar 29, 2024
@mox692 mox692 added M-runtime Module: tokio/runtime M-metrics Module: tokio/runtime/metrics labels Mar 29, 2024