Add API to prestart threads in threadpools #1032

Open
catlee opened this issue Jan 17, 2024 · 4 comments

Comments

catlee commented Jan 17, 2024

For my use case, I would like to ensure that when I create a thread pool with min_threads > 0, the minimum number of workers is created immediately.
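Roughly what I mean (a minimal sketch of today's behaviour; I'm using pool.length here just to observe the worker count, so treat that exact introspection call as an assumption):

require "concurrent"

# A pool that should keep at least 2 workers around...
pool = Concurrent::ThreadPoolExecutor.new(min_threads: 2, max_threads: 8)

# ...but no worker threads exist until work is actually posted.
puts pool.length  # => 0

pool.post { :work }
sleep 0.1
puts pool.length  # => 1 — workers are spawned lazily, on demand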

Java's ThreadPoolExecutor calls this "prestart"; see prestartCoreThread and prestartAllCoreThreads: https://docs.oracle.com/javase/8/docs/api/java/util/concurrent/ThreadPoolExecutor.html#prestartCoreThread--

I have a draft PR (here) that implements a similar API for the CRuby implementation (I left the JRuby implementation for later). It adds both methods, as well as a prestart option to the initializer.
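In rough terms, usage would look something like this (the method names below are illustrative, mirroring the Java naming; the exact spellings are in the PR):

require "concurrent"

# Option on the initializer: spawn min_threads workers immediately.
pool = Concurrent::ThreadPoolExecutor.new(min_threads: 2, max_threads: 8, prestart: true)

# Or prestart explicitly after construction (hypothetical Ruby analogues of
# Java's prestartCoreThread / prestartAllCoreThreads):
pool.prestart_core_thread       # start one worker, if fewer than min_threads exist
pool.prestart_all_core_threads  # start workers up to min_threads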

Is this an API change you would consider accepting?

eregon (Collaborator) commented Jan 17, 2024

What is the advantage of doing so?
The disadvantage is that it likely causes extra resource consumption (CPU & memory).

catlee (Author) commented Jan 17, 2024

The advantage is slightly lower latency for the first few items posted to the pool. On my system, prestart gives roughly a 0.5 ms improvement in handling those first items.

eregon (Collaborator) commented Jan 17, 2024

I see. Could you share a repro for that? I'd like to try it locally.

catlee (Author) commented Jan 17, 2024

Here's how I'm trying to measure the impact:

require "concurrent"

def gettime
  Process.clock_gettime(Process::CLOCK_MONOTONIC)
end

# Post one task to a fresh single-thread pool and return the elapsed time
# from just before the post until the task block starts running.
def measure_latency(prestart)
  pool = Concurrent::FixedThreadPool.new(1, prestart: prestart)
  times = []
  start = gettime
  pool.post { times << (gettime - start) }
  pool.shutdown
  pool.wait_for_termination
  times.first
end

# Nearest-rank percentile: p is a fraction, e.g. 0.5 for the median.
def percentiles(times, p)
  times.sort!
  times[(times.size * p).ceil - 1]
end

n = 1000
no_prestart_times = n.times.map { measure_latency(false) }
prestart_times = n.times.map { measure_latency(true) }

puts "No prestart:"
puts "  50th percentile: #{percentiles(no_prestart_times, 0.5)}"
puts "  90th percentile: #{percentiles(no_prestart_times, 0.9)}"
puts "  99th percentile: #{percentiles(no_prestart_times, 0.99)}"

puts "Prestart:"
puts "  50th percentile: #{percentiles(prestart_times, 0.5)}"
puts "  90th percentile: #{percentiles(prestart_times, 0.9)}"
puts "  99th percentile: #{percentiles(prestart_times, 0.99)}"

puts "Delta:"
puts "  50th percentile: #{percentiles(no_prestart_times, 0.5) - percentiles(prestart_times, 0.5)}"
puts "  90th percentile: #{percentiles(no_prestart_times, 0.9) - percentiles(prestart_times, 0.9)}"
puts "  99th percentile: #{percentiles(no_prestart_times, 0.99) - percentiles(prestart_times, 0.99)}"
