Increase max ConsumerWorkService block size to 256 #814

Merged
merged 1 commit into main from mk-higher-max-runnable-block-size on Jul 25, 2022

Conversation

@michaelklishin (Member) commented Jul 22, 2022

As suggested in #813. This has a few
minor effects on consumers:

  • A low single-digit % throughput gain
  • A comparable reduction in mean consumer latency

In general, it makes sense that consumers operating at peak throughput
should run operations in blocks close to the QoS prefetch used.
Since we usually recommend a value of 100-300 for environments that
focus on throughput, the new default of 256 makes sense.

The only negative effect I can think of is slightly higher GC pressure,
which can increase the variability of the aforementioned metrics.
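
To make the alignment concrete, here is a minimal consumer sketch with this client that sets its prefetch to match the new block size. The queue name matches the PerfTest runs below; the connection settings, queue declaration, and sleep are illustrative, not part of this change:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class AlignedPrefetchConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory(); // assumes a local node, default credentials
        try (Connection conn = factory.newConnection();
             Channel ch = conn.createChannel()) {
            ch.queueDeclare("block-size-256", false, false, false, null);
            // a prefetch in the recommended 100-300 range, matching the
            // new ConsumerWorkService max block size of 256
            ch.basicQos(256);
            ch.basicConsume("block-size-256", false,
                    (consumerTag, delivery) -> {
                        // process the delivery here, then acknowledge it
                        ch.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
                    },
                    consumerTag -> { /* consumer was cancelled */ });
            Thread.sleep(60_000); // consume for a minute before closing
        }
    }
}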

Using development builds with PerfTest

To install a development version of this client locally, use

./mvnw clean install -Dgpg.skip=true -Dmaven.test.skip

then change the PerfTest dependency in pom.xml to use 6.0.0-SNAPSHOT, or whatever
version you designate to this PR locally.
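
For reference, the overridden dependency entry in PerfTest's pom.xml would look something like this (a sketch; com.rabbitmq:amqp-client are the client's standard Maven coordinates, and the version must match whatever you installed locally):

<dependency>
  <groupId>com.rabbitmq</groupId>
  <artifactId>amqp-client</artifactId>
  <version>6.0.0-SNAPSHOT</version>
</dependency>

Then produce an uberjar: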

./mvnw clean package -P uber-jar -Dgpg.skip=true -Dmaven.test.skip

then run it like so

java -jar ./target/perf-test.jar --queue block-size-256 -x 1 -y 2 --qos 256 --id "block-size-256"

and compare it to a GA version, e.g.

java -jar perf-test-ga.jar --queue block-size-16 -x 1 -y 2 --qos 256 --id "block-size-16"
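
Here -x 1 -y 2 runs one producer and two consumers, --qos 256 sets the per-consumer prefetch, and the distinct --id values label each run so the throughput and latency numbers from the two builds can be told apart when comparing results.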

@michaelklishin added this to the 5.16.0 milestone Jul 22, 2022
@lukebakken (Contributor) left a comment

Wait for a 👍 from @acogoluegnes on this one!

@acogoluegnes (Contributor) commented

I did not notice any significant or reproducible improvement after some local testing, but this block size is better aligned with our QoS recommendations, so I'm OK to merge.

@acogoluegnes acogoluegnes merged commit b1016c1 into main Jul 25, 2022
@acogoluegnes acogoluegnes deleted the mk-higher-max-runnable-block-size branch July 25, 2022 06:51