
Implement batching methods on MpmcArrayQueue #211

Merged
merged 1 commit into JCTools:master on Jun 14, 2018

Conversation

franz1981
Collaborator

@franz1981 franz1981 commented May 25, 2018

This adds batch drain/fill methods to MpmcArrayQueue.
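
The fill/drain batch semantics this PR adds can be sketched as follows. This is a minimal illustration, not the JCTools implementation: a JDK ConcurrentLinkedQueue stands in for MpmcArrayQueue so the example is self-contained, and the method shapes mirror JCTools' `fill(Supplier, int)` / `drain(Consumer, int)` style.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.function.Consumer;
import java.util.function.Supplier;

// Sketch of batch fill/drain semantics. The real methods live on
// MpmcArrayQueue; a JDK queue stands in here for self-containment.
public class BatchSketch {
    // fill: offer up to `limit` elements produced by `s`,
    // returning how many were actually inserted.
    static <E> int fill(Queue<E> q, Supplier<E> s, int limit) {
        int i = 0;
        for (; i < limit; i++) {
            if (!q.offer(s.get())) break; // queue full
        }
        return i;
    }

    // drain: poll up to `limit` elements, passing each to `c`,
    // returning how many were actually removed.
    static <E> int drain(Queue<E> q, Consumer<E> c, int limit) {
        int i = 0;
        for (; i < limit; i++) {
            E e = q.poll();
            if (e == null) break; // queue empty
            c.accept(e);
        }
        return i;
    }

    public static void main(String[] args) {
        Queue<Integer> q = new ConcurrentLinkedQueue<>();
        int filled = fill(q, () -> 42, 8);
        final int[] sum = {0};
        int drained = drain(q, e -> sum[0] += e, 8);
        System.out.println(filled + " " + drained + " " + sum[0]); // prints: 8 8 336
    }
}
```

In the real MpmcArrayQueue the point of these methods is that index claims are amortized over the batch rather than paid per element.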

@coveralls

coveralls commented May 25, 2018

Pull Request Test Coverage Report for Build 440

  • 132 of 132 (100.0%) changed or added relevant lines in 2 files are covered.
  • 21 unchanged lines in 4 files lost coverage.
  • Overall coverage increased (+0.4%) to 68.973%

Files with Coverage Reduction                                               | New Missed Lines | %
jctools-core/src/main/java/org/jctools/maps/NonBlockingHashMap.java         | 3                | 78.34%
jctools-core/src/main/java/org/jctools/maps/NonBlockingSetInt.java          | 4                | 77.09%
jctools-core/src/main/java/org/jctools/maps/NonBlockingHashMapLong.java     | 7                | 80.4%
jctools-core/src/main/java/org/jctools/maps/NonBlockingIdentityHashMap.java | 7                | 77.06%
Totals
Change from base Build 437: +0.4%
Covered Lines: 5653
Relevant Lines: 8196

💛 - Coveralls

@franz1981
Collaborator Author

This addresses #180.

@@ -128,10 +128,13 @@ final boolean casConsumerIndex(long expect, long newValue)
*/
public class MpmcArrayQueue<E> extends MpmcArrayQueueL3Pad<E>
{
public static final int MAX_LOOK_AHEAD_STEP = Integer.getInteger("jctools.mpmc.max.lookahead.step", 4096);
Contributor

Should probably use PortableJvmInfo.RECOMENDED_OFFER_BATCH instead. On SPSC there's no reason not to claim large chunks for an uncontended producer, but for MPSC/MPMC the assumption should be some contention. Given that the empty slots in the queue will result in consumer 'bubbles' (queue not empty but slot is empty) we can potentially have an issue if a producer takes a while to fill up the slots it claims.

Contributor

Strike that, this is just setting a max

Contributor

@nitsanw nitsanw left a comment

LGTM

@nitsanw nitsanw merged commit 2e38415 into JCTools:master Jun 14, 2018
@felixbarny

Looking forward to this. When is the next release scheduled?

@nitsanw
Contributor

nitsanw commented Oct 5, 2018

Sorry for the slow response. Totally swamped ATM, will release around end of October.

@franz1981
Collaborator Author

@felixbarny I've implemented the batch methods assuming mostly single-threaded usage, to get the best out of them: on the first failed attempt to fill/drain in batches (due to contention), they fall back to filling/draining one element at a time.
Is that what you're expecting?
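
The "claim a batch, fall back to single elements on contention" idea can be illustrated with a single AtomicLong producer index. This is an illustration of the strategy, not the actual JCTools code; the class and method names are hypothetical.

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustration of the fallback strategy: try to claim a whole batch of
// producer slots with a single CAS; if that CAS loses a race with another
// producer, degrade to claiming one slot at a time.
public class ClaimSketch {
    final AtomicLong producerIndex = new AtomicLong();

    // Returns the number of slots claimed: `n` when the batch CAS wins,
    // otherwise 1 (single-element fallback).
    int claimBatchOrOne(int n) {
        long cur = producerIndex.get();
        if (producerIndex.compareAndSet(cur, cur + n)) {
            return n; // one CAS paid for n elements
        }
        producerIndex.getAndIncrement(); // contended: claim a single slot
        return 1;
    }
}
```

In the uncontended case this is where the win comes from: one CAS per batch instead of one per element.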

@felixbarny

felixbarny commented Oct 5, 2018

I'm using an MPMC queue as the basis for an object pool. My scenario is that I have multiple consumers and, under normal circumstances, a single producer. In rare cases, there can be multiple producers.

I was hoping that both the producer and the consumers would benefit from batch operations. The producer could batch up multiple objects and add them to the object pool/queue in batch. Also, the consumers could get multiple objects and buffer them thread-local. By doing this, my expectation would be to gain performance by reducing the amount of CAS operations (1 per batch vs 1 per element).
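
The consumer side of the scheme described above can be sketched like this. It is a hedged, JDK-only sketch: ConcurrentLinkedQueue stands in for the MPMC queue, the class name and batch size are made up, and the refill loop is where the real JCTools `drain(Consumer, int)` would go.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Object-pool sketch: instead of hitting the shared queue once per object,
// each consumer drains a batch into a thread-local buffer and serves
// subsequent acquires from there, amortizing contended operations.
public class PooledBuffer<E> {
    static final int BATCH = 16; // illustrative batch size

    final Queue<E> sharedPool = new ConcurrentLinkedQueue<>();
    final ThreadLocal<ArrayDeque<E>> local =
            ThreadLocal.withInitial(ArrayDeque::new);

    E acquire() {
        ArrayDeque<E> buf = local.get();
        E e = buf.poll();
        if (e != null) return e; // served locally, no shared-queue op
        // refill: one pass over the shared pool pulls up to BATCH objects
        for (int i = 0; i < BATCH; i++) {
            E x = sharedPool.poll();
            if (x == null) break;
            buf.offer(x);
        }
        return buf.poll(); // may be null if the pool is empty
    }

    void release(E e) {
        sharedPool.offer(e); // producers could likewise release in batches
    }
}
```

With the real batch drain, the refill loop collapses into a single call that claims the whole batch at once rather than polling element by element.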

@franz1981
Collaborator Author

franz1981 commented Oct 5, 2018

@felixbarny

My scenario is that I have multiple consumers and, under normal circumstances, a single producer. In rare cases, there can be multiple producers.

I would say the producers will definitely benefit: given the "mostly single-threaded" scenario, they will gain a lot from batch fill.

By doing this, my expectation would be to gain performance by reducing the amount of CAS operations (1 per batch vs 1 per element).

That's exactly the rationale behind this PR :)
I'd just suggest building some benchmarks/experiments for your scenarios, to check whether it improves things as you expect, and trying different jctools.mpmc.max.lookahead.step values 👍
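
As the diff earlier in the thread shows, the cap is read once via `Integer.getInteger("jctools.mpmc.max.lookahead.step", 4096)`, so in a real run it must be set before the class loads (e.g. `-Djctools.mpmc.max.lookahead.step=1024` on the command line). The snippet below only demonstrates the property lookup itself; the class name is made up.

```java
// Demonstrates the Integer.getInteger lookup used for the lookahead cap:
// returns the parsed system property when set, else the supplied default.
public class LookaheadStep {
    public static void main(String[] args) {
        System.setProperty("jctools.mpmc.max.lookahead.step", "1024");
        int step = Integer.getInteger("jctools.mpmc.max.lookahead.step", 4096);
        System.out.println(step); // prints 1024

        System.clearProperty("jctools.mpmc.max.lookahead.step");
        int fallback = Integer.getInteger("jctools.mpmc.max.lookahead.step", 4096);
        System.out.println(fallback); // prints 4096 (the default)
    }
}
```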

4 participants