
Alternative Pool Strategies #5218

Merged
merged 27 commits into from Sep 16, 2020

Conversation

@gregw (Contributor) commented Sep 1, 2020

Speculative proposal to make Pool have a pluggable strategy for #5217

Signed-off-by: Greg Wilkins <gregw@webtide.com>
@gregw requested review from lorban and sbordet September 1, 2020 10:16
@gregw (Contributor, Author) commented Sep 1, 2020

@sbordet @lorban I don't hate it...

 + javadoc

Signed-off-by: Greg Wilkins <gregw@webtide.com>
@lorban (Contributor) commented Sep 1, 2020

I'm not going to comment on the implementation, which is a bit drafty, but making the thread-local caching pluggable is surprisingly non-intrusive, so I don't mind it.

But I'm still not convinced of the usefulness of this change, as I'm worried it will create a tendency to push down logic that should be kept higher up, like the randomness / round-robin behaviour that IMHO is better suited to a higher level.

Oh, and an unwise strategy has the potential to totally kill performance, like the LRU implementation.

 + Added a ThreadLocalStrategy for a single cached item
 + Tell strategies about newly reserved entries
 + Fixed multiplexing test that was dependent on the impl of the cache

Signed-off-by: Greg Wilkins <gregw@webtide.com>
@gregw (Contributor, Author) commented Sep 1, 2020

@lorban yep, the LRU impl really sux :) Perhaps we shouldn't offer that one: the name is attractive, but it is hard to implement without garbage and/or contention.

Have a look again now, as I've created a ThreadLocalStrategy for the common case of a cache of size 1, so that will give us a performance improvement.
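
Below is a minimal sketch of what such a size-1 thread-local cache could look like; the Strategy and Entry shapes here are hypothetical stand-ins based on this discussion, not the merged Jetty API.

```java
import java.util.List;

// Hypothetical stand-ins for this discussion; not the merged Jetty API.
interface Entry
{
    boolean tryAcquire(); // atomically claim the entry; false if already taken or closed
}

interface Strategy<E extends Entry>
{
    E tryAcquire(List<E> entries); // pick an entry, or null to let the pool fall back
}

// Caches the last entry released by each thread: the common cache-size-1 case.
class ThreadLocalStrategy<E extends Entry> implements Strategy<E>
{
    private final ThreadLocal<E> last = new ThreadLocal<>();

    @Override
    public E tryAcquire(List<E> entries)
    {
        E cached = last.get();
        if (cached != null && cached.tryAcquire())
            return cached;
        return null; // cache miss: the caller falls back to a linear search
    }

    public void released(E entry)
    {
        last.set(entry); // remember the entry this thread just released
    }
}
```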

@gregw linked an issue Sep 1, 2020 that may be closed by this pull request
 + added tests

Signed-off-by: Greg Wilkins <gregw@webtide.com>
@gregw marked this pull request as ready for review September 2, 2020 08:27
@lorban (Contributor) commented Sep 2, 2020

What I'm uncomfortable with in this change is the fact that the strategy can either augment the entry lookup, like the caching strategy does by adding fastpath / slowpath behavior, or completely bypass the entry lookup... while still having the ability to fall back to the default lookup.

This sounds like too coarse/permissive an API that could easily be abused. I'm worried this could encourage turning the Pool class into a silver bullet in the long run, especially since I can't think of a use-case that can't be solved by wrapping it instead.

@gregw (Contributor, Author) commented Sep 2, 2020

@lorban But I think it is better to have Pool be a shiny bullet (if not silver), rather than have the HttpClient connection pool code worry about the details of how to do random/round-robin/etc.
The #5219 proposal is worse for me because it mixes up concerns: the client code should not need to worry about how the pool works.

Wrapping is a poor solution, as it requires exposing a public acquireAt(int) API, and you end up with unused cache mechanisms in the code that you hope the JIT will remove and that you hope won't confuse connection pool developers.

Finally, if your arguments hold, then they are also a reason to remove the caching mechanism from Pool, as that too could be done with a wrapper! The number of strategies should be 0, 1 or infinity. The fact that we have a thread-local cache and round robin shows that we have at least 2 strategies... hence we should implement infinity.

 + Don't have a fallback iteration, instead make a SearchStrategy and DualStrategy
@gregw changed the title from "Speculative idea to make a pluggable Pool strategy" to "Pluggable Pool strategy" Sep 2, 2020
@gregw (Contributor, Author) commented Sep 2, 2020

@lorban I think I've addressed your reservation about having the fallback iterative search.
Instead that is now just a SearchStrategy, and I have added a DualStrategy, so we have:

  • cacheSize == 0 results in new SearchStrategy()
  • cacheSize == 1 results in new DualStrategy(new ThreadLocalStrategy(), new SearchStrategy())
  • cacheSize > 1 results in new DualStrategy(new ThreadLocalCacheStrategy(cacheSize), new SearchStrategy())

A round-robin pool is just new RoundRobinStrategy(), as it does not need a search.
Random does need one, so it should be new DualStrategy(new RandomStrategy(), new SearchStrategy()).

Naming could be worked on...
If you like, we could have an interface Strategy and an interface CompleteStrategy extends Strategy, with the constructor requiring a CompleteStrategy.
Search and RoundRobin are CompleteStrategy implementations but the others are not. Dual could be renamed CachingStrategy, where the second strategy has to be complete. This would prevent bad combinations (see the sketch below).
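
For illustration, here is a minimal sketch of that interface split using the names suggested above; these are discussion names, not the merged code, and requiring a CompleteStrategy as the fallback is what rules out bad combinations.

```java
import java.util.List;

// Names taken from the comment above (Strategy, CompleteStrategy, DualStrategy); illustrative only.
interface Strategy<E>
{
    E tryAcquire(List<E> entries); // may return null: an incomplete strategy
}

// A strategy that always completes the lookup on its own (e.g. Search, RoundRobin).
interface CompleteStrategy<E> extends Strategy<E>
{
}

// Tries a fast (possibly incomplete) strategy first, then a complete fallback.
class DualStrategy<E> implements CompleteStrategy<E>
{
    private final Strategy<E> first;
    private final CompleteStrategy<E> fallback;

    DualStrategy(Strategy<E> first, CompleteStrategy<E> fallback)
    {
        this.first = first;
        this.fallback = fallback;
    }

    @Override
    public E tryAcquire(List<E> entries)
    {
        E entry = first.tryAcquire(entries);
        return entry != null ? entry : fallback.tryAcquire(entries);
    }
}
```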

 + split strategies into Cache and Strategies
@lorban (Contributor) left a comment

I'm leaning towards being convinced by this Strategy approach, mostly because we could get rid of the acquireAt() method.

But the Strategy API needs some cleanup IMHO.

 + Added reserve and release
@gregw marked this pull request as draft September 2, 2020 13:58
@gregw (Contributor, Author) commented Sep 2, 2020

@sbordet @lorban I've followed Simone's feedback and it is working... but I think it is really rather delicate and there are all sorts of ways that it can be used incorrectly.
I've put this back into draft as I think it has become too big and invasive and will be too difficult to use.

I'm not sure that making the strategies get involved with the actual mechanics of release/remove/reserve is a good thing. Sure, it makes the interface look symmetric (having release instead of released), but it puts too much of the internals into the strategy. You have to deal with cases where the first strategy releases but the second doesn't know about it, etc. Multiplexing also makes composite release very complex.

Fundamentally I think it goes against the atomic spirit of the pool, where the Entry is the primary atomic mechanism and, when you act on it, you might find that somebody else has already acquired or closed it on you. Thus I think a released notification on the strategy is much simpler and better than a release operation (see the sketch below). Note also that the tryRelease API needed for the operation is not public.

So I am 99% sure that this is the right place to implement round robin / random style strategies. I'm just 1% sure this is currently the right way to do it.
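
To make the distinction concrete, here is a hedged sketch of the notification style argued for above: the pool performs the atomic reserve/release/remove itself and only informs the strategy afterwards. The interface is a hypothetical stand-in, not the merged API.

```java
// Hypothetical strategy interface with post-event notifications; illustrative only.
interface Strategy<E>
{
    // Called by the pool after it has already completed each operation, so
    // composed strategies can never disagree about which of them did the work.
    default void reserved(E entry) {}
    default void released(E entry) {}
    default void removed(E entry) {}
}
```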

+ reverted to post notifications for removed, reserved and released.
+ Added a few more strategies that need to be benchmarked, that use the list iterator.

Signed-off-by: Greg Wilkins <gregw@webtide.com>
@gregw (Contributor, Author) commented Sep 7, 2020

@lorban @sbordet I know I shouldn't be playing with this... but....

I've updated to a simpler strategy that is notified after removed, reserved, released - as that makes combining strategies sensible.
However, I've then been experimenting with using cowList.listIterator(int index) for strategies that can have a random or round-robin starting point, but then iterate themselves if need be. These might produce a bit more garbage, but there are also ways to reuse the iterators. Using such strategies means that we don't need to combine with a LinearSearch, so perhaps we can go back to strategies that actually do the operation on the list.

Anyway, this is still a draft, but have a look if you are interested.... but only once jetty-10 is ready to release.
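
For the curious, a rough sketch of the listIterator(int index) idea described above, with a hypothetical Acquirable stand-in for Pool.Entry; the real experiment may differ in the details.

```java
import java.util.List;
import java.util.ListIterator;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for Pool.Entry in this sketch.
interface Acquirable
{
    boolean tryAcquire(); // atomically claim the entry; false if taken or closed
}

// Starts at an advancing round-robin index, then iterates itself (wrapping once),
// so no separate linear-search fallback is needed.
class RoundRobinStrategy<E extends Acquirable>
{
    private final AtomicInteger next = new AtomicInteger();

    public E tryAcquire(List<E> entries)
    {
        int size = entries.size();
        if (size == 0)
            return null;
        int start = Math.floorMod(next.getAndIncrement(), size);
        ListIterator<E> i = entries.listIterator(start);
        for (int tried = 0; tried < size; tried++)
        {
            if (!i.hasNext())
                i = entries.listIterator(0); // wrap around to the start of the list
            E entry = i.next();
            if (entry.tryAcquire())
                return entry;
        }
        return null; // nothing available in a full pass
    }
}
```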

cheers

Signed-off-by: Greg Wilkins <gregw@webtide.com>
Signed-off-by: Greg Wilkins <gregw@webtide.com>
Signed-off-by: Greg Wilkins <gregw@webtide.com>
Signed-off-by: Greg Wilkins <gregw@webtide.com>
Signed-off-by: gregw <gregw@webtide.com>
 + pluggable strategies
 + hard coded

Signed-off-by: Greg Wilkins <gregw@webtide.com>
@gregw (Contributor, Author) commented Sep 10, 2020

Some strange benchmark results!

  • I narrowed testing down to 4 key strategies: Linear, Random, ThreadLocal, RoundRobin
  • I wrote a single strategy that could be configured to do any of those: OneStrategyToRuleThemAll. It works fine (although its round-robin impl has a hack that won't work in general).
  • So I thought: what if I just hard-coded OSTRTA into a Pool class, with some inheritance to make the different implementations? The results are horrid:
            (POOL_TYPE)           Score ±        Error   Units   Alloc rate ±   Error   Units
                linear    12388037.627 ±  4447723.249   ops/s     0.001 ±   0.005  MB/sec
        OSTRTA(LINEAR)    11712109.534 ± 18050206.735   ops/s     0.001 ±   0.008  MB/sec
           Pool.Linear     9076780.369 ±   772761.089   ops/s     0.001 ±   0.009  MB/sec
    threadlocal+linear    19955813.938 ± 15912431.612   ops/s     0.001 ±   0.008  MB/sec
  OSTRTA(THREAD_LOCAL)    18986205.894 ± 34689639.507   ops/s     0.001 ±   0.008  MB/sec
           Pool.Thread    10881925.059 ±  6078295.882   ops/s     0.001 ±   0.008  MB/sec
      random-iteration    17877890.452 ±  2485249.042   ops/s   377.917 ± 403.364  MB/sec
        OSTRTA(RANDOM)    16392327.062 ±  9050945.178   ops/s     0.001 ±   0.009  MB/sec
           Pool.Random     9314759.324 ± 16094548.697   ops/s     0.001 ±   0.008  MB/sec
     roundrobin+linear     8846940.752 ±  9466144.785   ops/s   124.635 ± 153.511  MB/sec
   OSTRTA(ROUND_ROBIN)     8709583.310 ±  4613331.332   ops/s   122.741 ± 110.409  MB/sec
       Pool.RoundRobin     8079087.371 ± 17119090.317   ops/s     0.001 ±   0.008  MB/sec

Ignore the garbage generated by round robin; that's a lambda I can get rid of.

All the inherited-behaviour ones (Pool.Linear, Pool.Thread, etc.) are the slowest! Virtual calls, I guess.

So I can see 2 ways forward:

  1. Go back to the pluggable strategy pool and just provide the 4 key strategies (will replace random-iteration with one that doesn't allocate)
  2. Go to a single Pool class with a passed in Mode (like OSTRTA)

I'm guessing @sbordet likes 1. and @lorban likes 2. ??? I'm kind of on the fence and need to be nudged either way.

@gregw (Contributor, Author) commented Sep 10, 2020

So the results show us the way:

Benchmark                                                        (POOL_TYPE)  (SIZE)   Mode  Cnt         Score          Error  Units
PoolStrategyBenchmark.testAcquireReleasePoolWithStrategy              linear      16  thrpt    3  16627512.074 ± 17642452.179  ops/s
PoolStrategyBenchmark.testAcquireReleasePoolWithStrategy         Pool.Linear      16  thrpt    3  18129265.620 ±  4463027.935  ops/s
PoolStrategyBenchmark.testAcquireReleasePoolWithStrategy  threadlocal+linear      16  thrpt    3  31143782.885 ± 42795850.366  ops/s
PoolStrategyBenchmark.testAcquireReleasePoolWithStrategy         Pool.Thread      16  thrpt    3  43294021.851 ± 60671099.356  ops/s
PoolStrategyBenchmark.testAcquireReleasePoolWithStrategy       random+linear      16  thrpt    3  20697605.670 ±  1380766.015  ops/s
PoolStrategyBenchmark.testAcquireReleasePoolWithStrategy         Pool.Random      16  thrpt    3  24497173.155 ±  9343959.400  ops/s
PoolStrategyBenchmark.testAcquireReleasePoolWithStrategy   roundrobin+linear      16  thrpt    3   7313809.523 ±   172708.836  ops/s
PoolStrategyBenchmark.testAcquireReleasePoolWithStrategy     Pool.RoundRobin      16  thrpt    3   7213369.700 ±   214676.084  ops/s
PoolStrategyBenchmark.testAcquireReleasePoolWithStrategy     threadid+linear      16  thrpt    3  48749994.419 ± 76484935.329  ops/s
PoolStrategyBenchmark.testAcquireReleasePoolWithStrategy       Pool.ThreadId      16  thrpt    3  39440370.903 ± 72046403.132  ops/s

The hard-coded strategies match the pluggable strategies, except for threadlocal, where the hard-coded version is a lot better. So given the simplicity of the built-in approach, I will prepare the PR assuming that.
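
For illustration, a minimal sketch of that built-in shape: a configured strategy type picks the index at which acquire() starts scanning the entry list, instead of delegating to pluggable strategy objects. The enum values mirror the benchmark labels (linear, random, threadid, roundrobin) and the class is illustrative only, not the merged code.

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative only: the strategy type chooses where acquire() starts scanning.
class StrategyIndex
{
    enum StrategyType { FIRST, RANDOM, THREAD_ID, ROUND_ROBIN }

    private final StrategyType strategyType;
    private final AtomicInteger nextIndex = new AtomicInteger();

    StrategyIndex(StrategyType strategyType)
    {
        this.strategyType = strategyType;
    }

    // Index at which acquire() begins its scan of a list of 'size' entries.
    int startIndex(int size)
    {
        switch (strategyType)
        {
            case FIRST:
                return 0; // linear: always scan from the head
            case RANDOM:
                return ThreadLocalRandom.current().nextInt(size);
            case THREAD_ID:
                return (int)(Thread.currentThread().getId() % size);
            case ROUND_ROBIN:
                return Math.floorMod(nextIndex.getAndIncrement(), size);
            default:
                throw new IllegalStateException("unknown strategy " + strategyType);
        }
    }
}
```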

@gregw (Contributor, Author) commented Sep 10, 2020

@sbordet I think this is ready to use in the connection pools, but it needs changes to their constructors, so I might let you do it?
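
As a hedged usage sketch, a connection pool might construct the new Pool roughly like this, assuming the constructor shape this PR appears to introduce (strategy type, max entries, thread-local cache flag); check the merged Pool javadoc for the exact signature.

```java
import org.eclipse.jetty.util.Pool;

public class ConnectionEntryPools
{
    // Hypothetical helper: the constructor shape (strategy type, max entries,
    // cache flag) is assumed from this PR; verify against the merged javadoc.
    public static Pool<Object> newRoundRobinPool(int maxConnections)
    {
        return new Pool<>(Pool.StrategyType.ROUND_ROBIN, maxConnections, true);
    }
}
```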

@gregw (Contributor, Author) commented Sep 10, 2020

Final benchmark results:

Benchmark                                                 (CACHE)      (POOL_TYPE)  (SIZE)   Mode  Cnt         Score           Error  Units
PoolStrategyBenchmark.testAcquireReleasePoolWithStrategy    false      Pool.Linear       4  thrpt    3  19726151.258 ±   4667777.948  ops/s
PoolStrategyBenchmark.testAcquireReleasePoolWithStrategy    false      Pool.Linear      16  thrpt    3  17818008.006 ±   9139880.033  ops/s
PoolStrategyBenchmark.testAcquireReleasePoolWithStrategy    false      Pool.Random       4  thrpt    3  17996978.870 ±   3800769.899  ops/s
PoolStrategyBenchmark.testAcquireReleasePoolWithStrategy    false      Pool.Random      16  thrpt    3  26759187.237 ±   3575103.399  ops/s
PoolStrategyBenchmark.testAcquireReleasePoolWithStrategy    false  Pool.RoundRobin       4  thrpt    3   8484006.729 ±    890613.362  ops/s
PoolStrategyBenchmark.testAcquireReleasePoolWithStrategy    false  Pool.RoundRobin      16  thrpt    3   8528120.962 ±   3044297.564  ops/s
PoolStrategyBenchmark.testAcquireReleasePoolWithStrategy    false    Pool.ThreadId       4  thrpt    3  17647364.947 ±   2405801.834  ops/s
PoolStrategyBenchmark.testAcquireReleasePoolWithStrategy    false    Pool.ThreadId      16  thrpt    3  45474461.806 ± 115516916.830  ops/s
PoolStrategyBenchmark.testAcquireReleasePoolWithStrategy     true      Pool.Linear       4  thrpt    3  19061961.075 ±   9416326.911  ops/s
PoolStrategyBenchmark.testAcquireReleasePoolWithStrategy     true      Pool.Linear      16  thrpt    3  40489787.862 ±  19745433.024  ops/s
PoolStrategyBenchmark.testAcquireReleasePoolWithStrategy     true      Pool.Random       4  thrpt    3  18678870.374 ±  20741555.087  ops/s
PoolStrategyBenchmark.testAcquireReleasePoolWithStrategy     true      Pool.Random      16  thrpt    3  52719122.996 ±  97184200.471  ops/s
PoolStrategyBenchmark.testAcquireReleasePoolWithStrategy     true  Pool.RoundRobin       4  thrpt    3  17302566.141 ±    746359.957  ops/s
PoolStrategyBenchmark.testAcquireReleasePoolWithStrategy     true  Pool.RoundRobin      16  thrpt    3  45292108.520 ±  37436891.524  ops/s
PoolStrategyBenchmark.testAcquireReleasePoolWithStrategy     true    Pool.ThreadId       4  thrpt    3  21522511.381 ±   3130277.130  ops/s
PoolStrategyBenchmark.testAcquireReleasePoolWithStrategy     true    Pool.ThreadId      16  thrpt    3  39812967.055 ±  81267475.628  ops/s

@gregw marked this pull request as ready for review September 10, 2020 14:04
@gregw (Contributor, Author) commented Sep 10, 2020

[Screenshot from 2020-09-10 16-05-59]

@lorban (Contributor) left a comment

Besides a few small cleanups, I like that the all-contained-and-compact design was kept. But the PR's name is now wrong.

LGTM.

Signed-off-by: Greg Wilkins <gregw@webtide.com>
Signed-off-by: Greg Wilkins <gregw@webtide.com>
@gregw changed the title from "Pluggable Pool strategy" to "Alternative Pool Strategies" Sep 15, 2020
@gregw requested a review from sbordet September 16, 2020 10:14
@sbordet (Contributor) left a comment

I'd like minimal javadocs for Pool.StrategyType. Other than that, LGTM.

@gregw merged commit ba22c08 into jetty-9.4.x Sep 16, 2020
@joakime deleted the jetty-9.4.x-PoolStrategy branch September 17, 2020 07:46
Development

Successfully merging this pull request may close these issues.

Review RoundRobinConnectionPool
3 participants