Make CacheApi use futures #5923
Conversation
Force-pushed from 9e0823a to b8ec964.
Do you plan to write another implementation of …? I think we should leave …
BTW, I’ve just had a look at ehcache and it seems that their implementation is blocking, so you should probably define a dedicated execution context for ehcache calls, instead of using …
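The dedicated-execution-context suggestion could be sketched roughly as follows. This is an illustration, not Play code: the class name and pool size are assumptions, and a plain `ConcurrentHashMap` stands in for a blocking backend such as Ehcache.

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Sketch: run (potentially) blocking cache calls on a dedicated thread
// pool instead of the application's default execution context.
class BlockingBackendCache {
    // Dedicated pool for the blocking backend; daemon threads so the
    // JVM can exit without an explicit shutdown in this sketch.
    private final ExecutorService cacheExecutor = Executors.newFixedThreadPool(4, r -> {
        Thread t = new Thread(r, "cache-pool");
        t.setDaemon(true);
        return t;
    });

    // Stand-in for a blocking store such as Ehcache.
    private final Map<String, Object> backend = new ConcurrentHashMap<>();

    CompletionStage<Void> set(String key, Object value) {
        return CompletableFuture.runAsync(() -> backend.put(key, value), cacheExecutor);
    }

    CompletionStage<Object> get(String key) {
        // supplyAsync shifts the lookup off the caller's thread.
        return CompletableFuture.supplyAsync(() -> backend.get(key), cacheExecutor);
    }
}
```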
@julienrf There are already other implementations of …. I think we should strongly consider changing the default caching implementation, and splitting out ehcache into a separate module. The default should be a fast in-memory cache; then if you need something more complicated you can use something like play2-memcached or your own custom implementation.
Force-pushed from be668c5 to d7354c4.
Another thought: we can basically simulate the …
I agree. However, for such a cache implementation I would expect not to have to deal with …
@julienrf That's why I created a …. If you were creating a …
@gmethvin wouldn't it be better to make both a …?
@schmitch I'm not sure. Initially I was thinking that if you don't want to migrate your code to use futures, the fix is basically …. I also think the interface external cache modules implement should be the future one, since it's simpler to have one interface that works for both blocking and non-blocking caches.
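The "sync is basically blocking on the future" idea could look roughly like this. `AsyncCache` and `SyncCacheAdapter` are hypothetical minimal shapes for illustration, not Play's actual interfaces.

```java
import java.util.concurrent.CompletionStage;
import java.util.concurrent.TimeUnit;

// Minimal async cache shape (illustrative).
interface AsyncCache {
    CompletionStage<Object> get(String key);
    CompletionStage<Void> set(String key, Object value);
}

// Sync view implemented by blocking on each future with a timeout.
class SyncCacheAdapter {
    private final AsyncCache delegate;
    private final long timeoutSeconds;

    SyncCacheAdapter(AsyncCache delegate, long timeoutSeconds) {
        this.delegate = delegate;
        this.timeoutSeconds = timeoutSeconds;
    }

    public Object get(String key) {
        try {
            return delegate.get(key).toCompletableFuture().get(timeoutSeconds, TimeUnit.SECONDS);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public void set(String key, Object value) {
        try {
            delegate.set(key, value).toCompletableFuture().get(timeoutSeconds, TimeUnit.SECONDS);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

For an in-memory backend the futures are already completed, so the "blocking" call returns immediately.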
Actually I don't use the Cache API (yet), so I can't speak to what's the best way. However, I still think some kind of deprecation is always good, even if the change is simple.
Perhaps we could deprecate the old …
Note that variable expiration may be a slightly annoying feature to support across different backends.

In memcached, Redis, and Ehcache 2.x, per-entry expiration is supported. This is done by lazily evicting an entry due to the capacity constraint and emulating a miss otherwise. This is cheap to process, but can result in a lot of dead entries polluting the cache. Recent work in memcached has focused on solving this problem by using a sweeper thread and a more advanced eviction policy.

Guava and Caffeine support only fixed expiration. This is based on the observation that most application caches do not commingle different data (users + photos + etc.) in a single cache but instead use per-type caches. In those cases a fixed duration is almost always used (e.g. expire the entry after 5 minutes). This allows removing expired entries eagerly in O(1) time by modeling expiration as an access-order or write-order linked list. Eager per-entry expiration would require O(lg n) time and add complexity to the API, so it isn't provided; instead the same lazy approach is left to the user to implement as a simple decorator.

Ehcache 3, Hazelcast, JCache, etc. use lazy variable expiration with an (assumed!) capacity constraint to avoid a memory leak, but do not expose this option in the Cache API. Instead it is configured as the ExpiryPolicy at construction and does not inspect the entry to determine the timings. Compatibility for per-entry expiration might require some hack like using a ….

Per-entry expiration decorators would be necessary for most caches that implement Play's API. That's not hard, but worth taking into consideration to decide whether it's an important capability.
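The lazy per-entry-expiration decorator described above can be sketched like this: store a deadline alongside each value and emulate a miss for dead entries on read. The names here are illustrative, not from any cache library.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Lazy per-entry expiration: dead entries are only removed when read.
// A real decorator would also need a capacity bound to avoid dead
// entries accumulating unboundedly (the "pollution" problem above).
class LazyExpiringCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long deadlineNanos;
        Entry(V value, long deadlineNanos) {
            this.value = value;
            this.deadlineNanos = deadlineNanos;
        }
    }

    private final Map<K, Entry<V>> map = new ConcurrentHashMap<>();

    void put(K key, V value, long ttlNanos) {
        map.put(key, new Entry<>(value, System.nanoTime() + ttlNanos));
    }

    V get(K key) {
        Entry<V> e = map.get(key);
        if (e == null) return null;
        if (System.nanoTime() >= e.deadlineNanos) {
            map.remove(key, e); // lazily evict the dead entry, emulate a miss
            return null;
        }
        return e.value;
    }
}
```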
@@ -12,74 +10,7 @@
 */
@Deprecated
public class Cache {
Any reason to not completely remove this class?
Actually I decided not to remove UnboundCachedBuilder, which uses the static cacheApi, but maybe we should remove that too...
+1 to deprecating …
@marcospereira Certain kinds of caches (e.g. an in-memory hash map) are actually fast enough for a blocking API to make sense. The implementations of those caches will most likely be blocking anyway, so using the future API just introduces more syntax for no reason. Also note that other cache APIs like scalacache have a synchronous API for similar reasons. I definitely think the "default" should be the async API, but having an alternative sync API is also useful, as long as users explicitly opt in to it.
As @gmethvin said, we can have cache implementations that run on the same node as the web server and that manage concurrency without locks (e.g. with STM or just Scala’s standard …).
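A sketch of the kind of in-process cache being discussed, using `ConcurrentHashMap` as the lock-free structure (the class and method names are illustrative). Since every operation completes immediately, the async view adds only wrapping, which is the argument for also offering a sync API.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.ConcurrentHashMap;

// In-memory cache on the same node as the web server: no locks held by
// callers, no I/O, so blocking calls are effectively free.
class InMemoryCache {
    private final ConcurrentHashMap<String, Object> store = new ConcurrentHashMap<>();

    // Synchronous calls are cheap ...
    Object getSync(String key) { return store.get(key); }
    void setSync(String key, Object value) { store.put(key, value); }

    // ... so the async view adds no real asynchrony, only wrapping.
    CompletionStage<Object> get(String key) {
        return CompletableFuture.completedFuture(store.get(key));
    }
}
```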
I know that ScalaCache serves as an adapter to several front ends, but it does sound like the API for caching could cover a lot of different options. From the point of view of Play, the only touch point into the core Play API is in the …. Everything apart from that is basically exposing the cache directly. So... does it make sense to have a Cache API that's exposed directly to the user, rather than having a trait …?
@wsargent As I understand it, the point of having a common cache API, in addition to the …
I am not sure it is worth the effort. It would be better to just use Scala’s …
@julienrf I agree. Async/Sync names make more sense to me.
Rebased and updated.
I've switched to …. That said, it may be worth also creating a separate project for play-cache with its own release cycle before we do the Play 2.6 release.
public interface AsyncCacheApi {

  /**
   * @return a blocking version of this cache, which can be used to make synchronous calls.
@return a sync version of this cache...
@gmethvin some minor revisions here and then I think it is good to be merged.
   * @param expiration expiration period in seconds.
   * @return a CompletionStage containing the value
   */
  <T> CompletionStage<T> getOrElseUpdate(String key, Callable<CompletionStage<T>> block, int expiration);
A Callable requires that a new instance is allocated on every access in order to pass the key into the block. A function (optionally checked) avoids this (or at least makes it a user mistake).
Are you referring to using Supplier instead? Or a Function that accepts the key?

The current API uses Callable and cache implementations are meant to call it once to get the value. I would prefer not to change to Supplier since they can't throw checked exceptions. That would also require existing users to rewrite their code for (arguably) little benefit.
Function, or a checked version of it. The JVM may decide to cache a stateless lambda, e.g. method references for static methods. The user will likely have to use the cache key to compute the value, which would disallow this optimization. It would also be consistent with the Collections Framework's compute methods (computeIfAbsent, etc.).
@marcospereira thanks for finding all the places I forgot to rename.
@gmethvin just a question: wouldn't this be backportable if we bind the SyncApi against the CacheApi? The interface of the SyncApi is the same as the CacheApi, and it would give users a first impression of the API. I'm not sure what else has changed internally, but as far as I can tell the sync API is the same, and we could let users see the AsyncCacheApi really early on (btw, I like the design).
Force-pushed from 9a507f5 to 99409d5.
@schmitch We could backport it, but we'd have to remove the deprecation on CacheApi and make sure there's nothing that prevents it from working with existing cache modules. I think it would be okay but I haven't looked at it in detail.
case finite: FiniteDuration =>
  val seconds = finite.toSeconds
  if (seconds <= 0) {
    element.setTimeToLive(1)
@gmethvin Do you remember why you set the time to live to 1 second? My guess would be that you can't use 0 because ehcache treats that as indefinite. Negative values throw exceptions, so that wasn't an option either.

However, I think a much better approach would be to not cache the element at all in such a case. I fixed that here: #11516
I think mainly I was considering the case where you pass a duration like 500 milliseconds. In that case toSeconds will round to zero, but the user probably would want the cache value to be around for at least 500 milliseconds.

That behavior is still a bit inconsistent though, since we're rounding down for all other TTLs, e.g., if you pass 1.5 seconds it will round to 1. It might make sense to round up in other cases too.

For values of zero or less, I agree it probably makes sense to remove the element, though I'm not sure why the user wouldn't just call remove in that case.
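The consistent-rounding idea above could be sketched as ceiling division on milliseconds, so 500 ms becomes 1 s and 1.5 s becomes 2 s, with no special case needed for sub-second TTLs. `toSecondsRoundedUp` is a hypothetical helper, not code from this PR.

```java
// Round a TTL in milliseconds up to whole seconds, so the entry lives
// at least as long as the requested duration.
class TtlRounding {
    static long toSecondsRoundedUp(long millis) {
        return (millis + 999) / 1000; // ceiling division for non-negative input
    }
}
```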
Rewrite the CacheApi to use Future/CompletionStage where appropriate.

Since making blocking calls can sometimes be more convenient when using a fast in-memory cache, I've also created a BlockingCacheApi that can be accessed either by injecting it directly or calling cache.sync on an existing CacheApi. This should also make migration straightforward for existing users.