Replies: 11 comments 8 replies
-
Excellent question. I am trying to find the commit that added it, without much luck. It seems we would want it to evict the least recently used entry first.
-
I wonder if we could replace the LruCache with ConcurrentLinkedHashMap (https://github.com/ben-manes/concurrentlinkedhashmap/). WDYT? I think it should be possible to embed a slightly modified version of it inside the project; it's Apache-2.0, so embedding the code should be OK (or we could also ask Ben Manes for permission).
-
@jorsol, that project does not appear to be actively maintained, as the last commit was from 2015.
-
In general I don't like to be dependent on other projects. SCRAM was a special case, as it was ostensibly written for us. As far as embedding code goes, you don't have to ask, but you do have to include the fact that you used it and add their license. I see no reason not to just use the LruCache correctly. I don't think it's in a critical path?
-
If I'm not mistaken, the cache here is being used to avoid parsing the query when possible. I would expect the difference between the time to look the query up in the cache and the time to parse it to be significant. I could be wrong, but I doubt the speed of the actual cache implementation matters much. However, not being implemented as an LRU cache would be significant.
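A minimal sketch of the pattern described above, assuming a plain access-ordered `LinkedHashMap`. The names (`ParsedQuery`, `lookupOrParse`) and the capacity of 256 are illustrative, not pgjdbc's actual API; the point is only that a cache hit skips the expensive parse:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class ParseCacheSketch {
    // Hypothetical stand-in for a parsed query; pgjdbc's real type differs.
    static final class ParsedQuery {
        final String sql;
        ParsedQuery(String sql) { this.sql = sql; }
    }

    static int parseCount = 0; // counts how often the expensive path runs

    // The expensive step the cache exists to avoid.
    static ParsedQuery parse(String sql) {
        parseCount++;
        return new ParsedQuery(sql);
    }

    // Bounded LRU cache: accessOrder=true so frequently used queries survive.
    static final Map<String, ParsedQuery> CACHE =
        new LinkedHashMap<String, ParsedQuery>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, ParsedQuery> eldest) {
                return size() > 256; // illustrative capacity
            }
        };

    static ParsedQuery lookupOrParse(String sql) {
        ParsedQuery cached = CACHE.get(sql); // get() refreshes recency in access-order mode
        if (cached != null) {
            return cached;
        }
        ParsedQuery parsed = parse(sql);
        CACHE.put(sql, parsed);
        return parsed;
    }

    public static void main(String[] args) {
        lookupOrParse("SELECT 1");
        lookupOrParse("SELECT 1"); // cache hit, no second parse
        System.out.println(parseCount); // prints 1
    }
}
```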
-
Both places an LruCache is used are on a per-connection basis. So while thread safety is important, there should be little or no actual contention.
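Under that assumption, a fully synchronized access-ordered map should be enough. A sketch (not pgjdbc's actual code) using `Collections.synchronizedMap`, which matters here because in access-order mode even `get()` mutates the internal linked list, so reads must be locked too:

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class PerConnectionCache {
    // Access-ordered bounded map; maxSize is supplied by the caller.
    static <K, V> Map<K, V> newLruCache(int maxSize) {
        Map<K, V> lru = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxSize;
            }
        };
        // synchronizedMap locks on every operation, including get(). That is
        // acceptable when the cache is per-connection and contention is near zero.
        return Collections.synchronizedMap(lru);
    }

    public static void main(String[] args) {
        Map<String, String> cache = newLruCache(2);
        cache.put("q1", "plan1");
        cache.put("q2", "plan2");
        cache.get("q1");           // refreshes q1
        cache.put("q3", "plan3");  // evicts q2, the least recently used
        System.out.println(cache.keySet()); // prints [q1, q3]
    }
}
```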
-
#2007 fixes our use of LruCache and deprecates the constructors which allow it to be based on insertion order. |
-
You are welcome to use and embed ConcurrentLinkedHashMap if it is helpful. Of course, if it is not, then don't. It's commonly embedded; mssql-jdbc, Groovy, Micronaut, etc. do so. It's small and easily hackable. Prefer it to excessive hacking around LinkedHashMap or writing a custom implementation if a fully synchronized LRU is not good enough.
-
@bokken, the original use case for the cache was "borrow-return". For example, the client code might use multiple prepared statements with the same SQL text at the same time (e.g., fetch from the corresponding result sets). That is why "least recently inserted" was the same as "least recently used". The caches are not shared across connections, so there's no need for CLHM. Also, CLHM and friends require Java 8, so that was not an option earlier.
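The borrow-return pattern above can be sketched as follows (the method names are hypothetical, not pgjdbc's API). Because an entry is removed when borrowed and re-inserted when returned, insertion order and recency of use coincide, which is why insertion-order eviction used to behave like LRU:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BorrowReturnSketch {
    // Insertion-ordered bounded map (accessOrder defaults to false).
    static <K, V> Map<K, V> newCache(int maxSize) {
        return new LinkedHashMap<K, V>(16, 0.75f, false) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxSize;
            }
        };
    }

    // Borrowing removes the entry, so two statements with the same SQL text
    // can be in use at once without sharing a cached object.
    static <K, V> V borrow(Map<K, V> cache, K key) {
        return cache.remove(key); // null means the caller must create a fresh one
    }

    // Returning re-inserts the entry at the "newest" end, so under this
    // protocol least-recently-inserted is effectively least-recently-used.
    static <K, V> void giveBack(Map<K, V> cache, K key, V value) {
        cache.put(key, value);
    }

    public static void main(String[] args) {
        Map<String, String> cache = newCache(2);
        giveBack(cache, "a", "stmtA");
        giveBack(cache, "b", "stmtB");
        String a = borrow(cache, "a");   // "a" leaves the map while in use
        giveBack(cache, "a", a);         // re-insertion refreshes "a"
        giveBack(cache, "c", "stmtC");   // evicts "b", not "a"
        System.out.println(cache.keySet()); // prints [a, c]
    }
}
```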
-
Even without StampedLock, I still see:
I'd like to see the change that just implements "LruCache to order by access, instead of by update". If more than this is necessary, there should be some explanation as to why it is necessary, and it should be a separate review that can be evaluated on its own merits, without tying it to this review. |
-
@MarkMielke, I agree. Can you provide an example of the new code paths added? The 2 actions are now null-checked in the constructor and populated with default behaviors if the caller provided null.
-
I see 2 places where `LruCache` instances are created. Both usages pass `false` for accessOrder, which is simply passed along to the `LinkedHashMap` constructor, where that parameter is described as "the ordering mode - `true` for access-order, `false` for insertion-order". So that seems to mean that these are not least recently *used* caches but least recently *created*.
Is that what is actually intended?
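The difference the accessOrder flag makes can be demonstrated directly with `LinkedHashMap` and `removeEldestEntry`; the helper name `boundedMap` and the capacity of 2 are illustrative:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
    // Bounded map: accessOrder=true evicts the least-recently-accessed entry,
    // accessOrder=false (the default) evicts the least-recently-inserted one.
    static <K, V> Map<K, V> boundedMap(int maxSize, boolean accessOrder) {
        return new LinkedHashMap<K, V>(16, 0.75f, accessOrder) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxSize;
            }
        };
    }

    public static void main(String[] args) {
        // Insertion-order map: reading "a" does not protect it from eviction.
        Map<String, Integer> insertion = boundedMap(2, false);
        insertion.put("a", 1);
        insertion.put("b", 2);
        insertion.get("a");          // read does not reorder
        insertion.put("c", 3);       // evicts "a", the oldest insertion
        System.out.println(insertion.containsKey("a")); // prints false

        // Access-order map: reading "a" makes "b" the eldest instead.
        Map<String, Integer> access = boundedMap(2, true);
        access.put("a", 1);
        access.put("b", 2);
        access.get("a");             // read moves "a" to the newest end
        access.put("c", 3);          // evicts "b", the least recently used
        System.out.println(access.containsKey("a")); // prints true
    }
}
```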