
expiresVariable() leaves a scheduled expiry even when the cache is empty #542

Closed · facboy opened this issue Apr 24, 2021 · 9 comments

@facboy (Contributor) commented Apr 24, 2021

i'm having a bit of trouble reproducing this in a test case, so i'll ask a question to see whether it's intended behaviour.

basic background: we're on JDK8 so i have to use a ScheduledExecutorService for the scheduler(). i don't want this to keep the JVM running so i have given it a corePoolSize of 0. i'm using variable expiry, and the behaviour i'm seeing is that after the last (and in this particular instance, only) entry in the cache has expired out, the cache leaves a scheduled PerformCleanup task for about 155m in the future. none of my expiry times has ever been anywhere near this value (it's either MAX_EXPIRY, about 150y, or about 10 seconds after the entry's last update).

is this intended behaviour? i haven't worked out exactly where it's getting this interval from (except that obviously it comes from the timerWheel).
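For context, the setup described above might look something like the following sketch. The expiry durations here are placeholders, not the reporter's actual values; `Scheduler.forScheduledExecutorService` is Caffeine's adapter for wrapping a `ScheduledExecutorService`:

```java
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;
import com.github.benmanes.caffeine.cache.Expiry;
import com.github.benmanes.caffeine.cache.Scheduler;

public class VariableExpirySetup {
    public static Cache<String, String> build() {
        // corePoolSize of 0 so an idle scheduler does not keep the JVM alive
        ScheduledThreadPoolExecutor scheduler = new ScheduledThreadPoolExecutor(0);

        return Caffeine.newBuilder()
                .scheduler(Scheduler.forScheduledExecutorService(scheduler))
                .expireAfter(new Expiry<String, String>() {
                    @Override
                    public long expireAfterCreate(String key, String value, long currentTime) {
                        return TimeUnit.SECONDS.toNanos(10); // placeholder duration
                    }
                    @Override
                    public long expireAfterUpdate(String key, String value, long currentTime, long currentDuration) {
                        return TimeUnit.SECONDS.toNanos(10); // placeholder duration
                    }
                    @Override
                    public long expireAfterRead(String key, String value, long currentTime, long currentDuration) {
                        return currentDuration; // reads do not reset the expiry
                    }
                })
                .build();
    }
}
```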

@ben-manes (Owner)

It wouldn't be intended, but could be a side effect of the timer wheel being so optimized that it is hard to debug and test, causing mistakes to slip through. I think we'd need to get a test failure and then debug where the logic went wrong. There is a related discussion in #541 that could hint towards some bugs in the calculation logic.

@facboy (Contributor, Author) commented Apr 24, 2021

gah, i think this is a simple mistake... i don't have setRemoveOnCancelPolicy(true) on the ScheduledExecutorService, and ScheduledThreadPoolExecutor doesn't take into account whether tasks in the workQueue are cancelled when deciding if it needs to leave a thread alive or not.
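Note that the flag in question lives on `ScheduledThreadPoolExecutor`, not on the `ScheduledExecutorService` interface. A minimal illustration of its effect on the work queue (the class and method names here are for the demo only):

```java
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.ScheduledThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RemoveOnCancelDemo {
    public static int queueSizeAfterCancel(boolean removeOnCancel) {
        ScheduledThreadPoolExecutor executor = new ScheduledThreadPoolExecutor(0);
        // When true, a cancelled task is removed from the work queue immediately,
        // rather than lingering until its delay elapses (keeping a thread alive).
        executor.setRemoveOnCancelPolicy(removeOnCancel);

        ScheduledFuture<?> future = executor.schedule(() -> {}, 1, TimeUnit.HOURS);
        future.cancel(false);

        int size = executor.getQueue().size();
        executor.shutdownNow();
        return size;
    }

    public static void main(String[] args) {
        System.out.println(queueSizeAfterCancel(true));  // 0: task removed on cancel
        System.out.println(queueSizeAfterCancel(false)); // 1: cancelled task lingers
    }
}
```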

i've noticed something else though: if i explicitly empty the cache (e.g. invalidateAll()), existing expiry tasks do not get cancelled. basically it determines that the next expiry is now Long.MAX_VALUE and so does not invoke Pacer.schedule(), but it also doesn't check whether the Pacer already has a task scheduled (which it should really cancel).

@ben-manes (Owner)

That's true, I was thinking of that as well. I wasn't sure if it was worth mentioning because future usages would add to the cache and cause the pacer to reschedule, canceling the last future. Or, if that task comes up, the cleanup would be instant as there is no work to do. I think it's smart to add that extra optimization to invalidateAll, but since most schedulers don't set setRemoveOnCancelPolicy(true) either (including the one hidden in CompletableFuture), there might not be much of a benefit. What do you think?

@facboy (Contributor, Author) commented Apr 24, 2021

i'm not sure. perhaps having it on invalidateAll() is useful. in my particular use-case, if i want to invalidate an individual entry (and remove its expiry task), it seems i have to put it back (via the policy) with an immediate expiry of 0. not sure if you can suggest any other workarounds, for that or invalidateAll()?
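The workaround described might look roughly like the sketch below. It assumes variable expiry is configured, so that `cache.policy().expireVariably()` returns a present `Policy.VarExpiration`; the helper name is made up for illustration:

```java
import java.util.concurrent.TimeUnit;

import com.github.benmanes.caffeine.cache.Cache;

public class InvalidateWithZeroExpiry {
    public static void invalidateNow(Cache<String, String> cache, String key) {
        String value = cache.getIfPresent(key);
        if (value != null) {
            // Re-insert through the variable-expiry policy with a zero duration,
            // so the entry is treated as immediately expired and the timer wheel
            // can retire its expiry event, rather than just removing the mapping.
            cache.policy().expireVariably().ifPresent(policy ->
                policy.put(key, value, 0, TimeUnit.NANOSECONDS));
        }
    }
}
```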

@ben-manes (Owner)

I think the attached patch might be the right improvement then. If the cache is emptied out by any means it should cancel the future; that can happen during maintenance, or during clear(), which wipes everything out directly without needing to run maintenance. Some added tests would be needed for the merge, so I'll tidy that up later.

schedule.patch.txt

@facboy (Contributor, Author) commented Apr 24, 2021

ok cheers. i don't suppose you could backport this to a 2.9.1? :)

@facboy (Contributor, Author) commented Apr 24, 2021

btw thanks for your prompt responses. i've managed to work around it on 2.9.0 with some hackery (poking methods that i shouldn't).

@ben-manes (Owner)

That's great, I'm glad it is not a blocker. When I have the time, I'll integrate this and backport it to 2.x.

ben-manes added a commit that referenced this issue May 1, 2021
When the timer wheel advances then higher level buckets evict to the
lower wheels. In some cases this reorganization failed, such as when
the nano time transitioned from a negative to a positive clock value.
The number of steps to turn the wheel is now determined by a more
accurate calculation. (fixes #541)

When a scheduler is enabled then a future task may exist against the
next expiration event to perform cache maintenance. If the cache
becomes empty beforehand, e.g. Map.clear(), then this future can be
canceled. (fixes #542)
ben-manes added a commit that referenced this issue May 1, 2021 (same message as above)
@ben-manes (Owner)

Released in 2.9.1 and 3.0.2
