
Refactoring: Cache policy implementations #389

Open
tatsuya6502 opened this issue Jan 27, 2024 · 2 comments

Comments

@tatsuya6502
Member

tatsuya6502 commented Jan 27, 2024

As of v0.12.4, Moka supports only the TinyLFU cache policy. However, there are some requests and plans to support other cache policies. I am going to implement the LRU policy without refactoring the cache policy implementation, mainly due to time constraints. But later, when I have time, I would like to refactor it.

Cache policies

  1. TinyLFU, with LRU eviction strategy, supported by all Moka versions
  2. TinyLFU, with sampled LFU eviction strategy (Ristretto style)
  3. Window TinyLFU (Caffeine style)
  4. LRU

I am not sure if 2. is worth implementing, but I could write a prototype implementation and run it against the Caffeine simulator (a Moka driver) to see how well it performs.

How these policies work

Moka W-TinyLFU

Read

TinyLFU

  • Increment the popularity of the key in the frequency sketch (aka LFU filter) regardless of whether the cache was hit or missed.
  • If cache hit, move the entry to the MRU position of the main probation queue.
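The TinyLFU read path above can be sketched as follows. This is a hypothetical, simplified model (not Moka's actual code): a plain `HashMap` stands in for the count-min frequency sketch, and a `VecDeque` (front = LRU, back = MRU) models the main probation queue.

```rust
use std::collections::{HashMap, VecDeque};

// Simplified model: `sketch` stands in for the count-min frequency
// sketch, `probation` for the main probation queue (front = LRU,
// back = MRU). Both names are illustrative, not Moka's.
struct TinyLfu {
    sketch: HashMap<u64, u32>,
    probation: VecDeque<u64>,
}

impl TinyLfu {
    // Returns true on a cache hit.
    fn on_read(&mut self, key: u64) -> bool {
        // Bump the key's popularity regardless of hit or miss.
        *self.sketch.entry(key).or_insert(0) += 1;
        // On a hit, move the entry to the MRU position.
        if let Some(pos) = self.probation.iter().position(|&k| k == key) {
            let _ = self.probation.remove(pos);
            self.probation.push_back(key);
            true
        } else {
            false
        }
    }
}
```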

Window TinyLFU

  • Increment the popularity of the key in the frequency sketch regardless of whether the cache was hit or missed.
  • If cache hit, do one of the following:
    • If the entry is in the window queue, move it to the MRU position of that queue.
    • If the entry is in the main probation queue, move it to the MRU position of the main protected queue.
      • Then, if the main protected queue is full, move its LRU entry to the MRU position of the main probation queue.
    • If the entry is in the main protected queue, move it to the MRU position of that queue.
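The W-TinyLFU hit handling above can be sketched like this. Again a hypothetical model: three `VecDeque`s (front = LRU, back = MRU) stand in for the window, main probation, and main protected queues, and `protected_cap` is an assumed capacity field.

```rust
use std::collections::VecDeque;

// Illustrative model of the three queues (front = LRU, back = MRU).
struct WTinyLfu {
    window: VecDeque<u64>,
    probation: VecDeque<u64>,
    protected: VecDeque<u64>,
    protected_cap: usize,
}

// Move `key` to the back (MRU) of `q`; returns true if it was found.
fn move_to_back(q: &mut VecDeque<u64>, key: u64) -> bool {
    if let Some(pos) = q.iter().position(|&k| k == key) {
        let _ = q.remove(pos);
        q.push_back(key);
        true
    } else {
        false
    }
}

impl WTinyLfu {
    fn on_hit(&mut self, key: u64) {
        // Window hit: move to the window queue's MRU position.
        if move_to_back(&mut self.window, key) {
            return;
        }
        if let Some(pos) = self.probation.iter().position(|&k| k == key) {
            // Probation hit: promote to the protected queue's MRU position.
            let _ = self.probation.remove(pos);
            self.protected.push_back(key);
            // If the protected queue overflows, demote its LRU entry
            // back to the probation queue's MRU position.
            while self.protected.len() > self.protected_cap {
                if let Some(lru) = self.protected.pop_front() {
                    self.probation.push_back(lru);
                }
            }
            return;
        }
        // Protected hit: move to that queue's MRU position.
        move_to_back(&mut self.protected, key);
    }
}
```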

LRU

  • If cache hit, move the entry to the MRU position of the main probation queue.

Insert or Update

TinyLFU, with LRU eviction strategy

  • handle_upsert
    • Decides whether or not to admit the candidate to the main probation queue.
      • This is done by comparing the popularity of the candidate and the LRU of the main probation queue.
      • If yes, push it to the MRU position of the main probation queue.
      • If no, evict it.
  • evict_lru_entries
    • If the main probation queue is over capacity, evict its LRU entry.
    • Repeat until the queue is no longer over capacity.
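A minimal sketch of `handle_upsert` and `evict_lru_entries` for this variant, under the same assumptions as before (a `HashMap` standing in for the frequency sketch, a `VecDeque` for the probation queue, and a hypothetical `capacity` field):

```rust
use std::collections::{HashMap, VecDeque};

// Illustrative model; field names are assumptions, not Moka's.
struct Policy {
    freq: HashMap<u64, u32>,
    probation: VecDeque<u64>, // front = LRU, back = MRU
    capacity: usize,
}

impl Policy {
    fn popularity(&self, key: u64) -> u32 {
        self.freq.get(&key).copied().unwrap_or(0)
    }

    // Admit the candidate only if it is more popular than the
    // probation queue's LRU entry (the victim). Returns true if admitted.
    fn handle_upsert(&mut self, candidate: u64) -> bool {
        let admit = match self.probation.front() {
            Some(&victim) => self.popularity(candidate) > self.popularity(victim),
            None => true, // empty queue: always admit
        };
        if admit {
            self.probation.push_back(candidate);
        }
        admit
    }

    // Evict LRU entries until the queue is within capacity.
    fn evict_lru_entries(&mut self) {
        while self.probation.len() > self.capacity {
            let _ = self.probation.pop_front();
        }
    }
}
```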

TinyLFU, with sampled LFU eviction strategy

  • handle_upsert
    • Decides whether or not to admit the candidate to the main probation queue.
      • This is done by comparing the popularity of the candidate and a sampled entry of the internal concurrent hash table.
        1. Sample five(?) entries from the internal concurrent hash table.
        2. Select the least popular one as the victim.
        3. Compare the popularity of the candidate and the victim.
      • If yes, push it to the MRU position of the main probation queue.
      • If no, evict it.
  • evict_lru_entries
    • If the main probation queue is over capacity, do the following:
      1. Sample five(?) entries from the internal concurrent hash table.
      2. Evict the least popular one.
    • Repeat until the queue is no longer over capacity.
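The sampled-LFU victim selection above could look roughly like this. A hypothetical sketch: a `HashMap` stands in for both the internal concurrent hash table and the frequency sketch, and "sampling" here just takes the first five keys from iteration order, whereas a real implementation would sample randomly.

```rust
use std::collections::HashMap;

// Assumed sample size, matching the "five(?)" above.
const SAMPLE_SIZE: usize = 5;

// Pick the least popular of up to SAMPLE_SIZE sampled keys as the victim.
fn pick_victim(table: &HashMap<u64, ()>, freq: &HashMap<u64, u32>) -> Option<u64> {
    table
        .keys()
        .take(SAMPLE_SIZE) // stand-in for random sampling
        .copied()
        .min_by_key(|k| freq.get(k).copied().unwrap_or(0))
}

// Admit the candidate only if it is more popular than the sampled victim.
fn admit(candidate: u64, table: &HashMap<u64, ()>, freq: &HashMap<u64, u32>) -> bool {
    match pick_victim(table, freq) {
        Some(victim) => {
            let cf = freq.get(&candidate).copied().unwrap_or(0);
            let vf = freq.get(&victim).copied().unwrap_or(0);
            cf > vf
        }
        None => true, // empty table: always admit
    }
}
```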

Window TinyLFU

  • handle_upsert
    • Push the candidate to the MRU position of the window queue.
    • If the window queue is over capacity, decide whether or not to admit the LRU of the window queue to the main probation queue.
      • If yes, push it to the MRU position of the main probation queue.
      • If no, evict it.
  • evict_lru_entries
    • If the main probation queue is over capacity, evict its LRU entry.
    • Repeat until the queue is no longer over capacity.
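The W-TinyLFU insert path above can be sketched as follows, under the same illustrative assumptions (a `HashMap` for the frequency sketch, `VecDeque`s for the queues, hypothetical `window_cap`/`probation_cap` fields):

```rust
use std::collections::{HashMap, VecDeque};

// Illustrative model of the W-TinyLFU insert path.
struct WTinyLfu {
    freq: HashMap<u64, u32>,
    window: VecDeque<u64>,    // front = LRU, back = MRU
    probation: VecDeque<u64>, // front = LRU, back = MRU
    window_cap: usize,
    probation_cap: usize,
}

impl WTinyLfu {
    fn pop(&self, key: u64) -> u32 {
        self.freq.get(&key).copied().unwrap_or(0)
    }

    fn handle_upsert(&mut self, candidate: u64) {
        // New entries always enter at the window queue's MRU position.
        self.window.push_back(candidate);
        // If the window overflows, its LRU entry competes against the
        // probation queue's LRU entry for admission.
        while self.window.len() > self.window_cap {
            let lru = self.window.pop_front().unwrap();
            let admit = match self.probation.front() {
                Some(&victim) => self.pop(lru) > self.pop(victim),
                None => true,
            };
            if admit {
                self.probation.push_back(lru);
            }
            // else: the window LRU is evicted.
        }
    }

    fn evict_lru_entries(&mut self) {
        while self.probation.len() > self.probation_cap {
            let _ = self.probation.pop_front();
        }
    }
}
```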

LRU

  • handle_upsert
    • Push the candidate to the MRU position of the main probation queue.
  • evict_lru_entries
    • If the main probation queue is over capacity, evict its LRU entry.
    • Repeat until the queue is no longer over capacity.
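The plain LRU policy reduces to a single queue with no frequency sketch; a minimal sketch (the names are illustrative):

```rust
use std::collections::VecDeque;

// Minimal LRU model: one queue, no admission filter.
struct Lru {
    queue: VecDeque<u64>, // front = LRU, back = MRU
    capacity: usize,
}

impl Lru {
    // Push a new entry at the MRU position, or move an existing one there.
    fn handle_upsert(&mut self, key: u64) {
        if let Some(pos) = self.queue.iter().position(|&k| k == key) {
            let _ = self.queue.remove(pos);
        }
        self.queue.push_back(key);
    }

    // Evict LRU entries until the queue is within capacity,
    // returning the evicted keys.
    fn evict_lru_entries(&mut self) -> Vec<u64> {
        let mut evicted = Vec::new();
        while self.queue.len() > self.capacity {
            evicted.push(self.queue.pop_front().unwrap());
        }
        evicted
    }
}
```
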
@tatsuya6502 tatsuya6502 changed the title Refactoring: Cache Policies Refactoring: Cache policies Jan 27, 2024
@tatsuya6502 tatsuya6502 changed the title Refactoring: Cache policies Refactoring: Cache policy implementations Jan 27, 2024
@ben-manes

fwiw, I don't know if SLRU is helpful anymore. It very well may be and would make perfect sense to be, but I have not evaluated it for a very long time. The rationale would be that the victim should have a historically high frequency and low recency, or that sketch collisions might have mistakenly admitted an entry that could age out quickly. However, that aging only makes sense with jitter, or else TinyLFU could excessively reject candidates due to an artificially hot victim.

If I remember correctly, I originally saw SLRU improve the WikiBench trace by a good margin. I later fixed flaws in my CountMin sketch that were introducing hash bias, and recall that this may have corrected it on that trace. I left it as is since I had already done all the work, the researchers had written the paper section, and I saw no downsides. It might help on some traces or it may be busy work; I just don't know.

If you are in the mood for analyzing variations, then you might try to see if it is worthwhile. But analyzing takes a lot of time, so only do that if you are interested or it would save you work.

@tatsuya6502
Member Author

I started to implement Window-TinyLFU policy here: #249
