
impl Cached for std hashmap #36

Closed
Stargateur opened this issue Jun 29, 2020 · 6 comments · Fixed by #40

Comments

@Stargateur
Contributor

Basically an unbounded cache. By the way, are there any plans to add an entry API like std's?

@jaemk
Owner

jaemk commented Jul 2, 2020

Entry api would be interesting, but I'm not sure we'd be able to support allowing arbitrary code to run for the "item missing, insert" case in the same way that separate get and set functions allow.

Implementing Cached for the std hashmap sounds straightforward though, contributions would be welcomed 🙂
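A minimal sketch of what such an impl could look like, written here against a reduced, illustrative subset of the trait (the crate's actual Cached trait has more methods, e.g. hit/miss counters, and its exact signatures may differ):

```rust
use std::collections::HashMap;
use std::hash::Hash;

// Reduced, illustrative subset of the trait, for sketching purposes only.
trait SimpleCached<K, V> {
    fn cache_get(&mut self, k: &K) -> Option<&V>;
    fn cache_set(&mut self, k: K, v: V);
    fn cache_remove(&mut self, k: &K) -> Option<V>;
    fn cache_size(&self) -> usize;
}

// An unbounded cache is just a plain HashMap: every operation maps
// directly onto an existing HashMap method.
impl<K: Hash + Eq, V> SimpleCached<K, V> for HashMap<K, V> {
    fn cache_get(&mut self, k: &K) -> Option<&V> {
        self.get(k)
    }
    fn cache_set(&mut self, k: K, v: V) {
        self.insert(k, v);
    }
    fn cache_remove(&mut self, k: &K) -> Option<V> {
        self.remove(k)
    }
    fn cache_size(&self) -> usize {
        self.len()
    }
}
```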

@Stargateur
Contributor Author

Without the entry API, a hashmap is annoying to use and inefficient. I tried another crate, lru, but as you can see it has the same problem. I don't understand how people use these two crates, considering that this operation is a very basic use case for any hashmap.

And because get borrows self mutably in our case, it makes me write:

```rust
// roughly, using the trait's cache_get / cache_set
// (`compute` stands in for the real work):
if cache.cache_get(&key).is_none() {
    let value = compute(&key);
    cache.cache_set(key.clone(), value);
}
let value = cache.cache_get(&key).unwrap();
```

That is two or three steps (and lookups) for something that could take one. Do you really use your crate like that?
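For comparison, std's own HashMap entry API collapses the same lookup-compute-insert dance into a single call, which is what this issue is asking the trait to be able to express:

```rust
use std::collections::HashMap;

// Stand-in for the expensive computation done on a cache miss.
fn expensive_compute(key: &str) -> String {
    key.to_uppercase()
}

fn main() {
    let mut map: HashMap<String, String> = HashMap::new();
    // One lookup: computes and inserts only when the key is missing,
    // and returns a reference to the cached value either way.
    let value = map
        .entry("hello".to_string())
        .or_insert_with(|| expensive_compute("hello"));
    println!("{}", value);
}
```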

@Stargateur
Contributor Author

I did some testing implementing an entry API for the sized cache. It compiles (I haven't tested it yet, but there's no reason it wouldn't work), but I don't see how to make it a trait method, because it would put a lifetime on the Cached trait, which I think would require GATs. That's unfortunate, because there is no way to get an unbounded cache out of the sized cache, so I'm going to need some glue in my code to support both unlimited and limited modes.
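For illustration, the kind of trait an entry method needs might look like the sketch below. The trait and names here are hypothetical; the point is that the returned entry has to borrow the cache, so the associated type carries its own lifetime, i.e. a generic associated type (not stable in 2020):

```rust
use std::collections::{hash_map, HashMap};
use std::hash::Hash;

// Hypothetical entry-style trait: the entry type borrows the cache,
// so the associated type needs a lifetime parameter (a GAT).
trait EntryCache<K, V> {
    type Entry<'a>
    where
        Self: 'a;

    fn cache_entry(&mut self, k: K) -> Self::Entry<'_>;
}

impl<K: Hash + Eq, V> EntryCache<K, V> for HashMap<K, V> {
    type Entry<'a> = hash_map::Entry<'a, K, V> where Self: 'a;

    fn cache_entry(&mut self, k: K) -> Self::Entry<'_> {
        self.entry(k)
    }
}
```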

@jaemk
Owner

jaemk commented Jul 2, 2020

I think adding an explicit trait method get_or_set_with that takes an fn will be easier than implementing the entire entry API and still give you what you're looking for. This wasn't originally added because it's not required for the main macro/decorator-style use-case where get/set-ing is handled by the macros.
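One possible shape for such a method, sketched here on a standalone trait and demonstrated on a plain HashMap (the name and signature the crate ends up with may differ):

```rust
use std::collections::HashMap;
use std::hash::Hash;

// Illustrative trait: if `k` is missing, run `f` once, insert the result,
// and return a reference to the cached value.
trait GetOrSetWith<K, V> {
    fn get_or_set_with<F: FnOnce() -> V>(&mut self, k: K, f: F) -> &mut V;
}

impl<K: Hash + Eq, V> GetOrSetWith<K, V> for HashMap<K, V> {
    fn get_or_set_with<F: FnOnce() -> V>(&mut self, k: K, f: F) -> &mut V {
        self.entry(k).or_insert_with(f)
    }
}

fn main() {
    let mut cache: HashMap<u32, String> = HashMap::new();
    // The closure only runs on a miss; the second call is a pure hit.
    let first = cache.get_or_set_with(1, || "expensive".to_string()).clone();
    let second = cache.get_or_set_with(1, || unreachable!()).clone();
    assert_eq!(first, second);
}
```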

@Stargateur
Contributor Author

Unfortunately, this doesn't cover all my use cases, because I use async for one of my caches. With an entry API I can do a match and only await in the vacant branch, but I can't await inside a closure passed to a function. Or maybe I could add an async_get_or_set_with, but that sounds strange.
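For example, with std's HashMap the pattern looks like this (fetch_remote is a hypothetical async lookup): the .await lives only in the vacant branch, which a plain FnOnce() -> V argument can't express:

```rust
use std::collections::hash_map::Entry;
use std::collections::HashMap;

// Hypothetical async lookup, performed only on a cache miss.
async fn fetch_remote(key: &str) -> String {
    format!("value for {}", key)
}

async fn get_cached(cache: &mut HashMap<String, String>, key: String) -> String {
    match cache.entry(key) {
        // Hit: no async work at all.
        Entry::Occupied(e) => e.get().clone(),
        // Miss: await the remote fetch, then insert and return it.
        Entry::Vacant(e) => {
            let key = e.key().clone();
            let value = fetch_remote(&key).await;
            e.insert(value).clone()
        }
    }
}
```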

@jaemk
Owner

jaemk commented Jul 3, 2020

For the async case, a possible workaround would be to put the vacant-branch logic in a separate function and use the cached proc macro to memoize that function, instead of using the cache structs directly.
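A rough sketch of that workaround, assuming the #[cached] proc macro exported by this crate (the exact attribute options and the crate features/version needed, especially for async functions, are assumptions here):

```rust
use cached::proc_macro::cached;

// Memoize only the miss-path computation; the macro generates and manages
// the backing cache, so the caller never touches get/set directly.
// `size` bounds the cache (option names can vary between crate versions).
#[cached(size = 100)]
fn load_value(key: String) -> String {
    // stand-in for the expensive work otherwise done in the vacant branch
    format!("value for {}", key)
}

fn main() {
    let a = load_value("hello".to_string()); // computes and caches
    let b = load_value("hello".to_string()); // cache hit
    assert_eq!(a, b);
}
```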

@jaemk jaemk closed this as completed in #40 Jul 3, 2020