
Optimized expressions are not cached #25

Open
bjorne opened this issue Apr 22, 2016 · 3 comments
bjorne commented Apr 22, 2016

By default JMESPath#search caches parsed expressions in order to improve performance. However, it also calls #optimize on each expression, and the optimized expressions are not cached — so the optimization runs again on every #search.

According to my (limited) tests, caching the optimized expressions improves performance by 10–30 times.

I see two ways to solve this:

  • Cache the optimized expressions within Runtime, instead of caching the non-optimized expressions in the CachingParser, or
  • Make #optimize memoize its result, so that subsequent calls return the value directly without the overhead.
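The second option could be as small as a one-line memoization. A minimal sketch (the class body and `expensive_optimization` are illustrative stand-ins, not the actual jmespath.rb internals):

```ruby
# Hypothetical sketch of option 2: memoize the result of #optimize so
# the optimization pass runs at most once per expression instance.
class Expression
  attr_reader :optimize_calls

  def initialize(ast)
    @ast = ast
    @optimize_calls = 0  # counter only for demonstration purposes
  end

  def optimize
    # ||= caches the first (truthy) result; note this would re-run if
    # the optimization could legitimately return nil or false.
    @optimized ||= begin
      @optimize_calls += 1
      expensive_optimization(@ast)
    end
  end

  private

  def expensive_optimization(ast)
    # Stand-in for the real optimization pass over the AST.
    ast.map(&:to_s)
  end
end
```

Calling `optimize` repeatedly then returns the same cached object, paying the optimization cost only once.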

I'll happily help with the coding, but wanted to get some feedback first.

bjorne commented Apr 22, 2016

Sorry, I misunderstood things a bit. Since JMESPath.search creates a new Runtime for every call, it also creates a new CachingParser for each call. This should mean that expressions are not actually cached at all, so the performance increase from caching just the optimized expressions would not be as significant as I stated.
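The situation described above can be reproduced with a toy sketch (hypothetical names, not the real jmespath.rb classes): a cache built fresh on every call can never produce a hit.

```ruby
# Toy illustration: when each search builds its own parser, the
# parser's expression cache is always empty, so identical expressions
# are re-parsed every time.
class CachingParser
  attr_reader :cache_hits

  def initialize
    @cache = {}
    @cache_hits = 0
  end

  def parse(expression)
    if @cache.key?(expression)
      @cache_hits += 1
    else
      @cache[expression] = [:ast, expression]  # stand-in for real parsing
    end
    @cache[expression]
  end
end

# Mirrors JMESPath.search creating a new runtime (and parser) per call.
# Returns the parser too, so the cache behavior can be observed.
def search_with_fresh_parser(expression)
  parser = CachingParser.new
  [parser.parse(expression), parser]
end
```

With a shared parser, the second parse of the same expression is a cache hit; with a fresh parser per call, it never is.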


iconara commented Apr 22, 2016

I suggest removing the CachingParser completely and instead introducing a CachingRuntime that provides the expression caching, plus a constant DEFAULT_RUNTIME that is used by JMESPath.search.
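A rough sketch of that suggestion (all names and the compile step are hypothetical placeholders, not the actual jmespath.rb implementation): the runtime caches fully compiled — parsed and optimized — expressions, and a single shared instance backs the module-level search.

```ruby
# Hypothetical CachingRuntime: caches compiled expressions keyed by
# their source string, guarded by a mutex for thread safety.
class CachingRuntime
  def initialize
    @cache = {}
    @mutex = Mutex.new
  end

  def search(expression, data)
    compiled = @mutex.synchronize do
      @cache[expression] ||= compile(expression)
    end
    compiled.call(data)
  end

  private

  def compile(expression)
    # Stand-in for parse + optimize; here just dotted hash lookup.
    keys = expression.split('.')
    ->(data) { keys.reduce(data) { |d, k| d && d[k] } }
  end
end

# Shared default instance, so repeated module-level searches reuse
# the same cache instead of rebuilding a runtime per call.
DEFAULT_RUNTIME = CachingRuntime.new

def search(expression, data)
  DEFAULT_RUNTIME.search(expression, data)
end
```

Because DEFAULT_RUNTIME persists across calls, repeated searches with the same expression skip both parsing and optimization.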


iconara commented Apr 22, 2016

It seems the caching got lost in two steps: first I introduced the optimization step and missed that its result wasn't cached, and then 5703438 changed things so that the caching parser is created anew for each runtime.
