Optimization: Remove unneeded lodash functions and replace polymorphic _.each #199
Conversation
Hi, thanks again for the great work. Can you:
Fast.js offers an object-only forEach which can be better optimized.
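As an illustration of what an object-only forEach buys (a sketch, not fast.js's actual source; the name `forEachObject` is made up here): a call site that only ever sees plain objects stays monomorphic, which JITs optimize more readily than a path that flip-flops between arrays and objects.

```javascript
// Sketch of an object-only forEach (illustrative; not fast.js's code).
// No Array/Object type switch: the loop shape is always the same.
function forEachObject(obj, fn, thisContext) {
  const keys = Object.keys(obj);
  for (let i = 0; i < keys.length; i++) {
    const key = keys[i];
    fn.call(thisContext, obj[key], key, obj);
  }
}

// Usage: iterate a model's attributes without a polymorphic dispatch.
const seen = [];
forEachObject({ a: 1, b: 2 }, (value, key) => seen.push(key + '=' + value));
// seen → ['a=1', 'b=2']
```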
Hey @pgilad,
Sorry that it's all questions instead of actual feedback so far. I'm really excited about the energy you're putting into these PRs for optimization, @STRML. I read through the diff and it all looks ok to me.
There was actually already an

Re: I don't care about fast.js
A few, somewhat disjointed thoughts: since npm 3 is released, I think we shouldn't worry too terribly much about npm 2. The other idea behind using a shared set of utils was to save code when using multiple ampersand modules together. All that said, it may be worth looking at the total weight of the resulting code w/ dependencies as another metric. I've done this in other cases by browserify-ing and minifying the resulting package. I'd be curious to see how these compare in that regard. I know when we switched to lodash we gained some weight :-/

Related... what does @AmpersandJS/core-team think about adding @STRML? I find his contributions hugely valuable and want to get out of his way :)
@HenrikJoreteg I would very much like to forget about npm 2, but the simple truth is that npm 3 is still slow, even though they closed the issue. I and many like me still can't use it in production. In any case, I agree: npm install bloat is far less of a big deal than simple bundle size. And webpack loaders can be used to replace lodash modules, so it's not the end of the world. We win overall with this (note that ampersand-events and its deps come with):
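For example, the webpack-based lodash replacement mentioned above could be done with a resolve alias — a hedged sketch only; the specific aliased modules below are illustrative assumptions, not this project's actual config:

```javascript
// webpack.config.js (sketch). Redirect standalone per-method lodash
// packages to one shared lodash install so the bundle carries a single
// copy of the shared internals. The alias targets are illustrative.
module.exports = {
  resolve: {
    alias: {
      'lodash.foreach': 'lodash/forEach',
      'lodash.assign': 'lodash/assign'
    }
  }
};
```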
@HenrikJoreteg I'll check; we could possibly use
I'm all for adding @STRML to the ampersand team.
There is

I think doing something like

Lodash v4 (not out yet) is for modern browsers only, drops a lot of the
Cool, thanks for chiming in @jdalton. |
Btw, when lodash v4 releases, you all could update versions without having to do all this rewiring and get file-size wins. I expect to release lodash v4 at the same time as jQuery 3.0, either Jan 12 or 16: either the day Microsoft drops IE8 support or jQuery's 10th birthday.
cool stuff, @STRML. i'm definitely in support of the team add :) |
Thanks @jdalton - I agree that the benchmark needs work, and I'm almost done with one that will take it through many more complicated use scenarios so we get a better picture of overall performance. My initial focus was on easy deopt wins & model creation speed because we create and destroy a very large volume of models in my day job. I found that the major hurdle in using

I agree that v8-only is not an ideal focus, but it's the simplest for now. Full browser-suite benchmarks are on my roadmap but are considerably more effort than a Node script with time/timeEnd.
Do you have a link to the benchmark you used, or a results summary?
Kinda sorta (some have optimized a few bits and pieces), but not really. These ES5 methods have been around for ~10 yrs now and are largely unoptimized. And now there are ES6 methods with similar treatment, e.g.
Be wary of micro-opts. It's easy to fall into the trap of modifying code paths because one is 4 million ops/sec vs. 2 million ops/sec when either is good enough. Try to go for the bigger wins, such as the lodash optimization to avoid linear searches in methods like
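The kind of "bigger win" being described — avoiding a linear search — can be sketched like this (illustrative code, not lodash's actual implementation):

```javascript
// Naive difference: for each element, scan `values` with indexOf → O(n*m).
function differenceNaive(arr, values) {
  return arr.filter(x => values.indexOf(x) === -1);
}

// Faster difference: build a Set once, so each membership check is O(1)
// and the whole operation is O(n + m) instead of O(n*m).
function differenceFast(arr, values) {
  const exclude = new Set(values);
  return arr.filter(x => !exclude.has(x));
}

// Both produce the same result; only the complexity differs:
// differenceFast([1, 2, 3, 4], [2, 4]) → [1, 3]
```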
Thanks @jdalton, I appreciate your input here. The deopt is usually very similar to:
The deopt is caused by the object/array switch in

I'm aware my method isn't perfect - trying to save time while I work on many, many projects. I figured this was an easy win without any behavior changes.
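The object/array switch in question can be sketched like so (illustrative; not the actual lodash or fast.js source). Feeding both shapes through one call site is what turns it polymorphic and can trigger the deopt:

```javascript
// A polymorphic each: one body handles both arrays and plain objects.
function each(collection, fn) {
  if (Array.isArray(collection)) {
    for (let i = 0; i < collection.length; i++) fn(collection[i], i);
  } else {
    const keys = Object.keys(collection);
    for (let i = 0; i < keys.length; i++) fn(collection[keys[i]], keys[i]);
  }
}

const out = [];
each([1, 2], v => out.push(v));   // array path compiles first...
each({ a: 3 }, v => out.push(v)); // ...then the object path makes the
                                  // call site polymorphic
// out → [1, 2, 3]
```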
Ah, so not due to lodash itself. From the Chakra side, we'll bail out when the jitted code hits the different type (so from array to object) but keep the previously jitted array-path code around. If the mixed path becomes hot, the method will be re-jitted with more generic optimizations applied, and eventually the array-jitted form will be GC'ed. Do you have any perf numbers for these calls, or some kind of context to weigh the wins?
So, this is very far from comprehensive, but I ran a simple creation / derived-getter bench here. This is by far the most common use case in my application, and it directly affects startup speed (we have a lot of models). I'm almost done adapting the test suite so we get a clearer picture of all operations. In any case, with this branch I see results between 67-77ms. On master I see 92-100ms. Before the last PR I wrote optimizing this (#198), about 140ms. Not a massive difference (certainly not like

Latest commit reverts back to lodash; using forOwn directly accomplishes the same goal, and we don't have to bring in such a large module.
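For reference, the kind of Node creation/derived-getter benchmark described above might look roughly like this — the `Model` constructor, iteration count, and derived property are hypothetical stand-ins, not the actual ampersand-state bench:

```javascript
// Hypothetical stand-in for a model with one derived getter.
function Model(attrs) {
  this.attrs = attrs;
}
Object.defineProperty(Model.prototype, 'fullName', {
  get() { return this.attrs.first + ' ' + this.attrs.last; }
});

// Time creating many models and reading the derived value — the hot path
// described above (console.time/timeEnd prints elapsed milliseconds).
console.time('create+derive');
let last;
for (let i = 0; i < 100000; i++) {
  const m = new Model({ first: 'Ada', last: 'Lovelace' });
  last = m.fullName; // exercise the derived getter
}
console.timeEnd('create+derive');
```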
@AmpersandJS/core-team Are we good with merging this?

👍 I'm good with this.

👍 from me as well
Optimization: Remove unneeded lodash functions and replace polymorphic _.each
@STRML Thank you very much for the continued support!
This gives us another 40% over #198, bringing the simple creation benchmark down from 140ms to 60ms with the two combined.
I've removed some lodash modules that were superfluous and introduced complicated dependency chains.
Fast.js does a good job of staying small and light and will introduce minimal extra bundle size for browsers while providing a significant speedup. I see a very significant deopt in both `lodash.foreach` and `fast.js/forEach` due to the Object/Array switch, which is why I require the object version directly. Lodash unfortunately does not expose an Object version directly.

Still more to do for speed; expect more PRs. I will likely reduce the test suite somehow into a benchmark. Please merge #198 first before merging this.