Allow preventing inlining by function call #350
Related discussion & impl in Closure:
I do have a preference: the annotation should be on the caller's side, not the callee's side. Engines usually care about the caller's side when performing optimizations, so this is where it'd have the greatest impact. Plus, there's already

Edit: By "caller's side" I mean like in
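To make the two sides of this distinction concrete, here is a small sketch. The `/*#__NOINLINE__*/` spelling is an assumption for illustration, not a confirmed Terser annotation at the time of this comment; the point is only that a caller-side hint attaches to an individual call expression, so each call site can opt out independently.

```javascript
// Hypothetical caller-side annotation: the hint sits at the call site.
function small(x) { return x + 1; }

function hotPath(v) {
  // This specific call asks the minifier not to inline `small` here...
  return /*#__NOINLINE__*/ small(v);
}

function coldPath(v) {
  // ...while this call stays eligible for inlining.
  return small(v);
}

console.log(hotPath(1), coldPath(2));
```

A callee-side annotation, by contrast, would sit on `small` itself and affect every call site at once.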
I wonder how the "caller"-side hint would work from a minifier's perspective. It seems like it would rarely make sense to inline in the first place, since you'd be saving just

For perf I imagine you'd want exactly the opposite on the caller side:
@leeoniya I've not had much of an issue with perf when it comes to automatic inlining, and I'd rather keep the option enabled for the 90% of code that could generally benefit from the reduced code size. It's just that 10% where I need a little more fine-grained control.

And Closure Compiler IIRC does (or at least did) inline small functions across multiple locations in its advanced optimization mode, which optimizes for speed over size somewhat.
I like this feature. It doesn't look like a lot of lines of code, for quite a significant benefit. Thanks for this issue!

I also expect this, because currently I have to disable the
Any news on this? It's a mild blocker for me. There are a few cases where I've explicitly split a closure into a separate function to force the engine to generate a smaller closure and save some memory (the closure is what's returned), and I can't ensure this remains the case without such an escape hatch.

I think it depends on your VM, but IIUC V8 and SpiderMonkey don't have unused variables sitting around in closure memory. Unless you use eval and the engine can't tell what you need in there? To answer your question, I don't have plans to work on this right now; it depends on how much time I get and how serious the incoming issues are. It might be done this year, but I have a day job to attend to :)
Regarding plans on how to implement this, I've been advised by people smarter than me to use a caller-side annotation. It might be a bother if you're writing these by hand and have multiple call sites, but then again these annotations are meant more for compilers and macros than anything else. So something like
@fabiosantoscode That's my preference, too, a caller-side annotation.
It's been a while since I've read and tested this, so I tested it just now and profiled the last two in an empty

```js
// Merged
function getSetA(init, a, b, c, d, e, f, g, h, onchange) {
  return onchange != null
    ? function () {
        window.temp = [a, b, c, d, e, f, g, h]
        if (arguments.length) {
          var temp = onchange, prev = init
          init = arguments[0]
          onchange = undefined
          try {
            if (temp) temp(init, prev)
          } finally {
            onchange = temp
          }
        }
        return init
      }
    : function () {
        if (arguments.length) init = arguments[0]
        return init
      }
}

// Split
function observableGetSet(init, a, b, c, d, e, f, g, h, onchange) {
  return function () {
    if (arguments.length) {
      window.temp = [a, b, c, d, e, f, g, h]
      var temp = onchange, prev = init
      init = arguments[0]
      onchange = undefined
      try {
        if (temp) temp(init, prev)
      } finally {
        onchange = temp
      }
    }
    return init
  }
}

function getSetB(init, a, b, c, d, e, f, g, h, onchange) {
  return onchange != null
    ? observableGetSet(init, onchange)
    : function () {
        if (arguments.length) init = arguments[0]
        return init
      }
}

// I ran the above, started the profiler, ran these four statements, and then stopped it.
window.a0 = getSetA("string", 1, 2, 3, 4, 5, 6, 7, 8, null)
window.a1 = getSetA("string", 1, 2, 3, 4, 5, 6, 7, 8, (...x) => window.temp2 = x)
window.b0 = getSetB("string", 1, 2, 3, 4, 5, 6, 7, 8, null)
window.b1 = getSetB("string", 1, 2, 3, 4, 5, 6, 7, 8, (...x) => window.temp2 = x)
```

And here are the sizes of the allocated contexts for each:
Notes:

- The big differentiating factor here is that
- Variables not used in any closure are generally kept out, which is why

So yeah, you're correct for variables not used in any closure, but not for those used in some closures.
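The context-sharing behavior behind this finding can be sketched in a few lines (names here are illustrative, not from the test above). V8 allocates one context object per call, holding every variable captured by any closure created during that call, so even the closure from the "cheap" branch shares a context that holds variables only the other branch uses.

```javascript
// Sketch of shared closure contexts: both branches' closures come from the
// same activation, so a variable captured by EITHER branch's closure lives in
// the one context object both closures point at.
function makeGetSet(init, big, onchange) {
  return onchange != null
    ? function () { onchange(big); return init; } // captures big, onchange, init
    : function () { return init; };               // captures only init, but shares
                                                  // the activation's single context
}

const cheap = makeGetSet(42, new Array(1000).fill(0), null);
console.log(cheap()); // the 1000-element array may still be retained by `cheap`
```

Splitting the observable branch into a separate function, as in `getSetB` above, gives the cheap branch its own activation and therefore its own smaller context.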
Okay, fair enough.

Thank you for your understanding :) I am passionate about making Terser the best it can be, but my job comes before issues, which come before features :)
Well, I have some functions (they have names) in a closure. All of them are complicated enough (having arguments and variables), and one of them is exported and will be called many times. I want to compress the code while keeping those inner functions from being embedded as anonymous ones into the exported function. I once set

My intent is to wrap some logic in named (not anonymous IIFE) functions (

If V8 has done some tricks to achieve this for anonymous IIFE functions, please help me with some technical articles or links to the relevant C++ source code. Thanks!
@gdh1995 when this has been implemented you should get around your issue. I removed
What about this: add an option to define a limit `X: number`, and if a function belongs to a huge closure which has more than X variables and functions, then simply avoid embedding its definition when the current scope is one of the children of the function's scope.

The idea behind it is: if I've hard-coded lots of functions (e.g. >= 20 items), then it means I've extracted enough code to avoid allocating temporary function instances frequently, so just skip embedding.

I've written a naive version of this idea in gdh1995@1365bbc and I'm expecting reviews.

Updated: I've filed #500 to discuss this.
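The proposed check boils down to a simple threshold test. This is a rough sketch of the idea only, not Terser's actual scope analysis, and the function and parameter names are made up:

```javascript
// Proposed heuristic sketch: if the enclosing closure declares at least
// `limit` variables/functions, skip embedding its functions into child scopes.
function shouldSkipEmbedding(scopeDeclarationCount, limit) {
  // A closure packed with many declarations was likely hand-extracted on
  // purpose, so re-embedding its functions would undo that work.
  return scopeDeclarationCount >= limit;
}

console.log(shouldSkipEmbedding(25, 20)); // true
console.log(shouldSkipEmbedding(3, 20));  // false
```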
Could you file a separate issue on that request? That's distinct from my feature, which is for a granular per-call opt-out.
Isiah Meadows
contact@isiahmeadows.com
www.isiahmeadows.com
There is now a

Thanks!
@fabiosantoscode Sorry, but I think the

Expected: a function which is used only once can keep itself standalone.

The real result: the function is still embedded as an IIFE.

Here's my test:

```js
// input.js
(function(){
  function foo(val) { return val; }
  function bar() {
    var pass = 1;
    pass = /*@__NOINLINE__*/ foo(pass);
    window.data = pass;
  }
  window.bar = bar;
  bar();
})();
```

And:

```js
!function(){function bar(){var pass=1;pass=function(val){return val}(pass),window.data=pass}window.bar=bar,bar()}();
```

I think a better result should be
My bad! Thanks for the test case!
BTW, that's also the precise feature I wanted added in the initial bug.
Reopened #350 (Nov 13, 2019).
Yeah @isiahmeadows, my intention was for it to work this way, but I did something wrong, because there are two ways that Terser can inline a function and I didn't cover both :)
Um, it seems that this is not solved yet?

@gdh1995 I can't reproduce the issue anymore with the latest Terser.

Oh, actually I can :)

I've made some progress towards this.

I'm going to release 4.6.2 tomorrow with this. I've tried a few ways that noinline might fail to prevent an inline, but there might still be something missing.

Oops, I tested terser@4.6.2 just now and all of the three

I'll test more and put a shorter example here in 1~2 days.

Thank you, I'll look into it once there's a repro. If there are three cases, please include all three. Each might fail for its own specific reason.
@fabiosantoscode Sorry, I made a mistake when doing the tests. The comments do work now. But now I have a new, related problem. I once wrote:

```js
// getGulpUglify means: return `gulp-uglify` with `terser`
stream = stream.pipe(getGulpUglify()(config));
stream = stream.pipe(getGulpUglify()({...config, mangle: null}));
```

and the first pass removed all

Now I test

Therefore, I wonder:
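For a double-minification pipeline like the one above, one possible workaround is to tell the first pass's output stage to keep annotation comments so the second pass can still see them. This is a sketch under the assumption that Terser 4.x's `output.comments` filter is in play; it is not a confirmed fix for this specific setup:

```javascript
// First-pass config sketch: preserve Terser annotation comments in the output
// so a later minification pass can still honor them.
const firstPassConfig = {
  compress: true,
  mangle: true,
  output: {
    // Keep /*@__NOINLINE__*/, /*@__INLINE__*/ and /*@__PURE__*/ comments.
    comments: /@__NOINLINE__|@__INLINE__|@__PURE__/,
  },
};

console.log(firstPassConfig.output.comments.test("@__NOINLINE__")); // true
```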
Thanks. Now it works as I expect.

Awesome. If you have any further issues regarding this feature please open a new issue! Cheers!
We are heavily reliant on `reduce_funcs`, which was removed in Terser 5, but [will be added back in 5.2.2 or 5.3](terser/terser#350). In the meantime, we have to disable `reduce_vars` to prevent function inlining, although this has a large size cost since it also prevents variable inlining.
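The option the quote refers to can be sketched as a minimal Terser config: disabling `reduce_funcs` alone keeps single-use functions from being inlined, without the size cost of turning off `reduce_vars` entirely (option names as described in the quote; whether this suffices for a given build is an assumption):

```javascript
// Sketch of the narrower opt-out the quote asks for (Terser 5.2.2+/5.3):
const terserOptions = {
  compress: {
    reduce_funcs: false, // keep single-use functions from being inlined
    reduce_vars: true,   // still allow variable substitution
  },
};

console.log(terserOptions.compress.reduce_funcs); // false
```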
Just a simple feature request: there should be an escape hatch for `inline` to force a function to not be inlined. Of course, this is almost never a good idea, but in performance-critical scenarios there might be a reason to explicitly not inline a function, mainly to guide engines to inline the caller more easily. Here's a few examples from Node's own code base: 1 2 3. I've also done this in Acorn many years back, speeding it up by 5% simply by not inlining stuff into a giant `switch` statement.

When a function would not normally be inlined, this would of course be a no-op and could generate a warning (but it doesn't have to).
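The hand-outlining pattern described above looks roughly like this (all names here are made up for illustration, not taken from Acorn or Node): pulling a rarely-taken, heavy case body out of a hot `switch` keeps the dispatching function small, which makes it easier for an engine to inline the dispatcher itself.

```javascript
// Cold, bulky logic lives in its own function instead of inside the switch.
function readEscapeSequence(code) {
  return code === 110 ? "\n" : code === 116 ? "\t" : String.fromCharCode(code);
}

function readChar(code) {
  switch (code) {
    case 92:           // backslash: rare, heavy path stays out-of-line
      return readEscapeSequence(110);
    default:           // hot path stays tiny and inline-friendly
      return String.fromCharCode(code);
  }
}

console.log(readChar(97)); // "a"
```

An escape hatch in the minifier would preserve exactly this split instead of folding `readEscapeSequence` back into the `switch`.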