[webpack-5] Default webpackMode to "eager" in mode: "development" #8644

Open
TheLarkInn opened this issue Jan 17, 2019 · 15 comments · May be fixed by #8645
Comments

@TheLarkInn
Member

TheLarkInn commented Jan 17, 2019

Feature Request

Propose altering the default webpackMode for async chunks from "lazy" to "eager" in mode: development.

Background

What is motivation or use case for adding/changing the behavior?
In my work with @MLoughry from the Outlook Web App team (arguably the largest web application deployed at Microsoft), we discovered during the upgrade from webpack 4 to webpack 5@alpha.6 (or whichever build is currently latest) that their "inner development loop" (i.e., their local dev environment using webpack-dev-server and incrementally rebuilding a single entrypoint on each change) suffered a 5-20% build-time regression.

Some of this was alleviated by turning unsafeCache: true back on for the relevant config properties, but it still wasn't the win we were expecting. (It turns out caching really isn't relevant here in watch mode, because WDS uses memory-fs anyway.)
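For context, a minimal sketch of the kind of config the team re-enabled. Both flags are real webpack options; the exact properties OWA toggled are not spelled out in this issue, so treat this as illustrative:

```javascript
// Illustrative only: re-enabling webpack's unsafe resolution caches in dev.
// Both options exist in webpack 4/5; defaults and exact semantics differ by version.
module.exports = {
	resolve: {
		// Cache module resolution results across rebuilds.
		unsafeCache: true
	},
	module: {
		// Cache parsed module requests across rebuilds.
		unsafeCache: true
	}
};
```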

So we decided to take some watch-mode profiles to try to identify areas of opportunity or concern in their builds. Below is a screenshot of a CPU profile segment starting from a file change and ending when webpack finishes the incremental compilation:

[screenshot: CPU profile segment from file change to end of incremental compilation]

This segment shows the incremental rebuild taking 62.779 seconds for a single entrypoint (roughly 16-18k modules).

Upon further inspection we noticed that ~60% (43.499 seconds) of this time was spent performing "file creation". See the screenshot below, which highlights the emitFiles function being called continuously.

[screenshot: profile with repeated emitFiles calls highlighted]

For this function to execute so many times, there would in theory have to be a significant amount of chunk generation happening (usually the result of a team making heavy use of code-splitting throughout their application). After @MLoughry scanned their usage for that entry, he confirmed that they are indeed using import() 200-300 times throughout that single entry point.

This confirmed my theory that chunk creation in webpack has at least O(n) (linear) time complexity in the number of chunks.

How should this be implemented in your opinion?
The development-loop time for OWA (a team of easily 150-300 engineers) is incredibly important: reducing it by a factor of 2-3x could save millions of dollars of wasted engineering cost. Therefore, we don't have to focus on the performance of the webpage itself while in the development environment. (Webpack already makes these kinds of tradeoffs today via the mode feature.)

Since chunk creation is the main cost, we could reduce the number of bundles generated in the development environment by setting webpackMode for all async bundles to eager. Chunks would still load via Promise.resolve() (retaining the async contract of import()), while the modules are emitted into the entry chunk rather than into additional lazy bundles.
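Webpack already supports this per-import via the magic comment `import(/* webpackMode: "eager" */ "./module")`; this proposal would make it the default in development. Below is a simplified model of why this is safe for callers (not webpack's actual generated runtime; the module name and registry are made up for illustration):

```javascript
// Simplified model of "eager" mode: the module is already in the main
// bundle, so the generated import() returns Promise.resolve(exports)
// instead of fetching a separate chunk file over the network.

// Hypothetical registry of modules already present in the entry bundle.
const bundledModules = {
	"./math.js": { add: (a, b) => a + b }
};

function eagerImport(request) {
	// No chunk file is emitted or fetched: resolve straight from the bundle.
	return Promise.resolve(bundledModules[request]);
}

// Callers written against lazy import() keep working unchanged,
// because they still receive a Promise and resolve asynchronously.
eagerImport("./math.js").then(math => {
	console.log(math.add(2, 3)); // 5
});
```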

To test this theory, we broke into the ImportParserPlugin and modified the default behavior from "lazy" to "eager".

let chunkName = null;
let mode = "lazy"; // ← changed to "eager" for this experiment
let include = null;
let exclude = null;
const groupOptions = {};

Then we ran the exact same profiles to calculate the difference in incremental build times. The results were astonishing. Below is a segmented profile covering (once again) from file change to recompilation completion.

[screenshot: profile of the incremental rebuild with the "eager" default]

As you can see in the highlighted segment, the new incremental build time, from the exact same set of modules, was 5.036 seconds.

We then dug deeper to identify what the cost of the emitFile functions was now. See this subsection below:

[screenshot: emitFiles cost after the change]

As seen in the screenshot, the cost of file creation has now been reduced to 0.651 seconds, bringing it down to near-O(1).

Proposal

With the huge performance gains realized in this research, we would like to propose changing the default webpackMode to "eager" for mode: development. The tradeoff is a potentially slower-booting page; however, this kind of tradeoff already has precedent in decisions like eval source maps, skipping minification, etc.

I don't believe this has the potential to cause a "functional break" in the development experience for users, because imports will still use Promise.resolve() to load modules, thereby retaining their async behavior.

Implementation Details

It appears that the "infrastructure" is already in place to pass options from the ImportParserPlugin:

constructor(options) {
	this.options = options;
}

apply(parser) {
	parser.hooks.importCall.tap("ImportParserPlugin", expr => {
		if (expr.arguments.length !== 1) {
			throw new Error(
				"Incorrect number of arguments provided to 'import(module: string) -> Promise'."
			);
		}
		const param = parser.evaluateExpression(expr.arguments[0]);
		let chunkName = null;
		let mode = "lazy";
		let include = null;
		let exclude = null;
		const groupOptions = {};

In addition, the ImportParserPlugin already receives its options from the ImportPlugin:

new ImportParserPlugin(options).apply(parser);
};

Options are obtained from the configuration's options.module property:

).apply(compiler);
new ImportPlugin(options.module).apply(compiler);
new SystemPlugin(options.module).apply(compiler);

Therefore, we would update the ImportParserPlugin to set webpackMode based on a newly created property in options.module called defaultAsyncChunkMode, which mode: development would default to "eager". (We could alternatively make this a boolean property.)
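To make the proposal concrete, a sketch of what opting out might look like. Note that defaultAsyncChunkMode is the HYPOTHETICAL option proposed in this issue, not an existing webpack setting:

```javascript
// HYPOTHETICAL config: `defaultAsyncChunkMode` is the option proposed
// above and does not exist in webpack today.
module.exports = {
	mode: "development",
	module: {
		// Proposed default in development would be "eager"; a user could
		// restore today's behavior by setting it back to "lazy".
		defaultAsyncChunkMode: "lazy"
	}
};
```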

Rationale

By making this a default we hold true to our fundamentals of #0CJS and keep sane defaults for development environments. Users who want to opt out of this behavior can do so via options.module.defaultAsyncChunkMode. This also reduces incremental builds for scaled builds like OWA by a factor of 7!

Breaking Change Implications

This would technically be considered a behavioral breaking change; however, we are in the webpack@5 alpha, and this would be the place to make it.

For the ~250 lazy chunks on Outlook Web App, we expect there to be no functional regressions from this change (and intend to test the submitted PR during its review process).

Are you willing to work on this yourself?
Bring it on. 😏

@niieani
Contributor

niieani commented Jan 18, 2019

Hmm... It's nice to have that option, but...

I remember doing the opposite in the past, i.e. arbitrarily forcing a large number of chunks for better performance in dev mode. I would split chunks on purpose, due to the high cost of writing huge files in dev mode, especially when generating sourcemaps.

My reason was that with chunking-by-default enabled, a change to a single source file causes only a single, small chunk + sourcemap to be re-rendered, emitting e.g. ~100KB of code.

On the other hand, with a single chunk of e.g. 50MB, you always have to emit the whole thing (and its sourcemap, which is especially expensive). This was also problematic for browsers, which would choke while loading sourcemaps that huge.

@matheus1lva
Member

It would also be good to backport this to v4, since it is not a substantial change nor hard to implement.

@TheLarkInn
Member Author

@niieani I think this specifically presents itself at scale. But ideally it will have wins for both large and small apps, based on the percentages.

@asapach
Contributor

asapach commented Jan 19, 2019

@TheLarkInn, would it make sense instead to increase optimization.splitChunks.minSize in development mode, so that a chunk is not emitted if it's too small? That should solve the issue where a lot of chunks are being generated.
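A sketch of what @asapach's suggestion could look like; splitChunks.minSize is a real webpack option, but the threshold value here is illustrative, not taken from the issue:

```javascript
// Illustrative sketch: raise minSize in development so tiny split
// chunks are folded back into their parents instead of being emitted.
module.exports = {
	mode: "development",
	optimization: {
		splitChunks: {
			// Only emit a separate chunk if it would be at least ~500 KB
			// (value chosen for illustration).
			minSize: 500000
		}
	}
};
```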

@edmorley
Contributor

This would help with the performance issues reported in #8557 (cc @chenyiqiao) and #4636.

That said, has anyone looked into improving chunk build performance in general, so this workaround isn't needed?

@sokra
Member

sokra commented Jan 24, 2019

One has to note that the config contains this:

optimization: {
	removeAvailableModules: false,
	removeEmptyChunks: false,
	splitChunks: false,
}

This results in chunks no longer being optimized, so they probably contain a lot of duplicate modules. Using eager by default puts all modules into one chunk, which removes all of this duplication.

Module duplication will cause chunk files to grow, which increases the total cost of emitFiles.

The default settings for splitChunks would group module duplication into new chunks, which would decrease total chunk size.

Enabling removeAvailableModules would remove all modules which are included in the entry chunk from child chunks. This would also decrease total chunk size.

My guess is that just not disabling splitChunks would be enough. I'm unsure whether there was a reason splitChunks was disabled; maybe there is a performance issue there. This is something that needs to be tested... (Give me a profile with splitChunks enabled.)
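A sketch of the configuration @sokra is asking to be profiled, i.e. the OWA overrides above flipped back to webpack's defaults (the exact OWA config is not shown in this issue, so this is an assumption about its shape):

```javascript
// Illustrative sketch: restore the optimizations the profiled config disabled.
module.exports = {
	mode: "development",
	optimization: {
		// Strip modules from child chunks when the entry chunk already has them.
		removeAvailableModules: true,
		removeEmptyChunks: true,
		// Use webpack's default splitChunks heuristics for async chunks,
		// which group duplicated modules into shared chunks.
		splitChunks: {
			chunks: "async"
		}
	}
};
```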

@chenyiqiao

@edmorley Yeah, this is great. What shocks me the most is that literally nobody reacted to #8557. It's definitely unacceptable if incremental build time is >30s. Maybe everybody gives up on code splitting in dev mode, as I do...

@MLoughry
Contributor

@chenyiqiao The slowness of the chunk graph algorithm had already been brought up previously (#8126), and @sokra had already done work as a result to try and optimize it. However, at the very least, I don’t understand it well enough to know how to reimplement it with the same behavior but orders of magnitude faster.

@sokra
Member

sokra commented Jan 30, 2019

Sorry that the other issues got lost. I'd like to fix that, but I need to play with the build, so I need a repro of some sort. Fixing it blindly is very difficult. Profiles also help a bit, but a lot of information gets lost during function inlining.

@shytikov

shytikov commented May 7, 2019

Is there any workaround for the moment? Is it possible to globally set webpackMode to "eager" either in configuration or using command line?

@MLoughry
Contributor

MLoughry commented May 7, 2019

@shytikov I just published a plugin to perform the same behavior: https://www.npmjs.com/package/eager-imports-webpack-plugin

@shytikov

shytikov commented May 8, 2019

@MLoughry thank you! Nice one!

I have also found another workaround. Maybe it's less optimal, but we can limit the number of chunks by adding LimitChunkCountPlugin.

I believe it adds a post-processing step, which is a con, but you don't need to install anything, which is a pro.
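For reference, LimitChunkCountPlugin ships with webpack under webpack.optimize; a minimal sketch of the workaround looks like this (maxChunks: 1 collapses everything into a single bundle):

```javascript
// Sketch: cap the total number of chunks so async imports collapse
// back into the main bundle during development.
const webpack = require("webpack");

module.exports = {
	mode: "development",
	plugins: [
		new webpack.optimize.LimitChunkCountPlugin({
			maxChunks: 1 // merge all chunks into one
		})
	]
};
```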

@sibelius

sibelius commented May 8, 2019

Does https://www.npmjs.com/package/eager-imports-webpack-plugin or LimitChunkCountPlugin break HMR? Or does HMR just work well?

@MLoughry
Contributor

MLoughry commented May 8, 2019

@sibelius I've not tested it with HMR, as my team doesn't use HMR.

@sokra
Member

sokra commented Jul 26, 2019

There is also the MinChunkSizePlugin which might be useful in these cases.

Different chunking should not affect HMR.
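MinChunkSizePlugin also ships under webpack.optimize; unlike LimitChunkCountPlugin it merges only chunks below a size threshold rather than capping the total count. A sketch, with an illustrative threshold not taken from the thread:

```javascript
// Sketch: merge small chunks instead of limiting the chunk count outright.
const webpack = require("webpack");

module.exports = {
	mode: "development",
	plugins: [
		new webpack.optimize.MinChunkSizePlugin({
			minChunkSize: 100000 // merge chunks smaller than ~100 KB (illustrative)
		})
	]
};
```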
