
Make initial incremental/watch builds as fast as normal builds #42960

Closed
sokra wants to merge 7 commits

Conversation

sokra
Contributor

@sokra sokra commented Feb 25, 2021

Currently, just running `tsc` is much faster than `tsc --watch` or `tsc --incremental`. There are already multiple issues describing that problem (I haven't verified that all of them are really the same problem):

After digging into the source code for a while, I think I found the cause of that:

For incremental or watch builds, TypeScript needs to compute two additional things for each module:
the "shape" and the "referenced modules".

These are needed to calculate which modules must be invalidated when a file has changed.
When a file has changed, TypeScript calculates a new "shape", and if the new shape differs from the old one, it follows the graph of "referenced modules" upwards and invalidates those modules too (recursively).
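This propagation can be sketched as a walk over the reverse-reference graph. The following is a simplified model with hypothetical names (`Graph`, `computeShape`, `invalidate`), not the actual compiler code; here the "shape" is approximated by just the exported names:

```typescript
// Simplified model of shape-based invalidation (not actual tsc internals).
// When a file changes, recompute its shape; if the shape differs from the
// cached one, walk the reverse-reference graph upwards and invalidate the
// referencing modules too, recursively.
type ModuleId = string;

interface Graph {
  referencedBy: Map<ModuleId, Set<ModuleId>>; // reverse edges: who imports me
  shapes: Map<ModuleId, string>;              // cached "shape" per module
}

function computeShape(id: ModuleId, contents: Map<ModuleId, string>): string {
  // Stand-in for declaration emit: just collect the exported names.
  const src = contents.get(id) ?? "";
  return (src.match(/export \w+ (\w+)/g) ?? []).join(",");
}

function invalidate(
  graph: Graph,
  changed: ModuleId,
  contents: Map<ModuleId, string>,
  invalidated = new Set<ModuleId>()
): Set<ModuleId> {
  invalidated.add(changed);
  const oldShape = graph.shapes.get(changed);
  const newShape = computeShape(changed, contents);
  graph.shapes.set(changed, newShape);
  if (newShape !== oldShape) {
    // Shape changed: modules that reference this one must be re-checked too.
    for (const parent of graph.referencedBy.get(changed) ?? []) {
      if (!invalidated.has(parent)) {
        invalidate(graph, parent, contents, invalidated);
      }
    }
  }
  return invalidated;
}
```

A body-only change leaves the shape identical, so the invalidation stops at the changed file; an export change propagates upwards.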

Currently TypeScript uses "emit as declaration file" to calculate the "shape" and the "referenced modules", and this is basically as expensive as a full emit. Most of the initial build time is spent on that.

So the initial incremental/watch build is as slow as running tsc with emit.
But tsc --noEmit is very fast compared to that.

Refactoring idea

I think we don't actually need to compute the "shape" on the initial build. The "shape" is only an optimization so that non-shape-affecting changes to a file don't invalidate importing modules.

I propose not computing the "shape" on the initial build. When a file changes, use the module content (version) instead to check whether we need to invalidate the parents. This will cause unnecessary invalidation if only internals have changed, but that seems like an acceptable trade-off (keep in mind that typechecking is much faster than computing the shape).

In addition, compute the "shape" when a module is invalidated because of a file change or a "shape" change of a referenced module (only real "shape" changes; don't do it when we haven't computed the old shape, which avoids computing too many shapes during a watch rebuild).

Note: because we don't compute the shape, we also don't have access to exportedModulesFromDeclarationEmit and have to use all references of the module instead. This causes the module to be invalidated more often, until an invalidation is triggered by a real shape change, at which point it computes its own shape and exportedModulesFromDeclarationEmit.

Summary: lazily compute "shapes" and "exported modules" on first invalidation. While the old shape and exported modules are missing: invalidate referencing modules on file change instead of shape change, and invalidate a module if any referenced module changes instead of only exported ones.
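A rough sketch of this lazy strategy (hypothetical names and data structures, not the PR's actual implementation): shapes start out uncomputed, the first change propagates pessimistically, and subsequent changes compare real shapes.

```typescript
// Sketch of lazy shape computation: an absent entry means "not computed
// yet". On the first change there is no old shape to compare against, so we
// pessimistically invalidate all referencing modules; from then on the
// cached shape makes invalidation precise again.
type ModuleId = string;

interface LazyState {
  shapes: Map<ModuleId, string>;              // absent entry = not computed yet
  referencedBy: Map<ModuleId, Set<ModuleId>>; // reverse edges: who imports me
}

function onFileChanged(
  state: LazyState,
  id: ModuleId,
  computeShape: (id: ModuleId) => string
): Set<ModuleId> {
  const invalidated = new Set<ModuleId>([id]);
  const oldShape = state.shapes.get(id);
  const newShape = computeShape(id); // computed lazily, on first invalidation
  state.shapes.set(id, newShape);
  // Without an old shape, any content change must invalidate the parents.
  if (oldShape === undefined || oldShape !== newShape) {
    for (const parent of state.referencedBy.get(id) ?? []) {
      invalidated.add(parent);
    }
  }
  return invalidated;
}
```

The sketch omits one detail described above: in the PR, referencing modules get their own shapes computed only when the invalidation came from a real shape change, so a pessimistic first-change invalidation leaves them lazy.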

So that's what I did.

Benchmark

I ran different test cases on a project with about 3000 files. Times are totals as reported by tsc. I did not do any averaging, as the results are pretty clear:

with isolatedModules:

| Test case | master | This PR | Note |
| --- | --- | --- | --- |
| tsc | 23.34s | 23.01s | equal |
| tsc --incremental (initial) | 67.51s ⚠️ | 24.15s | large improvement |
| (with cache, no change) | 6.79s | 6.75s | equal |
| (with cache, non shape affecting change) | 8.12s | 9.19s ❗ | initial shape computation, slower |
| (with cache, same file again) | 8.09s | 8.05s | shape is already computed, equal |
| (with fresh cache, shape affecting change) | 9.65s | 9.30s | initial shape computation |
| (with cache, same file again) | 9.59s | 9.25s | shape is already computed |
| (with cache, same file again) | 9.58s | 9.29s | shape is already computed |
| tsc --watch (startup) | 70.98s ⚠️ | 26.24s | large improvement |
| (save without change) | 0.03s | 0.03s | equal |
| (non shape affecting change) | 0.36s | 1.47s ❗ | initial shape computation, slower |
| (same file again) | 0.31s | 0.21s | shape is already computed, equal |
| (with fresh watcher, shape affecting change) | 2.11s | 1.40s | initial shape computation |
| (same file again) | 1.49s | 1.21s | shape is already computed |
| (same file again) | 1.45s | 1.07s | shape is already computed |
| tsc --watch --incremental (initial) | 71.34s ⚠️ | 26.84s | large improvement |
| (from cache) | 9.78s | 9.69s | equal |

without isolatedModules:

| Test case | master | This PR | Note |
| --- | --- | --- | --- |
| tsc | 23.50s | 23.03s | equal |
| tsc --incremental (initial) | 67.68s ⚠️ | 24.02s | large improvement |
| (with cache, no change) | 6.89s | 6.77s | equal |
| (with cache, non shape affecting change) | 7.87s | 9.34s ❗ | initial shape computation, slower |
| (with cache, same file again) | 7.92s | 8.04s | equal |
| (with fresh cache, shape affecting change) | 9.55s | 9.27s | initial shape computation, ironically faster as shapes of referencing files are not computed |
| (with cache, same file again) | 9.56s | 10.37s ❗ | initial shape computation of referencing files, slower |
| (with cache, same file again) | 9.55s | 9.70s | equal |
| tsc --watch (startup) | 71.30s ⚠️ | 26.27s | large improvement |
| (save without change) | 0.03s | 0.03s | equal |
| (non shape affecting change) | 0.34s | 1.42s ❗ | initial shape computation, slower |
| (same file again) | 0.26s | 0.21s | equal |
| (with fresh watcher, shape affecting change) | 1.91s | 1.38s | initial shape computation, ironically faster as shapes of referencing files are not computed |
| (same file again) | 1.55s | 2.15s ❗ | initial shape computation of referencing files, slower |
| (same file again) | 1.52s | 1.47s | equal |
| tsc --watch --incremental (initial) | 73.30s ⚠️ | 26.99s | large improvement |
| (from cache) | 9.93s | 9.57s | equal |

Raw data

Summary: tsc --incremental and tsc --watch are now as fast as pure tsc (see ⚠️); the first change to a file in watch/incremental mode takes a small hit (see ❗).

Test suite

All tests are passing. I updated a lot of baselines, as signatures are now missing from tsbuildinfo (they are 0, which marks them for lazy computation), but there is no functional change.

I needed to change some tests that verify that a clean build and an incremental build result in the same build info, which is no longer true when signatures are lazily computed.

I disabled lazy shape computation for some compileOnSave tests in `unittests:: tsserver:: compileOnSave` and `unittests:: tsc-watch:: emit file --incremental`, and for 8 tests using assumeChangesOnlyAffectDirectDependencies, as these tests expect certain behavior that lazy shape computation would change. Note that the new behavior is not wrong, it just doesn't fit those test cases.

Edge cases

There are a few edge cases one might run into:

A

Make a non-shape-affecting change to a file that affects the global scope (and is not a declaration file).

Since we don't know the change is non-shape-affecting on this first change, this will need to typecheck all files.

The second change will no longer have this behavior, since the shape has been computed by then.

Note that all CommonJS files are currently considered as "affecting the global scope", so this might be a problem for CommonJS projects. I guess this is a bug and CommonJS modules should probably not be flagged this way. Note: I fixed that.

B

Make a shape-affecting change to a file that is referenced by many other modules.

On the second change this will trigger a shape computation on all referencing modules, which might cause an extra delay (similar to the initial shape computation before this PR).

We could add a limit on how many shapes are computed at maximum during a single build to avoid this, but in the worst case that would give it the same performance as the current initial builds.
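Such a cap could look roughly like this (a hypothetical `MAX_SHAPES_PER_BUILD` budget; nothing like this is implemented in the PR):

```typescript
// Hypothetical budget on shape computations per rebuild: once the budget
// is exhausted, return undefined so callers fall back to invalidating by
// file version (the pessimistic pre-shape behavior), deferring the
// remaining shape computations to a later rebuild.
const MAX_SHAPES_PER_BUILD = 100;

function makeBudgetedComputeShape(
  computeShape: (id: string) => string,
  budget: number = MAX_SHAPES_PER_BUILD
): (id: string) => string | undefined {
  let used = 0;
  return id => (used++ < budget ? computeShape(id) : undefined);
}
```

A caller receiving `undefined` would treat the module as "shape unknown" and invalidate pessimistically, exactly as on the first change.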

🔍 Search Terms

slow, incremental, watch

✅ Viability Checklist

My suggestion meets these guidelines:

  • This wouldn't be a breaking change in existing TypeScript/JavaScript code
  • This wouldn't change the runtime behavior of existing JavaScript code
  • This could be implemented without emitting different JS based on the types of the expressions
  • This isn't a runtime feature (e.g. library functionality, non-ECMAScript syntax with JavaScript output, new syntax sugar for JS, etc.)
  • This feature would agree with the rest of TypeScript's Design Goals.

Please verify that:

  • There is an associated issue in the Backlog milestone (required)
  • Code is up-to-date with the master branch
  • You've successfully run gulp runtests locally
  • There are new or updated unit tests validating the change

📃 Motivating Example

Incremental builds are unattractive compared to full builds when using TypeScript for typechecking with noEmit: true.

💻 Use Cases

What do you want to use this for?

next.js

What shortcomings exist with current approaches?

Incremental builds are too slow, so you have to choose between:

  • slow uncached, but super fast cached builds (--incremental)
  • fast uncached and cached builds (not --incremental)

What workarounds are you using in the meantime?

Not using --incremental at all

PS: tsbuildinfo reference list optimization

As a little extra, I changed the serialization of tsbuildinfo a bit so that duplicate lists of references are deduplicated (this is the first commit). This isn't strictly necessary, but in an intermediate version of this refactoring I just used all modules as fallback references, and this resulted in a huge slowdown due to writing tsbuildinfo, so I optimized it a bit. I left it here because it decreases the tsbuildinfo size, which is good when it has to be transferred, e.g. between CI builds.
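The deduplication amounts to interning identical reference lists, so each distinct list is serialized once and files refer to it by index (illustrative only; the actual tsbuildinfo format differs):

```typescript
// Intern identical reference lists: each distinct list is serialized once
// and every file points at its list by index instead of repeating it.
function dedupeReferenceLists(
  refsPerFile: Map<string, number[]>
): { lists: number[][]; fileToList: Map<string, number> } {
  const lists: number[][] = [];
  const keyToIndex = new Map<string, number>();
  const fileToList = new Map<string, number>();
  for (const [file, refs] of refsPerFile) {
    const key = refs.join(",");
    let index = keyToIndex.get(key);
    if (index === undefined) {
      index = lists.length;
      keyToIndex.set(key, index);
      lists.push(refs);
    }
    fileToList.set(file, index);
  }
  return { lists, fileToList };
}
```

With many files sharing the same (possibly large) fallback reference list, this shrinks the serialized output from one list per file to one list per distinct set.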

@typescript-bot
Collaborator

Thanks for the PR! It looks like you've changed the TSServer protocol in some way. Please ensure that any changes here don't break consumers of the current TSServer API. For some extra review, we'll ping @sheetalkamat, @amcasey, @mjbvz, @minestarks for you. Feel free to loop in other consumers/maintainers if necessary

@typescript-bot typescript-bot added the For Uncommitted Bug PR for untriaged, rejected, closed or missing bug label Feb 25, 2021
@typescript-bot
Collaborator

This PR doesn't have any linked issues. Please open an issue that references this PR. From there we can discuss and prioritise.

@ghost

ghost commented Feb 25, 2021

CLA assistant check
All CLA requirements met.

@typescript-bot typescript-bot added For Milestone Bug PRs that fix a bug with a specific milestone and removed For Uncommitted Bug PR for untriaged, rejected, closed or missing bug labels Feb 25, 2021
@sokra sokra marked this pull request as draft February 26, 2021 11:40
@sokra sokra marked this pull request as ready for review February 26, 2021 15:30
@sokra
Contributor Author

sokra commented Feb 26, 2021

Update:

  • shape update propagation didn't work as intended. That's fixed now.
  • fixed a bug where CommonJS modules were flagged as affecting the global scope. They no longer do that.
  • I noticed that isolatedModules will affect the behavior, so I benchmarked that and added the results.

@sokra sokra force-pushed the performance/lazy-shapes branch 2 times, most recently from 8225bf2 to b926fcf Compare March 3, 2021 08:01
@sokra
Contributor Author

sokra commented Mar 3, 2021

@sheetalkamat @weswigham I guess that's pretty difficult to review, since there are many changed files. If it helps we can hop on a call to walk through together. Or if there is anything else I can help with, just let me know.

@sheetalkamat
Member

We will be discussing in the design meeting whether this should be default or opt-in behavior. After that I will review the changes. But looking at the change, I feel like we don't need to populate the signature at all: just mark in state whether all source files are changed files (because there was no oldState, or some other condition marks all files as needing emit or invalidates the semantic cache, etc.); in that case we shouldn't compute the signature, otherwise compute it. That seems like a simpler and correct implementation. Again, I haven't reviewed it at all, which I will do after the design meeting so that I have all the inputs to review it.

@sheetalkamat
Member

https://github.com/microsoft/TypeScript/compare/lazySignatureCompute is a commit I just created with what I meant when I said we can simplify this. Obviously I haven't spent time looking at the test failures or at the change in depth to see if it's correct and satisfies what you propose.

Note that as part of design meeting #43069 we discussed that we will make this the default, without any option to disable this behavior. We can add a flag later if people ask for it.

@sokra
Contributor Author

sokra commented Mar 4, 2021

> lazySignatureCompute (compare) is a commit I just created with what I meant when I said we can simplify this. Obviously I haven't spent time looking at the test failures or at the change in depth to see if it's correct and satisfies what you propose.

I see what you mean. That's probably a better way to solve it than my added NOT_COMPUTED_YET value for the signature. With that, we could use undefined as the "not computed" value and skip the initial computation via the additional flag you added.
I guess I can integrate that into the PR.

> we will make this default without any option to disable this behavior

That's great. My PR currently adds a disableLazyShapeComputation option which is /*@internal*/. Won't that be enough, or should we get rid of that too? In that case we might need to change some test cases a little bit.

@sandersn sandersn added this to Not started in PR Backlog Mar 4, 2021
@sheetalkamat
Member

> I guess I can integrate that into the PR.

Please do

> disableLazyShapeComputation

Please remove the option and fix the test cases. You may want to add more scenarios: if they were testing an initial local change, add another local change as a next step to ensure that still works correctly from then on.

Also, can you please pull out the deduplication part into a separate PR and add a test that shows it, for easier review and maintenance.

Thank you for the great idea and work.

@sandersn sandersn assigned sheetalkamat and unassigned weswigham Mar 4, 2021
@sandersn sandersn moved this from Not started to Needs review in PR Backlog Mar 4, 2021
@sokra
Contributor Author

sokra commented Mar 4, 2021

Should I also move the fix "CommonJS modules no longer affect the global scope" (b7c2161, #42960) into a separate PR?

@sheetalkamat
Member

That would be great. Thanks.

@cnshenj

cnshenj commented Mar 5, 2021

There is one case not fully covered by the benchmark. In our project we can easily repro the slow initial pass of watch/incremental. We also have another case that is very slow; it may not be as slow as the initial pass, but it is slower than those benchmark test cases (the ones that take 8-9 seconds with or without the fix). The case is:
in VS Code, modify several files (I guess multiple files have shape-affecting changes), then save them all with Ctrl+K followed by S.

@sokra
Contributor Author

sokra commented Mar 10, 2021

@sheetalkamat rebased

@sheetalkamat
Member

@sokra thanks. Will be reviewing this sometime later today, after I manage the DT queue and some other PR reviews.

@sokra
Contributor Author

sokra commented Mar 11, 2021

> Will be reviewing this today sometime later after I manage DT queue and some other PR reviews.

Awesome. Just tell me if there is anything I can help you with.

@sheetalkamat
Member

@sokra please don't bother updating the PR with conflicts; I am looking into simplifying this change and will update you once I have some results. Meanwhile I am using this branch for the tests you have already taken the effort to update.

@sokra
Contributor Author

sokra commented Mar 12, 2021

> please dont bother updating the PR with conflicts

OK. It's a little bit hard to look at these conflicts and not want to fix them...

Maybe you could look at the baselines for "when global file is added, the signatures are updated"? I think that test case may no longer test that signatures are updated, since they aren't even computed with this change.

@sokra
Contributor Author

sokra commented Mar 24, 2021

The major change of this PR has been merged as #43314

@piotr-oles

Is it released?

@sokra
Contributor Author

sokra commented Apr 21, 2021

The latest beta version includes the fix.

@ndr47

ndr47 commented Jun 12, 2021

I vote for making this feature optional.

I am encountering some errors after this release.
The errors appear only with --watch and not when I build the project.

I have a very complex typing system and I don't think it would be possible to reproduce the error in a simpler way.
Basically, a very complex type is no longer recognized by the system. This may be because the type uses different files to define itself.

It would be nice to have a flag, also in order to test whether this is the reason.

@amcasey
Member

amcasey commented Jun 14, 2021

@ndr47 Can you provide more details? What functionality would you like to be optional? This PR has evolved quite a bit since it was created, but I believe the intention was to make some watch-mode functionality lazier, which I wouldn't expect to affect non-watch builds at all (or to affect the errors produced by watch builds, just the speed at which they are produced).

I believe the most common reason for seeing different errors in different build modes is that compilation depends on input file ordering (e.g. because the same type is provided by two different files, possibly different versions of the same library).

@sokra
Contributor Author

sokra commented Jun 23, 2021

Another part merged here: #44090

@sandersn
Member

sandersn commented Aug 1, 2022

Coming back to this after a long time -- are there any parts left over that haven't been merged as other PRs?

@stwlam

stwlam commented Aug 11, 2023

@sandersn I wonder: is your question answerable at this point?

@sandersn
Member

I had forgotten about this PR. Regardless of whether its features have been included piecemeal over the past couple of years, it's so old that it would need to be restarted anyway. I'm going to close it.

@sandersn sandersn closed this Aug 11, 2023
PR Backlog automation moved this from Waiting on reviewers to Done Aug 11, 2023