
We should change yarn test so that it doesn't run lots of different things in parallel. #932

Closed
Brondahl opened this issue Oct 15, 2018 · 7 comments


@Brondahl

As discussed in other threads (e.g. #881), running yarn test does a bunch of things, all in parallel.

It does (at least) the following steps, each of which I've run on its own, to benchmark them.

  • npm run flow. Something about setting up a local server for type validation. Takes about 1.5 minutes on the first run. Minimal output if run on its own; lots of output if run in parallel with other stuff.
  • npm run test:prod. Actually runs the unit tests. Takes about 30 seconds? Large output.
  • npm run coverage. Actually runs the tests (again!?) and then reports coverage. Takes about 30 seconds. Large output.
  • npm run test:typescript. Takes about 20 seconds? Moderate output.
  • npm run lint:check. Linting. Takes ~10 seconds. Minimal output.
  • npm run test:size. Package size validation. Takes ~10 seconds. Moderate output.

Total time: 3:10 to actually run all those things in sequence (including entering the commands).

Running them all in parallel takes a total of 2:53, so we're not actually gaining very much. (It looks like some of the steps are waiting for the flow server to be set up, but that setup runs slower because of the other test processes running.) I suspect that if I scripted the runs to happen in sequence, we'd find that typing the commands accounted for ALL of the difference.
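For illustration, a sequential version of the steps listed above could be wired up in package.json roughly like this. (The script names are taken from the list above; the && chaining is a sketch of one possible approach, not the project's actual configuration.)

```json
{
  "scripts": {
    "test": "npm run lint:check && npm run test:size && npm run test:typescript && npm run test:prod && npm run coverage && npm run flow"
  }
}
```

With && chaining, each step only runs if the previous one succeeded, so the run stops at the first failure.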

Parallel execution also generates way, WAY more output. A bunch of those tools write their logging assuming they're the only thing running, so they can just update the existing line. Because that's not what's happening, we get lots and lots of extra lines printed, and the various outputs are all smashed together and completely unintelligible. You can't tell which lines are related and which aren't.

I propose that we make yarn test run its various steps in sequence, with clear indications between steps.
It will also mean that we can e.g. quickly check the linting before doing the expensive operations, so that trivial 'I forgot a comma' fixes can be done without waiting for 90 seconds of irrelevant tests.
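As a sketch of what that sequential runner could look like (a hypothetical script, not anything in the repo; the real npm commands are stubbed out with echo so the example is self-contained), with the cheap steps first so a lint failure aborts before the expensive ones:

```shell
#!/bin/sh
# Hypothetical sequential runner: clear banner between steps, stop on first failure.
set -e  # abort the whole run as soon as any step exits non-zero

run_step() {
  name="$1"; shift
  printf '\n=== %s ===\n' "$name"
  "$@"    # the step's command; a non-zero exit aborts the run (because of set -e)
}

# Cheapest checks first, so an "I forgot a comma" failure surfaces in seconds.
# A real run would use e.g.:  run_step "lint:check" npm run lint:check
run_step "lint:check"       echo "(stub) npm run lint:check"
run_step "test:size"        echo "(stub) npm run test:size"
run_step "test:typescript"  echo "(stub) npm run test:typescript"
run_step "test:prod"        echo "(stub) npm run test:prod"
run_step "coverage"         echo "(stub) npm run coverage"
run_step "flow"             echo "(stub) npm run flow"
```

The banner lines make it obvious which tool produced which output, which is exactly what the interleaved parallel logs lose.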

This would also make it really easy to show contributors what the individual test segments are, and thus allow them to run just the segment they care about!

@Brondahl
Author

I wanted to get the discussion of the concept out of the way, before I actually start working on a PR.
Let me know your thoughts, please!

@Brondahl Brondahl changed the title Should yarn test run lots of different things in parallel? We should change yarn test so that it doesn't run lots of different things in parallel. Oct 15, 2018
@Brondahl
Author

Sidenote: I assume that test:prod and coverage are actually doing subtly different things, and thus aren't literally duplicating the work of running the tests?

@Brondahl
Author

Brondahl commented Oct 19, 2018

@Ailrun @Andarist Would you be happy with this change? If so I'll get started on a PR.

@Andarist
Member

I have no strong opinion about this; I would wait for @mitchellhamilton's comment before starting to work on it.

@emmatown
Member

I don't really see why this matters. There's no real point in running the test script, since eslint and flow should be run in editors, tests should be run with the test:watch script, and the bundle size stuff can just happen on CI.

@Brondahl
Author

So:
A) If you don't think yarn test matters, then are you happy for me to make this change? If no one's running it, then we're not affecting anyone by changing it. :D


Hopefully that's enough to get this change signed off, but in case not ...

B) If you don't think anyone should be running it, then you probably want to edit the CONTRIBUTING docs so they don't tell people to use it as the first thing they see ;)

C) It seems weird to actively choose not to have a single "run all of the tests and make sure it's ready to be merged" command, and especially to write it off as "just raise a Pull Request and rely on the CI server to see if everything works". I imagine it would be pretty damn annoying for you if 95% of PRs raised had trivial CI errors because no one bothered to run any of the tests locally?

D) If you want eslint and flow to be run in editors then you need to add those to the setup steps in CONTRIBUTING, including full documentation of how to get them running. But I'd recommend that you don't: I suspect you'll reduce the number of contributors you get, if you do.

"Please install these extensions and configure your IDE like this" is a far more invasive thing to ask contributors to do than "please run this command". I certainly would have no interest in installing and reconfiguring my IDE tools just for one OS project, when they could be run on the command line. Furthermore, what about all the other IDE tools or eslint options that people have? Are you expecting them to disable those tools? Or just try to work out which tool each warning comes from and selectively act on them?

Similarly, not everyone likes watch mode for testing. I hate it, for a variety of reasons. Obviously I'm in favour of having it available so that people can pick-and-choose but it shouldn't (IMO) be the only way that tests can be run.

@emmatown
Member

I've changed the test script to just run jest, since that's what the script should really be doing.

I imagine it would be pretty damn annoying for you if 95% of PRs raised had trivial CI errors because no-one bothered to run any of the tests locally?

Not really; people can see that CI fails and fix it. There are far more important things to consider than CI failures, like how a change will affect the API, block future enhancements, performance, etc.
