icetbr/comparing-testing-libraries

Changelog highlights: updated 2023-07-15, node 20.3.1

  • all tests now using ESM
  • some tests got faster (mocha, jest, ava)
  • added results for vitest and native (node's built-in test runner)
  • ava's watch got way better, jest remained the same, vitest is amazing

Cold start times (in seconds) and a watch mode grade (0 - 10)

| 10 restarts | time (s) | 10 restarts (cont.) | time (s) | 100 restarts | time (s) | watch | grade (0-10) |
|---|---|---|---|---|---|---|---|
| notest | 0.27 | tape | 0.90 | notest | 2.57 | mocha* | 10 |
| best | 0.30 | tapeReport | 0.97 | best | 2.71 | native | 9.5 |
| baretest | 0.33 | pta | 1.06 | xv | 2.94 | vitest | 9 |
| tehanu | 0.34 | mocha | 1.68 | tehanu | 2.98 | zora | 9 |
| xv | 0.34 | lab | 1.98 | baretest | 3.37 | tape | 9 |
| uvu | 0.38 | tap | 2.90 | zora | 3.62 | lab | 8 |
| native | 0.40 | ava | 4.27 | uvu | 3.66 | ava | 8 |
| zora | 0.40 | jest | 4.96 | native | 3.98 | jest | 7 |
| zoraReport | 0.71 | vitest | 8.10 | zoraReport | 8.19 | | |

This table shows the results of running time node test/myTest.js 10/100 times (see perf.sh). The watch column grades how fast a rerun gets with nodemon or the --watch flag. It was measured by eye, 10 meaning flicker-free, instant feedback.

Some runners have no output, which makes them faster. Also, ESM modules load a little slower (xv is ESM only).

Best and Notest are the fastest possible implementations; they are not actual libs.
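
To give an idea, a no-runner baseline is just a plain script using node's built-in assertions. A minimal sketch of the idea (not the repo's actual code; the test body is illustrative):

```js
// no framework: plain assertions, run with `node test/myTest.js`
import assert from 'node:assert/strict';

const add = (a, b) => a + b;

assert.equal(add(1, 2), 3);
assert.deepEqual({ user: 'jo' }, { user: 'jo' });
console.log('ok');
```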

Jest and Ava score poorly here because they rely on hot reloading (HMR). The first load takes a while, but subsequent runs are comparable to the fastest libs. Some libs' native watch modes, like Mocha's, make subsequent runs faster as well.

Bear in mind that these are the times for 10/100 runs, so a single Baretest run might take 33ms and a Tape run 90ms. Ask yourself if this will make a difference; these are very small numbers.

Mocha's watch mode with require (CJS) is a perfect 10, hence the asterisk in the table.

Choosing a test runner

There are 3 deal-breaker features every test runner must have:

  • fast rerun: write, save, see; the most essential feedback loop
  • familiar syntax: jest/mocha compatible, easy to switch between runners
  • ESM support: it's the future of JS

Nearly all runners fail the familiarity test. Check out some popularity stats like number of stars, monthly downloads, commit activity and others (1, 2).
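
For reference, "familiar syntax" means the describe/it shape that Jest popularized and Mocha shares (expect is Jest-style; Mocha pairs describe/it with an assertion lib like chai). A sketch, with greet defined inline for illustration:

```js
// the jest/mocha-compatible shape: describe/it blocks plus an assertion
// (jest injects describe/it/expect as globals)
const greet = name => `hello ${name}`;

describe('greet', () => {
  it('greets by name', () => {
    expect(greet('jo')).toEqual('hello jo');
  });
});
```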

Notable mentions

Minimalist and somewhat interesting new test runners

  • g-test-runner: zero dependencies, many features, like "rerun only failed tests"
  • natr: riteway inspired
  • oletus: zero configuration/dependencies, multi-threaded!
  • beartest: jest syntax, fewer features, faster

Additional features

  • easy toggle between serial and parallel tests (see the Ava sketch after this list)
    • unit tests run in parallel, integration tests in serial
    • "parallel" !== multi-threaded: some runners only interleave async tests in a single thread
  • pretty print string comparison diff
  • clean stack traces
    • one line of stack trace is enough to find my error; I don't want it to be the 5th of 10 lines
  • clear terminal before run
  • minimalist output
  • bail on error
    • if the change I made broke hundreds of tests, I don't need to see all of them
  • mocking, coverage, snapshots
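
As an example of an easy toggle, Ava handles this well: tests run concurrently by default and test.serial opts a single test out. A sketch (the in-memory db stands in for a real shared resource):

```js
import test from 'ava';

// an in-memory stand-in for a shared resource (illustrative)
const db = new Map();

// unit test: runs concurrently with the others by default
test('uppercases a name', t => {
  t.is('jo'.toUpperCase(), 'JO');
});

// integration test: test.serial opts it out of concurrency,
// so it won't race other tests over the shared db
test.serial('writes then reads the same row', async t => {
  db.set(1, { id: 1 });
  t.truthy(db.get(1));
});
```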

My impressions

These are mostly nitpicks based on first impressions; they are all great libraries.

Jest

  • initial configuration: hard if not using defaults (see the config sketch after this list)
    • needed to include testEnvironment
      • huge performance cost otherwise (~80% on cold start tests)
    • needed to include testRegex
      • didn't recognize my command line pattern
  • very active development
  • too many lines of useless output in watch mode
  • very user focused, readability in mind (ex: many useful assertions)
  • bail doesn't work for tests in the same file (bug)
  • problems identifying test files (ex: camel case userTest.js vs user.test.js)
  • polluted diff output; an inline diff is impossible
  • ridiculously slow cold start
  • Jest doesn't always try to run test suites in parallel
    • weird errors when mocking improperly
  • expect doesn't accept custom error messages
  • asymmetric matchers produce an output structure different from equals
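
For the two config items above, this is the kind of jest.config.js I mean. A sketch; the regex is an example, adjust it to your naming scheme:

```js
// jest.config.js
module.exports = {
  // keep jest on the node environment; without it my cold start
  // tests took ~80% longer
  testEnvironment: 'node',
  // match both user.test.js and camel case files like userTest.js
  testRegex: '(\\.test|Test)\\.js$',
};
```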

Mocha

  • very active development
  • the best flicker free experience
  • questionable choice: tests don't force exit when done
  • stack trace clean level: 1 (minor details)

Ava

  • very active development
  • no support for nested tests
  • parallel by default, but a --serial CLI flag is available
  • annoying messages in watch mode
  • the slowest of all watchers

Lab

  • somewhat active development
  • best (though not by much) equality diff
  • makes sense to use if you're using hapi (I am)
  • stack trace clean level: 2 (some internal calls)
  • flicker speed: has some delay

Tape

  • no support for async (use tape-promise or similar)
  • needs a tap reporter
  • special syntax (t.test)
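
A sketch of tape's shape (greet is illustrative):

```js
import test from 'tape';

const greet = name => `hello ${name}`;

// tape's special shape: subtests hang off t.test, and every test
// must call t.end() (or declare t.plan); forgetting it hangs the run
test('greet', t => {
  t.test('greets by name', st => {
    st.equal(greet('jo'), 'hello jo');
    st.end();
  });
  t.end();
});
```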

Zora

  • interesting idea, it is "just" javascript
  • fast no matter how you run it
  • parallel tests by default; it takes extra work to make them sequential, which is bad for integration tests
  • a weird interaction with nodemon sometimes makes it hang
  • special syntax (t.test, await test, and others)
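
A sketch of zora's "just javascript" shape (greet and the lookup are illustrative):

```js
import { test } from 'zora';

const greet = name => `hello ${name}`;

// running this file with plain node executes the tests
test('greet', t => {
  t.equal(greet('jo'), 'hello jo', 'greets by name');

  // nested tests use t.test; async tests are regular awaited functions
  t.test('async lookup', async st => {
    const user = await Promise.resolve({ name: 'jo' });
    st.ok(user, 'found the user');
  });
});
```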

uvu

Notes

  • other benchmarks
  • Ava and Jest have an additional large start cost on first run
  • to test this yourself, run ./perf.sh
  • bash scripts over npm scripts because they're faster and more flexible
  • the previous version of this README was posted at dev.to: DX comparison of javascript testing libraries

Usage

  1. clone
  2. npm install
  3. pick your target

Look inside run.sh and scripts for targets.

Formats

  • equalError: forces an assertion error
  • nativeWatcherName will use the lib's built-in watch mechanism
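
To make "forces an assertion error" concrete: presumably the mode boils down to running a deliberately failing deep-equal so every lib's diff rendering can be compared. A sketch with illustrative objects:

```js
// a deliberately failing assertion; each runner renders this diff
// differently, which is what the .ansi files capture
import assert from 'node:assert/strict';

assert.deepEqual(
  { user: 'jo', age: 30 },
  { user: 'jo', age: 31 },
);
```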

Simple targets

```
[mode=equalError] ./run.sh [10Times | 100Times] libName
[mode=equalError] ./run.sh nativeWatcherName
[mode=equalError] ./run.sh watcherName libName
```

Special targets

```
mode=(assert|chai|should|jest|lab|unexpect) ./run.sh mochaAssert

./run.sh genBaseTests
./run.sh genMediumTests
./run.sh genLargeTests

./run.sh perfReport
./run.sh diffErrorsReport
```

Examples

```
./run.sh mocha
mode=equalError ./run.sh mocha
mode=jest ./run.sh mochaAssert
./run.sh mochaWatch
./run.sh 10Times mocha
./run.sh nodemon lab
./run.sh chokidar lab
./run.sh onchange zora
```

How I use this

  • ./run.sh perfReport generates a txt from which I create the performance table
  • ./run.sh diffErrorsReport generates .ansi files so I can analyze the results

Use vscode-ansi to see the .ansi files in preview mode.

Contributing
License (MIT)
