expect.hasAssertions() fails to mark the test as failed when using async function and callback #4750

Open · JesusTheHun opened this issue Dec 14, 2023 · 14 comments

Labels: enhancement (New feature or request), p2-nice-to-have (Not breaking anything but nice to have)

@JesusTheHun

Describe the bug

You are using expect.hasAssertions() together with an async function that accepts a callback, and you make assertions inside that callback. If one of those assertions fails, the test is still marked as successful and the failure is only reported as an unhandled error.

I see two possibilities:

  1. The assertions have run, and the test should fail because one of them failed.
  2. The assertions have not run yet, and the test should fail because of expect.hasAssertions().

Reproduction

https://stackblitz.com/edit/vitest-dev-vitest-a4gsaq?file=test%2Fexpect.hasAssertions-async.test.ts
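
For readers without StackBlitz access, here is a minimal sketch of the pattern, reconstructed from the discussion below (fakeIO and the exact callback shape are assumptions, not the actual repro):

import { test, expect } from 'vitest';

// Hypothetical reconstruction: an async API that invokes a callback from a
// .then() handler, i.e. in a microtask behind the test's own promise chain.
function fakeIO(callback: (err: Error | null) => void): Promise<void> {
  const io = Promise.resolve();
  // The returned promise is intentionally NOT chained on the callback,
  // mirroring `return fakeIO` instead of `return fakeIO.then(() => ...)`.
  io.then(() => callback(null));
  return io;
}

test('assertions in a late callback', () => {
  expect.hasAssertions();
  return fakeIO((err) => {
    expect(err).toBeNull();
    expect(0).toBe(1); // fails after the test settles: passed + unhandled error
  });
});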

System Info

System:
    OS: macOS 14.2
    CPU: (12) arm64 Apple M2 Max
    Memory: 19.44 GB / 64.00 GB
    Shell: 5.9 - /bin/zsh
  Binaries:
    Node: 18.16.1 - ~/.nvm/versions/node/v18.16.1/bin/node
    Yarn: 1.22.19 - ~/.nvm/versions/node/v18.16.1/bin/yarn
    npm: 9.5.1 - ~/.nvm/versions/node/v18.16.1/bin/npm
    pnpm: 7.29.1 - ~/.nvm/versions/node/v18.16.1/bin/pnpm
    bun: 1.0.14 - ~/.bun/bin/bun
  Browsers:
    Chrome: 120.0.6099.109
    Edge: 120.0.2210.61
    Safari: 17.2
  npmPackages:
    vitest: ^1.0.4 => 1.0.4

Used Package Manager

npm


@hi-ogawa (Contributor) commented Dec 14, 2023

I'm not sure if I'm reading your repro correctly, but I suppose you're intentionally writing return fakeIO instead of return fakeIO.then(() => ...) for the sake of reproduction. The effect is that the then(...) callback (for example, expect(err).toBeNull()) ends up behind the promise/microtask queue of your actual test function.

Attempting to make a simpler repro, using queueMicrotask seems to reproduce a similar behavior:

https://stackblitz.com/edit/vitest-dev-vitest-clv1dj?file=test%2Fexpect.hasAssertions-async.test.ts

import { test, expect } from 'vitest';

test('simpler repro?', () => {
  expect.hasAssertions();

  // hasAssertions success + unhandled error
  queueMicrotask(() => expect(0).toBe(1));
});
> vitest --ui


 DEV  v1.0.4 /home/projects/vitest-dev-vitest-clv1dj
      UI started at http://localhost:51204/__vitest__/

 ✓ test/expect.hasAssertions-async.test.ts (1)
   ✓ simpler repro?

⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯ Unhandled Errors ⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯

Vitest caught 1 unhandled error during the test run.
This might cause false positive tests. Resolve unhandled errors to make sure your tests are not affected.

⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯ Uncaught Exception ⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯
AssertionError: expected +0 to be 1 // Object.is equality

- Expected
+ Received

- 1
+ 0

 ❯ eval test/expect.hasAssertions-async.test.ts:7:34
      5| 
      6|   // hasAssertions success + unhandled error
      7|   queueMicrotask(() => expect(0).toBe(1));
       |                                  ^
      8| });
      9| 
 ❯ node:internal/process/task_queues:107:1649
 ❯ AsyncResource.runInAsyncScope node:async_hooks:84:2421
 ❯ AsyncResource.runMicrotask node:internal/process/task_queues:107:1601
 ❯ ../../../blitz.d3416dfd.js:352:59333

This error originated in "test/expect.hasAssertions-async.test.ts" test file. It doesn't mean the error was thrown inside the file itself, but while it was running.
The latest test that might've caused the error is "simpler repro?". It might mean one of the following:
- The error was thrown, while Vitest was running this test.
- This was the last recorded test before the error was thrown, if error originated after test finished its execution.
⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯

 Test Files  1 passed (1)
      Tests  1 passed (1)
     Errors  1 error
   Start at  08:12:32
   Duration  1.93s (transform 54ms, setup 0ms, collect 35ms, tests 10ms, environment 0ms, prepare 219ms)


 FAIL  Tests failed. Watching for file changes...
       press h to show help, press q to quit

@sheremet-va (Member)
I think this is related to #3119

@JesusTheHun (Author) commented Dec 15, 2023

> I think this is related to #3119

It seems that #3119 is about the event loop, not the microtask queue. In our issue, if you make the callback async, there is no error, because the callback then becomes the first entry in the promise's microtask queue and runs before the rest of the test function.

> I'm not sure if I'm reading your repro correctly, but I suppose you're intentionally writing return fakeIO instead of return fakeIO.then(() => ...) for the sake of reproduction. The effect is that the then(...) callback (for example, expect(err).toBeNull()) ends up behind the promise/microtask queue of your actual test function.

@hi-ogawa you are correct; I trimmed down a real case for the repro.
To solve this the easy way, you could just await Promise.resolve() after executing the test function body. That flushes the microtask queue without yielding to the event loop. I'm not 100% convinced it's "clean" philosophically, but I also cannot see why it would be "poor" code. Any other solution I can think of would put the burden onto the end user.
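
A sketch of that proposal as hypothetical runner pseudocode (runTest and checkAssertionState are stand-ins, not Vitest internals):

// Hypothetical runner pseudocode, not actual Vitest internals.
declare function checkAssertionState(): void; // stand-in for hasAssertions & co.

async function runTest(fn: () => unknown): Promise<void> {
  await fn();
  // Proposed: yield one tick so microtasks queued by the test body
  // (queueMicrotask, un-chained .then handlers) run before the checks.
  await Promise.resolve();
  checkAssertionState();
}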

@sheremet-va (Member)

> It seems that #3119 is about the event loop, not the microtask queue. In our issue, if you make the callback async, there is no error, because the callback then becomes the first entry in the promise's microtask queue and runs before the rest of the test function.

The reason is the same, though: the .then in your examples is executed after the test has finished. The solution is also the same: fail a test that executes code after the test has finished.

@sheremet-va (Member) commented Dec 15, 2023

> The reason is the same, though: the .then in your examples is executed after the test has finished. The solution is also the same: fail a test that executes code after the test has finished.

Basically, if your test body cannot track its code execution, it is not the fault of the test runner.

Just adding Promise.resolve will fix this particular issue but will still break for code executed in the next microtask (then the next person will come and open an issue: "test doesn't fail when the error is thrown in the third microtask!").
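
To illustrate the objection: nesting queueMicrotask pushes the failing assertion one queue position later per level, so a deep enough chain outruns any fixed number of extra ticks the runner could await. A minimal sketch:

import { test, expect } from 'vitest';

test('outruns a fixed number of extra ticks', () => {
  expect.hasAssertions();
  // Each nesting level enqueues one position later in the microtask
  // queue, so the failing assertion can always land after the runner's
  // final check, and the error stays unhandled.
  queueMicrotask(() =>
    queueMicrotask(() =>
      queueMicrotask(() => expect(0).toBe(1))));
});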

@JesusTheHun (Author) commented Dec 15, 2023

> The reason is the same, though: the .then in your examples is executed after the test has finished. The solution is also the same: fail a test that executes code after the test has finished.

I'm not convinced that the test must fail. We could very well flush the queue and only then check the results of the assertions.

> Basically, if your test body cannot track its code execution, it is not the fault of the test runner.

Yes, but that doesn't mean we can't help the user by alleviating some of the pain.

> Just adding Promise.resolve will fix this particular issue but will still break for code executed in the next microtask (then the next person will come and open an issue: "test doesn't fail when the error is thrown in the third microtask!").

If you add await Promise.resolve(), the function resumes immediately after the microtask queue has been flushed, no matter how many microtasks are queued, so it would permanently fix this issue.
However, if the test body doesn't await all of its Promises and those queue further microtasks, then yes, the test will error.

One thing to keep in mind, though, is that very few developers know or understand the microtask queue, whereas most are familiar with async management. It is therefore less of a burden to ask them to manage their async calls than their microtask queue, which might very well be filled by a third-party library.

@sheremet-va (Member)

> I'm not convinced that the test must fail. We could very well flush the queue and only then check the results of the assertions.

This would mean that we basically change the developer's test function, which I am totally against. The test function was provided to the test runner and we cannot tamper with it in any way: if the developer didn't wait for the microtask queue to finish, it is intentional from the test runner's point of view.

> Yes, but that doesn't mean we can't help the user by alleviating some of the pain.

Yes, by failing the test as described in #3119.

@JesusTheHun (Author) commented Dec 15, 2023

> if the developer didn't wait for the microtask queue to finish, it is intentional from the test runner's point of view.

Fair point.

So what is left is to handle assertions properly when they are made from the microtask queue.
As I mentioned in my opening post, either the microtask has not run yet and the test should fail because of expect.hasAssertions(), or it has run and the test should fail because of expect(0).toBe(1).
So I guess the microtask queue should be flushed between the end of the test function and the assertion checks.

I also noticed that flushing the microtask queue inside the test body doesn't fix the issue:

// The test passes when it should fail
// hasAssertions success + unhandled error
test('flushing microtasks queue', async () => {
  expect.hasAssertions();
  queueMicrotask(() => {
    expect(0).toBe(1);
  });
  await Promise.resolve();
});

@sheremet-va (Member)

> I also noticed that flushing the microtask queue inside the test body doesn't fix the issue

It doesn't work because the default catch block cannot handle such cases. You can see it in this simple example (this is how it's processed internally):

try {
  queueMicrotask(() => {
    throw new Error('error inside a microtask!')
  });
  await Promise.resolve();
} catch(err) {
  // not called!
}

@JesusTheHun (Author)

Right. Is the solution then to listen to uncaughtException and mark the test as failed?

That would require the error to carry the task id, so it would only work for assertion errors, which is fair.
Ideally it would also carry the assertion result, so it could be pushed into task.result for seamless integration.
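
A rough sketch of that idea (markFailed, the taskId property, and the wiring are all hypothetical, not Vitest APIs):

// Hypothetical sketch: none of these names are Vitest APIs.
declare function markFailed(taskId: string, err: Error): void;

process.on('uncaughtException', (err) => {
  // This would require expect() to stamp the owning task id onto the
  // AssertionError it creates; arbitrary errors would stay unattributed.
  const taskId = (err as Error & { taskId?: string }).taskId;
  if (taskId !== undefined) markFailed(taskId, err);
});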

@sheremet-va (Member)

> Right. Is the solution then to listen to uncaughtException and mark the test as failed?

We already do that; that's why you see the error in the first place and it isn't lost. The problem is that it's not reliable to assign an error to a test when a global expect fails, because it isn't bound to a test. Only a local expect (test('', { expect })) can do that.
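
For reference, the local expect mentioned above comes from the test context and is bound to that specific test (a minimal example; see the Vitest docs on test context for details):

import { test } from 'vitest';

// `expect` destructured from the test context is tied to this test,
// so its failures can be attributed to the test reliably.
test('uses a local expect', ({ expect }) => {
  expect(1).toBe(1);
});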

@JesusTheHun (Author) commented Dec 18, 2023

> We already do that; that's why you see the error in the first place and it isn't lost. The problem is that it's not reliable to assign an error to a test when a global expect fails, because it isn't bound to a test. Only a local expect (test('', { expect })) can do that.

That's a start!

I'm wondering why we cannot attach a test, though.
We have the list of all the tests, and there is a single test running per context. I can tell we know which one is running, because the reporter knows it, so what's the catch? There is the case of concurrent tests, of course, but that already requires the use of a local expect, right?

EDIT: It just hit me. When the microtask function throws, we may be running a different test already.

@sheremet-va (Member)

> There is the case of concurrent tests, of course, but that already requires the use of a local expect, right?

No, it doesn't require anything. Only snapshots need that.

> EDIT: It just hit me. When the microtask function throws, we may be running a different test already.

Yes, that's exactly my concern.

@JesusTheHun (Author)

> Yes, that's exactly my concern.

While it's not optimal to reserve that feature for the local expect, it seems to me that it is the cheaper option (easier to code and maintain) and still brings a genuine benefit.

Another option I see is to wait for the event loop and microtask queue to be flushed before moving on to the next test, so we can mark the latest test as failed (😉) if an error is thrown in the meantime. But that's incompatible with concurrent tests, since they don't require a local expect.

This inconsistency makes me lean towards the first, simpler option.
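
A sketch of what that second option might look like as a hypothetical runner step (not Vitest internals; on Node, setImmediate fires only after the current microtask queue has fully drained):

// Hypothetical: after a test settles, yield a full macrotask turn so
// stray microtasks (and the errors they throw) surface while the
// finished test is still the "latest" one for attribution.
async function drainBeforeNextTest(): Promise<void> {
  await new Promise<void>((resolve) => setImmediate(resolve));
}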

@sheremet-va added the enhancement (New feature or request) and p2-nice-to-have (Not breaking anything but nice to have) labels and removed the pending triage label on Dec 28, 2023