
Recover vows tests, drop proof tests #59

Merged
merged 13 commits into xmldom:master from 35/recover-vows-tests-CRLF on Jun 28, 2020

Conversation


@karfau karfau commented Jun 26, 2020

fixes #35

I found that some parts of the test folder are using vows for testing.

  • The main changes in the test files are fixing imports and removing the "traversal code" that vows handles itself when invoked from the CLI. (`test/**/*.js` files that don't have the `.vows.js` extension might still contain assertions. We need to digest them and decide whether and how to restore them.)
  • To ease filtering, I renamed those files in an extra commit.
  • As discussed in Restore vows test "suite", drop useless proof tests #51, some files still contain CRLF line separators

All the vows tests can now be executed via the usual npm test.
Tons of test cases were only calls to `console.assert` that logged an "Assertion failed: ..."; I converted them into equivalent assertions that fail the test.
Due to the discussion in the first PR, I introduced a helper module `test/assert.js` that is heavily covered by tests and

  • allowed me to more easily restore existing assertions
  • allows us to switch between loose `==` and strict `===` checking via `XMLDOM_ASSERT_STRICT=1 npm run test`

When these failed and it made sense, I changed the expectation to the current value.
In case we need to skip a test, we can use `assert.skip` (which logs the failed assertion, and fails once the assertion no longer fails).

I also drastically reduced the amount of `console.log` in tests; now the only thing printed in addition to the vows output is the internal logging of xmldom.

For reviewing, it's much easier to go commit by commit! (Viewing all changes in one go makes it look like the history of the test files gets lost, but this is not true as long as the commits are preserved when the PR lands.)

karfau added 12 commits June 20, 2020 13:14
- Strict checks throw when `process.env.XMLDOM_ASSERT_STRICT` is truthy,
  otherwise they are only logged when they fail
- `assert.skip(...)` can be used for failing assertions (and need to be removed when they no longer fail)
- tests for `assert` and `assert.skip` are part of `npm run test`
Renamed the qualifying `.js` files to `.vows.js` to simplify filtering.
The `assert.throw` fails since the code no longer throws exceptions when parsing fails.
@brodybits

I may need a few days to review. I have some quick questions:

  • Is the code in test/assert*.js adapted from somewhere?
  • I wonder if any of the code in test/assert*.js could be moved into a new package that could help others?
  • 37d5fb9 - "Assert the behavior since 0.1.27" - looks like a possible regression that happened in a patch release. Should we raise this as an issue?


karfau commented Jun 26, 2020

  • Is the code in test/assert*.js adapted from somewhere?

No. The question of whether moving a test suite from loose asserts to strict asserts could enable future breaking changes got me thinking, and it made me write this, so that we can observe and address this question in a central place.

  • I wonder if any of the code in test/assert*.js could be moved into a new package that could help others?

I have written some assertion helpers before, and this might be the most complex one so far.
Most of the time they have one thing in common: project-specific assumptions.
This one has them too, and that allows for a great deal of reduced complexity, e.g.:

  • using vows to write tests
  • using node's assert to write tests and implement it
  • using project specific folder structure to locate the actual assertion in test code
  • the name of the env var to set
  • the assumption or rather decision that a skipped assertion that no longer fails should fail the test
  • throwing when any caught Error is not an /^AssertionError/
  • which "native" node assertions to provide on it
  • ...

So for now I won't invest time into that, but I'm planning to write an article about this question on dev.to, and we'll see how it evolves from there.

  • 37d5fb9 - "Assert the behavior since 0.1.27" - looks like a possible regression that happened in a patch release. Should we raise this as an issue?

I went "back in time" as described here again (with the tests from this branch) and discovered:

      ✗ unclosedcomment 
      TypeError: Cannot read property 'prefix' of null 
      at Document.nodeSerializeToString [as toString] (xmldom/dom.js:920:23) 
      at Object.unclosedcomment (xmldom/test/parse/unclosedcomment.vows.js:10:10) 
      at runTest (xmldom/node_modules/vows/lib/vows.js:136:26) 
      at EventEmitter.<anonymous> (xmldom/node_modules/vows/lib/vows.js:81:9) 
      at EventEmitter.emit (events.js:203:15) 
      at EventEmitter.options.Emitter.emit (xmldom/node_modules/vows/lib/vows.js:241:24) 
      at xmldom/node_modules/vows/lib/vows/suite.js:170:45 
      at process._tickCallback (internal/process/next_tick.js:61:11) 
      at Function.Module.runMain (internal/modules/cjs/loader.js:834:11) 
      at startup (internal/bootstrap/node.js:283:19) 

which was fixed here, which I would consider a bugfix (released in 0.2.0), not a breaking change.

Just to be sure we are on the same page, here is my basic assumption:

Since v0.1.27 was published 3.5 years (minus 2 days) ago by the original maintainer, I would generally consider this the "version people rely on by now".

So fixing things that could be assumed before that would "only" help people who stuck to an even earlier version. I'm not sure we want to discuss those problems already, but we can easily run these tests against earlier versions to detect them and discuss which behavior makes more sense.

I would recommend doing that only after we have a more robust/complete test suite.

Please let me know in case you disagree.

@brodybits

Thanks. Your disposition concerning the asserts makes more sense to me now. I will try to take a closer look this weekend; I don't expect to find any more blockers at this point. But I think it would be great if we could raise issues for the following:


brodybits commented Jun 26, 2020

I did also try running Stryker on test/assert.js. It did encounter some timeouts but did not find any surviving mutants, which gives me some extra confidence. I would like to add Stryker after merging this one, see #60.

edit: I would also be fine to include Stryker in this PR, otherwise I can just take your configuration from #51 (and give you the primary author credit). Part of #60 will be to get this on the Stryker dashboard. I am going off email pretty soon, will come back later this weekend.


karfau commented Jun 26, 2020

  • improving the asserts

If you are referring to using strict asserts everywhere, I can do it; if you have more in mind, maybe you should create that issue.

  • the changed behavior from 0.1.27

You mean creating issues just to close them? Or one to list all of them? I'm not sure what to create an issue for. So maybe it's easier for you to do it?

Are you referring to this checklist?
I would prefer to maintain it there; as far as I can see, all items that are relevant to this repo are already either issues or PRs. So maybe I misunderstood you again...

Thanks for running Stryker against `test/assert.js`, nice idea (I thought about it, but didn't try).
Regarding adding Stryker in this PR:
I think the mixed line separator issue is the most pressing one, since it makes it hard to contribute.
I would prefer to first get the checklist from the first PR done; the next thing can be Stryker, and I don't care about the credits. But let's not add more things to this PR, I would say it's big enough.

@brodybits brodybits added the bug Something isn't working label Jun 26, 2020
@brodybits

I just raised #61 to improve the test assertions.

  • the changed behavior from 0.1.27

You mean creating issues just to close them? Or one to list all of them?

Let's start with one to list them all for now, but let's keep the regression in #57 open. My purpose is to avoid losing track.

Are you referring to this checklist?
I would prefer to maintain it there

In that case, my idea would be to keep it open as a draft PR, maybe put something in brackets at the beginning of the title. Again my purpose is to avoid losing track.

I will do a final review and merge this weekend. Please do me a favor and ping me, multiple times if needed, just in case I drop it for any reason. I am definitely with you to resolve the line endings and finish that checklist.

For the record, I think this PR needs to be merged in a merge commit. But this should be an exception; the general preference should be squash commits.

And thanks again for your contributions!


karfau commented Jun 28, 2020

Since you asked for it, @brodybits here is your first ping.
How is it going? 😉

@brodybits

Pong, thanks!

I will probably make the formatting edit, do one last review & test, and merge this one.

The primary purpose is to make it absolutely clear
that the return expression spans multiple lines;
it should also be a little more consistent with
Prettier formatting.

@brodybits brodybits left a comment


Approved to be merged in a merge commit. Thanks again!

@brodybits brodybits merged commit 102ddd2 into xmldom:master Jun 28, 2020
@karfau karfau deleted the 35/recover-vows-tests-CRLF branch June 28, 2020 18:03