
Improve test coverage with future-proof testing strategy #72

Closed
2 of 7 tasks
karfau opened this issue Jun 29, 2020 · 21 comments
Assignees
Labels
good first issue · help-wanted · question · strategic
Milestone

Comments

@karfau
Member

karfau commented Jun 29, 2020

We just restored the existing vows test suite.

But I don't think we should add new tests to it (or only in a limited manner).

  • there are many tests with multiple assertions; the first failure hides other potentially failing ones
  • vows is yet another abandoned test framework
  • generating that kind of repetitive test is more verbose than in current frameworks (like Jest)

I think this is again a high-priority issue, since it lowers the barrier to contribution, and we will hopefully add a lot of tests for all the known issues. (I imagine adding branches or even draft PRs with a failing test case as a first step of every investigation.)

Suggested LOD (level of done):

@owners & authors: Please keep this description up to date with results from the discussion below.

@brodybits brodybits pinned this issue Jun 30, 2020
@brodybits brodybits changed the title Future proof testing strategy Future-proof testing strategy Jun 30, 2020

@brodybits brodybits changed the title Future-proof testing strategy Improve test coverage with future-proof testing strategy Aug 21, 2020
@brodybits brodybits added the bug label Aug 21, 2020
@brodybits
Member

I am now merging issue #103 into this issue, hoping to bring the high-level discussion back here.

Coming from PR #112, I am now thinking it would be good to target a test subdirectory structure to group the tests by characteristics such as these groups:

  • Existing vows tests are not complete and not structured as cleanly as we would like, but it would be nice to keep them together as a baseline for our test coverage. I would still love to port the existing vows tests to Jest if this is not too impractical.
  • The xmltest cases proposed in PR #112 (test: Add jest testsuite with xmltest cases) require using a third-party test suite with some licensing constraints. Yes, I think they would help improve mutation testing coverage in the near term. I am wondering if we should consider this a form of integration testing.
  • Maybe we should have API tests to help avoid breaking API changes.

From discussion here and in #112, it would probably be good to start by introducing Jest with a very limited number of clean and simple test case(s).

It would be ideal for the test structure to reflect higher-level requirements and feature documentation, but I don't expect this to be really practical in the near term.

@karfau
Member Author

karfau commented Aug 28, 2020

Now that the basics are in place, I would love to start thinking about a solid structure for tests that also makes it easy for contributors to know where to put their tests.

From my experience the clearest solution is to group (unit) tests by module, since it is then obvious where to put a test if only one module is involved.
Caveat: The dependencies between our modules are quite "intense", so in the tests I added so far I simply used APIs from multiple modules.

What is heavily missing currently are tests that exercise the existing API on the different "classes/types/prototypes". I think it would be "rather simple" to mock dependencies from other modules and write tests that only exercise the low-level API of a single module. (But those tests tend to break when refactoring, so doing it this way right away might get in our way.)
I would suggest to have a folder api that either contains a file for each module in lib, or a file for each type listed in the readme.

I think it would make sense to move the vows tests into a vows or even legacy folder and ✔️ drop everything that is currently not used in CI from the test folder. Any objections?

The xmltest suite is so special that it could just get its own folder; I will take care of that when working on that part. ✔️

We could of course also have a first level of unit, functional, integration or whatever categories we want, but we should document what kind of tests go where and it might be overhead to start with it.

Any comments/additions/ideas?

@brodybits
Member

My apologies for the delay. Here is my quick reaction so far:

I think the most important step is to finish PR #112 to add the xmltest cases, in their own xmltest subdirectory. This should give us the confidence needed to continue with any further refactoring and other contributions that are coming in.

I am sorry to say that the mocking between modules looks like a smell to me. For reference: https://medium.com/javascript-scene/mocking-is-a-code-smell-944a70c90a6a

I am now wondering if we could improve the architecture and API to expose layers like these:

  • Abstract tree from the SAX parser, maybe with type definitions in a d.ts file
  • The SAX parser itself from lib/sax.js
  • DOM parser
  • DOM AST - ??
  • DOM functions from lib/dom.js

@karfau
Member Author

karfau commented Aug 30, 2020

I think the most important step is to finish PR #112 to add the xmltest cases, in their own xmltest subdirectory. This should give us the confidence needed to continue with any further refactoring and other contributions that are coming in.

~~WIP~~ Done

  • Abstract tree from the SAX parser, maybe with type definitions in a d.ts file
  • The SAX parser itself from lib/sax.js
  • DOM parser
  • DOM AST - ??
  • DOM functions from lib/dom.js

Some wording in this list confuses me, so I will try to wrap up my current understanding of the architecture:

  • (By definition) a SAX parser is an "event dispatcher" that triggers different events (/ calls event handlers on an implementation that is passed to it). Our implementation, SAXParser, is defined in lib/sax.js and will be (integration) tested as part of the xmltest cases. I also hope to later convert an existing SAX test suite that just tests the SAX parser, to see if that is feasible (see #55, replace lib/sax.js?).
  • The event handler in our case is the DOMHandler defined in lib/dom-parser.js. It creates the DOM, which is something like an AST, since it describes the XML as a tree of objects. The interfaces provided by the DOM are well specified, and xmldom only provides a subset for certain levels. It is implemented in lib/dom.js and allows for traversal and modifications. (I think there are some tests as part of the vows test suite; those might even be worth porting, but I expect there to be much more API that is not tested. I hope that I will be able to find an existing DOM test suite that we can test our implementation with, since it's a well-defined standard.)
  • The DOMParser and its only method parseFromString are just there to store some state and configure the SAXParser and DOMHandler before invoking them. It will also be part of the xmltest cases.
  • Add to that the XMLSerializer (defined in and exported from lib/dom.js, but also exported from lib/dom-parser.js), which turns the DOM into a string again (by calling the same method that Node.toString would; not sure why it needs a "class" for that).

I'm quite certain I have seen various places that break "separation of concerns", which will make it slightly harder to write tests for (some) units. But I think it makes sense to put separate tests for the different "parts".

@karfau
Member Author

karfau commented Aug 30, 2020

I am sorry to say that the mocking between modules looks like a smell to me. For reference: medium.com/javascript-scene/mocking-is-a-code-smell-944a70c90a6a

Nice article; I agree with what I have read of it so far. I'd be happy to apply whatever is needed to do black-box testing as much as possible. Let's see how far we get.

@brodybits
Member

How about an approach like this:

  • finish the work on xmltest and check our Stryker coverage
  • see if we can replace the existing SAX parser as discussed in #55 (replace lib/sax.js?)
  • investigate how we can improve the separation of concerns and if we should make the API more layered
  • make the tests more modular while refactoring the code to be more modular

@brodybits
Member

We have made some nice progress with improving the test coverage with help from xmltest (PR #112) and porting the Vows tests to Jest (PR #121). But I think we have quite a few TODO items:

  • The Stryker score seems to be around 65%. Recent contributions such as merged PR #91 (remove ES6 syntax from getElementsByClassName) and updates proposed in PR #100 (Add method getElementsByClassName for Element) do not seem to be very well tested according to the mutation report. Or am I missing something here?
  • Stryker mutation testing seems to be quite a bit slower. I wonder if switching to Jest mutator would speed it up or not.
  • I think the use of a custom assert module should be replaced with strict equality tests whenever possible and looser equality tests as needed. Then we can remove test/assert.vows.checks.js and completely drop the use of Vows. @awwright started working on this in PR #89 (Port custom assertions to mostly strict assertions), but I would favor switching directly to the Jest expect pattern if possible.
  • Extra log output from npm test should be cleaned up.

@awwright
Contributor

The next thing I would like to see is the formatting.

Like if we could get a PR focused only on normalizing the whitespace, that would be great.

@brodybits
Copy link
Member

My idea is to use Prettier to clean up the white space, apply consistent formatting, and apply some consistent styling all at once. I think to do this in multiple steps would introduce quite a bit of extra code churn. This is what I proposed in PR #123, starting with test.

@karfau
Member Author

karfau commented Sep 17, 2020

I think the use of a custom assert module should be replaced with strict equality tests whenever possible and looser equality tests as needed. Then we can remove test/assert.vows.checks.js and completely drop the use of Vows. @awwright started working on this in PR #89, but I would favor switching directly to the Jest expect pattern if possible.

I don't think it makes sense to keep the custom assertions, so porting their tests to Jest doesn't make sense either. So I will focus on getting rid of the custom assertions, their related tests, and vows, after #123 has landed.

@brodybits
Member

I think these should be the next steps, as we have discussed:

  • replace the custom assertions with Jest expect statements
  • switch Stryker over to using the Jest runner
  • add some more tests to increase the Stryker mutation testing score

It would be awesome if @karfau has any more time to help with some of these items. There is no formal deadline, but I think finishing these steps would enable us to start cleaning up lib and consider adding more features.

I would also like to make the 0.4.0 release by the end of September, with or without more test updates.
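For the Jest-runner step, a stryker.conf.js might look roughly like this. Option names follow Stryker's documented Jest-runner configuration of that era; treat the details (mutate glob, reporters, the coverageAnalysis restriction) as assumptions, not the project's actual config:

```javascript
// Hedged sketch of a stryker.conf.js switching to the Jest runner.
// The Jest runner of that era required coverageAnalysis to be 'off'.
module.exports = {
  mutate: ['lib/**/*.js'],
  testRunner: 'jest',
  coverageAnalysis: 'off',
  reporters: ['clear-text', 'html', 'progress'],
}
```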

@karfau
Member Author

karfau commented Sep 18, 2020

If there is nobody else working on those test files, I can pick up this one:

  • replace the custom assertions with Jest expect statements

@karfau
Member Author

karfau commented Sep 18, 2020

I have an idea for fixing the issue of all that console output at the same time as switching to Jest expect: by using getTestParser and toMatchSnapshot as in the xmltest cases.

Any objections?

@brodybits
Member

@karfau I was hoping that you would take over this work. Please feel free to propose the changes as you feel best.

FYI I will likely go offline in 1-2 hours and be gone for a long weekend. Thanks again!

@karfau
Member Author

karfau commented Jan 15, 2021

@brodybits
I just (re)discovered that #126 stopped running the tests for the custom assertions, and that it also changed code in test/assert.js.

Those tests are also still based on vows, which was already removed.
Rereading the thread in #121, I'm assuming that all assertions should be converted to normal Jest expect assertions. I'll start working on that to get rid of this strange state (and create a draft PR soon).

@brodybits
Member

From reviewing PR #174, I think the setAttributeNode and setAttributeNodeNS member functions are not very well tested. Here is what I found:

% git grep -C3 setAttributeNode test
test/dom/attr.test.js-          assert(root.attributes.length, 2)
test/dom/attr.test.js-          try {
test/dom/attr.test.js-                  var c = root.ownerDocument.createElement('c')
test/dom/attr.test.js:                  c.setAttributeNode(root.attributes.item(0))
test/dom/attr.test.js-          } catch (e) {
test/dom/attr.test.js-                  assert(e.code, 10)
test/dom/attr.test.js-                  return
--
test/dom/attr.test.js-          assert(child.attributes.length, 4, 'after adding 4 and one with namespace')
test/dom/attr.test.js-          try {
test/dom/attr.test.js-                  var c = root.ownerDocument.createElement('c')
test/dom/attr.test.js:                  c.setAttributeNodeNS(root.attributes.item(0))
test/dom/attr.test.js-          } catch (e) {
test/dom/attr.test.js-                  assert(e.code, 10, 'wrong error code')
test/dom/attr.test.js-                  return
--
test/dom/attr.test.js-          root.setAttributeNS('a', 'a:a', '1')
test/dom/attr.test.js-          assert(root.attributes.length, 4)
test/dom/attr.test.js-          //not standart
test/dom/attr.test.js:          //      root.firstChild.setAttributeNode(root.attributes[0]);
test/dom/attr.test.js-          //      assert(root.attributes.length, 0);
test/dom/attr.test.js-  })
test/dom/attr.test.js-

@karfau karfau added the good first issue, help-wanted, question and strategic labels and removed the bug label Jan 21, 2021
@karfau karfau added this to the 1.0.0 milestone Jan 21, 2021
@brodybits
Member

@karfau I recall we discussed running Stryker on PRs as well as on the master branch. I think it would be awesome if we could find a way for Stryker to help detect modifications that are not very well tested.

@karfau
Member Author

karfau commented Mar 10, 2021

@brodybits Currently Stryker runs are failing on master, since the snapshots for the new error handling tests make assumptions about the lines of code that errors originate from.
We need to decide how to fix that, but I think it shouldn't prevent us from releasing 0.6.0.

@karfau karfau unpinned this issue Jul 28, 2021
@karfau karfau modified the milestones: planning 1.0.0, before 1.0.0 Dec 23, 2021
@karfau
Member Author

karfau commented Oct 1, 2023

This has improved a lot, even though no discussion took place outside of this thread.

@karfau karfau closed this as completed Oct 1, 2023