Improve test coverage with future-proof testing strategy #72
I am now merging issue #103 into this issue, hoping to bring the high-level discussion back here. Coming from PR #112, I am now thinking it would be good to target a test subdirectory structure that groups the tests by characteristics such as these:
From discussion here and in #112, it would probably be good to start by introducing Jest with a very limited number of clean and simple test cases. It would be ideal for the test structure to reflect higher-level requirements and feature documentation, but I don't expect this to be really practical in the near term.
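For instance, a first test could be as small as this sketch (it assumes the package main exports `DOMParser`, so the require path may need adjusting):

```js
// test/dom-parser.test.js — a minimal, clean first Jest test.
const { DOMParser } = require('..')

describe('DOMParser', () => {
  it('parses a simple XML document', () => {
    const doc = new DOMParser().parseFromString('<xml><child/></xml>', 'text/xml')

    expect(doc.documentElement.tagName).toBe('xml')
    expect(doc.documentElement.firstChild.tagName).toBe('child')
  })
})
```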
Now that the basics are in place, I would love to start thinking about a solid structure for tests, one that also makes it easy for contributors to know where to put their tests. From my experience the clearest solution is to group (unit) tests by module, since it's totally clear where to put a test if only one module is involved. What is sorely missing right now are tests that exercise the existing API on the different "classes/types/prototypes". I think it would be "rather simple" to mock dependencies from other modules and write tests that only exercise the low-level API of a single module. (But those tests tend to break when refactoring, so doing it this way right away might get in our way.) I think it would make sense …
We could of course also have a first level of … Any comments/additions/ideas?
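To make the per-module mocking idea concrete, here is a sketch using `jest.mock` (the module path and function name are invented for illustration):

```js
// Replace a sibling module with a mock so only one module is under test.
// '../lib/conventions' and isHTMLMimeType are illustrative names only.
jest.mock('../lib/conventions', () => ({
  isHTMLMimeType: jest.fn(() => false),
}))

const { isHTMLMimeType } = require('../lib/conventions')

test('the module under test only sees the mocked dependency', () => {
  expect(isHTMLMimeType('text/xml')).toBe(false)
  expect(isHTMLMimeType).toHaveBeenCalledWith('text/xml')
})
```

As noted, such a mock couples the test to the current module boundary, which is exactly why it tends to break during refactoring.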
My apologies for the delay. Here is my quick reaction so far: I think the most important step is to finish PR #112 to add the … I am sorry to say that the mocking between modules looks like a smell to me. For reference: https://medium.com/javascript-scene/mocking-is-a-code-smell-944a70c90a6a
I am now wondering if we could improve the architecture and API to expose layers like these:
Some wording in this list confuses me, so I will try to wrap up my current understanding of the architecture:
I'm quite certain I have seen various places that break "separation of concerns", which will make it slightly harder to write tests for (some) units. But I think it makes sense to write separate tests for the different "parts".
Nice article, I agree with everything I have scanned from it. I would be happy to apply whatever is needed to do black-box testing as much as possible. Let's see how far we get.
How about an approach like this:
We have made some nice progress with improving the test coverage with help from …
The next thing I would like to see is the formatting. If we could get a PR focused only on normalizing the whitespace, that would be great.
My idea is to use Prettier to clean up the whitespace, apply consistent formatting, and apply some consistent styling all at once. I think doing this in multiple steps would introduce quite a bit of extra code churn. This is what I proposed in PR #123, starting with `test`.
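For example, the one-off cleanup could be scoped to the test directory first (the glob is illustrative; exact paths depend on what #123 settles on):

```
npx prettier --write "test/**/*.js"
```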
I don't think it makes sense to keep the custom assertions, so porting their tests to Jest doesn't make sense either. I will focus on getting rid of the custom assertions and the related tests and vows once #123 has landed.
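For illustration, porting usually means replacing a custom helper with built-in Jest matchers (the commented-out helper name below is invented):

```js
const { DOMParser } = require('..')

test('parseFromString returns a document with the expected root', () => {
  const doc = new DOMParser().parseFromString('<a><b/></a>', 'text/xml')
  // previously something like: customAssert.hasRootTag(doc, 'a')
  expect(doc.documentElement.tagName).toBe('a')
  expect(doc.documentElement.childNodes.length).toBe(1)
})
```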
I think these should be the next steps, as we have discussed:
It would be awesome if @karfau has any more time to help with some of these items. There is no formal deadline, but I think finishing these steps would enable us to start cleaning up … I would also like to make the 0.4.0 release by the end of September, with or without more test updates.
If there is nobody else working on those test files, I can pick up this one:
I have an idea how to fix the issue of all that console output at the same time as switching to Jest expect: by using the … Any objections?
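The concrete helper referenced above is missing from this excerpt, but one possible shape, assuming the parser's default error handling logs via `console.error`, would be:

```js
// Capture console.error per test so parser errors don't pollute the test
// output while still being assertable. Assumes the default error handler
// of the parser logs there.
let errorSpy

beforeEach(() => {
  errorSpy = jest.spyOn(console, 'error').mockImplementation(() => {})
})

afterEach(() => {
  errorSpy.mockRestore()
})

test('reports an error for unclosed tags without noisy output', () => {
  const { DOMParser } = require('..')
  new DOMParser().parseFromString('<open>', 'text/xml')
  expect(errorSpy).toHaveBeenCalled()
})
```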
@karfau I was hoping that you would take over this work. Please feel free to propose the changes as you feel best. FYI I will likely go offline in 1-2 hours and be gone for a long weekend. Thanks again!
@brodybits Those tests are also still based on `vows`, which has already been removed.
From review of PR #174 I think that …
@karfau I recall we discussed running Stryker on PRs as well as on the master branch. I think it would be awesome if we could find a way for Stryker to help detect modifications that are not very well tested.
@brodybits Currently Stryker runs are failing on master, since the snapshots for the new error-handling tests make assumptions about the lines of code that errors originate from.
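For reference, a minimal Stryker setup for a Jest-based suite could look roughly like this (a sketch; package names and options should be checked against the Stryker docs for the version in use):

```js
// stryker.conf.js — minimal sketch for mutation testing with Jest.
// Assumes @stryker-mutator/core and @stryker-mutator/jest-runner are installed.
module.exports = {
  mutate: ['lib/**/*.js'], // which sources to mutate (illustrative glob)
  testRunner: 'jest',
  reporters: ['progress', 'clear-text', 'html'],
  coverageAnalysis: 'off', // required by the jest runner in older versions
}
```

Asserting on the error message text rather than snapshotting locations might also keep these runs stable when line numbers shift.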
This has improved a lot, even though no discussion took place outside of this thread.
We just restored the existing `vows` test suite, but I don't think we should add new tests to it (or only in a limited manner): `vows` is another abandoned test framework.
I think this is again a high-prio issue, since it lowers the barrier to contribution, and we will hopefully add a lot of tests for all the known issues. (I imagine adding branches or even draft PRs with a failing test case as a first step of every investigation.)
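Jest makes that first step cheap to record (the issue reference below is purely illustrative):

```js
// Record a known issue as a pending test until someone picks it up:
test.todo('preserves attributes when cloning (see issue #NNN)')

// Or land the failing reproduction, skipped so CI stays green:
test.skip('preserves attributes when cloning (reproduction for #NNN)', () => {
  const { DOMParser } = require('..')
  const doc = new DOMParser().parseFromString('<a b="1"/>', 'text/xml')
  const clone = doc.documentElement.cloneNode(true)
  expect(clone.getAttribute('b')).toBe('1')
})
```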
Suggested LOD (level of done): …
… `setAttributeNode` and `setAttributeNodeNS` …
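As an example of the kind of API-level coverage mentioned, a test for these could look like this (a sketch; it assumes the package main exports `DOMParser`):

```js
const { DOMParser } = require('..')

test('setAttributeNode replaces an existing attribute', () => {
  const doc = new DOMParser().parseFromString('<xml a="1"/>', 'text/xml')
  const el = doc.documentElement

  const attr = doc.createAttribute('a')
  attr.value = '2'
  const replaced = el.setAttributeNode(attr)

  // per the DOM spec, the previously set attribute node is returned
  expect(replaced.value).toBe('1')
  expect(el.getAttribute('a')).toBe('2')
})
```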
@ owners & authors Please keep this description up to date with results from the discussion below.