
[feature request] junit test reporter #58

Open
jjangga0214 opened this issue Jul 29, 2019 · 11 comments · May be fixed by #103

jjangga0214 commented Jul 29, 2019

Hi, thank you for this convenient CLI!

I'd like to suggest a feature which might be especially useful in a CI environment.
If markdownlint-cli supported a JUnit-style reporter (the de facto format), it would be much more convenient to check the lint results in CI.
Other popular test runners and linters (e.g. Jest, ESLint) support this feature as well.
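For reference, a JUnit-style report is just a small XML document along these lines (file name, rule name, and counts here are purely illustrative):

```xml
<testsuites>
  <testsuite name="Markdownlint" tests="1" failures="1">
    <testcase classname="docs/README.md" name="MD041/first-line-heading">
      <failure message="First line in a file should be a top-level heading"/>
    </testcase>
  </testsuite>
</testsuites>
```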

How do you feel about this?

@alejandroclaro
Copy link

alejandroclaro commented May 15, 2020

I also vote for adding a mechanism for custom formatters to support other formats like:

  • JSON
  • Checkstyle
  • JUnit

We use ESLint and stylelint to generate Checkstyle reports in Jenkins. It would be nice to be able to do this with markdownlint as well.

FISHMANPET commented Jun 19, 2020

I'd also be interested in this to integrate markdownlint into Azure DevOps pipelines using one of their supported unit test report formats (which include JUnit as well as others). It looks like there are a number of npm packages to produce JUnit, Checkstyle, and probably other formats as well.

One request I'd add to this is to allow the tool to exit with a 0 return code even if there are test failures (either by default when outputting a file, or with an additional flag). That way we can use existing tools to determine whether markdownlint errors will fail the build or not. For example, Azure DevOps has a setting to specify if failed tests in the uploaded test file should fail the build or not.

@DavidAnson
Collaborator

I have a plan around this, but need some more time to prove it out. Your comment about return values is interesting and I think it’s the first time I’ve heard that. Can you please point to some other tools that support that behavior? I would expect that people would just ignore the return value if they didn’t care about it; I’m not sure why a tool would need to support that explicitly. Thanks for helping me understand!

@FISHMANPET

The one I'm most familiar with is Pester, for unit testing PowerShell. It won't generate any kind of "error" on a test failure by default (though you can specify a failure on test failure), requiring the user/script to interpret the returned information and determine what course of action to take.

It's also incredibly useful in an environment like Azure DevOps, which can "consume" test results to generate reports and determine trends. The general workflow for running a test there is that you'll have a step that runs the test and generates a report file, and then you run the Publish Test Results task to consume that generated report.

One way to do this is to allow the testing task to "fail" if the tests fail, and then set the Publish task to run even if previous steps have failed, so that the results will still get uploaded. But the experience for that isn't great, with the reason for the build failure being just "Bash exited with code '1'". This is a build report I was emailed while I was working on implementing markdownlint in a project of mine this afternoon:
[build report screenshot]

Another way to do it is for the testing task to succeed to indicate that it successfully ran the tests, and then the Publish Test Results task can be configured to fail on test failures. I had that in another project I was building out, and on a run where the tests failed I got this much more useful and friendly build report:
[build report screenshot]
This results in a much friendlier build report, but it also gives me the opportunity to control whether failing tests should fail my build or not. This is useful when bringing some kind of project (code or markdown) "in from the cold", so to speak, and starting to apply testing standards to it. In an existing markdown repository that's never been linted, it could be overwhelming to require the first person to change it to fix the hundreds of errors that have built up for years. For example, the markdown repo I was linting today had 544 errors of various types, and if we required all of those to be fixed before someone could update any documentation, we'd never get documentation updated again.
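As a rough sketch, that second approach might look like this in an Azure DevOps YAML pipeline; the --junit flag is hypothetical here (markdownlint-cli has no such option today), and PublishTestResults@2 is the standard task, with failTaskOnFailedTests deciding whether lint failures fail the build:

```yaml
steps:
  - script: |
      # Hypothetical flag: write a JUnit-format report; swallow the exit code so
      # this step succeeds and the published results decide pass/fail instead.
      npx markdownlint --junit lint-results.xml '**/*.md' || true
    displayName: Run markdownlint

  - task: PublishTestResults@2
    displayName: Publish markdownlint results
    inputs:
      testResultsFormat: JUnit
      testResultsFiles: lint-results.xml
      failTaskOnFailedTests: true  # flip to false to publish results without failing the build
```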

It wouldn't be too hard for me to wrap the call to markdownlint-cli in some way that just eats the non-zero exit code and still returns 0, but it would be preferable for it to not return a non-zero code in the first place. Especially since the exit code is the same whether the command failed to execute or there were test failures, I could potentially be hiding execution failures if I did that.

PowerShell also generally isn't very happy when things return non-zero codes, because that indicates a failure of the command; it would prefer the command return 0 to indicate it executed successfully and then find a different way to report its data. The way I'd approach running a command, generally, is that I want to know if the command succeeded or not, and throwing an error because it successfully found something it was looking for strikes me as an anti-pattern (though that ship has certainly sailed and it's a pattern in use all over the place).

@FISHMANPET

Oh, and one more example of a project I have where the test runner returns a non-zero exit code but the test results still get uploaded:
[pipeline email screenshot]

My coworkers are continuously confused by that pipeline because they're conditioned to focus on the big red X in those emails and don't even think to keep looking to see that there were test failures.

@DavidAnson
Collaborator

Great explanation, thank you. I might still want to treat these two things separately, but I understand the motivation.

@FISHMANPET

OK so I decided to play around with this a little bit and ended up making a possibly workable POC for this:
master...FISHMANPET:junit
First to note, I don't "know" JavaScript so I kinda fumbled my way through this, so excuse any style errors or anything like that.

I used the package junit-report-builder to do most of the work of generating the XML file. It can be a bit of trial and error to figure out how a linter's results will best map to something like JUnit and then be interpreted by a CI tool. Also, the concept of a "passed" test is kind of difficult, when in the extreme every single line could fail MD013 (then would every single line pass MD013 if it didn't fail? a question best left to the philosophers). Azure DevOps is the tool I have easy access to, so I did something that works with that.

This will generate a single test suite called Markdownlint, and each failure will be a test case. I set the classname to the filename, since that's how Azure DevOps interprets it, and it looks like other tools do as well. I played around with what the name should be, and ended up just using the full message minus the detail. I originally tried just the rule name. It looks like GitLab might expect name to be unique for each file, and the display of the error in Azure DevOps wasn't great either, so I settled on something that will be unique and also show usable detail in the test failure. The failure message is the full markdownlint output, including the detail.
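A minimal sketch of that mapping with junit-report-builder looks roughly like this; the shape of the result objects (fileName, ruleAndDescription, fullMessage) is assumed for illustration and isn't the exact code from the POC branch:

```js
// Sketch: map markdownlint failures onto a JUnit report with junit-report-builder.
const builder = require('junit-report-builder');

function writeJunitReport(failures, outputFile) {
  // One suite for the whole run; every reported failure becomes a test case.
  const suite = builder.testSuite().name('Markdownlint');
  for (const failure of failures) {
    suite
      .testCase()
      .className(failure.fileName)        // file name doubles as the JUnit classname
      .name(failure.ruleAndDescription)   // e.g. "12: MD013/line-length Line length"
      .failure(failure.fullMessage);      // full markdownlint output, including detail
  }
  builder.writeTo(outputFile);
}
```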

I added a new option, --junit or -j, for this. I decided to make it exclusive of --output because I couldn't imagine wanting both, and that made it relatively easy to drop in. I replaced the "if output, else if lintResultString" logic with "if junit, else if output, else if lintResultString". Since the intention here, I think, is that this would be run in CI, it will still print results to the console, because it can be nice to see those, but it writes them with console.log() instead of console.error(), which satisfied my desire to exit successfully even if there were failures.

Based on my JavaScript skill (read: none), I don't want to make a PR for this; I'm mostly providing it as a proof of concept, or guidance as to how this feature might be implemented. But if you think it's good enough for a PR, let me know and I'll open one. And I won't be bothered if you don't 😄

@DavidAnson
Collaborator

This looks great! You used the same package I’d made a note to use. If this is your first project with JS, you should be really proud. I’ve got a set of nitpicks based on a quick review, but I’d suggest you send a PR because what I see seems very close to final. (After adding a couple of test cases.) I’m in agreement with most of your decisions here. Thanks a lot!

FISHMANPET linked a pull request Jun 25, 2020 that will close this issue
@DavidAnson
Collaborator

@jjangga0214 @alejandroclaro I had wanted to try some different ideas for a markdownlint CLI and did so recently with https://github.com/DavidAnson/markdownlint-cli2

It supports pluggable output formatters and one of the ones I built to prove the concept is for JUnit in the style of what @FISHMANPET proposes: https://www.npmjs.com/package/markdownlint-cli2-formatter-junit

If you find it useful or have any feedback, please let me know!
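For reference, wiring that up is roughly a matter of installing the formatter package and listing it in the markdownlint-cli2 configuration file, something along these lines (see the formatter's README at the link above for the exact details):

```jsonc
// .markdownlint-cli2.jsonc
{
  "outputFormatters": [
    [ "markdownlint-cli2-formatter-junit" ]
  ]
}
```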

@FISHMANPET

I finally got around to trying markdownlint-cli2 and the JUnit formatter included there. It appears to meet my needs running specifically within Azure DevOps. I especially like the change in exit codes, which isn't part of this issue but was mentioned in the comments.
