
Option to explicitly fail check run #994

Open
pavetok opened this issue Nov 25, 2023 · 8 comments
pavetok commented Nov 25, 2023

Imagine we have a workflow with 2 jobs. In the first (a matrix job) we run e2e tests in multiple environments and then upload the test reports as artifacts. In the second we download all artifacts, analyze the test reports with a condition like ${{ !cancelled() && (needs.testing.result == 'success' || needs.testing.result == 'failure') }}, and create a check run with a name like E2E test report.

Important detail: the branch protection rule requires E2E test report as a status check, and nothing else.

Testing jobs may fail in some environments without producing any artifacts, so the report job doesn't detect the test failures and the check run stays green.

I think this could be fixed with an option like fail_check: "${{ needs.testing.result == 'failure' }}".
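The setup described above might look roughly like this. Job names, artifact names, and the environment list are illustrative, and the fail_check input is the option being proposed in this issue, not an existing one:

```yaml
# Hypothetical sketch of the two-job workflow described above.
# `fail_check` does NOT exist in action-junit-report; it is the
# proposed option from this issue.
jobs:
  testing:
    strategy:
      matrix:
        environment: [dev, staging, prod]
    runs-on: ubuntu-latest
    steps:
      - run: ./run-e2e-tests.sh "${{ matrix.environment }}"
      - uses: actions/upload-artifact@v3
        if: always()
        with:
          name: e2e-test-report-${{ matrix.environment }}
          path: build/test-results/**/TEST-*.xml

  report:
    needs: testing
    if: ${{ !cancelled() && (needs.testing.result == 'success' || needs.testing.result == 'failure') }}
    runs-on: ubuntu-latest
    steps:
      # Without a `name` input, download-artifact fetches all artifacts.
      - uses: actions/download-artifact@v3
        with:
          path: artifacts
      - uses: mikepenz/action-junit-report@v4
        with:
          check_name: E2E test report
          report_paths: "artifacts/e2e-test-report-*/**/TEST-*.xml"
          fail_check: "${{ needs.testing.result == 'failure' }}"
```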

mikepenz (Owner) commented:

Thank you for the report.

Can you please provide some more explanation? I am not sure I fully see how the "fix" relates to this project.

Overall I don't think this is something this action should be responsible for. If a matrix job is meant to produce artifacts but didn't, then the failure should already happen during the collection of those artifacts, not at the later reporting stage.

Similarly, a step could be added before (or after) this action which checks that the count of artifacts matches your expectation, and fails the workflow otherwise.
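Such a guard could be a single shell step. A minimal sketch, assuming the artifacts were downloaded into artifacts/ and that the directory naming and expected count below match your matrix (both are illustrative):

```yaml
# Illustrative guard step: fail the job if fewer report directories
# were downloaded than the matrix should have produced.
- name: Assert all test reports are present
  shell: bash
  run: |
    expected=8   # adjust to the size of the test matrix
    actual=$(find artifacts -maxdepth 1 -type d -name 'e2e-test-report-*' | wc -l)
    if [ "$actual" -ne "$expected" ]; then
      echo "Expected $expected report artifacts, found $actual" >&2
      exit 1
    fi
```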


pavetok commented Dec 9, 2023

Thank you for the response.

> but it didn't then the failure should already happen during the collection of these artifacts

The download-artifact action has no wildcard API. You can download a single artifact or all of them at once. At the moment I'd have to declare 8 different artifacts. This approach doesn't scale well, because later it could turn into 16, 32, etc.

> Similarly a step could be added before this action (or after) which checks that the count of artifacts matched your expectation

It would require at least 2 extra steps (count & assert) compared with the single line fail_check: "${{ needs.testing.result == 'failure' }}". I think explicitly failing the check run is much clearer.


pavetok commented Dec 9, 2023

I tried the idea with an assertion step. It works. But I forgot to mention one more subtlety: if the application code hasn't changed relative to the previous run, the tests are skipped entirely. Hence no artifacts are produced, but the build is expected to be successful.

mikepenz (Owner) commented:

Thank you.

I'm trying to understand how fail_check: "${{ needs.testing.result == 'failure' }}" would offer the same capability as counting the artifacts. Again, the action wouldn't know how many are expected, and very likely shouldn't.

The second answer actually highlights this even more: how would the action be able to identify the case where it shouldn't expect files? You can configure the action to not fail if there are no results, but then there is still an external factor, where something needs to identify that this is an acceptable result.

Also, how do you then differentiate between: tests failed and no artifacts were retained; there was a problem retaining artifacts; artifacts were retained but some files were missing?

I think one good addition to the action could be that it outputs the number of files it detected, so the validation afterwards would be slightly simpler 🤔

mikepenz self-assigned this Dec 11, 2023

pavetok commented Dec 11, 2023

> I try to understand how fail_check: "${{ needs.testing.result == 'failure' }}" would offer the same capability as counting the artifacts?

A testing job fails in the presence of failed tests. So X testing jobs will fail and Y will pass. They all share the same job id, and needs.testing.result == 'failure' will evaluate to true.

> How would the action be able to identify for the case where it shouldn't exepect files.

If at least one testing job fails, then the workflow should fail the check run. No need to count files. Something like this:

```yaml
- uses: mikepenz/action-junit-report@v4
  with:
    check_name: E2E test report
    report_paths: "artifacts/e2e-test-report-*/**/TEST-*.xml"
    fail_check_run: "${{ needs.testing.result == 'failure' }}"
```

> I think one good addition of the action could be that it outputs the amount of files it detected, so the validation afterwards would be slightly simpler 🤔

The number of files ≈ the number of test classes. I doubt it would be useful.

mikepenz (Owner) commented:

In the example you propose, if fail_check_run is true, the action step would just fail? (No other logic associated?)


pavetok commented Dec 16, 2023

> the action step would just fail? (no other logic associated)?

The created check run would fail, not the action step.


pavetok commented Dec 17, 2023

Check runs as PR status checks don't constrain the job structure. That's why I used them. They're like a lowest common denominator for different workflows and branches.

I found a trick with defining jobs that "gather" up other jobs' statuses. I'll try to implement it.
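One common form of that trick is a small "gate" job that depends on the matrix job, runs even when it failed, and converts its result into a single required status check. A sketch, assuming the matrix job is called testing (the gate job's name is illustrative):

```yaml
# Illustrative "gather" job: the branch protection rule can require
# this single job instead of every matrix leg.
  e2e-gate:
    needs: testing
    if: ${{ !cancelled() }}
    runs-on: ubuntu-latest
    steps:
      - name: Fail if any testing job failed
        if: ${{ needs.testing.result == 'failure' || needs.testing.result == 'cancelled' }}
        run: exit 1
```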
