Option to explicitly fail check run #994
Thank you for the report. Can you please provide some more explanation? I am not sure I fully see how the "fix" relates to this project. Overall I don't think this is something this action should be responsible for: if a matrix job is meant to produce artifacts but didn't, then the failure should already happen during the collection of these artifacts, not at the later reporting stage. Similarly, a step could be added before this action (or after) which checks that the count of artifacts matches your expectation, and fails the workflow in that case.
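The count-and-assert step suggested above could look roughly like this (a sketch; the `artifacts` path, report glob, and `EXPECTED_REPORTS` variable are assumptions for illustration):

```yaml
- name: Assert artifact count
  shell: bash
  run: |
    # Count downloaded report files; paths are assumed for illustration
    count=$(find artifacts -name '*.xml' | wc -l)
    echo "found $count report files"
    # EXPECTED_REPORTS is an assumed environment variable
    if [ "$count" -ne "$EXPECTED_REPORTS" ]; then
      echo "expected $EXPECTED_REPORTS report files, found $count" >&2
      exit 1
    fi
```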
Thank you for the response.

It will require at least 2 extra steps (count & assert) in comparison with a single line.
I tried the idea with an assertion step. It works. But I forgot to mention one more subtlety: if the application code doesn't change relative to the previous run, tests will be skipped entirely. Hence no artifacts will be produced, but the build is expected to be successful.
Thank you. I try to understand how

Which actually the second answer highlights even more: how would the action be able to identify the case where it shouldn't expect files? You can configure the action to not fail if there are no results, but then there is still an external factor where something needs to identify that this is an acceptable result. Also, how do you then differentiate between tests failed and no artifacts were retained, vs. there was a problem with retaining artifacts, vs. artifacts were retained but some files were missing?

I think one good addition to the action could be that it outputs the amount of files it detected, so the validation afterwards would be slightly simpler 🤔
A testing job fails in the presence of not-passed tests, so X testing jobs will fail and Y will pass. They all share the same job id and

If at least one testing job fails, then the workflow should fail the check run. No need to count files. Something like this:
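The snippet that followed was not captured in this thread; presumably it resembled the `fail_check` expression quoted later. A sketch, where the action reference is a placeholder and `fail_check` is the proposed (not existing) input:

```yaml
- name: Create check run from test reports
  uses: owner/report-action@v1   # hypothetical placeholder for this action
  with:
    check_name: E2E test report
    # proposed new input: force the created check run to fail
    fail_check: ${{ needs.testing.result == 'failure' }}
```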
The amount of files is roughly the amount of test classes. I doubt it will be useful.
In the example which you propose, if
The created check run would just fail, not the action step.
Check runs as PR status checks don't constrain the job structure. That's why I used them. They are like a lowest common denominator for different workflows and branches. I found a trick with defining jobs that "gather" up other jobs' statuses. I'll try to implement it.
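The "gather" trick mentioned here is a known pattern: a final job that `needs` all the others, always runs, and fails if any dependency failed. A minimal sketch (job names are assumptions):

```yaml
gather:
  # run even when earlier jobs failed or were skipped
  needs: [testing, report]
  if: ${{ always() }}
  runs-on: ubuntu-latest
  steps:
    - name: Fail if any needed job failed or was cancelled
      if: ${{ contains(needs.*.result, 'failure') || contains(needs.*.result, 'cancelled') }}
      run: exit 1
```

Branch protection can then require only the `gather` job as a status check.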
Imagine we have a workflow with 2 jobs. In the former (a matrix job) we run e2e tests in multiple environments and then upload test reports as artifacts. In the latter we download all artifacts, analyze test reports with a condition like

`${{ !cancelled() && (needs.testing.result == 'success' || needs.testing.result == 'failure') }}`

and create a check run with a name like `E2E test report`.

Important detail: the branch protection rule requires `E2E test report` as a status check and nothing else. Testing jobs may fail in some environments without producing any artifacts, so the report job doesn't detect the test failures and the check run is green.
I think it can be fixed with an option like `fail_check: "${{ needs.testing.result == 'failure' }}"`.
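Putting the scenario together, a minimal sketch of the two-job workflow with the proposed option (the test script, action reference, and the `fail_check` input itself are assumptions, not existing APIs):

```yaml
jobs:
  testing:
    strategy:
      matrix:
        env: [staging, production]   # assumed environments
    runs-on: ubuntu-latest
    steps:
      - run: ./run-e2e-tests.sh "${{ matrix.env }}"   # assumed test script
      - uses: actions/upload-artifact@v4
        if: always()   # upload reports even when tests fail
        with:
          name: reports-${{ matrix.env }}
          path: reports/

  report:
    needs: testing
    # run on success or failure of testing, but not on cancellation
    if: ${{ !cancelled() && (needs.testing.result == 'success' || needs.testing.result == 'failure') }}
    runs-on: ubuntu-latest
    steps:
      - uses: actions/download-artifact@v4
        with:
          path: artifacts
      - uses: owner/report-action@v1   # hypothetical placeholder for this action
        with:
          check_name: E2E test report
          fail_check: ${{ needs.testing.result == 'failure' }}   # proposed input
```

With `fail_check` set, the `E2E test report` check run would fail whenever any matrix leg failed, even when the failed legs produced no artifacts.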