Add option to write test cases to JSON file (#366)
cpdeethree committed Oct 24, 2022
1 parent 71fba75 commit 1c4c5ee
Showing 11 changed files with 545 additions and 231 deletions.
8 changes: 8 additions & 0 deletions .github/workflows/ci-cd.yml
@@ -169,6 +169,7 @@ jobs:
check_name: Test Results (Dockerfile)
junit_files: "artifacts/**/*.xml"
json_file: "tests.json"
json_test_case_results: true
log_level: DEBUG

- name: JSON output
@@ -251,6 +252,7 @@ jobs:
-e INPUT_SECONDS_BETWEEN_GITHUB_READS \
-e INPUT_SECONDS_BETWEEN_GITHUB_WRITES \
-e INPUT_JSON_THOUSANDS_SEPARATOR \
-e INPUT_JSON_TEST_CASE_RESULTS \
-e HOME \
-e GITHUB_JOB \
-e GITHUB_REF \
@@ -303,6 +305,8 @@ jobs:
INPUT_CHECK_NAME: Test Results (Docker Image)
INPUT_JUNIT_FILES: "artifacts/**/*.xml"
INPUT_JSON_FILE: "tests.json"
INPUT_JSON_TEST_CASE_RESULTS: true


- name: JSON output
uses: ./misc/action/json-output
@@ -428,6 +432,7 @@ jobs:
check_name: Test Results (${{ matrix.os-label }} python ${{ matrix.python }})
junit_files: "artifacts${{ steps.os.outputs.path-sep }}**${{ steps.os.outputs.path-sep }}*.xml"
json_file: "tests.json"
json_test_case_results: true

- name: JSON output
uses: ./misc/action/json-output
@@ -478,6 +483,7 @@ jobs:
check_name: Test Results (setup-python)
junit_files: "artifacts/**/*.xml"
json_file: "tests.json"
json_test_case_results: true

- name: JSON output
uses: ./misc/action/json-output
@@ -521,6 +527,7 @@ jobs:
xunit_files: "test-files/xunit/**/*.xml"
trx_files: "test-files/trx/**/*.trx"
json_file: "tests.json"
json_test_case_results: true
log_level: DEBUG

- name: JSON output
@@ -562,6 +569,7 @@ jobs:
fail_on: nothing
junit_files: "test-files/pytest/junit.gloo.standalone.xml"
json_file: "tests.json"
json_test_case_results: true
log_level: DEBUG

- name: JSON output
52 changes: 51 additions & 1 deletion README.md
@@ -268,6 +268,7 @@ The list of most notable options:
|`check_run_annotations_branch`|`event.repository.default_branch` or `"main, master"`|Adds check run annotations only on given branches. If not given, this defaults to the default branch of your repository, e.g. `main` or `master`. Comma separated list of branch names allowed, asterisk `"*"` matches all branches. Example: `main, master, branch_one`.|
|`json_file`|no file|Results are written to this JSON file.|
|`json_thousands_separator`|`" "`|Formatted numbers in JSON use this character to separate groups of thousands. Common values are "," or ".". Defaults to punctuation space (\u2008).|
|`json_test_case_results`|`false`|Write out all individual test case results to the JSON file. Setting this to `true` can greatly increase the size of the output. Defaults to `false`.|
|`fail_on`|`"test failures"`|Configures the state of the created test result check run. With `"test failures"` it fails if any test fails or test errors occur. It never fails when set to `"nothing"`, and fails only on errors when set to `"errors"`.|

Pull request comments highlight removal of tests or tests that the pull request moves into skip state.
@@ -360,7 +361,11 @@ via `json_thousands_separator`. Formatted numbers are especially useful when tho
is not easily available, e.g. when [creating a badge from test results](#create-a-badge-from-test-results).

The optional `json_file` allows to configure a file where extended JSON information are to be written.
Compared to `"Access JSON via step outputs"` above, `errors` and `annotations` contain more information
than just the number of errors and annotations, respectively.

Additionally, `json_test_case_results` can be enabled to add the `cases` field to the JSON file, which provides
all test results of all tests. Enabling this may greatly increase the output size of the JSON file.

```json
{
@@ -388,9 +393,54 @@ Compared to `"Access JSON via step outputs"` above, `errors` and `annotations` c
"title": "1 out of 3 runs failed: test_events (test.Tests)",
"raw_details": "self = <test.Tests testMethod=test_events>\n\n def test_events(self):\n > self.do_test_events(3)\n\n test.py:821:\n _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\n test.py:836: in do_test_events\n self.do_test_rsh(command, 143, events=events)\n test.py:852: in do_test_rsh\n self.assertEqual(expected_result, res)\n E AssertionError: 143 != 0\n "
}
],
"cases": [
{
"class_name": "class1",
"test_name": "test1",
"states": {
"success": [
{
"result_file": "result",
"test_file": "test",
"line": 123,
"class_name": "class1",
"test_name": "test1",
"result": "success",
"message": "message1",
"content": "content1",
"stdout": "stdout1",
"stderr": "stderr1",
"time": 1
}
]
}
},
{
"class_name": "class1",
"test_name": "test2",
"states": {
"skipped": [
{
"result_file": "result",
"test_file": "test",
"line": 123,
"class_name": "class1",
"test_name": "test2",
"result": "skipped",
"message": "message2",
"content": "content2",
"stdout": "stdout2",
"stderr": "stderr2",
"time": 2
}
]
}
}
]
}
```

</details>
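In a workflow, enabling the new option is a one-line change; a minimal sketch (the `uses:` reference and glob patterns are illustrative, modeled on the CI examples in this commit):

```yaml
- name: Publish Test Results
  # illustrative action reference; pin the version appropriate for your setup
  uses: EnricoMi/publish-unit-test-result-action@v2
  with:
    junit_files: "artifacts/**/*.xml"
    json_file: "tests.json"
    # opt in to the per-test "cases" field; may greatly increase file size
    json_test_case_results: true
```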

See [Create a badge from test results](#create-a-badge-from-test-results) for an example on how to create a badge from this JSON.
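On the consumer side, the `cases` field can be read back like any other JSON. A sketch with a hand-written `tests.json` mirroring the sample structure above (the action's real output carries more fields per case):

```python
import json

# Minimal sample mirroring the "cases" shape from the README example above.
sample = {
    "cases": [
        {"class_name": "class1", "test_name": "test1",
         "states": {"success": []}},
        {"class_name": "class1", "test_name": "test2",
         "states": {"skipped": []}},
    ]
}
with open("tests.json", "w") as f:
    json.dump(sample, f)

# Read it back as a downstream consumer of the action's JSON output would.
with open("tests.json") as f:
    data = json.load(f)

# Collect tests that have a "skipped" state entry.
skipped = [f"{c['class_name']}::{c['test_name']}"
           for c in data.get("cases", [])
           if "skipped" in c["states"]]
print(skipped)  # → ['class1::test2']
```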
4 changes: 4 additions & 0 deletions action.yml
@@ -101,6 +101,10 @@ inputs:
description: 'Formatted numbers in JSON use this character to separate groups of thousands. Common values are "," or ".". Defaults to punctuation space (\u2008).'
default: ''
required: false
json_test_case_results:
description: 'Write out all individual test case results to the JSON file. Setting this to "true" can greatly increase the size of the output. Defaults to "false".'
default: false
required: false

outputs:
json:
6 changes: 5 additions & 1 deletion composite/action.yml
@@ -101,7 +101,10 @@ inputs:
description: 'Formatted numbers in JSON use this character to separate groups of thousands. Common values are "," or ".". Defaults to punctuation space (\u2008).'
default: ''
required: false

json_test_case_results:
description: 'Write out all individual test case results to the JSON file. Setting this to "true" can greatly increase the size of the output. Defaults to "false".'
default: false
required: false
outputs:
json:
description: "Test results as JSON"
@@ -177,6 +180,7 @@ runs:
SECONDS_BETWEEN_GITHUB_WRITES: ${{ inputs.seconds_between_github_writes }}
JSON_FILE: ${{ inputs.json_file }}
JSON_THOUSANDS_SEPARATOR: ${{ inputs.json_thousands_separator }}
JSON_TEST_CASE_RESULTS: ${{ inputs.json_test_case_results }}
JOB_SUMMARY: ${{ inputs.job_summary }}
# not documented
ROOT_LOG_LEVEL: ${{ inputs.root_log_level }}
30 changes: 26 additions & 4 deletions python/publish/publisher.py
@@ -24,7 +24,7 @@
from publish import logger
from publish.github_action import GithubAction
from publish.unittestresults import UnitTestCaseResults, UnitTestRunResults, UnitTestRunDeltaResults, \
UnitTestRunResultsOrDeltaResults, get_stats_delta
UnitTestRunResultsOrDeltaResults, get_stats_delta, create_unit_test_case_results


@dataclass(frozen=True)
@@ -40,6 +40,7 @@ class Settings:
commit: str
json_file: Optional[str]
json_thousands_separator: str
json_test_case_results: bool
fail_on_errors: bool
fail_on_failures: bool
# one of these *_files_glob must be set
@@ -72,6 +73,7 @@ class PublishData:
stats_with_delta: Optional[UnitTestRunDeltaResults]
annotations: List[Annotation]
check_url: str
cases: Optional[UnitTestCaseResults]

@classmethod
def _format_digit(cls, value: Union[int, Mapping[str, int], Any], thousands_separator: str) -> Union[str, Mapping[str, str], Any]:
@@ -100,24 +102,41 @@ def _formatted_stats_and_delta(cls,
def _as_dict(self) -> Dict[str, Any]:
self_without_exceptions = dataclasses.replace(
self,
# remove exceptions
stats=self.stats.without_exceptions(),
stats_with_delta=self.stats_with_delta.without_exceptions() if self.stats_with_delta else None
stats_with_delta=self.stats_with_delta.without_exceptions() if self.stats_with_delta else None,
# turn defaultdict into simple dict
cases={test: {state: cases for state, cases in states.items()}
for test, states in self.cases.items()} if self.cases else None
)

# the dict_factory removes None values
return dataclasses.asdict(self_without_exceptions,
dict_factory=lambda x: {k: v for (k, v) in x if v is not None})

def to_dict(self, thousands_separator: str) -> Mapping[str, Any]:
d = self._as_dict()

# beautify cases, turn tuple-key into proper fields
if d.get('cases'):
d['cases'] = [{k: v for k, v in [('file_name', test[0]),
('class_name', test[1]),
('test_name', test[2]),
('states', states)]
if v}
for test, states in d['cases'].items()]

# provide formatted stats and delta
d.update(formatted=self._formatted_stats_and_delta(
d.get('stats'), d.get('stats_with_delta'), thousands_separator
))

return d

def to_reduced_dict(self, thousands_separator: str) -> Mapping[str, Any]:
data = self._as_dict()

# replace some large fields with their lengths
# replace some large fields with their lengths and delete individual test cases if present
def reduce(d: Dict[str, Any]) -> Dict[str, Any]:
d = deepcopy(d)
if d.get('stats', {}).get('errors') is not None:
@@ -126,6 +145,8 @@ def reduce(d: Dict[str, Any]) -> Dict[str, Any]:
d['stats_with_delta']['errors'] = len(d['stats_with_delta']['errors'])
if d.get('annotations') is not None:
d['annotations'] = len(d['annotations'])
if d.get('cases') is not None:
del d['cases']
return d

data = reduce(data)
@@ -347,7 +368,8 @@ def publish_check(self,
stats=stats,
stats_with_delta=stats_with_delta if before_stats is not None else None,
annotations=all_annotations,
check_url=check_run.html_url
check_url=check_run.html_url,
cases=cases if self._settings.json_test_case_results else None
)
self.publish_json(data)

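The tuple-key beautification in `to_dict` above can be exercised on its own; a sketch with invented data (real keys come from `UnitTestCaseResults`):

```python
# Keys are (file_name, class_name, test_name) tuples; file_name is None
# when results are not de-duplicated by file name.
cases = {
    ("test.py", "class1", "test1"): {"success": [{"result": "success"}]},
    (None, "class1", "test2"): {"skipped": [{"result": "skipped"}]},
}

# Same comprehension as in to_dict: tuple keys become named fields,
# and falsy values (the None file name) are dropped by the `if v` filter.
beautified = [{k: v for k, v in [("file_name", test[0]),
                                 ("class_name", test[1]),
                                 ("test_name", test[2]),
                                 ("states", states)]
               if v}
              for test, states in cases.items()]

print(sorted(beautified[1]))  # file_name is absent for the second test
```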
28 changes: 18 additions & 10 deletions python/publish/unittestresults.py
@@ -1,7 +1,8 @@
import dataclasses
from collections import defaultdict
from copy import deepcopy
from dataclasses import dataclass
from typing import Optional, List, Mapping, Any, Union, Dict, Callable
from typing import Optional, List, Mapping, Any, Union, Dict, Callable, Tuple, AbstractSet
from xml.etree.ElementTree import ParseError as XmlParseError


@@ -20,11 +21,18 @@ class UnitTestCase:
time: Optional[float]


class UnitTestCaseResults(defaultdict):
def __init__(self, items=None):
if items is None:
items = []
super(UnitTestCaseResults, self).__init__(lambda: defaultdict(list), items)
UnitTestCaseFileName = str
UnitTestCaseClassName = str
UnitTestCaseTestName = str
UnitTestCaseResultKey = Tuple[Optional[UnitTestCaseFileName], UnitTestCaseClassName, UnitTestCaseTestName]
UnitTestCaseState = str
UnitTestCaseResults = Mapping[UnitTestCaseResultKey, Mapping[UnitTestCaseState, List[UnitTestCase]]]


def create_unit_test_case_results(indexed_cases: Optional[UnitTestCaseResults] = None) -> UnitTestCaseResults:
if indexed_cases:
return deepcopy(indexed_cases)
return defaultdict(lambda: defaultdict(list))


@dataclass(frozen=True)
@@ -130,7 +138,7 @@ def without_cases(self):
cases_failures=self.suite_failures,
cases_errors=self.suite_errors,
cases_time=self.suite_time,
case_results=UnitTestCaseResults(),
case_results=create_unit_test_case_results(),

tests=self.suite_tests,
tests_skipped=self.suite_skipped,
@@ -390,7 +398,7 @@ def without_exceptions(self) -> 'UnitTestRunDeltaResults':
UnitTestRunResultsOrDeltaResults = Union[UnitTestRunResults, UnitTestRunDeltaResults]


def aggregate_states(states: List[str]) -> str:
def aggregate_states(states: AbstractSet[str]) -> str:
return 'error' if 'error' in states else \
'failure' if 'failure' in states else \
'success' if 'success' in states else \
@@ -419,7 +427,7 @@ def get_test_results(parsed_results: ParsedUnitTestResultsWithCommit,
cases_time = sum([case.time or 0 for case in cases])

# index cases by tests and state
cases_results = UnitTestCaseResults()
cases_results = create_unit_test_case_results()
for case in cases:
# index by test file name (when de-duplicating by file name), class name and test name
test = (case.test_file if dedup_classes_by_file_name else None, case.class_name, case.test_name)
@@ -432,7 +440,7 @@

test_results = dict()
for test, states in cases_results.items():
test_results[test] = aggregate_states(states)
test_results[test] = aggregate_states(states.keys())

tests = len(test_results)
tests_skipped = len([test for test, state in test_results.items() if state in ['skipped', 'disabled']])
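The nested-`defaultdict` factory and the reworked `aggregate_states` above compose as follows; a sketch with invented case data:

```python
from collections import defaultdict


def create_unit_test_case_results():
    # test key -> state -> list of cases, as in the diff
    return defaultdict(lambda: defaultdict(list))


def aggregate_states(states):
    # precedence: error > failure > success > skipped
    return 'error' if 'error' in states else \
           'failure' if 'failure' in states else \
           'success' if 'success' in states else \
           'skipped'


results = create_unit_test_case_results()
# One test that was retried: a failing run followed by a succeeding run.
results[('test.py', 'class1', 'test1')]['failure'].append('run-1')
results[('test.py', 'class1', 'test1')]['success'].append('run-2')

# aggregate_states now takes the dict's key view (an AbstractSet, not a List).
state = aggregate_states(results[('test.py', 'class1', 'test1')].keys())
print(state)  # → failure
```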
1 change: 1 addition & 0 deletions python/publish_test_results.py
@@ -370,6 +370,7 @@ def get_settings(options: dict, gha: Optional[GithubAction] = None) -> Settings:
commit=get_var('COMMIT', options) or get_commit_sha(event, event_name, options),
json_file=get_var('JSON_FILE', options),
json_thousands_separator=get_var('JSON_THOUSANDS_SEPARATOR', options) or punctuation_space,
json_test_case_results=get_bool_var('JSON_TEST_CASE_RESULTS', options, default=False),
fail_on_errors=fail_on_errors,
fail_on_failures=fail_on_failures,
junit_files_glob=get_var('JUNIT_FILES', options) or default_junit_files_glob,
5 changes: 3 additions & 2 deletions python/test/test_action_script.py
@@ -178,7 +178,8 @@ def get_settings(token='token',
seconds_between_github_reads=1.5,
seconds_between_github_writes=2.5,
json_file=None,
json_thousands_separator=punctuation_space) -> Settings:
json_thousands_separator=punctuation_space,
json_test_case_results=False) -> Settings:
return Settings(
token=token,
api_url=api_url,
@@ -191,6 +192,7 @@ def get_settings(token='token',
commit=commit,
json_file=json_file,
json_thousands_separator=json_thousands_separator,
json_test_case_results=json_test_case_results,
fail_on_errors=fail_on_errors,
fail_on_failures=fail_on_failures,
junit_files_glob=junit_files_glob,
@@ -849,7 +851,6 @@ def test_parse_files(self):
gha = GithubAction(file=string)
with mock.patch('publish.github_action.logger') as m:
log_parse_errors(actual.errors, gha)
self.maxDiff = None
expected = [
"::error::lxml.etree.XMLSyntaxError: Start tag expected, '<' not found, line 1, column 1",
"::error file=non-xml.xml::Error processing result file: Start tag expected, '<' not found, line 1, column 1 (non-xml.xml, line 1)",
