feat(core): scoring data customizable #2353
@@ -39,6 +39,7 @@ Other options include:
       --stdin-filepath             path to a file to pretend that stdin comes from  [string]
       --resolver                   path to custom json-ref-resolver instance  [string]
   -r, --ruleset                    path/URL to a ruleset file  [string]
+      --scoring-config             path/URL to a scoring config file  [string]
   -F, --fail-severity              results of this level or above will trigger a failure exit code  [string] [choices: "error", "warn", "info", "hint"] [default: "error"]
   -D, --display-only-failures      only output results equal to or greater than --fail-severity  [boolean] [default: false]
@@ -60,6 +61,86 @@ Here you can build a [custom ruleset](../getting-started/3-rulesets.md), or extend
 - [OpenAPI ruleset](../reference/openapi-rules.md)
 - [AsyncAPI ruleset](../reference/asyncapi-rules.md)
## Scoring the API

Scoring an API definition is a way to understand at a high level how compliant the API definition is with the rulesets provided. This helps teams understand the quality of their APIs with respect to their definitions.
The scoring is produced as two different metrics:

- A numeric score, calculated by subtracting a percentage from 100% for each error or warning.
- A letter score, which groups numeric scores into letters from A to Z, with A being the best score.
It also introduces a quality gate: an API scoring below a specified threshold will fail in a pipeline.

Scoring is enabled with the new `--scoring-config` parameter, which points to a scoring configuration file where you can define how errors and warnings affect the score.
Usage:

```bash
spectral lint ./reference/**/*.oas*.{json,yml,yaml} --ruleset mycustomruleset.js --scoring-config ./scoringFile.json
```
Here's an example of this scoring config file:
```json
{
  "scoringSubtract": {
    "error": {
      "1": 55,
      "2": 65,
      "3": 75,
      "6": 85,
      "10": 95
    },
    "warn": {
      "1": 3,
      "2": 7,
      "3": 10,
      "6": 15,
      "10": 18
    }
  },
  "scoringLetter": {
    "A": 75,
    "B": 65,
    "C": 55,
    "D": 45,
    "E": 0
  },
  "threshold": 50,
  "warningsSubtract": true,
  "uniqueErrors": false
}
```
Where:

- `scoringSubtract`: An object with a key/value map for each result level that subtracts from the score, mapping a number of results of that type to the percentage to subtract.
- `scoringLetter`: An object with key/value pairs mapping each scoring letter to the minimum percentage the score must reach for that letter.
- `threshold`: A number with the minimum percentage required to consider the checked file valid.
- `warningsSubtract`: A boolean that sets whether all result types accumulate to lower the score, or whether counting stops at the most critical result type.
- `uniqueErrors`: A boolean to count only unique errors, or all of them. An error is considered unique if its code and message have not been seen yet.
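To illustrate the `uniqueErrors` option, here is a small sketch of counting results by their code and message. The result data and the helper function are hypothetical, not Spectral's internal code:

```python
# Hypothetical lint results: two share the same (code, message) pair.
results = [
    {"code": "oas3-schema", "message": "Property `foo` is not expected to be here."},
    {"code": "oas3-schema", "message": "Property `foo` is not expected to be here."},
    {"code": "operation-description", "message": "Operation should have a description."},
]

def count_errors(results, unique_errors=False):
    """Count all results, or only distinct (code, message) pairs."""
    if unique_errors:
        return len({(r["code"], r["message"]) for r in results})
    return len(results)

print(count_errors(results))                      # 3
print(count_errors(results, unique_errors=True))  # 2
```

With `uniqueErrors` enabled, the duplicated `oas3-schema` result counts once, so fewer errors are subtracted from the score.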
Example:

With the previous scoring config file, if we have:

- 1 error, the score is 45% and D
- 2 errors, the score is 35% and E
- 3 errors, the score is 25% and E
- 4 errors, the score is 25% and E
- and so on
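The numbers above can be reproduced with a short sketch. This is an assumed reading of the config (subtract the value for the largest matching result count, then award the best letter whose minimum the score meets), not Spectral's actual implementation:

```python
# Hypothetical reconstruction of the scoring rules for the config above.
SCORING_CONFIG = {
    "scoringSubtract": {
        "error": {1: 55, 2: 65, 3: 75, 6: 85, 10: 95},
        "warn": {1: 3, 2: 7, 3: 10, 6: 15, 10: 18},
    },
    "scoringLetter": {"A": 75, "B": 65, "C": 55, "D": 45, "E": 0},
    "threshold": 50,
}

def subtract_for(level, count):
    """Percentage to subtract: the value for the largest key <= count."""
    table = SCORING_CONFIG["scoringSubtract"][level]
    return max((v for k, v in table.items() if k <= count), default=0)

def score(errors, warnings):
    """Return (percentage, letter) for the given result counts."""
    pct = max(100 - subtract_for("error", errors) - subtract_for("warn", warnings), 0)
    # Best letter (highest minimum first) whose minimum the score meets.
    letters = sorted(SCORING_CONFIG["scoringLetter"].items(), key=lambda kv: -kv[1])
    letter = next(l for l, minimum in letters if pct >= minimum)
    return pct, letter

print(score(1, 0))  # (45, 'D')
print(score(2, 0))  # (35, 'E')
print(score(4, 0))  # (25, 'E')
```

Under this reading, 4 errors subtract the same 75% as 3 errors because 3 is the largest configured count not exceeding 4.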
Output:

At the bottom of the output log you can see the scoring, for example:

```
✖ SCORING: A (93%)
```
## Error Results
Spectral has a few different error severities: `error`, `warn`, `info`, and `hint`, listed in order from highest to lowest. By default, all results are shown regardless of severity, but since v5.0, only the presence of errors causes a failure status code of 1. Seeing results and getting a failure code for them are now two different things.
@pamgoodrich do you want to have a quick look at the docs here?