feat(core): scoring data customizable #2353

Open · wants to merge 15 commits into base: develop
Changes from 10 commits
81 changes: 81 additions & 0 deletions docs/guides/2-cli.md
@@ -39,6 +39,7 @@ Other options include:
--stdin-filepath path to a file to pretend that stdin comes from [string]
--resolver path to custom json-ref-resolver instance [string]
-r, --ruleset path/URL to a ruleset file [string]
--scoring-config path/URL to a scoring config file [string]
-F, --fail-severity results of this level or above will trigger a failure exit code
[string] [choices: "error", "warn", "info", "hint"] [default: "error"]
-D, --display-only-failures only output results equal to or greater than --fail-severity [boolean] [default: false]
@@ -60,6 +61,86 @@ Here you can build a [custom ruleset](../getting-started/3-rulesets.md), or exte
- [OpenAPI ruleset](../reference/openapi-rules.md)
- [AsyncAPI ruleset](../reference/asyncapi-rules.md)

## Scoring the API
> Contributor review comment: @pamgoodrich do you want to have a quick look at the docs here?

Scoring an API definition is a way to understand, at a high level, how compliant the API definition is with the rulesets provided. This helps teams understand the quality of their API definitions.

The scoring is reported as two different metrics:

- A numeric score, calculated by subtracting a percentage from 100% for the errors and warnings found.
- A letter score, which groups the numeric score into letters, with A being the best.

It also introduces a quality gate, where an API scoring below the specified threshold will fail in a pipeline.

Scoring is enabled with the new `--scoring-config` parameter, which points to a scoring configuration file where you define how errors and warnings affect the score.

Usage:

```bash
spectral lint ./reference/**/*.oas*.{json,yml,yaml} --ruleset mycustomruleset.js --scoring-config ./scoringFile.json
```

Here's an example of this scoring config file:

```json
{
  "scoringSubtract": {
    "error": {
      "1": 55,
      "2": 65,
      "3": 75,
      "6": 85,
      "10": 95
    },
    "warn": {
      "1": 3,
      "2": 7,
      "3": 10,
      "6": 15,
      "10": 18
    }
  },
  "scoringLetter": {
    "A": 75,
    "B": 65,
    "C": 55,
    "D": 45,
    "E": 0
  },
  "threshold": 50,
  "warningsSubtract": true,
  "uniqueErrors": false
}
```
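
For reference, the CLI and formatters in this PR pass the parsed file around as a `ScoringConfig` object. Its exact type definition is not shown in this diff, so the interface below is only a sketch inferred from the example config:

```ts
// Sketch only: inferred from the example config above, not the PR's actual type definition.
interface ScoringConfig {
  scoringSubtract: {
    error: Record<string, number>; // number of errors -> percentage to subtract
    warn: Record<string, number>;  // number of warnings -> percentage to subtract
  };
  scoringLetter: Record<string, number>; // letter -> minimum percentage for that letter
  threshold: number;                     // minimum percentage for the file to pass
  warningsSubtract: boolean;
  uniqueErrors: boolean;
}
```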

Where:

- scoringSubtract: An object with key/value pair objects for every result level we want to subtract a percentage for; within each level, the key is the number of results and the value is the percentage to subtract.

- scoringLetter: An object with key/value pairs mapping each scoring letter to the minimum percentage the score must reach to earn that letter.

- threshold: A number with the minimum percentage the score must reach for the file being checked to be considered valid.
- warningsSubtract: A boolean that sets whether all result types accumulate to reduce the scoring percentage, or whether counting stops at the most critical result type.

- uniqueErrors: A boolean to count only unique errors, or all of them. An error is considered unique if its code and message have not been seen before.

Example:

With the previous scoring config file, if we have:

- 1 error, the score is 45% and D
- 2 errors, the score is 35% and E
- 3 errors, the score is 25% and E
- 4 errors, the score is 25% and E
- and so on
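
These numbers appear to follow a simple rule: for a given result count, the subtraction entry for the largest key not exceeding that count applies, and the letter is the best one whose minimum the remaining percentage still reaches. Here is a minimal TypeScript sketch of that reading (illustrative only, not Spectral's actual implementation; the helper names are invented):

```ts
// Illustrative sketch only, not Spectral's actual implementation.
// It reproduces the example numbers from the config shown earlier.

type SubtractTable = Record<string, number>; // e.g. { "1": 55, "2": 65, ... }

// Percentage to subtract: the entry for the largest key that the count reaches.
function subtractFor(count: number, table: SubtractTable): number {
  const thresholds = Object.keys(table).map(Number).sort((a, b) => a - b);
  let subtract = 0;
  for (const t of thresholds) {
    if (count >= t) subtract = table[String(t)];
  }
  return subtract;
}

function scoreFor(
  errors: number,
  warnings: number,
  config: {
    scoringSubtract: { error: SubtractTable; warn: SubtractTable };
    scoringLetter: Record<string, number>;
    warningsSubtract: boolean;
  },
): { percentage: number; letter: string } {
  let percentage = 100 - subtractFor(errors, config.scoringSubtract.error);
  if (config.warningsSubtract) {
    percentage -= subtractFor(warnings, config.scoringSubtract.warn);
  }
  if (percentage < 0) percentage = 0;
  // Best letter whose minimum percentage the score still reaches.
  const letter =
    Object.entries(config.scoringLetter)
      .sort((a, b) => b[1] - a[1])
      .find(([, min]) => percentage >= min)?.[0] ?? 'E';
  return { percentage, letter };
}

// With the config above: scoreFor(1, 0, config) -> { percentage: 45, letter: 'D' }
//                        scoreFor(4, 0, config) -> { percentage: 25, letter: 'E' }
```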

Output:

Below the output log you can see the scoring, for example:

✖ SCORING: A (93%)
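
When scoring is enabled together with the `json` output format, the formatter in this PR wraps the result list with the scoring and a pass/fail flag. A sketch of that output shape (values illustrative):

```json
{
  "scoring": "A (93%)",
  "passed": true,
  "results": []
}
```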

## Error Results

Spectral has a few different error severities: `error`, `warn`, `info`, and `hint`, and they're in order from highest to lowest. By default, all results are shown regardless of severity, but since v5.0, only the presence of errors causes a failure status code of 1. Seeing results and getting a failure code for it are now two different things.
20 changes: 18 additions & 2 deletions packages/cli/src/commands/__tests__/lint.test.ts
@@ -146,6 +146,22 @@ describe('lint', () => {
);
});

it('calls lint with document, ruleset and scoring config file', async () => {
const doc = './__fixtures__/empty-oas2-document.json';
const ruleset = 'custom-ruleset.json';
const configFile = 'scoring-config.json';
await run(`lint -r ${ruleset} --scoring-config ${configFile} ${doc}`);
expect(lint).toBeCalledWith([doc], {
encoding: 'utf8',
format: ['stylish'],
output: { stylish: '<stdout>' },
ruleset: 'custom-ruleset.json',
stdinFilepath: undefined,
ignoreUnknownFormat: false,
failOnUnmatchedGlobs: false,
});
});

it.each(['json', 'stylish'])('calls formatOutput with %s format', async format => {
await run(`lint -f ${format} ./__fixtures__/empty-oas2-document.json`);
expect(formatOutput).toBeCalledWith(results, format, { failSeverity: DiagnosticSeverity.Error });
@@ -244,13 +260,13 @@ describe('lint', () => {
expect(process.stderr.write).nthCalledWith(2, `Error #1: ${chalk.red('some unhandled exception')}\n`);
expect(process.stderr.write).nthCalledWith(
3,
expect.stringContaining(`packages/cli/src/commands/__tests__/lint.test.ts:236`),
expect.stringContaining(`packages/cli/src/commands/__tests__/lint.test.ts:252`),
);

expect(process.stderr.write).nthCalledWith(4, `Error #2: ${chalk.red('another one')}\n`);
expect(process.stderr.write).nthCalledWith(
5,
expect.stringContaining(`packages/cli/src/commands/__tests__/lint.test.ts:237`),
expect.stringContaining(`packages/cli/src/commands/__tests__/lint.test.ts:253`),
);

expect(process.stderr.write).nthCalledWith(6, `Error #3: ${chalk.red('original exception')}\n`);
46 changes: 44 additions & 2 deletions packages/cli/src/commands/lint.ts
@@ -14,6 +14,14 @@ import { formatOutput, writeOutput } from '../services/output';
import { FailSeverity, ILintConfig, OutputFormat } from '../services/config';

import { CLIError } from '../errors';
import { ScoringConfig } from './../formatters/types';
import {
getScoringConfig,
getScoringLevel,
groupBySource,
getCountsBySeverity,
getUniqueErrors,
} from '../formatters/utils';

const formatOptions = Object.values(OutputFormat);

@@ -127,6 +135,10 @@ const lintCommand: CommandModule = {
description: 'path/URL to a ruleset file',
type: 'string',
},
'scoring-config': {
description: 'path/URL to a scoring config file',
type: 'string',
},
'fail-severity': {
alias: 'F',
description: 'results of this level or above will trigger a failure exit code',
@@ -168,6 +180,7 @@ const lintCommand: CommandModule = {
failSeverity,
displayOnlyFailures,
ruleset,
scoringConfig,
stdinFilepath,
format,
output,
@@ -197,20 +210,30 @@ const lintCommand: CommandModule = {
results = filterResultsBySeverity(results, failSeverity);
}

const scoringConfigData = await getScoringConfig(scoringConfig);

await Promise.all(
format.map(f => {
const formattedOutput = formatOutput(results, f, { failSeverity: getDiagnosticSeverity(failSeverity) });
const formattedOutput = formatOutput(results, f, {
failSeverity: getDiagnosticSeverity(failSeverity),
scoringConfig: scoringConfigData,
});
return writeOutput(formattedOutput, output?.[f] ?? '<stdout>');
}),
);

if (results.length > 0) {
process.exit(severeEnoughToFail(results, failSeverity) ? 1 : 0);
process.exit(
scoringThresholdNotEnough(results, scoringConfigData) ? 1 : severeEnoughToFail(results, failSeverity) ? 1 : 0,
);
} else if (config.quiet !== true) {
const isErrorSeverity = getDiagnosticSeverity(failSeverity) === DiagnosticSeverity.Error;
process.stdout.write(
`No results with a severity of '${failSeverity}' ${isErrorSeverity ? '' : 'or higher '}found!\n`,
);
if (scoringConfig !== void 0) {
process.stdout.write(`SCORING: (100%)\nPASSED!`);
}
}
} catch (ex) {
fail(isError(ex) ? ex : new Error(String(ex)), config.verbose === true);
@@ -273,6 +296,25 @@ const filterResultsBySeverity = (results: IRuleResult[], failSeverity: FailSever
return results.filter(r => r.severity <= diagnosticSeverity);
};

const scoringThresholdNotEnough = (results: IRuleResult[], scoringConfig: ScoringConfig | undefined): boolean => {
if (scoringConfig !== void 0) {
const groupedResults = groupBySource(results);
let groupedUniqueResults = { ...groupedResults };
if (scoringConfig.uniqueErrors) {
groupedUniqueResults = { ...groupBySource(getUniqueErrors(results)) };
}
return (
scoringConfig.threshold >
getScoringLevel(
getCountsBySeverity(groupedUniqueResults),
scoringConfig.scoringSubtract,
scoringConfig.warningsSubtract,
)
);
}
return false;
};

export const severeEnoughToFail = (results: IRuleResult[], failSeverity: FailSeverity): boolean => {
const diagnosticSeverity = getDiagnosticSeverity(failSeverity);
return results.some(r => r.severity <= diagnosticSeverity);
26 changes: 23 additions & 3 deletions packages/cli/src/formatters/json.ts
@@ -1,6 +1,15 @@
import { Formatter } from './types';
import { ISpectralDiagnostic } from '@stoplight/spectral-core';
import { Formatter, FormatterOptions } from './types';

export const json: Formatter = results => {
import { groupBySource, getUniqueErrors, getCountsBySeverity, getScoringText } from './utils';

export const json: Formatter = (results: ISpectralDiagnostic[], options: FormatterOptions) => {
let groupedResults;
let scoringText = '';
if (options.scoringConfig !== void 0) {
groupedResults = groupBySource(getUniqueErrors(results));
scoringText = getScoringText(getCountsBySeverity(groupedResults), options.scoringConfig);
}
const outputJson = results.map(result => {
return {
code: result.code,
@@ -11,5 +20,16 @@ export const json: Formatter = results => {
source: result.source,
};
});
return JSON.stringify(outputJson, null, '\t');
let objectOutput;
if (options.scoringConfig !== void 0) {
const scoring = +(scoringText !== null ? scoringText.replace('%', '').split(/[()]+/)[1] : 0);
objectOutput = {
scoring: scoringText.replace('SCORING:', '').trim(),
passed: scoring >= options.scoringConfig.threshold,
results: outputJson,
};
} else {
objectOutput = outputJson;
}
return JSON.stringify(objectOutput, null, '\t');
};
43 changes: 38 additions & 5 deletions packages/cli/src/formatters/pretty.ts
@@ -24,22 +24,33 @@
* @author Ava Thorn
*/

import { ISpectralDiagnostic } from '@stoplight/spectral-core';
import { printPath, PrintStyle } from '@stoplight/spectral-runtime';
import { IDiagnostic, IRange } from '@stoplight/types';
import { IDiagnostic, IRange, DiagnosticSeverity } from '@stoplight/types';
import chalk from 'chalk';

import { Formatter } from './types';
import { getColorForSeverity, getHighestSeverity, getSummary, getSeverityName, groupBySource } from './utils';
import { Formatter, FormatterOptions } from './types';
import {
getColorForSeverity,
getHighestSeverity,
getSummary,
getSeverityName,
groupBySource,
getScoringText,
getCountsBySeverity,
getUniqueErrors,
} from './utils';

function formatRange(range?: IRange): string {
if (range === void 0) return '';

return ` ${range.start.line + 1}:${range.start.character + 1}`;
}

export const pretty: Formatter = results => {
export const pretty: Formatter = (results: ISpectralDiagnostic[], options: FormatterOptions) => {
const cliui = require('cliui');
let output = '\n';

const DEFAULT_TOTAL_WIDTH = process.stdout.columns;
const COLUMNS = [10, 13, 25, 20, 20];
const variableColumns = DEFAULT_TOTAL_WIDTH - COLUMNS.reduce((a, b) => a + b);
@@ -50,10 +61,23 @@ export const pretty: Formatter = results => {
const PAD_TOP1_LEFT0 = [1, 0, 0, 0];
const ui = cliui({ width: DEFAULT_TOTAL_WIDTH, wrap: true });

const uniqueResults = getUniqueErrors(results);
const groupedResults = groupBySource(results);
const summaryColor = getColorForSeverity(getHighestSeverity(results));
const summaryColor = getColorForSeverity(getHighestSeverity(uniqueResults));
const summaryText = getSummary(groupedResults);

let groupedUniqueResults = { ...groupedResults };
let scoringColor = '';
let scoringText = null;

if (options.scoringConfig !== void 0) {
if (options.scoringConfig.uniqueErrors) {
groupedUniqueResults = { ...groupBySource(uniqueResults) };
}
scoringColor = getColorForSeverity(DiagnosticSeverity.Information);
scoringText = getScoringText(getCountsBySeverity(groupedUniqueResults), options.scoringConfig);
}

const uniqueIssues: IDiagnostic['code'][] = [];
Object.keys(groupedResults).forEach(i => {
const pathResults = groupedResults[i];
@@ -83,6 +107,15 @@ export const pretty: Formatter = results => {
output += ui.toString();
output += chalk[summaryColor].bold(`${uniqueIssues.length} Unique Issue(s)\n`);
output += chalk[summaryColor].bold(`\u2716${summaryText !== null ? ` ${summaryText}` : ''}\n`);
if (options.scoringConfig !== void 0) {
output += chalk[scoringColor].bold(`\u2716${scoringText !== null ? ` ${scoringText}` : ''}\n`);
const scoring = +(scoringText !== null ? scoringText.replace('%', '').split(/[()]+/)[1] : 0);
if (scoring >= options.scoringConfig.threshold) {
output += chalk['green'].bold(`\u2716 PASSED!\n`);
} else {
output += chalk['red'].bold(`\u2716 FAILED!\n`);
}
}

return output;
};