Tests that have not run end up in SpecStateSkipped in after suite report #1320

Open

mat007 opened this issue Dec 21, 2023 · 6 comments

mat007 commented Dec 21, 2023

Hi!

Is there a way, in a ginkgo.ReportAfterSuite handler, to tell the difference between a test that was explicitly skipped (i.e. by calling ginkgo.Skip in its code) and a test that was not run at all (because e.g. a ginkgo.AbortSuite happened before the test had a chance to run)?
It seems all these tests end up in the SpecStateSkipped state. Shouldn't the tests that never ran still be in the SpecStatePending state?

Thanks!

onsi commented Dec 26, 2023

hey @mat007 - SpecStatePending is reserved for tests that are explicitly marked as Pending. To distinguish between specs that were skipped by the user explicitly (via Skip(...)) and specs skipped for some other reason, you can look at spec.FailureMessage(). If it is non-empty then the spec was skipped explicitly by the user; otherwise the spec was skipped either because of a filter or because of a previous timeout or AbortSuite.
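
For illustration, a minimal sketch of that check in a ReportAfterSuite handler (the package name, handler description, and comments are illustrative, assuming a standard v2 suite):

```go
package suite_test

import (
	. "github.com/onsi/ginkgo/v2"
	"github.com/onsi/ginkgo/v2/types"
)

var _ = ReportAfterSuite("categorize skipped specs", func(report Report) {
	for _, spec := range report.SpecReports {
		if spec.State != types.SpecStateSkipped {
			continue
		}
		if spec.FailureMessage() != "" {
			// Skipped explicitly via Skip(...) in the spec body.
		} else {
			// Skipped for some other reason: a focus/skip/label filter,
			// or an earlier timeout or AbortSuite.
		}
	}
})
```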

mat007 commented Dec 26, 2023

Ah, I totally misunderstood what SpecStatePending meant! 🤦

Thanks for the suggestion about looking at the test failure message. But then, is there a way to tell the difference between a test that was not run because it was out of focus (the suite config's FocusStrings, SkipStrings, LabelFilter, etc.) and a test that was not run because the suite was aborted before it ran?

onsi commented Dec 26, 2023

not straightforwardly, I don't think - but at this point I should ask: what problem are you trying to solve?

mat007 commented Dec 26, 2023

Yes, sorry, fair ask!

We run tests nightly on CI a whole bunch of times in a loop, in batches with various focuses, and record the state of each test.
Then every day we use that data to compute some statistics, for instance the % of passed, failed and skipped tests (very few of which are actually manually skipped).
When a test fails, sometimes we have to abort the test suite, and as our test run order is randomized, the remaining tests that do not get to run are not always the same.
The result is that the % of skipped tests varies a lot, between e.g. ~0.5% and 5%.

We wanted to start categorizing these a bit better, by separating the «real» skipped tests (typically those ~0.5%), the «not focused by design» tests (our tests are split into different «groups» so we can batch them in parallel), and the «unknown» (or «not run», or …) tests. It's actually important for us to monitor the latter, because when it increases it means the test failure recovery mechanism we have may be getting flaky as well, and needs to be looked at.

I hope this is clear enough 😅

I guess I could try to manually re-apply the focus/skip regexps in my after suite report to tell if a test was in or out of focus. 🤔

onsi commented Dec 26, 2023

ok got it - thanks. Ginkgo doesn't have super strong first-class support for this right now. It's almost like you want something like a "SkipReason" that would allow you to tell whether a spec was skipped because it was out of focus, because the user skipped it, or because an abort/timeout occurred. I could imagine adding that to the codebase, though I'm not going to get to it soon :/

If you're up for doing some more work on your end, though, you can add a ReportBeforeSuite to get a version of the report before the suite runs. This will include all the specs, but specs that are out of focus (because of the various focus/filter flags) will already be skipped. That will give you N_skipped_because_of_focus. In the ReportAfterSuite you can measure N_total_skipped (all specs with SpecStateSkipped) and N_user_skipped (all specs with SpecStateSkipped and non-empty failure message). You can then do N_abort_skipped = N_total_skipped - N_user_skipped - N_skipped_because_of_focus to get the number that were skipped because of an abort or timeout.
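
Something along these lines, for instance (a sketch, not tested; the names are illustrative, and it assumes both reporting nodes run in the same process - which they do in a serial run, and on process #1 in a parallel run):

```go
package suite_test

import (
	. "github.com/onsi/ginkgo/v2"
	"github.com/onsi/ginkgo/v2/types"
)

// Specs already marked skipped before the suite runs were filtered
// out by the focus/skip/label flags.
var nSkippedBecauseOfFocus int

var _ = ReportBeforeSuite(func(report Report) {
	for _, spec := range report.SpecReports {
		if spec.State == types.SpecStateSkipped {
			nSkippedBecauseOfFocus++
		}
	}
})

var _ = ReportAfterSuite("skip accounting", func(report Report) {
	nTotalSkipped, nUserSkipped := 0, 0
	for _, spec := range report.SpecReports {
		if spec.State != types.SpecStateSkipped {
			continue
		}
		nTotalSkipped++
		if spec.FailureMessage() != "" {
			nUserSkipped++ // skipped explicitly via Skip(...)
		}
	}
	// Whatever remains was skipped because of an abort or timeout.
	nAbortSkipped := nTotalSkipped - nUserSkipped - nSkippedBecauseOfFocus
	_ = nAbortSkipped // record/export however your tooling expects
})
```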

I think that'll get you what you want?

If you want to be more granular and do this on a per-spec level you'll need to be able to connect the specs in the ReportBeforeSuite to the specs in the ReportAfterSuite. Ginkgo doesn't give you a unique identifier, but if you do something like id := fmt.Sprintf("%s %s:%d", spec.FullText(), spec.FileName(), spec.LineNumber()) you'll have something that should avoid collisions. That would allow you to compare before and after.
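
A per-spec version of the same idea might look like this (again a sketch; specID and the map are my own illustrative names):

```go
package suite_test

import (
	"fmt"

	. "github.com/onsi/ginkgo/v2"
	"github.com/onsi/ginkgo/v2/types"
)

// specID synthesizes an identifier that should avoid collisions,
// since Ginkgo doesn't expose a unique one.
func specID(spec types.SpecReport) string {
	return fmt.Sprintf("%s %s:%d", spec.FullText(), spec.FileName(), spec.LineNumber())
}

var skippedBecauseOfFocus = map[string]bool{}

var _ = ReportBeforeSuite(func(report Report) {
	for _, spec := range report.SpecReports {
		if spec.State == types.SpecStateSkipped {
			skippedBecauseOfFocus[specID(spec)] = true
		}
	}
})

var _ = ReportAfterSuite("per-spec skip reasons", func(report Report) {
	for _, spec := range report.SpecReports {
		if spec.State != types.SpecStateSkipped {
			continue
		}
		switch {
		case spec.FailureMessage() != "":
			// the user called Skip(...) in the spec body
		case skippedBecauseOfFocus[specID(spec)]:
			// out of focus: skipped by the focus/skip/label filters
		default:
			// never ran: an abort or timeout stopped the suite first
		}
	}
})
```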

Sorry it's so tedious, though. I'll add SkipReason to the backlog and see if I can get to it.

mat007 commented Jan 8, 2024

Thanks mate, that’s a good lead!
I’ll look into implementing it eventually, when time permits 😄
