Break up e2e Error UI tests and fix flakiness #7691
Conversation
Thanks for taking the time to open a PR!
Test summary / Run details

View run in Cypress Dashboard ➡️

This comment has been generated by cypress-bot as a result of this project's GitHub integration settings. You can manage this integration in this project's settings in the Cypress Dashboard.
Looks like the timeouts were hiding some legitimate failures/flakiness. If I have time, I'll look into fixing them. Or if the isolated runner PR gets in first, I'll just move these tests to that.

The isolated runner PR has been in for about a week now - get with @bkucera on transferring these over.
I experimented with moving them to the isolated runner. Unfortunately, it doesn't look like it will work for these tests as it currently stands. Since it evals the test function, the errors don't end up with a stack trace. It might be possible to get it to work with some updates to the isolated runner. In any case, it would be valuable to merge the changes in this PR now as they are, since the flakiness of these tests is slowing down other PRs. I can take another look at moving them to the isolated runner next week.
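For context, here is a minimal Node sketch of the stack-trace problem described above (a hypothetical illustration, not the isolated runner's actual code): when test source is executed through eval, the stack frames point at the eval site instead of the original spec file, so error-UI assertions about file and line have nothing to match against.

```js
// Hypothetical illustration of the eval stack-trace issue, not runner code.
const source = "throw new Error('failure from a spec')"

try {
  // Source run directly from a file yields frames like
  //   "at Object.<anonymous> (/path/to/spec.js:1:7)".
  // eval'd source instead yields frames like
  //   "at eval (eval at <anonymous> ...)", losing the spec file/line.
  eval(source)
} catch (err) {
  console.log(err.stack)
}
```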
It'd be a bit nicer if the sets of tests had some kind of logical grouping instead of 1, 2, 3 - it's a bit unclear, if I needed to add a new test, where to put it. But this should be an improvement overall. 👍
sorry, have some more things to say
There are a couple of failures in `8_errors_ui` now after these changes. And...they're a bit confusing to debug, I think. Maybe I don't understand the structure well enough. The message is below.
- So, what was the actual state of this test? Did it pass when it was supposed to fail or did it fail when it was supposed to pass?
- Is there any way to print which test file failed? It'd be a lot quicker to debug if I could jump straight to the file. Right now I look up `onResponse assertion failure` and there are 4 matches, so then I have to track down which file/line this one came from. (One possible approach is sketched after this list.)
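One possible way to make such failures self-identifying (a hypothetical sketch, not existing project code) is to tag each assertion message with the spec file it was raised from, so a generic message like `onResponse assertion failure` becomes unique per file:

```js
// Hypothetical helper, not part of the project: include the spec file
// in the assertion message so a failure points straight at its source.
const path = require('path')

const assertWithSpec = (condition, message, specFile) => {
  if (!condition) {
    throw new Error(`${message} (in ${path.basename(specFile)})`)
  }
}

// Usage inside a spec file:
// assertWithSpec(res.statusCode === 200, 'onResponse assertion failure', __filename)
```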
Closing in favor of #7831
User facing changelog
N/A - Internal only
Additional details