Better handling of when no min MSI is provided & skip check in dry-run #1191
Conversation
There are plans to fix this issue by having them caught in a pseudo-random fashion, as per #1150
@sanmai do you think it's necessary? I feel like we can have an isolated scenario for profiling the reports rather than trying to set up some sort of fixtures in a dry run
I do think this is necessary because leaving out reporting gives us an incomplete picture. Say, it could happen that a report keeps a handle to some larger object. If we can make a thorough dry run with all parts involved, we could notice this.
But it's easier to detect that through a dedicated isolated scenario, no? Plus we already have this sort of structure in our tests. It also allows us to really test all reports in a consistent and similar fashion, whereas in a dry run it depends on which report was configured
Well, I'm not sure. Either way, the current approach just doesn't work for me. For example, you can easily trick a progress reporter into showing current/max memory usage. But if there's no progress, as it is now, there's no reporting. Really inconvenient.
@sanmai I would actually not include the progress reporter either, when profiling it's best combined with |
If you are not using it for profiling, then putting different states there is nothing but confusing IMO. I would however like to mark them as "skipped" later, which I also suggested in #1171
"Skipped" won't do here because, say, if we were to change our logging procedure to be more efficient, any effect from the changes wouldn't be seen.
Force-pushed from 752e0d3 to a26517a
@sanmai I'm not sure I understand your concerns. So far dry-run is only that, a dry run: you parse the files and create mutations, but don't process them. Faking execution results to hit some "potential bottleneck parts" IMO fits more into a benchmark/profiling case or a "fake run" rather than a "dry-run". And if you don't execute the mutations at all, as the mode suggests, it does not make sense to check the min MSI
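The decision logic argued for above can be sketched roughly as follows. This is an illustrative assumption, not Infection's actual code: the names (`Config`, `check_min_msi`) and the exit-code convention are invented for the sketch.

```python
# A minimal sketch of "skip the min MSI check in dry-run" as discussed
# above. Config and check_min_msi are hypothetical names, not Infection's
# real API.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Config:
    dry_run: bool
    min_msi: Optional[float]  # None means "no min MSI configured"


def check_min_msi(config: Config, msi: Optional[float]) -> int:
    """Return an exit code: 0 for success, 1 for a failed min-MSI check."""
    if config.dry_run:
        # In a dry run mutations are never executed, so there is no real
        # MSI to compare against: skip the check entirely.
        return 0
    if config.min_msi is None:
        # No minimum configured: nothing to enforce.
        return 0
    return 0 if msi is not None and msi >= config.min_msi else 1
```

For example, `check_min_msi(Config(dry_run=True, min_msi=80.0), None)` would return `0` under this sketch, because the dry run exits before any comparison is attempted.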
For example, we're collecting 100% of execution results, even if we can tell for certain we won't need some of them, or all of them, because of disabled logging. This is a definite area of improvement, but with the current state of |
With all due respect, I don't think the part about dry run is a sensible change. It goes against the proposal from #1150.
But then again, we're back to profiling, aren't we? And that's a part I would rather see profiled differently, because it's very easy to do so without all the noise from the previous parts
Profiling with a tool like Blackfire is a final step. It is important, but not something I'd use as I code to quickly iterate over a feature or a change. |
Depends on #1190
As of now there are two issues:
- With a `0.0` min MSI score, you will not get suggestions to increase it if the score is above the suggestion threshold

This PR addresses those issues.
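The suggestion issue above could be handled along these lines. This is a hedged sketch only: the threshold value and the function name are assumptions, not Infection's implementation, and it treats an unset minimum and `0.0` alike as "effectively no minimum".

```python
# Illustrative sketch of gating a "raise your --min-msi" suggestion on
# the gap between the achieved MSI and the configured minimum. The
# threshold and names are assumptions, not Infection's code.
from typing import Optional

SUGGESTION_THRESHOLD = 10.0  # assumed gap, in percentage points


def suggest_min_msi(min_msi: Optional[float], msi: float) -> Optional[float]:
    """Return a suggested --min-msi when the achieved score comfortably
    exceeds the configured minimum, else None (no suggestion)."""
    configured = min_msi or 0.0  # None and 0.0 both mean "no real minimum"
    if msi - configured >= SUGGESTION_THRESHOLD:
        # Suggest raising the minimum up to the score actually achieved.
        return msi
    return None
```

Under this sketch a `0.0` (or missing) min MSI still yields a suggestion whenever the score clears the threshold, which is the behavior the flattened bullet above asks for.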