Better handling of when no min MSI is provided & skip check in dry-run #1191

Closed
theofidry wants to merge 10 commits from the refactor/command-4 branch

Conversation

theofidry (Member)

Depends on #1190

As of now there are three issues:

  • If you provide a 0.0 min MSI score, you will not get a suggestion to increase it even when the actual score is above the suggestion threshold
  • The same goes for the min covered code MSI
  • The MSI scores are checked even on dry runs, which does not make sense since all mutations are escaped in that mode

This PR addresses those three issues.
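
As an illustration of the first two points, here is a minimal PHP sketch; MsiSuggester, its margin, and its message are hypothetical and not Infection's actual code. It only shows the intended behaviour: an absent minimum is treated as 0.0 so the suggestion still fires, and the same logic would apply to minCoveredMsi.

```php
<?php

// Hypothetical sketch (not Infection's implementation): treat an absent
// minimum MSI as 0.0 so that the "consider raising your threshold" suggestion
// is still emitted whenever the achieved score clears the suggestion margin.
final class MsiSuggester
{
    // Illustrative margin, in percentage points.
    private const SUGGESTION_MARGIN = 10.0;

    public function suggest(?float $configuredMinMsi, float $actualMsi): ?string
    {
        $minimum = $configuredMinMsi ?? 0.0;

        if ($actualMsi - $minimum < self::SUGGESTION_MARGIN) {
            return null;
        }

        return sprintf(
            'The MSI is %.2f%%. Consider raising "minMsi" (currently %s) closer to that value.',
            $actualMsi,
            $configuredMinMsi === null ? 'not set' : sprintf('%.2f%%', $configuredMinMsi)
        );
    }
}
```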

sanmai (Member) commented Mar 24, 2020

The MSI scores are checked even on dry runs, which does not make sense since all mutations are escaped in that mode

There are plans to fix this by having mutations caught in a pseudo-random fashion, as per #1150.

theofidry (Member, Author) commented Mar 24, 2020

@sanmai do you think it's necessary? I feel like we can have an isolated scenario for profiling the reports rather than trying to set up some sort of fixtures in a dry run

sanmai (Member) commented Mar 24, 2020

I do think this is necessary because, leaving out reporting, we get an incomplete picture. It could happen, say, that a report keeps a handle to some larger object. If we can make a thorough dry run with all parts involved, we could notice this.

theofidry (Member, Author)

But it's easier to detect that through a dedicated, isolated scenario, no? Plus we already have this sort of structure in our tests.

This also allows us to test all reports in a consistent and similar fashion, whereas in a dry run it depends on which report was configured.

sanmai (Member) commented Mar 24, 2020

Well, I'm not sure. Either way, the current approach just doesn't work for me. For example, you can easily trick a progress reporter into showing current/max memory usage, but if there's no progress, as is the case now, there's no reporting. Really inconvenient.

theofidry (Member, Author)

@sanmai I would actually not include the progress reporter either: when profiling, it's best to combine with --quiet to avoid additional I/O. In the same fashion, adding more heavy I/O via reports is yet another part adding instability to a profile. Whereas if you are interested in the performance of a heavy report, it's trivial to adjust a scenario's fixtures.

sanmai (Member) commented Mar 24, 2020

Right. We have fine benchmarks for profiling purposes. As you can see, #1150 isn't about profiling per se; profiling isn't even mentioned there.

Shall I reopen #1150 then?

theofidry (Member, Author) commented Mar 24, 2020

If you are not using it for profiling, then introducing different states is nothing but confusing IMO. I would, however, like to mark them as "skipped" later, which I also suggested in #1171.

sanmai (Member) commented Mar 24, 2020

"Skipped" won't do here because, say, if we were to change our logging procedure to be more efficient, any effect from the changes won't be seen.

theofidry marked this pull request as ready for review on March 25, 2020
theofidry (Member, Author) commented Mar 25, 2020

@sanmai I'm not sure I understand your concerns.

So far dry-run is only that, a dry run: you parse the files, create mutations, but don't process them.

Faking execution results to hit some "potential bottleneck parts" fits, IMO, more into a benchmark/profiling case or a "fake run" than a "dry run". And if you don't execute the mutations at all, as the mode implies, it does not make sense to check the min MSI.
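
To make that last sentence concrete, here is a minimal sketch of the guard it implies; the variable and method names are hypothetical, not the actual code of this PR:

```php
// Hypothetical sketch: with --dry-run no mutation is executed, so every
// mutation ends up "escaped" and the MSI is effectively 0%. Enforcing a
// minimum MSI in that mode would always fail, so the check is simply skipped.
if (!$isDryRun) {
    $minMsiChecker->checkMetrics($metricsCalculator);
}
```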

sanmai (Member) commented Mar 26, 2020

For example, we're collecting 100% of execution results, even if we can tell for certain we won't need some of them, or all of them, because of disabled logging. This is a definite area of improvement, but with the current state of --dry-run it is impossible to see any benefits from changes in the area.
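
For what it's worth, a small sketch of the kind of optimisation this hints at; all names here are hypothetical:

```php
// Hypothetical sketch of the improvement described above: only collect
// detailed execution results when an enabled logger/report will consume them.
if ($loggerRegistry->hasEnabledLoggers()) {
    $resultCollector->collect($processResult);
}
```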

sanmai (Member) left a comment

With all due respect, I don't think the dry-run part is a sensible change. It goes against the proposal from #1150.

theofidry (Member, Author)

with the current state of --dry-run it is impossible to see any benefits from changes in the area.

But then again, we're back to profiling, aren't we? And that's a part I would rather see profiled differently, because it is very easy to do so without the noise from all the preceding steps.

sanmai (Member) commented Mar 26, 2020

Profiling with a tool like Blackfire is a final step. It is important, but not something I'd use as I code to quickly iterate over a feature or a change.

theofidry closed this on March 26, 2020
theofidry deleted the refactor/command-4 branch on March 26, 2020