
[WIP] bench: add test_plots_diff #352

Closed
wants to merge 1 commit into from

Conversation

@pared (Contributor) commented May 25, 2022

Benchmark for `dvc plots diff`.
Related: iterative/dvc#7811

@pared pared changed the title bench: add test_plots bench: add test_plots_diff May 25, 2022


def test_plots_diff(tmp_dir, bench_dvc, repo_with_plots):
    bench_dvc("plots", "diff")
@efiop (Member) commented May 25, 2022

Btw, the issue can be narrowed down to plots show, right? Or is there something diff-specific that we are also interested in?

I'm just looking at the test file generation: it is pretty slow and complex, and it runs on every test invocation (not even cached across the session or something). So I'm wondering if we could generate it once, put it into our data (maybe even two versions for diff, if we really need it) and just use it here?
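A minimal sketch of the once-per-session caching idea, assuming a pytest-based suite; the `plots_data` fixture and the copy step are hypothetical, while `tmp_dir` and `repo_with_plots` are the fixture names from the test above:

```python
import json
import shutil

import pytest


@pytest.fixture(scope="session")
def plots_data(tmp_path_factory):
    # Hypothetical: generate the metrics files once per pytest session.
    data_dir = tmp_path_factory.mktemp("plots_data")
    num_points, num_files = 1000, 10  # illustrative sizes
    metric = [{"m": (i / num_points) ** 2} for i in range(num_points)]
    for i in range(num_files):
        (data_dir / f"metric_{i}.json").write_text(json.dumps(metric))
    return data_dir


@pytest.fixture
def repo_with_plots(tmp_dir, plots_data):
    # Copy the cached files into each test's repo instead of regenerating.
    for path in plots_data.iterdir():
        shutil.copy(path, tmp_dir / path.name)
    return tmp_dir
```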

@efiop (Member):

Also, do we really need that many metrics files? If I understood your research correctly, it wasn't really about the number of files, as each file took quite a while to run. So maybe let's simplify?

@efiop (Member):

Also, any other potential tests (other commands?) where these files would be useful?

@pared (Contributor, Author):

> Btw, the issue can be narrowed down to plots show, right? Or is there something diff-specific that we are also interested in?

Well, when using diff we also check branches to some extent, but I wouldn't say it's necessary.

> Also, do we really need that many metrics files?

Well, this depends. The problem with dpath was about the retrieved data dict (the result of Repo.plots.show): the bigger the dict, the more visible the problem is. We could, for example, create a single file with a few tens of thousands of points (which would show the problem with dpath), but for future testing I would presume that a more realistic scenario is having multiple plots files.
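As a rough illustration of the two data shapes contrasted here (all sizes are arbitrary): a single huge plot file is enough to expose the dpath slowdown, while many smaller files are closer to a realistic project:

```python
import json


def one_big_file(num_points=50_000):
    # A single file with tens of thousands of points: enough to expose
    # the dpath slowdown on the dict returned by Repo.plots.show.
    metric = [{"m": (i / num_points) ** 2} for i in range(num_points)]
    with open("metric_big.json", "w") as fd:
        json.dump(metric, fd)


def many_small_files(num_files=100, num_points=500):
    # Many smaller files: closer to a realistic project layout.
    metric = [{"m": (i / num_points) ** 2} for i in range(num_points)]
    for i in range(num_files):
        with open(f"metric_{i}.json", "w") as fd:
            json.dump(metric, fd)
```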

> Also, any other potential tests (other commands?) where these files would be useful?

Probably not.

I will change the data generation and change `diff` to `show`.

Comment on lines +8 to +28
CODE = dedent(
    """
    import json
    import sys

    num_points = int(sys.argv[1])
    num_files = int(sys.argv[2])

    metric = [{'m': (i / num_points) ** 2} for i in range(0, num_points)]
    for i in range(num_files):
        with open(f'metric_{i}.json', 'w') as fd:
            json.dump(metric, fd)
    """
)

tmp_dir.gen("train.py", CODE)

dvc.run(
    name="generate_plots",
    deps=["train.py"],
    plots=[f"metric_{i}.json" for i in range(num_files)],
    cmd=f"python train.py {num_points} {num_files}",
)
@efiop (Member):

Do we really need to generate it each time? If I'm running this benchmark locally, trying to optimize plots show, I will be wasting a lot of time re-generating this over and over again. Could you just save the plots files somewhere, so that we can also use them later in other tests? Also, do you really need to create a stage? Could we just plots show a target or something, or would that not test the same thing in this scenario?
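For reference, a hypothetical stage-less variant of the benchmark, assuming `bench_dvc` forwards its arguments to the dvc CLI (whether this exercises the same code path is exactly the question above):

```python
def test_plots_show_targets(tmp_dir, bench_dvc, repo_with_plots):
    # Hypothetical: pass explicit targets to `plots show` instead of
    # registering the plot files through a stage.
    targets = [f"metric_{i}.json" for i in range(10)]
    bench_dvc("plots", "show", *targets)
```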

@efiop (Member):

I guess it also illustrates the point that we have more and more non-data benchmarks these days, and, similar to a standard dataset (mnist), we need some kind of standard repository with experiments, params, etc. Thinking about how to formalize this in a nice way long-term...

@efiop (Member) commented Jun 1, 2022

Had a similar pain with exp show, so I created a simple helper in #359. Please take a look.

@pared (Contributor, Author):

> Also, do you really need to create a stage? Could we just plots show a target or something, or would that not test the same thing in this scenario?

In any case, we need to go through the stages to check whether some plot matches the targets, so I think that would still test the same thing, even if we were to just provide the targets.

> we need some kind of standard repository with experiments, params, etc.

It won't work in this particular case: some generic problems will not have enough metrics/plots to expose the root problem of plots' slow performance. Maybe we should create a standard repository that has different edge cases as branches? Like for this issue: a plots_show_test branch could have an additional stage copying the plots multiple times. That way we could have a generic project (main), but more artificial cases would be kept separate and not occlude what's going on in the repo. Also, we would not have to reproduce it on every run, but just do git checkout && dvc checkout when testing.
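A short sketch of what consuming such an edge-case branch could look like; the branch name comes from the comment above, while the helper itself is hypothetical:

```python
import subprocess


def checkout_case(repo_dir, branch="plots_show_test"):
    # Switch to the prepared edge-case branch, then restore its
    # DVC-tracked data instead of regenerating it on every run.
    subprocess.run(["git", "checkout", branch], cwd=repo_dir, check=True)
    subprocess.run(["dvc", "checkout"], cwd=repo_dir, check=True)
```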

@@ -89,7 +89,7 @@ jobs:
     steps:
       - uses: actions/setup-python@v2
         with:
-          python-version: 3.7
+          python-version: "3.10"
       - uses: actions/checkout@v2
       - name: install requirements
        run: pip install -r requirements.txt
@efiop (Member):

Need to rebase?

@efiop (Member) commented Jun 2, 2022

Added a plots show test in #359, let's see if that will be enough. Closing this one for now.

@efiop efiop closed this Jun 2, 2022
@pared pared changed the title bench: add test_plots_diff [WIP] bench: add test_plots_diff Jun 2, 2022
@efiop efiop deleted the bench_plots_diff branch July 1, 2022 14:21