
Move benchmarks under napari directory but not distributed #913

Merged: 3 commits merged into napari:master on Jan 31, 2020

Conversation

@ttung (Contributor) commented Jan 27, 2020

Description

This removes the spurious benchmarks package. It is now under the napari directory to ensure that the code is put through black and flake8, but without an __init__.py so it is not packaged into installs.
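
For context on the mechanism: setuptools only discovers directories that contain an `__init__.py`. Below is a minimal sketch, assuming setup.py uses `setuptools.find_packages()` (an assumption for illustration, not napari's verified setup.py; run from a source checkout):

```python
# Minimal sketch: find_packages() only returns directories that contain an
# __init__.py, so napari/benchmarks (which has none) is skipped at packaging
# time even though its files are still on disk for black/flake8 to check.
from setuptools import find_packages

packages = find_packages()
assert "napari" in packages
assert "napari.benchmarks" not in packages  # no __init__.py -> not packaged
```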

Type of change

  • Bug-fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

References

Fixes #910

How has this been tested?

```
python napari/benchmarks/benchmark_qt_viewer.py
asv run --python=python
```

Final checklist:

  • My PR is the minimum possible work for the desired functionality
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have added tests that prove my fix is effective or that my feature works

@sofroniewn sofroniewn added the bug Something isn't working label Jan 27, 2020
@sofroniewn sofroniewn added this to the 0.2.11 milestone Jan 27, 2020
@sofroniewn sofroniewn requested a review from jni January 27, 2020 21:31
@sofroniewn (Contributor) left a comment:

This looks good to me, though I'm curious if @jni likes this approach.

Can you also fix the flake8 error (`app` not used)?

Also, do the commands at https://github.com/napari/napari/blob/master/docs/BENCHMARKS.md#profiling still work? Do we need to adjust some paths, as described in https://asv.readthedocs.io/en/stable/using.html#setting-up-a-new-benchmarking-project? I just see this line there:

> The benchmarks live in Python files in the benchmarks directory.

and I worry they won't be found now.

@jni (Member) commented Jan 28, 2020

I agree with @sofroniewn that this needs to be tested in conjunction with asv. Having said this, I think for now we should go with #911, sorry for the conflicting signals, @ttung! Benchmarks are still relatively rare in the SciPy world, so I don't know whether "best practices" exist yet, but scipy and numpy have a top-level benchmarks dir that is not distributed, while pandas has asv_bench/benchmarks, which is also not distributed. So, I suggest we leave the benchmarks out for now as in #911, and cut a new release immediately after it's merged!

@sofroniewn (Contributor) commented:

I'm fine with the #911 approach - again apologies for the mixed signals.

> and cut a new release immediately after it's merged!

I'll start a thread on Zulip for 0.2.11 - we've got a lot tagged with that right now. I'm OK waiting on some of that, but I'd at least like to fix our setup.py / tests / Python 3.8 stuff and some other close bugs first.

@sofroniewn (Contributor) commented:

@tlambert03 we're still getting intermittent macOS 3.8 failures - sometimes it will pass if I click re-run.

@tlambert03 (Member) commented:

> @tlambert03 we're still getting intermittent macOS 3.8 failures - sometimes it will pass if I click re-run.

Those kinds of failures (where all the tests actually pass and then you get a segfault at the very end) are the hardest for me to debug. I assume there is some sort of race condition with cleaning up threads that sometimes wins and sometimes loses. In this case it looks like there might be a lingering QtUpdateUI(). I don't have time to look into this now, but we should probably take a global look sometime soon at our handling of threads, making absolutely sure we're hooking close signals to thread deleteLater slots and so on... probably easiest just to merge anyway if you see that type of error.
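
As a hedged illustration of the deleteLater wiring mentioned above, here is a minimal sketch using plain Qt signal/slot API (the Worker class and its run() are hypothetical, not napari's actual code):

```python
# Sketch of hooking thread-cleanup signals in Qt (qtpy/PyQt5-style API).
from qtpy.QtCore import QObject, QThread

class Worker(QObject):
    def run(self):
        pass  # long-running work, e.g. periodic UI updates

thread = QThread()
worker = Worker()
worker.moveToThread(thread)
thread.started.connect(worker.run)
# Delete the Qt objects when the thread finishes, instead of letting them
# race interpreter shutdown (the suspected cause of the end-of-run segfault):
thread.finished.connect(worker.deleteLater)
thread.finished.connect(thread.deleteLater)
thread.start()
```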

@sofroniewn (Contributor) commented:

> I assume there is some sort of race condition with cleaning up threads that sometimes wins and sometimes loses. In this case it looks like there might be a lingering QtUpdateUI()

OK, thanks @tlambert03 - maybe @shanaxel42 can look at that; she did some of that for our console too. The relevant places to start looking might be https://github.com/napari/napari/blob/master/napari/_qt/qt_update_ui.py, https://github.com/napari/napari/blob/master/napari/viewer.py#L85-L88, and this test:

```python
import time

import numpy as np

from napari import Viewer  # imports added for completeness

def test_update(qtbot):
    data = np.random.random((512, 512))
    viewer = Viewer()
    view = viewer.window.qt_viewer
    qtbot.addWidget(view)
    layer = viewer.add_image(data)

    def layer_update(*, update_period, num_updates):
        # update the layer num_updates times, sleeping between updates
        for k in range(num_updates):
            time.sleep(update_period)
            dat = np.random.random((512, 512))
            layer.data = dat
            assert layer.data.all() == dat.all()

    viewer.update(layer_update, update_period=0.01, num_updates=100)
    # if we do not sleep, the main thread closes before the update thread
    # finishes and many qt components get cleaned up
    time.sleep(3)
    # Close the viewer
    viewer.window.close()
```
I can pull this out into its own issue.

@ttung (Contributor, Author) commented Jan 28, 2020

  1. I updated the docs.
  2. I updated the link to BENCHMARKS.md.
  3. I tested with asv (after a change to asv.conf.json).

@sofroniewn (Contributor) commented:

Given that this now works with asv, I like this approach, as it is consistent with the proposed approach we are taking for our tests in #918.

I will point out that @jni said

> Having said this, I think for now we should go with #911, sorry for the conflicting signals

but I'm not sure if that still holds true for him in light of asv working and #918.

@ttung can you also fix the flake8 error?

@sofroniewn (Contributor) left a comment:

I am approving, but as noted above I will wait for @jni to weigh in on the new changes before merging, given his previous concerns.

@jni (Member) commented Jan 28, 2020

@ttung thanks! Actually this is a nice, elegant solution. Something I thought about in a conversation with @sofroniewn: I don't think there is currently a way to run the benchmarks on pip-installed napari - correct me if I'm wrong!

There isn't much harm in distributing _benchmarks with napari; it just seems a bit pointless and "bloaty" if they can't actually be used. So, I'm happy for this to be merged, but it would be great if we had a way to run the benchmarks in the installed package, in the existing env. If this is something that needs modifications in asv itself, though, I would still have a slight preference for #911 -> release -> fixing this one to include instructions for running the benchmarks on the install.

@jni (Member) left a comment:

I should also say, I'll approve just to make clear that I'm OK with this getting merged; I just wanted to raise the two potential issues (bloat + running benchmarks on an installed version). Thank you @ttung!

@ttung ttung changed the title Move benchmarks under napari package as a hidden package Move benchmarks under napari directory but not distributed Jan 30, 2020
@ttung (Contributor, Author) commented Jan 30, 2020

I updated this such that the benchmarks live in napari/benchmarks so they get checked by black and flake8, but the directory is not a package, so it should not be distributed.

@sofroniewn (Contributor) commented:

Aren't these now inside napari and so distributed and available via `from napari import benchmarks`?

@ttung (Contributor, Author) commented Jan 30, 2020

> Aren't these now inside napari and so distributed and available via `from napari import benchmarks`?

If it's not a Python package, setuptools won't include it.

```
% python setup.py sdist
% tar tzf dist/napari-0+unknown.tar.gz | grep benchmark
%
```

@sofroniewn (Contributor) commented:

Does the asv.conf.json need to be updated now that napari/_benchmarks -> napari/benchmarks?

I'm still not quite sure about the advantage of having it inside napari/benchmarks vs. just benchmarks if it isn't getting distributed either way, though I don't have strong feelings.

Commit: drop napari/benchmarks/__init__.py so benchmarks are not distributed.

@ttung (Contributor, Author) commented Jan 30, 2020

> Does the asv.conf.json need to be updated now that napari/_benchmarks -> napari/benchmarks?

Yes, just fixed it.
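
For reference, the relevant knob is asv's benchmark_dir setting; a sketch of what the asv.conf.json change likely looks like (the file's actual contents aren't shown in this thread):

```json
{
    "benchmark_dir": "napari/benchmarks"
}
```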

> I'm still not quite sure about the advantage of having it inside napari/benchmarks vs. just benchmarks if it isn't getting distributed either way, though I don't have strong feelings.

At least some of the linters aren't run outside of napari/.

@sofroniewn (Contributor) commented:

OK, I am fine with this. If this is OK with @jni, I will merge. We can always revisit. This is the last PR before 0.2.11, so I'd like to get it in and start the release process ASAP.

@jni (Member) commented Jan 30, 2020

Agreed, since this is not shipped, we can revisit in the future. Thank you @ttung!

@sofroniewn sofroniewn merged commit 08c7147 into napari:master Jan 31, 2020
@ttung ttung deleted the benchmarks branch February 4, 2020 03:46
Labels: bug (Something isn't working)
Linked issues: napari should not install "benchmarks" as a toplevel package (#910)
4 participants