
test: Consolidate and update pytest options in pyproject.toml #1773

Merged: 25 commits, merged Feb 15, 2022

Commits
1d9cd7c
test: Add --cov-branch option to pytest options for redundancy
matthewfeickert Feb 9, 2022
256b894
change source to be project level in .coveragerc to match pytest
matthewfeickert Feb 9, 2022
c84696c
[ci all]: Remove tests __init__ given Scikit-HEP recommendations
matthewfeickert Feb 9, 2022
a13a6b6
Remove ignores
matthewfeickert Feb 9, 2022
9fe36c5
Remove omit
matthewfeickert Feb 9, 2022
1a864a4
Add additional options
matthewfeickert Feb 9, 2022
8a964d8
Remove redundancy of coveragerc
matthewfeickert Feb 9, 2022
0dc561f
remove old black stuff
matthewfeickert Feb 9, 2022
6809a02
[ci all]: Apply '-r a' to all pytest runs
matthewfeickert Feb 9, 2022
f5fad45
Remove .coveragerc
matthewfeickert Feb 9, 2022
2b71f9d
add -Wd
matthewfeickert Feb 9, 2022
bd0bd25
Switch to filterwarnings and start list of warnings to fix
matthewfeickert Feb 10, 2022
3b0e39d
Move to bottom to make easier
matthewfeickert Feb 10, 2022
621f5f2
Avoid pytest.PytestUnraisableExceptionWarning
matthewfeickert Feb 10, 2022
8b94bb6
Avoid scipy.optimize.optimize.OptimizeWarning
matthewfeickert Feb 10, 2022
a59b723
Avoid divide by zero encountered in log:RuntimeWarning
matthewfeickert Feb 10, 2022
80f5e31
Add match
matthewfeickert Feb 10, 2022
d8f8a4c
Simplify some of the warnings given other coverage
matthewfeickert Feb 10, 2022
efc0f53
Add warning for new scipy
matthewfeickert Feb 10, 2022
760f27f
Use newer API
matthewfeickert Feb 10, 2022
989bb55
Avoid
matthewfeickert Feb 10, 2022
643ae54
Add match for divide by zero
matthewfeickert Feb 10, 2022
e8d1351
Don't apply to Minimum supported dependencies workflow
matthewfeickert Feb 10, 2022
342d3d6
Still show warnings
matthewfeickert Feb 10, 2022
9a42ff4
Use --override-ini to set filterwarnings to empty list
matthewfeickert Feb 10, 2022
16 changes: 0 additions & 16 deletions .coveragerc

This file was deleted.

8 changes: 4 additions & 4 deletions .github/workflows/ci.yml
@@ -49,7 +49,7 @@ jobs:

- name: Test with pytest
run: |
pytest -r sx --ignore tests/benchmarks/ --ignore tests/contrib --ignore tests/test_notebooks.py
pytest --ignore tests/benchmarks/ --ignore tests/contrib --ignore tests/test_notebooks.py

- name: Launch a tmate session if tests fail
if: failure() && github.event_name == 'workflow_dispatch'
@@ -64,7 +64,7 @@ jobs:

- name: Test Contrib module with pytest
run: |
pytest -r sx tests/contrib --mpl --mpl-baseline-path tests/contrib/baseline
pytest tests/contrib --mpl --mpl-baseline-path tests/contrib/baseline

- name: Report contrib coverage with Codecov
if: github.event_name != 'schedule' && matrix.python-version == '3.9' && matrix.os == 'ubuntu-latest'
@@ -75,7 +75,7 @@ jobs:

- name: Test docstring examples with doctest
if: matrix.python-version == '3.9'
run: pytest -r sx src/ README.rst
run: pytest src/ README.rst

- name: Report doctest coverage with Codecov
if: github.event_name != 'schedule' && matrix.python-version == '3.9' && matrix.os == 'ubuntu-latest'
@@ -87,4 +87,4 @@ jobs:
- name: Run benchmarks
if: github.event_name == 'schedule' && matrix.python-version == '3.9'
run: |
pytest -r sx --benchmark-sort=mean tests/benchmarks/test_benchmark.py
pytest --benchmark-sort=mean tests/benchmarks/test_benchmark.py
10 changes: 5 additions & 5 deletions .github/workflows/dependencies-head.yml
@@ -31,7 +31,7 @@ jobs:

- name: Test with pytest
run: |
pytest -r sx --ignore tests/benchmarks/ --ignore tests/contrib --ignore tests/test_notebooks.py
pytest --ignore tests/benchmarks/ --ignore tests/contrib --ignore tests/test_notebooks.py

scipy:

@@ -61,7 +61,7 @@ jobs:

- name: Test with pytest
run: |
pytest -r sx --ignore tests/benchmarks/ --ignore tests/contrib --ignore tests/test_notebooks.py
pytest --ignore tests/benchmarks/ --ignore tests/contrib --ignore tests/test_notebooks.py

iminuit:

@@ -87,7 +87,7 @@ jobs:
python -m pip list
- name: Test with pytest
run: |
pytest -r sx --ignore tests/benchmarks/ --ignore tests/contrib --ignore tests/test_notebooks.py
pytest --ignore tests/benchmarks/ --ignore tests/contrib --ignore tests/test_notebooks.py

uproot4:

@@ -112,7 +112,7 @@ jobs:
python -m pip list
- name: Test with pytest
run: |
pytest -r sx --ignore tests/benchmarks/ --ignore tests/contrib --ignore tests/test_notebooks.py
pytest --ignore tests/benchmarks/ --ignore tests/contrib --ignore tests/test_notebooks.py

pytest:

@@ -137,4 +137,4 @@ jobs:
python -m pip list
- name: Test with pytest
run: |
pytest -r sx --ignore tests/benchmarks/ --ignore tests/contrib --ignore tests/test_notebooks.py
pytest --ignore tests/benchmarks/ --ignore tests/contrib --ignore tests/test_notebooks.py
6 changes: 5 additions & 1 deletion .github/workflows/lower-bound-requirements.yml
@@ -34,5 +34,9 @@ jobs:

- name: Test with pytest
run: |
# Override the ini option for filterwarnings with an empty list to disable error on filterwarnings
# as testing for oldest releases that work with latest API, not the oldest releases that are warning
# free. Though still show warnings by setting warning control to 'default'.
export PYTHONWARNINGS='default'
# Run on tests/ to skip doctests of src given examples are for latest APIs
pytest -r sx --ignore tests/benchmarks/ --ignore tests/contrib --ignore tests/test_notebooks.py tests/
pytest --override-ini filterwarnings= --ignore tests/benchmarks/ --ignore tests/contrib --ignore tests/test_notebooks.py tests/
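
As a minimal sketch of why this override works (a hypothetical test file, not part of the PR): with the repository's `filterwarnings = ["error", ...]` setting in pyproject.toml the test below would fail, while `--override-ini filterwarnings=` empties the ini filter list so that `PYTHONWARNINGS='default'` governs instead and the warning is only displayed.

```python
# Hypothetical test file (test_warn.py) illustrating the override above.
# Under the repository's filterwarnings = ["error", ...] this test fails,
# because the DeprecationWarning is promoted to an error. Invoked as
#   PYTHONWARNINGS='default' pytest --override-ini filterwarnings= test_warn.py
# the ini filter list is emptied, so the warning is shown but not raised.
import warnings


def test_old_release_emits_warning():
    # Stand-in for an old dependency release that warns under the latest API
    warnings.warn("called with a deprecated argument", DeprecationWarning)
```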
2 changes: 1 addition & 1 deletion .github/workflows/notebooks.yml
@@ -27,4 +27,4 @@ jobs:
python -m pip list
- name: Test example notebooks
run: |
pytest -r sx tests/test_notebooks.py
pytest tests/test_notebooks.py
2 changes: 1 addition & 1 deletion .github/workflows/release_tests.yml
@@ -40,7 +40,7 @@ jobs:

- name: Canary test public API
run: |
pytest -r sx tests/test_public_api.py
pytest tests/test_public_api.py

- name: Verify requirements in codemeta.json
run: |
30 changes: 20 additions & 10 deletions pyproject.toml
@@ -42,18 +42,19 @@ ignore = [
minversion = "6.0"
xfail_strict = true
addopts = [
"--ignore=setup.py",
"--ignore=validation/",
"--ignore=binder/",
"--ignore=docs/",
"-ra",
"--cov=pyhf",
"--cov-config=.coveragerc",
"--cov-branch",
"--showlocals",
"--strict-markers",
"--strict-config",
"--cov-report=term-missing",
"--cov-report=xml",
"--cov-report=html",
"--doctest-modules",
"--doctest-glob='*.rst'"
"--doctest-glob='*.rst'",
]
log_cli_level = "info"
testpaths = "tests"
markers = [
"fail_jax",
@@ -75,12 +76,21 @@ markers = [
"skip_pytorch64",
"skip_tensorflow",
]

[tool.nbqa.config]
black = "pyproject.toml"
filterwarnings = [
"error",
'ignore:the imp module is deprecated:DeprecationWarning', # tensorflow
'ignore:distutils Version classes are deprecated:DeprecationWarning', # tensorflow-probability
'ignore:the `interpolation=` argument to percentile was renamed to `method=`, which has additional options:DeprecationWarning', # Issue #1772
"ignore:The interpolation= argument to 'quantile' is deprecated. Use 'method=' instead:DeprecationWarning", # Issue #1772
'ignore: Exception ignored in:pytest.PytestUnraisableExceptionWarning', #FIXME: Exception ignored in: <_io.FileIO [closed]>
'ignore:invalid value encountered in true_divide:RuntimeWarning', #FIXME
'ignore:invalid value encountered in add:RuntimeWarning', #FIXME
"ignore:In future, it will be an error for 'np.bool_' scalars to be interpreted as an index:DeprecationWarning", #FIXME: tests/test_tensor.py::test_pdf_eval[pytorch]
'ignore:Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with:UserWarning', #FIXME: tests/test_optim.py::test_minimize[no_grad-scipy-pytorch-no_stitch]
'ignore:divide by zero encountered in true_divide:RuntimeWarning', #FIXME: pytest tests/test_tensor.py::test_pdf_calculations[numpy]
]

[tool.nbqa.mutate]
black = 1
pyupgrade = 1

[tool.nbqa.addopts]
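As a quick illustration of the semantics this filterwarnings list relies on (a sketch assuming only the configuration above, not code from the PR): each entry reads `action:message-regex:category`, and entries later in the list take precedence, so the leading `"error"` promotes any warning not matched by a later `ignore` entry into a test failure.

```python
# Hypothetical test file demonstrating the filterwarnings behavior assumed
# above. Entries read "action:message-regex:category"; later list entries
# take precedence over the leading "error".
import warnings

import pytest


def test_listed_warning_is_suppressed():
    # Matched by the 'ignore:the imp module is deprecated' entry,
    # so it is silently ignored rather than promoted to an error.
    warnings.warn("the imp module is deprecated", DeprecationWarning)


def test_unlisted_warning_becomes_error():
    # No ignore entry matches, so "error" turns the warning into a raised
    # exception, which the test must handle explicitly to pass.
    with pytest.raises(DeprecationWarning):
        warnings.warn("an unexpected deprecation", DeprecationWarning)
```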
Empty file removed: tests/__init__.py
22 changes: 10 additions & 12 deletions tests/test_export.py
@@ -352,17 +352,14 @@ def test_export_sample_zerodata(mocker, spec):
sampledata = [0.0] * len(samplespec['data'])

mocker.patch('pyhf.writexml._ROOT_DATA_FILE')
# make sure no RuntimeWarning, https://stackoverflow.com/a/45671804
with pytest.warns(None) as record:
for modifierspec in samplespec['modifiers']:
pyhf.writexml.build_modifier(
{'measurements': [{'config': {'parameters': []}}]},
modifierspec,
channelname,
samplename,
sampledata,
)
assert not record.list
for modifierspec in samplespec['modifiers']:
pyhf.writexml.build_modifier(
{'measurements': [{'config': {'parameters': []}}]},
modifierspec,
channelname,
samplename,
sampledata,
)


@pytest.mark.parametrize(
@@ -424,7 +421,8 @@ def test_integer_data(datadir, mocker):
"""
Test that a spec with only integer data will be written correctly
"""
spec = json.load(open(datadir.join("workspace_integer_data.json")))
with open(datadir.join("workspace_integer_data.json")) as spec_file:
spec = json.load(spec_file)
channel_spec = spec["channels"][0]
mocker.patch("pyhf.writexml._ROOT_DATA_FILE")

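The `pytest.warns(None)` idiom removed above was deprecated in pytest 7, and with the global `filterwarnings = ["error"]` setting any unexpected warning now fails the test on its own, making the explicit check redundant. Where a local "assert no warnings" guard is still wanted, a standard-library sketch like the following (hypothetical helper, not in the PR) would serve:

```python
# Hypothetical stand-in for the deprecated pytest.warns(None) pattern:
# promote every warning raised inside the block to an error, so any
# warning emitted there immediately fails the calling test.
import warnings
from contextlib import contextmanager


@contextmanager
def assert_no_warnings():
    with warnings.catch_warnings():
        warnings.simplefilter("error")
        yield
```

Usage would look like `with assert_no_warnings(): pyhf.writexml.build_modifier(...)`.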
10 changes: 8 additions & 2 deletions tests/test_optim.py
@@ -4,6 +4,7 @@
from pyhf.tensor.common import _TensorViewer
import pytest
from scipy.optimize import minimize, OptimizeResult
from scipy.optimize import OptimizeWarning
import iminuit
import itertools
import numpy as np
@@ -563,7 +564,8 @@ def test_solver_options_scipy(mocker):


# Note: in this case, scipy won't usually raise errors for arbitrary options
# so this test exists as a sanity reminder that scipy is not perfect
# so this test exists as a sanity reminder that scipy is not perfect.
# It does raise a scipy.optimize.OptimizeWarning though.
def test_bad_solver_options_scipy(mocker):
optimizer = pyhf.optimize.scipy_optimizer(
solver_options={'arbitrary_option': 'foobar'}
@@ -573,7 +575,11 @@ def test_bad_solver_options_scipy(mocker):

model = pyhf.simplemodels.uncorrelated_background([50.0], [100.0], [10.0])
data = pyhf.tensorlib.astensor([125.0] + model.config.auxdata)
assert pyhf.infer.mle.fit(data, model).tolist()

with pytest.warns(
OptimizeWarning, match="Unknown solver options: arbitrary_option"
):
assert pyhf.infer.mle.fit(data, model).tolist()


def test_minuit_param_names(mocker):
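To see the warning this test now asserts in isolation, a minimal standalone sketch (assuming only SciPy and pytest, independent of pyhf) looks like:

```python
# Minimal sketch of the SciPy behavior relied on above: minimize() emits
# an OptimizeWarning naming any solver options the method does not accept.
import numpy as np
import pytest
from scipy.optimize import OptimizeWarning, minimize


def test_unknown_solver_option_warns():
    with pytest.warns(OptimizeWarning, match="Unknown solver options"):
        minimize(
            lambda x: np.sum(x**2),  # simple quadratic objective
            x0=np.array([1.0]),
            method="SLSQP",
            options={"arbitrary_option": "foobar"},
        )
```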
73 changes: 38 additions & 35 deletions tests/test_tensor.py
@@ -274,37 +274,39 @@ def test_shape(backend):
@pytest.mark.fail_pytorch64
def test_pdf_calculations(backend):
tb = pyhf.tensorlib
assert tb.tolist(tb.normal_cdf(tb.astensor([0.8]))) == pytest.approx(
[0.7881446014166034], 1e-07
)
assert tb.tolist(
tb.normal_logpdf(
tb.astensor([0, 0, 1, 1, 0, 0, 1, 1]),
tb.astensor([0, 1, 0, 1, 0, 1, 0, 1]),
tb.astensor([0, 0, 0, 0, 1, 1, 1, 1]),
# FIXME
with pytest.warns(RuntimeWarning, match="divide by zero encountered in log"):
assert tb.tolist(tb.normal_cdf(tb.astensor([0.8]))) == pytest.approx(
[0.7881446014166034], 1e-07
)
assert tb.tolist(
tb.normal_logpdf(
tb.astensor([0, 0, 1, 1, 0, 0, 1, 1]),
tb.astensor([0, 1, 0, 1, 0, 1, 0, 1]),
tb.astensor([0, 0, 0, 0, 1, 1, 1, 1]),
)
) == pytest.approx(
[
np.nan,
np.nan,
np.nan,
np.nan,
-0.91893853,
-1.41893853,
-1.41893853,
-0.91893853,
],
nan_ok=True,
)
# Allow poisson(lambda=0) under limit Poisson(n = 0 | lambda -> 0) = 1
assert tb.tolist(
tb.poisson(tb.astensor([0, 0, 1, 1]), tb.astensor([0, 1, 0, 1]))
) == pytest.approx([1.0, 0.3678794503211975, 0.0, 0.3678794503211975])
assert tb.tolist(
tb.poisson_logpdf(tb.astensor([0, 0, 1, 1]), tb.astensor([0, 1, 0, 1]))
) == pytest.approx(
np.log([1.0, 0.3678794503211975, 0.0, 0.3678794503211975]).tolist()
)
) == pytest.approx(
[
np.nan,
np.nan,
np.nan,
np.nan,
-0.91893853,
-1.41893853,
-1.41893853,
-0.91893853,
],
nan_ok=True,
)
# Allow poisson(lambda=0) under limit Poisson(n = 0 | lambda -> 0) = 1
assert tb.tolist(
tb.poisson(tb.astensor([0, 0, 1, 1]), tb.astensor([0, 1, 0, 1]))
) == pytest.approx([1.0, 0.3678794503211975, 0.0, 0.3678794503211975])
assert tb.tolist(
tb.poisson_logpdf(tb.astensor([0, 0, 1, 1]), tb.astensor([0, 1, 0, 1]))
) == pytest.approx(
np.log([1.0, 0.3678794503211975, 0.0, 0.3678794503211975]).tolist()
)

# Ensure continuous approximation is valid
assert tb.tolist(
@@ -343,11 +345,12 @@ def test_pdf_calculations_pytorch(backend):
assert tb.tolist(
tb.poisson(tb.astensor([0, 0, 1, 1]), tb.astensor([0, 1, 0, 1]))
) == pytest.approx([1.0, 0.3678794503211975, 0.0, 0.3678794503211975])
assert tb.tolist(
tb.poisson_logpdf(tb.astensor([0, 0, 1, 1]), tb.astensor([0, 1, 0, 1]))
) == pytest.approx(
np.log([1.0, 0.3678794503211975, 0.0, 0.3678794503211975]).tolist()
)
with pytest.warns(RuntimeWarning, match="divide by zero encountered in log"):
assert tb.tolist(
tb.poisson_logpdf(tb.astensor([0, 0, 1, 1]), tb.astensor([0, 1, 0, 1]))
) == pytest.approx(
np.log([1.0, 0.3678794503211975, 0.0, 0.3678794503211975]).tolist()
)

# Ensure continuous approximation is valid
assert tb.tolist(
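
For reference, the `RuntimeWarning` these new `pytest.warns` blocks capture comes straight from NumPy evaluating `log(0)`; a standalone sketch (independent of pyhf's tensor backends):

```python
# Standalone sketch of the warning captured above: NumPy emits
# "divide by zero encountered in log" for log(0), which the global
# filterwarnings = ["error"] setting would otherwise escalate to a failure.
import numpy as np
import pytest


def test_log_of_zero_warns():
    with pytest.warns(RuntimeWarning, match="divide by zero encountered in log"):
        result = np.log(np.array([1.0, 0.3678794503211975, 0.0]))
    assert result[-1] == -np.inf
```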