Add nbQA for notebook and docs linting (#361)
* Add nbQA for notebook and docs linting

This is now possible thanks to nbQA-dev/nbQA#745, which solved the issue (nbQA-dev/nbQA#668) I opened a year ago.

* Run pre-commit filters on all files

* Lint

* bump

* [pre-commit.ci] auto fixes from pre-commit.com hooks

for more information, see https://pre-commit.ci

* update os

* Fix all nbqa issues

---------

Co-authored-by: pre-commit-ci[bot] <66853113+pre-commit-ci[bot]@users.noreply.github.com>
basnijholt and pre-commit-ci[bot] committed Apr 7, 2023
1 parent 81464a3 commit 0590be6
Showing 13 changed files with 128 additions and 107 deletions.
10 changes: 9 additions & 1 deletion .pre-commit-config.yaml
@@ -11,9 +11,17 @@ repos:
 - repo: https://github.com/psf/black
   rev: 23.3.0
   hooks:
-  - id: black
+  - id: black-jupyter
 - repo: https://github.com/charliermarsh/ruff-pre-commit
   rev: "v0.0.261"
   hooks:
   - id: ruff
     args: ["--fix"]
+- repo: https://github.com/nbQA-dev/nbQA
+  rev: 1.7.0
+  hooks:
+  - id: nbqa-black
+    additional_dependencies: [jupytext, black]
+  - id: nbqa
+    args: ["ruff", "--fix", "--ignore=E402,B018,F704"]
+    additional_dependencies: [jupytext, ruff]
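
For context on the nbqa hook's `--ignore=E402,B018,F704` flags: these ruff rules flag patterns that are idiomatic in notebooks, so they are silenced there. A minimal sketch of hypothetical notebook code (not from this repo) showing what each rule would otherwise catch:

```python
# Hypothetical notebook cells, shown as the flat script nbqa hands to ruff.

"Setup cell"  # B018: bare expression; in a notebook this simply displays the value

import numpy as np  # E402: import not at the top of the "module" (it lives in a later cell)

x = np.linspace(0, 1, 11)
x  # B018 again: a trailing bare expression is the standard way to show cell output

# F704 flags `await` outside an async function. Top-level `await` is valid in
# Jupyter/IPython cells, hence the ignore; it stays commented here because it
# is a SyntaxError in a plain .py file:
# result = await fetch_data()
```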
30 changes: 13 additions & 17 deletions docs/source/algorithms_and_examples.md
@@ -1,15 +1,13 @@
 ---
-kernelspec:
-  name: python3
-  display_name: python3
 jupytext:
   text_representation:
     extension: .md
     format_name: myst
-    format_version: '0.13'
-    jupytext_version: 1.13.8
-execution:
-  timeout: 300
+    format_version: 0.13
+    jupytext_version: 1.14.5
+kernelspec:
+  display_name: python3
+  name: python3
 ---
 
 ```{include} ../../README.md
@@ -101,16 +99,17 @@ def plot_loss_interval(learner):
     return hv.Scatter((x, y)).opts(size=6, color="r")
 
 
-def plot(learner, npoints):
-    adaptive.runner.simple(learner, npoints_goal= npoints)
+def plot_interval(learner, npoints):
+    adaptive.runner.simple(learner, npoints_goal=npoints)
     return (learner.plot() * plot_loss_interval(learner))[:, -1.1:1.1]
 
+
 def get_hm(loss_per_interval, N=101):
     learner = adaptive.Learner1D(f, bounds=(-1, 1), loss_per_interval=loss_per_interval)
-    plots = {n: plot(learner, n) for n in range(N)}
+    plots = {n: plot_interval(learner, n) for n in range(N)}
     return hv.HoloMap(plots, kdims=["npoints"])
 
 
 plot_homo = get_hm(uniform_loss).relabel("homogeneous sampling")
 plot_adaptive = get_hm(default_loss).relabel("with adaptive")
 layout = plot_homo + plot_adaptive
@@ -122,7 +121,6 @@ layout.opts(toolbar=None)
 ```{code-cell} ipython3
 :tags: [hide-input]
-
 def ring(xy):
     import numpy as np

@@ -131,7 +129,7 @@ def ring(xy):
     return x + np.exp(-((x**2 + y**2 - 0.75**2) ** 2) / a**4)
 
-def plot(learner, npoints):
+def plot_compare(learner, npoints):
     adaptive.runner.simple(learner, npoints_goal=npoints)
     learner2 = adaptive.Learner2D(ring, bounds=learner.bounds)
     xs = ys = np.linspace(*learner.bounds[0], int(learner.npoints**0.5))

@@ -146,7 +144,7 @@ def plot(learner, npoints):
 learner = adaptive.Learner2D(ring, bounds=[(-1, 1), (-1, 1)])
-plots = {n: plot(learner, n) for n in range(4, 1010, 20)}
+plots = {n: plot_compare(learner, n) for n in range(4, 1010, 20)}
 hv.HoloMap(plots, kdims=["npoints"]).collate()
 ```

@@ -155,7 +153,6 @@ hv.HoloMap(plots, kdims=["npoints"]).collate()
 ```{code-cell} ipython3
 :tags: [hide-input]
-
 def g(n):
     import random

@@ -167,12 +164,12 @@ def g(n):
 learner = adaptive.AverageLearner(g, atol=None, rtol=0.01)
 
 
-def plot(learner, npoints):
+def plot_avg(learner, npoints):
     adaptive.runner.simple(learner, npoints_goal=npoints)
     return learner.plot().relabel(f"loss={learner.loss():.2f}")
 
 
-plots = {n: plot(learner, n) for n in range(10, 10000, 200)}
+plots = {n: plot_avg(learner, n) for n in range(10, 10000, 200)}
 hv.HoloMap(plots, kdims=["npoints"])
 ```

@@ -181,7 +178,6 @@ hv.HoloMap(plots, kdims=["npoints"])
 ```{code-cell} ipython3
 :tags: [hide-input]
-
 def sphere(xyz):
     import numpy as np
7 changes: 2 additions & 5 deletions docs/source/logo.md
@@ -4,19 +4,16 @@ jupytext:
     extension: .md
     format_name: myst
     format_version: 0.13
-    jupytext_version: 1.14.1
+    jupytext_version: 1.14.5
 kernelspec:
   display_name: Python 3 (ipykernel)
   language: python
   name: python3
-execution:
-  timeout: 300
 ---
 
 ```{code-cell} ipython3
 :tags: [remove-input]
 
-import os
 import functools
 import subprocess
 import tempfile
@@ -75,7 +72,7 @@ def remove_rounded_corners(fname):
 def learner_till(till, learner, data):
     new_learner = adaptive.Learner2D(None, bounds=learner.bounds)
-    new_learner.data = {k: v for k, v in data[:till]}
+    new_learner.data = dict(data[:till])
     for x, y in learner._bounds_points:
         # always include the bounds
         new_learner.tell((x, y), learner.data[x, y])
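
The `learner_till` change above works because `dict` accepts an iterable of key/value pairs directly, making the comprehension redundant (ruff's flake8-comprehensions rule C416). A quick sketch with made-up data:

```python
# (point, value) pairs, shaped like Learner2D data; the numbers are made up
pairs = [((0.0, 0.0), 1.0), ((0.5, 0.5), 2.0), ((1.0, 1.0), 0.5)]

# The comprehension and the constructor build identical dicts
assert {k: v for k, v in pairs} == dict(pairs)
```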
18 changes: 11 additions & 7 deletions docs/source/tutorial/tutorial.BalancingLearner.md
@@ -1,14 +1,15 @@
 ---
-kernelspec:
-  name: python3
-  display_name: python3
 jupytext:
   text_representation:
     extension: .md
     format_name: myst
-    format_version: '0.13'
-    jupytext_version: 1.13.8
+    format_version: 0.13
+    jupytext_version: 1.14.5
+kernelspec:
+  display_name: python3
+  name: python3
 ---
 
 # Tutorial {class}`~adaptive.BalancingLearner`
 
 ```{note}
@@ -60,7 +61,10 @@ runner.live_info()
 ```
 
 ```{code-cell} ipython3
-plotter = lambda learner: hv.Overlay([L.plot() for L in learner.learners])
+def plotter(learner):
+    return hv.Overlay([L.plot() for L in learner.learners])
+
+
 runner.live_plot(plotter=plotter, update_interval=0.1)
 ```

@@ -83,7 +87,7 @@ combos = {
 }
 
 learner = adaptive.BalancingLearner.from_product(
-    jacobi, adaptive.Learner1D, dict(bounds=(0, 1)), combos
+    jacobi, adaptive.Learner1D, {"bounds": (0, 1)}, combos
 )
 
 runner = adaptive.BlockingRunner(learner, loss_goal=0.01)
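
Likewise, `dict(bounds=(0, 1))` becoming `{"bounds": (0, 1)}` is ruff's C408 (unnecessary `dict` call); both spellings build the same mapping:

```python
# C408: the literal avoids a name lookup and a function call, and reads the same
assert dict(bounds=(0, 1)) == {"bounds": (0, 1)}
```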
2 changes: 1 addition & 1 deletion docs/source/tutorial/tutorial.DataSaver.md
@@ -69,7 +69,7 @@ runner.live_info()
 ```
 
 ```{code-cell} ipython3
-runner.live_plot(plotter=lambda l: l.learner.plot(), update_interval=0.1)
+runner.live_plot(plotter=lambda lrn: lrn.learner.plot(), update_interval=0.1)
 ```

Now the `DataSavingLearner` will have a dictionary attribute `extra_data` with `x` as keys and the data returned by `learner.function` as values.
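
As a minimal sketch of that behaviour (the `noisy` function and its returned dict are hypothetical, not from this tutorial):

```python
import operator

import adaptive


def noisy(x):
    # hypothetical: return the value to learn plus some metadata
    return {"y": x**2, "n_evals": 1}


learner = adaptive.DataSaver(
    adaptive.Learner1D(noisy, bounds=(-1, 1)),
    arg_picker=operator.itemgetter("y"),  # the entry the wrapped learner learns from
)

for x in (-1.0, 0.0, 1.0):
    learner.tell(x, noisy(x))

# extra_data maps each x to the full dict returned by the function
print(learner.extra_data[0.0])  # {'y': 0.0, 'n_evals': 1}
```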
15 changes: 7 additions & 8 deletions docs/source/tutorial/tutorial.IntegratorLearner.md
@@ -1,14 +1,15 @@
 ---
-kernelspec:
-  name: python3
-  display_name: python3
 jupytext:
   text_representation:
     extension: .md
     format_name: myst
-    format_version: '0.13'
-    jupytext_version: 1.13.8
+    format_version: 0.13
+    jupytext_version: 1.14.5
+kernelspec:
+  display_name: python3
+  name: python3
 ---
 
 # Tutorial {class}`~adaptive.IntegratorLearner`
 
 ```{note}
@@ -60,9 +61,7 @@ learner = adaptive.IntegratorLearner(f24, bounds=(0, 3), tol=1e-8)
 # We use a SequentialExecutor, which runs the function to be learned in
 # *this* process only. This means we don't pay
 # the overhead of evaluating the function in another process.
-runner = adaptive.Runner(
-    learner, executor=SequentialExecutor()
-)
+runner = adaptive.Runner(learner, executor=SequentialExecutor())
 ```
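
The cell above assumes `SequentialExecutor` is already in scope; it ships with adaptive and is presumably imported earlier in the tutorial:

```python
from adaptive.runner import SequentialExecutor
```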

```{code-cell} ipython3
19 changes: 13 additions & 6 deletions docs/source/tutorial/tutorial.Learner1D.md
@@ -1,14 +1,15 @@
 ---
-kernelspec:
-  name: python3
-  display_name: python3
 jupytext:
   text_representation:
     extension: .md
     format_name: myst
-    format_version: '0.13'
-    jupytext_version: 1.13.8
+    format_version: 0.13
+    jupytext_version: 1.14.5
+kernelspec:
+  display_name: python3
+  name: python3
 ---
 
 (TutorialLearner1D)=
 # Tutorial {class}`~adaptive.Learner1D`

@@ -112,6 +113,8 @@ random.seed(0)
 offsets = [random.uniform(-0.8, 0.8) for _ in range(3)]
+
+
 # sharp peaks at random locations in the domain
 def f_levels(x, offsets=offsets):
     a = 0.01
     return np.array(
@@ -124,7 +127,9 @@ The `Learner1D` can be used for such functions:
 
 ```{code-cell} ipython3
 learner = adaptive.Learner1D(f_levels, bounds=(-1, 1))
-runner = adaptive.Runner(learner, loss_goal=0.01)  # continue until `learner.loss()<=0.01`
+runner = adaptive.Runner(
+    learner, loss_goal=0.01
+)  # continue until `learner.loss()<=0.01`
 ```

```{code-cell} ipython3
@@ -211,12 +216,14 @@ learner.to_numpy()
 ```
 
 If Pandas is installed (optional dependency), you can also run
+
 ```{code-cell} ipython3
 df = learner.to_dataframe()
 df
 ```
 
 and load that data into a new learner with
+
 ```{code-cell} ipython3
 new_learner = adaptive.Learner1D(learner.function, (-1, 1))  # create an empty learner
 new_learner.load_dataframe(df)  # load the pandas.DataFrame's data
12 changes: 7 additions & 5 deletions docs/source/tutorial/tutorial.Learner2D.md
@@ -1,14 +1,15 @@
 ---
-kernelspec:
-  name: python3
-  display_name: python3
 jupytext:
   text_representation:
     extension: .md
     format_name: myst
-    format_version: '0.13'
-    jupytext_version: 1.13.8
+    format_version: 0.13
+    jupytext_version: 1.14.5
+kernelspec:
+  display_name: python3
+  name: python3
 ---
 
 # Tutorial {class}`~adaptive.Learner2D`
 
 ```{note}
@@ -24,6 +25,7 @@ import holoviews as hv
 import numpy as np
 from functools import partial
+
 adaptive.notebook_extension()
 ```

14 changes: 7 additions & 7 deletions docs/source/tutorial/tutorial.LearnerND.md
@@ -1,16 +1,15 @@
 ---
-kernelspec:
-  name: python3
-  display_name: python3
 jupytext:
   text_representation:
     extension: .md
     format_name: myst
-    format_version: '0.13'
-    jupytext_version: 1.13.8
-execution:
-  timeout: 300
+    format_version: 0.13
+    jupytext_version: 1.14.5
+kernelspec:
+  display_name: python3
+  name: python3
 ---
 
 # Tutorial {class}`~adaptive.LearnerND`
 
 ```{note}
@@ -111,6 +110,7 @@ You could use the following code as an example:
 ```{code-cell} ipython3
 import scipy
 
+
 def f(xyz):
     x, y, z = xyz
     return x**4 + y**4 + z**4 - (x**2 + y**2 + z**2) ** 2
