Implement Type Hints (#441)
Co-authored-by: Raoul Collenteur <raoulcollenteur@gmail.com>
martinvonk and raoulcollenteur committed Jan 10, 2023
1 parent 4e2b8b0 commit 900a182
Showing 37 changed files with 1,760 additions and 781 deletions.
1 change: 1 addition & 0 deletions .github/pull_request_template.md
@@ -5,5 +5,6 @@ Add a short description describing the pull request (PR) here.
- [ ] closes issue #xxxx
- [ ] is documented
- [ ] PEP8 compliant code
- [ ] type hints for functions and methods
- [ ] tests added / passed
- [ ] Example Notebook (for new features)
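The new checklist item asks contributors to add type hints to functions and methods. As a minimal, hypothetical illustration of what that entails (not code from this commit, names are made up):

```python
# Hypothetical example of the kind of annotation the new checklist item asks for.
from pandas import Series


def scale_series(series: Series, factor: float = 1.0) -> Series:
    """Return the input series multiplied by a scaling factor."""
    return series * factor
```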
17 changes: 12 additions & 5 deletions doc/concepts/armamodel.ipynb
@@ -1,13 +1,14 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# ARMA(1,1) Noise Model for Pastas\n",
"*R.A. Collenteur, University of Graz, May 2020*\n",
"\n",
"In this notebook an Autoregressive-Moving-Average (ARMA(1,1)) noise model is developed for Pastas models. This new noise model is tested against synthetic data generated with Numpy or Statsmodels' ARMA model. This noise model is tested on head time series with a regular time step.\n",
"In this notebook an Autoregressive-Moving-Average (ARMA(1,1)) noise model is developed for Pastas models. This new noise model is tested against synthetic data generated with NumPy or Statsmodels' ARMA model. This noise model is tested on head time series with a regular time step.\n",
"\n",
"<div class=\"alert alert-info\">\n",
" \n",
@@ -82,11 +83,12 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3. Generate ARMA(1,1) noise and add it to the synthetic heads\n",
"In the following code-block, noise is generated using an ARMA(1,1) process using Numpy. An alternative procedure is available from Statsmodels (commented out now). More information about the ARMA model can be found on the [statsmodels website](https://www.statsmodels.org/dev/generated/statsmodels.tsa.arima_process.ArmaProcess.html). The noise is added to the head series generated in the previous code-block."
"In the following code-block, noise is generated using an ARMA(1,1) process using NumPy. An alternative procedure is available from Statsmodels (commented out now). More information about the ARMA model can be found on the [statsmodels website](https://www.statsmodels.org/dev/generated/statsmodels.tsa.arima_process.ArmaProcess.html). The noise is added to the head series generated in the previous code-block."
]
},
{
@@ -107,7 +109,7 @@
"# arma = stats.tsa.ArmaProcess(ar, ma)\n",
"# noise = arma.generate_sample(head[0].index.size)*np.std(head.values) * 0.1\n",
"\n",
"# generate samples using Numpy\n",
"# generate samples using NumPy\n",
"random_seed = np.random.RandomState(1234)\n",
"\n",
"noise = random_seed.normal(0,1,len(head)) * np.std(head.values) * 0.1\n",
@@ -281,7 +283,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "pastas_dev",
"language": "python",
"name": "python3"
},
@@ -295,7 +297,12 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.7"
"version": "3.9.15 | packaged by conda-forge | (main, Nov 22 2022, 08:41:22) [MSC v.1929 64 bit (AMD64)]"
},
"vscode": {
"interpreter": {
"hash": "29475f8be425919747d373d827cb41e481e140756dd3c75aa328bf3399a0138e"
}
}
},
"nbformat": 4,
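The armamodel notebook generates ARMA(1,1) noise with NumPy and adds it to the synthetic heads. A standalone sketch of that kind of noise generation, with assumed coefficients, series length, and innovation scale (the notebook's own values may differ):

```python
# Sketch of ARMA(1,1) noise generation with NumPy; alpha, beta and n are assumed.
import numpy as np

rng = np.random.RandomState(1234)
n = 1000
alpha, beta = 0.95, 0.1          # AR(1) and MA(1) coefficients (assumed)
eps = rng.normal(0, 1, n)        # white-noise innovations

noise = np.zeros(n)
for t in range(1, n):
    # ARMA(1,1): nu_t = alpha * nu_{t-1} + eps_t + beta * eps_{t-1}
    noise[t] = alpha * noise[t - 1] + eps[t] + beta * eps[t - 1]
```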
14 changes: 10 additions & 4 deletions doc/concepts/noisemodel.ipynb
@@ -1,13 +1,14 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"# Testing an AR(1) Noise Model for Pastas\n",
"*R.A. Collenteur, University of Graz, May 2020*\n",
"\n",
"In this notebook the classical Autoregressive AR1 noise model is tested for Pastas models. This noise model is tested against synthetic data generated with Numpy or Statsmodels' ARMA model. This noise model is tested on head time series with a regular time step."
"In this notebook the classical Autoregressive AR1 noise model is tested for Pastas models. This noise model is tested against synthetic data generated with NumPy or Statsmodels' ARMA model. This noise model is tested on head time series with a regular time step."
]
},
{
@@ -90,7 +91,7 @@
"np.random.seed(1234)\n",
"alpha= 0.8\n",
"\n",
"# generate samples using Numpy\n",
"# generate samples using NumPy\n",
"random_seed = np.random.RandomState(1234)\n",
"noise = random_seed.normal(0,1,len(head)) * np.std(head.values) * 0.2\n",
"a = np.zeros_like(head[0])\n",
@@ -239,7 +240,7 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"display_name": "pastas_dev",
"language": "python",
"name": "python3"
},
@@ -253,7 +254,12 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.7"
"version": "3.9.15 | packaged by conda-forge | (main, Nov 22 2022, 08:41:22) [MSC v.1929 64 bit (AMD64)]"
},
"vscode": {
"interpreter": {
"hash": "29475f8be425919747d373d827cb41e481e140756dd3c75aa328bf3399a0138e"
}
}
},
"nbformat": 4,
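The noisemodel notebook does the same for a classical AR(1) process with alpha = 0.8. A minimal standalone sketch, with an assumed series length and innovation scale:

```python
# Sketch of AR(1) noise generation with NumPy, mirroring the alpha = 0.8 used
# in the notebook; the series length and innovation scale are assumed.
import numpy as np

rng = np.random.RandomState(1234)
n = 1000
alpha = 0.8
eps = rng.normal(0, 1, n)        # white-noise innovations

noise = np.zeros(n)
for t in range(1, n):
    # AR(1): nu_t = alpha * nu_{t-1} + eps_t
    noise[t] = alpha * noise[t - 1] + eps[t]
```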
3 changes: 2 additions & 1 deletion doc/examples/02_fix_parameters.ipynb
@@ -146,13 +146,14 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
"### First time series model\n",
"Once the time series are read from the data files, a time series model can be constructed by going through the following three steps:\n",
"\n",
"1. Creat a `Model` object by passing it the observed head series. Store your model in a variable so that you can use it later on. \n",
"1. Create a `Model` object by passing it the observed head series. Store your model in a variable so that you can use it later on. \n",
"2. Add the stresses that are expected to cause the observed head variation to the model. In this example, this is only the recharge series. For each stess, a `StressModel` object needs to be created. Each `StressModel` object needs three input arguments: the time series of the stress, the response function that is used to simulate the effect of the stress, and a name. In addition, it is recommended to specified the `kind` of series, which is used to perform a number of checks on the series and fix problems when needed. This checking and fixing of problems (for example, what to substitute for a missing value) depends on the kind of series. In this case, the time series of the stress is stored in the variable `recharge`, the Gamma function is used to simulate the response, the series will be called `'recharge'`, and the kind is `prec` which stands for precipitation. One of the other keyword arguments of the `StressModel` class is `up`, which means that a positive stress results in an increase (up) of the head. The default value is `True`, which we use in this case as a positive recharge will result in the heads going up. Each `StressModel` object needs to be stored in a variable, after which it can be added to the model. \n",
"3. When everything is added, the model can be solved. The default option is to minimize the sum of the squares of the errors between the observed and modeled heads. "
]
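The three steps described in that notebook cell (create a `Model`, add a `StressModel`, solve) translate roughly to the sketch below. The file names are placeholders, and keyword details such as whether the response function is passed as a class or an instance, or `settings` versus `kind`, depend on the Pastas version.

```python
# Rough sketch of the three modelling steps; input file names are assumed.
import pandas as pd
import pastas as ps

head = pd.read_csv("head.csv", index_col=0, parse_dates=True).squeeze("columns")
recharge = pd.read_csv("recharge.csv", index_col=0, parse_dates=True).squeeze("columns")

ml = ps.Model(head, name="example")               # 1. create the model
sm = ps.StressModel(recharge, rfunc=ps.Gamma(), name="recharge",
                    settings="prec", up=True)     # 2. wrap the recharge stress
ml.add_stressmodel(sm)
ml.solve()                                        # 3. fit, least squares by default
```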
6 changes: 3 additions & 3 deletions doc/examples/example_uncertainty.py
@@ -29,12 +29,12 @@
# # Plot some results
axes = ml.plots.results(tmin="2010", tmax="2015", figsize=(10, 6))
axes[0].fill_between(df.index, df.iloc[:, 0], df.iloc[:, 1], color="gray",
zorder=-1, alpha=0.5, label="95% Prediction interval")
zorder=-1, alpha=0.5, label="95% Prediction interval")
axes[0].legend(ncol=3)
df = ml.fit.ci_contribution("recharge", tmin="2010", tmax="2015")
axes[2].fill_between(df.index, df.iloc[:, 0], df.iloc[:, 1], color="gray",
zorder=-1, alpha=0.5, label="95% confidence")
zorder=-1, alpha=0.5, label="95% confidence")

df = ml.fit.ci_step_response("recharge", alpha=0.05, n=1000)
axes[3].fill_between(df.index, df.iloc[:, 0], df.iloc[:, 1], color="gray",
zorder=-1, alpha=0.5, label="95% confidence")
zorder=-1, alpha=0.5, label="95% confidence")
@@ -666,7 +666,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.7"
"version": "3.9.15 | packaged by conda-forge | (main, Nov 22 2022, 08:41:22) [MSC v.1929 64 bit (AMD64)]"
},
"vscode": {
"interpreter": {
44 changes: 27 additions & 17 deletions pastas/io/base.py
@@ -9,10 +9,13 @@
import pastas as ps
from pandas import to_numeric

# Type Hinting
from pastas.typing import Model

logger = getLogger(__name__)


def load(fname, **kwargs):
def load(fname: str, **kwargs) -> Model:
"""Method to load a Pastas Model from file.
Parameters
@@ -44,14 +47,18 @@ def load(fname, **kwargs):

ml = _load_model(data)

logger.info("Pastas Model from file %s successfully loaded. This file "
"was created with Pastas %s. Your current version of Pastas "
"is: %s", fname, data["file_info"]["pastas_version"],
ps.__version__)
logger.info(
"Pastas Model from file %s successfully loaded. This file "
"was created with Pastas %s. Your current version of Pastas "
"is: %s",
fname,
data["file_info"]["pastas_version"],
ps.__version__,
)
return ml


def _load_model(data):
def _load_model(data: dict) -> Model:
"""Internal method to create a model from a dictionary."""
# Create model
oseries = ps.TimeSeries(**data["oseries"])
@@ -76,8 +83,9 @@ def _load_model(data):
else:
noise = False

ml = ps.Model(oseries, constant=constant, noisemodel=noise, name=name,
metadata=metadata)
ml = ps.Model(
oseries, constant=constant, noisemodel=noise, name=name, metadata=metadata
)

if "settings" in data.keys():
ml.settings.update(data["settings"])
@@ -88,13 +96,15 @@ def _load_model(data):
for name, ts in data["stressmodels"].items():
# Deal with old StressModel2 files for version 0.22.0. Remove in 0.23.0.
if ts["stressmodel"] == "StressModel2":
logger.warning("StressModel2 is removed since Pastas 0.22.0 and "
"is replaced by the RechargeModel using a Linear "
"recharge model. Make sure to save this file "
"again using Pastas version 0.22.0 as this file "
"cannot be loaded in newer Pastas versions. This "
"will automatically update your model to the newer "
"RechargeModel stress model.")
logger.warning(
"StressModel2 is removed since Pastas 0.22.0 and "
"is replaced by the RechargeModel using a Linear "
"recharge model. Make sure to save this file "
"again using Pastas version 0.22.0 as this file "
"cannot be loaded in newer Pastas versions. This "
"will automatically update your model to the newer "
"RechargeModel stress model."
)
ts["stressmodel"] = "RechargeModel"
ts["recharge"] = "Linear"
ts["prec"] = ts["stress"][0]
@@ -125,7 +135,7 @@ def _load_model(data):
ts["rfunc"] = getattr(ps.rfunc, ts["rfunc"])(**rfunc_kwargs)
if "recharge" in ts.keys():
recharge_kwargs = {}
if 'recharge_kwargs' in ts:
if "recharge_kwargs" in ts:
recharge_kwargs = ts.pop("recharge_kwargs")
ts["recharge"] = getattr(
ps.recharge, ts["recharge"])(**recharge_kwargs)
@@ -171,7 +181,7 @@ def _load_model(data):
return ml


def dump(fname, data, **kwargs):
def dump(fname: str, data: dict, **kwargs):
"""Method to save a pastas-model to a file.
Parameters
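The `Model` hint imported at the top of this file comes from the new `pastas.typing` module. A hedged sketch of one common way such a helper module exposes an alias without creating circular imports at runtime is shown below; the module actually added in this commit may be structured differently.

```python
# Possible layout of a typing helper module (assumption, not the commit's code).
from typing import TYPE_CHECKING, TypeVar

if TYPE_CHECKING:
    import pastas as ps

# The bound only matters to static type checkers; at runtime this is a plain TypeVar.
Model = TypeVar("Model", bound="ps.Model")
```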
10 changes: 7 additions & 3 deletions pastas/io/men.py
@@ -14,13 +14,17 @@
from ..utils import datetime2matlab


def load(fname):
def load(fname: str) -> NotImplementedError:
raise NotImplementedError("This is not implemented yet. See the "
"reads-section for a Menyanthes-read")


def dump(fname, data, version=3, verbose=True):
# version can also be a specific version, like '2.x.g.t (beta)', or an integer (see below)
def dump(fname: str,
data: dict,
version: int = 3,
verbose: bool = True) -> None:
# version can also be a specific version,
# like '2.x.g.t (beta)', or an integer (see below)
if version == 3:
version = '3.x.b.c (gamma)'
elif version == 2:
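The annotated `load` above declares `-> NotImplementedError`, i.e. the exception type as the return type. For a function that always raises, `typing.NoReturn` is the conventional annotation; a sketch for comparison only, not part of this commit:

```python
# Alternative annotation for a loader that always raises, using typing.NoReturn.
from typing import NoReturn


def load(fname: str) -> NoReturn:
    raise NotImplementedError("This is not implemented yet. See the "
                              "reads-section for a Menyanthes-read")
```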
8 changes: 4 additions & 4 deletions pastas/io/pas.py
@@ -16,12 +16,12 @@
logger = getLogger(__name__)


def load(fname):
def load(fname: str) -> dict:
data = json.load(open(fname), object_hook=pastas_hook)
return data


def pastas_hook(obj):
def pastas_hook(obj: dict):
for key, value in obj.items():
if key in ["tmin", "tmax", "date_modified", "date_created"]:
val = Timestamp(value)
@@ -56,7 +56,7 @@ def pastas_hook(obj):
return obj


def dump(fname, data):
def dump(fname: str, data: dict) -> None:
json.dump(data, open(fname, 'w'), indent=4, cls=PastasEncoder)
logger.info("%s file successfully exported", fname)

@@ -67,7 +67,7 @@ class PastasEncoder(json.JSONEncoder):
Notes
-----
Currently supported formats are: DataFrame, Series,
Timedelta, TimeStamps.
Timedelta, Timestamps.
see: https://docs.python.org/3/library/json.html
"""
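`pas.py` serializes models with `json.dump` and the custom `PastasEncoder`, which handles pandas objects such as DataFrame, Series, Timedelta, and Timestamp. A minimal, assumption-laden sketch of that encoder pattern (not the actual implementation):

```python
# Minimal illustration of a json.JSONEncoder subclass for pandas objects;
# the real PastasEncoder supports more types and details differ.
import json

from pandas import Series, Timestamp


class MiniEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, Timestamp):
            return o.isoformat()
        if isinstance(o, Series):
            return o.to_json(date_format="iso")
        return super().default(o)


print(json.dumps({"tmin": Timestamp("2023-01-10")}, cls=MiniEncoder, indent=4))
```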
