Remove the experimental decorators on autolog() for all flavors (#5028)
* Remove the experimental decorators on autolog() for all flavors

Signed-off-by: Liang Zhang <liang.zhang@databricks.com>

* update tracking.rst

Signed-off-by: Liang Zhang <liang.zhang@databricks.com>
liangz1 committed Nov 10, 2021
1 parent 79d86d3 commit 1e62d2f
Showing 13 changed files with 19 additions and 52 deletions.
48 changes: 18 additions & 30 deletions docs/source/tracking.rst
@@ -341,8 +341,8 @@ The following libraries support autologging:
For flavors that automatically save models as an artifact, `additional files <https://mlflow.org/docs/latest/models.html#storage-format>`_ for dependency management are logged.


Scikit-learn (experimental)
---------------------------
Scikit-learn
------------

Call :py:func:`mlflow.sklearn.autolog` before your training code to enable automatic logging of sklearn metrics, params, and models.
See example usage `here <https://github.com/mlflow/mlflow/tree/master/examples/sklearn_autolog>`_.
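
A minimal sketch of the call order this section describes; the dataset and estimator below are illustrative choices:

.. code-block:: python

    import mlflow
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    mlflow.sklearn.autolog()  # enable before any training code runs

    X, y = load_iris(return_X_y=True)
    with mlflow.start_run():
        # Fit params, training metrics, and the fitted model are logged automatically.
        LogisticRegression(max_iter=200).fit(X, y)
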
@@ -389,11 +389,8 @@ containing the following data:
.. _GridSearchCV:
https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html

.. note::
This feature is experimental - the API and format of the logged data are subject to change.

TensorFlow and Keras (experimental)
-----------------------------------
TensorFlow and Keras
--------------------
Call :py:func:`mlflow.tensorflow.autolog` or :py:func:`mlflow.keras.autolog` before your training code to enable automatic logging of metrics and parameters. See example usages with `Keras <https://github.com/mlflow/mlflow/tree/master/examples/keras>`_ and
`TensorFlow <https://github.com/mlflow/mlflow/tree/master/examples/tensorflow>`_.
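
A minimal Keras sketch of the call order described above; the toy model and random data are illustrative:

.. code-block:: python

    import mlflow
    import numpy as np
    from tensorflow import keras

    mlflow.tensorflow.autolog()  # or mlflow.keras.autolog(); enable before training

    model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
    model.compile(optimizer="adam", loss="mse")

    X = np.random.rand(64, 4).astype("float32")
    y = np.random.rand(64, 1).astype("float32")
    with mlflow.start_run():
        # Per-epoch loss, optimizer settings, and the model are logged automatically.
        model.fit(X, y, epochs=2, batch_size=16)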

@@ -432,10 +429,9 @@ If a run already exists when ``autolog()`` captures data, MLflow will log to tha

.. note::
- Parameters not explicitly passed by users (parameters that use default values) while using ``keras.Model.fit_generator()`` are not currently automatically logged.
- This feature is experimental - the API and format of the logged data are subject to change.

Gluon (experimental)
--------------------
Gluon
-----
Call :py:func:`mlflow.gluon.autolog` before your training code to enable automatic logging of metrics and parameters.
See example usages with `Gluon <https://github.com/mlflow/mlflow/tree/master/examples/gluon>`_ .

@@ -447,11 +443,8 @@ Autologging captures the following information:
| Gluon | Training loss; validation loss; user-specified metrics | Number of layers; optimizer name; learning rate; epsilon | -- | `MLflow Model <https://mlflow.org/docs/latest/models.html>`_ (Gluon model); on training end |
+------------------+--------------------------------------------------------+----------------------------------------------------------+---------------+-------------------------------------------------------------------------------------------------------------------------------+

.. note::
This feature is experimental - the API and format of the logged data are subject to change.

XGBoost (experimental)
----------------------
XGBoost
-------
Call :py:func:`mlflow.xgboost.autolog` before your training code to enable automatic logging of metrics and parameters.

Autologging captures the following information:
@@ -465,15 +458,14 @@ Autologging captures the following information:
If early stopping is activated, metrics at the best iteration will be logged as an extra step/iteration.

.. note::
- This feature is experimental - the API and format of the logged data are subject to change.
- The `scikit-learn API <https://xgboost.readthedocs.io/en/latest/python/python_api.html#module-xgboost.sklearn>`__ is not supported.

.. _xgboost.train: https://xgboost.readthedocs.io/en/latest/python/python_api.html#xgboost.train
.. _MLflow Model: https://mlflow.org/docs/latest/models.html
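
A minimal sketch using the native ``xgboost.train`` API (synthetic, illustrative data; as noted above, the scikit-learn wrapper is not covered):

.. code-block:: python

    import mlflow
    import numpy as np
    import xgboost as xgb

    mlflow.xgboost.autolog()  # enable before training

    X = np.random.rand(100, 5)
    y = np.random.randint(0, 2, size=100)
    dtrain = xgb.DMatrix(X, label=y)

    with mlflow.start_run():
        # Booster params and the model are logged; evals adds per-iteration metrics.
        xgb.train({"objective": "binary:logistic"}, dtrain,
                  num_boost_round=10, evals=[(dtrain, "train")])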


LightGBM (experimental)
-----------------------
LightGBM
--------
Call :py:func:`mlflow.lightgbm.autolog` before your training code to enable automatic logging of metrics and parameters.

Autologging captures the following information:
@@ -487,13 +479,12 @@ Autologging captures the following information:
If early stopping is activated, metrics at the best iteration will be logged as an extra step/iteration.

.. note::
- This feature is experimental - the API and format of the logged data are subject to change.
- The `scikit-learn API <https://lightgbm.readthedocs.io/en/latest/Python-API.html#scikit-learn-api>`__ is not supported.

.. _lightgbm.train: https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.train.html#lightgbm-train
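
A minimal sketch using the native ``lightgbm.train`` API (synthetic, illustrative data; as noted above, the scikit-learn wrapper is not covered):

.. code-block:: python

    import mlflow
    import numpy as np
    import lightgbm as lgb

    mlflow.lightgbm.autolog()  # enable before training

    X = np.random.rand(100, 5)
    y = np.random.randint(0, 2, size=100)
    train_set = lgb.Dataset(X, label=y)

    with mlflow.start_run():
        # Booster params and the model are logged; valid_sets adds per-iteration metrics.
        lgb.train({"objective": "binary"}, train_set, num_boost_round=10,
                  valid_sets=[train_set], valid_names=["train"])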

Statsmodels (experimental)
--------------------------
Statsmodels
-----------
Call :py:func:`mlflow.statsmodels.autolog` before your training code to enable automatic logging of metrics and parameters.

Autologging captures the following information:
@@ -505,13 +496,12 @@ Autologging captures the following information:
+--------------+------------------------+------------------------------------------------+---------------+-----------------------------------------------------------------------------+

.. note::
- This feature is experimental - the API and format of the logged data are subject to change.
- Each model subclass that overrides `fit` expects and logs its own parameters.

.. _statsmodels.base.model.Model.fit: https://www.statsmodels.org/dev/dev/generated/statsmodels.base.model.Model.html
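
A minimal sketch with an illustrative OLS fit:

.. code-block:: python

    import mlflow
    import numpy as np
    import statsmodels.api as sm

    mlflow.statsmodels.autolog()  # enable before fitting

    X = sm.add_constant(np.random.rand(100, 2))
    y = X @ np.array([1.0, 2.0, 3.0]) + np.random.normal(scale=0.1, size=100)

    with mlflow.start_run():
        # Fit parameters and result metrics are logged automatically.
        sm.OLS(y, X).fit()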

Spark (experimental)
--------------------
Spark
-----

Initialize a SparkSession with the mlflow-spark JAR attached (e.g.
``SparkSession.builder.config("spark.jars.packages", "org.mlflow.mlflow-spark")``) and then
@@ -528,11 +518,10 @@ Autologging captures the following information:
+------------------+---------+------------+----------------------------------------------------------------------------------------------+-----------+

.. note::
- This feature is experimental - the API and format of the logged data are subject to change.
- Moreover, Spark datasource autologging occurs asynchronously - as such, it's possible (though unlikely) to see race conditions when launching short-lived MLflow runs that result in datasource information not being logged.
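
A minimal sketch of the setup described above; the Maven coordinates, version, and data path are assumptions to adapt to your environment:

.. code-block:: python

    import mlflow
    from pyspark.sql import SparkSession

    # The package coordinates below are illustrative; use the mlflow-spark
    # artifact that matches your MLflow release.
    spark = (
        SparkSession.builder
        .config("spark.jars.packages", "org.mlflow:mlflow-spark:1.21.0")
        .getOrCreate()
    )

    mlflow.spark.autolog()  # datasource info is attached to subsequent runs

    with mlflow.start_run():
        df = spark.read.parquet("/tmp/example.parquet")  # hypothetical path
        df.count()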

Fastai (experimental)
---------------------
Fastai
------

Call :py:func:`mlflow.fastai.autolog` before your training code to enable automatic logging of metrics and parameters.
See an example usage with `Fastai <https://github.com/mlflow/mlflow/tree/master/examples/fastai>`_.
@@ -551,8 +540,8 @@ Autologging captures the following information:
| | | `OneCycleScheduler`_ callbacks | | |
+-----------+------------------------+----------------------------------------------------------+---------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------+

Pytorch (experimental)
--------------------------
Pytorch
-------

Call :py:func:`mlflow.pytorch.autolog` before your Pytorch Lightning training code to enable automatic logging of metrics, parameters, and models. See example usages `here <https://github.com/chauhang/mlflow/tree/master/examples/pytorch/MNIST>`__. Note
that currently, Pytorch autologging supports only models trained using Pytorch Lightning.
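
A minimal PyTorch Lightning sketch of the flow described above; the module and data are illustrative:

.. code-block:: python

    import mlflow
    import torch
    import pytorch_lightning as pl
    from torch.utils.data import DataLoader, TensorDataset

    class TinyRegressor(pl.LightningModule):
        def __init__(self):
            super().__init__()
            self.layer = torch.nn.Linear(4, 1)

        def training_step(self, batch, batch_idx):
            x, y = batch
            loss = torch.nn.functional.mse_loss(self.layer(x), y)
            self.log("train_loss", loss)
            return loss

        def configure_optimizers(self):
            return torch.optim.Adam(self.parameters(), lr=1e-3)

    mlflow.pytorch.autolog()  # enable before Trainer.fit()

    data = DataLoader(TensorDataset(torch.rand(64, 4), torch.rand(64, 1)), batch_size=16)
    trainer = pl.Trainer(max_epochs=2)
    trainer.fit(TinyRegressor(), data)
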
@@ -586,7 +575,6 @@ If a run already exists when ``autolog()`` captures data, MLflow will log to tha
.. note::
- Parameters not explicitly passed by users (parameters that use default values) while using ``pytorch_lightning.trainer.Trainer.fit()`` are not currently automatically logged
- In case of a multi-optimizer scenario (such as usage of autoencoder), only the parameters for the first optimizer are logged
- This feature is experimental - the API and format of the logged data are subject to change


.. _organizing_runs_in_experiments:
2 changes: 0 additions & 2 deletions mlflow/fastai/__init__.py
@@ -39,7 +39,6 @@
from mlflow.utils.file_utils import write_to
from mlflow.utils.docstring_utils import format_docstring, LOG_MODEL_PARAM_DOCS
from mlflow.utils.model_utils import _get_flavor_configuration
from mlflow.utils.annotations import experimental
from mlflow.utils.autologging_utils import (
log_fn_args_as_params,
safe_patch,
@@ -368,7 +367,6 @@ def load_model(model_uri, dst_path=None):
return _load_model(path=model_file_path)


@experimental
@autologging_integration(FLAVOR_NAME)
def autolog(
log_models=True,
1 change: 0 additions & 1 deletion mlflow/gluon.py
@@ -345,7 +345,6 @@ def log_model(
)


@experimental
@autologging_integration(FLAVOR_NAME)
def autolog(
log_models=True,
2 changes: 0 additions & 2 deletions mlflow/keras.py
@@ -39,7 +39,6 @@
from mlflow.utils.file_utils import write_to
from mlflow.utils.docstring_utils import format_docstring, LOG_MODEL_PARAM_DOCS
from mlflow.utils.model_utils import _get_flavor_configuration
from mlflow.utils.annotations import experimental
from mlflow.utils.autologging_utils import (
autologging_integration,
safe_patch,
@@ -555,7 +554,6 @@ def load_model(model_uri, dst_path=None, **kwargs):
)


@experimental
@autologging_integration(FLAVOR_NAME)
def autolog(
log_models=True,
2 changes: 0 additions & 2 deletions mlflow/lightgbm.py
@@ -46,7 +46,6 @@
from mlflow.utils.docstring_utils import format_docstring, LOG_MODEL_PARAM_DOCS
from mlflow.utils.model_utils import _get_flavor_configuration
from mlflow.exceptions import MlflowException
from mlflow.utils.annotations import experimental
from mlflow.utils.arguments_utils import _get_arg_names
from mlflow.utils.autologging_utils import (
autologging_integration,
@@ -295,7 +294,6 @@ def predict(self, dataframe):
return self.lgb_model.predict(dataframe)


@experimental
@autologging_integration(FLAVOR_NAME)
def autolog(
log_input_examples=False,
2 changes: 0 additions & 2 deletions mlflow/paddle/__init__.py
@@ -24,7 +24,6 @@
from mlflow.models.utils import ModelInputExample, _save_example
from mlflow.protos.databricks_pb2 import RESOURCE_ALREADY_EXISTS
from mlflow.tracking.artifact_utils import _download_artifact_from_uri
from mlflow.utils.annotations import experimental
from mlflow.utils.environment import (
_mlflow_conda_env,
_validate_env_arguments,
@@ -453,7 +452,6 @@ def _contains_pdparams(path):
return any(".pdparams" in file for file in file_list)


@experimental
@autologging_integration(FLAVOR_NAME)
def autolog(
log_every_n_epoch=1, log_models=True, disable=False, exclusive=False, silent=False,
2 changes: 0 additions & 2 deletions mlflow/pyspark/ml/__init__.py
@@ -14,7 +14,6 @@
_get_fully_qualified_class_name,
_inspect_original_var_name,
)
from mlflow.utils.annotations import experimental
from mlflow.utils.autologging_utils import (
_get_new_training_session_class,
autologging_integration,
@@ -674,7 +673,6 @@ def log_post_training_metric(self, run_id, key, value):
_AUTOLOGGING_METRICS_MANAGER = _AutologgingMetricsManager()


@experimental
@autologging_integration(AUTOLOGGING_INTEGRATION_NAME)
def autolog(
log_models=True,
1 change: 0 additions & 1 deletion mlflow/pytorch/__init__.py
@@ -868,7 +868,6 @@ def load_state_dict(state_dict_uri, **kwargs):
return torch.load(state_dict_path, **kwargs)


@experimental
@autologging_integration(FLAVOR_NAME)
def autolog(
log_every_n_epoch=1,
2 changes: 0 additions & 2 deletions mlflow/sklearn/__init__.py
@@ -33,7 +33,6 @@
from mlflow.protos.databricks_pb2 import RESOURCE_ALREADY_EXISTS
from mlflow.tracking.artifact_utils import _download_artifact_from_uri
from mlflow.utils import _inspect_original_var_name
from mlflow.utils.annotations import experimental
from mlflow.utils.autologging_utils import get_instance_method_first_arg_value
from mlflow.utils.environment import (
_mlflow_conda_env,
@@ -884,7 +883,6 @@ def _patch_estimator_method_if_available(flavor_name, class_def, func_name, patc
pass


@experimental
@autologging_integration(FLAVOR_NAME)
def autolog(
log_input_examples=False,
2 changes: 0 additions & 2 deletions mlflow/spark.py
@@ -58,7 +58,6 @@
)
from mlflow.utils import databricks_utils
from mlflow.utils.model_utils import _get_flavor_configuration_from_uri
from mlflow.utils.annotations import experimental
from mlflow.tracking._model_registry import DEFAULT_AWAIT_MAX_SLEEP_SECONDS
from mlflow.utils.autologging_utils import autologging_integration, safe_patch

@@ -733,7 +732,6 @@ def predict(self, pandas_df):
]


@experimental
@autologging_integration(FLAVOR_NAME)
def autolog(disable=False, silent=False): # pylint: disable=unused-argument
"""
2 changes: 0 additions & 2 deletions mlflow/statsmodels.py
@@ -37,7 +37,6 @@
from mlflow.utils.docstring_utils import format_docstring, LOG_MODEL_PARAM_DOCS
from mlflow.utils.model_utils import _get_flavor_configuration
from mlflow.exceptions import MlflowException
from mlflow.utils.annotations import experimental
from mlflow.utils.autologging_utils import (
log_fn_args_as_params,
autologging_integration,
@@ -384,7 +383,6 @@ def _get_autolog_metrics(fitted_model):
return result_metrics


@experimental
@autologging_integration(FLAVOR_NAME)
def autolog(
log_models=True,
3 changes: 1 addition & 2 deletions mlflow/tensorflow.py
@@ -33,7 +33,7 @@
from mlflow.protos.databricks_pb2 import DIRECTORY_NOT_EMPTY
from mlflow.tracking import MlflowClient
from mlflow.tracking.artifact_utils import _download_artifact_from_uri, get_artifact_uri
from mlflow.utils.annotations import keyword_only, experimental
from mlflow.utils.annotations import keyword_only
from mlflow.utils.environment import (
_mlflow_conda_env,
_validate_env_arguments,
@@ -657,7 +657,6 @@ class _TensorBoard(TensorBoard, metaclass=ExceptionSafeClass):
return out_list, log_dir


@experimental
@autologging_integration(FLAVOR_NAME)
def autolog(
every_n_iter=1,
2 changes: 0 additions & 2 deletions mlflow/xgboost.py
@@ -46,7 +46,6 @@
from mlflow.utils.file_utils import write_to
from mlflow.utils.model_utils import _get_flavor_configuration
from mlflow.exceptions import MlflowException
from mlflow.utils.annotations import experimental
from mlflow.utils.docstring_utils import format_docstring, LOG_MODEL_PARAM_DOCS
from mlflow.utils.arguments_utils import _get_arg_names
from mlflow.utils.autologging_utils import (
@@ -333,7 +332,6 @@ def predict(self, dataframe):
return self.xgb_model.predict(dataframe)


@experimental
@autologging_integration(FLAVOR_NAME)
def autolog(
importance_types=None,