The three simple examples below illustrate how you can use the `mlflow.evaluate`
API to evaluate a PyFunc model on a
specified dataset using the builtin default evaluator, and log the resulting metrics & artifacts to MLflow Tracking.
- Example `evaluate_on_binary_classifier.py` evaluates an xgboost `XGBClassifier` model on the dataset loaded by `shap.datasets.adult`.
- Example `evaluate_on_multiclass_classifier.py` evaluates a scikit-learn `LogisticRegression` model on a dataset generated by `sklearn.datasets.make_classification`.
- Example `evaluate_on_regressor.py` evaluates a scikit-learn `LinearRegression` model on the dataset loaded by `sklearn.datasets.load_boston`.
Run the examples from this directory with Python.

Note: these examples assume that you have the `scikit-learn`, `xgboost`, and `shap` libraries installed in your development environment.
```
python evaluate_on_binary_classifier.py
python evaluate_on_multiclass_classifier.py
python evaluate_on_regressor.py
```