MLflow Evaluation Examples

These three simple examples illustrate how you can use the mlflow.evaluate API to evaluate a PyFunc model on a specified dataset with the built-in default evaluator, and log the resulting metrics and artifacts to MLflow Tracking. A minimal sketch of the shared pattern appears after the list.

  • Example evaluate_on_binary_classifier.py evaluates an XGBoost XGBClassifier model on the dataset loaded by shap.datasets.adult.
  • Example evaluate_on_multiclass_classifier.py evaluates a scikit-learn LogisticRegression model on a dataset generated by sklearn.datasets.make_classification.
  • Example evaluate_on_regressor.py evaluates a scikit-learn LinearRegression model on the dataset loaded by sklearn.datasets.load_boston.
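
The sketch below shows the common workflow these scripts follow: train a model, log it to an MLflow run, then pass the logged model URI and a labeled evaluation DataFrame to mlflow.evaluate. The dataset and model used here (sklearn.datasets.make_classification with LogisticRegression) are illustrative assumptions, not copied from the example scripts.

```python
# Minimal sketch of the mlflow.evaluate workflow; the dataset and model
# choices below are illustrative, not taken verbatim from the examples.
import mlflow
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_classes=2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

with mlflow.start_run():
    # Log the fitted model so it can be loaded back as a PyFunc model.
    model_info = mlflow.sklearn.log_model(model, "model")

    # Evaluation data: feature columns plus a label column.
    eval_data = pd.DataFrame(X_test)
    eval_data["label"] = y_test

    # The default evaluator computes the built-in classifier metrics and
    # logs metrics and artifacts to the active MLflow run.
    result = mlflow.evaluate(
        model_info.model_uri,
        eval_data,
        targets="label",
        model_type="classifier",
    )
    print(result.metrics)
```

The returned EvaluationResult exposes the computed metrics via result.metrics and the logged artifacts via result.artifacts.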

How to run this code

Run each script with Python from this directory. Note: these examples assume that you have scikit-learn, xgboost, shap, and their dependencies installed in your development environment.
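
If any of these packages are missing, they can typically be installed from PyPI (the package names below are the standard ones; pin versions to match your environment as needed):

pip install mlflow scikit-learn xgboost shap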

python evaluate_on_binary_classifier.py

python evaluate_on_multiclass_classifier.py

python evaluate_on_regressor.py