
Web Service to host an ML Model for Prediction.

A simple Python web service that hosts a machine learning model for prediction as a REST API.

RESTful Web Service

The container trains a simple text classifier and hosts it for prediction as a web service using FastAPI. The data for model training is included in the project.

For more details on customizing this project or using it as a template, please follow this article.


Pre-requisites

  1. The system should have the Docker engine installed.

Note: I developed and tested this on Ubuntu-16.04.


Hosting the web service

  1. Build the Docker image
docker build --network=host -t ml-prediction-web-service:v1 .
  2. Check the image
docker images

Docker Images

  3. Run the container
docker run -d --net=host --name=ml-prediction-web-service ml-prediction-web-service:v1
  4. Check whether the container is up
docker ps

Running Containers

When we run the container, two scripts are executed in sequence:

  1. train.py, which trains the model to be hosted.
  2. app.py, which hosts the trained model as a web service.

API Usage

The web service includes OpenAPI integration, so we can use the Swagger portal directly from a web browser to exercise the API. To open the Swagger portal, go to http://localhost:8080/swagger/ in your browser. The portal will load only if the service is hosted properly.

Swagger Portal

To check whether the service is up:

  1. Click on the GET bar.

  2. Click on Try it out

Try It Out

  3. Click on Execute

Execute

  4. If you see the following screen, then your service is up.

Service Up
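The same liveness check can be done programmatically instead of through the Swagger portal. A small sketch using only the standard library — the root path "/" is an assumption; use the path shown in your Swagger portal's GET bar:

```python
import urllib.request

# Build the same GET request the Swagger portal issues. The root
# path "/" is an assumption about the service's health endpoint.
url = "http://localhost:8080/"
req = urllib.request.Request(url, method="GET")

# With the container running, opening the request would return the
# health response:
# with urllib.request.urlopen(req) as resp:
#     print(resp.status, resp.read())
```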

To predict the label of the text:

  1. Click on the POST bar.

  2. Click on Try it out

Try It Out

  3. Click on Execute
  4. We should see a response similar to the following:

Prediction Response
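The same prediction call can be made from a script. The request-body and response field names below are assumptions — check the Swagger portal for the exact schema of your deployment:

```python
import json

# Assumed request schema: a single "text" field (verify in Swagger).
payload = {"text": "sample text to classify"}
body = json.dumps(payload).encode("utf-8")
# `body` is what you would POST to the predict endpoint, e.g. with
# urllib.request.Request(url, data=body,
#                        headers={"Content-Type": "application/json"}).

# A response similar to the screenshot might look like this
# (field names are assumptions):
sample_response = '{"text": "sample text to classify", "label": "some_label"}'
result = json.loads(sample_response)
label = result["label"]
```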


Logs checking

To check the web service logs, we need to get inside the running container. To do so, execute the following command:

docker exec -it ml-prediction-web-service bash

Now we are inside the container.

The logs are available in the logs folder in the files ml-prediction-web-service.log and ml-prediction-web-service.err.

Inside container


Stopping the web service

Run the following command to stop the container:

docker stop ml-prediction-web-service

API Documentation with ReDoc

ReDoc Top

ReDoc API Details


Author: Pranay Chandekar