8 mins to Production Machine Learning!

Aaron (Ari) Bornstein
9 min read · Jun 10, 2018


TL;DR

Based on the docs, this tutorial shows you how to put an ML model into production with the Azure ML CLI. We have taken the arduous parts of the process and turned them into a simple Jupyter Notebook that you can find here.

Big Picture

In my role as a cloud developer advocate, I often get asked what the fastest way is to deploy machine learning models on Azure. I’ve seen many devs get lost in the documentation, so I worked with my amazing colleague Sherif El Mahdi to automate the Azure CLI process with an interactive Jupyter Notebook tutorial.

In this tutorial, we move the focus to operationalizing models: deploying trained models as web services so that you can consume them later from any client application via a REST API call. For that purpose, we are using the Azure Machine Learning Model Management Service.

Azure Model Management Service

Azure Machine Learning Model Management enables you to manage and deploy machine learning models. It provides services such as packaging models into Docker containers for local testing, deploying models to production through the Azure ML Compute Environment with Azure Container Service, and versioning and tracking models. Learn more here: Conceptual Overview of Azure Model Management Service

What’s needed to deploy my model?

  • Your model file or directory of model files
  • A score.py file that loads your model and returns the prediction result(s); it is also used to generate the schema JSON file
  • A schema JSON file for the API parameters (validates the web service’s input and output)
  • A runtime environment choice, e.g. python or spark-py
  • A conda dependency file listing runtime dependencies (see the sketch below)
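We won’t actually need a conda dependency file in this tutorial, but for reference, here is a minimal sketch of what one might look like. The file name and package list are purely illustrative, not something the tutorial uses:

# conda_dependencies.yml -- hypothetical example, not used in this tutorial
name: project_environment
dependencies:
  - python=3.5
  - scikit-learn
  - pip:
    - pandas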

How it works:

[Architecture diagram: see the Conceptual Overview of Azure Model Management Service]

Deployment Steps:

  • Use your saved, trained machine learning model
  • Create a schema for your web service’s input and output data
  • Create a Docker-based container image
  • Create and deploy the web service

Deployment Target Environments:

  1. Local Environment: You can set up a local environment to deploy and test your web service on your local machine or DSVM. (Requires Docker to be installed on the machine.)
  2. Production Environment: You can use cluster deployment for high-scale production scenarios. It sets up a Kubernetes deployment on an Azure Container Service (ACS) cluster, and the cluster can be scaled out to handle larger throughput for your web service calls.

Challenge

# Run the following train.py from the notebook to generate a classifier model
from sklearn.svm import SVC
from cvworkshop_utils import ensure_exists
import pickle

# Features: indicator1, NF1, cellprofiling
X = [[362, 160, 88], [354, 140, 86], [320, 120, 76], [308, 108, 47], [332, 130, 80], [380, 180, 94], [350, 128, 78],
     [354, 140, 80], [318, 110, 74], [342, 150, 84], [362, 170, 86]]
Y = ['positive', 'positive', 'negative', 'negative', 'positive', 'positive', 'negative', 'negative', 'negative', 'positive', 'positive']

clf = SVC()
clf = clf.fit(X, Y)
print('Predicted value:', clf.predict([[380, 140, 86]]))
print('Accuracy', clf.score(X, Y))

print('Export the model to output/trainedModel.pkl')
ensure_exists('output')
with open('output/trainedModel.pkl', 'wb') as f:
    pickle.dump(clf, f)

print('Import the model from output/trainedModel.pkl')
with open('output/trainedModel.pkl', 'rb') as f2:
    clf2 = pickle.load(f2)

X_new = [[308, 108, 70]]
print('New Sample:', X_new)
print('Predicted class:', clf2.predict(X_new))

Now navigate to the repository root directory and open the “output” folder; you should see the created trained model file “trainedModel.pkl”.

# Run the following score.py from the notebook to generate the web service schema JSON file
# Learn more about creating a score file here: https://docs.microsoft.com/en-us/azure/machine-learning/preview/model-management-service-deploy
def init():
    from sklearn.externals import joblib
    global model
    model = joblib.load('output/trainedModel.pkl')

def run(input_df):
    import json
    pred = model.predict(input_df)
    return json.dumps(str(pred[0]))

def main():
    from azureml.api.schema.dataTypes import DataTypes
    from azureml.api.schema.sampleDefinition import SampleDefinition
    from azureml.api.realtime.services import generate_schema
    import pandas

    df = pandas.DataFrame(data=[[380, 120, 76]], columns=['indicator1', 'NF1', 'cellprofiling'])

    # Check the output of the function
    init()
    input1 = pandas.DataFrame([[380, 120, 76]])
    print("Result: " + run(input1))

    inputs = {"input_df": SampleDefinition(DataTypes.PANDAS, df)}
    # Generate the service_schema.json
    generate_schema(run_func=run, inputs=inputs, filepath='output/service_schema.json')
    print("Schema generated")

if __name__ == "__main__":
    main()

Navigate again to the repository root directory and open the “output” folder; you should see the created JSON schema file “service_schema.json”.
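If you are curious what generate_schema actually wrote, a quick optional way to inspect it is to pretty-print the file (this snippet is not part of the tutorial steps):

# Optional: inspect the generated schema file
import json

with open('output/service_schema.json') as f:
    schema = json.load(f)

print(json.dumps(schema, indent=2))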

Having reached this point, we now have everything needed (the score.py file, the trained model, and the JSON schema file) to start deploying our trained model using Azure Model Management Service. Now it’s time to decide which deployment environment you are going to target: cluster deployment or local deployment. In this tutorial, we walk through both scenarios, so feel free to follow Scenario A (cluster deployment), Scenario B (local deployment), or both.

Before deploying, first log in to your Azure subscription from your command prompt and register a few environment providers.

Once you execute the az login command below, the command prompt will show a message asking you to open your web browser, navigate to https://aka.ms/devicelogin, and enter the code given in the terminal to log in to your Azure subscription.

# Return to your command prompt and execute the following command
!az login

# Once you are logged in, execute the following commands to register our environment providers
!az provider register -n Microsoft.MachineLearningCompute
!az provider register -n Microsoft.ContainerRegistry
!az provider register -n Microsoft.ContainerService

Registering the environment providers takes some time, so you can monitor the status using the following command:

az provider show -n {environment provider name}

Before you continue with this tutorial, make sure the registration status for all the providers is “Registered”.

!az provider show -n Microsoft.MachineLearningCompute
!az provider show -n Microsoft.ContainerRegistry
!az provider show -n Microsoft.ContainerService
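If you only want the status field rather than the full JSON payload, the Azure CLI’s JMESPath --query flag can extract it directly (registrationState is the property that carries the status):

# Show just the registration status of a provider
!az provider show -n Microsoft.MachineLearningCompute --query registrationState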

While waiting for the environment providers to be registered, you can create a resource group to hold all the resources that we are going to provision throughout this tutorial.

Command format: az group create --name {group name} --location {azure region}

Example:

!az group create --name capetownrg --location westus
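If you want to confirm the resource group was created before moving on, you can show it in table form:

# Verify the resource group exists
!az group show -n capetownrg -o table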

Also create a Model Management account to be used for our deployment, whether local or cluster.

Command format: az ml account modelmanagement create -l {target region} -n {model management account name} -g {name of created resource group}

Example:

!az ml account modelmanagement create -l eastus2 -n capetownmodelmgmt -g capetownrg

Once your model management account is created, set it as the account to be used in our deployment.

Command format: az ml account modelmanagement set -n {your model management account name} -g {name of created resource group}

Example:

!az ml account modelmanagement set -n capetownmodelmgmt -g capetownrg

Scenario A: Cluster Deployment — Environment Setup:

If you want to deploy to a cluster, you first need to set up a cluster deployment environment using the following command before you can deploy our trained model as a web service.

Creating the environment may take 10–20 minutes.

Command format: az ml env setup -c --name {your environment name} --location {azure region} -g {name of created resource group}

Example:

!az ml env setup -c --name capetownenv --location eastus2 -g capetownrg -y --debug

You can use the following command to monitor the status:

Command format: az ml env show -g {name of created resource group} -n {your environment name}

Example:

!az ml env show -g capetownrg -n capetownenv

Once your provisioning status is “Succeeded”, open your web browser and log in to your Azure subscription through the portal; you should see the following resources created in your resource group:

  • A storage account
  • An Azure Container Registry (ACR)
  • A Kubernetes deployment on an Azure Container Service (ACS) cluster
  • An Application Insights account

Now set your environment as your deployment environment using the following command:

Command format: az ml env set -n {your environment name} -g {name of created resource group}

Example:

!az ml env set -n capetownenv -g capetownrg --debug

Alternatively, if you prefer local deployment to a cluster, Scenario B below walks through the local environment setup.

Scenario B: Local Deployment — Environment Setup:

You first need to set up a local environment using the following command before you can deploy our trained model as a web service.

Command format: az ml env setup -l {azure region} -n {your environment name} -g {name of created resource group}

Example:

# !az ml env setup -l eastus2 -n capetownlocalenv -g capetownrg -y

Creating the environment may take some time, so you can use the following command to monitor the status:

Command format: az ml env show -g {name of created resource group} -n {your environment name}

Example:

# !az ml env show -g capetownrg -n capetownlocalenv

Once your provisioning status is “Succeeded”, open your web browser and log in to your Azure subscription through the portal; you should see the following resources created in your resource group:

  • A storage account
  • An Azure Container Registry (ACR)
  • An Application Insights account

Now set your environment as your deployment environment using the following command:

Command format: az ml env set -n {your environment name} -g {name of created resource group}

Example:

!az ml env set -n capetownlocalenv -g capetownrg --debug

Whether you finished your environment setup by following Scenario A or Scenario B, you are now ready to deploy our trained model as a web service to consume later from any application.

Create your Web Service:

As a reminder, here’s what’s needed to create your web service:

  • Your trained model file -> in our case it’s “output/trainedModel.pkl”
  • Your score.py file, which loads your model and returns the prediction result(s) -> in our case it’s “modelmanagement/score.py”
  • Your JSON schema file, which automatically validates the input and output of your web service -> in our case it’s “output/service_schema.json”
  • Your runtime environment for the Docker container -> in our case it’s “python”
  • A conda dependencies file for additional Python packages (we don’t need one in our case)

Use the following command to create your web service:

Command format: az ml service create realtime --model-file {model file/folder path} -f {scoring file} -n {your web service name} -s {json schema file} -r {runtime choice} -c {conda dependencies file}

!az ml service create realtime -m output/trainedModel.pkl -f score.py -n classifierservice -s output/service_schema.json -r python --debug
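If you want to double-check what was deployed before testing it, the classic azure-cli-ml also had a matching list command; the exact subcommand below is recalled from that CLI version, so treat it as an assumption if your version differs:

# List your deployed real-time web services (assumed classic azure-cli-ml syntax)
!az ml service list realtime -o table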

Test your Web Service:

Once the web service is successfully created, open your web browser and log in to your Azure subscription through the portal, then jump into your resource group:

  • Open your model management account
  • Click on “Model Management” under Application Settings
  • Click on “Services” and select your created “classifierservice” from the right-hand panel
  • Copy your “Service id”, “URL” and “Primary key”

Call your web service from your terminal:

Command format: az ml service run realtime -i {your service id} -d {json input for your web service}

Example:

!az ml service run realtime -i YOUR_SERVICE_ID -d "{\"input_df\": [{\"NF1\": 120, \"cellprofiling\": 76, \"indicator1\": 380}]}"

Call your web service from Postman:
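The Postman request boils down to a plain HTTP POST against the service URL. Here is a minimal sketch of the same call in Python; the scoring URL and key are placeholders for the values you copied from the portal, and the Bearer authorization header reflects how these Model Management services authenticated at the time, so treat the details as assumptions for your setup:

# A minimal sketch of the raw REST call (what Postman sends under the hood)
import json
import requests

SCORING_URL = "http://YOUR_SERVICE_URL/score"   # placeholder: URL copied from the portal
PRIMARY_KEY = "YOUR_PRIMARY_KEY"                # placeholder: key copied from the portal

headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer " + PRIMARY_KEY,   # assumed auth scheme for these services
}
payload = {"input_df": [{"indicator1": 380, "NF1": 120, "cellprofiling": 76}]}

resp = requests.post(SCORING_URL, data=json.dumps(payload), headers=headers)
print(resp.status_code, resp.text)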


Aaron (Ari) Bornstein

<Microsoft Open Source Engineer> I am an AI enthusiast with a passion for engaging with new technologies, history, and computational medicine.