Why use Google App Engine?

In this post we show how to deploy a self-built machine learning model on the Google Cloud Platform. In a previous post we created a classification model, which we now package as a Flask app and run on Google App Engine.

Kubeflow can also be used on Google Cloud to create comprehensive ML pipelines, and the Google AI Platform offers preconfigured models that can be used without programming knowledge. We use Google App Engine here because it is the most straightforward and cost-effective solution for our example. Once the deployment is done, predictions for new data can be generated via an API call. The path from the raw data to a finished ML model is covered in the previous article; here we deal only with the cloud deployment of a Python ML model.


Packaging the model with Flask

After we have created our model, we need to bring it into a usable form. Flask is a lean web framework for Python that lets us turn Python applications into web applications by exposing the necessary interfaces to our model as APIs (Application Programming Interfaces). The model can then receive requests containing new data, use that data to create predictions, and send the results back to the user.

To package our ML model as a Flask app, we start with a new Python file (the development environment can of course be chosen freely; here I use Spyder):

  • First we load all the necessary libraries.
  • Then we initialize an instance of the Flask class.
  • Then we load the model.
  • In our example we also do a little validation of the input data; the first part of the file is sketched below.
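
As a rough sketch, the first part of the file could look as follows. The file name of the saved model (model.cbm), the loading via CatBoost's load_model and the feature names are assumptions for illustration; a pickled model would work just as well.

    import numpy as np
    from flask import Flask, request, jsonify
    from catboost import CatBoostClassifier

    # Initialize an instance of the Flask class
    app = Flask(__name__)

    # Load the trained model (here assumed to be stored in CatBoost's native format)
    model = CatBoostClassifier()
    model.load_model("model.cbm")

    # Features we expect in every request, used later for a simple input validation
    EXPECTED_FEATURES = ["feature_1", "feature_2", "feature_3"]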

  • Then we configure the API interface. Since we only want to expose a single piece of functionality, we stick to the minimal route (@app.route("/")). However, several endpoints can be configured in the app, authentication can be integrated, and even HTML templates can be rendered with Flask's render_template.
  • We select the HTTP method and read the data from the request, which is transmitted as JSON.
  • Then the data can be validated (e.g. checking completeness and the order of the features).

  • Then we use the transmitted data for the prediction. In our example, we have to take into account that model.predict expects a NumPy array, which is why the data must be transformed first.
  • Finally, we pack the output and send it back. After all API endpoints have been configured, the app still has to be started. The app's debug mode is useful for quickly testing changes, but it should be turned off for the final deployment. The complete route is sketched below.
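
Putting these steps together, the API route could look like the following sketch. The endpoint, the status code and the feature order are illustrative assumptions and build on the names introduced in the snippet above.

    # Configure the API route; a single endpoint at "/" is enough for our purpose
    @app.route("/", methods=["POST"])
    def predict():
        # Read the data transmitted as JSON
        data = request.get_json()

        # Simple validation: all expected features must be present
        if data is None or any(f not in data for f in EXPECTED_FEATURES):
            return jsonify({"error": "incomplete input data"}), 400

        # model.predict expects a numpy array, so the values are brought into the right order and shape
        features = np.array([data[f] for f in EXPECTED_FEATURES]).reshape(1, -1)
        prediction = model.predict(features)

        # Pack the output and send it back as JSON
        return jsonify({"prediction": prediction.tolist()})

    if __name__ == "__main__":
        # Debug mode is handy while testing; switch it off for the final deployment
        app.run(debug=True)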

Now the app can be tested on your own computer. A program like Postman makes checking the API easier by letting you compose your own API calls with any method and format.
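
Alternatively, the local test can be scripted. The following sketch assumes that the app is running on Flask's default port 5000 and uses the hypothetical feature names from above.

    import requests

    # Example payload with the features the API expects
    payload = {"feature_1": 1.5, "feature_2": 0.3, "feature_3": 7}

    # Send the request to the locally running app and print the returned prediction
    response = requests.post("http://127.0.0.1:5000/", json=payload)
    print(response.json())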

Deployment of the app on the Google Cloud Platform

With App Engine, the Google Cloud Platform offers an ideal way of serving our app. GCP automatically takes care of provisioning storage and computing power. In this example, our Python app runs in the standard environment without any problems, and nothing else needs to be configured. The advantage is that in the standard environment you only pay for the resources that are actually used and do not have to worry about provisioning them yourself. Similar services are also available on Azure and AWS.

Before we start with the deployment, however, we first have to download the Google Cloud SDK command-line tool and create a new folder that we fill with the necessary deployment files.

In the folder we put:

  • Our Flask Python file, in which the APIs were configured, under the name main.py (the name App Engine's standard environment expects by default).
  • The final model so that it can be loaded by the Flask app.
  • A simple text file with the name app.yaml in which we declare the Python runtime (e.g. runtime: python39).
  • A requirements.txt file listing all required packages. To generate it, you create a virtual environment for Python, if you are not already working in one, and install all the necessary packages in it.

These can then be listed and written to requirements.txt with pip freeze.
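
As a sketch, the command-line steps could look like this. The package list is only an example; gunicorn is commonly added because App Engine's standard environment uses it as its default web server.

    # Create and activate a virtual environment (on Windows: venv\Scripts\activate)
    python -m venv venv
    source venv/bin/activate

    # Install the packages the app needs and write them to requirements.txt
    pip install flask numpy catboost gunicorn
    pip freeze > requirements.txt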

If the app is to use other files or website templates, these are also placed in the folder, but the four files described above are enough for our project.

For the deployment, we log into the Google Cloud Console with a Google account and create a new project, as all resources in Google Cloud are tied to projects. Do not worry about possible costs; the free quota that Google grants on registration is completely sufficient for most trial projects. It only becomes an issue if the models are in the gigabyte range.

To create an App Engine instance, we click on the navigation menu in the top left corner of the Cloud Console and then on App Engine.

There we create an application, follow the instructions, select the region, language and environment, and wait until the app has been created.

Then we open the Google Cloud SDK shell and change into our deployment directory. We start the configuration with gcloud init and then select the default settings, the Google account and the project we just created.

The final deployment then only requires the command gcloud app deploy. Now we wait until the data has been uploaded from the folder and the app starts running. You can then reach the app via the link provided.
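
Collected in one place, the command-line part of the deployment could look like the following sketch; the folder name is a placeholder.

    # Change into the deployment folder
    cd path/to/deployment-folder

    # One-time configuration: choose the default settings, the Google account and the project
    gcloud init

    # Upload the files and deploy the app to App Engine
    gcloud app deploy

    # Optionally open the deployed app in the browser
    gcloud app browse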

Congratulations! We have just deployed a machine learning model on Google App Engine and can now send new data to the model from anywhere via the target URL and receive predictions back. At the end, of course, don't forget to switch off services that are no longer needed.
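
Requesting a prediction from the deployed model works just like the local test, only against the App Engine URL; the project ID below is of course a placeholder, and the exact URL is shown after the deployment.

    import requests

    payload = {"feature_1": 1.5, "feature_2": 0.3, "feature_3": 7}

    # App Engine URLs typically follow the pattern https://<project-id>.appspot.com
    response = requests.post("https://your-project-id.appspot.com/", json=payload)
    print(response.json())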

Summary

In the post Construction of a machine learning model with Python we created a CatBoost classification model. We have now packed this model into a Flask app in which the APIs were configured, so the model is ready for web-based predictions. Finally, we deployed the model on Google App Engine. All the necessary information can be found on GitLab.