Technical Walkthrough Oct 12, 2022
How to Build an Instant Machine Learning Web Application with Streamlit and FastAPI
Tags: featured, machine learning, microservice architecture, Software Engineering, Technical Walkthrough
Imagine that you’re working on a machine learning (ML) project and you’ve found your champion model. What happens next? For many, the project ends there, with their models sitting isolated in a Jupyter notebook. Others will take the initiative to convert their notebooks to scripts for somewhat production-grade code. 
Both of these end states restrict a project's accessibility, requiring knowledge of source code hosting sites like GitHub and Bitbucket. A better solution is to convert your project into a prototype with a frontend that can be deployed on internal servers.
While a prototype may not be production standard, it’s an effective technique companies use to provide stakeholders with insight into a proposed solution. This then allows the company to collect feedback and develop better iterations in the future.  
To develop a prototype, you will need:
A frontend for user interaction
A backend that can process requests
Both requirements can take a significant amount of time to build, however. In this tutorial, you will learn how to rapidly build your own machine learning web application, using Streamlit for the frontend and FastAPI for the microservice to simplify the process. Learn more about microservices in Building a Machine Learning Microservice with FastAPI.
You can try the application featured in this tutorial using the code in the kurtispykes/car-evaluation-project GitHub repository.
Overview of Streamlit and FastAPI
Streamlit, an open-source app framework, aims to simplify the process of building web applications for machine learning and data science. It has gained significant traction in the applied ML community in recent years. Founded in 2018 by ex-Google engineers, Streamlit was born out of the challenges they saw practitioners face when deploying machine learning models and dashboards.
Using the Streamlit framework, data scientists and machine learning practitioners can build their own predictive analytics web applications in a few hours. There is no need to depend on frontend engineers or knowledge of HTML, CSS, or JavaScript, since everything is done in Python.
FastAPI has also risen rapidly to prominence among Python developers. It's a modern web framework, also initially released in 2018, designed to address many of the areas where Flask falls short. One of the great things about switching to FastAPI is that the learning curve is not steep, especially if you already know Flask. With FastAPI you can expect thorough documentation, short development times, simple testing, and easy deployment, making it straightforward to develop RESTful APIs in Python.
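To give a sense of that learning curve, the sketch below (not part of the car evaluation project) is a complete, working FastAPI service; the file name main.py and the greeting route are purely illustrative:

from fastapi import FastAPI

app = FastAPI()


@app.get("/")
def read_root() -> dict:
    """Return a simple JSON greeting."""
    return {"message": "Hello, world"}

Running uvicorn main:app --reload serves this app locally, and FastAPI auto-generates interactive documentation at the /docs route.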
By combining the power of the two frameworks, it’s possible to develop an exciting machine learning application you could share with your friends, colleagues, and stakeholders in less than a day. 
Build a full-stack machine learning application
The following steps guide you through building a simple classification model using FastAPI and Streamlit. This model evaluates whether a car is acceptable based on the following six input features: 
buying: The cost to buy the car
maint: The cost of maintenance
doors: The number of doors
persons: The carrying capacity (number of people)
lug_boot: The size of the luggage boot
safety: The estimated safety
You can download the full Car Evaluation dataset from the UCI Machine Learning Repository.
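If you want to reproduce the data analysis step, the raw car.data file can be read straight from the UCI archive with pandas. A minimal sketch, assuming the archive's long-standing URL layout (the file ships without a header row, so column names are supplied manually):

import pandas as pd

# The raw file is a headerless CSV hosted in the UCI archive
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/car/car.data"
columns = ["buying", "maint", "doors", "persons", "lug_boot", "safety", "class"]

df = pd.read_csv(url, names=columns)
print(df.head())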
After you have done all of the data analysis, trained your champion model, and packaged the machine learning model, the next step is to create two dedicated services: 1) the FastAPI backend and 2) the Streamlit frontend. These two services can then be deployed in two Docker containers and orchestrated using Docker Compose.
Each service requires its own Dockerfile to assemble the Docker images. A Docker Compose YAML file is also required to define and share both container applications. The following steps work through the development of each service. 
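As a sketch of how the two containers fit together, a minimal docker-compose.yml could look like the following; the service names, build contexts, and ports are illustrative rather than taken from the repository:

version: "3.8"

services:
  car_evaluation_api:
    build: ./car_evaluation_api        # builds from the API service's Dockerfile
    ports:
      - "8001:8001"                    # the port the Streamlit app posts to

  car_evaluation_streamlit:
    build: ./car_evaluation_streamlit  # builds from the UI service's Dockerfile
    ports:
      - "8501:8501"                    # Streamlit's default port
    depends_on:
      - car_evaluation_api

With this file in place, docker compose up --build assembles both images and starts the two services together.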
The user interface
In the car_evaluation_streamlit package, create a simple user interface in the app.py file using Streamlit. The code below includes:
A title for the UI 
A short description of the project
Six interactive elements the user will use to input information about a car
Class values returned by the API 
A submit button that, when clicked, sends all the data collected from the user to the machine learning API service as a POST request and then displays the response from the model
import requests
import streamlit as st

# Define the title
st.title("Car evaluation web application")
st.write(
    "The model evaluates a car's acceptability based on the inputs below. "
    "Pass the appropriate details about your car using the questions below "
    "to discover if your car is acceptable."
)

# Input 1
buying = st.radio(
    "What are your thoughts on the car's buying price?",
    ("vhigh", "high", "med", "low")
)

# Input 2
maint = st.radio(
    "What are your thoughts on the price of maintenance for the car?",
    ("vhigh", "high", "med", "low")
)

# Input 3
doors = st.select_slider(
    "How many doors does the car have?",
    options=["2", "3", "4", "5more"]
)

# Input 4
persons = st.select_slider(
    "How many passengers can the car carry?",
    options=["2", "4", "more"]
)

# Input 5
lug_boot = st.select_slider(
    "What is the size of the luggage boot?",
    options=["small", "med", "big"]
)

# Input 6
safety = st.select_slider(
    "What estimated level of safety does the car provide?",
    options=["low", "med", "high"]
)

# Class values to be returned by the model
class_values = {
    0: "unacceptable",
    1: "acceptable",
    2: "good",
    3: "very good",
}

# When 'Submit' is selected
if st.button("Submit"):
    # Inputs to ML model
    inputs = {
        "inputs": [
            {
                "buying": buying,
                "maint": maint,
                "doors": doors,
                "persons": persons,
                "lug_boot": lug_boot,
                "safety": safety,
            }
        ]
    }

    # Posting inputs to ML API
    response = requests.post(
        "http://host.docker.internal:8001/api/v1/predict/",
        json=inputs,
        verify=False,
    )
    json_response = response.json()
    prediction = class_values[json_response.get("predictions")[0]]
    st.subheader(f"This car is **{prediction}!**")
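To preview the interface while developing, you can serve the file directly with Streamlit's CLI (note that outside Docker, the request URL above would need to point at localhost rather than host.docker.internal):

streamlit run app.py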
The only framework required for this service is Streamlit. In the requirements.txt file, note the version of Streamlit to install when creating the Docker image.
streamlit>=1.12.0,<1.13.0

The machine learning API
In the car_evaluation_api package, create the API in the app/main.py file. The code below configures the application, defines a basic HTML response for the root endpoint, and registers the routers; the imports and module paths assume the repository's app package layout:

from typing import Any

from fastapi import APIRouter, FastAPI, Request
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import HTMLResponse
from loguru import logger

# Project modules: the API router and application settings
from app.api import api_router
from app.config import settings, setup_app_logging

# Set up logging as early as possible
setup_app_logging(config=settings)

app = FastAPI(
    title=settings.PROJECT_NAME,
    openapi_url=f"{settings.API_V1_STR}/openapi.json",
)

root_router = APIRouter()


@root_router.get("/")
def index(request: Request) -> Any:
    """Basic HTML response."""
    body = (
        "<html>"
        "<body style='padding: 10px;'>"
        "<h1>Welcome to the API</h1>"
        "<div>"
        "Check the docs: <a href='/docs'>here</a>"
        "</div>"
        "</body>"
        "</html>"
    )
    return HTMLResponse(content=body)


app.include_router(api_router, prefix=settings.API_V1_STR)
app.include_router(root_router)

# Set all CORS enabled origins
if settings.BACKEND_CORS_ORIGINS:
    app.add_middleware(
        CORSMiddleware,
        allow_origins=[str(origin) for origin in settings.BACKEND_CORS_ORIGINS],
        allow_credentials=True,
        allow_methods=["*"],
        allow_headers=["*"],
    )

if __name__ == "__main__":
    # Use this for debugging purposes only
    logger.warning("Running in development mode. Do not run like this in production.")
    import uvicorn

    uvicorn.run(app, host="localhost", port=8001, log_level="debug")
The code above defines the server, which includes three endpoints:
"/": An endpoint used to define a body that returns an HTML response
"/health": An endpoint to return the health response schema of the model 
"/predict": An endpoint used to serve predictions from the trained model
Only the "/" endpoint is visible in the code above because the "/health" and "/predict" endpoints are imported from the API module and added to the application router.
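With the API container running, you can exercise the prediction endpoint directly from the host. The request body below mirrors the schema the Streamlit app sends; the response fields other than the "predictions" key used by the frontend depend on the API module's response schema:

curl -X POST "http://localhost:8001/api/v1/predict/" \
  -H "Content-Type: application/json" \
  -d '{"inputs": [{"buying": "low", "maint": "low", "doors": "4", "persons": "4", "lug_boot": "big", "safety": "high"}]}'

A successful call returns JSON whose "predictions" list holds the predicted class value, for example {"predictions": [1], ...}.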
Next, save the dependencies for the API service in the requirements.txt file:
--extra-index-url="https://repo.fury.io/kurtispykes/"
car-evaluation-model==1.0.0

uvicorn>=0.18.2,<0.19.0
fastapi>=0.79.0,<1.0.0
python-multipart>=0.0.5,<0.1.0
pydantic>=1.9.1,<1.10.0
typing_extensions>=3.10.0,<3.11.0
loguru>=0.6.0,<0.7.0
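Each service also needs its Dockerfile so Docker Compose can assemble its image. A minimal sketch for the API service, assuming a slim Python base image and the app/main.py layout used above (not the repository's exact file):

FROM python:3.9-slim

WORKDIR /opt/car_evaluation_api

# Install dependencies first to take advantage of Docker layer caching
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY ./app ./app

EXPOSE 8001

# Serve the FastAPI application with uvicorn
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8001"]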
