The Data Daily

Why You Should Consider Google AI Platform For Your Machine Learning Projects

At Cloud Next 2019, Google announced the launch of AI Platform, a comprehensive machine learning service for developers and data scientists.

Google has invested heavily in machine learning and artificial intelligence. It created TensorFlow, the most popular framework for building sophisticated machine learning and deep learning models, and it built Cloud ML Engine, a managed platform for training and deploying ML models. In 2017, Google started an open source project called Kubeflow that aims to bring distributed machine learning to Kubernetes. Kubeflow combines the best of TensorFlow and Kubernetes to enable organizations to train and deploy ML models in containers.

With AI Platform, Google is bringing all of these assets under one roof. The offering covers the end-to-end spectrum of ML services, including data preparation, training, tuning, deployment, collaboration, and sharing of machine learning models.

Below is a summary of the building blocks of Google AI Platform:

AI Hub acts as the one-stop shop for discovering, sharing, and deploying ML models. It’s a catalog of reusable models that can be quickly deployed to one of the execution environments of AI Platform. The catalog has a collection of models based on popular frameworks such as TensorFlow, PyTorch, Keras, XGBoost, and scikit-learn. Each model is packaged in a format that can be deployed in Kubeflow, on deep learning VMs backed by GPUs or TPUs, in Jupyter Notebooks, or through Google’s own AI APIs. Each model is also tagged with labels that make it easy to search and discover content based on a variety of attributes.

AI Platform Deep Learning VM Image makes it easy and fast to instantiate a VM containing the most popular deep learning and machine learning frameworks on a Google Compute Engine instance. These VM images come pre-installed with all required software drivers and third-party dependencies, including the latest GPU and TPU software. JupyterLab, the web-based interface for Project Jupyter and the de facto standard for interactive ML experimentation, is preinstalled in the VMs for easy access to notebooks. Since Google maintains the VM images, they always carry recent versions of TensorFlow and PyTorch. This service is based on Google Compute Engine, the IaaS component of Google Cloud Platform.
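
For teams that prefer scripting over the Cloud Console, these images can also be launched programmatically. The sketch below is a minimal, illustrative example using the google-api-python-client Compute Engine API; the project ID, zone, machine type, and image family are assumptions chosen for illustration, not values prescribed by AI Platform.

```python
# A minimal sketch (not an official recipe) of creating a Deep Learning VM
# through the Compute Engine API. Project ID, zone, machine type, and image
# family are placeholders -- adjust them for your environment.
from googleapiclient import discovery

PROJECT = "my-project"     # hypothetical project ID
ZONE = "us-central1-a"

compute = discovery.build("compute", "v1")

instance_body = {
    "name": "dl-vm-demo",
    "machineType": f"zones/{ZONE}/machineTypes/n1-standard-8",
    "disks": [{
        "boot": True,
        "autoDelete": True,
        "initializeParams": {
            # Deep Learning VM images are published as image families in the
            # deeplearning-platform-release project; GPU variants and
            # accelerator configuration are omitted here for brevity.
            "sourceImage": "projects/deeplearning-platform-release/"
                           "global/images/family/tf-latest-cpu",
        },
    }],
    "networkInterfaces": [{
        "network": "global/networks/default",
        "accessConfigs": [{"type": "ONE_TO_ONE_NAT", "name": "External NAT"}],
    }],
}

# Kicks off an asynchronous create operation and prints its name.
operation = compute.instances().insert(
    project=PROJECT, zone=ZONE, body=instance_body
).execute()
print(operation["name"])
```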

Kubeflow, the machine learning toolkit for Kubernetes, makes deployments of ML workflows on Kubernetes simple, portable, and scalable by providing a straightforward way to deploy best-of-breed open-source ML systems to diverse infrastructures. Kubeflow Pipelines is a tool for building and deploying portable, scalable ML workflows based on Docker containers. Since Kubeflow runs on Kubernetes, the platform is extremely portable: customers can design Kubeflow pipelines on-premises and deploy them to Google Kubernetes Engine for training at scale. This service is based on Google Kubernetes Engine, the Containers as a Service (CaaS) component of Google Cloud Platform.
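
As a rough illustration of what such a pipeline looks like in code, here is a minimal sketch written with the kfp SDK (the v1-style ContainerOp API); the container images, commands, and bucket paths are hypothetical placeholders, not real training code.

```python
# A minimal, hypothetical Kubeflow Pipelines sketch: preprocess data, then
# train a model, each step running in its own Docker container.
import kfp
from kfp import dsl


@dsl.pipeline(
    name="train-and-evaluate",
    description="Toy two-step pipeline: preprocess data, then train a model.",
)
def train_pipeline(data_path: str = "gs://my-bucket/data.csv"):
    preprocess = dsl.ContainerOp(
        name="preprocess",
        image="gcr.io/my-project/preprocess:latest",   # hypothetical image
        command=["python", "preprocess.py"],
        arguments=["--input", data_path, "--output", "/tmp/clean.csv"],
    )

    train = dsl.ContainerOp(
        name="train",
        image="gcr.io/my-project/train:latest",        # hypothetical image
        command=["python", "train.py"],
        arguments=["--data", "/tmp/clean.csv"],
    )
    train.after(preprocess)   # enforce ordering: train only after preprocessing


if __name__ == "__main__":
    # Compile to a package that can be uploaded to a Kubeflow Pipelines cluster,
    # whether it runs on-premises or on Google Kubernetes Engine.
    kfp.compiler.Compiler().compile(train_pipeline, "train_pipeline.yaml")
```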

Google Cloud ML Engine has been around for a couple of years. The service has now been augmented with a variety of new features, including built-in algorithms, custom container-based training, and support for PyTorch. As a PaaS, Cloud ML Engine removes the friction involved in provisioning, configuring, and managing infrastructure.

Developers can submit ML training jobs created in TensorFlow, Keras, PyTorch, scikit-learn, and XGBoost. Google now offers built-in algorithms based on linear learner, wide-and-deep, and XGBoost models. Developers can also bring their own containers with custom frameworks to train at scale. The service additionally supports hyperparameter tuning to increase the accuracy of trained models.
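
To make this concrete, the sketch below shows one plausible way to submit a training job through the service’s REST API using google-api-python-client; the project ID, Cloud Storage paths, module name, and runtime version are assumptions used only for illustration.

```python
# A minimal sketch of submitting a training job to Cloud ML Engine /
# AI Platform Training via its "ml" v1 REST API. All identifiers below are
# hypothetical placeholders.
from googleapiclient import discovery

PROJECT_ID = "my-project"          # hypothetical project
JOB_ID = "census_training_001"     # job IDs must be unique within a project

training_inputs = {
    "scaleTier": "BASIC",          # single-worker tier; larger tiers scale out
    "packageUris": ["gs://my-bucket/trainer-0.1.tar.gz"],  # packaged trainer code
    "pythonModule": "trainer.task",                         # entry point module
    "region": "us-central1",
    "runtimeVersion": "1.13",      # an ML runtime version available at the time
    "pythonVersion": "3.5",
}

ml = discovery.build("ml", "v1")
response = ml.projects().jobs().create(
    parent=f"projects/{PROJECT_ID}",
    body={"jobId": JOB_ID, "trainingInput": training_inputs},
).execute()
print(response["state"])           # e.g. QUEUED once the job is accepted
```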

AI Platform Notebooks enables developers to create and manage virtual machine (VM) instances that come pre-packaged with JupyterLab. These notebook instances can be synchronized with GitHub for iterative development and deployment. AI Platform Notebooks are configured with the core packages needed for TensorFlow and PyTorch environments, and GPU-enabled instances ship with the latest NVIDIA drivers.

The models generated by training jobs, Notebooks, and external tools can be deployed on AI Platform for scalable hosting. This service supports both online and batch predictions. Since jobs and Notebooks can be used without provisioning infrastructure, they represent the Platform as a Service (PaaS) component of AI Platform.
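
As an illustration, the following sketch requests online predictions from a deployed model via the same "ml" v1 REST API; the project, model, and version names, as well as the instance payload, are hypothetical and would need to match your own deployment.

```python
# A minimal sketch of requesting online predictions from a model hosted on
# AI Platform. Model and version names are hypothetical placeholders.
from googleapiclient import discovery

PROJECT_ID = "my-project"      # hypothetical project
MODEL_NAME = "census"          # hypothetical deployed model
VERSION = "v1"                 # a specific model version

ml = discovery.build("ml", "v1")
name = f"projects/{PROJECT_ID}/models/{MODEL_NAME}/versions/{VERSION}"

# Each entry in "instances" must match the input signature the model expects.
body = {"instances": [[25, "Private", 226802, "11th", 7]]}

response = ml.projects().predict(name=name, body=body).execute()
print(response.get("predictions"))
```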

Google AI Platform is one of the most comprehensive offerings in the public cloud to train, tune and deploy machine learning models.
