- Create a GCP Project, enable billing, and create a GCS bucket.
- Enable the required APIs.
- Generate an API key to use for submitting AI Platform Managed Pipelines jobs.
- Create an AI Notebook instance.
- Open JupyterLab, then open a new Terminal.
- Clone the repository to your AI Notebook instance:

```bash
git clone https://github.com/ksalama/ucaip-labs.git
cd ucaip-labs
```
- Open 00-env-setup.ipynb and run the cells to install the required packages.
The Chicago Taxi Trips dataset is one of the public datasets hosted on BigQuery. It includes taxi trips from 2013 to the present, reported to the City of Chicago in its role as a regulatory agency. The task is to predict whether a given trip will result in a tip greater than 20% of the fare.
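As a rough illustration of the labeling logic, the query below samples the public table and derives the binary label using BigQuery's Python client. The table and column names come from the public dataset, but the label column name (`tip_bin`) and the exact query used in the notebook are placeholders here.

```python
from google.cloud import bigquery

# A minimal sketch of deriving the binary label (tip >= 20% of the fare)
# from the public Chicago Taxi Trips table. The actual feature selection
# and sampling logic live in the 01-data-analysis-and-prep notebook.
client = bigquery.Client()  # assumes the project is configured via gcloud/env vars

query = """
SELECT
  trip_seconds,
  trip_miles,
  payment_type,
  fare,
  IF(tips / fare >= 0.2, 1, 0) AS tip_bin
FROM
  `bigquery-public-data.chicago_taxi_trips.taxi_trips`
WHERE
  fare > 0
LIMIT 1000
"""

sample_df = client.query(query).to_dataframe()
print(sample_df.tip_bin.value_counts())
```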
The 01-data-analysis-and-prep notebook covers:
- Performing exploratory data analysis on the data in BigQuery.
- Creating a managed AI Platform Dataset using the Python SDK.
- Generating the schema for the raw data using TensorFlow Data Validation (a short sketch follows this list).
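The schema-generation step can be sketched with TensorFlow Data Validation roughly as follows. The toy DataFrame and the output path are placeholders; the notebook computes statistics over the real BigQuery source data.

```python
import pandas as pd
import tensorflow_data_validation as tfdv

# A toy DataFrame standing in for a sample of the raw BigQuery data.
raw_train_df = pd.DataFrame({
    "trip_miles": [1.2, 3.4, 0.5],
    "payment_type": ["Credit Card", "Cash", "Credit Card"],
    "tip_bin": [1, 0, 1],
})

# Compute statistics, infer a schema from them, and inspect it.
stats = tfdv.generate_statistics_from_dataframe(raw_train_df)
schema = tfdv.infer_schema(statistics=stats)
tfdv.display_schema(schema)

# Persist the schema so later steps (e.g. the TFX pipeline) can import it.
tfdv.write_schema_text(schema, "schema.pbtxt")
```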
We experiment with creating two models: an AutoML Tables model and a custom Keras model.
The 02-1-experimentation-automl notebook covers:
- Using AutoML Tables to create a classification model (a short sketch follows this list).
- Retrieving the evaluation metrics from the AutoML model.
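A minimal sketch of the AutoML experiment using the google-cloud-aiplatform SDK is shown below. The project, region, dataset resource name, display names, and target column are all placeholders; the notebook is the source of truth.

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # assumed IDs

# Reference the managed dataset created in the previous notebook
# (the resource name below is a placeholder).
dataset = aiplatform.TabularDataset(
    "projects/my-project/locations/us-central1/datasets/1234567890")

# Launch an AutoML Tables classification job against that dataset.
job = aiplatform.AutoMLTabularTrainingJob(
    display_name="chicago-taxi-automl",            # assumed name
    optimization_prediction_type="classification",
)

model = job.run(
    dataset=dataset,
    target_column="tip_bin",                       # assumed label column
    budget_milli_node_hours=1000,
    model_display_name="chicago-taxi-automl-model",
)
```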
The 02-2-experimentation-keras notebook covers:
- Preparing the data using Dataflow.
- Implementing a Keras classification model (a short sketch follows this list).
- Training the Keras model on AI Platform using a pre-built container.
- Uploading the exported model from Cloud Storage to AI Platform as a Model.
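The sketch below shows the general shape of such a Keras classifier. The feature names and layer sizes are illustrative only; the real model in the notebook is built from the transformed feature spec and trained on AI Platform with a pre-built TensorFlow container.

```python
import tensorflow as tf

def create_model(feature_keys):
    # One scalar input per feature, concatenated into a single dense stack.
    inputs = {key: tf.keras.Input(shape=(1,), name=key) for key in feature_keys}
    concatenated = tf.keras.layers.Concatenate()(list(inputs.values()))
    hidden = tf.keras.layers.Dense(64, activation="relu")(concatenated)
    output = tf.keras.layers.Dense(1, activation="sigmoid")(hidden)

    model = tf.keras.Model(inputs=inputs, outputs=output)
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )
    return model

# Assumed numeric feature names, for illustration only.
model = create_model(["trip_seconds", "trip_miles", "fare"])
model.summary()
```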
We serve the model trained either with AutoML Tables or with a custom training job for predictions and explanations. The 03-model-serving notebook covers:
- Creating an AI Platform Endpoint.
- Deploying the AutoML Tables model and the custom model to the endpoint (a short sketch follows this list).
- Testing the endpoint for online prediction.
- Getting online explanations from the AutoML Tables model.
- Using the uploaded custom model for batch prediction.
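Endpoint creation, deployment, and online prediction can be sketched with the google-cloud-aiplatform SDK roughly as follows. Resource names, the machine type, and the instance payload are placeholders and must match the deployed model's expected inputs.

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # assumed IDs

# Create an endpoint and deploy an already-uploaded model to it.
endpoint = aiplatform.Endpoint.create(display_name="chicago-taxi-endpoint")  # assumed name
model = aiplatform.Model(
    "projects/my-project/locations/us-central1/models/1234567890")           # assumed resource name

endpoint.deploy(
    model=model,
    machine_type="n1-standard-2",
    traffic_percentage=100,
)

# Online prediction against the deployed model; the instance below is
# illustrative only.
prediction = endpoint.predict(
    instances=[{"trip_seconds": "420", "trip_miles": "1.2", "payment_type": "Credit Card"}]
)
print(prediction.predictions)
```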
We build an end-to-end TFX training pipeline that performs the following steps (a rough wiring sketch follows the list):
- Receive hyperparameters using the hyperparam_gen custom Python component.
- Extract data from BigQuery using BigQueryExampleGen.
- Validate the raw data using StatisticsGen and ExampleValidator.
- Process the data using Transform.
- Train a custom model using Trainer.
- Train an AutoML Tables model using the automl_trainer custom Python component.
- Evaluate the custom model using ModelEvaluator.
- Validate the custom model against the AutoML Tables model using a custom Python component.
- Save the blessed model to the model registry location using Pusher.
- Upload the model to AI Platform using the aip_model_pusher custom Python component.
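The standard TFX steps above could be wired together roughly as in the sketch below. The custom Python components (hyperparam_gen, automl_trainer, the model-validation component, and aip_model_pusher) are omitted, and component arguments and import paths vary across TFX versions, so treat this as an outline rather than the pipeline code in this repository.

```python
from tfx import components
from tfx.extensions.google_cloud_big_query.example_gen.component import BigQueryExampleGen
from tfx.orchestration import pipeline
from tfx.proto import pusher_pb2, trainer_pb2

def create_pipeline(pipeline_name: str, pipeline_root: str, query: str,
                    module_file: str, serving_model_dir: str):
    # Extract training data from BigQuery.
    example_gen = BigQueryExampleGen(query=query)

    # Compute statistics and validate the raw data.
    statistics_gen = components.StatisticsGen(
        examples=example_gen.outputs["examples"])
    schema_gen = components.SchemaGen(
        statistics=statistics_gen.outputs["statistics"])
    example_validator = components.ExampleValidator(
        statistics=statistics_gen.outputs["statistics"],
        schema=schema_gen.outputs["schema"])

    # Preprocess the data and train the custom model.
    transform = components.Transform(
        examples=example_gen.outputs["examples"],
        schema=schema_gen.outputs["schema"],
        module_file=module_file)
    trainer = components.Trainer(
        module_file=module_file,
        examples=transform.outputs["transformed_examples"],
        transform_graph=transform.outputs["transform_graph"],
        schema=schema_gen.outputs["schema"],
        train_args=trainer_pb2.TrainArgs(num_steps=1000),
        eval_args=trainer_pb2.EvalArgs(num_steps=100))

    # Push the model to a registry location on the filesystem / GCS.
    pusher = components.Pusher(
        model=trainer.outputs["model"],
        push_destination=pusher_pb2.PushDestination(
            filesystem=pusher_pb2.PushDestination.Filesystem(
                base_directory=serving_model_dir)))

    return pipeline.Pipeline(
        pipeline_name=pipeline_name,
        pipeline_root=pipeline_root,
        components=[example_gen, statistics_gen, schema_gen,
                    example_validator, transform, trainer, pusher])
```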
We have the following notebooks for the ML training pipeline:
- The 04-tfx-interactive notebook covers testing the pipeline components interactively.
- The 05-tfx-local-run notebook covers running the end-to-end pipeline locally.
- The 06-tfx-kfp-deploy notebook covers compiling and deploying the pipeline to AI Platform Hosted KFP.
- The 07-tfx-managed-run notebook covers compiling and running the pipeline on AI Platform Managed Pipelines (see the sketch below).
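Compiling and submitting the pipeline to AI Platform Managed Pipelines can be sketched along these lines, reusing the create_pipeline outline above. The runner and client APIs shown here follow the TFX and KFP v2 releases current at the time and may differ in newer versions; all resource names, images, and paths are placeholders.

```python
from tfx.orchestration.kubeflow.v2 import kubeflow_v2_dag_runner
from kfp.v2.google.client import AIPlatformClient

PIPELINE_SPEC = "chicago-taxi-pipeline.json"  # assumed output file name

# Compile the TFX pipeline into a pipeline job spec understood by
# AI Platform Managed Pipelines. Config options (e.g. the default image)
# vary across TFX versions.
runner = kubeflow_v2_dag_runner.KubeflowV2DagRunner(
    config=kubeflow_v2_dag_runner.KubeflowV2DagRunnerConfig(
        default_image="gcr.io/my-project/tfx-image"),  # assumed custom image
    output_filename=PIPELINE_SPEC,
)
runner.run(
    create_pipeline(
        pipeline_name="chicago-taxi-pipeline",           # assumed values
        pipeline_root="gs://my-bucket/pipeline-root",
        query="SELECT ...",                              # placeholder query
        module_file="model.py",
        serving_model_dir="gs://my-bucket/serving",
    )
)

# Submit the compiled job spec to the managed service.
client = AIPlatformClient(project_id="my-project", region="us-central1")
client.create_run_from_job_spec(PIPELINE_SPEC)
```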