
Potato Disease Classification

Setup for Python:

  1. Install Python (Setup instructions)

  2. Install Python packages

pip3 install -r training/requirements.txt
pip3 install -r api/requirements.txt
  3. Install TensorFlow Serving (Setup instructions)

Setup for ReactJS

  1. Install Node.js (Setup instructions)
  2. Install npm (Setup instructions)
  3. Install dependencies
cd frontend
npm ci
npm audit fix
  4. Copy .env.example as .env.

  5. Change the API URL in .env.

Setup for React-Native app

  1. Go to the React Native environment setup, then select the React Native CLI Quickstart tab.

  2. Install dependencies

cd mobile-app
yarn install
  2.1. Only for macOS users
cd ios && pod install && cd ../
  3. Copy .env.example as .env.

  4. Change the API URL in .env.

Training the Model

  1. Download the data from Kaggle.
  2. Only keep the folders related to potatoes.
  3. Run Jupyter Notebook in the browser.
jupyter notebook
  4. Open training/potato-disease-training.ipynb in Jupyter Notebook.
  5. In cell #2, update the path to the dataset.
  6. Run all the cells one by one.
  7. Copy the generated model and save it with a version number in the models folder (see the sketch below).
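
The last step can be scripted from inside the notebook. A minimal sketch, assuming `model` is the trained Keras model and that versions live as numbered sub-folders under ../models; adjust the path to your repository layout.

import os

# Pick the next free version number under ../models (assumed layout: models/1, models/2, ...)
os.makedirs("../models", exist_ok=True)
model_version = max([int(v) for v in os.listdir("../models") if v.isdigit()], default=0) + 1
model.save(f"../models/{model_version}")  # SavedModel format, which TF Serving can load directly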

Running the API

Using FastAPI

  1. Get inside the api folder
cd api
  2. Run the FastAPI server using Uvicorn
uvicorn main:app --reload --host 0.0.0.0
  3. Your API is now running at 0.0.0.0:8000 (see the test snippet below)
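
Once the server is up, you can test it from Python instead of a browser. This is only a sketch: the /predict route and the "file" form field are assumptions about a typical FastAPI image-upload endpoint; check main.py for the actual route and field name.

import requests

# Post a potato leaf image to the running FastAPI server (route and field name are assumptions)
with open("test_leaf.jpg", "rb") as f:  # replace with any leaf image on disk
    response = requests.post(
        "http://localhost:8000/predict",
        files={"file": ("test_leaf.jpg", f, "image/jpeg")},
    )
print(response.status_code, response.json())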

Using FastAPI & TF Serving

  1. Get inside the api folder
cd api
  2. Copy models.config.example as models.config and update the paths in the file (see the sketch after this list).
  3. Run TF Serving (update the config file path below)
docker run -t --rm -p 8501:8501 -v C:/Code/potato-disease-classification:/potato-disease-classification tensorflow/serving --rest_api_port=8501 --model_config_file=/potato-disease-classification/models.config
  4. Run the FastAPI server using Uvicorn. You can run it directly from main.py or main-tf-serving.py using the PyCharm run option (as shown in the video tutorial), or from the command prompt as shown below:
uvicorn main-tf-serving:app --reload --host 0.0.0.0
  5. Your API is now running at 0.0.0.0:8000
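
For reference, TF Serving reads models.config in the standard model_config_list text format. The sketch below is illustrative only; the model name and base_path are assumptions, so keep the values from your own models.config.example.

model_config_list {
  config {
    name: "potatoes_model"                              # assumed name; must match what main-tf-serving.py requests
    base_path: "/potato-disease-classification/models"  # assumed path inside the container (see the -v mount above)
    model_platform: "tensorflow"
  }
}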

Running the Frontend

  1. Get inside the frontend folder
cd frontend
  2. Copy .env.example as .env and update REACT_APP_API_URL to the API URL if needed (see the sketch after this list).
  3. Run the frontend
npm run start
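
A minimal frontend/.env could look like the following; the host, port, and /predict path are assumptions, so point REACT_APP_API_URL at whichever route your API actually serves.

# frontend/.env (sketch; adjust to your API)
REACT_APP_API_URL=http://localhost:8000/predict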

Running the app

  1. Get inside the mobile-app folder
cd mobile-app
  2. Copy .env.example as .env and update URL to the API URL if needed.

  3. Run the app (Android/iOS)

npm run android

or

npm run ios
  4. Creating a public (signed) APK

Creating the TF Lite Model

  1. Run Jupyter Notebook in the browser.
jupyter notebook
  2. Open training/tf-lite-converter.ipynb in Jupyter Notebook.
  3. In cell #2, update the path to the dataset.
  4. Run all the cells one by one.
  5. The model will be saved in the tf-lite-models folder (see the sketch below).
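
The conversion the notebook performs boils down to the standard TFLiteConverter flow. The sketch below is a stand-alone illustration; the ../models/1 input path and the potatoes.tflite output name are assumptions, and the notebook's own paths take precedence.

import tensorflow as tf

# Convert a trained SavedModel to TF Lite (paths are assumptions; use the ones from the notebook)
converter = tf.lite.TFLiteConverter.from_saved_model("../models/1")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # optional post-training quantization
tflite_model = converter.convert()

with open("../tf-lite-models/potatoes.tflite", "wb") as f:
    f.write(tflite_model)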

Deploying the TF Lite Model on GCP

  1. Create a GCP account.
  2. Create a Project on GCP (Keep note of the project id).
  3. Create a GCP bucket.
  4. Upload the potatoes.h5 model to the bucket at the path models/potatos.h5.
  5. Install Google Cloud SDK (Setup instructions).
  6. Authenticate with Google Cloud SDK.
gcloud auth login
  7. Run the deployment script.
cd gcp
gcloud functions deploy predict_lite --runtime python38 --trigger-http --memory 512 --project project_id
  8. Your model is now deployed.
  9. Use Postman to test the GCF using the Trigger URL (or the Python snippet below).
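
If you prefer a script over Postman, the trigger URL can be exercised with Python's requests. The "file" form field is an assumption about how the function reads the upload; adjust it to match gcp/main.py.

import requests

# Trigger URL printed by `gcloud functions deploy` (region and project id are placeholders)
TRIGGER_URL = "https://REGION-PROJECT_ID.cloudfunctions.net/predict_lite"

with open("test_leaf.jpg", "rb") as f:  # any potato leaf image on disk
    response = requests.post(TRIGGER_URL, files={"file": ("test_leaf.jpg", f, "image/jpeg")})
print(response.status_code, response.json())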

Inspiration: https://cloud.google.com/blog/products/ai-machine-learning/how-to-serve-deep-learning-models-using-tensorflow-2-0-with-cloud-functions

Deploying the TF Model (.h5) on GCP

  1. Create a GCP account.
  2. Create a Project on GCP (Keep note of the project id).
  3. Create a GCP bucket.
  4. Upload the generated TF .h5 model to the bucket at the path models/potato-model.h5.
  5. Install Google Cloud SDK (Setup instructions).
  6. Authenticate with Google Cloud SDK.
gcloud auth login
  7. Run the deployment script (a sketch of what the deployed function can look like follows this list).
cd gcp
gcloud functions deploy predict --runtime python38 --trigger-http --memory 512 --project project_id
  8. Your model is now deployed.
  9. Use Postman to test the GCF using the Trigger URL.
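
For orientation, an HTTP Cloud Function that serves a Keras .h5 model from a bucket usually follows the pattern below. This is a sketch, not the repository's gcp/main.py: the bucket name, the 256x256 input size, the [0, 1] scaling, and the class names are all assumptions used for illustration.

import numpy as np
import tensorflow as tf
from PIL import Image
from google.cloud import storage

BUCKET_NAME = "your-bucket-name"                          # assumption: set to your GCP bucket
CLASS_NAMES = ["Early Blight", "Late Blight", "Healthy"]  # assumption: order must match training
model = None  # cached between warm invocations


def download_blob(bucket_name, source_blob, destination_file):
    """Download a file from the bucket to the local /tmp filesystem."""
    client = storage.Client()
    client.bucket(bucket_name).blob(source_blob).download_to_filename(destination_file)


def predict(request):
    """HTTP entry point; the name matches the deployed function name 'predict'."""
    global model
    if model is None:
        download_blob(BUCKET_NAME, "models/potato-model.h5", "/tmp/potato-model.h5")
        model = tf.keras.models.load_model("/tmp/potato-model.h5")

    upload = request.files["file"]                                # assumption: multipart field named "file"
    image = Image.open(upload).convert("RGB").resize((256, 256))  # assumption: model expects 256x256 RGB
    batch = np.expand_dims(np.array(image) / 255.0, axis=0)       # assumption: inputs scaled to [0, 1]

    predictions = model.predict(batch)[0]
    return {  # Flask JSON-encodes a returned dict
        "class": CLASS_NAMES[int(np.argmax(predictions))],
        "confidence": float(np.max(predictions)),
    }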

Inspiration: https://cloud.google.com/blog/products/ai-machine-learning/how-to-serve-deep-learning-models-using-tensorflow-2-0-with-cloud-functions