Cocoa Bean Prediction is a web application that utilizes deep learning to classify cocoa beans into one of six categories. The application takes an image, either from a camera or a local directory, and processes it through a Flask API that hosts the predictive models.
The cocoa beans can be classified into the following types:
- Bean Fraction
- Broken Bean
- Fermented Bean
- Moldy Bean
- Unfermented Bean
- Whole Bean
There are four folders in the directory. The `data analysis and modelling` folder contains the notebook files for analysis, model definitions, and training. The `models` folder contains the trained models. The `static` folder contains CSS files and images. The `templates` folder contains the HTML files for the home and prediction result pages.
This notebook focuses on the data pipeline required for processing and preparing the data for model training. It includes the following:
- Data loading and cleaning: Methods to load and clean the raw data.
- Data preprocessing and augmentation: Methods to prepare and augment the data for better model training (sketched below).
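The notebook defines the actual pipeline; as a rough illustration, a Keras-based loading and augmentation step could look like the sketch below. The directory layout, image size, batch size, and augmentation choices here are assumptions, not the notebook's exact settings.

```python
import tensorflow as tf

IMG_SIZE = (224, 224)   # assumed input resolution
BATCH_SIZE = 32         # assumed batch size

# Load images from class-named subfolders (the "data/train" path is hypothetical).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train",
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    label_mode="categorical",
)

# Light on-the-fly augmentation plus rescaling to [0, 1].
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
    tf.keras.layers.Rescaling(1.0 / 255),
])

train_ds = train_ds.map(lambda x, y: (augment(x, training=True), y))
```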
This notebook contains the code for training the deep learning model used for classifying cocoa beans. It includes the following:
- Model architecture definition: Detailed architecture of the deep learning models used.
- Training and validation routines: Procedures for training the models and validating their performance.
- Evaluation metrics: Metrics used to evaluate the performance of the models (an illustrative training and evaluation snippet follows this list).
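As a hedged illustration of what the training and evaluation routines could look like (the optimizer, loss, epoch count, and metric are assumptions, and `model`, `train_ds`, `val_ds`, and `test_ds` stand in for a model and datasets defined elsewhere in the notebooks):

```python
# Compile, train with a validation set, then evaluate on a held-out test set.
model.compile(
    optimizer="adam",
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=20,  # assumed epoch count
)

test_loss, test_accuracy = model.evaluate(test_ds)
print(f"Test accuracy: {test_accuracy:.3f}")
```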
The Custom CNN model consists of the following layers:
- Convolutional Layers: Five convolutional layers are used to extract features from the images. These layers are followed by max-pooling layers to reduce the spatial dimensions, allowing the network to learn more complex patterns.
- Dense Layers: After flattening the output from the convolutional layers, the model includes two dense layers to further process the extracted features. Dropout is applied to reduce overfitting.
- Output Layer: The final layer is a dense layer with a softmax activation function, producing probabilities for each of the six cocoa bean categories (see the example below).
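A minimal Keras sketch of an architecture matching this description; the filter counts, dense widths, dropout rate, and input size are illustrative assumptions.

```python
from tensorflow.keras import layers, models

def build_custom_cnn(input_shape=(224, 224, 3), num_classes=6):
    model = models.Sequential([layers.Input(shape=input_shape)])
    # Five convolution + max-pooling blocks extract increasingly complex features.
    for filters in (32, 64, 128, 128, 256):
        model.add(layers.Conv2D(filters, (3, 3), activation="relu", padding="same"))
        model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Flatten())
    # Two dense layers with dropout to reduce overfitting.
    model.add(layers.Dense(256, activation="relu"))
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(128, activation="relu"))
    model.add(layers.Dropout(0.5))
    # Softmax output over the six cocoa bean categories.
    model.add(layers.Dense(num_classes, activation="softmax"))
    return model
```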
The InceptionV3 model is a pre-trained convolutional neural network (CNN) on the ImageNet dataset, repurposed for cocoa bean classification. The architecture includes:
- Base Model: The base of the model is the InceptionV3 network, excluding its top (fully connected) layers. This provides a robust feature extraction mechanism.
- Global Average Pooling: After the base model, a global average pooling layer reduces the spatial dimensions.
- Dense Layers: Similar to the Custom CNN, two dense layers are used with dropout to enhance learning and prevent overfitting.
- Output Layer: A dense softmax layer outputs the classification probabilities for the six categories (see the sketch below).
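A sketch of this transfer-learning setup; the input size, dense widths, dropout rate, and the choice to freeze the base are assumptions.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

def build_inception_model(input_shape=(299, 299, 3), num_classes=6):
    # ImageNet-pretrained base without its fully connected top layers.
    base = InceptionV3(weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False  # assumption: base frozen during initial training

    return models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
```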
The MobileNet model is another pre-trained network on ImageNet, designed for mobile and embedded vision applications. It is used for its efficiency and lightweight architecture:
- Base Model: The MobileNet architecture is used as the base model, excluding the top layers, which serves as a feature extractor.
- Global Average Pooling: A global average pooling layer follows the MobileNet base, condensing the features.
- Dense Layers: Two dense layers with dropout are added to adapt the model to the cocoa bean classification task.
- Output Layer: The final layer is a softmax layer that provides the classification probabilities for the six cocoa bean types (the short snippet below shows the base swap).
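The MobileNet variant follows the same head as the InceptionV3 sketch above; only the feature-extraction base and the (assumed) input size change.

```python
from tensorflow.keras.applications import MobileNet

# Swap the feature extractor; reuse the pooling, dense, dropout, and softmax head above.
base = MobileNet(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False
```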
The `app.py` file is the main Flask application that handles image uploads, predictions, and rendering the results. Key functionalities include:
- File uploads: Handling image file uploads and ensuring they are of the correct type (jpg, jpeg, png).
- Image processing: Loading and preparing the image for prediction.
- Model loading and prediction: Loading multiple models and predicting the class of the uploaded image.
- Result rendering: Rendering the prediction results, including the predicted class and confidence score (a condensed route example follows this list).
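A condensed sketch of how such a route could look; the route name, model path, class order, image size, and preprocessing steps are assumptions rather than the actual contents of `app.py`.

```python
import numpy as np
import tensorflow as tf
from flask import Flask, render_template, request

app = Flask(__name__)
ALLOWED_EXTENSIONS = {"jpg", "jpeg", "png"}
CLASSES = ["Bean Fraction", "Broken Bean", "Fermented Bean",
           "Moldy Bean", "Unfermented Bean", "Whole Bean"]
model = tf.keras.models.load_model("models/custom_cnn.h5")  # hypothetical model path

def allowed_file(filename):
    return "." in filename and filename.rsplit(".", 1)[1].lower() in ALLOWED_EXTENSIONS

@app.route("/predict", methods=["POST"])
def predict():
    file = request.files["file"]
    if not allowed_file(file.filename):
        return render_template("index.html", error="Unsupported file type")
    # Decode, resize, scale, and batch the image for the model.
    img = tf.image.decode_image(file.read(), channels=3)
    img = tf.image.resize(img, (224, 224)) / 255.0
    probs = model.predict(tf.expand_dims(img, axis=0))[0]
    idx = int(np.argmax(probs))
    return render_template("predict.html",
                           bean_type=CLASSES[idx],
                           confidence=float(probs[idx]))

if __name__ == "__main__":
    app.run(port=8000)
```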
The `index.html` file is the homepage of the application, where users can upload an image for classification. It includes:
- Form for image upload: Allows users to select and upload an image.
- Image preview: Displays a preview of the uploaded image.
- Buttons: Provides buttons to clear the preview or submit the form.
- Loading indicator: Displays a loading indicator while the prediction is processing.
The `predict.html` file displays the prediction results. It includes:
- Predicted bean type: Displays the predicted cocoa bean type.
- Confidence score: Shows the confidence score of the prediction.
- Upload another image button: Allows users to upload another image for classification.
The `styles.css` file contains custom styles for the web application.
The `images` folder is used to store images that have been classified by the model. This folder will be replaced by a database in the future.
The `assets` folder is used to store images and other resources that may be important.
- Python 3.7 or higher
- Flask
- TensorFlow
- OpenCV
- Clone the repository:
git clone https://github.com/Berchie-Sam/Cocoa_Bean_Prediction.git
- Install the required packages:
pip install -r requirements.txt
- Run the Flask API:
python app.py
Open your web browser and go to http://127.0.0.1:8000.
You can either take a picture using your camera or upload an image from your local directory.
The application will classify the uploaded image into one of the six cocoa bean categories.
- Image Upload: Upload an image from your local device or use the camera.
- Prediction: Get real-time classification of cocoa beans.
- API: A Flask-based API to handle image processing and predictions.
We welcome contributions! Please follow these steps to contribute:
- Fork the repository.
- Create a new branch:
git checkout -b feature-branch
- Commit your changes:
git commit -am 'Add new feature'
- Push to the branch:
git push origin feature-branch
- Create a new Pull Request.
- Note: Please ensure that your pull request targets the `main` branch of the repository when submitting contributions.
For any inquiries or questions, please contact us at soberchie@gmail.com.