
Careium is an AI-powered Android application that helps users lead a long, healthy life by tracking the foods they eat, along with their ingredients and nutrients. Careium's key advantage is estimating food ingredients from images: the user captures a food photo with the mobile camera or uploads a pre-captured image, and the application returns related information such as the food's nutrition components.



Careium-AI

        At this extreme moment, we began working from home, away from campus, and keeping social distance from as many people as possible. As we stay home and stick with the foods that have been in our fridge, we are temporarily living a sedentary lifestyle with increased odds of physical inactivity, excessive eating and sitting, stress, anxiety, and depression. Many of us will gain some weight during the pandemic and may keep the extra weight permanently, which may carry considerable health risks for type 2 diabetes, hypertension, heart attack, stroke, and other health problems. This is where the importance of Careium Artificial Intelligence (AI) Food Tracker comes in.

        Accordingly, the Careium-AI Food Tracker Android application helps diabetes and liver patients, as well as anyone who wants to track their health throughout the day, by analyzing daily meals: the user takes a picture of a meal, and the application detects the food type and calculates its calories and minerals. The application also follows up on weight gain or loss, and can help users avoid certain foods due to allergies. All of this is achieved using Deep Learning and Image Processing.

        Careium-AI Food Tracker shows astonishing results compared to what was expected for a simple use case. Whether capturing a food plate, a long dish, a deep dish, or even a scattered amount of food, Careium can still segment the food region, process it, and predict the main five nutrition components (calories, mass, fat, carbs, proteins); calories are measured in cal, and all others in grams. It can also classify a mixed dish into multiple food categories, generate periodic food-eaten reports, recommend a proper meal from a set of available meals in a given dataset, and give alarms for each mealtime.

🥣 Features

  • Providing weekly reports
  • Meals' time alarm
  • Capturing an image of the eaten food plate
  • Recognizing the food ingredients
  • Predicting the food nutrients (calories, fats, carbs, proteins)
  • Other user profile manipulations (e.g. profile viewing/editing, login/register authentication, etc.)

-----------------------------------------------------

🗂️ Table of Contents

  1. ➤ Background
  2. ➤ Architecture
  3. ➤ Datasets
  4. ➤ Development
  5. ➤ Testing
  6. ➤ User Manual
  7. ➤ Conclusion
  8. ➤ Contributors
  9. ➤ References
  10. ➤ Copyrights

-----------------------------------------------------

📋 Background

Paper 1: Food Detection and Recognition Using CNN [1]

        The researchers praised the effectiveness of CNNs for food image recognition and detection, finding that CNNs performed much better than traditional methods that use handcrafted features. Through observation of the trained convolution kernels, they confirmed that color features are essential to food image recognition. They also applied CNN to food detection, where it significantly outperformed a baseline method.


Paper CNN Accuracy

Paper 2: Food Recognition - New Dataset, Experiments and Results [2]

        In the paper, the researchers designed an automatic tray analysis pipeline that takes a tray image as input, finds the regions of interest, and predicts the corresponding food class for each region. They evaluated three different classification strategies using several visual descriptors. The best performance was obtained using Convolutional-Neural-Network-based features.


Paper Food Recognition Results

Paper 3: Deep Learning-Based Food Calorie Estimation Method in Dietary Assessment [3]

        Their method includes five steps: image acquisition, object detection, image segmentation, volume estimation, and calorie estimation. To estimate calories, the user must take a top view and a side view of the food with a smartphone before eating. Each image used for estimation must include a calibration object; in their experiments, the researchers use a One Yuan coin as a reference. For better results, they chose Faster Region-based Convolutional Neural Networks (Faster R-CNN) for object detection and GrabCut as the segmentation algorithm. To estimate the volume of each food, they used the following equations.


Paper Food Volume Equations
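The paper's own equations are shown in the figure above. As a hedged sketch of the underlying idea (derive a pixel-to-physical scale from the calibration coin, then convert volume to mass to calories), the fragment below uses illustrative density and energy-density values that are not taken from the paper:

```python
# Hedged sketch of the volume-to-calorie conversion in a calibrated
# two-view pipeline like Paper 3's. The density and kcal-per-gram values
# below are illustrative assumptions, not figures from the paper.

def pixel_to_cm_scale(ref_diameter_cm: float, ref_diameter_px: float) -> float:
    """cm per pixel, derived from the calibration object (e.g. a coin)."""
    return ref_diameter_cm / ref_diameter_px

def estimate_calories(volume_cm3: float, density_g_per_cm3: float,
                      kcal_per_gram: float) -> float:
    """calories = volume * density * energy density."""
    mass_g = volume_cm3 * density_g_per_cm3
    return mass_g * kcal_per_gram

# A One Yuan coin is 2.5 cm across; assume it spans 100 px in the image.
scale = pixel_to_cm_scale(2.5, 100)  # 0.025 cm per pixel
# Hypothetical food segment: 200 cm^3, density 0.61 g/cm^3, 0.52 kcal/g
print(round(estimate_calories(200, 0.61, 0.52), 1))  # 63.4
```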

Survey of the Work

        We posted an online Google Forms survey on different social media apps to gather statistical analysis and, most importantly, to learn how motivating our idea is to ordinary people.

Description of Existing Similar Systems

  • HealthifyMe [4]
            A mobile application that provides smart meal plans and customized workout plans with certified fitness coaches, and tracks daily calorie intake, weight goals, and workouts. It also offers sleep monitoring, a meal journal, a step counter, health advice, recipes, and a daily dose of motivation for fitness goals through fresh content on the app's feed.

  • MyFitnessPal [5]
            A mobile application that gives accurate nutrition facts for over 14 million foods. Users can easily log everything they eat in the food diary, join an engaged online community of 200 million members, browse 250+ healthy recipes and 150+ workouts that keep routines fresh and fun, and log food by scanning a barcode.


Existing Systems Comparison

-----------------------------------------------------

📐 Architecture


Careium-AI System Architecture

Phases Description

  • Food Detection Model: Takes an image as input and, using Deep Neural Networks, produces a label indicating whether the image contains food or not. (future work)
  • Food Recognition Model: Takes the detected food image, pre-processes it, and returns the food label with the highest probability.
  • Nutrition Extractor Model: Predicts the nutrition components (calories, mass, fat, carbs, proteins) of the recognized food.
  • Recommendation Model: Depends mostly on the user profile (calories & nutrition needs); based on the user's behavior, a set of proper foods/meals is recommended.
  • Daily Meals Reminder: Periodically reminds the healthy-life seeker of each mealtime.
  • Report Generation: Generates reports according to the statistics and BMR based on user activities.
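Report generation relies on BMR, but the exact formula the app uses is not documented here; the sketch below assumes the widely used Mifflin-St Jeor equation:

```python
# Minimal BMR sketch for the Report Generation phase, assuming the
# Mifflin-St Jeor equation (the app's actual formula is not stated):
#   BMR = 10*weight(kg) + 6.25*height(cm) - 5*age + 5   (male)
#   BMR = 10*weight(kg) + 6.25*height(cm) - 5*age - 161 (female)

def bmr_mifflin_st_jeor(weight_kg: float, height_cm: float,
                        age_years: int, is_male: bool) -> float:
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age_years
    return base + 5 if is_male else base - 161

print(bmr_mifflin_st_jeor(70, 175, 25, True))  # 1673.75 kcal/day
```

The height, weight, age, and gender inputs match exactly the fields collected on the second registration screen (see the User Manual below).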

Use Case Scenario


Careium-AI Use Case Scenario

-----------------------------------------------------

💽 Datasets

Nutrition5K [6]

According to Google research datasets [7]:

  • Nutrition5k is a dataset of visual and nutritional data for ~5k realistic plates of food.
  • Nutrition5k contains scan data for 5,006 plates of food, each including:
    • 4 rotating side-angle videos
    • Overhead RGB-D images (when available)
    • Fine-grained list of ingredients
    • Per-ingredient mass
    • Total dish mass and calories
    • Fat, protein, and carbohydrate macronutrient masses
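As a sketch of how such per-dish labels map onto a 5-value regression target, the snippet below assumes a simplified, hypothetical CSV layout; the real Nutrition5k metadata files carry additional per-ingredient fields:

```python
import csv
import io

# Hypothetical layout: dish_id, total_calories, total_mass, total_fat,
# total_carb, total_protein. This is a simplified sketch, not the exact
# Nutrition5k metadata schema.
sample = "dish_001,240.0,215.0,11.0,26.0,8.5\n"

def parse_targets(row):
    """Split a metadata row into a dish id and a 5-value target vector."""
    dish_id, *vals = row
    return dish_id, [float(v) for v in vals]

reader = csv.reader(io.StringIO(sample))
dish, targets = parse_targets(next(reader))
print(dish, targets)  # dish_001 [240.0, 215.0, 11.0, 26.0, 8.5]
```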

Food101 [8]

According to the Food101 paper [9], Food101 is a dataset of 101 food categories with 101,000 real-world images in total:

  • 750 training images per class
  • 250 test images per class
  • the training images were not cleaned, and thus still contain some amount of noise
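The split arithmetic works out as follows:

```python
# Food101 split: 101 classes x (750 train + 250 test) images per class
classes = 101
train_per_class, test_per_class = 750, 250
total = classes * (train_per_class + test_per_class)
print(total)  # 101000
```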

-----------------------------------------------------

💻 Development

General Guidelines & Preprocessing Techniques

  • Uses Nutrition5k Dataset for Regression
  • Uses Food101 Dataset for Classification
  • Train-Valid-Test split
    • Train Size => 2771
    • Valid Size => 380
    • Test Size => 360
  • Target Size (24x24) and Batch Size (16)
  • Training/Evaluation Metric
  • Inputs/Outputs Normalization
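The split and normalization guidelines above can be sketched in plain NumPy; the shuffling and the min-max helper are illustrative, not the project's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for a reproducible sketch

# Stand-in for the 3,511 Nutrition5k samples used:
# 2771 train / 380 valid / 360 test, as listed above.
n_samples = 2771 + 380 + 360
indices = rng.permutation(n_samples)
train_idx, valid_idx, test_idx = np.split(indices, [2771, 2771 + 380])

def normalize(x, lo, hi):
    """Min-max normalization of inputs/outputs into [0, 1]."""
    return (x - lo) / (hi - lo)

print(len(train_idx), len(valid_idx), len(test_idx))  # 2771 380 360
```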

R1 Trials & Outcomes

| Model | Test Loss | Cal Loss | Mass Loss | Fat Loss | Carb Loss | Prot Loss |
|---|---|---|---|---|---|---|
| 1. ML Model | 0.8764 | 0.172 | 0.181 | 0.173 | 0.182 | 0.163 |
| 2. Sequential DL Model | 0.7613 | 0.143 | 0.151 | 0.151 | 0.163 | 0.153 |
| 3. Pre-trained MobileNet | 0.4768 | 0.112 | 0.094 | 0.092 | 0.087 | 0.091 |
| 4. Pre-trained Inception-v3 | 0.2957 | 0.076 | 0.064 | 0.061 | 0.052 | 0.042 |
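A quick consistency check on the figures above: each reported Test Loss equals, to rounding, the sum of the five per-component losses, which suggests an equally weighted multi-output objective:

```python
# Each model's Test Loss matches the sum of its five per-component losses
# (to rounding), consistent with an equally weighted multi-output loss.
results = {
    "ML Model":                 (0.8764, [0.172, 0.181, 0.173, 0.182, 0.163]),
    "Sequential DL Model":      (0.7613, [0.143, 0.151, 0.151, 0.163, 0.153]),
    "Pre-trained MobileNet":    (0.4768, [0.112, 0.094, 0.092, 0.087, 0.091]),
    "Pre-trained Inception-v3": (0.2957, [0.076, 0.064, 0.061, 0.052, 0.042]),
}
for name, (total, parts) in results.items():
    assert abs(total - sum(parts)) < 0.01, name
print("all totals match the sum of component losses")
```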


Careium-AI R1 Architectures

C1 Trials & Outcomes

| | Base Model Accuracy | Optimized Model Accuracy |
|---|---|---|
| Training | 61.23% | 81.34% |
| Testing | 51%-68% | 78.50% |


Careium-AI C1 Performance

-----------------------------------------------------

🔌 Testing

Unit Testing

We tested the core application pillars by adding unit tests for the critical functions in the application.

  • A snapshot of a unit test to validate the login input

  • A snapshot of another unit test to validate the register input
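The real tests live in the Android codebase (JUnit/Truth). As a language-neutral sketch of the kind of login/register input validation they cover, the fragment below uses hypothetical function names and rules:

```python
import re

# Hypothetical validation logic of the kind the login/register unit
# tests exercise; the actual implementation is in the Android project.
EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$")

def validate_login(email: str, password: str) -> bool:
    """Accept a syntactically valid email and a password of 8+ chars."""
    return bool(EMAIL_RE.match(email)) and len(password) >= 8

def validate_register(email: str, password: str, confirm: str) -> bool:
    """Register additionally requires the confirm field to match."""
    return validate_login(email, password) and password == confirm

# Assertions in the style of the unit-test snapshots above
assert validate_login("user@example.com", "secret123")
assert not validate_login("not-an-email", "secret123")
assert not validate_register("user@example.com", "secret123", "mismatch")
print("validation checks passed")
```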

UI Testing

We tested the frontend UI by adding UI tests that check the visibility of screens, text, buttons, and much more.

-----------------------------------------------------

📖 User Manual

Installation Guide --> FOR OPEN SOURCE DEVELOPERS

  • Please refer to the project's documentation to learn more about the project scenario
  • Also, refer to CONTRIBUTING.md for the purpose of fixing a bug or proposing a feature
  • Navigate to the main/Models directory to take a look at the AI models used
  • Install Android Studio, Bumblebee version or higher
  • Sync the app-level build.gradle file with minSdk 28 and compileSdk 31 or higher, making sure the additional dependencies below exist
    // Drawer Layout
    implementation 'androidx.drawerlayout:drawerlayout:1.1.1'
    
    // Rounded Image View
    implementation 'com.makeramen:roundedimageview:2.3.0'
    
    // Bottom Navigation Bar
    implementation 'com.etebarian:meow-bottom-navigation-java:1.2.0'
    
    // Circular Floating Action Menu
    implementation 'com.oguzdev:CircularFloatingActionMenu:1.0.2'
    implementation 'androidx.legacy:legacy-support-v4:1.0.0'
    
    // TensorFlow Lite
    implementation('org.tensorflow:tensorflow-lite:2.4.0') { changing = true }
    implementation('org.tensorflow:tensorflow-lite-support:0.1.0') { changing = true }
    implementation 'org.tensorflow:tensorflow-lite-metadata:0.1.0'
    implementation 'com.google.firebase:firebase-storage-ktx:20.0.1'
    implementation 'androidx.gridlayout:gridlayout:1.0.0'
    
    // ViewModel
    def lifecycle_version = "2.1.0"
    //noinspection GradleDependency
    implementation "androidx.lifecycle:lifecycle-extensions:$lifecycle_version"
    
    // YoYo animation
    implementation 'com.daimajia.androidanimations:library:2.4@aar'
    
    // Firebase
    implementation 'com.google.firebase:firebase-auth-ktx:21.0.6'
    implementation 'com.google.firebase:firebase-database:20.0.5'
    
    // Testing
    testImplementation 'junit:junit:4.13.2'
    androidTestImplementation 'androidx.test.ext:junit:1.1.3'
    androidTestImplementation 'androidx.test.espresso:espresso-core:3.4.0'
    testImplementation "com.google.truth:truth:1.1.3"
    androidTestImplementation "com.google.truth:truth:1.1.3"
    androidTestImplementation 'androidx.test:runner:1.4.0'
    androidTestImplementation 'androidx.test:rules:1.4.0'
    androidTestImplementation 'androidx.test:core:1.4.0'
    androidTestImplementation 'androidx.test.ext:junit-ktx:1.1.3'
    

To gain access to the database server, please contact one of the main admins.

Getting Started --> FOR END-USERS GUIDANCE

  • Download the app-release.apk

  • Follow the tips below for more guidance on how to use Careium-AI

    • TIP #1: Make sure that you are connected to the internet

    • TIP #2: On the first screen that appears, you can select if you want to login or register


    • TIP #3: If you click on Register, you will need to go through three separate screens to enter your data

      • First Screen: Name, Email, Password and Confirm Password
      • Second Screen: Height, Weight, Age and Gender
      • Third Screen: adjust your goals


    • TIP #3 (CONT): If you click on Login, a screen will appear to enter the previously registered email and password


    • TIP #4: Home screen navigations

      • Total calories remaining according to the user's needs (previously determined from their personal goals)
      • Left/bottom navigation bar that enables the user to explore the application features
      • Floating action button that enables the user to record an action (e.g. the food tracker camera, ...)


    • TIP #5: Users can make private notes that help them to enhance their diet


    • TIP #6: Users can make custom alarms to remind them of each mealtime


    • TIP #7: Careium-AI provides the user with weekly reports that contain detailed information since the last report


    • TIP #8: The user can track their meal using the app camera to get information about:

      • The meal name
      • Total meal calories
      • Total meal proteins
      • Total meal fats
      • Total meal carbs


    • TIP #9: Careium-AI stores each meal image in the meal studio


    • TIP #10: The meal notification message is received at the time selected earlier (from TIP #6)


-----------------------------------------------------

💡 Conclusion

        In summary, Careium-AI Food Tracker shows astonishing results compared to what was expected for a simple use case. Whether the food is a plate, a long dish, a deep dish, or even a scattered amount, Careium can still segment the food region, process it, and predict the main five nutrition components (calories, mass, fat, carbs, proteins). Beyond that, it classifies a mixed dish into multiple food categories, generates periodic food-eaten reports, recommends a proper meal from a set of available meals, and gives alarms for each mealtime.

        As the Nutrition5k authors [6] report, human portion size estimation studies provide evidence of the challenging nature of visual annotation, demonstrating the limitations of typical data annotation approaches in this space. They validate the effectiveness of their data collection approach and the resulting Nutrition5k dataset by training a neural network that can outperform professional nutritionists at caloric and macronutrient estimation in the generic food setting, and further introduce baseline approaches that incorporate depth data to significantly improve on direct nutritional prediction from 2D images alone.

-----------------------------------------------------

✨ Contributors

Credits and thanks go to these wonderful people (emoji key):

Kareem Sherif's Image
Kareem S. Fathy

🔣 💻 🐛 ♿️
Kareem Saeed's Image
Kareem S. Ragab

💻 🐛 ♿️
Abanoub Asaad's Image
Abanoub A. Azaab

📝 💬
Nada ElSayed's Image
Nada S. Anies

💻 🐛 🎨
Nada Mohamed's Image
Nada M. Abdelhamed

🎨 📖

A word from our leaders

Our hope is that the release of this project will inspire further innovation in the automatic nutritional understanding space and provide a benchmark for the evaluation of future techniques. Download it and enjoy our release ☺️.

-----------------------------------------------------

📚 References

    [1] Food Detection and Recognition Using CNN: cited paper
    [2] Food Recognition - New Dataset, Experiments and Results: cited paper
    [3] Deep Learning-Based Food Calorie Estimation Method in Dietary Assessment: cited paper
    [4] HealthifyMe: related existing application (Last Accessed on 03-2022)
    [5] MyFitnessPal: related existing application (Last Accessed on 05-2022)
    [6] github.com/google-research-datasets/Nutrition5k (Last Accessed on 04-2022)
    [7] Google research datasets (Last Accessed on 04-2022)
    [8] Food 101 | Kaggle (Last Accessed on 04-2022)
    [9] Food-101: Mining Discriminative Components with Random Forests (the Food101 paper)
    [10] Careium APK (app release) (Last Accessed on 07-2022)

-----------------------------------------------------

©️ Copyrights