culinary

Uses computer vision to identify ingredients in a picture, combined with an algorithm that determines when items are added to or removed from a pantry. The goal is a smart pantry that keeps track of what you have and what you need to buy.

The current approach uses the YOLOv3 object detector to identify ingredients in a picture. The detector is trained on the Food-101 dataset, which contains 101 classes of food; 80% of the dataset is used for training and the remaining 20% for testing.
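The 80/20 split can be sketched as a deterministic shuffle-and-cut. This is an illustration only; the file paths, seed, and helper name are assumptions, not taken from the project's actual training scripts:

```python
import random

def train_test_split(samples, train_fraction=0.8, seed=42):
    """Shuffle samples deterministically, then cut into train/test sets."""
    shuffled = list(samples)
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

# Hypothetical image paths in a Food-101-style directory layout.
images = [f"food-101/images/apple_pie/{i}.jpg" for i in range(10)]
train, test = train_test_split(images)
```

Fixing the seed keeps the split reproducible across runs, which matters when comparing trained models.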

The next step is to track where the objects are in 3D space, which makes it possible to determine when items are added to or removed from the pantry. The current approach uses a stereo camera pair, calibrated with the OpenCV library; the depth of each object is computed from the disparity map produced by the calibrated pair.
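Once the pair is calibrated, depth follows from disparity via the standard pinhole relation Z = f·B / d (focal length times baseline over disparity). A minimal sketch; the focal length and baseline below are placeholder values, not the project's real calibration:

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Convert a disparity (pixels) to metric depth using Z = f * B / d."""
    if disparity_px <= 0:
        # Zero/negative disparity means no valid stereo match at this pixel.
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Placeholder calibration: 700 px focal length, 6 cm baseline.
# A 35 px disparity then corresponds to 700 * 0.06 / 35 = 1.2 m.
depth_m = disparity_to_depth(disparity_px=35.0, focal_px=700.0, baseline_m=0.06)
```

In the real pipeline these values would come from the OpenCV calibration output rather than being hard-coded.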

Finally, the depth of the objects is used to determine when items are added to or removed from the pantry. The current approach is a simple thresholding algorithm based on object depth, which is being tested in a simulated environment built with the Blender software.
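The thresholding idea can be illustrated by comparing the depth of a shelf region between two frames. The threshold value and function shape here are assumptions for the sketch, not the project's actual implementation:

```python
def classify_change(prev_depth_m, curr_depth_m, threshold_m=0.05):
    """Classify a pantry region by its depth change between two frames.

    If the observed surface moved closer to the camera by more than the
    threshold, assume an item was placed in front of it ("added"); if it
    receded by more than the threshold, assume one was taken ("removed").
    Smaller changes are treated as sensor noise.
    """
    delta = prev_depth_m - curr_depth_m
    if delta > threshold_m:
        return "added"
    if delta < -threshold_m:
        return "removed"
    return "unchanged"
```

A simulated Blender scene makes it easy to tune the threshold, since the ground-truth depth of every object is known exactly.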