Recifly is a web-based application that analyzes pictures of food items and returns a set of recipes that incorporate the food items found in those pictures.
This application was built during the NSBE 2017 National Hackathon with over 100 competitors. #NSBE43
The purpose of this application is to automate the process of choosing what to make with a set of available food items (i.e., ingredients), saving users time and money. It encourages users to take advantage of the food they already have and offers a convenient way to plan meal prep. The target audience is anyone who cooks on a semi-frequent basis (e.g., families, college students, single professionals).
The user uploads pictures to the website. Recifly analyzes the pictures with the Google Cloud Vision API, which returns a list of labels for the food items it detects. That list is used to query the Yummly API, which returns a JSON response. The response is parsed to identify recipes that incorporate all of the food items on the list, and the matching recipes are displayed to the user with pictures and descriptive captions.
- HTML, CSS, JavaScript - Used to create the front-end display the user interacts with
- Node.js - Used to interface between the APIs and the front-end display
- Google Cloud Vision API - Used to recognize and label objects in photos
- Yummly API - Used to find recipe alternatives when provided with ingredient queries
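As a rough sketch of how these pieces fit together, the snippet below walks the pipeline described above. The helper name `findRecipes`, the use of `node-fetch` for the HTTP call, the Yummly credentials, and the Yummly response fields (`matches`, `recipeName`, `smallImageUrls`) are illustrative assumptions, not the project's actual code; error handling and the browser upload step are omitted.

```javascript
'use strict'
const vision = require('node-cloud-vision-api')
const fetch = require('node-fetch')

vision.init({auth: 'YOUR_VISION_API_KEY'})

function findRecipes(imagePath) {
  // 1. Label the uploaded photo with the Cloud Vision API.
  const req = new vision.Request({
    image: new vision.Image({path: imagePath}),
    features: [new vision.Feature('LABEL_DETECTION', 10)]
  })
  return vision.annotate(req)
    .then((res) => {
      const labels = (res.responses[0].labelAnnotations || [])
        .map((annotation) => annotation.description)

      // 2. Query the Yummly recipe search API with the detected ingredients.
      const query = labels
        .map((label) => 'allowedIngredient[]=' + encodeURIComponent(label))
        .join('&')
      const url = 'https://api.yummly.com/v1/api/recipes' +
        '?_app_id=YOUR_YUMMLY_APP_ID&_app_key=YOUR_YUMMLY_APP_KEY&' + query
      return fetch(url).then((response) => response.json())
    })
    .then((result) => {
      // 3. Keep only the fields the front end needs to render each recipe card.
      return result.matches.map((match) => ({
        name: match.recipeName,
        ingredients: match.ingredients,
        image: match.smallImageUrls && match.smallImageUrls[0]
      }))
    })
}

findRecipes('uploads/fridge.jpg')
  .then((recipes) => console.log(recipes))
  .catch((e) => console.log('Error: ', e))
```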
node-cloud-vision-api is a Node.js client wrapper for the Cloud Vision API.
Cloud Vision API docs: https://cloud.google.com/vision/docs/
Note that the API is currently available only as a limited preview for alpha-test users.
Supported features
| Feature Type | Description |
|---|---|
| FACE_DETECTION | Run face detection |
| LANDMARK_DETECTION | Run models to execute landmark detection |
| LOGO_DETECTION | Run models to execute product logo detection |
| LABEL_DETECTION | Run models to execute image content analysis |
| TEXT_DETECTION | Run models to execute OCR on an image |
| SAFE_SEARCH_DETECTION | Run models to compute image safe-search properties |
- Sign up for the limited preview of the Cloud Vision API: https://cloud.google.com/vision/
- A Cloud Vision API key is needed
```
npm install node-cloud-vision-api --save
```
API requests in node-cloud-vision-api are internally managed by google-api-nodejs-client.
You can set up auth data as shown in the following samples.
- Use Server Key
```javascript
const vision = require('node-cloud-vision-api')
vision.init({auth: 'YOUR_API_KEY'})
```
- Use OAuth
```javascript
const vision = require('node-cloud-vision-api')
const google = require('googleapis')

const oauth2Client = new google.auth.OAuth2('YOUR_GOOGLE_OAUTH_CLIENT_ID', 'YOUR_GOOGLE_OAUTH_SECRET', 'YOUR_GOOGLE_OAUTH_CALLBACK_URL')
oauth2Client.setCredentials({refresh_token: 'YOUR_GOOGLE_OAUTH_REFRESH_TOKEN'})
vision.init({auth: oauth2Client})
```
- For other auth options, see the google-api-nodejs-client reference.
```javascript
'use strict'
const vision = require('node-cloud-vision-api')

// init with auth
vision.init({auth: 'YOUR_API_KEY'})

// construct parameters
const req = new vision.Request({
  image: new vision.Image('/Users/tejitak/temp/test1.jpg'),
  features: [
    new vision.Feature('FACE_DETECTION', 4),
    new vision.Feature('LABEL_DETECTION', 10),
  ]
})

// send a single request
vision.annotate(req).then((res) => {
  // handling response
  console.log(JSON.stringify(res.responses))
}, (e) => {
  console.log('Error: ', e)
})
```
See more in test_annotate.js
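For Recifly, the useful part of the response is the list of label descriptions (the detected food items), exposed under `responses[0].labelAnnotations` with a `description` and a confidence `score` per label. A small sketch of pulling those out; the 0.7 threshold is an arbitrary example, not a value from the project:

```javascript
vision.annotate(req).then((res) => {
  // Each response carries a labelAnnotations array of description/score pairs.
  const ingredients = (res.responses[0].labelAnnotations || [])
    .filter((annotation) => annotation.score > 0.7) // keep reasonably confident labels
    .map((annotation) => annotation.description)
  console.log(ingredients) // e.g. ['banana', 'fruit', 'food']
})
```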
Image files on the web can be specified with the 'url' parameter in the Image object.
```javascript
const req = new vision.Request({
  image: new vision.Image({
    url: 'https://scontent-nrt1-1.cdninstagram.com/hphotos-xap1/t51.2885-15/e35/12353236_1220803437936662_68557852_n.jpg'
  }),
  features: [
    new vision.Feature('FACE_DETECTION', 1),
    new vision.Feature('LABEL_DETECTION', 10),
  ]
})
```
See more in test_annotate_remote.js
```javascript
// construct parameters
// the 1st image in the request is loaded from a local file
const req1 = new vision.Request({
  image: new vision.Image({
    path: '/Users/tejitak/temp/test1.jpg'
  }),
  features: [
    new vision.Feature('FACE_DETECTION', 4),
    new vision.Feature('LABEL_DETECTION', 10),
  ]
})

// the 2nd image in the request is loaded from the web
const req2 = new vision.Request({
  image: new vision.Image({
    url: 'https://scontent-nrt1-1.cdninstagram.com/hphotos-xap1/t51.2885-15/e35/12353236_1220803437936662_68557852_n.jpg'
  }),
  features: [
    new vision.Feature('FACE_DETECTION', 1),
    new vision.Feature('LABEL_DETECTION', 10),
  ]
})

// send multiple requests in one API call
vision.annotate([req1, req2]).then((res) => {
  // handling the response for each request
  console.log(JSON.stringify(res.responses))
}, (e) => {
  console.log('Error: ', e)
})
```
See more in test_annotate_remote.js
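When batching requests like this, the entries in `res.responses` line up with the order of the requests, so each image's result can be handled on its own. A small sketch; the per-image handling here is illustrative, not from the project:

```javascript
vision.annotate([req1, req2]).then((res) => {
  res.responses.forEach((response, i) => {
    // responses[i] corresponds to the i-th request in the batch.
    const labels = (response.labelAnnotations || [])
      .map((annotation) => annotation.description)
    console.log('image ' + (i + 1) + ' labels: ' + labels.join(', '))
  })
}, (e) => {
  console.log('Error: ', e)
})
```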
Node v4.0.0 or above is recommended because this module is implemented with ES6.
Fork the repository and create a PR against the 'dev' branch.
- Anwaar Bastien (Front End Developer)
- Bryce Hammond (Front End Developer)
- Marquel Hendricks (Front End Developer)
- Joshua Land (Back End Developer)
- Favour Nerrise (Back End Developer)
- Reginald Padgett (Back End Developer)
- Jordan Tyner (README Editor)
- Hackathon Sponsors
  - Two Sigma
  - Rockwell Collins
  - Cox Automotive