
Sight-m: Images to sound

Sight-m is a very basic Expo app that uses TensorFlow.js and expo-speech to perform image classification and then announce the result to the user. This is mainly a bit of fun with some Expo APIs, but it could potentially be used as a tool for Speech and Language Therapy (SLT) or for visually impaired users.
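The pipeline described above (classify an image, then speak the top label) can be sketched roughly as follows. This is an illustrative sketch, not the app's actual code: the `Prediction` shape mirrors what a TensorFlow.js classifier such as `@tensorflow-models/mobilenet` returns, and the classifier and speech functions are passed in as parameters standing in for the real model call and `Speech.speak` from expo-speech.

```typescript
// Shape of one classification result, as returned by TensorFlow.js
// image-classification models such as MobileNet (assumed here).
interface Prediction {
  className: string;
  probability: number;
}

// Pure helper: turn the top prediction into a sentence to speak aloud.
function formatAnnouncement(predictions: Prediction[]): string {
  if (predictions.length === 0) {
    return "I could not recognise anything";
  }
  return `I think this is a ${predictions[0].className}`;
}

// Glue the two steps together. In the real app, `classify` would wrap
// a TensorFlow.js model and `speak` would be expo-speech's Speech.speak;
// both are injected here so the sketch stays self-contained.
async function classifyAndSpeak(
  classify: (imageUri: string) => Promise<Prediction[]>,
  speak: (text: string) => void,
  imageUri: string
): Promise<void> {
  const predictions = await classify(imageUri);
  speak(formatAnnouncement(predictions));
}
```

The model call and text-to-speech are kept behind function parameters because both are asynchronous, device-bound APIs in the real app; only the formatting logic is plain, synchronous code.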

Demo: https://expo.dev/@thomascoldwell/sight-m

Getting started

Follow these steps to get it running locally on your machine (make sure you have Node.js and expo-cli installed):

git clone https://github.com/thomas-coldwell/sight-m.git
cd sight-m
yarn
expo start