/Indoor-Navigation-for-visually-imapaired

A computer-vision project that helps people with low vision navigate indoors.


Indoor Navigation for people with low vision

• Trained and deployed an LSTM-based deep learning model on top of MobileNetV2 CNN features, using a dataset of 1,000 videos, to classify doors and stairs in indoor scenes with less than 0.01% error. Built with Python, TensorFlow, Keras, TensorBoard, scikit-learn, and Google Cloud Platform (GCP).

• Project layout:

http://csweb01.csueastbay.edu/~mi7383/CS663/home.html

• Group Members: Richa Khagwal, Subhangi Asati, Maithri Chulikana

• Steps involved in the implementation:

Step 1: Building and training the model

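The original notebook screenshots for this step are not reproduced here. As a rough, minimal sketch of the idea (an LSTM classifier trained on sequences of per-frame MobileNetV2 features, with the 40-frame sequence length described in Step 2), assuming illustrative layer sizes, file names, and training settings rather than the project's exact code:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN = 40        # frames sampled per video (see Step 2)
FEAT_DIM = 1280     # size of a global-pooled MobileNetV2 feature vector
NUM_CLASSES = 2     # doors vs. stairs

def build_model():
    """LSTM over per-frame MobileNetV2 features, followed by a softmax classifier."""
    model = models.Sequential([
        layers.Input(shape=(SEQ_LEN, FEAT_DIM)),
        layers.LSTM(64, dropout=0.3),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Random arrays stand in here for the real extracted feature sequences and labels.
    X = np.random.rand(8, SEQ_LEN, FEAT_DIM).astype("float32")
    y = np.random.randint(0, NUM_CLASSES, size=(8,))
    model = build_model()
    model.fit(X, y, epochs=2, batch_size=4,
              callbacks=[tf.keras.callbacks.TensorBoard(log_dir="logs")])
```

Training progress and metrics were monitored with TensorBoard, as listed in the project summary above.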

Step 2: Feature extraction

  • Sampling the video: Rather than processing every frame, we define a frame generator that samples a fixed sequence of 40 frames per video, loads the dataset, and specifies the output frames (see the sketch after this list).


  • Extracting features using MobileNetV2 for each sampled frame (also covered in the sketch below).

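The screenshots for this step are not reproduced here. As a hedged sketch of the two bullets above, the snippet below samples 40 evenly spaced frames per video with OpenCV and passes each frame through a pretrained MobileNetV2 (no classification head, global average pooling) to obtain one 1280-dimensional feature vector per frame. The function names, 224x224 frame size, and padding behaviour are illustrative assumptions, not the project's exact code.

```python
import cv2
import numpy as np
from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input

SEQ_LEN = 40  # number of frames sampled per video

# Pretrained MobileNetV2 without the top layers; global average pooling turns
# each 224x224 frame into a single 1280-d feature vector.
feature_extractor = MobileNetV2(weights="imagenet", include_top=False,
                                pooling="avg", input_shape=(224, 224, 3))

def sample_frames(video_path, seq_len=SEQ_LEN):
    """Read a video and return seq_len evenly spaced RGB frames resized to 224x224."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = np.linspace(0, max(total - 1, 0), seq_len).astype(int)
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        frames.append(cv2.resize(frame, (224, 224)))
    cap.release()
    # Pad with black frames if the video was shorter than expected.
    while len(frames) < seq_len:
        frames.append(np.zeros((224, 224, 3), dtype=np.uint8))
    return np.array(frames)

def extract_features(video_path):
    """Return a (SEQ_LEN, 1280) array of MobileNetV2 features for one video."""
    frames = sample_frames(video_path).astype("float32")
    return feature_extractor.predict(preprocess_input(frames), verbose=0)
```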

Step 3: Batch prediction

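The screenshots are not reproduced here. A minimal sketch of batch prediction follows, assuming a saved model at a hypothetical path `models/lstm_door_stairs.h5`, the `extract_features` helper from the Step 2 sketch, and a `test_videos/` directory of .mp4 clips; none of these names come from the original notebook.

```python
import glob
import numpy as np
import tensorflow as tf

# Helper from the Step 2 sketch above (module name is an assumption).
from feature_extraction import extract_features

CLASS_NAMES = ["door", "stairs"]  # assumed label order

# Trained sequence model saved after Step 1 (path is illustrative).
model = tf.keras.models.load_model("models/lstm_door_stairs.h5")

def predict_batch(video_dir="test_videos"):
    """Classify every .mp4 video in a directory and print the result."""
    for path in sorted(glob.glob(f"{video_dir}/*.mp4")):
        features = extract_features(path)                       # (40, 1280)
        probs = model.predict(features[np.newaxis], verbose=0)[0]
        label = CLASS_NAMES[int(np.argmax(probs))]
        print(f"{path}: {label} ({probs.max():.2%})")

if __name__ == "__main__":
    predict_batch()
```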

Step 4: Live predictions

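The screenshots are not reproduced here. A minimal sketch of live prediction follows, assuming a webcam read with OpenCV, a rolling buffer of the last 40 frames, and the same hypothetical model path as in Step 3; the overlay text and key handling are illustrative choices, not the project's exact code.

```python
import collections
import cv2
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import MobileNetV2, preprocess_input

SEQ_LEN = 40
CLASS_NAMES = ["door", "stairs"]  # assumed label order

feature_extractor = MobileNetV2(weights="imagenet", include_top=False,
                                pooling="avg", input_shape=(224, 224, 3))
model = tf.keras.models.load_model("models/lstm_door_stairs.h5")  # illustrative path

def live_predict(camera_index=0):
    """Keep a rolling buffer of the last SEQ_LEN frames and classify it each step."""
    cap = cv2.VideoCapture(camera_index)
    buffer = collections.deque(maxlen=SEQ_LEN)
    label = "collecting frames..."
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2RGB)
        buffer.append(rgb)
        if len(buffer) == SEQ_LEN:
            frames = preprocess_input(np.array(buffer, dtype="float32"))
            feats = feature_extractor.predict(frames, verbose=0)      # (40, 1280)
            probs = model.predict(feats[np.newaxis], verbose=0)[0]
            label = f"{CLASS_NAMES[int(np.argmax(probs))]} ({probs.max():.0%})"
        # Overlay the current prediction on the camera feed.
        cv2.putText(frame, label, (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                    1.0, (0, 255, 0), 2)
        cv2.imshow("Indoor navigation", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    live_predict()
```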

• Screenshots of the live capture results.