
Control your computer by moving your head and using voice-to-text


Facial Navigation

Control your computer with your mind... and by moving your head!
https://devpost.com/software/facial-navigation


Inspiration

Helping people who have been affected by a natural disaster or have developed a disability. We built this proof of concept to show how people without the use of their hands can still operate a computer.

What it does

The program tracks the position of the user's face and moves the mouse based on where their face is pointing. The user can blink to click and briefly open their mouth to start speech-to-text typing.
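A minimal sketch of the head-driven cursor idea is below. It assumes OpenCV's bundled Haar cascade for face detection and pyautogui for mouse control, and it simply maps the face's position in the camera frame to a screen coordinate; the real program also handles the blink and mouth-open triggers and its own smoothing, so treat this as an illustration rather than our exact implementation.

```python
# Sketch: move the cursor toward wherever the face sits in the camera frame.
# Assumes opencv-python and pyautogui; not the project's exact code.
import cv2
import pyautogui

screen_w, screen_h = pyautogui.size()
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) > 0:
        x, y, w, h = faces[0]
        # Center of the detected face in the camera frame.
        cx, cy = x + w / 2, y + h / 2
        frame_h, frame_w = gray.shape
        # Mirror horizontally so turning right moves the cursor right.
        target_x = screen_w * (1 - cx / frame_w)
        target_y = screen_h * (cy / frame_h)
        pyautogui.moveTo(target_x, target_y)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
```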

How we built it

We used Python's OpenCV library to track the user's face and determine where to move the mouse. Google Cloud Platform gave us access to Google's powerful Speech-to-Text API, so the user can speak in place of a keyboard and browse the internet. We also built a website with HTML5 and CSS to demo our project.
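A rough sketch of the speech-to-text typing path is below. It assumes the google-cloud-speech client library and pyautogui for simulated keystrokes, with credentials set up separately through GOOGLE_APPLICATION_CREDENTIALS; the real project may wire this up differently.

```python
# Sketch: transcribe a short recording with Google Speech-to-Text and
# "type" the result. Assumes google-cloud-speech and pyautogui.
from google.cloud import speech
import pyautogui

def transcribe_and_type(wav_bytes: bytes) -> None:
    """Send 16 kHz LINEAR16 audio to the Speech-to-Text API and type the transcript."""
    client = speech.SpeechClient()
    audio = speech.RecognitionAudio(content=wav_bytes)
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code="en-US",
    )
    response = client.recognize(config=config, audio=audio)
    for result in response.results:
        # Type the top hypothesis as if it came from the keyboard.
        pyautogui.typewrite(result.alternatives[0].transcript + " ")
```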

Challenges we ran into

Detecting the face's position, determining whether the mouth is open (see the sketch below), package management, and API credential verification.
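For the mouth-open check, one common approach (an assumption here, not necessarily exactly what we shipped) is a mouth aspect ratio computed from dlib's 68-point facial landmarks: when the vertical gap between the inner lips grows relative to the mouth's width, the mouth is treated as open.

```python
# Sketch: mouth-open detection via a mouth aspect ratio on dlib landmarks.
# Assumes dlib, scipy, and the standard 68-point landmark model file.
import dlib
from scipy.spatial import distance

detector = dlib.get_frontal_face_detector()
# Hypothetical local path to the standard dlib landmark model.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def mouth_is_open(gray_frame, threshold=0.6):
    for face in detector(gray_frame):
        lm = predictor(gray_frame, face)
        # Inner-lip landmarks: 62 top center, 66 bottom center, 60/64 corners.
        top = (lm.part(62).x, lm.part(62).y)
        bottom = (lm.part(66).x, lm.part(66).y)
        left = (lm.part(60).x, lm.part(60).y)
        right = (lm.part(64).x, lm.part(64).y)
        vertical = distance.euclidean(top, bottom)
        horizontal = distance.euclidean(left, right)
        # A large vertical-to-horizontal ratio means the lips have parted.
        if horizontal > 0 and vertical / horizontal > threshold:
            return True
    return False
```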

Accomplishments that we're proud of

Our greatest accomplishment was integrating these technologies so that the system can actually be used to browse the internet.

What we learned

We learned proper version control, package management, how to use Google Cloud Platform and OpenCV, and how to integrate different technologies.

What's next for Facial Navigation

Smoother operation, scrolling, right-clicking, computer shortcuts through gestures, and more keyboard functions such as backspace.