Inspiration

Baby Monitor for Deaf & Hearing-Impaired Parents

What it does

Our program detects and classifies incoming baby sounds and notifies the caretaker when the baby needs attention (e.g., when the baby is crying).

How we built it

The program was built as our team's submission to the AI for Social Good Hackathon. We implemented a multilayer perceptron (MLP) neural network from scratch to classify four classes of sound (a minimal sketch follows the list below). The classes are:

  • Quiet/Silent Background
  • Noisy Background
  • Baby Laughing
  • Baby Crying
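
The hackathon code itself isn't reproduced here, but a minimal NumPy sketch of a from-scratch MLP over these four classes might look like the following; the hidden-layer size, learning rate, and weight initialization are illustrative assumptions, not our exact settings.

```python
import numpy as np

class MLP:
    """Minimal from-scratch multilayer perceptron for 4-class sound classification."""

    def __init__(self, n_features, n_hidden=64, n_classes=4, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        # Small random weights; biases start at zero.
        self.W1 = rng.normal(0, 0.1, (n_features, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.1, (n_hidden, n_classes))
        self.b2 = np.zeros(n_classes)
        self.lr = lr

    def forward(self, X):
        # Hidden layer with ReLU, softmax output over the 4 classes.
        self.h = np.maximum(0, X @ self.W1 + self.b1)
        logits = self.h @ self.W2 + self.b2
        exp = np.exp(logits - logits.max(axis=1, keepdims=True))
        return exp / exp.sum(axis=1, keepdims=True)

    def train_step(self, X, y):
        # y holds integer class labels 0..3.
        probs = self.forward(X)
        n = X.shape[0]
        # Cross-entropy gradient at the softmax output.
        d_logits = probs.copy()
        d_logits[np.arange(n), y] -= 1
        d_logits /= n
        # Backpropagate through the hidden layer.
        dW2 = self.h.T @ d_logits
        db2 = d_logits.sum(axis=0)
        d_h = (d_logits @ self.W2.T) * (self.h > 0)
        dW1 = X.T @ d_h
        db1 = d_h.sum(axis=0)
        # Plain gradient-descent update.
        self.W1 -= self.lr * dW1
        self.b1 -= self.lr * db1
        self.W2 -= self.lr * dW2
        self.b2 -= self.lr * db2

    def predict(self, X):
        return self.forward(X).argmax(axis=1)
```

Labels 0 through 3 would map to the four classes above (e.g., 0 = quiet, 1 = noisy background, 2 = laughing, 3 = crying); a single hidden layer keeps training lightweight enough for a laptop.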

Challenges we ran into

Since this was a hackathon submission, we had very limited time to build the program. Collecting a dataset was also somewhat limited, and processing the sound was computationally heavy on our everyday laptops.
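
This writeup doesn't record exactly how we turned raw audio into model inputs, but per-clip feature extraction is typically where that processing cost goes. As a hypothetical illustration (MFCC features, librosa, and a 22,050 Hz sample rate are assumptions, not necessarily our pipeline), each clip can be summarized as a fixed-length vector:

```python
import numpy as np
import librosa

def extract_features(path, n_mfcc=20):
    """Load one audio clip and summarize it as a fixed-length MFCC feature vector."""
    # Resample everything to a common rate so clips are comparable.
    y, sr = librosa.load(path, sr=22050)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    # Mean and standard deviation over time yield one vector per clip,
    # regardless of clip length.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])
```

Summary statistics like these reduce variable-length recordings to a single small vector per clip, which is what makes a simple MLP a workable classifier.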

Accomplishments that we're proud of

By designing and implementing the multilayer perceptron model, we achieved nearly 100 percent accuracy in classifying the test data.
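
As a rough sketch of how such a test-set evaluation could be run, reusing the hypothetical MLP sketch above (the split fraction and epoch count are assumptions):

```python
import numpy as np

def evaluate(X, y, test_frac=0.2, epochs=200, seed=0):
    """Hold out a random test split, train the sketch MLP, and report accuracy.

    X: (n_clips, n_features) feature matrix, e.g. from extract_features();
    y: integer class labels 0..3.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_test = int(len(X) * test_frac)
    test, train = idx[:n_test], idx[n_test:]

    model = MLP(n_features=X.shape[1])  # MLP class from the sketch above
    for _ in range(epochs):
        model.train_step(X[train], y[train])

    # Fraction of held-out clips whose predicted class matches the label.
    return (model.predict(X[test]) == y[test]).mean()
```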

What we learned

We learned as a team that choosing the right machine learning model for our goal was a crucial step when starting the project. We also learned to work together as a team under a very tight time limit.

What's next for SilentBabyWatcher

Possible improvements in the future include:

  • An interactive UI/UX mobile application
  • Collection of a larger dataset
  • Increased complexity of the model to classify more sound parameters

Potential applications of our deep learning model

Sound classification research in AI today focuses predominantly on speech recognition. We believe our project can contribute to the study of other sound domains and better support both the hearing and hearing-impaired communities.

Presentation slides

https://docs.google.com/presentation/d/1ww5qf22RXpZ9dHm3nVQrwmjkTZF2H-s2U4cPCq_Padw/edit?usp=sharing

The Team