Child-Safety-System

A bot which warns users against potential predators while engaging in conversations via (online) chatrooms.


Online Child Safety




Preview of the application

The purpose of our application is to provide a bot (called SAF in the images below) that warns users against potential predators while they converse in online chatrooms. Warnings are based on predictions from an LSTM model trained on both dangerous and relatively normal conversations.
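As a rough illustration, a classifier of this kind can be sketched in Keras as an embedding layer feeding an LSTM with a sigmoid output. The vocabulary size, sequence length, and layer widths below are placeholder assumptions, not the repository's actual hyperparameters:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 5000  # assumed vocabulary size after tokenization
MAX_LEN = 50       # assumed padded sequence length

# Embedding -> LSTM -> sigmoid: scores a conversation from 0 (safe) to 1 (dangerous)
model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 16),
    layers.LSTM(32),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# One integer-encoded, padded conversation (all zeros here, just to show shapes)
probs = model.predict(np.zeros((1, MAX_LEN), dtype="int32"), verbose=0)
print(probs.shape)  # one danger probability per conversation
```

The single sigmoid unit with binary cross-entropy loss fits the two-class setup (dangerous vs. normal conversations) described above.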

Welcome Page

Welcome page

Potentially dangerous chats

dangerous chats

Normal chats

normal chats

Functionalities

  • Detects potential predators in online chatrooms.
  • Provides a chatting interface with a bot that generates responses and predicts whether they are dangerous or safe.
  • Displays a warning message whenever a text is predatory or suspicious.
  • Users can also type their own messages, which the bot will then score for how predatory they are.
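The warning behaviour described above can be sketched as a simple threshold on the model's danger score. The 0.5 cutoff and the message wording are assumptions for illustration, not the repository's actual values:

```python
WARNING_THRESHOLD = 0.5  # assumed cutoff; the app may use a different value

def saf_message(danger_prob: float) -> str:
    # Map the model's danger probability to the message SAF would display
    if danger_prob >= WARNING_THRESHOLD:
        return "WARNING: this message looks suspicious. Be careful!"
    return "This chat looks safe."

print(saf_message(0.91))  # triggers the warning
print(saf_message(0.12))  # below the threshold, no warning
```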

Instructions to run

  • Dependencies:

    • TensorFlow
    • Keras
    • NumPy
    • pandas
    • pickle (Python standard library)
    • NLTK
    • symspellpy
    • Streamlit
  • The Bot (application) has been built using Streamlit.
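Assuming a conventional Streamlit project layout, the app could be installed and launched as follows; `app.py` is a placeholder for the repository's actual entry-point script:

```shell
# Install the dependencies listed above (pickle ships with Python, so it is omitted)
pip install tensorflow keras numpy pandas nltk symspellpy streamlit

# Launch the Streamlit app; replace app.py with the repo's actual entry point
streamlit run app.py
```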

Future improvements

  • The model's responses are currently random; they could instead be tailored to the user's messages or questions.
  • The bot's predictions of whether a conversation is dangerous are accurate most of the time, but the model still needs fine-tuning.
  • Each conversation between the bot and the user lasts only one turn; this could be extended to full multi-turn conversations.
  • The bot could be packaged as a browser extension instead of a stand-alone application and deployed in real online chatrooms.

Contributors

Naman Garg

Pooja Ravi

Breenda Das

Sadhavi Thapa

License


Made with ❤️ by DS Community SRM