By SafeSpace
With the rise of social media platforms, cyberbullying has become far more prevalent in everyday life. This raises a question: how can we ensure students' safety and emotional well-being on e-learning platforms? Some platforms try to deal with cyberbullying after the fact, but few try to raise awareness, foster empathy, and change behavior.
A study by the University of British Columbia found that one in three teens has experienced cyberbullying, while one in five has cyberbullied others. The study also found that the effects of cyberbullying can be just as harmful as those of traditional bullying, including depression, anxiety, and even suicide.
In response to this growing problem, we developed a Discord bot called Spacey that helps reduce bullying behavior on the platform by encouraging empathy and spreading awareness messages.
Spacey is a Discord bot that monitors the messages and images sent to a Discord server and detects toxic content. Whenever such an incident is detected, Spacey deletes the message immediately and sends a direct message to the user on your behalf to raise awareness about bullying behavior.
It also notifies the moderators about the offending behavior, making it easier to identify the user and take appropriate action. This system is designed to create a safer, more positive environment on Discord, promoting respectful communication and discouraging bullying. :)
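At a high level, the moderation loop looks something like the sketch below: a minimal discord.py example where `is_toxic` is a hypothetical placeholder for the model checks described later, so the real Bot.py may be structured differently.

```python
import discord

from config import TOKEN, moderator_id  # as set up in the steps below


def is_toxic(text: str) -> bool:
    """Placeholder for the toxic-bert / CLIP checks described later."""
    return False  # replace with real model inference


intents = discord.Intents.default()
intents.message_content = True  # needed so the bot can read message text
client = discord.Client(intents=intents)


@client.event
async def on_message(message: discord.Message):
    if message.author.bot:
        return
    if is_toxic(message.content):
        await message.delete()  # remove the offending message immediately
        await message.author.send(
            "Your message was removed because it may hurt others. "
            "Please help keep this server a safe space."
        )
        # Let the moderator know who sent the message.
        moderator = await client.fetch_user(moderator_id)
        await moderator.send(f"Removed a toxic message from {message.author}.")


client.run(TOKEN)
```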
Our goal is to help reduce the negative effects of cyberbullying on Discord and create a more inclusive, welcoming community for all users. By implementing this bot, we hope to encourage positive communication and support a culture of kindness and respect on the platform.
- Install all dependent packages: `pip install -r requirements.txt`.
- Create a new application in the Discord Developer Portal and add a bot user to it.
- Copy your bot token from the Bot tab of your application page.
- Set the TOKEN variable in config.py to your bot token.
- Set the moderator_id variable in config.py to your server moderator's ID (a sample config.py is shown after these steps).
- Run `python Bot.py` from the command line, making sure you are in the project directory.
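For reference, a minimal config.py matching the steps above might look like this (both values are placeholders):

```python
# config.py -- placeholder values; replace with your own
TOKEN = "your-bot-token-here"       # copied from the Bot tab of your application
moderator_id = 123456789012345678   # the Discord user ID of your server moderator
```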
First, we use a pretrained transformer from Hugging Face called toxic-bert, which falls under the Text Classification category. It scores each message against the toxicity labels toxic, severe_toxic, obscene, threat, insult, and identity_hate; a message with low scores across all labels is treated as neutral.
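A minimal sketch of this step, assuming the commonly published Hugging Face model ID `unitary/toxic-bert` and a 0.5 flagging threshold (both assumptions, tune for your server):

```python
from transformers import pipeline

# Load the pretrained toxic-bert text-classification model.
toxicity = pipeline("text-classification", model="unitary/toxic-bert", top_k=None)

scores = toxicity("example message")[0]  # one list of {label, score} per input
# Flag the message if any toxicity label crosses the (assumed) threshold.
is_toxic = any(s["score"] > 0.5 for s in scores)
print(is_toxic, scores)
```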
Next, we use another transformer, OpenAI's CLIP (via CLIPProcessor), which falls under the Zero-Shot Image Classification category. We feed it the input image along with the labels we want to classify against in order to determine whether the image contains bullying content.
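A sketch of the zero-shot image check, assuming the standard public checkpoint `openai/clip-vit-base-patch32`; the label wording and the file name `attachment.png` are illustrative assumptions, not the bot's actual prompts:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Candidate labels for zero-shot classification (assumed wording).
labels = ["a bullying or violent image", "a harmless image"]
image = Image.open("attachment.png")  # e.g. an image downloaded from a message

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-text similarity scores
probs = logits.softmax(dim=1)[0]
is_bullying = probs[0].item() > probs[1].item()
```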
By using these two models together in Discord, we can identify bullying in both images and text messages; everything else is ignored.