This project is a cross-platform mobile web browser with an integrated personalized micro-content filter that helps users avoid abuse, bullying, hate speech, and other harmful content online without otherwise disrupting normal use of the web.
The system hides individual blocks of text that match a user’s personal definition of harmful content as each page loads. The user can selectively reveal any hidden block, so they retain final control over what they see and can correct any false positives. The NLP behind the personalized micro-content filter is driven by machine learning over word embeddings, which lets users build their personal definition of harmful content far more efficiently than they could with, for example, a simple word blacklist.
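To make the embedding-based approach concrete, the sketch below shows one way a text block could be scored against content a user has already flagged: average the word embeddings of the block, compare that vector to the embeddings of flagged examples by cosine similarity, and hide the block when the similarity crosses a threshold. The tiny embedding table, helper names, and threshold value are illustrative assumptions, not drawn from the project’s actual code.

```typescript
// Illustrative sketch only: the embedding table, helper names, and the 0.8
// similarity threshold are assumptions, not the project's implementation.

type Vector = number[];

// In the real system this would be a pre-trained word-embedding model;
// a tiny in-memory table stands in for it here.
const wordEmbeddings = new Map<string, Vector>([
  ["idiot", [0.9, 0.1, 0.0]],
  ["stupid", [0.8, 0.2, 0.1]],
  ["hello", [0.0, 0.1, 0.9]],
]);

// Average the embeddings of the words in a block of text.
function meanEmbedding(text: string): Vector | null {
  const vectors = text
    .toLowerCase()
    .split(/\s+/)
    .map((w) => wordEmbeddings.get(w))
    .filter((v): v is Vector => v !== undefined);
  if (vectors.length === 0) return null;
  const mean = new Array(vectors[0].length).fill(0);
  for (const v of vectors) {
    v.forEach((x, i) => (mean[i] += x / vectors.length));
  }
  return mean;
}

function cosineSimilarity(a: Vector, b: Vector): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na * nb) || 1);
}

// A block is hidden if its embedding is close to anything the user has
// previously flagged; the original text is kept so it can be revealed later.
function shouldHide(blockText: string, flagged: Vector[], threshold = 0.8): boolean {
  const embedding = meanEmbedding(blockText);
  if (!embedding) return false;
  return flagged.some((f) => cosineSimilarity(embedding, f) >= threshold);
}

// Example: seed the filter with one phrase the user flagged as harmful.
const userFlagged = [meanEmbedding("stupid idiot")].filter((v): v is Vector => v !== null);
console.log(shouldHide("what an idiot", userFlagged)); // true  (close to the flagged phrase)
console.log(shouldHide("hello there", userFlagged));   // false (unrelated content)
```

Because matching happens in embedding space rather than on literal strings, a single flagged example can catch rephrasings and related wording, which is where the efficiency gain over a plain blacklist would come from.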
The project’s goal is to reduce the impact of harmful speech while simultaneously promoting greater freedom of speech and eliminating justifications for censorship. Responsibility for curbing harmful speech online currently rests with content platforms, which, for a variety of reasons (perverse incentives among them), will never solve the problem. This project shifts power from content platforms to individual users, each of whom gains the autonomy to fully personalize their own web experience.