This repository contains the code for the final project of the UC Berkeley MIDS program's course on data ethics.
The project is a data science notebook (Jupyter) that explores how predictive policing models can perpetuate human bias, discriminatory behavior, and social injustice in law enforcement. Existing predictive policing models, such as PredPol, are not open source, so we developed a predictive policing model of our own using the same primary data source that feeds the commercial models: historic crime data. We demonstrate how these models can absorb the biases in historic crime data and reinforce them through feedback loops. We then devise a method to audit these models and curb the feedback loop.
Disclaimer: This analysis is not intended as proof of any realized biases in the sample data set; rather, it is a demonstration of an auditing technique for adjusting for bias in predictive models.
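For intuition, a minimal toy simulation of the feedback loop described above might look like the sketch below. This is an illustrative example only, not the notebook's actual model; the district setup, rates, and patrol counts are invented for the demonstration.

```python
import numpy as np

# Toy sketch of a predictive-policing feedback loop: two districts with
# identical true crime rates, but district B is under-represented in the
# historic record (hypothetical numbers for illustration).
rng = np.random.default_rng(42)

true_rate = np.array([50.0, 50.0])   # true crimes per period in each district
recorded = np.array([40.0, 20.0])    # biased historic counts (district B under-reported)
n_patrols = 10

for step in range(10):
    # "Model": allocate patrols in proportion to recorded (predicted) crime.
    share = recorded / recorded.sum()
    patrols = n_patrols * share

    # Crimes only enter the data set where officers are present, so the
    # number of newly recorded crimes scales with patrol presence.
    newly_recorded = rng.poisson(true_rate * patrols / n_patrols)

    # The new records feed the next round of predictions.
    recorded = recorded + newly_recorded
    print(f"round {step}: patrol share = {np.round(share, 2)}")
```

Because new crimes are only recorded where patrols are sent, the initial skew in the historic record is reaffirmed each round: both districts have the same underlying crime rate, yet the allocation never recovers from the biased starting point. The notebook develops this idea with real crime data and then audits the resulting model.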