Anonymize BLM Protest Images

This repository automates @BLMPrivacyBot, a Twitter bot that posts anonymized protest images to help keep protesters safe. Use our interface at blm.stanford.edu.

What's happened? Arrests at protests from public images

Over the past weeks, we have seen an increasing number of arrests at BLM protests. Images circulating around the web enable automatic identification of protesters and subsequent arrests, hampering protest activity. This primarily concerns protest images shared on social media.

Numerous applications have emerged in response to this threat that aim to anonymize protest images and enable people to continue protesting in safety. Of course, this requires both a shift on the public's part to recognize the issue and an easy, effective anonymization method. In an ideal world, platforms like Twitter would enable an on-platform solution.

So what's your goal? AI to help alleviate some of the worst parts of AI

The goal of this work is to leverage our group's knowledge of facial recognition AI to offer the most effective anonymization tool, one that evades the state of the art in facial recognition technology. AI facial recognition models can still recognize blurred faces. This work discourages attempts to recognize or reconstruct faces by covering people with an opaque mask instead of blurring or pixelating them. We use the BLM fist emoji as that mask in solidarity. While posting anonymized images does not delete the originals, we are starting with awareness and hope Twitter and other platforms will offer an on-platform solution (might be a tall order, but one can hope).

Importantly, this application does not save images. We hope the transparency of this repository will allow for community input. The Twitter bot posts anonymized images based on the Fair Use policy; however, if your image is used and you'd like it to be taken down, we will do our best to do so immediately.

Q&A

How can AI models still recognize blurred faces, even if they cannot reconstruct them perfectly? Recognition is different from reconstruction. Facial recognition technology can still identify many blurred faces, and it is better at this than humans. Reconstruction is a much more arduous task (see the difference between discriminative and generative models, if you're curious), and it has recently been shown to be heavily biased (see the lessons from PULSE). Blurring faces also carries the added threat of encouraging certain people or groups to de-anonymize images through reconstruction, or to identify individuals directly through recognition.

Do you save my pre-anonymized images? No. The goal of this tool is to protect your privacy, and saving the images would be antithetical to that. We don't save any images you give us or any of the anonymized images created by the AI model (sometimes they're not perfect, so saving them would still not be great!). If you like technical details: the image is passed to the AI model in the cloud, and the output is sent straight back and displayed as a base64-encoded JPEG on your screen.
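For the curious, here is a minimal sketch of what that in-memory round trip can look like. It assumes a Flask app; the route name, the form field, and the anonymize helper are illustrative placeholders, not the exact names used in app.py.

```python
import base64
import io

from flask import Flask, request, jsonify
from PIL import Image

app = Flask(__name__)

def anonymize(image):
    """Placeholder for the model call that masks detected people."""
    return image

@app.route("/anonymize", methods=["POST"])  # placeholder route
def anonymize_endpoint():
    # Read the upload straight from the request stream; nothing is written to disk.
    upload = request.files["image"]
    image = Image.open(upload.stream).convert("RGB")

    masked = anonymize(image)

    # Encode the result as a base64 JPEG so the browser can render it inline.
    buffer = io.BytesIO()
    masked.save(buffer, format="JPEG")
    encoded = base64.b64encode(buffer.getvalue()).decode("ascii")
    return jsonify({"image": "data:image/jpeg;base64," + encoded})

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000)
```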

The bot tweeted my image with the fists on it. Can you take it down? Yes, absolutely. Please DM the bot or reply directly.

Can you talk a bit more about your AI technical approach? We build on state-of-the-art crowd counting AI, because it offers huge advantages over traditional facial recognition models when anonymizing crowds. Traditional methods can only find a handful of faces (fewer than 20, often fewer than 5) in a single image. Crowds of BLM protesters can number in the hundreds or thousands, and easily around 50, in a single image. The model we use in this work was trained on over 1.2 million people in the open-source research dataset QNRF, with crowds ranging from a few people to the thousands. False negatives are the worst kind of error in our case. The pretrained model weights come from the LSC-CNN repository we build on; specifically, they are in a Google Drive folder linked from its README.
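To make the masking step concrete, here is an illustrative sketch (not the repository's exact code): given head boxes from a crowd-counting detector such as LSC-CNN, it pastes an opaque fist image over each detection. The box format (integer pixel coordinates) and the fist.png asset are assumptions made for the example.

```python
from PIL import Image

def mask_people(image, boxes, fist_path="fist.png"):
    """Cover each detected head with an opaque emoji so the face cannot be recovered."""
    fist = Image.open(fist_path).convert("RGBA")
    out = image.convert("RGBA")
    for x1, y1, x2, y2 in boxes:  # assumed integer pixel coordinates per detection
        w, h = max(1, x2 - x1), max(1, y2 - y1)
        patch = fist.resize((w, h))
        # Pasting with the patch as its own mask keeps the emoji fully opaque
        # over the face while leaving the surrounding pixels untouched.
        out.paste(patch, (x1, y1), mask=patch)
    return out.convert("RGB")
```

Because the mask is opaque rather than a blur, there is no residual signal left in the covered region for a recognition or reconstruction model to exploit.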

Other amazing tools

We would love to showcase other parallel efforts (please propose any we have missed here!). And if this is not the tool for you, please check these tools out too:

And more...

Built by and built on

  1. This work is built by the Stanford Machine Learning Group. We are Krishna Patel, JQ, and Sharon Zhou.
  2. Flask-Postgres Template by @sharonzhou: https://github.com/sharonzhou/flask-postgres-template
  3. Image Uploader by @christianbayer: https://github.com/christianbayer/image-uploader
  4. LSC-CNN by @vlad3996: https://github.com/vlad3996/lsc-cnn

Paper associated with this work:

@article{LSCCNN20,
    Author = {Sam, Deepak Babu and Peri, Skand Vishwanath and Sundararaman, Mukuntha Narayanan and Kamath, Amogh and Babu, R. Venkatesh},
    Title = {Locate, Size and Count: Accurately Resolving People in Dense Crowds via Detection},
    Journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
    Year = {2020}
}

Paper available on arXiv.org

Running Offline

Anonymizing images with an online service that you do not fully trust still carries some risk. Since the original code provided by stanfordmlgroup is open source, this fork adds the ability to run the service locally on your own computer.

Changes to Original Source

  • Removed the dependency on PostgreSQL and all Google services from requirements.txt
  • Enabled the service to run on computers without GPU support (see the device-selection sketch after this list)
  • Cleaned up requirements.txt
  • Some general code cleanup
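For reference, the core idea behind the CPU fallback usually looks like the snippet below. This is a sketch under the assumption that the model is the PyTorch-based LSC-CNN; the checkpoint path is a placeholder, and the exact loading code in this fork may differ.

```python
import torch

# Use the GPU when one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# map_location lets CUDA-trained weights load cleanly on a CPU-only machine.
checkpoint = torch.load("path/to/model_weights.pth", map_location=device)
# model.load_state_dict(checkpoint)   # then move the model itself:
# model.to(device)
```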

Steps to Run Offline (Traditional)

Preparations:

  1. Clone this repository
  2. Install a recent version of Python 3
  3. Download the model weights file according to the readme.txt file

Installation of dependencies:

  1. cd blm/app
  2. pip install -r requirements.txt

Run and use the server:

  1. python app.py
  2. Open the app in your browser at http://127.0.0.1:5000/ (the exact address is printed on the command line, so please check there). A scripted smoke test, as an alternative to the browser, is sketched below.
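If you would rather exercise the local server from a script than from the browser, a request along the following lines should work. The upload route and form field name are placeholders (check app.py for the actual values), and the output handling assumes the base64 JPEG response described in the Q&A above.

```python
import requests

# Placeholder route and field name: substitute the actual ones from app.py.
URL = "http://127.0.0.1:5000/"

with open("protest_photo.jpg", "rb") as f:
    response = requests.post(URL, files={"image": ("protest_photo.jpg", f, "image/jpeg")})

print(response.status_code)
# The response is expected to contain the anonymized image as a base64-encoded JPEG.
```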

Steps to Run Offline (Docker)

Preparations:

  1. Clone this repository
  2. Download the model weights file according to the readme.txt file

Build Docker image:

  1. cd blm/app
  2. docker build -t blm .

Run Docker container:

  1. docker run --rm --volume "$PWD":/app -p 5000:5000 -m 6g blm python app.py
  2. Open the app in your browser at http://127.0.0.1:5000/ (the exact address is printed on the command line, so please check there)