2021-22-Term1-Fairness-in-Socio-technical-Systems

This repository is for archiving course materials for IS457: Fairness in Socio-technical Systems (AY2021-22 Term1), School of Computing and Information Systems, Singapore Management University.

Instructor: Associate Professor KWAK Haewoon (@haewoon)

Synopsis

We interact with a variety of services and systems in our daily lives. While manual labor still plays a part in those systems, other parts are becoming more and more automated by artificial intelligence (AI). In general, we expect such systems to treat users fairly, and this expectation is even stronger when a system uses AI built on big data and complex algorithms. Compared to human decision-making, which can be subjective, algorithmic systems are expected to work objectively and treat users fairly. In recent years, however, concerns have been rising about the potential harms of these systems, rooted in biases embedded in socio-technical systems. The inherently opaque nature of AI systems makes the problem worse.

For example, YouTube recommends the next videos to watch when a video finishes playing. On the one hand, those recommendations are helpful for finding interesting videos among a tremendous number of YouTube videos; on the other hand, it is often unclear how or why a particular video is recommended. What happens if the recommendation algorithm is biased, for example, by favoring videos with a specific (political) view? Whether such biases are intentional or unintentional, users would be exposed to a certain set of videos and are likely to be influenced by them.

YouTube is only one of many examples, as AI systems are becoming pervasive. They are actively used in various areas, including healthcare, hiring, financial services, advertising, policymaking, and internet services. Thus, it is crucial to ensure that those systems work fairly, without hidden biases. It is easy to overlook that biases are embedded not only in the AI systems themselves but also in the established processes and human operators within those systems.

The goal of this course is to provide students with an extensive understanding of diverse concepts of fairness and bias in socio-technical systems through examples across domains, from healthcare to internet search. Students will then learn how to audit practical systems for fairness and bias through recent case studies. The course also aims to examine public concerns related to AI systems and help students think deeply about ethical AI within multiple social contexts.

Prerequisite

  • IS111 Intro to Programming / CS101 Programming Fundamentals
  • IS217 Analytics Foundations / MGMT108 Intro to Business Analytics / CS105 Statistical Thinking for Data Science

Topics to be Covered

| Week | Description | Slides | Group activity |
|------|-------------|--------|----------------|
| W1 | Introduction | PDF | 1. Colab - Twitter's image cropping algorithm; 2. Algorithmic curations in your favorite apps |
| W2 | Case studies of measuring fairness and bias (I) | PDF | 1. Google Teachable Machine; 2. High- and low-resource hospitals' EHRs |
| W3 | Case studies of measuring fairness and bias (II) | PDF | 1. WooClap: Ethical considerations in AI Healthcare; 2. Gender and racial stereotypes in image search |
| W4 | Auditing algorithms | PDF | 1. Design your audit study |
| W5 | Bias in data and machine learning models (I) | PDF | 1. Sharing your story about various cognitive biases |
| W6 | Project consultation | | |
| W7 | Project idea pitching | | |
| W8 | Recess week | | |
| W9 | Bias in data and machine learning models (II) | PDF | 1. Inappropriate synsets in ImageNet; 2. Bias in word embeddings |
| W10 | Interpretability of algorithmic systems | PDF | 1. Colab - Interpretable machine learning |
| W11 | Fairness mechanisms | PDF | 1. Error metrics in context; 2. WooClap: Is this algorithm fair? |
| W12 | HCI perspective of fairness + Project consultation | PDF | 1. Fairness metric in skin cancer prediction; 2. WooClap: Trade-off between accuracy and fairness |
| W13 | Project presentation | | |
| W14 | Study week | | |
| W15 | Final exam | | |
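
For students curious about what the fairness metrics discussed around W11-W12 (e.g., "Error metrics in context" and "Is this algorithm fair?") look like in practice, below is a minimal, purely illustrative Python sketch. It is not taken from the course materials; the toy data, function names, and choice of metrics (demographic parity difference and equal-opportunity difference) are only one common way to quantify group fairness for a binary classifier.

```python
# Illustrative sketch only (not part of the official course materials):
# two common group-fairness metrics for a toy binary classifier.
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups (coded 0/1)."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_diff(y_true, y_pred, group):
    """Absolute difference in true-positive rates (recall) between two groups (coded 0/1)."""
    tpr = []
    for g in (0, 1):
        mask = (group == g) & (y_true == 1)  # positives belonging to group g
        tpr.append(y_pred[mask].mean())
    return abs(tpr[0] - tpr[1])

# Toy data: true labels, model predictions, and a binary protected attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print("Demographic parity difference:", demographic_parity_diff(y_pred, group))
print("Equal opportunity difference:", equal_opportunity_diff(y_true, y_pred, group))
```

Even on this toy example, the two metrics disagree in magnitude, which hints at the trade-offs between fairness definitions discussed in the lectures.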

(!Optional!🙏) References

Reading is completely optional. The main findings of the papers below are summarized during the lectures.
If you are interested in learning more, please check the papers.

For your convenience, I mark reading materials as one of three levels: I (Introductory), M (interMediate), and A (Advanced).

  • I: Introductory materials are typically news articles, YouTube videos, or technical blog posts that are easy to follow. They introduce a new concept or case for those who are not familiar with this domain.
  • M: Intermediate materials are typically research papers, but (I guess) most students can understand them without much difficulty.
  • A: Advanced materials are research papers for which I recommend reading at least the introduction and discussion. For example, papers that are theory-driven, use a lot of jargon (e.g., medical or legal terms), or build on advanced concepts (e.g., language models) fall into this category. It would be great if you can understand the whole paper, but it might not be easy for junior undergraduates.

Some materials are additionally marked as R (recommended).

Note that SMU students can access many news websites (e.g., The New York Times, Wall Street Journal) for free through the library website.
Detailed registration instructions will be available on the library website.

W1: Introduction

W2: Case studies of measuring fairness and bias (I) - Healthcare, Criminal justice system

W3: Case studies of measuring fairness and bias (II) - Hiring, Urban mobility, Immigration system, Web search, Wikipedia

W4: Auditing algorithms

W5: Bias in data and machine learning models (I)

W9: Bias in data and machine learning models (II)

W10: Interpretability of algorithmic systems

W11: Fairness mechanisms

W12: HCI perspective of fairness

WX: Re-imagining fairness