Lab assignments for Introduction to Data-Centric AI
This repository contains the lab assignments for the Introduction to Data-Centric AI class.
Contributions are most welcome! If you have ideas for improving the labs, please open an issue or submit a pull request.
For more hands-on experience with techniques taught in this class, participate in the Data-centric AI Competition 2023.
Lab 1: Data-Centric AI vs. Model-Centric AI
The first lab assignment walks you through an ML task of building a text classifier, and illustrates the power (and often simplicity) of data-centric approaches.
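To illustrate the data-centric idea, here is a minimal sketch (not the lab's actual code): a tiny nearest-centroid classifier stands in for the text classifier, and simply removing one mislabeled training point improves test accuracy more than any model tweak would. All names and data here are hypothetical.

```python
import numpy as np

def centroid_accuracy(X, y, X_test, y_test):
    """Tiny nearest-centroid classifier, standing in for a text classifier."""
    centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])
    dists = np.linalg.norm(X_test[:, None] - centroids[None], axis=-1)
    preds = np.argmin(dists, axis=1)
    return (preds == y_test).mean()

# Toy training data: the third point clearly belongs to class 1 but is
# mislabeled as class 0.
X = np.array([[0., 0.], [0.2, 0.2], [5., 5.], [5., 5.2], [5.2, 5.]])
y = np.array([0, 0, 0, 1, 1])
X_test = np.array([[0.1, 0.1], [3.3, 3.3], [5.1, 5.1]])
y_test = np.array([0, 1, 1])

noisy = centroid_accuracy(X, y, X_test, y_test)          # trained on bad data
clean = centroid_accuracy(X[[0, 1, 3, 4]], y[[0, 1, 3, 4]],
                          X_test, y_test)                # bad label removed
```

Dropping the single mislabeled example — a pure data fix, with the model untouched — raises accuracy on this toy test set.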
Lab 2: Label Errors
This lab guides you through writing your own implementation of automatic label error identification using Confident Learning, the technique taught in today’s lecture.
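The core rule of Confident Learning can be sketched in a few lines (a simplified version, not the lab's reference implementation; the function name `find_label_issues` is ours): compute a per-class confidence threshold as the mean predicted probability of class j over examples labeled j, then flag any example whose most confidently predicted class (among classes exceeding their threshold) disagrees with its given label.

```python
import numpy as np

def find_label_issues(labels, pred_probs):
    """Flag likely label errors via a simplified Confident Learning rule."""
    labels = np.asarray(labels)
    n, k = pred_probs.shape
    # Threshold for class j: mean predicted probability of class j over
    # examples whose given label is j.
    thresholds = np.array([pred_probs[labels == j, j].mean() for j in range(k)])
    issues = []
    for i in range(n):
        # Classes the model confidently predicts for example i.
        confident = [j for j in range(k) if pred_probs[i, j] >= thresholds[j]]
        if confident:
            best = max(confident, key=lambda j: pred_probs[i, j])
            if best != labels[i]:
                issues.append(i)
    return issues

# Toy example: the third point is labeled 0 but the model is confident it is 1.
labels = [0, 0, 0, 1, 1, 1]
pred_probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9],
                       [0.1, 0.9], [0.15, 0.85], [0.2, 0.8]])
```

`pred_probs` should come from out-of-sample (e.g. cross-validated) predictions, since in-sample probabilities are overconfident on the very labels being checked.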
Lab 3: Dataset Creation and Curation
In this lab assignment, you will analyze a dataset that has already been collected and labeled by multiple annotators.
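A natural starting point for multi-annotator data is majority-vote consensus with a per-example agreement rate. A minimal sketch (the helper name `consensus` is ours, not from the lab):

```python
from collections import Counter

def consensus(labels_per_example):
    """Majority-vote label plus annotator agreement for each example."""
    results = []
    for labs in labels_per_example:
        # Most common label and how many annotators chose it.
        (label, count), = Counter(labs).most_common(1)
        results.append((label, count / len(labs)))
    return results

out = consensus([["cat", "cat", "dog"], ["dog", "dog", "dog"]])
```

Low-agreement examples are candidates for relabeling; the lab explores more principled estimators than raw majority vote.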
Lab 4: Data-centric Evaluation of ML Models
In this lab assignment, you will try to improve the performance of a given model solely by improving its training data, using some of the strategies covered in the lecture.
Lab 5: Class Imbalance, Outliers, and Distribution Shift
The lab assignment for this lecture is to implement and compare different methods for identifying outliers. For this lab, we've focused on anomaly detection. You are given a clean training dataset consisting of many pictures of dogs, and an evaluation dataset that contains outliers (non-dogs). Your task is to implement and compare various methods for detecting these outliers. You may implement some of the ideas presented in today's lecture, or you can look up other outlier detection algorithms in the linked references or online.
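One simple baseline you might try is a k-nearest-neighbor distance score: an evaluation image whose k-th nearest training image is far away is likely an outlier. A minimal sketch on feature vectors (the function name `knn_outlier_scores` is ours):

```python
import numpy as np

def knn_outlier_scores(train, test, k=2):
    """Score each test point by the distance to its k-th nearest training point.

    Higher scores suggest the point lies far from the (clean) training
    distribution, i.e. is more likely an outlier.
    """
    dists = np.linalg.norm(test[:, None, :] - train[None, :, :], axis=-1)
    return np.sort(dists, axis=1)[:, k - 1]

# Toy demo: training cluster near the origin, one far-away test point.
train = np.array([[0., 0.], [0.1, 0.], [0., 0.1], [0.1, 0.1]])
test = np.array([[0.05, 0.05], [5., 5.]])
scores = knn_outlier_scores(train, test, k=2)
```

In the lab itself you would apply this to learned image embeddings (e.g. from a pretrained network) rather than raw pixels, and compare it against other detectors.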
Lab 6: Growing or Compressing Datasets
This lab guides you through an implementation of active learning.
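The simplest active-learning acquisition strategy is uncertainty sampling: label next the unlabeled examples the current model is least confident about. A minimal least-confidence sketch (the function name is ours):

```python
import numpy as np

def least_confident(pred_probs, batch_size=1):
    """Pick the indices of the unlabeled examples with the lowest max-probability.

    pred_probs: (n_unlabeled, n_classes) predicted probabilities from the
    current model; returns batch_size indices to send for labeling.
    """
    confidence = pred_probs.max(axis=1)
    return np.argsort(confidence)[:batch_size].tolist()

# The middle example (50/50 prediction) should be queried first.
pred_probs = np.array([[0.9, 0.1], [0.5, 0.5], [0.8, 0.2]])
```

The active-learning loop then retrains on the newly labeled points and repeats; the lab compares such strategies against random selection.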
Lab 7: Interpretability in Data-Centric ML
This lab guides you through finding issues in a dataset’s features by applying interpretability techniques.
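One model-agnostic interpretability technique usable here is permutation importance: shuffle one feature's values and measure how much accuracy drops. Features with near-zero importance (or suspiciously dominant ones) often point to data issues such as uninformative or leaky columns. A minimal sketch, with all names and data hypothetical:

```python
import numpy as np

def permutation_importance(model_fn, X, y, n_repeats=5, seed=0):
    """Mean drop in accuracy when one feature's column is shuffled."""
    rng = np.random.default_rng(seed)
    base = (model_fn(X) == y).mean()
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # destroy feature j's relation to y
            drops.append(base - (model_fn(Xp) == y).mean())
        importances.append(float(np.mean(drops)))
    return importances

# Demo: feature 0 fully determines the label, feature 1 is noise.
X = np.array([[1., 5.], [2., 1.], [-1., 3.], [-2., 2.], [3., 0.], [-3., 4.]])
y = (X[:, 0] > 0).astype(int)
model_fn = lambda data: (data[:, 0] > 0).astype(int)
importances = permutation_importance(model_fn, X, y)
```

Here the noise feature gets exactly zero importance because the toy model never reads it; with a real trained model the signal is noisier, hence the averaging over repeats.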
Lab 8: Encoding Human Priors: Data Augmentation and Prompt Engineering
This lab guides you through prompt engineering: crafting inputs for large language models (LLMs). With large pre-trained models, even small amounts of data can make them very useful. This lab is also available on Colab.
Lab 9: Data Privacy and Security
The lab assignment for this lecture is to implement a membership inference attack. You are given a trained machine learning model, available as a black-box prediction function. Your task is to devise a method to determine whether or not a given data point was in the training set of this model. You may implement some of the ideas presented in today’s lecture, or you can look up other membership inference attack algorithms.
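A classic baseline for this task is the confidence-thresholding attack: models tend to be overconfident on examples they were trained on, so guess "member" whenever the black-box model assigns high probability to the point's true label. A minimal sketch (the function name and toy black-box are ours, not the lab's):

```python
def confidence_attack(predict_proba, x, true_label, threshold=0.9):
    """Guess training-set membership from the model's confidence.

    Overfit models typically assign higher probability to the true label of
    examples they have seen during training than to unseen examples.
    """
    return predict_proba(x)[true_label] >= threshold

# Toy black-box: high confidence on a training point, lower on a fresh one.
probs = {"seen": [0.97, 0.03], "unseen": [0.55, 0.45]}
predict_proba = lambda x: probs[x]
```

In practice the threshold is calibrated on shadow models or held-out data, and stronger attacks compare per-example losses rather than raw confidences.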
License
Copyright (c) by the instructors of Introduction to Data-Centric AI (dcai.csail.mit.edu).
dcai-lab is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
dcai-lab is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
See the GNU Affero General Public License for details.