
AI safety cost-effectiveness

This repository contains code for estimating the cost-effectiveness of various AI safety field-building programs. It was originally built for the Center for AI Safety (CAIS). This fork estimates the cost-effectiveness of Apart Research's programs:

Each of these programs will be evaluated in turn; programs listed earlier have higher priority.

Using this repository

To get this repository working on your local machine:

  1. Install Python and your preferred code editor.
  2. Fork and clone this repository.
  3. Install the repository's dependencies by executing pip install -r requirements.txt in your terminal.
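
After installing, a quick sanity check is to confirm that the dependencies import cleanly. Here is a minimal sketch, assuming numpy and matplotlib appear in requirements.txt (the file itself is the authoritative list):

    # Sanity check (illustrative): the package names below are assumptions;
    # requirements.txt is the authoritative list of this repo's dependencies.
    import importlib

    for pkg in ["numpy", "matplotlib"]:
        try:
            importlib.import_module(pkg)
            print(f"{pkg}: OK")
        except ImportError:
            print(f"{pkg}: missing -- re-run pip install -r requirements.txt")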

Then, see the examples README for demonstrations of the repository's use.

If you would like assistance with this repo and/or your own evaluations, contact CAIS at contact@safe.ai.

Directory Structure

  • src: Contains source code.
    • models: Contains cost-effectiveness models, the main logic of this project.
    • parameters: Contains parameter instances for each program evaluated. (The parameters README describes what we mean by instances.)
    • scripts: Contains scripts for generating outputs, organized into two kinds of subdirectories: examples of how to use the repository, and code used to generate content for written posts.
    • utilities: Contains functions and assumptions that are common across multiple cost-benefit analyses. Organized into subdirectories for assumptions, defaults, functions, plotting, and sampling.
  • output: Contains data and plot outputs generated from scripts.

The scripts feed parameters into models to produce outputs. The utilities are used at many stages of the project: functions for specifying parameters, lower-level functions for models, sampling functions for the scripts, plotting functions for the outputs, and more.
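
To make this flow concrete, here is a minimal, self-contained sketch of that pipeline. It is not the repository's actual code: the distributions, numbers, and variable names below are illustrative assumptions standing in for what lives in parameters, models, and scripts.

    # Illustrative pipeline sketch (not this repository's actual code).
    import numpy as np

    rng = np.random.default_rng(0)
    n_samples = 10_000

    # "parameters": distributions over uncertain program inputs (all assumed)
    cost = rng.lognormal(mean=np.log(100_000), sigma=0.3, size=n_samples)
    participants = rng.poisson(lam=40, size=n_samples)
    value_per_participant = rng.lognormal(mean=np.log(5_000), sigma=0.5,
                                          size=n_samples)

    # "models": turn sampled parameters into a cost-effectiveness estimate
    benefit = participants * value_per_participant
    ce_ratio = benefit / cost

    # "scripts"/"output": summarize the samples (a real script might also plot)
    print(f"median benefit/cost: {np.median(ce_ratio):.2f}")
    print(f"90% interval: {np.percentile(ce_ratio, [5, 95]).round(2)}")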

Other resources connected to this project

Our introduction post lays out our approach to modeling, including our motivations for using models, the benefits and limitations of our key metric, comparisons between programs for students and professionals, and more.

We have two posts evaluating student programs and professional programs, respectively.

Finally, for definitions and values of the parameters used in our models, refer to the Parameter Documentation sheet.

Programs

Evaluated in this repository

  1. The Trojan Detection Challenge (or ‘TDC’): A prize at a top ML conference.
  2. The NeurIPS ML Safety Social (or ‘NeurIPS Social’): A social at a top ML conference.
  3. The NeurIPS ML Safety Workshop (or ‘NeurIPS Workshop’): A workshop at a top ML conference.
  4. The Atlas Fellowship: A 10-day in-person program providing a scholarship and networking opportunities for select high school students.
  5. ML Safety Scholars (or ‘MLSS’): CAIS’s discontinued summer course, designed to teach undergraduates ML safety.
  6. Student Group: A high-cost, high-engagement student group at a top university, similar to HAIST, MAIA, or SAIA.
  7. Undergraduate Stipends: Specifically, the ML Safety Student Scholarship, which provides stipends to undergraduates connected with research opportunities.

Not evaluated in this repository

Note that the generic professional_program and student_program models:

  1. Are flexible enough to accommodate a wide range of possible field-building programs, and
  2. Could be easily repurposed for research areas beyond AI safety.

We hope that the tools in this repository can be used or extended by other organizations. For suggestions on how to go about this, see the examples README.
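
To illustrate point 2, the sketch below reuses the same generic structure for a hypothetical student group in another field. The class, parameter names, and numbers are invented for this example; they are not the repository's actual professional_program or student_program interface (see the examples README for the real entry points).

    # Illustrative repurposing sketch; all names and numbers are assumptions.
    from dataclasses import dataclass

    @dataclass
    class ProgramParams:
        cost_usd: float          # total program cost
        n_participants: int      # people reached by the program
        p_career_shift: float    # chance a participant shifts careers
        value_per_shift: float   # value assigned to one career shift

    def benefit_cost_ratio(p: ProgramParams) -> float:
        """Expected benefit per dollar under this toy model."""
        return (p.n_participants * p.p_career_shift * p.value_per_shift) / p.cost_usd

    # Example: the same structure applied to a biosecurity student group.
    biosec_group = ProgramParams(cost_usd=50_000, n_participants=30,
                                 p_career_shift=0.1, value_per_shift=250_000)
    print(f"benefit/cost: {benefit_cost_ratio(biosec_group):.2f}")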

License

This project is licensed under the MIT License.