/dynamic-alg-service-agreements

A home for exploring new modes of participatory oversight through AI transparency building blocks.

How do we come to terms with AI systems?

Corporate self-regulation and government regulation are not the only possible paths toward a more trustworthy relationship between people and AI systems. The goal of this project and repository is to challenge the existing terms-of-service and end-user license agreements between people and companies that utilize any form of algorithmic system. To the best of our knowledge, existing license agreements cannot protect users’ rights and liberties when they interact with the algorithmic outcomes of AI systems, nor the collective rights of communities that might be impacted by an AI system.

What kinds of user interfaces could enable a more trustworthy relationship between people and a consumer product that heavily utilizes a particular AI model? Could we design transparency building blocks for AI algorithms, ensuring open and participatory model oversight centered on agency and accountability?

Imagine computable service agreements that are open-source, generative, dynamic, and accommodating of the plurality of our multidimensional human preferences. Each agreement takes as its input the output of an algorithmic system and is designed to provide more transparency by verifying certain properties of the algorithmic outcome. For example, such agreements could be built to:

  • provide an interactive explanation interface for algorithmic outputs, allowing a user of an online platform to see which data variables were taken into account in producing a specific algorithmic outcome
  • alert a user when their repeated interactions with an online platform may be exhibiting filter-bubble or echo-chamber effects
  • alert a user when the information being curated for them by an online platform may be the subject of targeted political ad campaigns
  • allow users of an online platform to dispute an algorithmic outcome (while also providing sufficient evidence to support the dispute)
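To make the idea concrete, here is a minimal sketch of what a computable agreement could look like: a set of checks that each inspect an algorithmic outcome and report a finding. All names, fields, and checks below are hypothetical illustrations, not part of this project or any real platform's API:

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

@dataclass
class AlgorithmicOutcome:
    """Output of an algorithmic system plus the inputs that produced it."""
    decision: str
    features_used: dict  # variable name -> value taken into account

# A check inspects an outcome and returns a human-readable finding, or None.
Check = Callable[[AlgorithmicOutcome], Optional[str]]

@dataclass
class ServiceAgreement:
    """Computable agreement: verifies properties of an algorithmic outcome."""
    checks: list = field(default_factory=list)

    def evaluate(self, outcome: AlgorithmicOutcome) -> list:
        findings = [check(outcome) for check in self.checks]
        return [f for f in findings if f is not None]

# Transparency check: surface the data variables behind a decision.
def explain_variables(outcome: AlgorithmicOutcome) -> Optional[str]:
    names = ", ".join(sorted(outcome.features_used))
    return f"Variables used for '{outcome.decision}': {names}"

# Accountability check: flag a sensitive variable (hypothetical example).
def flag_sensitive(outcome: AlgorithmicOutcome) -> Optional[str]:
    if "zip_code" in outcome.features_used:
        return "Warning: decision used 'zip_code', a possible location proxy."
    return None

agreement = ServiceAgreement(checks=[explain_variables, flag_sensitive])
outcome = AlgorithmicOutcome(
    decision="loan_denied",
    features_used={"income": 42000, "zip_code": "02139"},
)
for finding in agreement.evaluate(outcome):
    print(finding)
```

Because each check is just a function over the outcome, agreements of this shape could be openly authored, composed, and audited independently of the platform that produced the decision.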

This work builds on a research collaboration with Laura Kahn, which we presented at the AAAI Spring Symposium 2020 - Towards Responsible AI in Surveillance, Media, and Security through Licensing. Read our paper Dynamic Algorithmic Service Agreements Perspective or this blog post on Integrity beyond the Terms of Service Agreement in a Human + AI world.

How it works

(diagram)

Related projects:

Key academic references we're building on:

Have you been thinking about service agreements in the context of consumer tech AI systems? What other relevant projects or academic publications should we look at?