Designing Ethical Futures

Taught by Monika Seyfried and Annelie Berner at the Copenhagen Institute for Interactive Design Summer School 2019

July 22nd – 26th, 2019

Description:

Ethics comes up when things go down. During the Designing Ethical Futures course, we explore ethical theories to reflect upon what we design and why. We engage with the methods of speculative design to envision the unexpected consequences of new technology and learn how to create experiences that bring ethical theories to life. As more and more products become connected – to each other, to us, to their environments – more and more data are shared, stored and processed algorithmically. In this workshop, we use three core elements – ethics, futurescaping and new technology – to spark our inspiration and ground our creative prototyping. Participants create experiences that immerse people in possible futures, using ethical theories to question those futures and speculative design to rethink how we may live with new technology. This workshop has an experimental approach: we focus on building thought-provoking projects that explore the role of ethics in the creation process. The class welcomes all participants, regardless of age or educational background.

Main Topics:

  • The future of memory
  • Cultural and technological forms of memory
  • Ethical theories, applied ethics
  • Surveillance economy
  • Bias, machine learning
  • Data flows, system mapping
  • Futurescaping, speculative design

Resources

  • Full reference list here, including reading snippets on the course topic, inspirational futurescaping studios and speculative design practice

Schedule

Day 1

  • Focus: Speculative Futures
  • Lecture and activities:
    • Defining futurescaping, worldbuilding, scenario-making
    • Group work to create and communicate a speculative scenario using future dreams, trends and signals, plus visual techniques for presenting the world within which the scenario exists (video, collage, writing+image examples shared)

Day 2

  • Focus: Worries and Opportunities
  • Lecture and activities:
    • The role of speculative design in futurescaping
    • How to deepen the initial speculation through multiple lenses on that future, creating implication trees (handout provided)
    • Identifying the scenarios with the most risk and using them as opportunities for design
    • Brainstorming and prototyping

Day 3

  • Focus: From Practical to Ethical
  • Lecture and activities:
    • Ethical theories (virtue ethics, utilitarianism, consequentialism, care ethics, the capabilities approach)
    • Ethics in design and technology
    • Brainstorming with ethics as inspiration

Day 4

  • Focus: Ethical Critique & Iteration
  • Lecture and activities:
    • How to question and evaluate prototypes from an ethical perspective (handouts provided)
    • Testing and iterative prototyping

Day 5

  • Focus: Student Work
  • On the last day of the course, students presented their projects

Setting the Stage:

Why ethical? We are firmly in the moment of asking not just “can we” do design X, but rather, “should we”? And why, or why not? People-centered research and design-thinking approaches might help us along the way. But there is another approach we need to add as well: zooming in to our own pulse as designers, and also jumping out of our bodies and zooming way out to the level of the system-wide implications we need to take into account when considering the answer to “should we”.
Why futures? Once we start to understand our work in relation to a system, we have a new dimension to consider: a system continues over time. While we cannot predict the future, we can envision possible scenarios in a simulated time jump, and design within those scenarios in order to inform our work in the present.
What are the intended and unintended consequences? What are the positive aspects we want to focus on? What are the not-so-positive aspects we want to mitigate? How do we make decisions about our futures? How do we understand what we should and should not do? If we think about “shoulds”, we look to ethical theories of how we should conduct ourselves. But we also may think about wants, desires, personal goals. When technology enters this complex space, we may see machines that define personal wellbeing. However, personal wellbeing is complex and highly individual. Can an algorithm take in the full complexity of what a good life means to me and reduce wellbeing to a number?

Brief:

In this course (2019), we focus on memory within new technology. The brief is to identify possible futures related to memory and new technologies, choose one of the most promising or worrying futures, and create prototype designs that would lead towards, or push against, the selected future. The prototype, service or object should mediate people’s relationships with memory in the future(s) they have started to build.

Memory:

We live our lives based on the things we have learned, remembered and, crucially, understood thus far. Literally. That is how a child learns to walk and talk. That is how, as adults, we develop an awareness of ourselves and a consciousness around what we are doing and how. Another way to think about memory, how it is processed and how it bears upon our futures, is from a machine standpoint: memory is data, bits. It is processed through a form of intelligence, whether a machine learning algorithm or something else, and choices are made based on previous memories and history, whether of mistakes or successes. These choices are made from the point of view of an artificial intelligence that uses its capacity to machine-learn in order to make decisions about its, and our, futures. So what happens when the lines get fuzzy between memory as something that exists only inside of me, and memory as something that also lives in my devices and on the servers of the clouds I connect to?

Ethical Decisions:

In order to learn from my memories and try to make decisions about my futures, I would hope that, at the very least, my memories are “real.” We know the old saying that history repeats itself, so we should learn from history and not make the same mistakes. But what happens when history is purposefully manipulated to present some alternate version of reality? What should we learn from in order to make our next decisions? Whether the “we” in this case is human or artificial, the question remains. And if my visual memory is stored in Google’s cloud, what kinds of processes might determine the image that I (really) want, as opposed to the image that I actually tried to put onto the cloud? We use machines that have been encoded with decisions about how we should look, and therefore about the “memory” of us. And then there is the strategy behind it: why did this machine seek to gather so much data about our faces? And how might this interfere with the governance structures we know?

Futures:

AI and decision-support systems are embedded in a wide array of social institutions, from influencing who is released from jail to shaping the news we see. For example, Facebook’s automated content editing system recently censored the Pulitzer Prize-winning image of a nine-year-old girl fleeing napalm bombs during the Vietnam War. The girl is naked; to an image-processing algorithm, this might appear as a simple violation of the policy against child nudity. But to human eyes, Nick Ut’s photograph, “The Terror of War”, means much more: it is an iconic portrait of the indiscriminate horror of conflict, and it has an assured place in the history of photography and international politics. The removal of the image caused an international outcry before Facebook backed down and restored it. “What they do by removing such images, no matter what good intentions, is to redact our shared history,” said the Prime Minister of Norway, Erna Solberg.