This is the 2019 class of the course "Human-AI Interaction: Designing the Explanation Interface".
GOAL: In this research-based teaching course you will learn about principles of, and differences between, explainable AI and causability in order to design, develop and test how humans and machine learning systems can interact and collaborate for effective decision making. You will also experiment with explanation user-interface frameworks. Note that causability (analogous to usability) refers to properties of human intelligence, whilst explainability (explainable AI) refers to properties of computational intelligence (algorithms).
MOTIVATION: Artificial Intelligence (AI) and Machine Learning (ML) demonstrate impressive success. Particularly deep learning (DL) approaches hold great promise (see differences between AI/ML/DL here). Unfortunately, the best-performing methods turn out to be non-transparent, so-called "black boxes". Such models have no explicit declarative knowledge representation and therefore have difficulty generating explanatory and contextual structures. This considerably limits the achievement of their full potential in certain application domains. Consequently, in safety-critical systems and domains (e.g. in the medical domain) we may raise the questions: "Can we trust these results?" and "Can we explain how and why a result was achieved?". This is not only crucial for user acceptance, e.g. in medicine the ultimate responsibility remains with the human, but it has also been mandatory since 25 May 2018 under the European GDPR, which includes a "right to explanation".
RELEVANCE: There is growing industrial demand for machine learning approaches that are not only well-performing but also transparent, interpretable and trustworthy, e.g. in medicine, but also in production (Industry 4.0), robotics, autonomous driving, recommender systems, etc.
BACKGROUND: Methods to reenact the machine decision-making process, and to reproduce and comprehend the learning and knowledge extraction process, need effective user interfaces. For decision support it is necessary to understand the causality of learned representations. If human intelligence is complemented by machine learning, and in some cases even overruled, humans must still be able to understand and, above all, to interactively influence the machine decision process on demand. This requires context awareness and sensemaking to close the gap between human thinking and "machine thinking".
SETTING: In this course the students will have the unique opportunity to work on mini-projects on real-world problems within our digital pathology project. Students will learn basic principles of human-computer interaction, interaction design, usability engineering and evaluation methods, and get an introduction to causability research. On the basis of this course there are opportunities for further work (software development tasks, bachelor's, master's and PhD positions, see open work).