/xai-iml-sota

Interesting resources related to Explainable Artificial Intelligence, Interpretable Machine Learning, Interactive Machine Learning, Human-in-the-Loop Machine Learning and Visual Analytics.

Primary language: R

Interesting resources related to XAI (Explainable Artificial Intelligence)

Researchers

Lectures

Publications

You can find a collection of scientific publications listed here. The latest publications can also be downloaded from arxiv.org by running the R script provided in the xai-iml-sota.R file; a minimal sketch of such a step is shown below.
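
A possible sketch of such a download step, assuming the aRxiv CRAN package as the client; the actual xai-iml-sota.R script may use a different query or interface.

```r
# Sketch only: fetch recent XAI/IML papers from arxiv.org with the aRxiv package.
# The query string and selected columns are assumptions; adjust them to taste.
library(aRxiv)

papers <- arxiv_search(
  query = 'abs:"explainable artificial intelligence" OR abs:"interpretable machine learning"',
  limit = 50
)

# newest submissions first
papers <- papers[order(papers$submitted, decreasing = TRUE), ]
head(papers[, c("submitted", "title")])
```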

Books

Videos

  • Muhammad Rehman Zafar - A Deterministic Local Interpretable Model-Agnostic Explanations Approach for Computer-Aided Diagnosis | AISC Watch on Youtube
  • Cynthia Rudin - Please Stop Doing "Explainable" ML Watch on Youtube
  • Hubert Baniecki - DrWhy.AI - Tools for Explainable Artificial Intelligence | AISC Watch on Youtube
  • Hubert Baniecki - Tools for Explainable Artificial Intelligence Watch on Youtube
  • Ozan Ozyegen - Explainable AI for Time Series - Literature Review | AISC Watch on Youtube
  • Raheel Ahmad, Muddassar Sharif - explainX - Explainable AI for model developers | AISC Watch on Youtube
  • Zifan Wang - Towards Frequency-Based Explanation for Robust CNN | AISC Watch on Youtube
  • Sherin Mathews - DeepFakes & Explainable AI Applications in NLP, Biomedical & Malware Classification Watch on Youtube
  • Andrey Sharapov - XAI Explainable AI in Retail | AISC Watch on Youtube
  • Yucheng Yang - Interpretable Neural Networks for Panel Data Analysis in Economics | AISC Watch on Youtube
  • Ali El Sharif - A Literature Review on Interpretability for Machine Learning | AISC Watch on Youtube
  • LEAP - Improving the quality of explanations with local embedding perturbations Watch on Youtube
  • Explainable AI in Industry Tutorial Watch on Youtube
  • "Why Should I Trust you?" Explaining the Predictions of Any Classifier Watch on Youtube
  • Interpretability - now what? - Been Kim Watch on Youtube
  • How to Fail Interpretability Research - Been Kim Watch on Youtube
  • Interpretable Machine Learning Using LIME Framework - Kasia Kulma (PhD), Data Scientist Watch on Youtube
  • Machine Learning Interpretability with Driverless AI Watch on Youtube
  • Interpretable Machine Learning Watch on Youtube
  • Practical Tips for Interpreting Machine Learning Models - Patrick Hall, H2O.ai Watch on Youtube
  • Interpretable Machine Learning Meetup Watch on Youtube
  • CVPR18: Tutorial: Part 1: Interpretable Machine Learning for Computer Vision Watch on Youtube
  • Cynthia Rudin - Interpretable ML for Recidivism Prediction - The Frontiers of Machine Learning Watch on Youtube
  • Algorithms for interpretable machine learning (KDD 2014 Presentation) Watch on Youtube
  • Cynthia Rudin: New Algorithms for Interpretable Machine Learning Watch on Youtube
  • Interpretable Machine Learning: Methods for understanding complex models Watch on Youtube
  • Machine Learning and Interpretability Watch on Youtube
  • Interpretability Beyond Feature Attribution Watch on Youtube
  • Interpretable machine learning (part 3): Shapley values and packages for IML Watch on Youtube
  • Kilian Weinberger, "Interpretable Machine Learning" Watch on Youtube
  • Emily Fox: "Interpretable Neural Network Models for Granger Causality Discovery" Watch on Youtube
  • Inherent Trade-offs with the Local Explanations Paradigm Watch on Youtube
  • Interpreting Deep Neural Networks (DNNs) Watch on Youtube
  • iml: A new Package for Model-Agnostic Interpretable Machine Learning Watch on Youtube
  • Open the Black Box: an Introduction to Model Interpretability with LIME and SHAP - Kevin Lemagnen Watch on Youtube
  • Paper Review Calls 013 -- Bianca Furtuna -- Anchors: High-Precision Model-Agnostic Explanations Watch on Youtube
  • Reliable Interpretability - Explaining AI Model Predictions | Sara Hooker @PyBay2018 Watch on Youtube
  • Alejandro Saucedo - Guide towards algorithm explainability in machine learning @ PyData London 2019 Watch on Youtube
  • Alejandro Saucedo - ML explainability, bias evaluation and reproducibility. Watch on Youtube
  • Alejandro Saucedo - Algorithmic bias and explainability in machine learning with tensorflow. Watch on Youtube
  • Ajay Thampi - Interpretable AI or How I Learned to Stop Worrying and Trust AI @ PyData London 2019 Watch on Youtube
  • Alejandro Saucedo, A practical guide towards explainability and bias evaluation in ML @ PyConBy 2019 Watch on Youtube
  • Alexander Engelhardt - Interpretable Machine Learning: How to make black box @ PyData Berlin 2019 Watch on Youtube
  • Alex Hanna - Responsible AI Practices: Fairness in ML @ PyData Miami 2019 Watch on Youtube
  • Yanjun Qi - Making Deep Learning Interpretable for Analyzing Sequential Data about Gene Regulation Watch on Youtube
  • Illija Ilievski - Interpretable forecasting of financial time series with deep learning Watch on Youtube
  • IUI2019 keynote : DARPA’s Explainable Artificial Intelligence (XAI) Program Watch on Youtube
  • Artificial Intelligence Colloquium: Explainable AI Watch on Youtube
  • Dr. Wojciech Samek - Explainable AI - Methods, Applications & Recent Developments | ODSC Europe 2019 Watch on Youtube

Audios

2018

  • Explaining Explainable AI; In this webinar, we will conduct a panel discussion with Patrick Hall and Tom Aliff around the business requirements of explainable AI and the subsequent value that can benefit any organization

  • Approaches to Fairness in Machine Learning with Richard Zemel; Today we continue our exploration of Trust in AI with this interview with Richard Zemel, Professor in the department of Computer Science at the University of Toronto and Research Director at Vector Institute.

  • Making Algorithms Trustworthy with David Spiegelhalter; In this, the second episode of our NeurIPS series, we’re joined by David Spiegelhalter, Chair of Winton Center for Risk and Evidence Communication at Cambridge University and President of the Royal Statistical Society.

Publication Venues

Journals

  • Call for Papers: Special Issue on Explainable Artificial Intelligence in the Spatial Domain (X-GeoAI) Call for Paper Deadline: November 30, 2021

  • Special Issue on: Explainable AI (XAI) for Web-based Information Processing in Elsevier's Information Processing & Management Call for Paper Deadline: October 30, 2021

  • Special Issue "Advances in Explainable Artificial Intelligence" in MDPI's Information Call for Paper Deadline: July 31, 2021

  • Special Issue "Explainable Artificial Intelligence (XAI)" in MDPI's Applied Sciences Call for Paper Deadline: July 10, 2021

  • Special Issue on Explainable Artificial Intelligence for Healthcare in Elsevier's Future Generation Computer Systems Call for Paper Deadline: July 01, 2021

  • Call for Papers: Special Issue on Explainable and Interpretable Machine Learning and Data Mining in Springer's Data Mining and Knowledge Discovery Call for Paper Deadline: March 31, 2021

  • IEEE SPM Special Issue on Explainability in Data Science: Interpretability, Reproducibility, and Replicability in IEEE Signal Processing Call for Paper Deadline: March 22, 2021

  • Call for Papers: Special Issue on Foundations of Data Science in Springer's Machine Learning Call for Paper Deadline: March 01, 2021

  • Special Issue Explainable and Trustworthy Artificial Intelligence in IEEE Computational Intelligence Magazine Call for Paper Deadline: February 22, 2021

  • Special Issue Springer/Nature BMC Medical Informatics and Decision Making Call for Paper Deadline: December 31, 2020

  • Call for Papers: Special Issue on Explainable AI and Machine Learning in IEEE's Magazine - Computer Call for Paper Deadline: December 31, 2020

  • Special Issue on Advances in Explainable (XAI) and Responsible (RAI) Artificial Intelligence in Elsevier's Journal of Information Fusion Call for Paper Deadline: December 15, 2020

  • Special issue call: "Explainable AI on Multimedia Computing" in ACM Transactions on Multimedia Computing, Communications, and Applications Call for Paper Deadline: December 01, 2020

  • Special Issue on Explainable Robotic Systems in ACM Transactions on Human-Robot Interaction Call for Paper Deadline: December 01, 2020

  • Special Issue on Algorithmic Bias and Fairness in Search and Recommendation in Elsevier's Information Processing & Management Call for Paper Deadline: November 15, 2020

  • Special Issue on Explainable AI on Multimedia Computing in ACM Transactions on Multimedia Computing, Communications, and Applications Call for Paper Deadline: November 01, 2020

  • Special Issue on Explainable AI and Machine Learning in IEEE Computer Call for Paper Deadline: October 30, 2020

  • Special Issue "Explainable Artificial Intelligence (XAI)" in Applied Sciences Call for Paper Deadline: September 30, 2020

  • Explainable AI for Clinical and Population Health Informatics in IEEE Journal of Biomedical and Health Informatics Call for Paper Deadline: August 24, 2020

  • Special Issue on Explainable Artificial Intelligence in Elsevier's Artificial Intelligence Call for Paper Deadline: May 01, 2020

  • Special Issue on Learning Complex Couplings and Interactions in IEEE Intelligent Systems Call for Paper Deadline: May 30, 2020

  • ACM THRI Special Issue on Explainable Robotic Systems in ACM Transactions on Human-Robot Interaction Call for Paper Deadline: December 01, 2019

Conferences

Workshops

2020

  • 2nd Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence Call for Paper Deadline: September 20, 2020

  • 3rd Workshop on Advances In Argumentation In Artificial Intelligence (AI3 2019) Call for Paper Deadline: August 30, 2020

  • International Workshop on Explainable and Interpretable Machine Learning (XI-ML) Call for Paper Deadline: July 23, 2020

  • XXAI: Extending Explainable AI Beyond Deep Models and Classifiers Call for Paper; Videos; Link to Papers Deadline: June 19, 2020

  • eXplainable Knowledge Discovery in Data Mining Call for Paper Deadline: June 19, 2020

  • 1st Workshop on Data Science with Human in the Loop (DaSH) Call for Paper Deadline: June 10, 2020

  • Explainable AI xAI 2020 Call for Paper Link to Papers Deadline: January 26, 2020

  • 3rd international workshop on explainable AI in Dublin 2020 Call for Paper

2019

  • 2nd international workshop on explainable AI in Canterbury 2019 Call for Paper

2018

2017

  • NIPS 2017 Tutorial on Fairness in Machine Learning; Solon Barocas, Moritz Hardt
  • Interpretability for AI safety; Victoria Krakovna; Long-term AI safety, Reliably specifying human preferences and values to advanced AI systems, Setting incentives for AI systems that are aligned with these preferences
  • Debugging machine-learning; Michał Łopuszyński; Model introspection: you can answer the "why" question only for very simple models (e.g., a linear model or basic decision trees). Sometimes it is instructive to run such a simple model on your dataset, even though it does not provide top-level performance. You can boost a simple model by feeding it more advanced (non-linearly transformed) features.

Tools

  • EUCA; A practical prototyping tool to guide AI practitioners and researchers to design explainable AI systems for non-technical end-users. It contains end-users' perceptions on feature-, example-, and rule-based explanations.

  • GRACE; This work proposes generating concise and informative contrastive samples to explain a neural network model's predictions on tabular datasets.

  • ModelOriented; A collection of tools and packages.

  • DLIME; This work proposes a deterministic version of Local Interpretable Model-Agnostic Explanations (LIME); experimental results on three different medical datasets show the superiority of Deterministic Local Interpretable Model-Agnostic Explanations (DLIME).

  • LIME; Explaining the predictions of any machine learning classifier.

  • Anchor; An anchor explanation is a rule that sufficiently "anchors" the prediction locally, such that changes to the rest of the feature values of the instance do not matter. In other words, for instances on which the anchor holds, the prediction is (almost) always the same.

  • shap; A game theoretic approach to explain the output of any machine learning model.

  • DrWhy; DrWhy is the collection of tools for eXplainable AI (XAI).

  • TreeInterpreter; Package for interpreting scikit-learn's decision tree and random forest predictions. Allows decomposing each prediction into bias and feature contribution components as described in http://blog.datadive.net/interpreting-random-forests/. For a dataset with n features, each prediction on the dataset is decomposed as prediction = bias + feature_1_contribution + ... + feature_n_contribution.

  • bLIMEy; The Local Interpretable Model-agnostic Explanations (LIME) algorithm is often mistakenly unified with a more general framework of surrogate explainers, which may lead to a belief that it is the solution to surrogate explainability. In this paper we empower the community to "build LIME yourself" (bLIMEy) by proposing a principled algorithmic framework for building custom local surrogate explainers of black-box model predictions, including LIME itself.

  • triplot; The triplot package provides an instance-level explainer for groups of explanatory variables, called aspect importance. The package enables grouping predictors into entities called aspects and then calculates the contribution of those aspects to the prediction for a given observation.

  • iml; iml is an R package that interprets the behavior and explains the predictions of machine learning models. It implements model-agnostic interpretability methods, meaning they can be used with any machine learning model; a short usage sketch follows below.
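
A minimal usage sketch, assuming a randomForest fit on the Boston housing data; any fitted model and data frame can be wrapped the same way.

```r
# Wrap any model in a Predictor, then apply model-agnostic methods to it.
library(iml)
library(randomForest)

data("Boston", package = "MASS")
rf <- randomForest(medv ~ ., data = Boston, ntree = 100)

X <- Boston[, setdiff(names(Boston), "medv")]
predictor <- Predictor$new(rf, data = X, y = Boston$medv)

imp <- FeatureImp$new(predictor, loss = "mae")        # permutation feature importance
plot(imp)

shap <- Shapley$new(predictor, x.interest = X[1, ])   # Shapley values for one observation
plot(shap)
```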

  • lightgbmExplainer; An R package that makes a LightGBM model fully interpretable.

  • XAI; XAI is a machine learning library designed with AI explainability at its core. XAI contains various tools that enable the analysis and evaluation of data and models.

  • Break Down; The breakDown package is a model agnostic tool for decomposition of predictions from black boxes. Break Down Table shows contributions of every variable to a final prediction. Break Down Plot presents variable contributions in a concise graphical way. This package works for binary classifiers and general regression models.

  • iBreakDown; The iBreakDown package is a model-agnostic tool for explaining predictions of black-box ML models. The Break Down Table shows the contribution of every variable to the final prediction, and the Break Down Plot presents those contributions in a concise graphical way; see the sketch below.
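
A minimal sketch, assuming the apartments data shipped with DALEX; any model wrapped in a DALEX explainer can be explained the same way.

```r
# Attribute a single prediction to the model's variables with iBreakDown.
library(DALEX)
library(iBreakDown)

model <- lm(m2.price ~ ., data = apartments)
explainer <- explain(model,
                     data = apartments[, colnames(apartments) != "m2.price"],
                     y    = apartments$m2.price)

bd <- break_down(explainer, new_observation = apartments[1, ])
plot(bd)  # Break Down plot: per-variable contributions to this one prediction
```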

  • pyBreakDown; Python implementation of breakDown package.

  • Alibi; Alibi is an open-source Python library aimed at machine learning model inspection and interpretation. The initial focus of the library is on black-box, instance-based model explanations.

  • Eli5; A library for debugging/inspecting machine learning classifiers and explaining their predictions.

  • live; Local Interpretable (Model-Agnostic) Visual Explanations. Interpretability of complex machine learning models is a growing concern. This package helps to understand the key factors that drive the decision made by a complicated predictive model (a so-called black-box model). This is achieved through local approximations based either on an additive regression-like model or a CART-like model that allows for higher-order interactions.

  • vivo; This package helps to calculate instance level variable importance (local sensitivity). The importance measure is based on Ceteris Paribus profiles and can be calculated in eight variants.

  • modelStudio; The modelStudio package automates the explanatory analysis of machine learning predictive models. Generate advanced interactive and animated model explanations in the form of a serverless HTML site with only one line of code; a minimal example follows below. This tool is model-agnostic and therefore compatible with most black-box predictive models and frameworks.
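
A minimal sketch, assuming the titanic_imputed data from DALEX and a plain glm; the DALEX explainer object is what modelStudio() consumes.

```r
# One call on a DALEX explainer produces the interactive HTML dashboard.
library(DALEX)
library(modelStudio)

model <- glm(survived ~ ., data = titanic_imputed, family = "binomial")
explainer <- explain(model,
                     data  = titanic_imputed[, colnames(titanic_imputed) != "survived"],
                     y     = titanic_imputed$survived,
                     label = "glm")

modelStudio(explainer)  # writes and opens a serverless HTML site
```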

  • captum; Captum is a model interpretability and understanding library for PyTorch. Captum means "comprehension" in Latin and contains general-purpose implementations of integrated gradients, saliency maps, smoothgrad, vargrad and others for PyTorch models. It has quick integration for models built with domain-specific libraries such as torchvision, torchtext, and others.

  • casme; This repository contains code, originally forked from the ImageNet training example in PyTorch, modified to present the performance of classifier-agnostic saliency map extraction, a practical algorithm that trains a classifier-agnostic saliency mapping by simultaneously training a classifier and a saliency mapping.

  • MindsDB; MindsDB is an explainable AutoML framework for developers built on top of PyTorch. It enables you to build, train and test state-of-the-art ML models with as little as one line of code.

  • AI Explainability 360; The IBM AI Explainability 360 toolkit is an open-source library that supports interpretability and explainability of datasets and machine learning models. The AI Explainability 360 Python package includes a comprehensive set of algorithms that cover different dimensions of explanations along with proxy explainability metrics.

  • AI Fairness 360; The AI Fairness 360 toolkit is an extensible open-source library containing techniques developed by the research community to help detect and mitigate bias in machine learning models throughout the AI application lifecycle.

  • pdp; pdp is an R package for constructing partial dependence plots (PDPs) and individual conditional expectation (ICE) curves. PDPs and ICE curves are part of a larger framework referred to as interpretable machine learning (IML), which also includes (but is not limited to) variable importance plots (VIPs); a short usage sketch follows below.
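
A short PDP/ICE sketch, assuming a randomForest fit on the Boston housing data; swap in your own model and predictor of interest.

```r
# Partial dependence and ICE curves for one predictor with the pdp package.
library(pdp)
library(randomForest)

data("Boston", package = "MASS")
rf <- randomForest(medv ~ ., data = Boston, ntree = 200)

pd <- partial(rf, pred.var = "lstat", train = Boston)   # average effect of lstat on medv
plotPartial(pd)

ice <- partial(rf, pred.var = "lstat", train = Boston, ice = TRUE)  # one curve per observation
plotPartial(ice)
```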

  • DICE; Generate Diverse Counterfactual Explanations for any machine learning model.

  • dowhy; DoWhy is a Python library for causal inference that supports explicit modeling and testing of causal assumptions. DoWhy is based on a unified language for causal inference, combining causal graphical models and potential outcomes frameworks.

  • Aequitas: A Bias and Fairness Audit Toolkit; Recent work has raised concerns about the risk of unintended bias in AI systems in use today, which can affect individuals unfairly based on race, gender or religion, among other characteristics. While many bias metrics and fairness definitions have been proposed in recent years, there is no consensus on which metric or definition should be used, and there are very few available resources to operationalize them. Aequitas facilitates informed and equitable decisions around developing and deploying algorithmic decision-making systems for data scientists, machine learning researchers and policymakers alike.

  • InterpretML by Microsoft; Python library by Microsoft related to explainability of ML models.

  • Assessing Causality from Observational Data using Pearl's Structural Causal Models;

  • sklearn_explain; Model explanation provides the ability to interpret the effect of the predictors on the composition of an individual score.

  • heatmapping.org; This webpage aims to regroup publications and software produced as part of a joint project at Fraunhofer HHI, TU Berlin and SUTD Singapore on developing new methods to understand nonlinear predictions of state-of-the-art machine learning models. Machine learning models, in particular deep neural networks (DNNs), are characterized by very high predictive power, but in many cases are not easily interpretable by a human. Interpreting a nonlinear classifier is important to gain trust in the prediction and to identify potential data selection biases or artefacts. The project studies in particular techniques to decompose the prediction in terms of contributions of individual input variables such that the produced decomposition (i.e. explanation) can be visualized in the same way as the input data.

  • ggeffects; Daniel Lüdecke; Computes marginal effects from statistical models and returns the results as tidy data frames, ready to use with the 'ggplot2' package. Marginal effects can be calculated for many different models; interaction terms, splines and polynomial terms are also supported. The main functions are ggpredict(), ggemmeans() and ggeffect(), and there is a generic plot() method to plot the results using 'ggplot2'; a minimal sketch follows below.
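
A minimal sketch using the functions named above (ggpredict() plus the generic plot() method); any regression model supported by ggeffects works the same way.

```r
# Marginal effect of one predictor, holding the other terms constant.
library(ggeffects)

m <- lm(mpg ~ hp + wt + factor(cyl), data = mtcars)
pred <- ggpredict(m, terms = "hp")
plot(pred)  # returns a ggplot2 object
```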

  • KDD 2018: Explainable Models for Healthcare AI; The Explainable Models for Healthcare AI tutorial was presented by a trio from KenSci Inc. that included a data scientist and a clinician. The premise of the session was that explainability is particularly important in healthcare applications of machine learning, due to the far-reaching consequences of decisions, high cost of mistakes, fairness and compliance requirements. The tutorial walked through a number of aspects of interpretability and discussed techniques that can be applied to explain model predictions.

  • MAGMIL: Model Agnostic Methods for Interpretable Machine Learning; The European Union's new General Data Protection Regulation, enforced from 25 May 2018, will have a potential impact on the routine use of machine learning algorithms by restricting automated individual decision-making (that is, algorithms that make decisions based on user-level predictors) which "significantly affects" users. The law also effectively creates a "right to explanation," whereby a user can ask for an explanation of an algorithmic decision that was made about them. Considering such challenging norms on the use of machine learning systems, we are making an attempt to make the models more interpretable. While we are concerned with developing a deeper understanding of decisions made by a machine learning model, the idea of extracting explanations from the machine learning system, also known as model-agnostic interpretability, has some benefits over model-specific interpretability methods in terms of flexibility.

  • A toolbox to iNNvestigate neural networks' predictions!; Maximilian Alber; In recent years neural networks furthered the state of the art in many domains, e.g., object detection and speech recognition. Despite this success, neural networks are typically still treated as black boxes: their internal workings are not fully understood and the basis for their predictions is unclear. In the attempt to understand neural networks better, several methods were proposed, e.g., Saliency, Deconvnet, GuidedBackprop, SmoothGrad, IntegratedGradients, LRP, PatternNet & PatternAttribution. Due to the lack of reference implementations, comparing them is a major effort. This library addresses this by providing a common interface and out-of-the-box implementations for many analysis methods. Our goal is to make analyzing neural networks' predictions easy!

  • Black Box Auditing and Certifying and Removing Disparate Impact; This repository contains a sample implementation of Gradient Feature Auditing (GFA) meant to be generalizable to most datasets. For more information on the repair process, see our paper on Certifying and Removing Disparate Impact. For information on the full auditing process, see our paper on Auditing Black-box Models for Indirect Influence.

  • Skater: Python Library for Model Interpretation/Explanations; Skater is a unified framework to enable model interpretation for all forms of models, helping one build the interpretable machine learning systems often needed for real-world use cases (the authors are actively working toward enabling faithful interpretability for all forms of models). It is an open-source Python library designed to demystify the learned structures of a black-box model both globally (inference on the basis of a complete data set) and locally (inference about an individual prediction).

  • Weight Watcher; Charles Martin; Weight Watcher analyzes the fat tails in the weight matrices of deep neural networks (DNNs). This tool can predict trends in the generalization accuracy of a series of DNNs, such as VGG11, VGG13, ..., or even the entire series of ResNet models, without needing a test set! It relies on recent research into heavy- (fat-) tailed self-regularization in DNNs.

  • Adversarial Robustness Toolbox - ART; This is a library dedicated to adversarial machine learning. Its purpose is to allow rapid crafting and analysis of attacks and defense methods for machine learning models. The Adversarial Robustness Toolbox provides an implementation for many state-of-the-art methods for attacking and defending classifiers.

  • Model Describer; A Python script that generates an HTML report summarizing predictive models; interactive and rich in descriptions.

  • AI Fairness 360; Python library developed by IBM to help detect and remove bias in machine learning models. Some introduction

  • The What-If Tool: Code-Free Probing of Machine Learning Models; An interactive tool for What-If scenarios developed in Google, part of TensorBoard.

  • Impact encoding for categorical features; Imagine working with a dataset containing all the zip codes in the United States, i.e. nearly 40,000 unique categories. How would you deal with that kind of data if you planned to do predictive modelling? One-hot encoding doesn't get you anywhere useful, since it would add 40,000 sparse variables to your dataset, and throwing the data out could leave valuable information on the table. In this post I examine how to deal with categorical variables of high cardinality using a strategy called impact encoding, illustrated with a data set of used car sales; the problem is especially well suited because there are several categorical features with many levels. A minimal sketch of the idea follows below.
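
A minimal base-R illustration of the idea (not code from the post itself); in practice the encoding should be computed on training folds only, with smoothing, to avoid target leakage.

```r
# Impact (target-mean) encoding for a high-cardinality categorical variable.
impact_encode <- function(x, y, prior = mean(y), min_n = 20) {
  level_mean <- tapply(y, x, mean)           # per-level mean of the target
  level_n    <- table(x)
  w          <- level_n / (level_n + min_n)  # shrink rare levels toward the prior
  enc        <- w * level_mean + (1 - w) * prior
  as.numeric(enc[as.character(x)])
}

# Toy example: ~200 synthetic "zip codes" encoded against a numeric target.
set.seed(1)
zip   <- sample(sprintf("Z%03d", 1:200), 5000, replace = TRUE)
price <- rnorm(5000, mean = 10 + (as.integer(factor(zip)) %% 7), sd = 2)
head(impact_encode(zip, price))
```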

  • FairTest; FairTest enables developers or auditing entities to discover and test for unwarranted associations between an algorithm's outputs and certain user subpopulations identified by protected features.

  • Explanation Explorer; A visual tool implemented in Python for visual diagnostics of binary classifiers using instance-level explanations (local explainers).

  • ggeffects; Create Tidy Data Frames of Marginal Effects for 'ggplot' from Model Outputs. The aim of the ggeffects package is similar to the broom package: transforming "untidy" input into a tidy data frame, especially for further use with ggplot. However, ggeffects does not return model summaries; rather, this package computes marginal effects at the mean or average marginal effects from statistical models and returns the result as a tidy data frame (as tibbles, to be more precise).

  • LOFO; LOFO (Leave One Feature Out) Importance calculates the importances of a set of features based on a metric of choice, for a model of choice, by iteratively removing each feature from the set and evaluating the performance of the model with a validation scheme of choice, based on the chosen metric; a base-R illustration of the idea follows below.
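
LOFO itself is a Python package; the following is only a base-R illustration of the leave-one-feature-out idea under a simple train/validation split.

```r
# Leave-one-feature-out importance: increase in validation error when a feature is dropped.
set.seed(1)
idx   <- sample(nrow(mtcars), 22)
train <- mtcars[idx, ]
valid <- mtcars[-idx, ]

rmse <- function(model, newdata)
  sqrt(mean((newdata$mpg - predict(model, newdata))^2))

features  <- setdiff(names(mtcars), "mpg")
base_rmse <- rmse(lm(mpg ~ ., data = train), valid)

lofo <- sapply(features, function(f) {
  reduced <- train[, setdiff(names(train), f)]
  rmse(lm(mpg ~ ., data = reduced), valid) - base_rmse
})
sort(lofo, decreasing = TRUE)
```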

Toolkits

Model debugging and refinement through visualisation toolkits

The process of identifying and addressing defects or issues within a deep learning model that fails to converge or does not achieve acceptable performance, and of interactively incorporating expert knowledge and expertise into the improvement and refinement of the model through a set of rich user interactions, in addition to semi-supervised learning or active learning.

  • Keras Visualization Toolkit; keras-vis is a high-level toolkit for visualizing and debugging your trained keras neural net models.

  • TensorBoard visualizes the structure of a given computational graph that a user creates and provides basic line graphs and histograms of user-selected statistics.

  • Visdom is a web-based interactive visualization toolkit that is easy to use with deep learning libraries for PyTorch.

  • DL4J UI allows users to monitor the training process with several basic visualization components.

  • DIGITS simplifies common deep learning tasks such as managing data, designing and training neural networks on multi-GPU systems, monitoring performance in real time with advanced visualizations, and selecting the best performing model from the results browser for deployment.

  • CNNVis; Towards Better Analysis of Deep Convolutional Neural Networks. By Visual Analytics Group of Tsinghua University.

  • ActiVis provides a visual exploratory analysis of a given deep learning model via multiple coordinated views, such as a matrix view and an embedding view. [slideshare] [demo]

  • LSTMVis allows a user to select a hypothesis input range to focus on local state changes, to match these state changes to similar patterns in a large data set, and to align these results with structural annotations from their domain. [github]

  • DGMTracker is a visual analytics tool that helps experts understand and diagnose the training processes of deep generative models.

  • GANViz aims to help experts understand the adversarial process of GANs in depth. Specifically, GANViz evaluates the model performance of the two subnetworks of GANs, provides evidence and interpretations of the models' performance, and empowers comparative analysis with that evidence.

  • DeepVis Toolbox; This is the code required to run the Deep Visualization Toolbox, as well as to generate the neuron-by-neuron visualizations using regularized optimization. The toolbox and methods are described casually here

  • EnsembleVis; Visualization and Visual Analysis of Ensemble Data: A Survey

  • DNN Genealogy [github]; DNN Genealogy is an interactive visualization tool that offers a visual summary of representative DNNs and their evolutionary relationships.

  • ReVACNN; ReVACNN: Real-Time Visual Analytics for Convolutional Neural Network.

  • Summit [github]; Summit is an interactive system that scalably and systematically summarizes and visualizes what features a deep learning model has learned and how those features interact to make predictions.

  • CleverHans; This repository contains the source code for CleverHans, a Python library to benchmark machine learning systems' vulnerability to adversarial examples. You can learn more about such vulnerabilities on the accompanying blog.

  • Tensorboard what-if; Building effective machine learning models means asking a lot of questions. Look for answers using the What-if Tool, an interactive visual interface designed to probe your models better. Compatible with TensorBoard, Jupyter and Colaboratory notebooks. Works on Tensorflow and Python-accessible models.

  • Tensorflow Lucid; A collection of infrastructure and tools for research in neural network interpretability.

  • TensorFlow Model Analysis (TFMA); TensorFlow Model Analysis (TFMA) is a library for evaluating TensorFlow models. It allows users to evaluate their models on large amounts of data in a distributed manner, using the same metrics defined in their trainer. These metrics can be computed over different slices of data and visualized in Jupyter notebooks.

  • TF-Explain; Interpretability Methods for tf.keras models with Tensorflow 2.0.

Demos

  • Explainable AI Demos; Machine learning models, in particular deep neural networks (DNNs), are characterized by very high predictive power, but in many cases, are not easily interpretable by a human. Interpreting a nonlinear classifier is important to gain trust into the prediction, and to identify potential data selection biases or artifacts. This demo shows how decisions made by systems based on artificial intelligence can be explained by LRP.

  • DeepExplain; A unified framework of perturbation and gradient-based attribution methods for Deep Neural Networks interpretability. DeepExplain also includes support for Shapley Values sampling.

Online

2020

  • Three Model Explanability Methods Every Data Scientist Should Know; Moto DEI; Thanks to many researchers' contributions, there are some useful tools that give explainability to machine learning models. With those tools, we can know and understand (or at least feel like we understand) which variables affect the prediction and by how much.

  • The three stages of Explainable AI: How explainability facilitates real world deployment of AI; Clodéric Mars; The explainability of AI has become a major concern for AI builders and users, especially in the enterprise world. As AIs have more and more impact on the daily operations of businesses, trust, acceptance, accountability and certifiability become requirements for any deployment at a large scale.

  • In Automation We Trust: How to Build an Explainable AI Model; Jean-Michel Franco; As AI becomes more advanced and complex, the algorithms and logic powering it become less transparent. This lack of clarity can be unnerving for some people. Recent high-profile AI failures illustrate this.

  • Well, AI, How do You Explain That?!; Alon Tvina; AI explainability is one of the most significant barriers to its adoption — but revealing the logic can take the human-machine relationship to the next level. The AI boom is upon us, but a closer look at specific use cases reveals a big barrier to adoption. In every vertical, businesses are struggling to make the most of AI’s promise. The biggest pain point? AI explainability.

  • What Explainable AI fails to explain (and how we fix that); Alvin Wan; Neural networks are accurate but un-interpretable. Decision Trees are interpretable but inaccurate in computer vision. We have a solution.

  • Explainable AI or XAI: the key to overcoming the accountability challenge; Hani Hagras; AI has become a key part of our day-to-day lives and business operations. A report from Microsoft and EY that analysed the outlook for AI in 2019 and beyond stated that "65% of organisations in Europe expect AI to have a high or a very high impact on the core business."

  • Explainable AI: The key to Responsibly Adopting AI in Medicine; Niv Mizrahi; In this special guest feature, Niv Mizrahi, CTO & Co-Founder of Emedgene, discusses a field of technology that constantly is rising in importance – explainable (or interpretable) AI, and specifically how it has become a key responsibility for adopting AI in medicine. Emedgene is a genomics company using AI to automatically interpret genetic data so that health organizations can scale personalized care to wider populations. An expert in machine learning and big data, Niv has led Emedgene’s development from idea to a mature solution used by leading genomics labs.

  • Explainable Monitoring: Stop flying blind and monitor your AI; Krishna Gade; Data Science teams find Explainable Monitoring essential to manage their AI.

2019

  • Teaching AI, Ethics, Law and Policy; Asher Wilk; The cyberspace and the development of intelligent systems using Artificial Intelligence (AI) created new challenges to computer professionals, data scientists, regulators and policy makers. For example, self-driving cars raise new technical, ethical, legal and policy issues. This paper proposes a course Computers, Ethics, Law, and Public Policy, and suggests a curriculum for such a course. This paper presents ethical, legal, and public policy issues relevant to building and using software and artificial intelligence. It describes ethical principles and values relevant to AI systems.
  • An introduction to explainable AI, and why we need it; Patrick Ferris; I was fortunate enough to attend the Knowledge Discovery and Data Mining (KDD) conference this year. Of the talks I went to, there were two main areas of research that seem to be on a lot of people's minds: firstly, finding a meaningful representation of graph structures to feed into neural networks (Oriol Vinyals from DeepMind gave a talk about their Message Passing Neural Networks). The second area, and the focus of this article, is explainable AI models. As we generate newer and more innovative applications for neural networks, the question of 'How do they work?' becomes more and more important.
  • The AI Black Box Explanation Problem; At a very high level, we articulated the problem in two different flavours: eXplanation by Design (XbD): given a dataset of training decision records, how to develop a machine learning decision model together with its explanation; Black Box eXplanation (BBX): given the decision records produced by a black box decision model, how to reconstruct an explanation for it.
  • VOZIQ Launches 'Agent Connect,' an Explainable AI Product to Enable Large-Scale Customer Retention Programs; RESTON, VIRGINIA, USA, April 3, 2019 /EINPresswire.com/ -- VOZIQ, an enterprise cloud-based application solution provider that enables recurring revenue businesses to drive large-scale predictive customer retention programs, announced the launch of its new eXplainable AI (XAI) product 'Agent Connect' to help businesses enhance the proactive retention capabilities of their most critical resource, customer retention agents. 'Agent Connect' is VOZIQ's newest product powered by next-generation eXplainable AI (XAI); it brings together multiple retention risk signals with expressed and inferred needs, sentiment, churn drivers and behaviors that lead to customer attrition, discovered directly from millions of customer interactions by analyzing unstructured and structured customer data, and converts those insights into easy-to-act, prescriptive intelligence about the predicted health of any customer.
  • Derisking machine learning and artificial intelligence ; Machine learning and artificial intelligence are set to transform the banking industry, using vast amounts of data to build models that improve decision making, tailor services, and improve risk management. According to the McKinsey Global Institute, this could generate value of more than $250 billion in the banking industry. But there is a downside, since machine-learning models amplify some elements of model risk. And although many banks, particularly those operating in jurisdictions with stringent regulatory requirements, have validation frameworks and practices in place to assess and mitigate the risks associated with traditional models, these are often insufficient to deal with the risks associated with machine-learning models. Conscious of the problem, many banks are proceeding cautiously, restricting the use of machine-learning models to low-risk applications, such as digital marketing. Their caution is understandable given the potential financial, reputational, and regulatory risks. Banks could, for example, find themselves in violation of antidiscrimination laws, and incur significant fines—a concern that pushed one bank to ban its HR department from using a machine-learning résumé screener. A better approach, however, and ultimately the only sustainable one if banks are to reap the full benefits of machine-learning models, is to enhance model-risk management.
  • Explainable AI should help us avoid a third 'AI winter'; The General Data Protection Regulation (GDPR) that came into force last year across Europe has rightly made consumers and businesses more aware of personal data. However, there is a real risk that through over-correcting around data collection critical AI development will be negatively impacted. This is not only an issue for data scientists, but also those companies that use AI-based solutions to increase competitiveness. The potential negative impact would not only be on businesses implementing AI but also on consumers who may miss out on the benefits AI could bring to the products and services they rely on.
  • Explainable AI: From Prediction To Understanding; It’s not enough to make predictions. Sometimes, you need to generate a deep understanding. Just because you model something doesn’t mean you really know how it works. In classical machine learning, the algorithm spits out predictions, but in some cases, this isn’t good enough. Dr. George Cevora explains why the black box of AI may not always be appropriate and how to go from prediction to understanding.
  • Why Explainable AI (XAI) is the future of marketing and e-commerce; "New machine-learning systems will have the ability to explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future." – David Gunning, Head of DARPA. As machine learning begins to play a greater role in the delivery of personalized customer experiences in commerce and content, one of the most powerful opportunities is the development of systems that offer marketers the ability to maximize every dollar spent on marketing programs via actionable insights. But the rise of AI in business for actionable insights also creates a challenge: How can marketers know and trust the reasoning behind why an AI system is making recommendations for action? Because AI makes decisions using incredibly complex processes, its decisions are often opaque to the end-user.
  • Interpretable AI or How I Learned to Stop Worrying and Trust AI Techniques to build Robust, Unbiased AI Applications; Ajay Thampi; In the last five years alone, AI researchers have made significant breakthroughs in areas such as image recognition, natural language understanding and board games! As companies are considering handing over critical decisions to AI in industries like healthcare and finance, the lack of understanding of complex machine learned models is hugely problematic. This lack of understanding could result in models propagating bias and we’ve seen quite a few examples of this in criminal justice, politics, retail, facial recognition and language understanding.
  • In Search of Explainable Artificial Intelligence; Today, if a new entrepreneur wants to understand why the banks rejected a loan application for his start-up, or if a young graduate wants to know why the large corporation for which he was hoping to work did not invite her for an interview, they will not be able to discover the reasons that led to these decisions. Both the bank and the corporation used artificial intelligence (AI) algorithms to determine the outcome of the loan or the job application. In practice, this means that if your loan application is rejected, or your CV rejected, no explanation can be provided. This produces an embarrassing scenario, which tends to relegate AI technologies to suggesting solutions, which must be validated by human beings.
  • Explainable AI and the Rebirth of Rules; Artificial intelligence (AI) has been described as a set of "prediction machines." In general, the technology is great at generating automated predictions. But if you want to use artificial intelligence in a regulated industry, you better be able to explain how the machine predicted a fraud or criminal suspect, a bad credit risk, or a good candidate for drug trials. International law firm Taylor Wessing (the firm) wanted to use AI as a triage tool to help advise clients of the firm about their predicted exposure to regulations such as the Modern Slavery Act or the Foreign Corrupt Practices Act. Clients often have suppliers or acquisitions around the world, and they need systematic due diligence to determine where they should investigate more deeply into possible risk. Supply chains can be especially complicated with hundreds of small suppliers. Rumors of Rule Engines' Death Have Been Greatly Exaggerated.
  • Attacking discrimination with smarter machine learning; Here we discuss "threshold classifiers," a part of some machine learning systems that is critical to issues of discrimination. A threshold classifier essentially makes a yes/no decision, putting things in one category or another. We look at how these classifiers work, ways they can potentially be unfair, and how you might turn an unfair classifier into a fairer one. As an illustrative example, we focus on loan granting scenarios where a bank may grant or deny a loan based on a single, automatically computed number such as a credit score.
  • Better Preference Predictions: Tunable and Explainable Recommender Systems; Amber Roberts; Ad recommendations should be understandable to the individual consumer, but is it possible to increase interpretability without sacrificing accuracy?
  • Machine Learning is Creating a Crisis in Science; Kevin McCaney; The adoption of machine-learning techniques is contributing to a worrying number of research findings that cannot be repeated by other researchers.
  • Artificial Intelligence and Ethics; Jonathan Shaw; In March 2018, at around 10 p.m., Elaine Herzberg was wheeling her bicycle across a street in Tempe, Arizona, when she was struck and killed by a self-driving car. Although there was a human operator behind the wheel, an autonomous system—artificial intelligence—was in full control. This incident, like others involving interactions between people and AI technologies, raises a host of ethical and proto-legal questions. What moral obligations did the system's programmers have to prevent their creation from taking a human life? And who was responsible for Herzberg's death? The person in the driver's seat? The company testing the car's capabilities? The designers of the AI system, or even the manufacturers of its onboard sensory equipment?
  • Building Trusted Human-Machine Partnerships; A key ingredient in effective teams – whether athletic, business, or military – is trust, which is based in part on mutual understanding of team members’ competence to fulfill assigned roles. When it comes to forming effective teams of humans and autonomous systems, humans need timely and accurate insights about their machine partners’ skills, experience, and reliability to trust them in dynamic environments. At present, autonomous systems cannot provide real-time feedback when changing conditions such as weather or lighting cause their competency to fluctuate. The machines’ lack of awareness of their own competence and their inability to communicate it to their human partners reduces trust and undermines team effectiveness.
  • HOW AUGMENTED ANALYTICS AND EXPLAINABLE AI WILL CAUSE A DISRUPTION IN 2019 & BEYOND; Kamalika Some; Artificial intelligence (AI) is a transformational $15 trillion opportunity which has caught the attention of all tech users, leaders and influencers. Yet, as AI becomes more sophisticated, the algorithmic 'black box' increasingly dominates decision-making. To achieve confident outcomes and stakeholder trust, with the ultimate aim of capitalising on the opportunities, it is essential to know the rationale for how the algorithm arrived at its recommendation or decision, which is the basic premise behind Explainable AI (XAI).
  • Why ‘Explainable AI’ is the Next Frontier in Financial Crime Fighting ; Chad Hetherington; Financial institutions (FIs) must manage compliance budgets without losing sight of primary functions and quality control. To answer this, many have made the move to automating time-intensive, rote tasks like data gathering and sorting through alerts by adopting innovative technologies like AI and machine learning to free up time-strapped analysts for more informed and precise decision-making processes.
  • Machine Learning Interpretability: Do You Know What Your Model Is Doing?; Marcel Spitzer; With the adoption of GDPR, there are now EU-wide regulations concerning automated individual decision-making and profiling (Art. 22, also termed the "right to explanation"), obliging companies to give individuals information about processing, to introduce ways for them to request intervention, and even to carry out regular checks to make sure that the systems are working as intended.
  • Building explainable machine learning models; Thomas Wood; Sometimes as data scientists we will encounter cases where we need to build a machine learning model that should not be a black box, but which should make transparent decisions that humans can understand. This can go against our instincts as scientists and engineers, as we would like to build the most accurate model possible.
  • AI is not IT; Silvie Spreeuwenberg; XAI suggests something in between. It is still narrow AI but used in such a way that there is a feedback loop to the environment. The feedback loop may involve human intervention. We understand the scope of the narrow AI solution. We can adjust the solution when the task at hand requires more knowledge, or are warned in a meaningful way when the task at hand does not fit in the scope of the AI solution.
  • A computer program used for bail and sentencing decisions was labeled biased against blacks. It’s actually not that clear.; This past summer, a heated debate broke out about a tool used in courts across the country to help make bail and sentencing decisions. It’s a controversy that touches on some of the big criminal justice questions facing our society. And it all turns on an algorithm.
  • AAAS: Machine learning 'causing science crisis'; Machine-learning techniques used by thousands of scientists to analyse data are producing results that are misleading and often completely wrong. Dr Genevera Allen from Rice University in Houston said that the increased use of such systems was contributing to a "crisis in science". She warned scientists that if they didn't improve their techniques they would be wasting both time and money.
  • Automatic Machine Learning is broken; On the debt that comes with maintaining and understanding complex models.
  • Charles River Analytics creates tool to help AI communicate effectively with humans; Charles River Analytics Inc., a developer of intelligent systems solutions, created the Causal Models to Explain Learning (CAMEL) approach under the Defense Advanced Research Projects Agency's (DARPA) Explainable Artificial Intelligence (XAI) effort. The goal of the CAMEL approach is to help artificial intelligence communicate effectively with human teammates.
  • Inside DARPA's effort to create explainable artificial intelligence; Among DARPA's many exciting projects is Explainable Artificial Intelligence (XAI), an initiative launched in 2016 aimed at solving one of the principal challenges of deep learning and neural networks, the subset of AI that is becoming increasingly prominent in many different sectors.
  • Boston University researchers develop framework to improve AI fairness; Experience in the past few years shows AI algorithms can manifest gender and racial bias, raising concern over their use in critical domains, such as deciding whose loan gets approved, who's qualified for a job, who gets to walk free and who stays in prison. New research by scientists at Boston University shows just how hard it is to evaluate fairness in AI algorithms and tries to establish a framework for detecting and mitigating problematic behavior in automated decisions. Titled "From Soft Classifiers to Hard Decisions: How fair can we be?", the research paper is being presented this week at the Association for Computing Machinery conference on Fairness, Accountability, and Transparency (ACM FAT*).

2018

  • Understanding Explainable AI; (Extracted from The Basis Technology Handbook for Integrating AI in Highly Regulated Industries) For the longest time, the public perception of AI has been linked to visions of the apocalypse: AI is Skynet, and we should be afraid of it. You can see that fear in the reactions to the Uber self-driving car tragedy. Despite the fact that people cause tens of thousands of automobile deaths per year, it strikes a nerve when even a single accident involves AI. This fear belies something very important about the technical infrastructure of the modern world: AI is already thoroughly baked in. That's not to say that there aren't reasons to get skittish about our increasing reliance on AI technology. The "black box" problem is one such justified reason for hesitation.

  • The Importance of Human Interpretable Machine Learning; This article is the first in my series of articles aimed at 'Explainable Artificial Intelligence (XAI)'. The field of Artificial Intelligence powered by Machine Learning and Deep Learning has gone through some phenomenal changes over the last decade. Starting off as just a pure academic and research-oriented domain, we have seen widespread industry adoption across diverse domains including retail, technology, healthcare, science and many more. Rather than just running lab experiments to publish a research paper, the key objective of data science and machine learning in the 21st century has changed to tackling and solving real-world problems, automating complex tasks and making our life easier and better. More often than not, the standard toolbox of machine learning, statistical or deep learning models remains the same. New models do come into existence, like Capsule Networks, but industry adoption of the same usually takes several years. Hence, in the industry, the main focus of data science or machine learning is more 'applied' than theoretical, and the effective application of these models to the right data to solve complex real-world problems is of paramount importance.

  • Uber Has Open-Sourced Autonomous Vehicle Visualization; With an open source version of its Autonomous Visualization System, Uber is hoping to create a standard visualization system for engineers to use in autonomous vehicle development.

  • Holy Grail of AI for Enterprise - Explainable AI (XAI); Saurabh Kaushik; Apart from addressing the above scenarios, XAI offers deeper business benefits, such as: improved AI model performance, as explanations help pinpoint issues in data and feature behavior; better decision making, as explanations provide added information and confidence for the man in the middle to act wisely and decisively; a sense of control, as an AI system owner clearly knows the levers for the system's behavior and boundary; a sense of safety, as each decision can be required to pass through safety guidelines, with alerts on violation; trust built with stakeholders, who can see through the reasoning of each and every decision made; monitoring for ethical issues and violations due to bias in training data; a better mechanism to comply with accountability requirements within the organization for auditing and other purposes; and better adherence to regulatory requirements (like GDPR), where a 'right to explanation' is a must-have for a system.

  • Artificial Intelligence Is Not A Technology; Kathleen Walch; Making intelligent machines is both the goal of AI as well as the underlying science behind understanding what it takes to make a machine intelligent. AI represents our desired outcome and many of the developments along the way of that understanding such as self-driving vehicles, image recognition technology, or natural language processing and generation are steps along the journey to AGI.

  • The Building Blocks of Interpretability; Chris Olah ...; Interpretability techniques are normally studied in isolation. We explore the powerful interfaces that arise when you combine them — and the rich structure of this combinatorial space

  • Why Machine Learning Interpretability Matters; Even though machine learning (ML) has been around for decades, it seems that in the last year, much of the news (notably in mainstream media) surrounding it has turned to interpretability - including ideas like trust, the ML black box, and fairness or ethics. Surely, if the topic is growing in popularity, that must mean it’s important. But why, exactly - and to whom?

  • IBM, Harvard develop tool to tackle black box problem in AI translation; seq2seq vis; Researchers at IBM and Harvard University have developed a new debugging tool to address this issue. Presented at the IEEE Conference on Visual Analytics Science and Technology in Berlin last week, the tool lets creators of deep learning applications visualize the decision-making an AI makes when translating a sequence of words from one language to another.

  • The Five Tribes of Machine Learning Explainers; Michał Łopuszyński; Lightning talk from PyData Berlin 2018

  • Beware Default Random Forest Importances; Terence Parr, Kerem Turgutlu, Christopher Csiszar, and Jeremy Howard; TL;DR: The scikit-learn Random Forest feature importance and R's default Random Forest feature importance strategies are biased. To get reliable results in Python, use permutation importance, provided here and in our rfpimp package (via pip). For R, use importance=T in the Random Forest constructor then type=1 in R's importance() function. In addition, your feature importance measures will only be reliable if your model is trained with suitable hyper-parameters.

  • A Case For Explainable AI & Machine Learning; A very nice list of possible use cases for XAI, for example: energy theft detection - different types of theft require different action by the investigators; credit scoring - the Fair Credit Reporting Act (FCRA) is a federal law that regulates credit reporting agencies and compels them to ensure the information they gather and distribute is a fair and accurate summary of a consumer's credit history; video threat detection - flagging an individual as a threat has the potential for significant legal implications.

  • Ethics of AI: A data scientist’s perspective; QuantumBlack

  • Explainable AI vs Explaining AI; Ahmad Haj Mosa; Some ideas that link tools for XAI with ideas from "Thinking, Fast and Slow".

  • Regulating Black-Box Medicine; Data drive modern medicine. And our tools to analyze those data are growing ever more powerful. As health data are collected in greater and greater amounts, sophisticated algorithms based on those data can drive medical innovation, improve the process of care, and increase efficiency. Those algorithms, however, vary widely in quality. Some are accurate and powerful, while others may be riddled with errors or based on faulty science. When an opaque algorithm recommends an insulin dose to a diabetic patient, how do we know that dose is correct? Patients, providers, and insurers face substantial difficulties in identifying high-quality algorithms; they lack both expertise and proprietary information. How should we ensure that medical algorithms are safe and effective?

  • 3 Signs of a Good AI Model; Troy Hiltbrand; Until recently, the success of an AI project was judged only by its outcomes for the company, but an emerging industry trend suggests another goal -- explainable artificial intelligence (XAI). The gravitation toward XAI stems from demand from consumers (and ultimately society) to better understand how AI decisions are made. Regulations, such as the General Data Protection Regulation (GDPR) in Europe, have increased the demand for more accountability when AI is used to make automated decisions, especially in cases where bias has a detrimental effect on individuals.

  • Rapid new advances are now underway in AI; Yet, as AI gets more widely deployed, the importance of having explainable models will increase. Simply, if systems are responsible for making a decision, there comes a step in the process whereby that decision has to be shown — communicating what the decision is, how it was made and – now – why did the AI do what it did.

  • Why We Need to Audit Algorithms; James Guszcza, Iyad Rahwan, Will Bible, Manuel Cebrian, Vic Katyal; Algorithmic decision-making and artificial intelligence (AI) hold enormous potential and are likely to be economic blockbusters, but we worry that the hype has led many people to overlook the serious problems of introducing algorithms into business and society. Indeed, we see many succumbing to what Microsoft's Kate Crawford calls "data fundamentalism" — the notion that massive datasets are repositories that yield reliable and objective truths, if only we can extract them using machine learning tools. A more nuanced view is needed. It is by now abundantly clear that, left unchecked, AI algorithms embedded in digital and social technologies can encode societal biases, accelerate the spread of rumors and disinformation, amplify echo chambers of public opinion, hijack our attention, and even impair our mental wellbeing.

  • Taking machine thinking out of the black box; Anne McGovern; Adaptable Interpretable Machine Learning project is redesigning machine learning models so humans can understand what computers are thinking.

  • Explainable AI won’t deliver. Here’s why; Cassie Kozyrkov; Interpretability: you do understand it but it doesn’t work well. Performance: you don’t understand it but it does work well. Why not have both?

  • We Need an FDA For Algorithms; Hannah Fry; Do we need to develop a brand-new intuition about how to interact with algorithms? What do you mean when you say that the best algorithms are the ones that take the human into account at every stage? What is the most dangerous algorithm?

  • Explainable AI, interactivity and HCI; Erik Stolterman Bergqvist; Develop AI systems that can technically explain their inner workings in a way that makes sense to people; approach XAI from a legal point of view (explainable AI is needed for practical reasons); or approach the topic from a more philosophical perspective and ask broader questions about how reasonable it is for humans to ask systems to explain their actions.

  • Why your firm must embrace explainable AI to get ahead of the hype and understand the business logic of AI; Maria Terekhova; If AI is to have true business-ready capabilities, it will only succeed if we can design the business logic behind it. That means business leaders who are steeped in business logic need to be front-and-center in the AI design and management processes.

  • Explainable AI : The margins of accountability; Yaroslav Kuflinski; How much can anyone trust a recommendation from an AI? Increasing the adoption of ethics in artificial intelligence

2017

  • Sent to Prison by a Software Program's Secret Algorithms; Adam Liptak, The New York Times; The report in Mr. Loomis's case was produced by a product called Compas, sold by Northpointe Inc. It included a series of bar charts that assessed the risk that Mr. Loomis would commit more crimes. The Compas report, a prosecutor told the trial judge, showed "a high risk of violence, high risk of recidivism, high pretrial risk." The judge agreed, telling Mr. Loomis that "you're identified, through the Compas assessment, as an individual who is a high risk to the community."
  • AI Could Resurrect a Racist Housing Policy, and why we need transparency to stop it; "The fact that we can't investigate the COMPAS algorithm is a problem".

2016

  • How We Analyzed the COMPAS Recidivism Algorithm; ProPublica investigation. Black defendants were often predicted to be at a higher risk of recidivism than they actually were. Our analysis found that black defendants who did not recidivate over a two-year period were nearly twice as likely to be misclassified as higher risk compared to their white counterparts (45 percent vs. 23 percent). The analysis also showed that even when controlling for prior crimes, future recidivism, age, and gender, black defendants were 45 percent more likely to be assigned higher risk scores than white defendants.

Thesis

2021

2020

2018

2017

2016

2015

2006

Other