David Heinemeier Hansson, the creator of Ruby on Rails, called Apple Card a sexist program.

"Apple's black box algorithm thinks I deserve 20x the credit limit [my wife] does," he tweeted.

The success of deep learning in the past decade has increased interest in the field of artificial intelligence.

AI models need to be ‘interpretable’ rather than just ‘explainable’

The increased attention to black-box machine learning has given rise to a body of research on explainable AI.

"Explanations are often not reliable, and can be misleading, as we discuss below," Rudin writes.


The first kind of black-box AI includes deep neural networks, the architecture used in deep learning algorithms.

Most mainstream media outlets covering AI research use the terms explainable AI and interpretable AI interchangeably.

But there's a fundamental difference between the two.


Interpretable AI systems are algorithms that provide a clear explanation of their decision-making processes.

Many machine learning algorithms are interpretable.
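As a concrete illustration (not from the article), here is a minimal sketch of one such interpretable model: a shallow decision tree whose learned rules can be printed and read directly. The dataset and tree depth are arbitrary choices for the example.

```python
# A minimal sketch of an interpretable model: a shallow decision tree
# trained on scikit-learn's toy iris dataset. The dataset and max_depth
# are illustrative choices, not from the article.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(iris.data, iris.target)

# export_text prints the learned rules as plain if/else thresholds,
# so every prediction can be traced by reading the tree.
print(export_text(tree, feature_names=list(iris.feature_names)))
```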

"Explanation models do not always attempt to mimic the calculations made by the original model," Rudin writes.

[Image: a decision tree]
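To see why an explanation model can diverge from the model it claims to explain, consider this hedged sketch of a "global surrogate": a simple model fitted to a black box's predictions, with its fidelity (agreement rate) measured afterward. The models, dataset, and metric here are illustrative assumptions, not anything taken from Rudin's paper.

```python
# A sketch of why post-hoc explanations can diverge from the model they
# explain: a simple "global surrogate" (logistic regression) is fitted
# to a black box's predictions, and its fidelity is measured.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# The "black box" we want to explain.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
bb_preds = black_box.predict(X)

# The surrogate is trained to imitate the black box, not the ground truth.
surrogate = LogisticRegression(max_iter=1000).fit(X, bb_preds)

# Fidelity: how often the surrogate agrees with the black box.
fidelity = np.mean(surrogate.predict(X) == bb_preds)
print(f"Surrogate agrees with the black box on {fidelity:.1%} of inputs")
# Anything short of 100% means the "explanation" describes a different
# function than the one actually making the decisions.
```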

This can lead to erroneous conclusions about black-box AI systems and explainability methods.

For instance, ProPublica's investigation into COMPAS, a black-box recidivism prediction system, concluded that the software was racially biased.

The problems of AI explanation techniques are also visible in saliency maps for computer vision systems.

[Image: a saliency map produced by RISE, an explainable AI technique]

Most of these techniques highlight which parts of an image led an image classifier to output a label.

Saliency-map explanations do not provide accurate representations of how black-box AI models work: they show where the model is looking, but not how it uses that information to reach a decision.
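For readers who want to see what such an explanation looks like in code, below is a minimal vanilla-gradient saliency sketch in PyTorch. This is an illustrative assumption (the image above shows RISE, a different, perturbation-based method): it highlights which pixels most affect the top class score, which is exactly the "where, not how" kind of evidence Rudin critiques.

```python
# A minimal vanilla-gradient saliency sketch (an assumption for
# illustration; not the RISE method shown in the article's image).
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in input
score = model(image).max()          # score of the top predicted class
score.backward()                    # gradient of that score w.r.t. pixels

# Saliency: per-pixel gradient magnitude, maxed over color channels.
# Large values mark pixels that most influence the prediction.
saliency = image.grad.abs().max(dim=1).values  # shape (1, 224, 224)
print(saliency.shape)
```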

Rudin warns that this kind of practice can mislead users into thinking the explanation is useful.

[Image: chart depicting the purported tradeoff between AI interpretability and accuracy]

"Poor explanations can make it very hard to troubleshoot a black box," she writes.

This trend is especially worrying in areas such as banking, health care, and criminal justice.

There's already a body of work and research on algorithmic bias and AI systems that discriminate against certain demographics.

[Image: a black-box AI explanation, shown as a saliency map for a husky image]

Rudin also refutes the argument that there is an inherent tradeoff between accuracy and interpretability.

This approach is already being embraced in other fields of software engineering.

There's no reason for the AI community not to support the same approach.

[Image: a feature-based explanation of a deep learning model]

An alternative is to require organizations that introduce black-box models to also report the accuracy of interpretable modeling methods.
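A hedged sketch of what that reporting practice could look like in code: train an interpretable baseline alongside the black box and report both test accuracies side by side. The dataset and model choices here are illustrative assumptions.

```python
# A sketch of the reporting practice described above: before deploying
# a black box, also train an interpretable baseline and report both
# accuracies. Dataset and models are illustrative choices.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

black_box = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
interpretable = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)

print(f"black box accuracy:     {black_box.score(X_te, y_te):.3f}")
print(f"interpretable accuracy: {interpretable.score(X_te, y_te):.3f}")
# If the gap is negligible, the interpretable model is the safer choice.
```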

You can read the original article here.
