David Heinemeier Hansson, the creator of Ruby on Rails, called Apple Card a sexist program.
“Apple’s black box algorithm thinks I deserve 20x the credit limit [my wife] does,” he tweeted.
The success of deep learning in the past decade has increased interest in the field of artificial intelligence.

The increased attention to black-box machine learning has given rise to a body of research on explainable AI.
“Explanations are often not reliable, and can be misleading, as we discuss below,” Rudin writes.

One kind of black-box AI is the deep neural network, the architecture used in deep learning algorithms.
Most mainstream media outlets covering AI research use the terms “explainable AI” and “interpretable AI” interchangeably.
But there’s a fundamental difference between the two.

Interpretable AI refers to algorithms that give a clear explanation of their decision-making processes.
Many classic machine learning algorithms, such as linear regression and decision trees, are interpretable.
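
As a concrete example, a shallow decision tree is interpretable in this sense: its learned rules can be printed and audited directly. The sketch below is purely illustrative; the scikit-learn classifier and the bundled iris dataset are stand-ins chosen so the example is self-contained.

```python
# A minimal sketch of an interpretable model: a shallow decision tree
# whose decision rules can be read directly. Illustrative only; the
# classifier and dataset are placeholders, not from the article.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()

# Limiting the depth keeps the learned rule set small enough to audit.
tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(data.data, data.target)

# export_text prints the exact if/else rules the model applies, so the
# decision process is transparent rather than hidden in a black box.
print(export_text(tree, feature_names=list(data.feature_names)))
```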
Explainable AI, by contrast, typically works after the fact, using a second model to approximate a black box’s decisions. “Explanation models do not always attempt to mimic the calculations made by the original model,” Rudin writes.
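
To make the contrast concrete, here is a minimal sketch of such a post-hoc explanation model: a simple, readable model fitted to imitate a black box’s predictions. The setup is assumed for illustration (a random forest standing in for the black box, a logistic-regression surrogate, synthetic data); it is not from Rudin’s paper.

```python
# A post-hoc "surrogate" explanation, sketched: fit a readable model to a
# black box's outputs and offer its coefficients as the explanation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# The "black box" (an illustrative stand-in) and its predictions.
black_box = RandomForestClassifier(random_state=0).fit(X, y)
bb_preds = black_box.predict(X)

# The surrogate imitates the black box's outputs; it does not reproduce
# the black box's internal calculations.
surrogate = LogisticRegression(max_iter=1000).fit(X, bb_preds)

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == bb_preds).mean()
print(f"Surrogate agrees with the black box on {fidelity:.0%} of inputs")
print("Surrogate coefficients (the 'explanation'):", surrogate.coef_.round(2))
```

Whatever fraction of inputs the two models disagree on is exactly the room an explanation of this kind has to mislead.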

This can lead to erroneous conclusions about black-box AI systems and explainability methods.
For instance, ProPublica’s investigation into COMPAS, a black-box recidivism prediction system, concluded that the software was racially biased; Rudin argues that this conclusion was drawn from an approximate surrogate model and may not reflect how the system actually works.
The problems of AI explanation techniques are also visible in saliency maps for computer vision systems.

Most of these techniques highlight the parts of an image that led an image classifier to output a particular label.
But saliency-map explanations do not provide accurate representations of how black-box AI models work: they show where the model is looking, not what it does with that information.
Rudin warns that this kind of practice can mislead users into thinking the explanation is useful.
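
For the mechanics, here is a minimal sketch of one common saliency technique, a vanilla gradient map, written with PyTorch. The untrained resnet18 and random input are placeholders chosen so the example runs on its own; they are not the systems discussed in the article.

```python
# A vanilla gradient saliency map, sketched. The heatmap marks the pixels
# the class score is most sensitive to; it says nothing about how the
# model combines those pixels into a decision, which is Rudin's objection.
import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()  # untrained placeholder model
image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder input

# Gradient of the top class score with respect to the input pixels.
score = model(image).max()
score.backward()

# Collapse the color channels into a single heatmap per image.
saliency = image.grad.abs().max(dim=1).values
print(saliency.shape)  # torch.Size([1, 224, 224])
```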

“Poor explanations can make it very hard to troubleshoot a black box,” she writes.
This trend is especially worrying in areas such as banking, health care, and criminal justice.
There’s already a body of research on algorithmic bias and AI systems that discriminate against certain demographics.

Rudin also refutes the argument that there is an inherent tradeoff between a model’s accuracy and its interpretability.
Her alternative is to require organizations that introduce black-box models to also report the accuracy of interpretable modeling methods.
This kind of baseline reporting is an approach that is being embraced in other fields of software engineering, and there’s no reason for the AI community not to support the same approach.

You can read the original article here.