A system that can't be explained can't be trusted.

In the past few years, explainable artificial intelligence has become a growing field of interest.

Scientists and developers are deploying deep learning algorithms in sensitive fields such as medical imaging analysis and self-driving cars.

The advantages of self-explainable AI over interpretable AI

There is concern, however, about how these algorithms operate.

For instance, suppose a neural network has labeled the image of a skin mole as cancerous.

Researchers have developed various interpretability techniques that help investigate decisions made by machine learning algorithms.

[Image: example saliency map produced by RISE, an explainable AI technique]

What's wrong with current explainable AI methods?

Classic symbolic AI systems are based on rules manually created by developers.

In contrast, machine learning algorithms develop their behavior by comparing training examples and creating statistical models.


As a result, their decision-making logic is often ambiguous even to their developers.

Machine learning's interpretability problem is both well-known and well-researched.

Efforts in the field generally split into two categories: global explanations and local explanations.


For instance, they might produce saliency maps of the parts of an image that have contributed to a specific decision.
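One simple way to see how such a local explanation can be produced is occlusion-based saliency: mask out parts of the image and measure how much the model's score for the predicted class drops. The sketch below is a minimal illustration, not the RISE algorithm itself; the predict function (assumed to return class probabilities for a batch of images) is a placeholder interface.

```python
import numpy as np

def occlusion_saliency(image, predict, target_class, patch=16, stride=8):
    """Minimal occlusion-based saliency sketch.

    image: H x W x C float array.
    predict: assumed function mapping a batch of images to class probabilities.
    target_class: index of the class whose score we probe.
    """
    h, w, _ = image.shape
    base_score = predict(image[None])[0, target_class]
    saliency = np.zeros((h, w))
    counts = np.zeros((h, w))

    for top in range(0, h - patch + 1, stride):
        for left in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[top:top + patch, left:left + patch, :] = 0.0  # blank out one patch
            score = predict(occluded[None])[0, target_class]
            # A larger drop in score means the patch mattered more for the decision.
            saliency[top:top + patch, left:left + patch] += base_score - score
            counts[top:top + patch, left:left + patch] += 1

    return saliency / np.maximum(counts, 1)
```

Techniques such as RISE build on the same intuition but probe the model with many random masks rather than a single sliding patch; in both cases the explanation is derived from the model's behavior, not from its internals.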

Elton also challenges another popular belief about deep learning.

Many scientists believe that deep neural networks extract high-level features and rules from their underlying problem domain.

This is true, depending on what you mean by features.

There's a body of research showing that neural networks do in fact learn recurring patterns in images and other data types.

Some research is focused on developing interpretable AI models to replace current black boxes.

These models make their reasoning logic visible and transparent to developers.
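A decision tree is a classic example of such a transparent model: its learned rules can be printed and audited directly. The snippet below is only an illustration, using scikit-learn and one of its bundled datasets; it is not tied to any specific system discussed here.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

# Train a small, inherently interpretable model.
data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# The full decision logic can be inspected as a set of if/else rules.
print(export_text(tree, feature_names=list(data.feature_names)))
```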

Interestingly, although the human brain is a black box, we are able to trust each other.

What this means is that when it comes to understanding human decisions, we seldom investigate neuron activations.

One day, science might enable us to explain human decisions at the neuron activation level.

An explainable AI yields two pieces of information: its decision and the explanation of that decision.

This is an idea that has been proposed and explored before.

In the paper, Elton suggests how relevant causal information can be extracted from a neural network.

Neural networks tend to provide an output value for any input they receive.

Self-explainable AI models should send an alert when results fall outside the model's applicability domain, Elton says.

Self-explainable AI models should provide confidence levels for both their output and their explanation.
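Taken together, these requirements suggest a model wrapper that returns a prediction, an explanation, a confidence score, and an applicability-domain flag. The sketch below is hypothetical: embed and classify stand in for an existing feature extractor and classifier, and the domain check is a simple nearest-neighbor distance in feature space, just one of many possible choices.

```python
import numpy as np

class SelfExplainingWrapper:
    """Hypothetical sketch: prediction + explanation + confidence + domain check."""

    def __init__(self, embed, classify, train_embeddings, distance_threshold):
        self.embed = embed                        # assumed: input -> feature vector
        self.classify = classify                  # assumed: feature vector -> class probabilities
        self.train_embeddings = train_embeddings  # feature vectors of the training set
        self.distance_threshold = distance_threshold

    def predict(self, x):
        z = self.embed(x)
        probs = self.classify(z)
        prediction = int(np.argmax(probs))
        confidence = float(np.max(probs))

        # Crude applicability-domain check: distance to the nearest training example.
        distances = np.linalg.norm(self.train_embeddings - z, axis=1)
        in_domain = bool(distances.min() <= self.distance_threshold)

        return {
            "prediction": prediction,
            "confidence": confidence,
            "in_domain": in_domain,  # False => raise an alert; the output may be untrustworthy
            "explanation": {"nearest_training_distance": float(distances.min())},
        }
```

Note that raw softmax probabilities are often overconfident on out-of-distribution inputs, so in practice the confidence score would itself need calibration.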

An obvious example would be health care, where errors can result in irreparable damage to health.

You can read the original article here.
