A critical review of the EU's 'Ethics Guidelines for Trustworthy AI'

Europe has some of the most progressive, human-centric artificial intelligence governance policies in the world.

But that doesn't mean it's perfect.

Published in 2019, the document lays out bare-bones ethical concerns and best practices for AI development in the EU.

Human agency and oversight

The problem: Human-in-the-loop relies on competent humans at every level of the decision-making process to ensure fairness.

Unfortunately, studies show that humans are easily manipulated by machines.

We're also prone to ignoring warnings once they become routine.

Technical robustness and safety

AI systems need to be resilient and secure.

That is the only way to ensure that unintentional harm can also be minimized and prevented.

Neural's rating: needs work.

Without a definition of 'safe,' the whole statement is fluff.
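To show what a working definition could look like, here's one way to pin down 'resilient': a model's output shouldn't swing wildly when its inputs are nudged. This is a minimal Python sketch with a hypothetical model and arbitrary thresholds, not anything the guidelines actually specify.

```python
import random

def model(features):
    # Hypothetical stand-in for any scoring model: a fixed linear combination.
    return sum(w * x for w, x in zip([0.4, -0.2, 0.9], features))

def is_locally_stable(features, epsilon=0.01, tolerance=0.05, trials=100):
    """Return True if small random input perturbations barely move the output."""
    baseline = model(features)
    for _ in range(trials):
        noisy = [x + random.uniform(-epsilon, epsilon) for x in features]
        if abs(model(noisy) - baseline) > tolerance:
            return False
    return True

print(is_locally_stable([1.0, 2.0, 0.5]))  # True for this toy model
```

Whether epsilon should be 0.01 or 0.1, and who gets to decide, is exactly the kind of question the guidelines leave open.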

Privacy and data governance

Neural's rating: good, but could be better.

Luckily, the General Data Protection Regulation (GDPR) does most of the heavy lifting here.

However, the terms 'quality' and 'integrity' are highly subjective, as is the term 'legitimised access.'
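Concretely, any attempt to turn 'quality and integrity' into an actual check forces someone to pick thresholds. A hypothetical sketch; the dataset and every cutoff in it are invented for illustration:

```python
# Three toy records: one has a missing value, one is a duplicate.
records = [
    {"age": 34, "income": 52_000},
    {"age": None, "income": 48_000},  # missing value: a "quality" problem?
    {"age": 34, "income": 52_000},    # exact duplicate: an "integrity" problem?
]

MAX_NULL_RATE = 0.10  # arbitrary: why 10% and not 1%?
MAX_DUP_RATE = 0.05   # arbitrary: why 5%?

null_rate = sum(any(v is None for v in r.values()) for r in records) / len(records)
dup_rate = 1 - len({tuple(sorted(r.items())) for r in records}) / len(records)

print(f"null rate {null_rate:.0%}, duplicate rate {dup_rate:.0%}")
print("passes:", null_rate <= MAX_NULL_RATE and dup_rate <= MAX_DUP_RATE)
```

Two reasonable teams could pick different numbers and both claim compliance.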

Transparency

The data, system and AI business models should be transparent.

Traceability mechanisms can help achieve this.

Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned.
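To give the requirement its due, the traceability half is straightforward to implement. Here's a minimal sketch of such a mechanism, an append-only log of every prediction; the function names and file format are assumptions for illustration:

```python
import json
import time
import uuid

def log_prediction(inputs, output, model_version, path="audit_log.jsonl"):
    """Append one prediction record to a JSON-lines file for later audit."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["id"]

# Hypothetical usage: record a loan-scoring decision.
log_prediction({"income": 48_000, "age": 31}, "deny", model_version="v1.3.0")
```

Logging what went in and what came out is the easy part. Explaining why the model said 'deny' is not, and that's where this requirement runs into trouble.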

Neural's rating: this is hot garbage.

Only a small percentage of AI models lend themselves to transparency.

A given AI system might, for example, use billions of different parameters to produce a single output; no human can meaningfully trace how each one contributed.
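To put a number on that, here's a back-of-the-envelope count for a fully connected network with made-up layer widths; real production models are far larger:

```python
layer_sizes = [1024, 4096, 4096, 10]  # hypothetical layer widths

params = sum(
    n_in * n_out + n_out  # weight matrix plus bias vector for each layer
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:])
)
print(f"{params:,} parameters")  # 21,020,682
```

Even this toy has 21 million weights, every one of which touches every output. Condensing that into an explanation a stakeholder can act on remains an open problem.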

Diversity, non-discrimination and fairness

Neural's rating: poor.

Societal and environmental well-being

AI systems should benefit all human beings, including future generations.

It must hence be ensured that they are sustainable and environmentally friendly.

Neural's rating: great.

Accountability

Moreover, adequate and accessible redress should be ensured.

There's currently no political consensus as to who's responsible when AI goes wrong.

The employees following procedure based on an AI's flagging of a potential threat are just doing their jobs.

And the developers who trained the systems are typically beyond reproach once their models go into production.

The EU shouldn't allow AI developers to get away with deploying models that diffuse responsibility this way either.
