This article features an interview with Lila Ibrahim, COO of Google DeepMind.

Ibrahim will be speaking at TNW Conference, which takes place on June 15 & 16 in Amsterdam.

If you want to experience the event (and say hi to our editorial team!), we’ve got something special for our loyal readers.

Use the promo code READ-TNW-25 and get a 25% discount on your business pass for TNW Conference.

See you in Amsterdam!

Inside Google DeepMind’s approach to AI safety

Lila Ibrahim

AI safety has become a mainstream concern.

Last month, a warning that artificial intelligence posed a risk of extinction attracted newspaper headlines around the world.

The warning came in a statement signed by more than 350 industry heavyweights.


It’s a visionary ambition that needs to remain grounded in reality, which is where Ibrahim comes in.

In 2018, Ibrahim was appointed as DeepMind’s first-ever COO.

She oversees business operations and growth, with a strong focus on building AI responsibly.

Similarly, we want to confirm we’re doing what we can to maximize the beneficial outcomes.

To uncover the building blocks of advanced AI, DeepMind adheres to the scientific method.

DeepMind uses various systems and processes to guide its research into the real world.

One example is an internal review committee.

To guide the company’s AI development, DeepMind has produced a series of clear, shared principles.

“They also codify our aim to prioritize widespread benefit,” says Ibrahim.

One of Ibrahim’s chief concerns involves representation.

The company also engages with a broad range of communities to understand tech’s impact on them.

The engagement has already delivered powerful results.

Scientists believe the work could dramatically accelerate drug development.

Determining the 3D structure of a protein used to take many months or years; it now takes seconds.

AlphaFolds success was guided by a diverse array of external experts.

In the initial phases of the work, DeepMind investigated a range of big questions.

How could AlphaFold accelerate biological research and applications?

What might be the unintended consequences?

And how could the progress be shared responsibly?

Their feedback guided DeepMind’s strategy for AlphaFold.

But the external experts recommended retaining these predictions in the release.

DeepMind followed their advice.

Their work is addressing major global challenges, from developing malaria vaccines to fighting plastic pollution.

Responsible AI also requires a diverse talent pool.

To expand the pipeline, DeepMind works with academia, community groups, and charities to support underrepresented communities.

The motivations aren’t solely altruistic.

Closing the skills gap will produce more talent for DeepMind and the wider tech sector.

As AlphaFold demonstrated, responsible AI can also accelerate scientific advances.

And amid growing public concerns and regulatory pressures, the business case is only getting stronger.

Story by Thomas Macaulay

Thomas is the managing editor of TNW.

He leads our coverage of European tech and oversees our talented team of writers.

Away from work, he enjoys playing chess (badly) and the guitar (even worse).
