Google DeepMind has a new AI model that can direct robotic tasks it was never trained to perform.

Named RT-2, the model learns from web and robotics data.

It then turns this information into simple instructions for machines.

In other words, RT-2 can speak robot.

It also has an improved semantic and visual understanding of data that wasn't previously encountered in its robotic training.

The model was tested on various emergent robotic skills that are not present in the robotics training data and require knowledge transfer from web pre-training.

Notably, the model can use rudimentary reasoning to follow new user commands.

Impressively, it can even perform multi-stage semantic reasoning.

In another evaluation, the model was commanded to push a bottle of ketchup towards a blue cube.

There were several items in the scene, but the only one in the training dataset was the cube.

Nonetheless, RT-2 successfully pushed the ketchup towards the specified destination.

DeepMind has heralded RT-2 as a breakthrough in artificial intelligence.

The London lab says the model brings us closer to a future of helpful robots.

You can read the RT-2 study paper here.

Story by Thomas Macaulay

Thomas is the managing editor of TNW.

He leads our coverage of European tech and oversees our talented team of writers.

Away from work, he enjoys playing chess (badly) and the guitar (even worse).
