ChatGPT's engine cooperates more than people do but also overestimates human collaboration, according to new research.

Scientists believe the study offers valuable clues about deploying AI in real-world applications.

The findings emerged from a famous game-theory problem: the prisoner's dilemma.

A prisoner’s dilemma shows AI’s path to human cooperation

There are numerous variations, but the thought experiment typically starts with the arrest of two gang members.

Each accomplice is then placed in a separate room for questioning.

During the interrogations, they receive an offer: snitch on your fellow prisoner and go free.


Over a series of moves, the players have to choose between mutual benefit or self-interest.

Typically, they prioritise collective gains.

Empirical studies consistently show that humans will cooperate to maximise their joint payoff even if they're total strangers.


But does the same cooperative instinct exist in the digital realm?

Self-preservation instincts in AI may pose societal challenges.

To find out, the researchers had GPT play the game with a human.

The first player would choose between a cooperative or selfish move.

The second player would then respond with their own choice of move.

Mutual cooperation would yield the optimal collective outcome.

But it could only be achieved if both players expected their decisions to be reciprocated.
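The structure described above can be sketched with a standard payoff matrix. The numbers below are the textbook illustrative values, not ones reported in the study:

```python
# A minimal sketch of one round of the prisoner's dilemma.
# Payoff values are illustrative textbook numbers, not the study's.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual cooperation: best joint outcome
    ("cooperate", "defect"):    (0, 5),  # the defector exploits the cooperator
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # mutual defection: worst joint outcome
}

def play_round(move_a, move_b):
    """Return the (player A, player B) payoffs for one round."""
    return PAYOFFS[(move_a, move_b)]

# Mutual cooperation maximises the joint payoff (3 + 3 = 6)...
assert sum(play_round("cooperate", "cooperate")) == 6
# ...but a lone defector earns more individually (5 > 3): that is the dilemma.
assert play_round("defect", "cooperate")[0] > play_round("cooperate", "cooperate")[0]
```

This is why cooperation only pays off if each player expects the other to reciprocate: defecting is individually tempting, yet mutual defection leaves both worse off.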

GPT apparently expects this more than we do.

Across the game, the model cooperated more than human players typically do.

Intriguingly, GPT was also overly optimistic about the selflessness of the human player.

The findings also point to LLM applications beyond just natural language processing tasks.

The researchers proffer two examples: urban traffic management and energy consumption.

LLMs in the real world

In cities plagued by congestion, motorists face their own prisoner's dilemma.

They could cooperate by driving considerately and using mutually beneficial routes.

According to Professor Kevin Bauer, the study's lead author, the impact could be tremendous.

The result could be fewer traffic jams, reduced commute times, and a more harmonious driving environment.

Bauer sees similar potential in energy usage.

The challenge is optimising consumption during peak hours.

To do this, Bauer recommends extensive transparency in the decision-making process and education about effective usage.

He also strongly advises close monitoring of the AI system's values.

These may be acquired during self-supervised learning, data curation, or human feedback to the model.

Sometimes, the results are concerning.

"This hyper-rationality underscores the imperative need for well-defined ethical guidelines and responsible AI deployment practices," Bauer said.

Story by Thomas Macaulay

Thomas is the managing editor of TNW.

He leads our coverage of European tech and oversees our talented team of writers.


Away from work, he enjoys playing chess (badly) and the guitar (even worse).
