With access to powerful cloud computing platforms, AI researchers have been able to train larger neural networks in shorter timespans.
But what you'll hear less about are the darker implications of the current direction of AI research.
According to an analysis by OpenAI, the amount of compute used in the largest AI training runs has been doubling every 3.4 months. This means that in seven years, the metric has grown by a factor of 300,000.

This need for vast compute resources imposes severe limits on AI research and can also have other, less savory repercussions.
Called OpenAI Five, the Dota 2-playing bot entered a major e-sports competition but lost to human players in the finals. After further training on a much larger compute budget, it went on to defeat the reigning world champions. There are many other examples like this, where an increase in compute resources has led to better results.

This is especially true in reinforcement learning, which is one of the hottest areas of AI research.
Chart: The computation costs of training AI models (source: OpenAI)
A FLOP is a floating-point operation.
A petaflop/s-day (pfs-day) is the equivalent of performing 10^15 operations per second for a full day, or a total of about 10^20 operations.
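Here is a quick sketch of that conversion in Python:

    # One petaflop/s sustained for a full day.
    ops_per_second = 1e15
    seconds_per_day = 24 * 60 * 60                # 86,400 seconds
    ops_per_pfs_day = ops_per_second * seconds_per_day
    print(f"{ops_per_pfs_day:.2e} operations")    # ~8.64e+19, i.e. roughly 10^20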

Based on the compute it consumed and current cloud computing prices, it would cost around $246,800 to $822,800 to train the AlphaGo Zero model.
And that is just the compute costs.
Other notable achievements in the field have similar costs.
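To see how such an estimate is put together, here is a rough sketch in Python. The roughly 1,860 petaflop/s-day figure for AlphaGo Zero is the one commonly cited from OpenAI's chart, and the per-pfs-day prices are illustrative assumptions chosen to be consistent with the range quoted above:

    # Back-of-the-envelope cost estimate for AlphaGo Zero.
    pfs_days = 1_860                          # commonly cited compute budget
    price_low, price_high = 133, 442          # assumed USD per pfs-day of cloud compute
    print(f"${pfs_days * price_low:,} - ${pfs_days * price_high:,}")
    # -> $247,380 - $822,120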

For instance, according to figures released by DeepMind, its StarCraft-playing AI consisted of 18 agents.
Each AI agent was trained with 16 Google TPU v3 units for 14 days.
This means that at current pricing rates, the company spent about $774,000 for the 18 AI agents.
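That figure can be roughly reproduced with back-of-the-envelope arithmetic, assuming Google Cloud's on-demand rate of about $8 per TPU v3 unit per hour (the exact rate is an assumption here):

    # Rough reproduction of the ~$774,000 figure above.
    agents = 18
    tpus_per_agent = 16
    days = 14
    hourly_rate = 8.0                         # assumed USD per TPU v3 unit per hour
    total_tpu_hours = agents * tpus_per_agent * days * 24
    print(f"${total_tpu_hours * hourly_rate:,.0f}")   # -> $774,144
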
Popular UK-based AI lab DeepMind owes its success to the vast resources of Google, its parent company.
Google acquired DeepMind in 2014 for $650 million, giving it much-needed financial and technical backing.
DeepMind also has £1.04 billion in debts due this year, including an £883 million loan from Alphabet.
OpenAI, another leading AI research lab, was running out of financial resources to support its research.
In 2019, Microsoft declared that it would invest $1 billion in the lab.
This trend of relying on deep-pocketed corporate backers threatens to commercialize AI research.
Those backers provide much-needed funding, but they also expect a return on their investment in the near future.
The pressure to deliver profitable results will push labs toward short-term, commercially viable projects. This will make their wealthy funders happy, but it will be to the detriment of AI research in general.
Google's famous BERT language model and OpenAI's GPT-2 have 340 million and 1.5 billion parameters, respectively.
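To get a sense of what those parameter counts mean in terms of resources, here is a minimal sketch of the memory needed just to store the weights, assuming standard 32-bit parameters (training requires several times more for gradients, optimizer state, and activations):

    # Memory footprint of the weights alone, at 4 bytes per parameter (fp32).
    bytes_per_param = 4
    for name, params in [("BERT-large", 340e6), ("GPT-2", 1.5e9)]:
        print(f"{name}: ~{params * bytes_per_param / 1e9:.1f} GB of weights")
    # -> BERT-large: ~1.4 GB, GPT-2: ~6.0 GB
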
Unfortunately, AI researchers seldom report or pay attention to these aspects of their work.
Being too infatuated with increasing compute resources can blind us to the search for new, more efficient AI techniques.
Unlike neural networks, symbolic AI does not scale by increasing compute resources and data.
It is also terrible at processing the messy, unstructured data of the real world.
But it is terrific at knowledge representation and reasoning, two areas where neural networks are sorely lacking.
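As a toy illustration (a minimal sketch, not drawn from any particular system), this is what explicit knowledge representation and rule-based reasoning can look like:

    # Knowledge is stored as explicit facts; new facts are derived by
    # applying a hand-written rule (simple forward chaining).
    facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

    def infer_grandparents(known):
        """If x is a parent of y and y is a parent of z, x is a grandparent of z."""
        derived = set()
        for rel1, x, y1 in known:
            for rel2, y2, z in known:
                if rel1 == rel2 == "parent" and y1 == y2:
                    derived.add(("grandparent", x, z))
        return derived

    print(infer_grandparents(facts))          # {('grandparent', 'alice', 'carol')}
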
Exploring hybrid AI approaches might open new pathways for creating more resource-efficient AI.
Unfortunately, the current excitement surrounding deep learning has marginalized these conversations.