This could be a breakthrough for artificial neural networks.
You know: robot brains.
So the scientists are using the equivalent of a math trick to make things simpler.

How's it work? The research is based on a theory of criticality in neural networks.
Basically, all the neurons in your brain exist in an equilibrium between chaos and order.
They don't all do the same thing, but they also aren't bouncing around randomly.
The researchers believe the brain operates in this balance in much the same way other state-transitioning systems do.
Water, for example, can change from gas to liquid to solid.
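The paper's actual model isn't reproduced here, but the idea of a balance point between order and chaos is often illustrated with a toy branching process: each firing neuron triggers, on average, some number of others (the "branching ratio"). Below is a minimal sketch along those lines; the function name, parameters, and offspring rule are all illustrative assumptions, not the researchers' method.

```python
import random

def avalanche_size(branching_ratio, max_steps=10_000, rng=None):
    """Simulate one neural 'avalanche' as a simple branching process.

    Each active neuron triggers a random number of others with mean
    `branching_ratio` (sigma). sigma < 1 is the ordered (subcritical)
    regime where activity quickly dies out; sigma > 1 is the chaotic
    (supercritical) regime; sigma = 1 is the critical balance point.
    This is a toy illustration, not the model from the paper.
    """
    rng = rng or random.Random(0)
    active, total = 1, 1
    for _ in range(max_steps):
        if active == 0:
            break
        # Each active unit makes 2 Bernoulli(sigma/2) activation
        # attempts, so the mean offspring count is exactly sigma.
        nxt = sum(1 for _ in range(active * 2)
                  if rng.random() < branching_ratio / 2)
        total += nxt
        active = nxt
    return total
```

Averaging many runs shows the balance at work: a branching ratio of 0.5 yields short-lived cascades (mean size around 2), while 0.9, closer to the critical point, yields much larger ones (mean size around 10).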
According to their research paper, their model is accurate to within a few percentage points.
On the conservative side of things, this could lead to much more robust deep learning solutions.
Our current neural networks are a pale attempt to imitate what nature does with ease.
This could, potentially, include stronger AI inference on diverse data and increased resilience against bias.
And other predictive systems could benefit as well, such as stock market prediction algorithms and financial tracking models.
It's possible this could even increase our ability to predict weather patterns over long periods of time.
Quick take: This is brilliant, but its actual usefulness remains to be seen.
You've got to start somewhere.