Physics can solve key challenges facing AI, study finds

These findings represent a bridge between physics, AI and neuroscience, and have the potential to advance on-the-spot decision-making in AI.

Artificial intelligence (photo credit: INGIMAGE)
The field of physics could provide a solution to some of the key challenges encountered in the artificial intelligence field, according to new research from Bar-Ilan University.
Some of the key challenges facing the AI field include estimating the size of the dataset a system requires, determining how many scenarios it needs to learn in advance, and enabling fast, on-the-spot decision-making. Tackling these challenges, however, may be possible through a central concept in physics known as power-law scaling.
As described in an article published last Thursday in the academic journal Scientific Reports, power-law scaling arises in a number of different phenomena, from the magnitudes of earthquakes to stock-market fluctuations to the frequency of word use in linguistics. It is this concept, originally formulated to describe how magnetism emerges as bulk iron cools, that could see application in the AI field, especially in deep learning.
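The hallmark of power-law scaling is that a quantity such as test error falls off as a power of another, e.g. error(n) = A·n^(−α), which appears as a straight line on a log-log plot. As a minimal sketch of this idea (the constants A and α below are invented for illustration, not taken from the study), one can generate such data and recover the exponent with a least-squares fit in log-log space:

```python
import math

# Hypothetical power-law decay of test error with dataset size n:
# error(n) = A * n^(-alpha). A and alpha are made-up demo values.
A, alpha = 0.9, 0.35
sizes = [10 ** k for k in range(2, 7)]            # 100 .. 1,000,000 examples
errors = [A * n ** (-alpha) for n in sizes]

# On a log-log scale a power law is a straight line, so a simple
# least-squares fit of log(error) against log(n) recovers -alpha.
xs = [math.log(n) for n in sizes]
ys = [math.log(e) for e in errors]
x_mean = sum(xs) / len(xs)
y_mean = sum(ys) / len(ys)
slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) \
        / sum((x - x_mean) ** 2 for x in xs)

print(f"fitted exponent: {-slope:.2f}")           # prints: fitted exponent: 0.35
```

In practice such fits are run on measured learning curves; a straight log-log line over several orders of magnitude is the usual signature that a power law is at work.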
"Test errors with online learning, where each example is trained only once, are in close agreement with state-of-the-art algorithms consisting of a very large number of epochs, where each example is trained many times. This result has an important implication on rapid decision making such as robotic control," the study's lead author, Prof. Ido Kanter of Bar-Ilan's Department of Physics and Gonda (Goldshmied) Multidisciplinary Brain Research Center, said in a statement.
"The power-law scaling, governing different dynamical rules and network architectures, enables the classification and hierarchy creation among the different examined classification or decision problems."
"One of the important ingredients of the advanced deep learning algorithm is the recent new bridge between experimental neuroscience and advanced artificial intelligence learning algorithms," said co-author and PhD student Shira Sardi.
"This accelerated brain-inspired mechanism enables building advanced deep learning algorithms which outperform existing ones," said co-author and PhD student Yuval Meir.
The researchers reached these conclusions following careful optimization and thorough simulations, building on earlier findings that increased training frequency significantly accelerates neuronal adaptation.
These findings represent a bridge between physics, AI and neuroscience, and have the potential to advance on-the-spot decision-making in AI.