
New Study on Neural Networks has Implications for Improved AI Applications

At North Carolina State University, researchers have found when neural networks are taught physics, they can better adapt to chaos within their environment.

The Hamiltonian flow represented as a donut-like torus; rainbow colors code a fourth dimension. Image Credit: North Carolina State University.

The study has wide-ranging implications for optimized artificial intelligence (AI) applications, such as automated drone piloting and medical diagnostics.

Neural networks are a sophisticated kind of AI somewhat based on how the human brain works. In humans, electrical impulses are exchanged between natural neurons based on the strengths of their connections.

Artificial neural networks simulate this behavior by tweaking numerical weights and biases at the time of training sessions to reduce the difference between the desired and actual outputs.

For instance, a neural network can be trained to recognize photos of dogs by working through a huge number of photos, guessing whether each photo shows a dog, measuring how far off each guess is, and adjusting its weights and biases until its outputs are closer to reality.
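The guess-measure-adjust loop described above can be sketched in a few lines. This is an illustrative toy (logistic regression on made-up features, with hypothetical names like `lr` and `labels`), not the study's actual network, but it shows the core mechanic: nudging weights and a bias to shrink the gap between predicted and desired outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                    # toy "image features"
labels = (X[:, 0] + X[:, 1] > 0).astype(float)   # 1 = "dog", 0 = "not dog"

w = np.zeros(2)   # weights, start with no opinion
b = 0.0           # bias
lr = 0.5          # learning rate: how hard to nudge each step

for _ in range(200):
    z = X @ w + b
    pred = 1.0 / (1.0 + np.exp(-z))   # sigmoid: the network's "guess"
    err = pred - labels               # how far each guess is from reality
    w -= lr * X.T @ err / len(X)      # adjust weights to reduce the error
    b -= lr * err.mean()              # adjust bias the same way

accuracy = ((pred > 0.5) == labels).mean()
print(accuracy)
```

After a couple of hundred adjustment steps, the toy network classifies this separable data almost perfectly, which is all the training loop is meant to illustrate.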

However, such training is hampered by a drawback known as “chaos blindness”: an inability to respond to or predict chaos in a system. In short, traditional AI is blind to chaos.

Researchers at NC State’s Nonlinear Artificial Intelligence Laboratory (NAIL) have discovered that when a Hamiltonian function is incorporated into neural networks, they can better “see” chaos in a system and adapt accordingly.

The Hamiltonian embodies the complete information about a dynamic physical system: the total of its kinetic and potential energies. Visualize a pendulum swinging back and forth in space over time.

A snapshot of that pendulum cannot tell you where the pendulum is in its arc or where it will go next. Traditional neural networks work from such a snapshot. By contrast, a neural network that incorporates Hamiltonian flow perceives the pendulum’s motion as a whole: where it is, where it could or will be, and the energies involved as it moves.

As part of a proof-of-principle study, the NAIL team integrated Hamiltonian structure into neural networks. Then, they applied the networks to a familiar model of molecular and stellar dynamics termed the Hénon-Heiles model. The Hamiltonian neural network was able to predict the dynamics of the system accurately, even when it moved between chaos and order.
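The Hénon-Heiles system mentioned above has a standard Hamiltonian, H = ½(px² + py²) + ½(x² + y²) + x²y − y³/3. The sketch below (ordinary symplectic integration, not the study's neural network) shows the point the researchers exploit: the Hamiltonian alone determines the full motion, and an integrator that respects it keeps the total energy conserved even in the chaotic regime.

```python
import numpy as np

def henon_heiles_energy(x, y, px, py):
    """Hamiltonian: kinetic energy plus the cubic Henon-Heiles potential."""
    return 0.5 * (px**2 + py**2) + 0.5 * (x**2 + y**2) + x**2 * y - y**3 / 3.0

def accelerations(x, y):
    """Forces from the potential: -dV/dx and -dV/dy."""
    ax = -(x + 2.0 * x * y)
    ay = -(y + x**2 - y**2)
    return ax, ay

def leapfrog(x, y, px, py, dt=1e-3, steps=10_000):
    """Kick-drift-kick leapfrog: a symplectic scheme that respects H."""
    ax, ay = accelerations(x, y)
    for _ in range(steps):
        px += 0.5 * dt * ax          # half kick
        py += 0.5 * dt * ay
        x += dt * px                 # drift
        y += dt * py
        ax, ay = accelerations(x, y)
        px += 0.5 * dt * ax          # half kick
        py += 0.5 * dt * ay
    return x, y, px, py

# A bound orbit below the escape energy of 1/6
x0, y0, px0, py0 = 0.0, 0.1, 0.35, 0.0
E0 = henon_heiles_energy(x0, y0, px0, py0)
xf, yf, pxf, pyf = leapfrog(x0, y0, px0, py0)
Ef = henon_heiles_energy(xf, yf, pxf, pyf)
print(abs(Ef - E0))  # energy drift stays tiny because the scheme is symplectic
```

This is the classical baseline: the Hamiltonian neural network in the study learns, rather than is handed, this energy function, which is what lets it track the system as it moves between order and chaos.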

The Hamiltonian is really the ‘special sauce’ that gives neural networks the ability to learn order and chaos. With the Hamiltonian, the neural network understands underlying dynamics in a way that a conventional network cannot. This is a first step toward physics-savvy neural networks that could help us solve hard problems.

John Lindner, Professor of Physics, The College of Wooster

Lindner is also a visiting researcher at NAIL and the corresponding author of a paper that describes the study.

The study, published in Physical Review E, was partially supported by the Office of Naval Research (grant N00014-16-1-3066). The first author of the study is NC State postdoctoral researcher Anshul Choudhary. Bill Ditto, professor of physics at NC State University, is the director of NAIL.

Others who contributed to the study were visiting researcher Scott Miller; Sudeshna Sinha from the Indian Institute of Science Education and Research, Mohali; and Elliott Holliday, a graduate student at NC State.

Journal Reference:

Choudhary, A., et al. (2020) Physics-enhanced neural networks learn order and chaos. Physical Review E.

