
Researchers Take Artificial Intelligence to a Whole New Level

Computer scientists at TU Wien (Vienna) have used neurobiology to enhance Artificial Intelligence. The new approach achieves astonishing results with remarkably little effort.

A vehicle is maneuvered into a parking space by a tiny neural network. (Image credit: TU Wien)

A naturally grown brain works quite differently from an ordinary computer program: it uses no code made of clear logical instructions, but is instead a network of cells that interact with one another.

Mimicking such neural networks on a computer can aid in solving issues that are very hard to break down into logical components.

In order to program such neural networks, TU Wien (Vienna) researchers, in association with scientists at Massachusetts Institute of Technology (MIT), have developed a method capable of modeling the time evolution of the nerve signals in an entirely different manner.

The approach was inspired by the roundworm C. elegans. Neural circuits from this creature's nervous system were replicated on the computer, and the model was then adapted with machine-learning algorithms.

In this way, remarkable tasks were solved with a very small number of simulated nerve cells – for instance, parking a car.

Although the worm-inspired network contains only 12 neurons, it can still be trained to maneuver a rover robot to a specified spot.

The work was recently presented at the TEDx conference by Ramin Hasani from the Institute of Computer Engineering at TU Wien.

“Neural networks have to be trained. You provide a specific input and adjust the connections between the neurons so that the desired output is delivered.”

Ramin Hasani, Co-author

For instance, the input might be a photograph, and the output could be the person’s name in the picture.
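As a rough illustration of what this training means (a minimal sketch in Python with NumPy, written for this article and not taken from the researchers' code), the connection weights of a tiny one-layer network are repeatedly adjusted until the inputs produce the desired outputs:

import numpy as np

rng = np.random.default_rng(0)

# Toy data: 4-dimensional inputs and a binary target output.
X = rng.normal(size=(100, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)    # the desired output

w = rng.normal(size=4)                        # connection weights
b = 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 0.5
for _ in range(200):
    out = sigmoid(X @ w + b)                  # current network output
    err = out - y                             # how far off we are
    w -= learning_rate * X.T @ err / len(X)   # adjust the connections
    b -= learning_rate * err.mean()

print("training accuracy:", ((sigmoid(X @ w + b) > 0.5) == y).mean())

Recognizing a face, of course, requires a far larger network, but the principle of adjusting connections so that inputs map to the desired outputs is the same.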

“Time usually does not play an important role in this process,” says Radu Grosu from the Institute of Computer Engineering of TU Wien.

For the majority of neural networks, all of the input is delivered at once, instantly leading to a certain output. However, things are quite different in nature.

For example, speech recognition is invariably dependent on time, as are concurrent translations or series of movements responding to a varying environment.

“Such tasks can be handled better using what we call RNN, or recurrent neural networks,” says Ramin Hasani. “This is an architecture that can capture sequences, because it makes neurons remember what happened previously.”
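The defining feature of a recurrent network is a hidden state that is fed back at every time step. The following minimal sketch (our own simplification, not the architecture from the paper) shows how that state carries a memory of earlier inputs through a sequence:

import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden = 3, 5

W_in = rng.normal(scale=0.5, size=(n_hidden, n_in))        # input connections
W_rec = rng.normal(scale=0.5, size=(n_hidden, n_hidden))   # recurrent connections

def rnn_step(h, x):
    # The new state depends on the current input *and* the previous state,
    # which is what lets the network "remember what happened previously".
    return np.tanh(W_in @ x + W_rec @ h)

h = np.zeros(n_hidden)
for x in rng.normal(size=(10, n_in)):   # a sequence of 10 inputs
    h = rnn_step(h, x)

print("final hidden state:", h)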

Together with his colleagues, Hasani proposed a novel RNN architecture based on a biophysical neuron and synapse model that enables time-varying dynamics.

"In a standard RNN-model, there is a constant link between neuron one and neuron two, defining how strongly the activity of neuron one influences the activity of neuron two,” says Ramin Hasani. “In our novel RNN architecture, this link is a nonlinear function of time.”

A Neuronal Circuit Policy for Parking a Mobile Rover

Allowing the links between cells and the cell activities to vary over time opens up new possibilities. Mathias Lechner, Ramin Hasani, and their colleagues showed theoretically that their new architecture can, in principle, approximate arbitrary dynamics.

To demonstrate the versatility of the new approach, the researchers devised and trained a small neural network.

“We re-purposed a neural circuit from the nervous system of the nematode C. elegans. It is responsible for generating a simple reflexive behavior: the touch-withdrawal. This neural network was simulated and trained to control real-life applications.”

Mathias Lechner, Co-author

The result is quite extraordinary: after suitable training, the small and simple network of just 12 neurons can solve difficult tasks. For example, it was trained to steer a vehicle into a parking space along a pre-defined path.

“The output of the neural network, which in nature would control the movement of nematode worms, is used in our case to steer and accelerate a vehicle. We theoretically and experimentally demonstrated that our novel neural networks can solve complex tasks in real-life and in simulated physical environments.”

Ramin Hasani, Co-author
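How exactly the network's outputs are translated into driving commands is not spelled out in the announcement; the sketch below is a purely hypothetical mapping of two output neurons to steering and acceleration, chosen for illustration only:

import numpy as np

def control_step(net_output, max_steer=0.5, max_accel=1.0):
    """Map a 2-element network output in [0, 1] to vehicle commands."""
    left, right = np.clip(net_output, 0.0, 1.0)
    steering = max_steer * (right - left)            # imbalance turns the rover
    acceleration = max_accel * (right + left) / 2.0  # total activity drives it forward
    return steering, acceleration

print(control_step(np.array([0.2, 0.8])))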

Another major benefit of the novel method is that it provides a better understanding of the inner dynamics of the neural network.

Earlier neural networks, which often comprise many thousands of nodes, were so complex that only the end results could be examined; gaining deeper insight into what happens inside them was hardly possible.

The smaller but highly capable network developed by the Vienna team, by contrast, is much easier to study, so researchers can at least partly understand which nerve cells cause which effects.

“This is a great advantage which encourages us to further research their properties,” says Hasani.

While this does not mean that artificial worms will be parking our cars anytime soon, it does show that artificial intelligence with a more brain-like architecture can be far more robust than previously believed.
