AI Makes Progress in Understanding and Explaining Tasks

One fundamental aspect of human communication continues to elude artificial intelligence (AI): the ability to perform a new task based purely on spoken or written instructions, and then to describe it to others so that they can reproduce it. A team at the University of Geneva (UNIGE) has succeeded in modeling an artificial neural network with this cognitive ability.

This AI learned and executed several basic tasks, then described them linguistically to a “sister” AI, which carried them out in turn. The findings are published in the journal Nature Neuroscience.

Carrying out a new task based solely on written or spoken directions, without any prior training, is a distinctly human ability. Furthermore, after mastering the task, humans can describe it to someone else so that that person can perform it in turn.

This dual ability sets humans apart from other animals, which need many attempts and positive or negative reinforcement signals to learn a new task, and which cannot convey what they have learned to their fellows.

Natural language processing, a branch of AI, seeks to replicate this human ability by building machines that can understand and respond to spoken or written language.

This approach is based on artificial neural networks, inspired by the biological neurons of the human brain, which transmit electrical signals to one another. However, the neuronal computations that would enable the cognitive feat described above are still unknown.

Currently, conversational agents using AI are capable of integrating linguistic information to produce text or an image. But, as far as we know, they are not yet capable of translating a verbal or written instruction into a sensorimotor action, and even less explaining it to another artificial intelligence so that it can reproduce it.

Alexandre Pouget, Full Professor, Department of Basic Neurosciences, Faculty of Medicine, University of Geneva

A Model Brain

The researcher and his team have succeeded in developing an artificial neuronal model with this dual capacity, albeit with prior training.

We started with an existing model of artificial neurons, S-Bert, which has 300 million neurons and is pre-trained to understand language. We ‘connected’ it to another, simpler network of a few thousand neurons.

Reidar Riveland, Study First Author, Department of Basic Neurosciences, Faculty of Medicine, University of Geneva
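To make the quoted architecture concrete, here is a minimal sketch, not the authors' published code: a pretrained sentence encoder from the S-Bert family embeds a written instruction, and a much smaller network maps that embedding, together with a sensory input, to an action. The checkpoint name, layer sizes, and variable names are illustrative assumptions.

```python
# Minimal sketch (not the published model): a pretrained sentence encoder
# "connected" to a small task network. Checkpoint name and sizes are assumptions.
import torch
import torch.nn as nn
from sentence_transformers import SentenceTransformer

# Pretrained S-Bert-style encoder; here it only embeds the instruction text.
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative checkpoint

class TaskNetwork(nn.Module):
    """Small network mapping (instruction embedding, sensory input) to actions."""
    def __init__(self, embed_dim: int, sensory_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim + sensory_dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, instruction: torch.Tensor, sensory: torch.Tensor) -> torch.Tensor:
        # Condition the action on both the instruction and the sensory input.
        return self.net(torch.cat([instruction, sensory], dim=-1))

# Embed a written instruction and pair it with a dummy sensory observation.
embedding = torch.tensor(encoder.encode("point to the brighter stimulus"))
sensory = torch.rand(2)                    # e.g., contrasts of two stimuli
model = TaskNetwork(embedding.shape[0], sensory_dim=2, n_actions=2)
logits = model(embedding, sensory)         # scores for pointing left vs. right
```

Freezing the language model and training only the small network would mirror the division of labor the quote describes: the 300-million-neuron encoder supplies language understanding, while the few-thousand-neuron network learns the sensorimotor mapping.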

During the initial phase of the investigation, the neuroscientists trained this network to simulate Wernicke's area, the region of the brain that enables language to be perceived and interpreted. In the second stage, the network was trained to replicate Broca's area, which, under the influence of Wernicke's area, is responsible for producing and articulating words.

Conventional laptop computers were used for the entire operation. The AI was then given written instructions in English.

For instance: pointing left or right to indicate where a stimulus is perceived; reacting in the opposite direction of a stimulus; or, in a more complex case, indicating which of two visual stimuli is brighter when the contrast difference between them is tiny. The model then simulated an intention to move, or in this case to point, and the scientists assessed its output.
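For readers who want the task logic spelled out, the three example instructions correspond to the plain decision rules below. This is purely illustrative: the point of the study is that the network learns these mappings from the written instructions rather than having them hard-coded.

```python
# Illustrative decision rules for the example tasks; in the study these
# mappings are learned from written instructions, not hard-coded.

def point_to_stimulus(side: str) -> str:
    """Point left or right, toward where the stimulus is perceived."""
    return side  # "left" or "right"

def anti_response(side: str) -> str:
    """React in the direction opposite to the stimulus."""
    return "right" if side == "left" else "left"

def pick_brighter(contrast_left: float, contrast_right: float) -> str:
    """Indicate which of two stimuli is brighter, even for tiny differences."""
    return "left" if contrast_left > contrast_right else "right"

print(anti_response("left"))       # -> right
print(pick_brighter(0.51, 0.50))   # -> left
```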

Once these tasks had been learned, the network was able to describe them to a second network - a copy of the first - so that it could reproduce them. To our knowledge, this is the first time that two AIs have been able to talk to each other in a purely linguistic way.

Alexandre Pouget, Full Professor, Department of Basic Neurosciences, Faculty of Medicine, University of Geneva
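As a toy illustration of that claim, the sketch below shows what purely linguistic transfer amounts to: one network emits a sentence describing a learned task, and an identical copy performs the task from that sentence alone. Both functions are hypothetical stand-ins for the trained production and comprehension pathways, not the study's actual method.

```python
# Toy illustration of linguistic task transfer between two identical networks.
# Both functions are hypothetical stand-ins for trained pathways.

INSTRUCTIONS = {
    "anti": "respond on the side opposite to the stimulus",
    "match": "respond on the same side as the stimulus",
}

def network_a_describe(task: str) -> str:
    """Stand-in for the Broca-like production pathway: task -> sentence."""
    return INSTRUCTIONS[task]

def network_b_execute(sentence: str, stimulus_side: str) -> str:
    """Stand-in for the Wernicke-like comprehension pathway: sentence -> action."""
    if "opposite" in sentence:
        return "right" if stimulus_side == "left" else "left"
    return stimulus_side

message = network_a_describe("anti")       # A describes the task in words...
print(network_b_execute(message, "left"))  # ...B performs it from words alone: right
```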

For Future Humanoids

This approach offers fresh perspectives on how language and behavior interact. It is especially promising for the robotics sector, where a central challenge is developing technologies that enable machines to communicate with one another.

Researchers concluded, “The network we have developed is very small. Nothing now stands in the way of developing, on this basis, much more complex networks that would be integrated into humanoid robots capable of understanding us but also of understanding each other.”

Journal Reference:

Riveland, R., & Pouget, A. (2024). Natural language instructions induce compositional generalization in networks of neurons. Nature Neuroscience. https://doi.org/10.1038/s41593-024-01607-5

Source: https://www.unige.ch/
