
Understanding the Decodability of Scene Prediction Using Virtual Reality

Picture a future in which a city is destroyed by some calamity. How easy would it be to find one's home once the community has been turned into a vast maze of wreckage, with only a few landmarks left?

Future scenes predicted by human subjects during maze navigation were decoded from fMRI activity patterns. Image Credit: KyotoU/Global Comms/Risa Katayama

Now, scientists from Kyoto University have used partially observable mazes in virtual reality to show that subjects' predictions of upcoming scenes, and their positions within the maze, can be decoded from brain activity, along with the degree of confidence in those predictions.

An AI model based on human brain activity shows that the decoding accuracy of scene prediction depends on the subject's confidence in their prediction.

Risa Katayama, Study Lead Author, Kyoto University

In the proposed apocalyptic scenario, the subject passes through a sequence of scenes, comparing each predicted scene with the perceived one and then confirming or updating the prior prediction.

The team examined whether AI could decode the neural representations of each scene experienced by the subjects and, perhaps more interestingly, whether the associated self-confidence levels affected how well the predictions could be decoded.

Brain activity was measured with functional magnetic resonance imaging (fMRI) while subjects played a VR maze game. Although they had no knowledge of the ultimate goal, subjects appeared able to use their predictions and their memory of the map to estimate their locations in the maze and choose the right way to continue.
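To make the decoding idea more concrete, the sketch below shows, in Python, what a generic multivoxel pattern-decoding analysis of this kind might look like: voxel activity patterns are labeled with the scene the subject predicted, and decoding accuracy is compared between high- and low-confidence trials. The simulated data, the four-scene labeling, the binary confidence split, and the logistic-regression decoder are all illustrative assumptions, not the authors' actual pipeline.

```python
# Illustrative sketch only: a generic multivoxel pattern-decoding analysis.
# Data are simulated; in the study, features would be fMRI voxel activity
# patterns and labels would be the scene the subject predicted.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_voxels = 200, 500
scene_labels = rng.integers(0, 4, size=n_trials)   # hypothetical: 4 possible scenes
confidence = rng.integers(0, 2, size=n_trials)     # assumed binary: 0 = low, 1 = high

# Simulate voxel patterns whose scene information is stronger on high-confidence trials.
signal = np.zeros((n_trials, n_voxels))
for k in range(4):
    signal[scene_labels == k, k * 10:(k + 1) * 10] = 1.0
strength = np.where(confidence == 1, 0.8, 0.2)[:, None]
X = strength * signal + rng.normal(scale=1.0, size=(n_trials, n_voxels))

# Decode the predicted scene separately for each confidence level.
for level, name in [(1, "high confidence"), (0, "low confidence")]:
    mask = confidence == level
    clf = LogisticRegression(max_iter=1000)
    acc = cross_val_score(clf, X[mask], scene_labels[mask], cv=5).mean()
    print(f"{name}: decoding accuracy = {acc:.2f}")
```

Under these assumptions, the high-confidence trials carry a stronger scene signal, so the decoder recovers the predicted scene more accurately for them, mirroring the qualitative pattern the study reports.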

Our results suggest that when prediction confidence is high, subjects are able to imagine the scene clearly and predict quickly.

Risa Katayama, Study Lead Author, Kyoto University

The study could have broad implications for the developing field of metaverse research. Although the scene prediction here was based on door arrangements in a maze, the approach could lead to brain-machine interfaces that serve as communication tools across a variety of rich environments.

“Scene prediction can be generalized and lead to new applications such as control methods connecting human brains and AI for aerial and land vehicles,” says Katayama.

“We believe the intersection of the human mind and AI has interdisciplinary significance for further elucidation of the source of our self-consciousness,” the author concludes.

Journal Reference:

Katayama, R., et al. (2022) Confidence modulates the decodability of scene prediction during partially-observable maze exploration in humans. Communications Biology. https://doi.org/10.1038/s42003-022-03314-y.

Source: https://www.kyoto-u.ac.jp/en
