In this hypothetical scenario, the subject moves through a sequence of scenes, comparing each scene prediction with the perceived scene and eventually confirming or updating the previous prediction.
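This compare-and-update loop can be thought of as a simple belief update over candidate locations. The following is a minimal, hypothetical Python sketch of that idea only; the maze encoding, the door-arrangement "scenes," and the matching rule are illustrative assumptions and are not details taken from the study.

```python
# Hypothetical sketch of the predict-compare-update loop described above.
# The maze encoding, scene representation, and update rule are illustrative
# assumptions, not the study's actual task or model.

def update_location_beliefs(beliefs, maze, observed_scene):
    """Keep only candidate locations whose predicted scene matches the observation,
    then renormalize; a mismatch forces the previous belief to be revised."""
    confirmed = {loc: p for loc, p in beliefs.items() if maze[loc] == observed_scene}
    total = sum(confirmed.values())
    return {loc: p / total for loc, p in confirmed.items()} if total else beliefs

# Example: four candidate locations, each characterized by a door arrangement.
maze = {"A": ("left", "right"), "B": ("right",), "C": ("left", "right"), "D": ("straight",)}
beliefs = {loc: 0.25 for loc in maze}                         # uncertain about location
beliefs = update_location_beliefs(beliefs, maze, ("right",))  # perceive a lone right door
print(beliefs)                                                # {'B': 1.0}
```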
The team examined whether AI could decode the neural representations of each scene prediction held by the subjects and, perhaps more interestingly, whether the associated confidence levels affect how well those predictions can be decoded.
Brain activity was measured with functional magnetic resonance imaging (fMRI) while subjects played a virtual-reality maze game. Although they had no knowledge of the ultimate goal, subjects appeared to use their predictions and their memory of the maze map to estimate their location and choose the correct way forward.
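To make the decoding question concrete, here is a minimal, hypothetical sketch of a confidence-split decoding analysis run on synthetic data. The voxel counts, labels, and the choice of a logistic-regression classifier are assumptions for illustration and do not reproduce the study's actual analysis pipeline.

```python
# Hypothetical confidence-split decoding sketch on synthetic data.
# All quantities below are made up for illustration, not taken from the study.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 50
X = rng.normal(size=(n_trials, n_voxels))       # fMRI activity patterns (synthetic)
y = rng.integers(0, 4, size=n_trials)           # predicted scene (4 door layouts)
confidence = rng.integers(0, 2, size=n_trials)  # 0 = low, 1 = high confidence

# Give high-confidence trials a stronger scene-specific signal, mimicking the idea
# that decodability increases with confidence.
X[confidence == 1] += np.eye(4)[y[confidence == 1]] @ rng.normal(size=(4, n_voxels))

for level, name in [(1, "high"), (0, "low")]:
    mask = confidence == level
    acc = cross_val_score(LogisticRegression(max_iter=1000), X[mask], y[mask], cv=5)
    print(f"{name}-confidence decoding accuracy: {acc.mean():.2f}")
```

The point of the sketch is only the comparison: if confidence improves the fidelity of the neural scene representation, decoding accuracy should be higher on high-confidence trials than on low-confidence ones.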
Our results suggest that when prediction confidence is high, subjects are able to imagine the scene clearly and predict quickly.
Risa Katayama, Study Lead Author, Kyoto University
The study could have broad implications for the emerging field of metaverse research. Although the scene prediction here was based on door arrangements in a maze, the work could lead to brain-machine interfaces that serve as communication tools across a variety of rich environments.
“Scene prediction can be generalized and lead to new applications such as control methods connecting human brains and AI for aerial and land vehicles,” says Katayama.
“We believe the intersection of the human mind and AI has interdisciplinary significance for further elucidation of the source of our self-consciousness,” the author concludes.
Journal Reference:
Katayama, R., et al. (2022) Confidence modulates the decodability of scene prediction during partially-observable maze exploration in humans. Communications Biology. doi.org/10.1038/s42003-022-03314-y.