How does the human brain process social cues in virtual reality? This is what a recent study published in eNeuro hopes to address, as a team of researchers from Sapienza University of Rome in Italy investigated how avatar appearance in virtual reality environments could influence human-AI interactions. This study holds the potential to help scientists, engineers, video game developers, and the public better understand how virtual reality experiences can be improved through visual cues within the virtual environment.
“One pending question in social neuroscience is whether interpersonal interactions are processed differently by the brain depending on the bodily characteristics of the interactor, i.e., their physical appearance,” the study notes.
For the study, the researchers initially enlisted 21 participants, though one was excluded due to technical difficulties, leaving a final sample of 20. Participants were not informed of the purpose of the study until after they completed all tasks. These tasks involved issuing simple keyboard commands when a virtual avatar, presented either with a full physical body or as just two moving dots, touched a bottle-shaped object in front of them. The researchers recorded the participants' brain activity using electroencephalography (EEG) to ascertain how they reacted to each virtual avatar, and they tracked the participants' reaction times as well.
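The article doesn't include the team's actual experiment code, but for readers curious how a reaction-time paradigm like this is commonly structured, here is a minimal, purely illustrative Python sketch. The condition names, trial count, and reaction-time values below are assumptions chosen for demonstration and do not come from the study itself.

```python
import random
import statistics

# Hypothetical sketch only: condition labels, trial counts, and the
# reaction-time distributions below are illustrative assumptions,
# not values reported in the eNeuro study.

CONDITIONS = ["full-body avatar", "two-dot avatar"]
N_TRIALS = 40  # assumed number of trials per condition


def simulate_reaction_time(condition: str) -> float:
    """Return a simulated keypress reaction time in milliseconds."""
    # Assumed baseline RTs; real values would come from the keypress log.
    base = 420.0 if condition == "full-body avatar" else 450.0
    return random.gauss(base, 35.0)


def run_block() -> dict[str, list[float]]:
    """Collect simulated reaction times for each avatar condition."""
    rts: dict[str, list[float]] = {c: [] for c in CONDITIONS}
    for _ in range(N_TRIALS):
        for condition in CONDITIONS:
            rts[condition].append(simulate_reaction_time(condition))
    return rts


if __name__ == "__main__":
    random.seed(1)
    results = run_block()
    for condition, values in results.items():
        print(f"{condition}: mean RT = {statistics.mean(values):.1f} ms "
              f"(sd = {statistics.stdev(values):.1f})")
```

In a real experiment, the simulated values would be replaced by timestamped keypresses synchronized with EEG event markers, so that brain activity and reaction times could be compared across the two avatar conditions.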
In the end, the researchers found, based on both the EEG data and the timing of the participants' responses, that reactions varied with the visual appearance of the virtual avatar.
The study notes, “Taken together, these findings broaden the understanding of how bodily appearance shapes the spatiotemporal processing of an interactor's movements. This holds particular relevance in our modern society, where human-artificial (virtual or robotic) agent interactions are rapidly becoming ubiquitous.”
What new discoveries about avatars and human-AI interactions will researchers make in the coming years and decades? Only time will tell, and this is why we science!
As always, keep doing science & keep looking up!
Sources: eNeuro, EurekAlert!