The Virtual Environment-based Adaptive System Helps Children with Autism to Enhance Social Functioning

According to estimates from the Centers for Disease Control and Prevention (CDC)'s Autism and Developmental Disabilities Monitoring (ADDM) Network, about 1 in 68 children in the United States has been identified with Autism Spectrum Disorder (ASD), a developmental disability that can cause significant social challenges, including difficulties communicating and interacting with others. In particular, children with ASD show impairment in understanding complex facial emotional expressions and are slow to process people's faces. As a result, they struggle to pick up on context when interacting with people, which can later lead to more severe communication problems.

Unfortunately, ASD remains difficult to diagnose and treat; there is currently no cure, only evidence that early intervention services can improve a child's development. These services refer to therapy that helps the child talk, walk, and interact with others. The real barrier preventing children with ASD from overcoming social interaction impairments, however, is the lack of access to therapy. The traditional intervention paradigm, which requires a professional therapist to sit next to the child, is out of reach for the vast majority of the ASD population: there are not enough trained therapists to assist all the children who need help, and even where therapists are available, the cost of intensive intervention is burdensome for most households with a child with ASD.

Technology can help children with ASD overcome social interaction impairments

There is good news, though. Recent advances in computer and robotic technology are producing innovative assistive technologies for ASD therapy. Among these emerging technologies, Virtual Reality (VR) is one of the most promising, given its potential to individualize autism therapy through technology-enabled therapeutic systems. Because children with ASD manifest social deficits that vary from one individual to another, it is essential to provide each of them with personalized therapy; a VR-based intervention system that keeps track of the child's mental state can fulfill this need for customization. Moreover, a number of studies indicate that many children with ASD respond favorably to advanced technology, which suggests that a new intervention paradigm such as VR can be well accepted by them.

Multimodal Adaptive Social Interaction in Virtual Environment

This week's research review covers a new VR-based intervention system: Multimodal Adaptive Social Interaction in Virtual Environment (MASI-VR) for children with ASD. The study presents the design, development, and a usability evaluation of the MASI-VR platform. The researchers first designed a multimodal VR-based social interaction platform that integrates eye gaze, EEG signals, and peripheral psychophysiological signals, and then demonstrated the usefulness of the system, particularly for an emotional face processing task. Through this review, we hope you get a sense of how a virtual environment-based system works as a whole to help improve overall social functioning in autism.

Synthesizing different aspects of a social interaction

The research team designed a VR system that incorporates various aspects of emotional social interaction. The system, in turn, aims to help children with ASD learn to process emotional faces properly.

Fig.1. System architecture of MASI-VR

The system mainly consists of three parts: a VR task engine with a dialog management module; a central supervisory controller; and peripheral interfaces that monitor eye gaze, EEG, and peripheral physiological signals to assess the subject's affective state. While the central controller synchronizes events between the other two parts, the subject undergoes various social tasks as physiological information is collected and analyzed in real time. These signals then serve as the primary determinant of the next stage within the virtual environment, making the whole process individualized.
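The paper does not publish its control logic, but the closed loop can be sketched roughly as follows. This is a minimal illustration in Python, where read_gaze, read_eeg_engagement, and the difficulty thresholds are all hypothetical stand-ins rather than the actual MASI-VR implementation:

```python
import random
import time

# Hypothetical sensor stubs standing in for the eye tracker, EEG amplifier,
# and peripheral physiology interfaces shown in Fig. 1.
def read_gaze():
    return {"x": random.random(), "y": random.random()}

def read_eeg_engagement():
    # A real system would derive an engagement index from EEG band power.
    return random.random()

def choose_next_stage(engagement, difficulty):
    # Core of the adaptive logic: raise task difficulty when the child is
    # engaged, lower it when engagement drops (thresholds are assumptions).
    if engagement > 0.6:
        return min(difficulty + 1, 5)
    if engagement < 0.3:
        return max(difficulty - 1, 1)
    return difficulty

def run_session(n_stages=10):
    difficulty = 1
    for stage in range(n_stages):
        gaze = read_gaze()
        engagement = read_eeg_engagement()
        difficulty = choose_next_stage(engagement, difficulty)
        print(f"stage {stage}: engagement={engagement:.2f} "
              f"-> difficulty {difficulty}, gaze=({gaze['x']:.2f}, {gaze['y']:.2f})")
        time.sleep(0.1)  # stand-in for one VR task stage

if __name__ == "__main__":
    run_session()
```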

Fig.2. Various emotion and gestural animations

To be more specific, seven teenage characters in total were presented in the virtual environment, and each can change its facial expression among seven emotions (enjoyment, surprise, contempt, sadness, fear, disgust, and anger) in line with the situational context. In the preset VR cafeteria environment, the subject wanders around the virtual space and meets one of the characters, who wishes to interact. The subject can then choose whether or not to start a conversation with the avatar. If the subject decides to communicate, various conversational dialog missions take place. After each session, a training trial begins in which the subject practices recognizing the character's emotional state by observing its facial expression. At the end of each dialog, the character's face is presented behind an oval occlusion. The occlusion gradually disappears following the subject's gaze, providing adaptive gaze feedback. This process encourages children with ASD to look at the critical parts of the face that determine emotional state, such as the areas around the eyes and mouth. If the subject pays enough attention to those parts, the face reveals its emotion, and the subject is asked to identify which emotion it was.
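The paper describes this gaze-contingent fading qualitatively; the snippet below is our own minimal sketch of such a mechanism in Python, where the region coordinates, fade step, and helper names are illustrative assumptions:

```python
# Gaze-contingent occlusion fading over a normalized face image, where the
# "critical regions" (eyes, mouth) are axis-aligned rectangles.
CRITICAL_REGIONS = {
    "left_eye":  (0.25, 0.35, 0.15, 0.10),  # x, y, width, height
    "right_eye": (0.60, 0.35, 0.15, 0.10),
    "mouth":     (0.35, 0.70, 0.30, 0.12),
}

def in_region(gx, gy, region):
    x, y, w, h = region
    return x <= gx <= x + w and y <= gy <= y + h

def update_opacity(opacity, gaze_samples, fade_step=0.02):
    """Lower the occlusion opacity while gaze stays on a critical region."""
    for gx, gy in gaze_samples:
        if any(in_region(gx, gy, r) for r in CRITICAL_REGIONS.values()):
            opacity = max(0.0, opacity - fade_step)
    return opacity

# Example: a fixation on the mouth gradually reveals the face.
opacity = 1.0
fixation = [(0.40, 0.74)] * 30  # 30 consecutive gaze samples on the mouth
print(f"opacity after fixation: {update_opacity(opacity, fixation):.2f}")  # 0.40
```

Once the opacity reaches zero, the full expression is visible and the child can be asked to name the emotion.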

Fig.3. The VR cafeteria environment for the social task

Effectiveness of MASI-VR in improving eventual social functioning

To assess the usability and effectiveness of the gaze-sensitive system, a nearly identical system, differing only in its lack of gaze feedback, was tested with a control group. The performance difference showed that the adaptive system was significantly more helpful in enhancing the subjects' engagement in the social task, as well as their accuracy in recognizing the characters' facial emotions. In other words, MASI-VR appears genuinely useful for training core deficit areas in children with ASD. Though the study is still at a preliminary stage, the findings suggest that a VR-based social interactive environment can be utilized to help improve the eventual social functioning of those with ASD.

LooxidVR monitors eye gaze and EEG in the virtual environment

Now that the effectiveness of multimodal adaptive social interaction in a virtual environment has been demonstrated for children with social communication disabilities, which device should be chosen to further enrich such studies and improve the quality of the therapy?


In the study, several different devices were used simultaneously, each monitoring a corresponding physiological signal from the subject. However, installing and setting up all of those devices is inconvenient; it would be best if the entire data set could be collected and analyzed with a single VR device. Though this may sound like a future dream, there is already a device that enables concurrent measurement of a person's eye gaze and EEG data in VR. LooxidVR, the world's first mobile VR headset to provide an interface for both the eyes and the brain, allows robust data acquisition through VR-compatible sensors that measure the user's brain activity and eye movements. Having recently won a Best of Innovation Award at CES 2018, Looxid Labs is ready to provide an integrated solution to those interested in exploring the user's mind. With LooxidVR, further development of personalized therapy to enhance the social functioning of children with ASD could become a reality.

LooxidVR pre-orders will start on Feb 1st, 2018. For more information, visit our website and do not miss the pre-order opportunity to enrich your current research and study.

Also, we periodically send out a newsletter on VR trends and VR research. Subscribe if you are interested in receiving it.


  1. Multimodal adaptive social interaction in virtual environment (MASI-VR) for children with Autism spectrum disorders (ASD) | Virtual Reality (VR), 2016 IEEE
  2. Autism Spectrum Disorder (ASD) | Centers for Disease Control and Prevention

Read More

Science Stuff that You Shouldn't Miss in The Big Bang Theory: The Yerkes–Dodson Law


A Little Anxiety Won’t Kill You, It Will Make You Stronger

Imagine the sound that irritates you most: nails on a chalkboard, a baby crying, or (for some people) Taylor Swift's music. While the feeling is frustrating, that stress may actually improve your performance. This idea is not new, though; it also showed up in the beloved sitcom The Big Bang Theory when Sheldon, a renowned physicist, tries to find his optimal anxiety level.

In The Big Bang Theory season 8, episode 13, Sheldon gets stuck in his work on dark matter and wants to make himself more efficient. He tries to optimize his work environment but sees no progress, and he concludes that he has made his environment too pleasant to work in. So instead of staying in his comfort zone, he decides to raise his anxiety level and seeks the help of his girlfriend Amy, who happens to be a neuroscientist.

Sheldon: According to a classic psychological experiment by Yerkes and Dodson, in order to maximize performance, one must create a state of productive anxiety.

They begin the experiment by first measuring the baseline of his brain activity and then basically "making Sheldon irritated" while he wears an EEG cap. For instance, while Sheldon is solving a maze, Amy makes squeaky noises by rubbing a balloon. Finding the sound intolerable, Sheldon ends up popping the balloon and says he was aiming for her heart. The experiment eventually fails, as Sheldon vetoes every suggestion Amy makes.

Amy: Look, I know you don’t like it, but that’s the point of the experiment. I need to irritate you to find your optimal anxiety zone. And you said no to tickling, polka music or watching me eat a banana.

At this point, one might wonder whether this experiment stands on solid ground. So we delved into the original experiment by Yerkes and Dodson, and the answer was yes: finding one's optimal anxiety level can indeed help increase productivity. The actual experiment, though, was done rather differently from Sheldon and Amy's.

Hebbian version of the Yerkes–Dodson law

Above all, the biggest difference is that their experiment was based on the behavior of rats rather than humans. Rats were placed in a maze with only one correct escape route, and whenever they took a wrong turn, for instance entering a box through a white door, they received an electric shock (brutal, right?). Yerkes and Dodson discovered that while increasing the voltage initially made the rats perform faster and better, beyond a certain point the rats began to slow down, freeze, or retreat. This shows how a certain level of stress can act as motivation and improve an individual's performance, though the optimal level varies between individuals. Likewise, measuring stress and anxiety levels can bring meaningful insights to research.
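As a toy illustration (not taken from the original 1908 paper), the inverted-U relationship between arousal and performance can be modeled with a simple peaked function; the Gaussian form and its parameters below are our assumptions:

```python
import numpy as np

# Inverted-U model of the Yerkes-Dodson law: performance peaks at a
# moderate arousal level and falls off on either side.
def performance(arousal, optimum=0.5, width=0.2):
    return np.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

for a in np.linspace(0.0, 1.0, 11):
    bar = "#" * int(40 * performance(a))
    print(f"arousal {a:.1f} | {bar}")
```

Running this prints a rough ASCII inverted U, with the tallest bar at moderate arousal.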

Why LooxidVR?

LooxidVR | CES2018 Best of Innovation in VR

Looxid Labs' LooxidVR proved its potential for psychology and neuroscience research at this year's CES. The VR headset, combined with EEG sensors and eye-tracking cameras, has the potential to become a major research kit offering both portability and efficacy. Instead of manually irritating Sheldon by rubbing balloons and eating bananas, Amy could simply have put Sheldon in a VR environment where he could be fully immersed in the experiment while his stress level was measured with the EEG sensors attached to the headset. So if you are a psychologist or a neuroscientist like Amy, consider enriching your experiments with this award-winning research kit.

LooxidVR pre-orders will start on Feb 1st, 2018. If you want to learn more about LooxidVR and Looxid Labs, feel free to visit our website.

Also, we periodically send out a newsletter on VR trends and VR research. Subscribe if you are interested in receiving it.

Read More

The combined use of VR and EEG: An effective tool for understanding everyday language comprehension

Source: SD Times

So far, we have introduced several case studies from neuroscience and psychology showing how research in education, marketing, healthcare, and gaming can be deepened by combining human physiological data, such as electroencephalography (EEG) and pupil information, with VR. It is not only about broadening the realm of brain-computer interfaces; the core value is enriching human life through a better understanding of physiological signals. From adaptive educational content that encourages students' full engagement, to neurofeedback therapy for patients with PTSD or ADHD, to the physiological data-driven personalized marketing widely used in the game industry, the applicability of neuroscience to education, marketing, medical science, and more has already been demonstrated. And this is not the end. Recently, the validity of combining EEG with VR to study language processing in naturalistic environments has been confirmed.

The importance of building contextually-rich realistic environments

As we all know intuitively from everyday communication, context plays a crucial role in language processing. Moreover, visual cues alongside auditory stimuli significantly help our brains process meaningful information during any kind of human-to-human interaction. Consequently, realistic models of language comprehension are needed to understand language processing in contextually rich environments. Nevertheless, researchers in this field have struggled to design their research environments; it is tricky to set up a naturalistic environment that resembles everyday life while still allowing enough control over both linguistic and non-linguistic information. Anyone who has tackled this issue should pay attention to this article, because the combination of VR and EEG could be the solution. This week's review covers "The combined use of virtual reality and EEG to study language processing in naturalistic environments." By combining VR and EEG, strictly controlled experiments in more naturalistic environments come within reach, offering a clearer understanding of how we process language.

VR enhances the realism of your experiment

To start with, why should VR be used to design your experiment? As well documented in many sources, a virtual environment is a space where people can have sensory experiences much like those in the real world, and where the users' every action can be tracked in real time. Accordingly, VR's fundamental strength is that it allows researchers to achieve an increased level of ecological validity while retaining full experimental control. EEG combined with VR therefore makes it possible to correlate physiological signals with every movement a person makes in the designed environment. The combination of the two has already been used successfully to study driving behavior, spatial navigation, spatial presence, and more.

Why not extend this methodology to the study of language processing? Some of you might doubt whether natural human behavior can be properly examined in a virtual environment. Since every line of conversation in VR comes from an artificial voice, it might be hard for people to become fully engaged in the interaction. In other words, there are skeptical views that human-computer interaction (HCI) and human-human interaction (HHI) differ, so that VR is only suitable for studying HCI. However, this turns out to be an unfounded worry. Heyselaar, Hagoort, and Segaert (2017) showed experimentally that the way people adapt their speech rate and pitch to an interlocutor does not differ between a virtual agent and a human. This strongly implies that observing language processing in a virtual environment is a plausible way to understand language processing in real life.

The N400 response can be observed in a VR setting

Johanne Tromp and colleagues conducted an experiment to validate the combined use of VR and EEG as a tool for studying the neurophysiological mechanisms of language processing and comprehension. They set out to demonstrate this validity by showing that the N400 response occurs similarly in a virtual environment. The N400 is an event-related potential (ERP) component that peaks around 400 ms after a critical stimulus; previous studies in traditional settings have found that incongruence between spoken and visual stimuli elicits an enhanced N400. The research team therefore set up situations containing mismatches between verbal and visual stimuli and analyzed the brainwave data for the N400 response.

In the experiment, a total of 25 people were placed in a virtual restaurant designed with Vizard, a virtual reality development platform, where eight tables stood in a row with a virtual guest sitting at each. The participants were moved from table to table following a preprogrammed procedure. The materials consisted of 80 objects and 96 sentences (80 experimental sentences and 16 fillers). All were relevant to the restaurant setting, but only half of the object-sentence pairs were semantically matched. For instance, if there is a salmon dish on the table and the virtual guest at the table says "I just ordered this salmon," the pair is matched; if the sentence paired with the salmon is "I just ordered this pasta," the pair is mismatched. Each participant experienced match and mismatch situations at equal rates and made 12 rounds through the restaurant over the entire experiment. At the end, they were asked two questions to assess whether they had paid attention during the trials and how they perceived the virtual agents.
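Given the numbers above, the balanced design is easy to reconstruct in spirit. Here is a sketch in Python, where the object names are placeholders rather than the study's actual stimuli:

```python
import random

# Build a balanced set of match/mismatch trials: each object appears once,
# half paired with a sentence about itself, half with a sentence about a
# different object.
objects = [f"object_{i:02d}" for i in range(80)]

def build_trials(objects, seed=42):
    rng = random.Random(seed)
    shuffled = objects[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    trials = [{"object": o, "sentence_about": o, "condition": "match"}
              for o in shuffled[:half]]
    for o in shuffled[half:]:
        other = rng.choice([x for x in objects if x != o])
        trials.append({"object": o, "sentence_about": other,
                       "condition": "mismatch"})
    rng.shuffle(trials)
    return trials

trials = build_trials(objects)
print(sum(t["condition"] == "match" for t in trials), "matched of", len(trials))
```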

Fig. 1: Screenshot of the virtual environment

EEG was recorded from 59 active electrodes throughout the experiment. Epochs from 100 ms before the onset of the critical noun to 1200 ms after it were selected, and ERPs were then calculated and analyzed per participant and condition in three time windows: the N400 window (350-600 ms), an earlier window (250-350 ms), and a later window (600-800 ms). Finally, repeated-measures analyses of variance (ANOVAs) were performed for the three predetermined time windows, with condition (match, mismatch), region (vertical midline, left anterior, right anterior, left posterior, right posterior), and electrode as factors.
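To make the analysis pipeline concrete, here is a minimal sketch of the windowed ERP comparison on synthetic data; the array shapes, sampling rate, and effect size are assumptions, not values from the paper:

```python
import numpy as np

FS = 500                 # assumed sampling rate (Hz)
T0 = int(0.1 * FS)       # sample index of noun onset (epochs start at -100 ms)
WINDOWS = {"early": (0.250, 0.350), "N400": (0.350, 0.600), "late": (0.600, 0.800)}

def mean_window_amplitude(epochs, window):
    """Mean amplitude over trials, channels, and samples in a time window."""
    start, stop = (T0 + int(t * FS) for t in window)
    return epochs[:, :, start:stop].mean()

# Synthetic epochs: (n_trials, n_channels, n_samples), spanning -100 to 1200 ms.
rng = np.random.default_rng(0)
match = rng.normal(0.0, 1.0, size=(40, 59, int(1.3 * FS)))
mismatch = rng.normal(-0.5, 1.0, size=(40, 59, int(1.3 * FS)))  # more negative

for name, win in WINDOWS.items():
    diff = mean_window_amplitude(mismatch, win) - mean_window_amplitude(match, win)
    print(f"{name} window: mismatch - match = {diff:.2f} (a.u.)")
```

In the real analysis, these per-window means would feed the repeated-measures ANOVAs described above.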

The results are shown in Fig. 2: ERPs were more negative for the mismatch condition than for the match condition in all time windows, and the difference was particularly significant during the N400 window. That is, the N400 response was observed in line with predictions, supporting the conviction that VR and EEG combined can be used to study language comprehension.

Fig. 2: Grand-average waveforms time-locked to the onset of the critical nouns in the match and mismatch conditions. The topographic plots display the voltage differences between the two conditions (mismatch minus match) in the three different time windows

Remaining problem: The use of two separate devices

Nevertheless, this study still has shortcomings stemming from the use of two separate devices, an EEG cap and a VR helmet, at the same time. Since the head-mounted display (HMD) must fit tightly around the user's head, wearing an EEG cap underneath is challenging and burdensome. Moreover, with an EEG cap that is sensitive to movement, it is hard to realize the full potential of a virtual environment in which people's dynamic interactions and actions should take place. In fact, this limitation is a real bottleneck that keeps such experiments far from a realistic setting.

Solution: An all-in-one device with VR-compatible sensors


Is there a silver bullet for this barrier? The problem described above can be fully solved with an all-in-one device equipped with VR-compatible sensors: LooxidVR. Having recently won a Best of Innovation Award at CES 2018, Looxid Labs has introduced a system that integrates two eye-tracking cameras and six EEG brainwave sensors into a phone-based VR headset. With LooxidVR, collecting and analyzing human physiological data while users interact with a fully immersive environment becomes possible.

LooxidVR pre-orders will start on Feb 1st, 2018. Visit our website and keep track of our latest news. Catch the pre-order opportunity and enrich your current research and study.

Also, we periodically send out a newsletter on VR trends and VR research. Subscribe if you are interested in receiving it.


  1. The combined use of virtual reality and EEG to study language processing in naturalistic environments | Behavior Research Methods
  2. Looxid Labs’ brain-monitoring VR headset could be invaluable for therapy | Engadget
  3. N400 | Scholarpedia

Read More