
Combination of Virtual Reality and Eye Tracking: Explore the Mind of Consumers


What product information do you think is most likely to induce a consumer's purchase behavior? If you were a promotion manager at a granola-selling company, how would you try to understand your potential consumers' purchase behavior and the psychology behind it?

Eye tracking: Keeping track of consumer attention

Source: bluekiteinsight.com

Eye tracking, the sensor technology that enables a device to measure exactly where and when people's eyes are focused, is known to provide a better understanding of consumers' visual attention. People tend to stare longer, and look more often, at objects they are interested in. In addition, their visual path reveals much about their cognitive flow. Therefore, carefully investigating consumers' visual logs (in other words, eye tracking data) can give significant insight to those who are looking for ways to promote sales of particular products. From the consumer's perspective, the general public might also enjoy a far better shopping experience through real-time recommendation systems based on their gaze information.
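To make this concrete, here is a minimal sketch of how raw gaze samples could be reduced to the attention metrics described above: total dwell time and number of separate looks per area of interest (AOI). The data layout and all names are illustrative assumptions, not the API of any particular eye tracker:

```python
# Minimal sketch: summarizing raw gaze samples into per-AOI attention metrics.
# The sample layout and AOI format are assumptions for illustration only.
from collections import defaultdict

def aoi_metrics(gaze_samples, aois):
    """gaze_samples: list of (t_seconds, x, y); aois: dict name -> (x0, y0, x1, y1)."""
    dwell = defaultdict(float)   # total time spent looking inside each AOI
    visits = defaultdict(int)    # number of separate entries into each AOI
    last_aoi = None
    for (t0, x, y), (t1, _, _) in zip(gaze_samples, gaze_samples[1:]):
        hit = next((name for name, (x0, y0, x1, y1) in aois.items()
                    if x0 <= x <= x1 and y0 <= y <= y1), None)
        if hit is not None:
            dwell[hit] += t1 - t0       # credit this sample interval to the AOI
            if hit != last_aoi:
                visits[hit] += 1        # the gaze just entered this AOI
        last_aoi = hit
    return {name: {"dwell_s": dwell[name], "visits": visits[name]} for name in aois}
```

Products with longer dwell times and more visits are natural candidates for higher consumer interest.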

But how can we track consumer attention in the real world?

Recall your own shopping experience. As you enter a supermarket and stand in front of a shelf stuffed with the product category you were looking for, you skim through several products and finally place one of them in your cart. As a matter of fact, the whole purchase decision happens within seconds. Consequently, it is highly important for retail researchers to investigate consumers' natural attentional process "in situ."

The majority of current research, however, even when analyzing eye tracking data, is undertaken in laboratory settings. The laboratory environment makes it easy to exercise experimental controls and investigate exactly what you want to know. On the other hand, maintaining experimental control inevitably lowers ecological validity. If ecological validity is poor, even a well-analyzed result may become worthless, as we cannot guarantee that similar effects would occur in the wild. This trade-off between control and ecological validity has long been a serious issue for researchers.

Virtual reality mobile eye tracking: A new research opportunity

Fortunately, the advent of virtual reality (VR) is extending this trade-off frontier. VR not only allows various levels of experimental control but also makes it possible to build shopping experiences that feel real. This sort of experimental environment puts researchers at a point where near-optimal experimental control and ecological validity can be achieved together. Therefore, with the help of VR, eye tracking technology can be used far more effectively to capture the user's visual attention with better reliability. This week's research, "Combining virtual reality and mobile eye tracking to provide a naturalistic experimental environment for shopper research," reviews how mobile eye tracking can be used in virtual reality and discusses the pros and cons of applying eye tracking technology across experimental environments. In particular, the research focuses on three kinds of environments: the conventional 2-D monitor-based setting, virtual reality, and the real environment. The paper also reports an experiment in a virtual reality setting to assess the validity of using mobile eye tracking in VR to study consumer behavior.

Fig. 1. Interacting in virtual reality

First of all, the paper sets up criteria and rates the relative strengths and weaknesses of the three experimental settings on each criterion. The resulting ratings, shown in the table below, can serve as a useful guideline for deciding which equipment to use and how to design eye tracking experiments. As the table shows, "desktop eye tracking", compared with "mobile eye tracking in the field", has a relative advantage on criteria concerned with experimental control ("Ease of creating/using non-existing stimuli", "Ease of controlling and randomizing treatment and extraneous factors", "Naturalness of the eye tracking task", "Ease of analyzing and reacting to respondent's attention and behavior in real time", "Ease of generating large sample sizes", "Ease of obtaining retailer permission to record", "Ease of data preparation", "Reliability of AOI coding", "Reproducibility of experimental setting"). In contrast, "mobile eye tracking in the field" is rated better than "desktop eye tracking" on the criteria concerning ecological validity ("Realism of stimulus display" and "Realism of interaction").

Table 1. Criteria for deciding which environment to use (eye tracking-specific criteria are highlighted in grey)

How about "mobile eye tracking in virtual reality"? Interestingly, it appears to be a compromise that combines the relative advantages of both sides ("desktop eye tracking" and "mobile eye tracking in the field"). "Mobile eye tracking in virtual reality" scores highly on almost every criterion where "desktop eye tracking" outperforms "mobile eye tracking in the field." What is more, unlike "desktop eye tracking," it also earns improved scores on "Realism of stimulus display" and "Realism of interaction." Although it still has to tackle cost-effectiveness and meet further technological requirements concerning realistic visualization and convincing presentation of the setting, mobile eye tracking in VR is anticipated to open up many new research opportunities.

Fig. 2. Trade-off between experimental control and ecological validity

Observing shopper behavior with eye tracking data in a virtual supermarket

Here is one of the new studies that adopted eye tracking in virtual reality in a new field: shopper research. To demonstrate how mobile eye tracking in virtual reality can help answer unresolved questions in retail research, the research team designed a virtual store to test whether additional product information can change consumers' final purchase decisions.

The virtual supermarket, designed to create a realistic shopping experience, contained several shelves filled with assortments of granola and baking-mixture products. The supermarket was presented in a virtual reality lab equipped with the front projection screen of a CAVE environment, and respondents went through the experiment wearing SMI eye tracking glasses. They underwent three successive stages. In the first stage, they had to choose their most preferred product out of 20 on the shelf. Then the same set of products reappeared with a red frame highlighting the initially chosen product. Soon after that, six other recommended products were highlighted with blue frames. Whenever a respondent gazed at a product for more than 200 ms, a pop-up bubble with additional information appeared right next to it. In the end, the subjects were asked whether they would stay with their initial product choice.

Fig. 3. Example scenes of the virtual supermarket
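The 200 ms gaze-dwell trigger described above is a simple mechanic to reproduce. Below is a minimal sketch assuming per-frame gaze hit-testing; the class, callback, and threshold names are illustrative, not the study's actual implementation:

```python
# Sketch of a dwell-time trigger: fire once the gaze has rested on one
# product for 200 ms, mirroring the pop-up bubble described above.
import time

DWELL_THRESHOLD_S = 0.2  # the 200 ms dwell criterion from the study

class DwellTrigger:
    def __init__(self, on_dwell):
        self.on_dwell = on_dwell  # callback, e.g. show the info pop-up
        self.current = None       # product id currently under the gaze
        self.since = 0.0          # when the gaze entered the current target
        self.fired = False

    def update(self, product_id, now=None):
        """Call once per frame with the product under the gaze (or None)."""
        now = time.monotonic() if now is None else now
        if product_id != self.current:            # gaze moved to a new target
            self.current, self.since, self.fired = product_id, now, False
        elif (product_id is not None and not self.fired
              and now - self.since >= DWELL_THRESHOLD_S):
            self.fired = True                     # fire only once per visit
            self.on_dwell(product_id)

# usage: trigger = DwellTrigger(show_popup); trigger.update(gazed_product) each frame
```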

The results showed that some subjects changed their preference across the stages. In other words, their decisions were affected by the additional information provided in VR, which implies that the virtual supermarket induced quite an active interaction between human and technology, and that such an experimental setting is helpful for testing and observing consumer responses.

Soon, as eye tracking technology is integrated into everyday electronic devices, far more innovation in retail research and in people's shopping experiences will become possible. For instance, a gaze-based assistant system that provides individualized recommendations based on a consumer's preferences might change expectations of what shopping should be in the future.

Try out your research with virtual reality and eye tracking

Although the paper focused mainly on shopping, such a gaze-based assistant system, one that reflects the user's real-time preferences in a virtual environment, can be widely used in any area where exploring people's minds matters. If you are still uncertain of its validity, check out the available technology that has successfully combined virtual reality with eye tracking and try it on whatever you want to investigate. A great deal of valuable but so far hidden information, such as consumers' complicated in-store decision processes or the interior design elements that significantly influence people's mood, would be within your reach.

If you are interested in using a brain and eye interface in virtual reality, visit our website www.looxidlabs.com for information on our newly released product, the world's first mobile VR headset with an interface for both the eyes and the brain.

LooxidVR

In addition, we send out a periodic newsletter on VR trends and VR research. Subscribe if you are interested in receiving it.

Reference

  1. Combining virtual reality and mobile eye tracking to provide a naturalistic experimental environment for shopper research | Journal of Business Research
  2. How Eye Tracking Works | Blue Kite Insight


Getting to Know Your Working Stress Level through EEG in Virtual Reality


How can a CEO know whether every team member is working at full capacity?
Are you sure your brain is not overloaded by your everyday working conditions?

Source: DEFACTO BLOG

Now, with the combined use of electroencephalography (EEG) and virtual reality (VR), you can find out the mental workload and stress levels of yourself and your co-workers. Adopting this combined tool can lead to far more efficient operational decisions and a fair distribution of workload and responsibility among workers.

What affects job performance?

In fact, a multitude of variables, from workplace culture to the size of work equipment, hinder our ability to assess any situation thoroughly, which ultimately affects job performance. In particular, fatigue and stress are critical human factors that should not be taken lightly. EHS Today reported that about half of US workers suffer from fatigue, and the complaint is not limited to the US: the global workforce reports tiredness too. The most critical problem with stress at work is that excessive workload, and the stress that accompanies it, can lead directly to safety issues or even severe injuries. In other words, tiredness affects our judgment and may put our health at risk; the perceived mental stress and workload of workers should therefore be continuously monitored and evaluated to protect them from industrial accidents. Though this may sound costly for managers to keep an eye on, it is imperative: managing workers' stress levels contributes to their overall work effectiveness.

Bio-signals can help you check your mental workload

Yet how can we assess one's workload? There are three main types of workload assessment methodologies: subjective measures, performance measures, and physiological measures. Conventionally, researchers had to rely on subjective assessment, in which workers determine and rate for themselves how mentally overloaded they are. Indeed, several versions of the Subjective Workload Assessment Technique (SWAT) have been developed. Nonetheless, such methods suffer from a fundamental weakness: they are not sensitive enough to catch subtle mental workloads, which, if accumulated, can in turn lead to chronic fatigue. Performance measures, which record performance scores and use them as an indicator of task demand or difficulty, are far more objective, but they are hard to use widely because of their intrusiveness in various work settings.

Sensors | Using Psychophysiological Sensors to Assess Mental Workload During Web Browsing

The best and most straightforward way to diagnose our physical state is to look at bio-signals. Apart from conventional methods, such as statistical analysis of special events or keeping track of workers' complaints, physiological information can be used to evaluate human factors. Among bio-signals, electroencephalography (EEG) is well known for its high temporal resolution, its ability to continuously monitor brain stress with adequate accuracy, and, most importantly, its usefulness for recognizing human emotion, stress, and vigilance. That is, EEG can be used to monitor the mental workload, emotion, and stress of workers while they perform any task. Still, some of you might worry about how to collect EEG signals in a real working environment, which is hard to reproduce physically. However, now that virtual reality (VR) technology is fairly mature, simulating your working conditions in a virtual environment is no longer a problem.
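As a concrete illustration, one widely used EEG workload proxy is the ratio of frontal theta power to parietal alpha power. The sketch below shows that generic computation; it is an assumption offered for illustration, not the method of the paper reviewed next:

```python
# Generic EEG workload proxy: frontal theta / parietal alpha band-power ratio.
# This is a common heuristic from the workload literature, not the paper's method.
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, lo, hi):
    """Average power of `signal` in the [lo, hi] Hz band via Welch's method."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)
    band = (freqs >= lo) & (freqs <= hi)
    return np.trapz(psd[band], freqs[band])

def workload_index(frontal_eeg, parietal_eeg, fs=256):
    """Higher theta (4-8 Hz) relative to alpha (8-13 Hz) often tracks workload."""
    theta = band_power(frontal_eeg, fs, 4.0, 8.0)
    alpha = band_power(parietal_eeg, fs, 8.0, 13.0)
    return theta / alpha
```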

Measuring the mental workload and stress of crew members with EEG in a virtual environment

This week's research review illustrates the measurement of mental workload through EEG in a virtually simulated environment: "EEG-based Mental Workload and Stress Recognition of Crew Members in Maritime Virtual Simulator: A Case Study." The research team focused on the maritime industry, where human factors are considered a leading cause of accidents, contributing to nearly 96% of all maritime accidents. Even though the industry has achieved notable improvements in ship equipment and overall systems, human factors have not received enough attention in raising the overall safety level. Therefore, the research aimed to study the causes and effects of crew members' human errors by monitoring the mental workload, emotion, and stress levels of maritime trainees.

Fig.1. Simulator at SMA

More specifically, to study the relationship between maritime trainees' mental workload, stress levels, and task performance, the research team ran the experiment with four maritime trainees forming a crew. Consisting of an officer on watch (OOW), a steersman, a captain, and a pilot, each assigned duties corresponding to those of a real crew member, the crew had to navigate the vessel to its destination within SMA's Integrated Simulation Centre (ISC), where a highly realistic environment was simulated. During the voyage, each subject's emotion (positive, neutral, negative), workload (none, minimal, moderate, high), and stress (low, medium low, moderate low, medium, medium high, moderate high, high, very high) were observed and analyzed after the experiment.

Fig.2. OOW, captain, and pilot in the simulator during the experiment

The analysis produced the following results. The OOW, who constantly had to maintain watch-keeping, was in the most negative emotional state; the captain, who had to give out orders to the crew and carried the greatest responsibility, showed the highest workload; and the captain and the pilot, who had relatively higher responsibility than the OOW and steersman, recorded higher stress levels as well.

Though the experiment is still at a preliminary stage of studying human factors, the success in monitoring emotion, mental workload, and stress implies that the proposed approach can be applied far beyond the maritime domain. EEG-based human factors evaluation tools can be used in any industry where multiple people work together. It is also anticipated that such a mechanism can broaden research on human-machine interaction.

LooxidVR: The All-in-one device with VR compatible EEG sensor and eye tracking camera

LooxidVR

Then what should be the next step? To achieve a more accurate measurement of human factors in a far more immersive environment, the data-collecting sensors and the simulated environment should be integrated as closely as possible. LooxidVR, winner of a CES 2018 Best of Innovation Award, provides robust acquisition of the user's brain activity and even eye movements in a VR environment. Made by Looxid Labs, LooxidVR is the world's first mobile VR headset to provide an interface for both the eyes and the brain. Looxid Labs is ready to provide this integrated solution to anyone interested in exploring users' minds. It will be especially helpful for researchers interested in recognizing diverse user states such as stress, mental workload, and preference.

LooxidVR pre-orders began on February 1st. For more information, visit our website www.looxidlabs.com and do not miss the pre-order opportunity to enrich your current research and study.

Also, we send out a periodic newsletter on VR trends and VR research. Subscribe if you are interested in receiving it.

Reference

  1. EEG-based Mental Workload and Stress Recognition of Crew Members in Maritime Virtual Simulator: A Case Study | http://ieeexplore.ieee.org/document/8120300/
  2. Human Factors In Safety: How do stress and fatigue affect work? | https://www.pro-sapien.com/blog/2017/10/human-factors-safety-how-stress-fatigue-affect-work/
  3. Workload Assessment | https://www.ergonomicsblog.uk/workload-assessment/



The Virtual Environment-based Adaptive System Helps Children with Autism to Enhance Social Functioning

According to estimates from the Autism and Developmental Disabilities Monitoring (ADDM) Network of the CDC (Centers for Disease Control and Prevention), about 1 in 68 children has Autism Spectrum Disorder (ASD), a developmental disability that can cause significant social problems, including difficulties communicating and interacting with others. Specifically, children with ASD show impairment in understanding complex facial emotional expressions and are slow at processing people's faces. In other words, they can hardly get a sense of context when interacting with people, which may later cause more severe problems in communication.

Unfortunately, little is known about the diagnosis and treatment of ASD; there is currently no cure, only evidence that early intervention services can improve a child's development. These services refer to therapy that helps the child talk, walk, and interact with others. However, the real barrier that keeps children with ASD from overcoming social interaction impairments is the limited accessibility of therapy. The traditional intervention paradigm, which requires a professional therapist to sit next to the child, is out of reach for the vast majority of the ASD population. There are not enough trained therapists to assist the many children in need of help, and even where therapists are accessible, most households with a child with ASD can hardly afford the high intervention costs.

Technology can help children with ASD overcome social interaction disabilities

There is good news, though. Recent advances in computer and robotic technology are introducing innovative assistive technologies for ASD therapy. Among these emerging technologies, virtual reality (VR) is the most promising, given its potential to individualize autism therapy through technology-enabled therapeutic systems. Because children with ASD manifest varying social deficits from one individual to another, it is essential to provide proper help to each of them through personalized therapy; a VR-based intervention system that keeps track of the child's mental state can fulfill this need for customization. Moreover, a number of studies have indicated that many children with ASD favor advanced technology, which suggests that a new intervention paradigm such as VR can, and should, be well adopted by children with ASD.

Multimodal Adaptive Social Interaction in Virtual Environment

To the point, this week's research review covers a new VR-based intervention system: Multimodal Adaptive Social Interaction in Virtual Environment (MASI-VR) for children with ASD. The study presents the design, development, and a usability study of the MASI-VR platform. It first aims to design a multimodal VR-based social interaction platform that integrates eye gaze, EEG signals, and peripheral psychophysiological signals. The research team then demonstrates the usefulness of the system, particularly for an emotional face-processing task. Through this review, we hope you get a sense of how a virtual environment-based system works as a whole to help improve overall social functioning in autism.

Synthesizing different aspects of a social interaction

The research team designed a VR system that incorporates various aspects of emotional social interaction. The system aims to help children with ASD learn proper processing of emotional faces.

Fig.1. System architecture of MASI-VR

It mainly consists of three parts: the VR task engine and dialog management module; the central supervisory controller; and the peripheral interfaces that monitor eye gaze, EEG, and peripheral physiological signals to assess the subject's affective state. As the central controller synchronizes events between the other two parts, the subject undergoes various social tasks while his or her physiological information is collected and analyzed in real time. These signals then serve as the primary determinant of the next stage within the virtual environment, making the whole process individualized.
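In pseudocode, the closed loop this architecture describes might look like the sketch below. The function names, state fields, and thresholds are illustrative assumptions, not MASI-VR's actual API:

```python
# Illustrative closed loop: the supervisory controller reads the latest
# affective estimate (fused from gaze, EEG, and peripheral signals) and
# adapts the difficulty of the next social-task stage accordingly.

def supervisory_loop(get_affective_state, run_stage, stages):
    level = 0                                      # start at the easiest level
    for stage in stages:
        run_stage(stage, difficulty=level)
        state = get_affective_state()              # e.g. {"engagement": 0.8, "anxiety": 0.2}
        if state["engagement"] > 0.7 and state["anxiety"] < 0.3:
            level = min(level + 1, 2)              # comfortable: step difficulty up
        elif state["anxiety"] > 0.7:
            level = max(level - 1, 0)              # stressed: back off
```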

Fig.2. Various emotion and gestural animations

More specifically, a total of seven teenage characters were presented in the virtual environment, and they could change their facial expressions among seven emotions (enjoyment, surprise, contempt, sadness, fear, disgust, and anger) in line with the situational context. In the pre-set VR cafeteria environment, the subject wanders around the virtual space and meets one of the characters, who wishes to interact. The subject can then choose whether or not to start a conversation with the avatar. If the subject decides to communicate, various conversational dialog missions take place. After each session, a training trial begins in which the subject practices recognizing the character's emotional state by observing its facial expression. At the end of each dialog, the character's face is presented with an oval occlusion. The occlusion gradually disappears following the subject's gaze, providing adaptive gaze feedback. This process encourages children with ASD to look at the critical parts of the face that determine emotional state, such as the areas around the eyes and mouth. If the subject pays enough attention to those parts, the face reveals the emotion, and the subject is asked to identify what the emotion was.

Fig.3. The VR cafeteria environment for the social task
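The adaptive gaze feedback reduces to a small per-frame update rule: thin the occlusion mask only while the gaze rests on the emotion-relevant regions. A minimal sketch, with an assumed fade rate and an assumed hit-test helper (both illustrative):

```python
# Sketch of the gaze-contingent occlusion fade: the oval mask over the
# avatar's face thins only while the child fixates the eyes/mouth regions.
FADE_PER_FIXATED_SECOND = 0.5   # assumed fade rate; tune per protocol

def update_occlusion(opacity, gaze_point, roi_hit_test, dt):
    """Return the new mask opacity after `dt` seconds of the current gaze sample."""
    if roi_hit_test(gaze_point):                 # gaze on an emotion-relevant region?
        opacity -= FADE_PER_FIXATED_SECOND * dt  # reward attention: thin the mask
    return max(opacity, 0.0)                     # 0.0 means the face is fully revealed
```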

Effectiveness of MASI-VR in improving eventual social functioning

To prove the usability and effectiveness of the gaze-sensitive system, a nearly identical system, only without gaze feedback, was tested on a control group. The performance difference showed that the adaptive system was significantly more helpful in enhancing the subjects' engagement with the social task as well as their accuracy in recognizing the characters' facial emotions. In other words, MASI-VR is considerably useful for training the core deficit areas of children with ASD. Though the study is still at a preliminary stage, the findings suggest that a VR-based social interactive environment can help improve the eventual social functioning of those with ASD.

LooxidVR monitors eye gaze and EEG in the virtual environment

Now that the effectiveness of Multimodal Adaptive Social Interaction in Virtual Environment for children with social communication disabilities has been demonstrated, which device should be chosen to further enrich the study and improve the quality of the therapy?

LooxidVR

In the study, several different devices were used simultaneously to monitor the corresponding physiological signals from the subject. However, installing and setting up all of those devices is inconvenient; it would be best if all the data could be collected and analyzed with a single VR device. Though it may sound like a distant dream, such a device exists, enabling concurrent measurement of a person's eye gaze and EEG data in VR. LooxidVR, the world's first mobile VR headset to provide an interface for both the eyes and the brain, allows robust data acquisition through VR-compatible sensors that measure the user's brain activity and eye movements. Having recently won a Best of Innovation Award at CES 2018, Looxid Labs is ready to provide this integrated solution to anyone interested in exploring users' minds. With LooxidVR, further development of personalized therapy for children with ASD to enhance social functioning can come true.

LooxidVR pre-orders will start on Feb 1st, 2018. For more information, visit our website www.looxidlabs.com and do not miss the pre-order opportunity to enrich your current research and study.

Also, we send out a periodic newsletter on VR trends and VR research. Subscribe if you are interested in receiving it.

Reference

  1. Multimodal adaptive social interaction in virtual environment (MASI-VR) for children with Autism spectrum disorders (ASD) | Virtual Reality (VR), 2016 IEEE
  2. Autism Spectrum Disorder (ASD) | Centers for Disease Control and Prevention


Science Stuff that You Shouldn't Miss in The Big Bang Theory: The Yerkes–Dodson Law


A Little Anxiety Won’t Kill You, It Will Make You Stronger

Imagine the sound that irritates you the most: nails on a chalkboard, a baby crying, or (for some people) Taylor Swift music. While it may be frustrating, your stressful feelings may actually improve your performance. This idea is not new, though; it also showed up in the beloved sitcom The Big Bang Theory when Sheldon, a renowned physicist, tries to find his optimal anxiety level.

In season 8, episode 13 of The Big Bang Theory, Sheldon gets stuck in his dark matter research and wants to make himself more efficient. He tries to optimize his work environment but sees no progress and concludes that he has created too pleasant an environment to work in. So instead of staying in his comfort zone, he decides he should increase his anxiety level and seeks the help of his girlfriend Amy, who happens to be a neuroscientist.

Sheldon: According to a classic psychological experiment by Yerkes and Dodson, in order to maximize performance, one must create a state of productive anxiety.

They begin the experiment by first measuring the baseline of his brain activity and then basically 'making Sheldon irritated' while he wears an EEG cap. For instance, while Sheldon is solving a maze, Amy makes squeaky noises by rubbing a balloon. Finding the sound intolerable, Sheldon ends up popping the balloon and says he was aiming for her heart. The experiment eventually fails as Sheldon vetoes every suggestion Amy makes.

Amy: Look, I know you don’t like it, but that’s the point of the experiment. I need to irritate you to find your optimal anxiety zone. And you said no to tickling, polka music or watching me eat a banana.

At this point, one might wonder whether this experiment has solid grounding. So we delved into the experiment done by Yerkes and Dodson, and the answer was: YES! Finding one's optimal anxiety level is indeed useful for increasing work productivity. The actual experiment was done somewhat differently than Sheldon and Amy's, though.

Hebbian version of the Yerkes–Dodson law

Above all, the biggest difference was that their experiment was based on the behavior of rats rather than humans. In the experiment, rats were put in a maze with only one correct escape route, and whenever they took a wrong route, for instance entering a box through a white door, they received electric shocks (brutal, right?). In the end, Yerkes and Dodson discovered that while increasing the voltage made the rats perform faster and better, beyond a certain point the rats started to slow down, freeze, or retreat. This showed how a certain level of stress can act as motivation and increase an individual's performance, though the optimal level varies across individuals. Likewise, measuring stress and anxiety levels can bring meaningful insights to research.
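The finding is usually summarized as an inverted-U relationship between arousal and performance. As a purely illustrative toy model (not Yerkes and Dodson's actual formula), performance can be pictured as a curve peaking at an individual-specific optimum:

```python
# Toy inverted-U model of the Yerkes-Dodson law (illustrative only).
import numpy as np

def performance(arousal, optimum=0.5, width=0.2):
    """Performance peaks at an individual-specific optimal arousal level."""
    return np.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

# performance(0.5) -> 1.0 (the peak); performance(0.9) is far lower: too much stress.
```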

Why LooxidVR?

LooxidVR | CES2018 Best of Innovation in VR

Looxid Labs' LooxidVR proved its potential for psychology and neuroscience research at this year's CES. The VR headset, combined with EEG sensors and eye-tracking cameras, has the potential to become a major research kit that delivers both portability and efficacy. Instead of manually irritating Sheldon by rubbing balloons and eating bananas, Amy could simply have put Sheldon in a VR environment where he would be fully immersed in the experiment and his stress level could be measured with the EEG sensors attached to the headset. So if you are a psychologist or a neuroscientist like Amy, consider enriching your experiments with this award-winning research kit.

LooxidVR pre-orders will start on Feb 1st, 2018. If you want to learn more about LooxidVR and Looxid Labs, feel free to visit our website at www.looxidlabs.com.

Also, we send out a periodic newsletter on VR trends and VR research. Subscribe if you are interested in receiving it.


The combined use of VR and EEG: An effective tool to understand daily language comprehension

Source: SD Times

So far, we have introduced several case studies from neuroscience and psychology research articles on how to deepen research in education, marketing, healthcare, and gaming by combining human physiological data, such as electroencephalography (EEG) and pupil information, with VR. This is not only about broadening the realm of brain-computer interfaces; the core value is enriching human life through a better understanding of physiological signals. From adaptive educational content that encourages students' full engagement, to neurofeedback therapy for patients with PTSD or ADHD, to the physiological data-driven personalized marketing widely used in the game industry, the applicability of neuroscience to education, marketing, medical science, and more has already been demonstrated. But this is not the end. Recently, the validity of combining EEG with VR for studying language processing in naturalistic environments has been confirmed.

The importance of building contextually-rich realistic environments

As we all know intuitively from everyday communication, context plays a crucial role in language processing. Moreover, visual cues alongside auditory stimuli significantly help our brains process meaningful information during any kind of human-to-human interaction. Consequently, realistic models of language comprehension should be developed to understand language processing in contextually rich environments. Nevertheless, researchers in this field have struggled to design their research environments; it is tricky to set up a naturalistic environment that resembles everyday settings while retaining enough control over both linguistic and non-linguistic information, however contextually rich the situation. Anyone who has wrestled with this issue should pay attention to this article, because the combination of VR and EEG could be the solution. This week's review covers "The combined use of virtual reality and EEG to study language processing in naturalistic environments." By combining VR and EEG, strictly controlled experiments in a more naturalistic environment would be comfortably within your reach, yielding an explicit understanding of how we process language.

VR to enhance reality level in your experiment

To start with, why should VR be used to design your experiment? As defined in many sources, a virtual environment is a space where people can have sensory experiences much like those in the real world, and where the user's every action can be tracked in real time. Accordingly, VR's fundamental strength is that it allows researchers to achieve an increased level of validity while simultaneously retaining full experimental control. EEG combined with VR therefore makes it possible to correlate humans' physiological signals with their every movement in the designed environment. Indeed, this combination has been used to study driving behavior, spatial navigation, spatial presence, and more.

Why not extend this methodology further into studying language processing? Some of you might doubt whether humans' natural behavior can be examined properly in a virtual environment. Since every line of conversation in VR comes from an artificial voice, it might be hard for people to become fully engaged in the interaction. In other words, there are skeptical views that Human-Computer Interaction (HCI) and Human-Human Interaction (HHI) differ, so that VR is suitable only for studying HCI. However, this turns out to be an unnecessary worry. Heyselaar, Hagoort, and Segaert (2017) showed experimentally that people adapt their speech rate and pitch to an interlocutor in the same way whether the interlocutor is virtual or human. This implies that observing language processing in a virtual environment is a plausible way to understand language processing in real life.

The N400 response can be observed in a VR setting

Johanne Tromp and colleagues conducted an experiment to validate the combined use of VR and EEG as a tool for studying the neurophysiological mechanisms of language processing and comprehension. They set out to prove its validity by showing that the N400 response occurs similarly in a virtual environment. The N400 is an event-related potential (ERP) component that peaks around 400 ms after a critical stimulus; previous studies in traditional settings have found that incongruence between spoken and visual stimuli produces an enhanced N400. Therefore, the research team set up situations containing mismatches between verbal and visual stimuli and analyzed brainwaves to observe the N400 response.

In the experiment, a total of 25 people were placed in a virtual restaurant created with the VR software Vizard, in which eight tables stood in a row with a virtual guest seated at each. The participants moved from table to table following a preprogrammed procedure. The materials consisted of 80 objects and 96 sentences (80 experimental sentences and 16 fillers). Both were relevant to the restaurant setting, but only half of the object-sentence pairs were semantically matched. For instance, if there is a salmon dish on the table and the virtual guest at that table says "I just ordered this salmon," the pair is matched. If the sentence paired with the salmon is instead "I just ordered this pasta," the two are mismatched. Each participant went through an equal proportion of match and mismatch situations and made 12 rounds through the restaurant over the whole experiment. At the end, participants answered two questions assessing whether they had paid attention during the trial and how they perceived the virtual agents.

Fig. 1: Screenshot of the virtual environment

EEG was recorded from 59 active electrodes throughout the experiment. Epochs from 100 ms before the onset of the critical noun to 1200 ms after it were selected, and ERPs were calculated and analyzed per participant and condition in three time windows: the N400 window (350–600 ms), an earlier window (250–350 ms), and a later window (600–800 ms). Finally, repeated-measures analyses of variance (ANOVAs) were performed for the three predetermined time windows, with condition (match, mismatch), region (vertical midline, left anterior, right anterior, left posterior, right posterior), and electrode as factors.
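For readers curious about the mechanics, here is a minimal numpy sketch of the epoching and per-window averaging described above, for a single electrode. It illustrates the general ERP method only, not the authors' actual pipeline:

```python
# Sketch of ERP epoching and time-window averaging for one electrode.
# Illustrative of the general method; not the authors' actual pipeline.
import numpy as np

WINDOWS = {"early": (0.25, 0.35), "n400": (0.35, 0.60), "late": (0.60, 0.80)}  # seconds

def window_means(eeg, onsets, fs=500, pre=0.1, post=1.2):
    """eeg: 1-D signal; onsets: critical-noun times in seconds for one condition."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = np.stack([eeg[int(t * fs) - n_pre : int(t * fs) + n_post] for t in onsets])
    epochs = epochs - epochs[:, :n_pre].mean(axis=1, keepdims=True)  # baseline-correct
    erp = epochs.mean(axis=0)                                        # average over trials
    return {name: erp[n_pre + int(lo * fs) : n_pre + int(hi * fs)].mean()
            for name, (lo, hi) in WINDOWS.items()}

# The mismatch-minus-match difference of the "n400" means, per participant and
# region, is what the repeated-measures ANOVA then tests.
```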

The results are shown in Fig. 2: ERPs were more negative for the mismatch condition than for the match condition in all time windows, and the difference was particularly significant during the N400 window. That is to say, the N400 response was observed in line with predictions, supporting the conclusion that VR and EEG combined can be used to study language comprehension.

Fig. 2: Grand-average waveforms time-locked to the onset of the critical nouns in the match and mismatch conditions. The topographic plots display the voltage differences between the two conditions (mismatch — match) in the three different time windows

Remaining problem: The use of two separate devices

Nevertheless, this study still has a shortcoming stemming from the use of two separate devices, the EEG cap and the VR helmet, at the same time. Since a head-mounted display (HMD) must fit tightly around the user's head, wearing an EEG cap underneath is challenging and burdensome. Moreover, with an EEG cap that is sensitive to movement, it is hard to realize the full potential of a virtual environment, where people's dynamic interactions and actions should be allowed. In fact, this limitation is a real bottleneck that keeps such experiments far from a realistic setting.

Solution: The All-in-one device with VR compatible sensor

LooxidVR

Is there a silver bullet for this barrier? Here it is: the all-in-one device fully equipped with VR-compatible sensors, LooxidVR. Having recently won a Best of Innovation Award at CES 2018, Looxid Labs has introduced a system that integrates two eye-tracking cameras and six EEG brainwave sensors into a phone-based VR headset. With LooxidVR, collecting and analyzing human physiological data while users interact with a fully immersive environment becomes possible.

LooxidVR pre-orders will start on Feb 1st, 2018. Visit our website www.looxidlabs.com and keep track of our latest news. Catch the pre-order opportunity and enrich your current research and study.

Also, we send out a periodic newsletter on VR trends and VR research. Subscribe if you are interested in receiving it.

Reference

  1. The combined use of virtual reality and EEG to study language processing in naturalistic environments | Behavior Research Methods
  2. Looxid Labs’ brain-monitoring VR headset could be invaluable for therapy | Engadget
  3. N400 | Scholarpedia


In-person therapy might become obsolete due to Virtual Reality

Digital Health | Source: Personal Connected Health Alliance

The Consumer Electronics Show (CES) 2018 last week in Las Vegas featured the latest trends in digital health, suggesting a promising future in which access to healthcare becomes easier, quicker, and more ubiquitous than ever. The market value of the digital health industry, including mobile health, telehealth, and wireless health, was estimated at 96 billion dollars in 2016 and is expected to reach 142 billion dollars by 2018. Even those oblivious to its tremendous market size had a chance to experience the technologies firsthand at the show and came away convinced that digital health will completely transform the traditional healthcare industry.

Wearables are a key digital health technology

Of all the digital health technologies, wearables with remote monitoring sensors that keep track of patients' conditions and provide feedback accordingly gained huge traction as innovative tech at CES. In particular, a considerable number of startups at the show attempted to capture brain activity using wearables, promoting relaxation, managing stress, aiding sleep, and potentially making treatment for cognitive and behavioral disorders more effective and personalized. Among these varied startups, Looxid Labs was definitely the most distinctive one to look out for at this year's CES, because its technology combines brainwave monitoring with Virtual Reality (VR).

A booth visitor trying out LooxidVR at CES 2018

Many visitors at Looxid Labs' booth expressed huge interest in trying out LooxidVR, a mobile VR headset embedded with EEG sensors and eye-tracking cameras, to conduct cognitive and behavioral therapy for widespread ailments such as post-traumatic stress disorder (PTSD). Several of our Medium posts have already discussed how VR combined with electroencephalography (EEG) monitoring can be used to overcome fear (Dare To Explore: VR Helps You Conquer Your Fears) and, more specifically, to treat illnesses such as ADHD (VR Neurofeedback: A New Drug-free Treatment for Mental Disorders). Owing to the interest of the many researchers and enterprises who visited the booth, this week's research review will once again shed light on the effectiveness and future potential of VR therapy.

Virtual Reality cognitive behavioral therapy can promote tobacco cessation

This week's research (Virtual Reality Behavioral Therapy) explores the development of personalized outpatient cognitive behavioral therapy (CBT) using VR and mobile health technologies. CBT involves changing patients' unhealthy thoughts, beliefs, and actions to increase more desirable behaviors; traditionally, however, CBT has been conducted only in real-life settings, for instance group therapy sessions. By contrast, this study takes full advantage of VR and monitors neurophysiological responses during its CBT session to reduce smokers' tobacco use. Moreover, the study aims to overcome the reduced treatment effectiveness seen in outpatients who have less personal connection to their therapists and find it difficult to integrate therapy sessions into their daily lives.

A virtual group therapy session | Source: Virtual Reality Behavioral Therapy

Before getting immersed in VR, each research participant (a heavy smoker) was asked to wear a VR headset, an EEG monitoring device, and a Zephyr BioHarness, a non-invasive wireless wearable worn around the chest that measures heart rate, skin temperature, and breathing rate. This neurophysiological sensor data allowed the researchers to determine the extent to which different messaging and content influenced each subject. Once everything was set, the subject entered a virtual room for a virtual group therapy session with different types of avatars. During the session, each avatar presented 90 seconds of pre-scripted personal experience of smoking. For example:

“Hi my name is John. I’m 54 years old and I smoke to take the edge off when I’m stressed. I worry about keeping my job. And sometimes I wonder if I can really handle everything. Having a smoke just gives me a second to think.”

Each avatar was given a distinct persona founded on different smokers' characteristics, based on their smoking, socio-demographic, and lifestyle information: age, gender, education, income, existing diagnosed medical conditions, duration of smoking, cigarette brands, motives for smoking, and so on. In addition, the avatars were created by transferring the facial expressions and body movements of human performers to a 3D model prior to the experiment, and the avatars' personal messages were also recorded by the human performers.

After the simulation, the subjects reported their subjective experience of the VR therapy session, including a verbal description of their emotions, stress level, and general feelings; a smoking-urges test; and a content-satisfaction questionnaire. Based on the subjects' neurophysiological responses and subjective data, the researchers addressed and explored some key questions:

  • Do self-reported emotions correlate with the neurophysiological response across the experiment and during specific events?
  • Is there a significant change in the neurophysiological response between avatar stories?
  • Does the avatar predicted to evoke the peak emotion differ from the actual peak-emotion evoking avatar?

By evaluating the subjects against these three questions, the researchers were able to validate the effectiveness of the avatar stories and customize content for a specific smoker. Tailored content and messaging allowed the avatars to emulate human behavior and interaction more convincingly, creating a highly personalized simulation that maximizes the smokers' emotional response and ultimately promotes tobacco cessation.

Remote and personalized therapy becomes possible

Virtual Reality Therapy | Source: The Guardian

Last week we covered how a physiological data-driven approach to personalized content marketing can usher in a new phase of the marketing industry. In a similar manner, this study suggests that VR therapy can also become more effective and individualized through neurophysiological analysis and, as portable bio-sensing wearables come into wide use, that a large population of outpatients can experience the benefit of a therapeutic environment at the time and location of their choosing. VR plays an instrumental role in digital health as well. VR-based clinical treatment not only offers more room for control ("real-world therapeutic environments include random elements with at least some degree of session-to-session variability") but also enables rigorous assessment of treatment response for a wide range of patients without an exhausting process. Advances in VR will offer tremendous opportunities for new medical interventions and better public health messaging, and may in fact represent a major inflection point in the clinical adoption of the technology.

LooxidVR | CES2018 Best of Innovation in VR

Looxid Labs' LooxidVR flaunted its potential to transform the digital health industry at this year's CES. The VR headset combined with bio-sensing hardware has the potential to become a major digital health wearable that delivers both portability and efficacy. Doctors will be able to remotely provide individualized therapy sessions to outpatients and make exhaustive, accurate evaluations of patients' illness and recovery based on their neurophysiological responses. In-person therapy might become an obsolete option no longer offered in the future.

LooxidVR pre-orders will start on Feb 1st, 2018. If you want to learn more about LooxidVR and Looxid Labs, feel free to visit our website at www.looxidlabs.com.

Also, we send out a periodic newsletter on VR trends and VR research. Subscribe if you are interested in receiving it.

Reference

  1. Virtual Reality Behavioral Therapy | Proceedings of the Human Factors and Ergonomics Society 2016 Annual Meeting
  2. New CES 2018 wearable tech boost your health and wellness | SCMP
  3. ‘After, I feel ecstatic and emotional’: could virtual reality replace therapy? | The Guardian


CES 2018: Looxid Labs Steps into the Spotlight at the World's Biggest Tech Show


This year’s CES is finally over. And we did not just survive the show, we nailed it. See how!

Best of Innovation, Best of CES 2018

Following in the megacorp footsteps of previous winners, Google's Tilt Brush last year and Samsung's Gear VR in 2016, Looxid Labs' LooxidVR has earned a CES 2018 Best of Innovation Award in AR and VR. So how could this three-year-old tech startup pull off such an achievement?

LooxidVR, a mobile-based VR headset for eye tracking and EEG recording, can track a user's brain activity, eye movement, and pupil dilation. It features time-synchronized acquisition of eye and brain data concurrent with VR content and provides an expandable API, making it widely applicable in VR industries that require a better understanding of users' emotional status, such as stress level, preference, and engagement.

This sparked interest from a variety of media outlets in various countries, including Engadget, Forbes, and MarketWatch. Some industry analysts interviewed us in order to cover our company in their reports and introduce it to their investors. Above all, Engadget chose Looxid Labs as a finalist for the official Best of CES 2018 awards; in particular, we were nominated in the Best Startup category.

Leading global IT companies are also interested in combining human physiological data with VR

Not only the media but also VR business representatives from top tech companies showed interest in Looxid Labs. Leading global IT companies pursuing VR are advancing their artificial intelligence technology through myriad variations on AI systems. Clearly, there is interest in improving the human experience and analyzing physiological and behavioral data, using algorithms to provide customized VR content. As you may see in our older Medium post 'How can physiological data-driven approach revolutionize VR content marketing?', there have been numerous attempts to find a better way to gauge user preference and even emotional reactions in response to VR content. Here's the proof: many CEOs of VR content companies said they would love to purchase LooxidVR, which reveals how users' brains activate and where users look in VR, and that they had been waiting for such a device to conduct BCI research.

New interfaces for the automotive industry were a major highlight of CES 2018, and LooxidVR can play a part in them

Of the thousands of new products on display, it is safe to say that the major highlight of this year's CES was next-level interfaces, such as brain and voice interfaces, in the automotive industry. Toyota announced that Amazon Alexa would be incorporated into Toyota and Lexus vehicles, so that customers will soon be able to control their cars simply by speaking to Alexa. The service will also enable new in-car features, such as the ability to start the engine remotely or lock the doors via a command through another Alexa device.

Also, Nissan's experimental "brain-to-vehicle" technology demonstrated how drivers can control their car with their thoughts. When a driver wears a helmet studded with EEG sensors, the brain-to-vehicle interface predicts the driver's actions and starts performing them 0.2 to 0.5 seconds sooner. The goal is a more pleasant driving experience, anticipating the driver's intent without other electronic assistance and providing augmented reality displays based on the driver's thoughts.

LooxidVR Pre-Orders Kick Off on Feb. 1st.

Do you want to experience new interfaces that let you detect, through users' eyes and brains, how immersed they are in VR or how engaged they are with VR content? Are you a neuroscience researcher investigating human cognition and emotion using EEG and eye-tracking technology? Then here is the device you have been yearning for: LooxidVR. LooxidVR is designed to measure the user's eye movements and brain activity with a VR-embeddable eye-tracking camera and EEG sensors, unlocking new levels of user research by exploring users' minds. LooxidVR can be widely applied in VR industries that require a better understanding of users' emotional status.

Do you want to purchase LooxidVR? LooxidVR pre-orders start on February 1st, 2018. If you are interested in learning more, please visit our website at www.looxidlabs.com.


How can a physiological data-driven approach revolutionize VR content marketing?

Fig 1. Netflix VR App | Source: Netflix

What is personalization in content marketing?

Only recently did marketers realize that spending money on content marketing without personalization is a fool's errand. Now you can see lists of content labeled "recommended for you" everywhere, especially on major media services such as YouTube, Netflix, and Amazon. Though seemingly selected and organized at random, an individualized list of what each consumer would love to see entails an arduous effort to develop an optimized personalization algorithm, along with collecting a large quantity of user data.

Among media content providers, Netflix goes one step further: it seeks to optimize not only what to recommend but also how to recommend it. (Read the more detailed article at Artwork Personalization at Netflix.) In that Medium article, the Netflix tech team discusses how they personalize the artwork for the content they provide. Stranger Things, whose second season ended in great success, has a variety of artwork (Fig 2) ready to be shown to users based on their preferences and interests: if the accumulated data of a user's clicks shows that the user is often intrigued by artwork featuring the main actors, the recommendation system will present the best imagery accordingly. Most of Netflix's users hardly notice what's behind the scenes, but they would be amazed to find out what happens every time they click "play".

Fig 2. Different types of artwork for Stranger Things | source: Artwork Personalization at Netflix

What are the challenges of personalization?

Nevertheless, the Netflix tech team brings up some challenges they currently face.

  • Challenge #1. Understand whether a user chooses to play a video due to its artwork or regardless of which image the system presents.
  • Challenge #2. Gauge the impact of changing artwork on a user’s decision
  • Challenge #3. Measure quantitatively how a specific artwork performs better than others

All three challenges Netflix has encountered arise mostly from the difficulty of finding signals that best represent the performance of artwork. Netflix currently considers several significant signals from its users: the videos they have played, their country, language, the device they are using, and the time of day and day of week they played. Yet, at best, Netflix can only infer a user's intention from these signals, without exactly understanding why a user prefers a specific artwork or when a user is showing interest or indifference to it. Especially now, when immersive media is gaining huge traction and many media services (Netflix among them) are seeking to enter the Virtual Reality (VR) market, overcoming the aforementioned challenges has never been more important in content marketing.

Can physiological data-driven approach help overcome the challenges?

Fig 3. Coca Cola Virtual Reality Christmas Ride | Source: Coca Cola

There are numerous studies that attempt to find a better way to gauge user preference and even emotional reactions in response to media content. Today's research review looks at a study that examines consumers' implicit (cognitive and physiological) and explicit (preference) responses to four traditional TV commercials and four VR commercials.

The premise of this research comes from previous findings that activation of the prefrontal cortex is highly associated with personal liking or disliking of stimuli, and that the immersive aspect of VR provides a sense of presence that may even surpass reality in situations where social, cultural, and physical features are properly simulated.

To start with, the researchers instructed each participant to put on a 16-channel electroencephalogram (EEG) cap and two electrooculogram (EOG) electrodes to detect dorsolateral prefrontal cortex (DLPFC) activity and eye movements during the experiment. Each participant then watched, in random order, four traditional commercials from different companies (Marvel, Nescafe, Volto, Coca Cola) and VR advertisements from the same brands. The participants watched the traditional TV commercials on a 2D screen and were fitted with an Oculus Rift for viewing the VR advertisements. After each commercial, the participants rated their experience on 20 adjectives, including "Interesting", "Exciting", and "Captivating", using a 7-point scale.

As the researchers hypothesized, significantly higher frontal activation (theta band activation) and higher scores on each adjective were observed for the VR advertisements compared to the traditional ones, which should come as no surprise. What deserves highlighting is the strong coherence between the implicit measures (EEG and EOG) and consumers' explicit preference (self-assessment): higher theta band activation appeared in a participant's recorded EEG when that participant reported the content as more exciting, interesting, or captivating. The key takeaway is that analyzing cognitive and physiological responses can be an alternative, or at least complementary, way of gauging user preference in VR content marketing.
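
As a rough illustration of the kind of analysis involved, the sketch below estimates theta band power (4-8 Hz) from a single frontal EEG channel with a Welch spectral estimate and correlates it with per-commercial self-report scores. The sampling rate, segment lengths, and ratings are made-up placeholders for illustration, not the study's data.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import pearsonr

FS = 250  # sampling rate in Hz (assumed)

def theta_power(eeg_segment, fs=FS, band=(4.0, 8.0)):
    """Mean power spectral density in the theta band for one EEG channel."""
    freqs, psd = welch(eeg_segment, fs=fs, nperseg=fs * 2)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

# Hypothetical per-commercial data: one frontal-channel segment per ad and
# the participant's mean self-report score across the 20 adjectives.
segments = [np.random.randn(FS * 30) for _ in range(8)]  # 8 ads, 30 s each
ratings = [5.1, 6.3, 4.2, 5.8, 6.5, 6.9, 5.5, 6.1]

powers = [theta_power(seg) for seg in segments]
r, p = pearsonr(powers, ratings)
print(f"theta-power vs. rating correlation: r={r:.2f}, p={p:.3f}")
```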

What are the limitations of a physiological data-driven approach?

Yet Netflix and other content providers would still be reluctant to bring this new attribute into their systems: even if physiological signals could provide a highly individualized recommendation for each user, providers worry that the sensors would considerably degrade the content experience. They are afraid of falling between two stools. Looxid Labs can help content providers overcome this limitation of the traditional personalized recommendation system. Looxid Labs' VR headset, LooxidVR, provides a way to seamlessly measure users' physiological responses and gain insight into user preference without hampering the immersive VR experience.

LooxidVR | CES 2018 Best of Innovation in VR

LooxidVR is a mobile VR headset with seamlessly integrated eye-tracking cameras and EEG sensors. Six EEG sensors are attached to the foam that cushions the wearer's forehead, detecting prefrontal cortical activity; two eye-tracking cameras track eye movements, pupil dilation, and eye blinks. Acquiring a user's EEG and eye movement data synchronized with VR content can tell content marketers exactly what triggered the user's emotional arousal (excitement or disgust, say) and lets them compare the impact of different content on quantitative grounds. For example, if content marketers discover from prefrontal cortex activation that a user was excited while viewing an intense car-action scene in a movie, they can add very specific, individualized content to that user's recommendation list.
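
A minimal sketch of what that workflow could look like, assuming per-sample frontal activation values and a known scene timeline. The data format, threshold, and scene labels are illustrative assumptions, not LooxidVR's actual API:

```python
import numpy as np

fs = 250                                   # EEG sampling rate (assumed)
frontal_power = np.random.rand(fs * 120)   # 2 minutes of per-sample theta power
scene_starts = np.array([0, 35, 70, 95])   # scene boundaries in seconds
scene_labels = ["intro", "car chase", "dialogue", "finale"]

# Flag moments where frontal activation rises well above its own baseline.
threshold = frontal_power.mean() + 2 * frontal_power.std()
event_samples = np.flatnonzero(frontal_power > threshold)
event_times = event_samples / fs

# searchsorted maps each arousal event onto the scene it occurred in.
scene_idx = np.searchsorted(scene_starts, event_times, side="right") - 1
for t, i in zip(event_times[:5], scene_idx[:5]):
    print(f"arousal at {t:5.1f}s during scene '{scene_labels[i]}'")
```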

With a physiological data-driven approach, content marketers can better understand their consumers' preferences and therefore provide relevant and stimulating content to each individual. If LooxidVR's technology takes off as media service companies expect, the revolution in content marketing will not be far behind.

LooxidVR pre-orders will start on Feb 1st, 2018. If you want to learn more about LooxidVR and Looxid Labs, feel free to visit our website at www.looxidlabs.com.

We also send out a periodic newsletter on VR trends and VR research. Subscribe if you are interested in receiving it.

Reference

  1. How Personalization Is Changing Content Marketing
  2. Artwork Personalization at Netflix | Medium
  3. Consumer Neuroscience: the traditional and VR TV Commercial | Neuropsychological Trends


CES 2018: LooxidVR Wins a Best of Innovation Award

By | BLOG

The world’s biggest consumer tech event is back. And this time with LooxidVR, CES 2018’s most innovative VR product.

LooxidVR Wins the Best of Innovation Award in VR

The Consumer Electronics Show (CES) is the world's largest trade show, held annually by the Consumer Technology Association (CTA) in Las Vegas. Among the hottest gadgets from around the world, the CES Best of Innovation Award goes only to the products that show the best innovation in each category in light of design, technology, and customer value. This year's award in the VR category went to Looxid Labs' LooxidVR, following Samsung's Gear VR, a CES 2016 Innovation Awards honoree, and Google's Tilt Brush, a CES 2017 Best of Innovation honoree.

Media taking interest in LooxidVR

LooxidVR Disclosed in CES Unveiled

LooxidVR is the world’s first mobile-based VR headset with two eye tracking cameras and six brain-wave sensors that seamlessly measure user’s eye and brain activity. Using an eye and brain interface, Looxid Labs aims to develop an emotion recognition system to reveal users’ unspoken emotions in VR. This award-winning product was successfully presented at ‘CES Unveiled Las Vegas’ where visitors could have a first-hand experience of it. Thankfully, a lot of media including Forbes, Bloomberg, AFT and more showed great interest in the product. Also, they all had an amazing experience by trying out the demo.

Demo: Visualized Eye and Brain Data
Heatmap (left), user behavior and response analysis (right)

In the demo, visitors were placed in a virtual museum where they could see a visualized panel of their eye and brain data, including pupil size and brain activity. As they looked around the museum, they saw how their physiological data changed in real time while enjoying the immersive VR experience. At the end, they could review a result page that pinpoints where they looked and quantitatively analyzes their behavior and responses, such as retention rate and stress level. Those who tried the demo were excited by its potential in industries that require a better understanding of users' emotional status, such as education, marketing, and healthcare.

Successful Wrap Up of ‘CES Unveiled Las Vegas’

Overall, we have successfully wrapped up the CES Unveiled event and will continue to participate in CES from January 9th to 12th. If you want to take part in this amazing experience with LooxidVR, please come see us at:

Eureka Park #52907, Sands Expo Hall G on Jan. 9–12.

Pre-orders will start on Feb. 1, 2018, so if you want to receive upcoming news on our product, subscribe on our website at looxidlabs.com and be ready.


VR Neurofeedback: A New Drug-free Treatment for Mental Disorders

By | BLOG

Let’s start off with a riddle: the number of diagnoses of this mental disorder has shot up 43 percent between 2003 and 2011 in the United States, reaching the total number of patients to almost 6 million, and this disorder is prevalent among children aged 4–17, especially boys.

Source: Daily Nexus

Current treatment for Attention Deficit Hyperactivity Disorder (ADHD)

The answer is ADHD. ADHD causes patients to have trouble controlling their impulses and staying attentive to a particular activity for long periods. Since these behaviors significantly affect children's social life and education as a whole, doctors, as well as parents, have called for effective and personalized ADHD treatments.

Reflecting the increased concern about ADHD, the National Institute of Mental Health (NIMH) conducted the Multimodal Treatment for ADHD (MTA) study, which combined psychoeducation, medication, behavioral interventions, parent training, and school support. However, as Harvard Health Publishing suggests, the outcomes of MTA have proven increasingly ineffective, especially in the long term: the positive effects of the drugs began to fade after children completed the intensive drug therapy and had entirely disappeared by the 36-month mark. Moreover, even though the multimodal treatment somewhat reduced ADHD symptoms, there was no significant improvement on several critical measures, including academic performance, social functioning, and aggressive behavior. Indeed, the group of children who underwent MTA still lagged behind healthy subjects on 91% of the evaluation variables.

Neurofeedback: a better alternative treatment?

Neurofeedback (NFB), also called EEG biofeedback, is a way of training the brain toward a more balanced and healthy mind and body. NFB has been touted as a more reliable alternative for treating ADHD thanks to its non-invasive, drug-free therapy process. NFB for ADHD typically examines a low-frequency range (theta and alpha) and a high-frequency range (beta) in the subject's recorded EEG, since the literature suggests that ADHD subjects show excessive theta and alpha activity but reduced beta activity.
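
A common quantity in this line of work is the theta/beta ratio. The following sketch shows one way to estimate it from a single EEG channel using a Welch power spectral estimate; the sampling rate and band edges are common conventions assumed here for illustration, not parameters taken from any particular study.

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    """Average PSD of signal x between lo and hi Hz (Welch estimate)."""
    freqs, psd = welch(x, fs=fs, nperseg=fs * 2)
    return psd[(freqs >= lo) & (freqs < hi)].mean()

def theta_beta_ratio(x, fs=250):
    # The ADHD literature often reports elevated theta (4-8 Hz) relative
    # to beta (13-30 Hz); a higher ratio suggests cortical under-arousal.
    return band_power(x, fs, 4, 8) / band_power(x, fs, 13, 30)

eeg = np.random.randn(250 * 60)  # one minute of simulated single-channel EEG
print(f"theta/beta ratio: {theta_beta_ratio(eeg):.2f}")
```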

Departing from this literature, a research team at the Technical University of Denmark devised a new method to train ADHD subjects with NFB. These researchers decided to look at the P300 potential (a large positive deflection in the recorded EEG that peaks around 300 ms after a relevant stimulus) instead of the beta, theta, and alpha bands, since the P300 is widely regarded as an attention-related indicator. For the experiment, they implemented a brain-computer interface (BCI) inside a virtual reality (VR) classroom for NFB attention training. (BCI inside a virtual reality classroom: a potential training tool for attention)

Fig. 1: ANISPELL

The experiment includes two attention training games in the virtual classroom. The first is called ANISPELL. The subject sees a 4 x 4 grid of animal images in grayscale on a black background. At random, a row or column lights up for 100 ms, displaying the original colors of its animal images on a white background, then returns to grayscale. Each trial has 15 light-up events. Throughout a trial, subjects are instructed to pay close attention to one specific animal. After the trial ends, they must recall the color of that animal and locate its most dominantly colored part. Lastly, the experimenters ask an additional question completely unrelated to the designated animal, so that subjects stay attentive during the entire session.

Fig. 2: T-Search

The second game, called T-Search, is a little more challenging than the first. The subjects go through twelve different images, one at a time, for five trials. Eight of the twelve images contain several red 'X's and 'T's (Fig. 2a), while four also contain a blue 'T' among the red ones (Fig. 2b-c). At the end of each trial, the subjects are asked to indicate the location of the blue 'T' in a compartmentalized square (Fig. 2d) and count the number of red T's that appeared alongside it. The cumulative scores of the two games were shown to the subjects to foster a competitive spirit and thus heighten their attentiveness.

Based on the P300 signals detected in the subjects' recorded EEG, the researchers were able to predict the attentiveness of the ADHD subjects and concluded that attention training with NFB could not only reduce ADHD symptoms but also considerably improve academic performance.
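
For readers curious about the mechanics, the standard way to estimate a P300 response is to cut EEG epochs time-locked to each stimulus onset, baseline-correct them, and average. The sketch below follows that recipe on simulated data; the sampling rate, event times, and analysis windows are assumptions for illustration, not the study's parameters.

```python
import numpy as np

fs = 250
eeg = np.random.randn(fs * 60)              # one minute, one channel (simulated)
onsets = np.arange(2 * fs, 55 * fs, fs)     # stimulus onsets in samples, 1 s apart

pre, post = int(0.2 * fs), int(0.6 * fs)    # window: -200 ms to +600 ms
epochs = np.stack([eeg[t - pre:t + post] for t in onsets])
# Subtract each epoch's pre-stimulus mean so trials share a common baseline.
epochs -= epochs[:, :pre].mean(axis=1, keepdims=True)

erp = epochs.mean(axis=0)                   # averaged event-related potential
# Indices 0.45*fs-0.55*fs correspond to 250-350 ms post-stimulus.
p300_window = slice(int(0.45 * fs), int(0.55 * fs))
print(f"estimated P300 amplitude: {erp[p300_window].mean():.3f}")
```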

Neurofeedback training in Virtual Reality (VR)

Fig. 3: A VR classroom scene

The researchers attributed the strong performance of their experiment to its child-friendly, easy-to-use setup, something the prior literature has had difficulty establishing. In particular, they emphasized the advantage of VR in constructing a desirable experimental environment. A VR classroom, a setting children are exposed to almost every day, provides a realistic, naturalistic environment in which subjects can forget the controlled test lab. The researchers could also handily simulate distractions, such as a car driving by outside the window or a bunch of colorful hula-hoops, within a fully controlled environment.

VR-integrated NFB is not a mere alternative to commonly known ADHD treatments but is gaining huge traction as the next frontier for psychological and cognitive disorder treatment:

VR therapy significantly reduces the severity of PTSD symptoms and results in rapid extinction. The findings also suggest combining VR and EEG biofeedback as a potential treatment for stress-related disorders, because real-time neurophysiological data such as serum cortisol levels, heart rate variability, and mid-frontal alpha EEG asymmetry may provide useful inputs for adjusting VR exposure therapy protocols to enhance stress resilience or accelerate treatment response. (Dare To Explore: VR Helps You Conquer Your Fear)

VR-integrated NFB opens up opportunities for noninvasive and drug-free treatment with almost no side-effects and increased control of experimental environments.

A key bottleneck for VR-integrated Neurofeedback

Despite the promising future VR-integrated NFB could deliver, there is a critical bottleneck at stake: accuracy. Two aspects need to be considered in building an NFB system with high accuracy: how robustly the EEG signals are acquired, and how well the EEG signals and VR content are synchronized in time. Addressing both ensures the accuracy of the NFB system and therefore enables more reliable and personalized treatment for various psychological disorders.

LooxidVR | All-in-one mobile VR headset embedded with EEG sensors and eye-tracking cameras

LooxidVR can help alleviate this bottleneck. Embedded with EEG sensors and eye-tracking cameras, LooxidVR helps researchers acquire robust EEG signals by employing processing algorithms to eliminate unwanted noise. Most importantly, LooxidVR facilitates time-synchronized acquisition of eye tracking data, EEG signals, and stimuli, so that researchers can obtain correlated datasets from different modalities.
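
As a rough idea of the kind of cleanup such processing involves, the sketch below applies a power-line notch filter and a bandpass filter to raw EEG with SciPy. The filter settings are common defaults assumed for illustration, not LooxidVR's documented pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

fs = 250  # sampling rate in Hz (assumed)

def clean_eeg(raw, fs=fs):
    # Remove power-line interference at 50 Hz (use 60 Hz in the US).
    b_notch, a_notch = iirnotch(w0=50.0, Q=30.0, fs=fs)
    x = filtfilt(b_notch, a_notch, raw)
    # Keep the 1-45 Hz band where most cognitive EEG activity lives.
    b_band, a_band = butter(N=4, Wn=[1.0, 45.0], btype="bandpass", fs=fs)
    return filtfilt(b_band, a_band, x)

raw = np.random.randn(fs * 10)   # ten seconds of simulated raw EEG
cleaned = clean_eeg(raw)
```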

Interested in VR-integrated neurofeedback for treating mental disorders? Embrace the opportunity with LooxidVR.

LooxidVR pre-orders kick off on February 1st, 2018. If you are interested in learning more, please visit our website at www.looxidlabs.com.

Reference

  1. Neurofeedback for attention deficit hyperactivity disorder | Harvard Medical School
  2. ADHD, By the Numbers | ADDITUDE
  3. How Does BrainCore Neurofeedback Work?
  4. BCI inside a virtual reality classroom: a potential training tool for attention | EPJ Nonlinear Biomedical Physics
