Unveiling Users’ Architectural Preference in VR

Photograph by Daniel Acker / Bloomberg via Getty

In our previous stories, we have repeatedly discussed the many possibilities for virtual reality (VR) as a tool to explore the minds of users. It becomes extraordinarily powerful when combined with technologies such as eye tracking, EEG analysis, and neuroimaging, which together try to understand and unveil things originally hidden deep inside the human mind. In particular, our latest research review, “Combination of Virtual Reality and Eye Tracking: Explore the Mind of Consumers,” covered the development of a gaze-based assistant system in a virtual supermarket. The proposed system provided individualized recommendations based on a consumer’s preference, which was inferred from the consumer’s attention level.

How can we uncover the deep psychology behind preference?

The results of that research are inspiring: they show not only that a virtual supermarket can induce quite active interaction between human and technology, but also that customers’ in-the-moment preferences can be measured and fed back into the virtual environment. Yet an issue remains. Why did a consumer stare at a particular product longer than at others? Did he like it more, or did he strongly dislike it? What feelings, what true reactions, occurred in his mind as he searched through the many granola options? Eye tracking on its own, though powerful, still cannot tell us what specifically people have in their minds.

The combined use of VR and EEG: Exploring preference

Source: hippocratesineurope.com

Preference, or liking, can be rephrased as an “affective response,” and it belongs to the realm of human emotion, which is far too complicated to be defined and determined by superficial gaze information alone. Our brain, on the other hand, reacts actively to all sorts of stimuli and contains much information about what we see and how we feel. Therefore, this week we look into a study that investigated the deeper nature of preference in virtual reality (VR) through the use of electroencephalography (EEG): “Affective response to architecture — investigating human reaction to spaces with different geometry.”

Investigating emotional response to spaces

The field of architecture is one of the most prominent areas dealing with the interaction between humans and their environment. As people react differently to the various spaces they enter, architects must be sensitive to those feelings in order to construct spaces that are not only suitable for their use but also attractive to the minds of their users. In other words, the search for the right way to design an architectural space is an enduring but fundamental pursuit for most architects. Many people assume it is the designer’s responsibility, and within the designer’s ability, to figure out the perceptual and cognitive influence of architectural space on people. However, much more can be identified with the help of scientific measurement and analysis than with an individual’s insight alone. Hence, the study aimed to investigate the emotional and cognitive reactions generated by various types of spaces through the quantification and measurement of EEG.

To achieve this objective, the research team conducted the experiment in two phases. The first phase centered on observing human behavior in a virtual environment by analyzing the participants’ self-assessment results. But first of all, why VR? When designing an experimental setup, there is always a trade-off between keeping control of experimental variables and presenting a realistic environment. Virtual reality allows researchers to manipulate experimental controls while keeping design features constant, so the team chose a virtual environment as a substitute for reality to overcome this trade-off. Then how did they design the virtual environment to observe human reactions to different types of architectural spaces?

Figure 1. Plan and sections of the four designed VR spaces

They built four types of virtual environments: a square symmetrical space (Sq); a round-domed symmetrical space (Ro); a sharp-edged asymmetrical space with tilted surfaces, that is, walls and a ceiling (Sh); and a curvy asymmetrical space with rounded, smooth, cornerless surfaces (Cu). They designed the four spaces this way to examine how people feel about interiors with complex forms that include breaks and curves (Sh and Cu), as compared with simple structures (Sq and Ro).

Figure 2. Upper left: external view of the four designed VR spaces

The participants entered each of the four spaces by walking with a joystick; they passed through the corridor, opened the door, explored the space, and left when they had finished. Afterwards, they filled out a questionnaire about their experience in each space and rated their preference for it on a 5-point Likert scale.

In the second phase, a new framework for examining humans’ physiological responses to architectural space geometry was adopted. The trial from the first phase was repeated while the participants wore a wireless EEG device so that their brain activity could be analyzed. In other words, the subjects walked through and explored the spaces as before, but this time wearing an Emotiv EPOC headset.

VR experiment: survey versus EEG

The results of the two experiments proved complementary. The first experiment suggested that people felt differently about each space in terms of efficiency, aesthetics, safety, pleasantness, and level of interest. It also indicated that participants with no expertise in design tend to prefer different spaces than those who work as designers.

Fig. 3. Experiment 1

Then what of the second experiment, the enhanced version of the first with a reinforced analysis methodology? The participants’ brainwaves directly confirmed the differing reactions to the spaces indicated in the first experiment. What is more notable, however, is that the EEG examination suggested an additional insight.

The figure below illustrates the NPC 1 and NPC 2 mapping for one participant; dots of each color indicate the four different kinds of spaces. The first graph is based on a 10-second recording window, while the second focuses on the first 2 seconds of exposure to a space. Comparing the two graphs, the different reactions to each space are better distinguished in the early time window. That is, adaptation and emotional response to an area occur within that short period. This finding is also in line with eye-tracking studies showing that viewers of an artwork spend their first 2 seconds sweeping the image and grasping its overall gist.

Fig. 4. Experiment 2
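
The paper’s exact pipeline is not reproduced here, but the analysis it describes, extracting features from either the full recording or only the first two seconds and projecting them onto two principal components, can be sketched roughly as follows. Everything in this snippet (synthetic data, sampling rate, channel count, the log-power feature) is an assumption for illustration, not the authors’ code.

```python
# Minimal sketch: compare a full 10 s window with the first 2 s of
# exposure by projecting per-space EEG features onto two principal
# components ("NPC 1" / "NPC 2"). Synthetic data; illustrative only.
import numpy as np
from sklearn.decomposition import PCA

SFREQ = 128   # assumed sampling rate (Hz)
N_CH = 14     # assumed channel count (an EPOC-class headset)

rng = np.random.default_rng(0)
spaces = ["Sq", "Ro", "Sh", "Cu"]
# One mock recording per space: (channels, samples) over 10 s.
recordings = {s: rng.standard_normal((N_CH, SFREQ * 10)) for s in spaces}

def features(eeg, sfreq, win_sec):
    """Mean log power per channel over the first win_sec seconds."""
    segment = eeg[:, : int(sfreq * win_sec)]
    return np.log(np.mean(segment ** 2, axis=1))

for win in (10, 2):  # full window versus early 2 s window
    feats = np.array([features(recordings[s], SFREQ, win) for s in spaces])
    pcs = PCA(n_components=2).fit_transform(feats)
    for s, (pc1, pc2) in zip(spaces, pcs):
        print(f"win={win:>2}s {s}: NPC1={pc1:+.2f} NPC2={pc2:+.2f}")
```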

In a nutshell, the experiments conducted in virtual reality provided a better understanding of affective response to architectural space, which can in turn contribute to designs that users favor. Furthermore, EEG was shown to reveal different physiological reactions more explicitly. Compared with analyzing subjective survey results, brainwaves give researchers real-time information about what happens in users’ minds while they explore and adapt to a particular space.

Exploring the user’s mind with EEG in VR

To sum up, even identical experiments will yield qualitatively different results and contributions depending on the analysis methodology. To gain a more profound understanding of humans, how they feel about, think about, and react to their surroundings, it is crucial to collect and investigate physiological data carefully. Electroencephalography, with its relatively broad applicability, can be a proper choice for many researchers.

If you are interested in trying out your research in VR and want to understand users’ brain activity in the environment, visit our website www.looxidlabs.com for information about our newly released product, LooxidVR. This mobile-based VR headset is the world’s first to provide an interface for both the brain and the eyes through its embedded EEG sensors and an eye camera.

LooxidVR

In addition, we send out a periodic newsletter on VR trends and VR research. Subscribe if you are interested in receiving it.

Reference

  1. Affective response to architecture — investigating human reaction to spaces with different geometry | Architectural Science Review
  2. Visual Interest in Pictorial Art During an Aesthetic Experience | Spatial Vision
  3. In the Eye of the Beholder: Employing Statistical Analysis and Eye Tracking for Analyzing Abstract Paintings | Proceedings of the 20th ACM International Conference on Multimedia


Combination of Virtual Reality and Eye Tracking: Explore the Mind of Consumers


What do you think is the most relevant information about a product, the kind that can successfully induce a consumer’s purchase? If you were a promotion manager for a granola-selling company, how would you try to understand your potential consumers’ purchase behavior and the deep psychology behind it?

Eye tracking: Keeping track of consumer attention

Source: bluekiteinsight.com

Eye tracking, the sensor technology that enables a device to measure exactly where and when people’s eyes are focused, is known to provide a better understanding of consumers’ visual attention. People tend to stare longer, and look more often, at objects they are interested in, and their visual path reveals much about their cognitive flow. Carefully investigating consumers’ visual logs, in other words their eye tracking data, can therefore give significant insight to those looking for ways to promote sales of particular products. From the consumer’s perspective, the general public might also enjoy a far better shopping experience with real-time recommendation systems based on their eye gaze.
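
As a concrete example of the metrics this paragraph mentions, dwell time and fixation count per product can be computed from a simple fixation log. The data format below is invented for illustration; real eye trackers export richer logs.

```python
# Illustrative only: total dwell time and fixation count per area of
# interest (AOI) from a mock fixation log. Longer and more frequent
# fixations are commonly read as stronger visual interest.
from collections import defaultdict

# (AOI label, fixation duration in ms) for one shopper; mock data.
fixations = [
    ("granola_A", 220), ("granola_B", 180), ("granola_A", 450),
    ("granola_C", 90),  ("granola_A", 310), ("granola_B", 140),
]

dwell = defaultdict(int)   # total ms of gaze per product
count = defaultdict(int)   # number of fixations per product
for aoi, dur_ms in fixations:
    dwell[aoi] += dur_ms
    count[aoi] += 1

for aoi in sorted(dwell, key=dwell.get, reverse=True):
    print(f"{aoi}: {dwell[aoi]} ms across {count[aoi]} fixations")
```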

But how can we track consumer attention in the real world?

Recall your shopping experience. You enter a supermarket, stand in front of the shelf stuffed with the product category you were looking for, skim through several products, and finally pick one into your cart. The whole process of making a purchase decision happens within seconds. Consequently, it is highly important for retailing researchers to investigate consumers’ natural attentional process “in situ.”

The majority of current research, however, even when analyzing eye tracking data, is undertaken in laboratory settings. A laboratory environment makes it easy to exercise experimental control and investigate exactly what you want to know. On the other hand, keeping tight experimental control inevitably lowers ecological validity, and if ecological validity is poor, even a well-analyzed result can become worthless, as we cannot guarantee that similar effects would occur in the wild. This trade-off between control and ecological validity has always been a serious issue for many researchers.

Virtual reality mobile eye tracking: A new research opportunity

Fortunately, the advent of virtual reality (VR) is extending this trade-off frontier for existing research. VR not only allows various levels of experimental control but also makes it possible to build a shopping experience that feels real. This sort of experimental environment puts researchers at a point where a near-optimal combination of experimental control and ecological validity can be achieved. With the help of VR, eye tracking can therefore be used far more effectively to capture users’ visual attention with better reliability. This week’s research, “Combining virtual reality and mobile eye tracking to provide a naturalistic experimental environment for shopper research,” reviewed how mobile eye tracking can be used in virtual reality and discussed the pros and cons of applying eye-tracking technology across experimental environments. In particular, it compared three kinds of environments: a conventional 2-D monitor-based setting, virtual reality, and the real environment. The paper also proposed an experiment in a virtual reality setting to discuss the validity of using mobile eye tracking in VR to study consumer behavior.

Figure 1. Interacting in virtual reality

First of all, the paper set up criteria and rated the relative superiority and inferiority of three different experimental settings on each criterion. The resulting ratings, summarized in the table below, can serve as a useful guideline for deciding which equipment to use and how to design eye-tracking experiments. As the table shows, “desktop eye tracking” has a relative advantage over “mobile eye tracking in the field” on criteria concerned with experimental control (“ease of creating/using non-existing stimuli,” “ease of controlling and randomizing treatment and extraneous factors,” “naturalness of the eye tracking task,” “ease of analyzing and reacting to the respondent’s attention and behavior in real time,” “ease of generating large sample sizes,” “ease of obtaining retailer permission to record,” “ease of data preparation,” “reliability of AOI coding,” and “reproducibility of the experimental setting”). In contrast, “mobile eye tracking in the field” rates better than “desktop eye tracking” on the criteria concerning ecological validity (“realism of stimulus display” and “realism of interaction”).

Table 1. Criteria for deciding which environment to use; eye tracking-specific criteria are highlighted in grey

How about “mobile eye tracking in virtual reality”? Interestingly, it appears to be the compromise that appropriately combines the relative advantages of both sides. “Mobile eye tracking in virtual reality” is rated highly on almost every criterion where “desktop eye tracking” outperforms “mobile eye tracking in the field.” What is more, unlike “desktop eye tracking,” it receives enhanced scores for “realism of stimulus display” and “realism of interaction.” Although it still has to tackle cost-effectiveness and further technological requirements concerning realistic visualization and convincing presentation of the setting, mobile eye tracking in VR is anticipated to open many new research opportunities.

Fig. 2. Trade-off between experimental control and ecological validity

Observing shopper behavior with eye tracking data in a virtual supermarket

Here is one of the new studies that brought eye tracking in virtual reality to a new field: shopper research. To demonstrate how mobile eye tracking in virtual reality can help answer unresolved questions in retailing research, the team designed a virtual store to test whether additional product information can change consumers’ final purchase decisions.

In the virtual supermarket, which was designed to create a realistic shopping experience, several shelves were filled with assortments of granola and baking-mix products. The supermarket was presented in a virtual reality lab equipped with the front projection screen of a CAVE environment, and respondents went through the experiment wearing SMI eye tracking glasses. They underwent three successive stages. In the first stage, they chose their most preferred product out of the 20 on the shelf. Then the same set of products reappeared with a red frame highlighting the initially chosen product, and soon after, six recommended products were highlighted with a blue frame. A pop-up bubble with additional product information appeared next to any product the respondent gazed at for more than 200 ms. In the end, the subjects were asked whether they would stay with their initial product choice.

Fig. 3. Example scenes of the virtual supermarket
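
The 200 ms dwell rule in the paragraph above is easy to picture in code. The sketch below is a hypothetical frame-loop implementation, not the study’s software; in practice a VR engine such as Unity or Vizard supplies the gaze ray and frame timing.

```python
# Sketch of a dwell-time trigger: show a product's info bubble once
# gaze has rested on it for 200 ms. Names and structure are invented.
from dataclasses import dataclass
from typing import Optional

DWELL_THRESHOLD_S = 0.200  # the 200 ms threshold used in the study

@dataclass
class DwellTrigger:
    target: Optional[str] = None  # product currently under gaze
    dwell_s: float = 0.0          # accumulated dwell time on target

    def update(self, gazed: Optional[str], dt: float) -> Optional[str]:
        """Call once per frame with the currently gazed product (or
        None) and the frame time; returns a product id exactly once
        when the dwell threshold is crossed."""
        if gazed != self.target:
            self.target, self.dwell_s = gazed, 0.0
            return None
        self.dwell_s += dt
        if self.target is not None and self.dwell_s >= DWELL_THRESHOLD_S:
            self.dwell_s = float("-inf")  # fire only once per dwell
            return self.target
        return None

trigger = DwellTrigger()
for frame in range(20):  # ~1/3 s of 60 fps gaze samples on one product
    product = trigger.update("granola_A", dt=1 / 60)
    if product:
        print(f"frame {frame}: show info bubble for {product}")
```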

The results showed that some subjects changed their preference over the stages. In other words, their decisions were affected by the additional information provided in VR, which in turn implies that the virtual supermarket induced quite active interaction between human and technology, and that such an experimental setting is helpful for testing and observing consumer responses.

Soon, when eye tracking technology is integrated into everyday electronic devices, far more innovations in retailing research and in people’s shopping experiences will become possible. For instance, a gaze-based assistant system that provides individualized recommendations based on a consumer’s preference might change our expectations of what shopping should be in the future.

Try out your research with virtual reality and eye tracking

Although the paper focused mainly on shopping, a gaze-based assistant system that reflects the user’s real-time preference in a virtual environment can be widely used in any area where exploring people’s minds matters. If you are still uncertain of its validity, check out the available technology that has successfully combined virtual reality with eye tracking and use it to investigate whatever you want to know. A great deal of valuable but so far hidden information, such as consumers’ complicated in-store decision processes or the interior design elements that significantly influence people’s mood, would be in your hands.

If you are interested in using a brain and eye interface in virtual reality, visit our website www.looxidlabs.com for information about our newly released product, the world’s first mobile VR headset with an interface for both the eyes and the brain.

LooxidVR

In addition, we send out a periodic newsletter on VR trends and VR research. Subscribe if you are interested in receiving it.

Reference

  1. Combining virtual reality and mobile eye tracking to provide a naturalistic experimental environment for shopper research | Journal of Business Research
  2. How Eye Tracking Works | Blue Kite Insight


Getting to Know Your Working Stress Level through EEG in Virtual Reality


How can a CEO ascertain that all of his or her team members are working at full capacity?
Are you sure that your brain is not overloaded by your everyday working conditions?

Source: DEFACTO BLOG

Now, with the combined use of electroencephalography (EEG) and virtual reality (VR), you can find out your own and your co-workers’ mental workload and stress levels. Adopting this combined tool can lead to much more efficient operational decisions that achieve a fair distribution of workload and responsibility among workers.

What affects job performance?

In fact, a multitude of variables, from workplace culture to the size of work equipment, hinders our ability to thoroughly assess any situation, which ultimately affects job performance. Chief among them, fatigue and stress are critical human factors that should not be taken lightly. EHS Today reported that about half of US workers suffer from fatigue, and the complaint is not theirs alone: the global workforce reports tiredness as well. The most critical problem with stress at work is that excessive workload, and the stress that accompanies it, can directly lead to safety issues and sometimes severe injuries. In other words, tiredness affects our judgment and might put our health at risk; thus, workers’ perceived levels of mental stress and workload should be continuously monitored and evaluated to protect them from industrial accidents. Though this may sound costly for managers to keep their eyes on, it is imperative, since managing workers’ stress levels contributes to their overall work effectiveness.

Bio-signals would help you check your mental workload

Yet how can we assess one’s workload? There are mainly three types of workload assessment methodology: subjective measures, performance measures, and physiological measures. Conventionally, people had to rely on the worker’s subjective assessment, in which users determine for themselves how mentally overloaded they are; indeed, several versions of the Subjective Workload Assessment Technique (SWAT) have been developed. Nonetheless, such methods cannot escape a fundamental weakness: they are not sensitive enough to catch subtle mental workloads, which, if accumulated, can in turn lead to chronic fatigue. Performance measures, which record performance scores and use them as an indicator of task demand or difficulty, are far more objective, but they are hard to use widely because of their intrusiveness in various work settings.

Sensors | Using Psychophysiological Sensors to Assess Mental Workload During Web Browsing

The best and most straightforward way to diagnose our physical state is to look at bio-signals. Apart from conventional methods such as the statistical analysis of special events or keeping track of workers’ complaints, physiological information can be used to evaluate human factors. Among the many bio-signals, electroencephalography (EEG) is well known for its high time resolution, its ability to continuously monitor brain stress with adequate accuracy, and, most importantly, its capacity to recognize human emotion, stress, vigilance, and more. That is, EEG can be used to monitor workers’ mental workload, emotion, and stress while they perform any task. Still, some of you might worry about how to collect EEG signals from a real working environment, since such environments are hard to simulate physically. However, now that virtual reality (VR) technology is fairly well advanced, simulating your working conditions in a virtual environment is no longer a problem.
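
To make “EEG-based workload monitoring” concrete, here is one common, simplified recipe: the ratio of frontal theta power to parietal alpha power, which tends to rise with mental workload. The channel choices, constants, and synthetic data below are illustrative assumptions, not the method of the study reviewed below.

```python
# Sketch of a band-power workload index on synthetic EEG, using
# SciPy's Welch PSD estimator. Illustrative only.
import numpy as np
from scipy.signal import welch

SFREQ = 256                                 # assumed sampling rate (Hz)
rng = np.random.default_rng(1)
frontal = rng.standard_normal(SFREQ * 30)   # 30 s of mock frontal EEG
parietal = rng.standard_normal(SFREQ * 30)  # 30 s of mock parietal EEG

def band_power(signal, sfreq, lo, hi):
    """Integrate the Welch PSD between lo and hi Hz."""
    freqs, psd = welch(signal, fs=sfreq, nperseg=sfreq * 2)
    band = (freqs >= lo) & (freqs <= hi)
    return np.trapz(psd[band], freqs[band])

theta = band_power(frontal, SFREQ, 4, 8)    # frontal theta (4-8 Hz)
alpha = band_power(parietal, SFREQ, 8, 13)  # parietal alpha (8-13 Hz)
print(f"workload index (theta/alpha): {theta / alpha:.2f}")
```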

Measurement of stress recognition of crew members by EEG in a Virtual Environment

This week’s research review illustrates the measurement of mental workload through EEG in a virtually simulated environment: “EEG-based Mental Workload and Stress Recognition of Crew Members in Maritime Virtual Simulator: A Case Study.” The research team focused on the maritime industry, where human factors are considered one of the leading causes of accidents, contributing to nearly 96% of all maritime accidents. Even though the industry has achieved notable improvements in ship equipment and overall systems, human factors have not been addressed enough to raise the overall safety level. Therefore, the research aimed to study the cause and effect of human errors by monitoring the mental workload, emotion, and stress levels of maritime trainees.

Fig. 1. Simulator at SMA

To be more specific, in order to study the relationship between maritime trainees’ mental workload, stress levels, and task performance, the research team conducted the experiment with four maritime trainees forming a crew: an officer on watch (OOW), a steersman, a captain, and a pilot, each assigned duties corresponding to those of a real crew member. The crew had to navigate a vessel to its destination within SMA’s Integrated Simulation Centre (ISC), where a highly realistic environment was simulated. During the voyage, each subject’s emotion (positive, neutral, negative), workload (none, minimal, moderate, high), and stress (low, medium low, moderate low, medium, medium high, moderate high, high, very high) were observed and further analyzed after the experiment.

Fig. 2. OOW, captain, and pilot in the simulator during the experiment

The analysis produced the following results. The OOW, who constantly had to maintain watch, was in the most negative emotional state; the captain, who had to give orders to the crew and carried the greatest responsibility, showed the highest workload; and the captain and the pilot, who bore relatively more responsibility than the OOW and the steersman, recorded higher stress levels as well.

Though the experiment is still at a preliminary stage of studying human factors, its success in monitoring emotion, mental workload, and stress implies that the proposed approach can be applied far beyond the maritime domain. EEG-based human factors evaluation tools can be used in any industry where multiple people work together. It is also anticipated that such a mechanism can broaden research on human-machine interaction.

LooxidVR: The all-in-one device with a VR-compatible EEG sensor and eye-tracking camera

LooxidVR

Then what should be the next step? To measure human factors more accurately in a far more immersive environment, the data-collecting sensors and the simulated environment should be coupled as closely as possible. LooxidVR, winner of a CES 2018 Best of Innovation Award, is here to provide robust acquisition of the user’s brain activity and even eye movement in a VR environment. Made by Looxid Labs, LooxidVR is the world’s first mobile VR headset to provide an interface for both the eyes and the brain. Looxid Labs is ready to provide an integrated solution to anyone interested in exploring the user’s mind. It will be especially helpful for researchers interested in recognizing diverse user states such as stress, mental workload, and preference.

LooxidVR pre-orders began on February 1st. For more information, visit our website www.looxidlabs.com and do not miss the pre-order opportunity to enrich your current research and study.

Also, we send out a periodic newsletter on VR trends and VR research. Subscribe if you are interested in receiving it.

Reference

  1. EEG-based Mental Workload and Stress Recognition of Crew Members in Maritime Virtual Simulator: A Case Study | http://ieeexplore.ieee.org/document/8120300/
  2. Human Factors In Safety: How do stress and fatigue affect work? | https://www.pro-sapien.com/blog/2017/10/human-factors-safety-how-stress-fatigue-affect-work/
  3. Workload Assessment | https://www.ergonomicsblog.uk/workload-assessment/



The Virtual Environment-based Adaptive System Helps Children with Autism to Enhance Social Functioning

According to estimates from the Centers for Disease Control and Prevention (CDC)’s Autism and Developmental Disabilities Monitoring (ADDM) Network, about 1 in 68 children has Autism Spectrum Disorder (ASD), a developmental disability that can cause significant social problems, including difficulties communicating and interacting with others. Specifically, children with ASD show impairment in understanding the complex facial emotional expressions of others and are slow to process people’s faces. In other words, they can hardly grasp the context when interacting with people, which may later cause more severe problems in communication.

Unfortunately, little is known about the diagnosis and treatment of ASD; currently there is no cure, only evidence that early intervention services can improve a child’s development. These services refer to therapy that helps the child talk, walk, and interact with others. However, the real obstacle that keeps children with ASD from overcoming social interaction impairments is the lack of access to therapy. The traditional intervention paradigm, which requires a professional therapist to sit next to the child, is out of reach for the vast majority of the ASD population. There are not enough trained therapists to assist all the children in need of help, and even where therapists are available, the intervention costs are burdensome for most households with a child with ASD.

Technology can help children with ASD to overcome social interaction disabilities

There is good news, though. Recent advances in computer and robotic technology are introducing innovative assistive technologies for ASD therapy. Among these emerging technologies, virtual reality (VR) is the most promising, since it has the potential to individualize autism therapy and thereby offer genuinely useful technology-enabled therapeutic systems. Because children with ASD manifest social deficits that vary from one individual to another, it is essential to provide proper help to each of them through personalized therapy, and a VR-based intervention system that keeps track of the child’s mental state can fulfill this need for customization. Moreover, a number of studies have indicated that many children with ASD are fond of advanced technology. This preference suggests that a new intervention paradigm for ASD such as VR can, and should, be well received by children with ASD.

Multimodal Adaptive Social Interaction in Virtual Environment

To the point: this week’s research review covers a new VR-based intervention system, Multimodal Adaptive Social Interaction in Virtual Environment (MASI-VR), for children with ASD. The study presents the design, development, and a usability study of the MASI-VR platform. It first aimed to design a multimodal VR-based social interaction platform that integrates eye gaze, EEG signals, and peripheral psychophysiological signals. The research team then demonstrated the usefulness of the designed system, particularly for an emotional face processing task. Through this review, we hope you get a sense of how a virtual environment-based technological system works as a whole to help improve overall social functioning in autism.

Synthesizing different aspects of a social interaction

The research team designed a VR system that incorporates various aspects of emotional social interaction. The system aims to help children with ASD learn the proper processing of emotional faces.

Fig. 1. System architecture of MASI-VR

It mainly consists of three parts: the VR task engine and dialog management module; the central supervisory controller; and the peripheral interfaces that monitor eye gaze, EEG, and peripheral physiological signals to assess the subject’s affective state. While the central controller synchronizes events between the other two parts, the subject undergoes various social tasks as his or her physiological information is collected and analyzed in real time. These signals then serve as the primary determinant of the next stage within the virtual environment, letting the whole process become individualized.
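
The closed loop this architecture implies, read the latest affective estimate and then adapt the next stage, can be sketched as below. Every name and rule here is hypothetical, meant only to show the control flow rather than MASI-VR’s actual implementation.

```python
# Rough sketch of a supervisory controller loop: pick the next task
# stage from a real-time engagement estimate. Illustrative only.
import random

def read_affective_state():
    """Stand-in for the gaze/EEG/physiology pipeline: returns an
    engagement score in [0, 1]."""
    return random.random()

def next_stage(difficulty, engagement):
    """Individualize the session: ease off when engagement drops,
    push forward when the child is coping well."""
    if engagement < 0.3:
        return max(1, difficulty - 1)
    if engagement > 0.7:
        return difficulty + 1
    return difficulty

difficulty = 1
for trial in range(5):
    engagement = read_affective_state()
    difficulty = next_stage(difficulty, engagement)
    print(f"trial {trial}: engagement={engagement:.2f} -> difficulty {difficulty}")
```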

Fig. 2. Various emotion and gestural animations

To be more specific, seven teenage characters were presented in the virtual environment, and each could change its facial expression among seven emotions (enjoyment, surprise, contempt, sadness, fear, disgust, and anger) in line with the situational context. In the pre-set VR cafeteria environment, the subject wanders around the virtual space and meets one of the characters, who wishes to interact. The subject can then choose whether or not to start a conversation with the avatar. If the subject decides to communicate, various conversational dialog missions take place. After each session, a training trial begins in which the subject practices recognizing the character’s emotional state by observing its facial expression. At the end of each dialog, the character’s face is presented with an oval occlusion. The occlusion gradually disappears following the subject’s gaze, providing adaptive gaze feedback. This process encourages children with ASD to look at the parts of the face that determine emotional state, such as the areas around the eyes and mouth. If the subject pays enough attention to those parts, the face reveals its emotion and the subject is asked to identify what the emotion was.

Fig. 3. The VR cafeteria environment for the social task
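
The adaptive gaze feedback described above amounts to fading an occlusion only while the child fixates the informative facial regions. The following sketch invents the constants and update rule for illustration; the study’s actual feedback logic may differ.

```python
# Sketch of gaze-contingent occlusion fading: the oval over the
# avatar's face becomes more transparent the longer the child looks
# at the regions that carry emotion. Constants are invented.
FADE_PER_SECOND = 0.25      # opacity removed per second of "good" gaze
ROI = {"eyes", "mouth"}     # regions that drive emotion recognition

def update_occlusion(opacity, gazed_region, dt):
    """Fade the occlusion only while gaze is on a relevant region."""
    if gazed_region in ROI:
        opacity = max(0.0, opacity - FADE_PER_SECOND * dt)
    return opacity

opacity = 1.0
# Simulate two seconds of frames in which the child watches the eyes.
for _ in range(120):                   # 120 frames at ~60 fps
    opacity = update_occlusion(opacity, "eyes", dt=1 / 60)
print(f"occlusion opacity after 2 s on the eyes: {opacity:.2f}")
```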

Effectiveness of MASI-VR in improving eventual social functioning

To prove the usability and effectiveness of the gaze-sensitive system, a nearly identical system without gaze feedback was tested by a control group. The performance difference showed that the adaptive system was significantly more helpful in enhancing the subjects’ engagement in the social task as well as their accuracy in recognizing the characters’ facial emotions. In other words, MASI-VR is considerably useful for training the core deficit areas of children with ASD. Though the study is still at a preliminary stage, the findings suggest that a VR-based social interactive environment can be utilized to help improve the eventual social functioning of those with ASD.

LooxidVR monitors eye gaze and EEG in the virtual environment

Now that the effectiveness of multimodal adaptive social interaction in a virtual environment for children with social communication disabilities has been demonstrated, which device should be chosen to further enrich such studies and improve the quality of the therapy?

LooxidVR

In the study, several different devices were used simultaneously to monitor the corresponding physiological signals from the subject. However, installing and setting up all of those devices is inconvenient; it would be best if all the data could be collected and analyzed by a single VR device. Though this sounds like a future dream yet to be realized, there is a device that enables concurrent measurement of a person’s eye gaze and EEG data in VR. LooxidVR, the world’s first mobile VR headset to provide an interface for both the eyes and the brain, allows robust data acquisition through VR-compatible sensors that measure the user’s brain activity and eye movement. Having recently won a Best of Innovation Award at CES 2018, Looxid Labs is ready to provide an integrated solution to anyone interested in exploring the user’s mind. With LooxidVR, the further development of in-person therapy to enhance the social functioning of children with ASD could come true.

LooxidVR pre-orders will start on Feb 1st, 2018. For more information, visit our website www.looxidlabs.com and do not miss the pre-order opportunity to enrich your current research and study.

Also, we send out a periodic newsletter on VR trends and VR research. Subscribe if you are interested in receiving it.

Reference

  1. Multimodal adaptive social interaction in virtual environment (MASI-VR) for children with Autism spectrum disorders (ASD) | Virtual Reality (VR), 2016 IEEE
  2. Autism Spectrum Disorder (ASD) | Centers for Disease Control and Prevention


Science Stuff You Shouldn’t Miss in The Big Bang Theory: The Yerkes and Dodson Law


A Little Anxiety Won’t Kill You, It Will Make You Stronger

Imagine the sound that irritates you the most: nails on a chalkboard, a baby crying, or (for some people) Taylor Swift’s music. While it may be frustrating, that stressful feeling may actually improve your performance. The idea is not new, though; it also showed up in the beloved sitcom The Big Bang Theory when Sheldon, a renowned physicist, tries to find his optimal anxiety level.

In The Big Bang Theory season 8, episode 13, Sheldon gets stuck in his work on dark matter and wants to make himself more efficient. He tries to optimize his work environment but sees no progress, and he concludes that he has created too pleasant an environment to work in. So instead of staying in his comfort zone, he decides he should increase his anxiety level, and he seeks the help of his girlfriend Amy, who happens to be a neuroscientist.

Sheldon: According to a classic psychological experiment by Yerkes and Dodson, in order to maximize performance, one must create a state of productive anxiety.

They begin the experiment by first measuring the baseline of his brain activity and then by basically ‘making Sheldon irritated’ while he wears an EEG cap. For instance, while Sheldon solves a maze, Amy makes squeaky noises by rubbing a balloon. Finding the sound intolerable, Sheldon ends up popping the balloon and saying that he was aiming for her heart. The experiment eventually fails as Sheldon vetoes every suggestion Amy makes.

Amy: Look, I know you don’t like it, but that’s the point of the experiment. I need to irritate you to find your optimal anxiety zone. And you said no to tickling, polka music or watching me eat a banana.

At this point, one might wonder whether this experiment has solid grounds. So we delved into the experiment by Yerkes and Dodson, and the answer was yes: finding one’s optimal anxiety level is useful for increasing work productivity. The actual experiment was done somewhat differently than Sheldon and Amy’s, though.

Hebbian version of the Yerkes–Dodson law

Above all, the biggest difference was that their experiment was based on the behavior of rats rather than humans. In the experiment, rats were put in a maze with only one correct escape route, and whenever they took a wrong route, for instance entering a box through a white door, they received electric shocks (brutal, right?). In the end, Yerkes and Dodson discovered that while increasing the voltage made the rats perform faster and better, past a certain point the rats started to slow down, freeze, or retreat. This showed how a certain level of stress can become a motivation and increase an individual’s performance, though the optimal level varies between individuals. Likewise, measuring stress and anxiety levels can bring meaningful insights to research.
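
The law is usually drawn as an inverted U relating arousal to performance. One stylized way to write it, our illustration rather than anything Yerkes and Dodson published, is a quadratic with a peak at the optimal arousal level:

```latex
% Stylized inverted-U model of the Yerkes-Dodson law (illustrative):
% performance P(a) peaks at the optimal arousal level a*, and the
% curvature k sets how sharply performance falls off on either side.
P(a) = P_{\max} - k\,(a - a^{*})^{2}, \qquad k > 0
```

Under such a model, mild stressors move you up the left side of the curve, while stressors past the optimum (the shocks that made the rats freeze) push performance back down.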

Why LooxidVR?

LooxidVR | CES 2018 Best of Innovation in VR

Looxid Labs’ LooxidVR proved its potential for psychology and neuroscience research at this year’s CES. The VR headset, combined with EEG sensors and eye-tracking cameras, has the potential to become a major research kit that offers both portability and efficacy. Instead of manually irritating Sheldon by rubbing balloons and eating bananas, Amy could simply have put him in a VR environment where he could be fully immersed in the experiment while she measured his stress level with the EEG sensors attached to the headset. So if you are a psychologist or a neuroscientist like Amy, consider enriching your experiments with this award-winning research kit.

LooxidVR pre-orders will start on Feb 1st, 2018. If you want to learn more about LooxidVR and Looxid Labs, feel free to visit our website at www.looxidlabs.com.

Also, we send out a periodic newsletter on VR trends and VR research. Subscribe if you are interested in receiving it.


The combined use of VR and EEG: An effective tool to understand daily language comprehension

Source: SD Times

So far, we have introduced several case studies, drawing on neuroscience and psychology research articles, about how research in education, marketing, healthcare, and gaming can be deepened by combining human physiological data, such as electroencephalography (EEG) and pupil information, with VR. This is not only about broadening the realm of brain-computer interfaces; the core value is enriching human life through a better understanding of physiological signals. From creating adaptive educational content that encourages students’ full engagement, to neurofeedback therapy for patients with PTSD or ADHD, to the physiological data-driven individualized marketing widely used in the game industry, the applicability of neuroscience to education, marketing, medical science, and more has already been proved. However, this is not the end. Recently, the validity of combining EEG with VR to study language processing in naturalistic environments was confirmed.

The importance of building contextually-rich realistic environments

As we all know intuitively from everyday communication, context plays a crucial role in language processing. Moreover, visual cues, along with auditory stimuli, significantly help our brains process meaningful information during any kind of human-to-human interaction. Consequently, realistic models of language comprehension must be developed to understand language processing in contextually rich environments. Nevertheless, researchers in this field have struggled to design their research environments: it is tricky to set up a naturalistic environment that resembles everyday settings while retaining enough control over both linguistic and non-linguistic information, however contextually rich the situation. Anyone who has tackled this issue should pay attention to this article, because the combination of VR and EEG could be the solution. This week’s review covers “The combined use of virtual reality and EEG to study language processing in naturalistic environments.” By combining VR and EEG, strictly controlled experiments in more naturalistic environments are comfortably within reach, offering a clearer understanding of how we process language.

VR to enhance the realism of your experiment

To start with, why should VR be used to design your experiment? As well described in many sources, a virtual environment is a space where people can have sensory experiences much like those in the real world, and where the user’s every action can be tracked in real time. Accordingly, VR’s fundamental strength is allowing researchers to achieve an increased level of validity in a study while retaining full experimental control. EEG combined with VR therefore makes it possible to correlate humans’ physiological signals with their every movement in the designed environment. This combination has already been used successfully to study users’ driving behavior, spatial navigation, spatial presence, and more.

Why not extend this methodology further, into studying language processing? Some of you might doubt whether humans’ natural behavior can be examined well in a virtual environment. Since every line of conversation in VR is an artificial voice, it might be hard for people to become fully engaged in interactions inside VR. In other words, there are skeptical views that Human-Computer Interaction (HCI) and Human-Human Interaction (HHI) differ so much that VR is only suitable for studying HCI. However, this turns out to be a needless worry. Heyselaar, Hagoort, and Segaert (2017) showed experimentally that the way people adapt their speech rate and pitch to an interlocutor does not differ between a virtual interlocutor and a human one. This strongly implies that observing language processing in a virtual environment is a plausible way to understand language processing in real life.

Observing the N400 response in a VR setting

Johanne Tromp and colleagues conducted an experiment to validate the combined use of VR and EEG as a tool for studying the neurophysiological mechanisms of language processing and comprehension. They set out to prove this validity by showing that the N400 response occurs similarly in a virtual environment. The N400 is an event-related potential (ERP) component that peaks around 400 ms after a critical stimulus; previous studies in traditional settings have found that incongruence between spoken and visual stimuli elicits an enhanced N400. The research team therefore set up situations containing mismatches between verbal and visual stimuli and analyzed brainwaves to observe the N400 response.

In the experiment, a total of 25 people were placed in a virtual restaurant designed in Vizard, a virtual reality development platform, with eight tables in a row and a virtual guest sitting at each table. The participants were moved from table to table following a preprogrammed procedure. The materials consisted of 80 objects and 96 sentences (80 experimental sentences and 16 fillers). All were relevant to the restaurant setting, but only half of the object-sentence pairs were semantically matched. For instance, if there is a salmon dish on the table and the virtual guest at the table says, “I just ordered this salmon,” the pair is matched; if the sentence paired with the salmon is “I just ordered this pasta,” the two are mismatched. Each participant experienced match and mismatch situations at equal rates and made 12 rounds through the restaurant over the course of the experiment. At the end of the trial, participants answered two questions assessing whether they had paid attention during the trial and how they perceived the virtual agents.

Fig. 1: Screenshot of the virtual environment

EEG was recorded from 59 active electrodes during all rounds of the experiment. Epochs from 100 ms before the onset of the critical noun to 1200 ms after it were selected, and ERPs were calculated and analyzed per participant and condition in three time windows: the N400 window (350–600 ms), an earlier window (250–350 ms), and a later window (600–800 ms). Finally, repeated-measures analyses of variance (ANOVAs) were performed for the three predetermined time windows, with condition (match, mismatch), region (vertical midline, left anterior, right anterior, left posterior, right posterior), and electrode as factors.
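
To make the analysis concrete, the sketch below reproduces its skeleton, epoch averaging per condition and mean amplitudes in the three windows, on synthetic single-electrode data. Real pipelines would typically use a dedicated library such as MNE-Python and all 59 electrodes; every value here is invented.

```python
# Minimal ERP sketch: average epochs per condition, then take mean
# amplitudes in the three analysis windows. Synthetic data only.
import numpy as np

SFREQ = 500                                # assumed sampling rate (Hz)
TMIN = -0.1                                # epoch start relative to noun onset (s)
N_TRIALS, N_SAMPLES = 40, int((1.2 - TMIN) * SFREQ)

rng = np.random.default_rng(2)
# Mock single-electrode epochs (trials x samples) for both conditions.
match = rng.standard_normal((N_TRIALS, N_SAMPLES))
mismatch = rng.standard_normal((N_TRIALS, N_SAMPLES))
times = TMIN + np.arange(N_SAMPLES) / SFREQ
# Inject an artificial negativity into the mismatch condition so the
# "N400 effect" is visible in the output.
mismatch[:, (times > 0.35) & (times < 0.6)] -= 2.0

def window_mean(epochs, lo, hi):
    """Mean amplitude of the trial-averaged ERP between lo and hi seconds."""
    erp = epochs.mean(axis=0)
    return erp[(times >= lo) & (times <= hi)].mean()

for name, (lo, hi) in {"early": (0.25, 0.35),
                       "N400": (0.35, 0.60),
                       "late": (0.60, 0.80)}.items():
    diff = window_mean(mismatch, lo, hi) - window_mean(match, lo, hi)
    print(f"{name} window ({lo}-{hi}s): mismatch - match = {diff:+.2f} uV")
```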

The results are shown in Fig. 2: ERPs were more negative for the mismatch condition than for the match condition in all time windows, and the difference was particularly significant during the N400 window. That is to say, the N400 response was observed in line with predictions, supporting the conclusion that VR and EEG combined can be used to study language comprehension.

Fig. 2: Grand-average waveforms time-locked to the onset of the critical nouns in the match and mismatch conditions. The topographic plots display the voltage differences between the two conditions (mismatch — match) in the three different time windows

Remaining problem: The use of two separate devices

Nevertheless, this study still has shortcomings stemming from its use of two separate devices, an EEG cap and a VR helmet, at the same time. As the head-mounted display (HMD) must fit tightly around the user’s head, it is challenging and burdensome to wear an EEG cap simultaneously. Besides, with an EEG cap that is sensitive to movement, it is hard to realize the full potential of a virtual environment, in which people’s dynamic interactions and actions should take place. In fact, this limitation is a real bottleneck that keeps the experiment far from a realistic setting.

Solution: The All-in-one device with VR compatible sensor

LooxidVR

Is there any silver bullet to break through this barrier? Here it is: the problem described above can be fully solved with an all-in-one device equipped with VR-compatible sensors. The solution is LooxidVR. Having recently won a Best of Innovation Award at CES 2018, Looxid Labs has introduced a system that integrates two eye-tracking cameras and six EEG brainwave sensors into a phone-based VR headset. With LooxidVR, collecting and analyzing human physiological data while users interact with a fully immersive environment becomes possible.

LooxidVR pre-orders will start on Feb 1st, 2018. Visit our website www.looxidlabs.com and keep track of our latest news. Catch the pre-order opportunity and enrich your current research and study.

Also, we send out a periodic newsletter on VR trends and VR research. Subscribe if you are interested in receiving it.

Reference

  1. The combined use of virtual reality and EEG to study language processing in naturalistic environments | Behavior Research Methods
  2. Looxid Labs’ brain-monitoring VR headset could be invaluable for therapy | Engadget
  3. N400 | Scholarpedia
