
What Happens When Artificial Intelligence Can Read Our Emotions in Virtual Reality

Apple: Animoji

Being surrounded by machines that understand our emotions is one of many ‘what ifs’ that is a little creepy even to think about. Don’t be surprised, though: thanks to technological advances, we will reach that future sooner or later. The question is how.

How does a machine ‘sense’ our emotion?

At Apple’s September keynote, the iPhone X showed off its slick design to the world for the first time, and iPhone lovers couldn’t help but shout “hooray!” with enthusiasm. What unexpectedly caught people’s eyes, though, was Animoji: a dozen different animal emojis that mirror users’ facial expressions and can be shared with others. Animoji is certainly fun, but what does it really mean for our communication in a digital world?

Nowadays, an overwhelming amount of human-to-human communication happens every second across different digital platforms, yet it is quite often devoid of the essence of human nature: emotion. To facilitate machine-mediated communication, many tech giants are spending a great deal of time and effort on finding sensors that can empower digital machines to interpret our emotions. For smartphones at least, since we take pictures and talk on the phone on a daily basis, it comes naturally to engineers to use the camera (facial recognition) and microphone (virtual assistants―Siri, Google Assistant, or Amazon Alexa) to ‘sense’ our emotions.

What about in VR?

Facebook Social VR

Social Virtual Reality (VR) is an emerging digital platform that offers a virtual space where people, through their avatars, can interact with one another. But how do we add an emotional texture to VR? That question brings us to the Massachusetts Institute of Technology (MIT) Media Lab.

A: circuit board with Bluetooth connection; B: PPG sensor; C: GSR electrode

The MIT Media Lab decided to add an extra layer of emotional skin to a virtual avatar. The researchers created an ‘emotional beast’ in VR that changes its appearance in response to a user’s emotional state. To detect a user’s emotion in VR, the team integrated a physiological sensing module into the mask of a VR headset, including electrodes―for galvanic skin response (GSR) data collection―and photoplethysmogram (PPG) sensors―for heart rate data collection. GSR data reflects a user’s emotional arousal, but it is not enough to determine whether a user is aroused positively or negatively. Thus, a PPG sensor―using light to track the rate of blood flow and gauge a user’s anxiety and stress levels (negative arousal)―is needed to complement the GSR data. These physiological sensors act as a medium for emotion recognition, just as the camera and microphone do in smartphones.
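
To make the division of labor concrete, here is a minimal sketch, ours rather than MIT’s, of how the two streams might be combined. The function names, thresholds, and the naive peak detector are all illustrative assumptions:

```python
import numpy as np

def mean_heart_rate(ppg, fs):
    """Estimate heart rate (BPM) from a raw PPG trace by naive peak counting.
    Real pipelines would band-pass filter the signal first."""
    above = ppg > ppg.mean()
    # A peak is a sample larger than both neighbours and above the mean.
    peaks = (ppg[1:-1] > ppg[:-2]) & (ppg[1:-1] > ppg[2:]) & above[1:-1]
    return 60.0 * peaks.sum() / (len(ppg) / fs)

def classify_emotion(gsr, heart_rate_bpm, gsr_baseline, hr_baseline):
    """Toy decision rule: GSR above baseline marks arousal (GSR alone cannot
    give valence); an elevated heart rate reads the arousal as negative."""
    aroused = gsr.mean() > 1.2 * gsr_baseline          # 20% over resting level
    if not aroused:
        return "neutral"
    return "stressed" if heart_rate_bpm > 1.1 * hr_baseline else "excited"
```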

The researchers crafted two types of ‘emotional beasts’: fur-based and particle-based.

A fur-based emotional beast

The fur-based ‘emotional beast’ contracts and grows its fur to visually express the happiness of a user. Based on Lang’s model, the team evaluated the four emotional states on a scale of 0 to 1. The fur-based beast grows its fur to full length if the evaluated emotion is ‘happy’, whereas the fur stays within the inner skin, leaving the outer skin smooth, if the emotion is evaluated as ‘neutral’.

A particle-based emotional beast

The particle-based ‘emotional beast’, on the other hand, takes two variables into account: brightness and color. The arousal level of a user is estimated on a scale of 0 to 1. At a high arousal level the particles light up, while at a neutral state they are almost invisible. In a similar manner, a user can express frustration and anger to other avatars by shifting the color of the particles from blue to red.
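
A toy sketch of both mappings, assuming the paper’s 0-to-1 emotion scores; the exact curves and constants below are our invention, not the Media Lab’s:

```python
def clamp01(x):
    return max(0.0, min(1.0, x))

def fur_length(happiness, max_length=1.0):
    """Fur-based beast: fur is hidden inside the inner skin at 'neutral' (0)
    and grows to full length as the 'happy' score approaches 1."""
    return max_length * clamp01(happiness)

def particle_appearance(arousal, negativity):
    """Particle-based beast: arousal drives brightness (near-invisible when
    neutral); negativity shifts particle colour from blue toward red."""
    brightness = clamp01(arousal)
    rgb = (clamp01(negativity), 0.0, 1.0 - clamp01(negativity))
    return brightness, rgb
```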

Indeed, the MIT Media Lab has crafted visually scintillating artwork. These colorful and vibrant creatures enable users to express their emotions in the most vivid way possible, elevating VR from a surface-level experience to an emotional human-to-human interaction (see the video here).

How Can Emotion AI Revolutionize VR?

What works behind the ‘emotional beast’ is a machine learning algorithm. The researchers let the system learn from the physiological data sets and predict a person’s emotional states. Without this step, GSR and PPG data are just a bunch of numbers that tell us nothing. In fact, any system that detects emotion from user-provided data necessarily involves a machine learning process.
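
As an illustration only, not the researchers’ actual pipeline, a classifier over hand-crafted GSR/PPG features might look like the following scikit-learn sketch; the feature choices and sample values are hypothetical:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training windows with summary features extracted from the
# raw streams: [mean GSR, GSR peaks/min, mean heart rate, HRV (ms)].
X = np.array([
    [0.31, 2.0, 64.0, 55.0],
    [0.74, 9.0, 92.0, 22.0],
    [0.68, 7.0, 88.0, 30.0],
])  # in practice: hundreds of labelled windows per participant
y = np.array(["neutral", "negative", "positive"])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
print(clf.predict([[0.70, 8.0, 90.0, 25.0]]))  # e.g. ['negative']
```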

Although the “emotional beast” project has successfully shown how emotion detection technology can be used in VR, human-to-human communication within VR may come to seem like a small matter once Artificial Intelligence (AI) enters the picture―because VR coupled with Emotion AI will eventually touch every part of our lives and raise so many ‘what ifs’.

“What if AI can gauge your preference towards all the products you’ve seen in a virtual shopping mall and then suggest a purchase list of the preferred products or even automatically purchase them for you?”

“What if AI can measure the concentration and excitement level of a middle school student listening to a lecture in VR and come up with a customized curriculum specifically for that student?”

“What if…”

These ‘what if’ scenarios of AI reading our emotions will no longer remain a creepy pipe dream.

Reference

  1. Emotional Beasts: Visually Expressing Emotions through Avatars in VR
  2. Apple: Animoji


Measuring the Power of VR Education: When VR Classroom Needs EEG and Eye-tracking Technology


Since Mark Zuckerberg opened Facebook’s door to Oculus’ VR technology, there has been a growing trend over the last couple of years toward the use of VR in business, including learning.

Image Credit: https://www.digitalbodies.net/vr-ar-education-focus-strategic-vision-implementation/

Recently, Oculus Education announced support for several research institutions, including Cornell, MIT, and Yale, to better understand how VR can have the greatest impact on learning outcomes. In particular, the Oculus Education team sponsors new research that pinpoints and maximizes VR’s educational potential by investigating which properties of VR may have the greatest impact on learning, under which conditions, and in what subjects and environments across academia, secondary and university-level education, lifelong learning, and more.

Image Credit: Yale Center for Health & Learning Games

As a kickoff project, play4REAL, a new lab at Yale’s Center for Health & Learning Games, will develop and test VR games for health education and behavioral intervention, investigating VR’s ability to influence the perception and experience of peer pressure and the development of social norms in adolescents and young adults. MIT will develop, pilot, test, and evaluate a proof-of-concept multiplayer VR experience for high school students to understand the impact of VR on hands-on learning. Cornell’s Virtual Embodiment Lab will measure conceptual understanding, attitudes, and motivation while comparing the effectiveness of learning through table-top activities, computer simulations, and immersive, hands-on simulation in VR.

Image Credit: http://www.apps4oculus.com/vr-a-revolution-in-field-of-education/

While statistics on VR in K-12 schools and colleges have yet to be gathered, the steady growth of the market reflects that education is one of the most exciting use cases for this emerging technology: compared with traditional education, VR adds a tangible reality to the hard sciences — biology, anatomy, geology and astronomy. According to Jeremy Bailenson, who heads the VR lab at Stanford University, experiments show that students pay more attention to a lecturer if the lecturer looks them in the eye. While a lecturer can look at each student only one to two percent of the time in a traditional classroom of 50 students, the virtual image of the lecturer can raise that gaze to any percentage the user wants.

Image Credit: 3 Tips to Successfully Create Virtual Field Trips in Your Classroom | The Journal (By Cincy Wallace)

Thanks to VR’s capacity to represent real-life events and situations, there is growing empirical evidence that VR is a valuable learning tool. However, many issues still need further investigation, including how its use improves the intended performance and understanding, and how to make learning with the technology more effective. For instance, a student’s engagement level should be measured while wearing a VR headset to help assess the effectiveness of VR, because the immersion or engagement offered by VR is critical to its effectiveness.

LooxidVR, All-in-one mobile VR headset embedded with EEG and eye-tracking sensors

Yet how immersive or engaging is VR education, really? This is where eye-tracking and EEG measurement come in, quantifying the student’s experience without causing distraction or discomfort. Looxid Labs’ all-in-one VR headset, embedded with EEG and eye-tracking sensors, can record the student’s physiological information without placing any extra cognitive load on the student. With both sensors combined, we can build a robust measurement of the level of immersion or engagement the student is experiencing, and understand how their brain and eyes respond. Potentially, a collaboration between this innovative VR platform and EEG and eye-tracking technology can be the key to achieving the greatest impact on VR learning outcomes.
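
As one hedged illustration of how EEG could quantify engagement, the sketch below computes the classic beta/(alpha + theta) ratio from the engagement-index literature. The sampling rate and band edges are assumptions, and this is not necessarily how LooxidVR computes its metrics:

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, lo, hi):
    """Average spectral power of one EEG channel in the [lo, hi) Hz band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

def engagement_index(eeg, fs=250):
    """Classic beta / (alpha + theta) ratio from the engagement literature;
    higher values are commonly read as stronger task engagement."""
    theta = band_power(eeg, fs, 4, 8)
    alpha = band_power(eeg, fs, 8, 13)
    beta = band_power(eeg, fs, 13, 30)
    return beta / (alpha + theta)

# e.g. score each 10 s window from a frontal channel while the student
# works through a VR lesson, and log it alongside gaze fixations.
```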

Reference

  1. Oculus Education Partners with Research Institutions to Explore VR’s Impact on Learning Outcomes | Oculus Blog
  2. Effectiveness of Virtual Reality-based Instruction on Students’ Learning Outcomes in K-12 and Higher Education: A meta-analysis
  3. Harvard University will Teach a MOOC in Virtual Reality
  4. Research Study Suggests VR can Have a Huge Impact in the Classroom


Eye and Brain Analyses Help Stave Off The Dangers of Self-Report


Have you ever given others the benefit of the doubt? If you have, on what grounds? Their facial expression? Their gesture? Their tone?

Limitation of Self-Report

Researchers, at least, usually put their trust in statements and numbers, mostly self-report surveys and questionnaires. Self-report is a classic method of gathering data, but it is also one of the methodologies most frequently questioned for its reliability. In fact, there are quite solid arguments for questioning the validity of self-report:

  • Participants are not always truthful

Imagine you are asked to fill out a questionnaire on your drug use, suicidal impulse or sexual tendency. Would you be 100% honest about it?

  • Participants may not necessarily have a high introspective ability

Most people find it difficult to assess their feelings and thinking accurately and thoroughly.

  • Interpretation of rating points varies

Though more insightful than a yes-or-no question, a scale of 0–100 to rate your state of mind, for example, challenges you to “chop” your mental states into exactly 100 pieces and hand in the best representation of yourself. Even worse, everyone “chops” it in different ways.

So what can you do about it then?

More Objective and Quantitative Measures: Physiological Responses

The most desirable solution is to scrutinize every “move” people make that is too subtle to be noticed by an observer, or even by the observed. Tracking such subtle “moves”, however difficult it may seem at first sight, is not impossible with the help of physiological measures. People can hardly control their involuntary and spontaneous responses or manipulate their physiological activity at a particular moment. Therefore, compared to the self-report method, physiological responses are more objective and quantitative.

There is a variety of physiological indicators that have been frequently employed in research: electromyography (EMG) — electrical activity produced by skeletal muscles; galvanic skin response (GSR) — changes in electrical properties of the skin; electrocardiogram (ECG) — electrical activity of the heart; etc. They can bring about observations and insights that would have been difficult to capture otherwise, making up for the deficiency in the validity of subjective measures.

Clearly seeing the potential in physiological measures, one study decided to opt for electroencephalography (EEG) and eye-tracking techniques to measure cognitive load and compare self-report and physiological methods.

Measuring Cognitive Load: A Comparison of Self-report and Physiological Methods

This study compared three methods — self-report, EEG, and eye tracking — for measuring the cognitive load of solving puzzles with four different levels of difficulty (intrinsic cognitive load). The participants were instructed to solve four puzzles of increasing difficulty, from Puzzle 1 to Puzzle 4, and were fitted with an eye-tracking device and an EEG headset during the experiment. The experiment was sequenced in the following order:

  • The operation span task (working memory capacity — recalling the consonant in between mathematical problems — and spatial visualization — paper-folding test)
  • Participant data survey (demographics, vision issues, prior knowledge, etc.)
  • Practice Puzzles 1, 2, and 3
  • Cognitive Load and Puzzle Self-Efficacy Survey (a 9-point response scale for the difficulty level)
  • Problem-Solving Puzzles 1, 2, 3, and 4, presented in a random order for each participant with Cognitive Load and Puzzle Self-Efficacy Survey in between each puzzle problem.
  • The exit survey
Table 1. Self-report Ratings of Cognitive Load (left) vs. Confusion Matrix for EEG Spectral Features (right)

The study first explored the correlation between the self-report ratings of cognitive load and the difficulty of puzzle (intrinsic cognitive load). As indicated in Table 1, the participants self-reported higher cognitive load on average as the intrinsic cognitive load increased.

Figure 1. The process by which the difficulty level of the puzzles and the self-report difficulty ratings for each puzzle are predicted from physiological data.

How about EEG? Based on the cognitive science literature indicating that alpha waves decrease and theta waves increase as a task becomes more difficult, a spectral analysis of the EEG (Figure 1) was carried out to differentiate the levels of the tasks. In Table 1, note that the algorithm did not classify any Puzzle 1 or 2 samples as Puzzle 4, showing quite accurate classification for the first two puzzles. EEG analysis also predicted the difficulty level of Puzzle 3 with a high accuracy of 71%. However, the algorithm failed to distinguish Puzzle 3 from Puzzle 4 samples when the observed puzzle was Puzzle 4.
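
To make the classification step concrete, here is a toy reconstruction, not the study’s actual code, that predicts puzzle difficulty from relative alpha and theta power and prints a Table 1-style confusion matrix; every feature value is invented for illustration:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import cross_val_predict

# Invented per-trial features: [relative alpha power, relative theta power].
# Alpha should fall and theta rise as the puzzles get harder.
X = np.array([[0.42, 0.18], [0.40, 0.20],   # Puzzle 1 trials
              [0.35, 0.24], [0.33, 0.26],   # Puzzle 2 trials
              [0.28, 0.31], [0.27, 0.33],   # Puzzle 3 trials
              [0.22, 0.37], [0.21, 0.39]])  # Puzzle 4 trials
y = np.array([1, 1, 2, 2, 3, 3, 4, 4])      # true difficulty labels

pred = cross_val_predict(SVC(kernel="linear"), X, y, cv=2)
print(confusion_matrix(y, pred))  # rows: observed level, cols: predicted
```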

Let’s compare the two results. The EEG data appeared to distinguish Puzzle 2 from Puzzle 3 better than the average self-report cognitive load ratings did: there is no statistically significant difference in mean self-report ratings between Puzzle 2 (5.19) and Puzzle 3 (5.28). However, neither method successfully distinguished the difficulty levels of Puzzles 3 and 4. Overall, by better telling the puzzles apart, the EEG analyses evaluated the participants’ cognitive load more accurately than the self-reported ratings did.

A little digression here. You may wonder why eye-tracking techniques are not discussed in the results. Based on the literature, the study initially hypothesized that pupil dilation increases for a complex task compared to an easy one. However, the experimenters acknowledged in the end that subtle changes in pupil and eye movement data were difficult to detect due to the low sampling rate of the eye-tracking device, and that the cognitive load imposed during a puzzle task fluctuated over its duration, so capturing changes at every moment was of limited use.

Big Opportunities At Stake

Although the study misses the opportunity to explore the potential of eye-tracking technology, it successfully demonstrated that physiological measures can serve as an alternative, or at least a supplement, to self-report measures.

Yes, the self-report method has its shortcomings, but its importance should not be undermined. It is ideal for observing trends across large samples and is an unobtrusive way of acquiring responses without too much hassle. However, if a research topic demands more objective analysis that is unfathomable through self-report and is confined to a small sample size, then self-report loses its effectiveness. So, it depends on the kind of research.

Nonetheless, with respect to evaluating cognitive abilities and mental states, physiological methods are unparalleled. In education, for instance, teachers can use physiological measurements to assess and improve students’ learning. Companies can acquire clients’ authentic feedback and improve their products and services. Doctors can treat post-traumatic stress disorder patients with a comprehensive assessment of recovery. There is huge room for application.

On a lighter note, there is a recent post on our Medium about research in virtual environments (Virtual Reality: Elevate Your Research From Mediocrity to Greatness). It highlights the advantages of integrating virtual reality into a study: the high ecological validity of virtual reality can deliver a lifelike experience that evokes participants’ genuine reactions, such as goosebumps triggered by a phobia. Needless to say, virtual reality creates a powerful synergy with physiological methods. Just something to think about!




Dare To Explore: VR Helps You Conquer Your Fears

Samsung VR Headsets Help Millennials Overcome Their Fears in Persuasive News Ads, Adweek (By Gabriel Beltrone)

What are you afraid of? We all like to think we’re unique, but when it comes to fear, some fears are far more common than others. According to a study published in the journal Psychological Medicine, birds, insects, and other animals topped the list of common fears in an enormous survey of 43,093 US adults, followed by mountains, tall buildings, bridges, and other heights. Some fears are universal and innate responses to potential danger, such as the frightening experience of storms, thunder, and lightning. However, other extreme or irrational fears of objects or situations, known as phobias, trigger entirely different reactions, including a racing heart, sweating, trembling, and chest pain. Since phobias serve no evolutionary purpose of avoiding danger, they are considered a form of mental illness, a subtype of anxiety disorder.

Two out of every 100 people have five fears or more. (Illustration: Mona Chalabi)

For phobias, facing a specific fear in a gradual and consistent manner is the most effective and common treatment, called exposure therapy. Since Facebook bought Oculus for $2 billion in March 2014, VR has been on the verge of treating phobias by placing patients in a virtual world where they experience specific fears, including heights, elevators, thunderstorms, public speaking, and flying, with the promise of greater immersion and more realism. According to Chris Brewin, a professor of clinical psychology at University College London, the potential of VR to treat phobias and fears is huge. In VR exposure therapy, patients are placed in a computer-generated three-dimensional virtual world and guided through the selected environment. Unlike the real environment of standard exposure therapy, the virtual environment allows the therapist ultimate control over each patient’s simulation.

We have seen the future: Keanu Reeves in The Matrix Reloaded. (Photograph: Allstar/ Warner Bros/ Sportsphoto Ltd)

Samsung’s ‘Be Fearless’ Gear VR campaign is one of the most impressive use cases, helping people face and overcome their fears of heights and public speaking. In the campaign, Samsung gave 27 people a four-week training program delivered with Gear VR before offering them the chance to face their fear in real life. The participants were taken through virtual scenarios, from travelling upwards in a transparent elevator to heli-skiing. Before advancing to higher levels of difficulty, they had to pass a scientific evaluation based on heart rate, eye movement, and self-assessed anxiety levels. According to Samsung, this training helped 87.5% of the group afraid of heights reduce their anxiety level by 23.6%.


Although the clinical use of VR is in its infancy, VR therapy has slowly but surely been making its way into US clinics for years, particularly to treat veterans returning from Iraq and Afghanistan. A study published in Advances in Integrative Medicine reported that VR therapy significantly reduces the severity of PTSD symptoms and results in rapid extinction. The findings also suggested combining VR and EEG biofeedback as a potential treatment for stress-related disorders, because real-time neurophysiological data such as serum cortisol levels, heart rate variability, and mid-frontal alpha EEG asymmetry may provide useful inputs for adjusting VR exposure therapy protocols to enhance stress resilience or accelerate treatment response.
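
A minimal sketch of what such a biofeedback rule could look like; the thresholds, variable names, and back-off policy are our assumptions, not the study’s protocol:

```python
def adjust_exposure(level, hrv_ms, alpha_asymmetry,
                    hrv_floor=40.0, asym_floor=-0.1):
    """Escalate the VR scenario only while physiology stays in a comfort
    envelope; back off one step when acute stress is detected.

    hrv_ms          -- heart-rate variability (e.g. RMSSD in ms); low = stress
    alpha_asymmetry -- mid-frontal alpha asymmetry; strongly negative values
                       are often read as withdrawal / negative affect
    """
    if hrv_ms < hrv_floor or alpha_asymmetry < asym_floor:
        return max(1, level - 1)   # patient is struggling: ease off
    return level + 1               # tolerated well: increase intensity

# Called once per session block with the latest sensor summaries:
# level = adjust_exposure(level, hrv_ms=55.0, alpha_asymmetry=0.02)
```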

Big Idea of 2015: Healing with Virtual Reality, PBS. org (by Allison Eck)

A little bit of fear is normal and sometimes useful, but phobias can interfere with an individual’s ability to lead a normal life. Is fear holding you back? Let VR and EEG biofeedback train you to overcome your phobias.

Reference

  1. Halloween scare: what are the most common phobias?
  2. Our Most Common Fears
  3. Can virtual reality cure phobias?
  4. Samsung’s ‘Be Fearless’ Gear VR campaign combats fear of heights
  5. Virtual Reality Therapy for PTSD in the military


Virtual Reality: Elevate Your Research from Mediocrity to Greatness


“You open your eyes and find out the electricity in the building has been cut off. Then, you hear an emergency evacuation message over the public address system: ‘Fire has broken out. Please evacuate the building immediately.’ While the sound of the emergency alarm keeps ringing in your ears, you follow the emergency signs to exit the building. You keep coughing and barely see what is going on beyond the smoke. After a while, you finally see the exit but encounter an injured person, trapped under a huge cabinet, asking for your help. Facing an agonizing dilemma, you feel light-headed and hear your heart pounding.”

Patil, Indrajeet, “Neuroanatomical Basis of Concern-Based Altruism in Virtual Environment”, 2017

What would you do? Risk your own life to rescue the person or continue on your way pretending not to have seen anything?

Profound Insights into Human Nature

What you just experienced through that text is a virtual-environment-based scenario constructed by a team of neuroscientists in Italy to study costly altruism, which entails helping others at a cost to oneself.

Prior work in this field suggests that empathic concern (EC), a feeling of compassion or sympathy towards someone in need, is a primary motivation driving costly altruism. However, existing research fails to effectively evoke self-relevance and underscore the “costly” aspect of altruism. This team of neuroscientists argues that virtual reality (VR), with its contextually rich settings, possesses a high degree of ecological validity and can therefore significantly improve upon extant research.

In the aforementioned virtual environment, by pressing a button 150 times, each participant is able to move the cabinet away and rescue the avatar, one of four computer-controlled avatars presented as being controlled by another participant in another place, and one that each participant had previously interacted with in a virtual room before the fire broke out. In the debriefing session, none of the participants reported suspecting that the avatars were computer-controlled, showing that EC can be effectively triggered even in a virtual world.

Not only were the experimenters able to construct a dilemma with high ecological validity, they were also able to elicit more genuine and visceral responses by imbuing a sense of presence via VR: the results show that participants were less likely to demonstrate costly altruism in the virtual setting (65%) than in a hypothetical text-based scenario (91%).

As in this research, VR can provide a mediated yet immersive environment that offers profound insights in fields of human study such as psychology, cognitive neuroscience, and behavioral science. An even more life-changing innovation that VR brings to research is its ability to alter an individual’s behavior and cognition.

Cognitive and Behavioral Transformation

At University College London, an interesting psychological study on overcoming excessive self-criticism was conducted in VR. The research team recruited female participants with excessive self-criticism. To begin with, each participant is instructed to wear a head-tracked head-mounted display and a body-tracking suit so that her virtual body spatially coincides with her real body. The VR session of the experiment is composed of four stages.

Falconer, Caroline J., “Embodying Compassion: A Virtual Reality Paradigm for Overcoming Excessive Self-Criticism”, 2014

The first stage (Image A) allows each participant to get accustomed to the virtual environment and her virtual body by making gestures, looking around her surroundings, and looking at her own avatar in a mirror — all of which are designed to intensify the sense of embodiment.

When each participant enters the second stage, she sees, from the first-person perspective of her own avatar, a seated child avatar crying into its hands (Image B). She is then instructed to deliver compassionate comments to soothe the crying child. As she calms the child down, the child is programmed to respond in stages, from crying into its hands to sitting upright and raising its head. Each participant’s movements and voice are recorded during this stage.

In the third stage, each participant goes through a perspective change: one group experiences the first-person perspective (1PP) of the child avatar (Image C), while another group experiences a third-person perspective (3PP), facing both her own avatar and the child avatar from 1 meter away (Image D). Each participant is then given some time to assimilate to the new perspective.

At the last stage, each participant experiences a real-time replay of her compassion, which she delivered to the child at the second stage, from the child’s perspective (1PP) or from the observer’s perspective (3PP) depending on her group.

It is a seemingly complicated experiment, but the results offer some key takeaways for excessive self-criticism. Mere observation and practice of delivering compassionate comments reduced self-criticism, and the additional experience of receiving the compassion and care from one’s own self, from the child avatar’s perspective (1PP), boosted self-compassion and feelings of safety more than experiencing it from the observer’s perspective (3PP) did. These findings imply an unlimited potential of VR for treating and studying not only psychological disorders such as phobias and PTSD but also clinically relevant emotions other than fear and anxiety.

A Bright Future Ahead

Over the last few years, the VR industry has put its best foot forward to ramp up VR applications. VR companies have rolled out some decent consumer products — Oculus Rift, HTC Vive, Samsung Gear VR, Google Daydream — and have generated enthusiasm among VR adherents. Thanks to their efforts and advocacy, the concept of VR, long considered a distant future, has come to the fore for the public.

Nevertheless, despite the VR hype, VR has been perceived only as a new digital entertainment platform for movies and games. Only very recently has VR received serious attention from researchers in a variety of fields — a plethora of studies leveraging virtual environments have been published, and numerous academic conferences have spotlighted VR as the next big milestone. Utilizing VR technology, researchers can design more engaging experiments to obtain new insights into the human body and mind, and transform human cognition and behavior to enhance people’s lives. Yes, by all means, VR can render a greater breadth and depth to your research. So what are you waiting for? Embrace VR. Join the bright future ahead.


Reference

Patil, Indrajeet, et al. “Neuroanatomical Basis of Concern-Based Altruism in Virtual Environment.” Neuropsychologia, 2017, doi:10.1016/j.neuropsychologia.2017.02.015.

Falconer, Caroline J., et al. “Embodying Compassion: A Virtual Reality Paradigm for Overcoming Excessive Self-Criticism.” PLoS ONE, vol. 9, no. 11, Dec. 2014, doi:10.1371/journal.pone.0111933.



Crack the Shell of Oculus VR Headset Line-up: How Oculus is Preparing to Democratize the Virtual Reality Market

At Facebook’s Oculus Connect 4, Oculus VR’s annual conference, Facebook CEO Mark Zuckerberg announced Oculus Go, a $199 all-in-one untethered VR headset. Now, VR is poised to enter the mainstream with the roll-out of this attractively priced headset.

Introduction to the VR Landscape: VR Companies’ Big Bets on Standalone Headsets

In the VR landscape, the Oculus Rift and HTC Vive, both debuted in 2016, have been long-standing rivals in delivering completely absorbing VR experiences. While the high-end VR industry faced obstacles on its journey to going mainstream, phone-based VR platforms like Samsung’s Gear VR and Google’s Daydream have been widely adopted thanks to their much lower prices. In particular, as Google expanded its Daydream VR platform beyond niche Android phones to the Samsung Galaxy S8 and Note 8, the Google ecosystem has become an emerging threat to Samsung’s Gear VR, which is powered by Facebook-owned Oculus.

by David Jagneaux (Google Daydream Support Arrives For Samsung Galaxy S8 And S8+) / UploadVR

Since buying a mobile VR headset such as the $79 Daydream View or the $129 Gear VR isn’t a huge investment, the entry barrier is very low. However, these mobile headsets are less comfortable than the PC-based high-end devices — Oculus Rift and HTC Vive — because of the weight of an entire smartphone pressing on the front of the user’s face, as well as the less polished experience they offer. Amid growing competition for VR market leadership, the standalone VR headset is a great big pie that will only grow larger for fans of VR. Intel was the first to kick off its Project Alloy for an all-in-one merged reality solution, although the project was killed earlier this year. Google announced at its Google I/O conference that it would offer a new standalone VR headset together with HTC Vive and Lenovo. Last but not least, at Facebook’s Oculus Connect 4, Oculus VR’s annual developer conference, Facebook CEO Mark Zuckerberg announced Oculus Go, a $199 all-in-one untethered VR headset, as well.

by Sean Buckley/CNET

Moving into a New Phase of Ascension: Loosening the Belt of Partnership?

As Samsung, Oculus’ major ally that has promoted the Gear VR far more successfully than Oculus itself, recently moved to support Google’s Daydream View, the strong relationship between Oculus and Samsung entered a new phase. Oculus doubled its VR hardware lineup by adding two new headsets: a full motion-tracking wireless headset codenamed Santa Cruz, and Oculus Go, featuring a built-in display and electronics. Thanks to this four-device lineup (Gear VR, Go, Santa Cruz, and Rift), even if Samsung were to discontinue the Gear VR and fully back Daydream, Oculus would still have its own mobile VR headset category to support its developer ecosystem.

The new Oculus VR lineup (Sean Hollister/CNET)

In a sign of Samsung’s ambition to compete directly with the Rift on hardware sales, Samsung recently introduced its premium Odyssey VR headset, which will run on the Windows 10 platform. The Odyssey will be a flagship headset for Microsoft’s mixed reality platform and give Microsoft a strong brand presence alongside Oculus and Google in the VR landscape. Nevertheless, Oculus officially says its relationship with Samsung is stronger than ever. Oculus is even bringing its biggest VR platform update, Rift Core 2.0, to the Gear VR right from the word go; it introduces Dash, which turns the dashboard’s menus and UI into a multitasking experience that feels magical, just like in Minority Report. In addition, not only did Facebook’s VR chief Hugo Barra say that Oculus Go and Santa Cruz are not meant to replace the Rift and Gear VR, but Oculus CTO John Carmack has also consistently praised Samsung’s ability to distribute the Gear VR to a wide range of customers.

Microsoft Official Image

Oculus is Preparing to Democratize the Virtual Reality Market

VR is a platform that may change the lives of billions of people, but the current VR user base is certainly too small to survive the hype cycle. Even if a VR game is released for several different headsets such as Oculus Rift, HTC Vive, and PlayStation VR, it’s difficult to be profitable. That’s why Facebook CEO Mark Zuckerberg, who believes VR is the next big thing, declared Facebook’s mission to get a billion people into virtual reality. As a first step, Zuckerberg announced competitive pricing for the existing high-end Oculus Rift and Touch controller bundle at $399, much cheaper than HTC’s $599 Vive. Furthermore, the $199 Oculus Go will fill the “sweet spot” between the expensive, high-end Oculus Rift and the cheap, lightweight Gear VR. Oculus is also building Santa Cruz, its next generation of standalone, higher-end hardware, a lot like a wireless Rift. At Oculus Connect 4, Oculus CTO Carmack hinted that Oculus Go and Santa Cruz will eventually converge.

Oculus Connect 4 | Day 2 Keynote: CTO John Carmack

HTC is hot on Oculus’ trail, set to officially unveil its standalone VR headset powered by Google Daydream at the Vive developer conference on November 14. The announcement will come on the heels of the Oculus Go reveal. We await the technical maturity of these standalone headsets and look forward to seeing how they can change the VR market landscape. For now, it is still unclear how long it will take to get a billion people into VR, or whom we should bet on to win this horse race. Yet here is what we can definitely bet on: this hard work, not only by Oculus but also by competitors such as HTC Vive with Google Daydream, will bring VR into the real world and democratize the VR market in the near future.


HTC VIVE standalone VR headset concept image

Reference

Oculus Connect 4 | Day 1 & Day 2 Keynote (by Oculus)

Why is Oculus making four different VR headsets? (by Adi Robertson / The Verge)

With Facebook’s money, Oculus is fending off old enemies, former friends, and new foes (by Ben Lang / Road To VR)

Oculus’ standalone headsets point to a changing VR landscape (by Nicole Lee / Engadget)

Why this bulky standalone headset from Oculus is the future of VR (by Daniel Terdiman / Fast Company)

The VR price war is on: Facebook unveils $199 Oculus Go standalone headset (by Robert Hof / SiliconAngle)

HTC, Qualcomm to roll out standalone VR headset, but it’ll be available only in China (By Carl Velasco / Tech Times)

Samsung Odyssey is the Premium Windows VR Headset with Leading Specs, Integrated Audio (By Ben Lang / Road To VR)

Oculus’ prototype Santa Cruz headset feels like a wireless Oculus Rift (By Adi Robertson / The Verge)

Report: HTC to unveil standalone VR headset at Vive developer conference in November (By Scott Hayden / Road To VR)


Could VR Interface Enhance Users’ Sense of Immersion and Feeling of Presence in VR?


Last June, it was reported that Apple planned to acquire SMI, a German eye-tracking tech firm. SMI is a leading company in the field of eye-tracking technology. Embedded into the HTC Vive and Samsung Gear VR, its technology has been used to establish the ‘foveated rendering’ method, which renders the central vision area at high resolution but blurs out the rest. With this move reported, there is no doubt that Apple is going to spur the development of smart glasses with AR/VR technologies. Since last year, there have been three takeovers of eye-tracking firms by global IT companies, and now many are scrambling to buy eye-tracking companies to improve the VR user experience. Last October, Google acquired Eyefluence, a startup that enables VR users to perform screen transitions or take specific actions through their eye movements. In the wake of that deal, Facebook’s VR unit Oculus also acquired The EyeTribe to solidify its dominance in the VR market. Why would such global IT companies seeking VR market dominance have their eyes on eye-tracking technology?

TechCrunch | Apple acquires SMI eye-tracking company (Posted Jun 26, 2017 by Lucas Matney)

Novel Approaches for VR Interfaces

VR creates an environment that maximizes users’ immersion and sense of presence by crossing the boundaries of time and space, and it is characterized by users’ interaction with the simulated surroundings. Thus, a device that controls VR effectively and conveniently is essential to deepen immersion and close the gap between reality and virtual reality. In other words, it is of great importance to develop interfaces that provide a visual experience indistinguishable from reality at a perceptual level, elicit physical feedback from a user’s actions via a haptic interface, and reflect a user’s emotional changes during the VR experience. The reason VR devices stimulate our sight most intensively of the five senses has to do with the fact that a quarter of the cerebral cortex is devoted to image creation and vision, making us vulnerable to optical illusions. To illustrate, a user perceives VR through a process in which light hits the retina and reaches the brain via the optic nerve. These demands should explain why the global ‘big players’ and startups are obsessed with seizing eye-tracking technology for visual interface development. However, since the essence of VR depends on how VR content stimulates the brain’s nerves and how the brain interprets and responds to those stimulations, there is fast-growing interest in the Brain-Computer Interface (BCI) beyond visual and haptic interfaces. (FYI, just jump back to the previous story ‘The Sneak Peek into My Brain: Can We Push the Boundaries of Communication in VR Space using Brain-Computer Interface?’)

Anatole Lécuyer (2010) Using eyes, hands and brain for 3D interaction with virtual environments: a perception-based approach. HDR defense

Visual Interface — Visual Attention and Feedback

Camera-based VR interfaces, one type of visual interface, include Leap Motion’s controller, which detects a user’s hand positions and movements, as well as SoftKinetic (Sony) and Nimble VR (Oculus). One application of the visual interface appears in Minority Report: a PC recognizes Tom Cruise’s in-air hand gestures and acts on those inputs. Even without such body movements, ‘hands-free VR’ will be possible in the near future if eye movements can act as inputs, with a camera embedded around the lenses of the VR headset. Tobii, SMI, Eyefluence, and The EyeTribe have been the leading developers of eye-tracking technology and, except for Tobii, all of them were recently acquired by Apple, Google, and Facebook respectively. Similarly, FOVE, with eye tracking embedded in its VR headset, has shown how a visual interface can deliver immersion and interaction through the previously mentioned ‘foveated rendering’ technique and direct eye contact between a virtual character and a user. In addition, BinaryVR’s technology, which uses a 3D camera to recognize a user’s facial expression and create a 3D avatar in VR, is another example of convergence, combining facial recognition with the visual interface.
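
To show what foveated rendering amounts to computationally, here is a toy falloff function; the ring radii and rate levels are illustrative guesses, not SMI’s or FOVE’s actual parameters:

```python
import math

def shading_rate(px, py, gaze_x, gaze_y, fovea_px=200.0):
    """Toy foveated-rendering falloff: full resolution at the gaze point,
    progressively coarser shading farther from it. Returns a per-tile
    rate divisor: 1 (full), 2 (half) or 4 (quarter resolution)."""
    dist = math.hypot(px - gaze_x, py - gaze_y)
    if dist < fovea_px:         # foveal region: native resolution
        return 1
    if dist < 3 * fovea_px:     # parafoveal ring: half resolution
        return 2
    return 4                    # periphery: quarter resolution

# A renderer would query this per tile every frame, with (gaze_x, gaze_y)
# streamed from the eye tracker, and shade peripheral tiles more cheaply.
```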

Haptic Interface — Motor Action and Haptic Feedback

A haptic interface, a touch-based interface through which a user can feel the movement and texture of objects in VR, is indispensable for intensifying immersion and the sense of presence. Ultrahaptics is one of the world’s leading companies developing haptic interfaces; its gesture-control technology employs ultrasonic waves to recognize 3D motion in the air and give air-haptic tactile feedback. While Ultrahaptics creates tactile sensations through ultrasonic waves, Tactical Haptics attaches a reactive grip, a motion controller, to a normal VR controller as an auxiliary for haptic feedback. Gloveone, which previously raised crowdfunding on Kickstarter, allows users to control objects in VR using its haptic glove. DEXMO, a tactile device developed by Dexta Robotics, changes the direction and magnitude of the applied force dramatically according to the hardness of an object, providing weak feedback when a soft object such as a sponge or a cake is touched and strong feedback when a hard object such as a brick or a pipe is touched. Last but not least, KOR-FX goes beyond traditional hand-based haptic feedback, enabling a user to feel vibrations across the entire body through a vest for immersive RPG games.

CNN | Devices with feeling: new tech creates buttons and shapes in mid-air (Posted April 1, 2015 by Jacopo Prisco)

Brain-Computer Interface, the Ultimate VR Interface through User Emotion Recognition

The visual and haptic interfaces introduced above each focus on inducing emotional immersion by giving reality-like feedback to a user’s vision and touch in VR. Since, neuroscientifically, the brain is the backbone of our senses and perception, the brain-computer interface naturally comes to mind whenever we discuss visual and haptic interfaces, just as “where the needle goes, the thread follows”. The ultimate version of VR should be able to interpret the perceptions and sensations that the brain’s roughly 86 billion neurons create, and take specific actions in VR by reading neural signals — the result of electrical activity of the brain. In this context, Facebook announced at this year’s F8 developer conference that it is working on BCI technology that can translate thoughts into text messages and sound into tactile information. Like Facebook, Looxid Labs is seamlessly integrating non-invasive BCI with VR, enhancing users’ immersion and sense of presence using physiological signals such as the electroencephalogram (EEG), eye tracking, and pupil size. Looxid Labs aims to carry current BCI forward into the realm of VR by developing an ultimate emotion recognition system: an eye-and-brain interface that lets users emotionally engage with VR content and directly interprets their emotions from simply wearing a VR headset.


How to Unlock VR’s Potential


Unleashing the Emotional Connection between VR Content and Users

Golden State Warriors’ Kevin Durant shoots a 3-point basket over Cleveland Cavaliers’ LeBron James for the go ahead basket during the fourth quarter of Game 3 of the NBA Finals (By GIESON CACHO at The Mercury News)

In Game 3 of the 2017 NBA Finals, the Golden State Warriors and Cleveland Cavaliers kept the heat on until the last minute. One of the best highlights of Game 3 was Kevin Durant’s clutch pull-up 3-pointer, the go-ahead basket for the Warriors. When you watch such VR highlights in the NextVR app, a streaming app from a leading VR broadcaster of live events, you can get fully immersed and feel present, as if you had a real courtside NBA experience. NextVR is a powerhouse of live VR streaming technology, best known for beaming live VR footage of the Manchester United vs. FC Barcelona soccer match in July 2015; it went on to partner with the NBA to show games exclusively in VR, thanks to the vividness of its earlier VR content: passes and offsides, a ball flying straight at you, a front block tackle. NextVR, which actually launched in 2009 as the 3D TV production company Next3D, changed course toward the VR industry after 3D TV failed in the market, and now provides professional live VR streaming content including sports games and live concerts.

NextVR says its key success factor is giving users a feeling of presence and an emotional experience as participants in the stadium, rather than mere spectators of a game in 3D, thereby delivering a more lifelike experience than other 360-degree and VR videos. For example, during the live stream of a sports game, NextVR takes advantage of a few techniques to make users feel as if they are sitting in the front row:

  • capturing the players warming up on the field, or the attractions at the stadium, from the point of view of courtside seats right next to the players,
  • providing far more engaging and dramatic experiences by changing the camera angle with the flow of the game, so the action plays out right before the user’s eyes, and
  • guiding the user’s visual attention through the announcer’s commentary.

According to a CNET interview with NextVR’s executive chairman Brad Allen, average viewing time spiked from 7 minutes to 42 minutes as the NBA season progressed. He also hinted at the importance of giving users incentives to keep wearing inconvenient VR headsets even when they have every reason to take them off because they’re so big and bulky. Back to basics: how can we unleash a user experience so good that it outweighs the inconvenience of wearing a VR headset?

3D vs. VR: Immersion and Interaction are the Most Competitive Advantages of VR

The global boom in 3D TV and content-creation technology, accelerated by the incredible success of the Hollywood 3D blockbuster ‘Avatar’, was named one of CNN’s top 10 tech ‘fails’ of 2010 and ultimately suffered an epic downfall, shunned by consumers. Even though 3D TV was considered promising next-generation multimedia, it failed to create an ecosystem due to its evident limitations — requiring people not only to buy high-end hardware but also to wear uncomfortable glasses — and the absence of killer content. From a usability standpoint, VR has weaknesses similar to 3D TV’s: i) a VR headset must be bought and worn by users, ii) users’ adaptation to VR needs to be supported by more advanced technology, and iii) VR killer apps are still scarce. However, VR’s most competitive advantage is immersion incomparable to watching a big-screen 3D TV. Unlike 3D TV, where users watch content from an observer’s point of view, VR lets users enter the virtual world as participants and interact closely with the objects in it. Nonetheless, VR market growth still risks stagnation because current users haven’t yet experienced sufficient immersion and interaction with VR content.

Baobab Studios’ second entry ‘Asteroids!’

Successful Immersion and Interaction in VR depending on Emotional Connection between Contents and Users

Among emerging VR content creators, Baobab Studios, a VR animation company teaming up former Pixar, DreamWorks, and Disney employees, is the pioneer of the best VR storytelling. Baobab Studios’ first work, ‘Invasion’, is a textbook example of VR content that provides immersion and interaction in the VR environment. Its storyline covers the encounter between a bunny and two aliens in a 360-degree view of a snowy field. What sets this work apart from other VR videos is that the user enjoys the animation, fully immersed, from the bunny’s point of view. When the bunny appears in the first scene, looks straight into the user’s eyes, and sniffs like a living creature, the user focuses on all the bunny’s actions, including its gaze and attention, and finally forms an emotional bond, identifying with the bunny.

Baobab Studios’ second entry, ‘Asteroids’, uses a wider variety of settings to create an even stronger emotional connection and interaction between the protagonist characters and the user than ‘Invasion’ did. First, the pet-robot protagonist catches the user’s attention with a groaning “brrr…brrr” and a flickering light, priming the emotional connection. Next, ‘Mac’, one of the aliens from ‘Invasion’, appears and guides the user to turn his or her head left and right while playing ball with the pet robot and throwing the ball right in front of the user’s eyes, helping the user emotionally connect with ‘Mac’. Last but not least, when the other alien, ‘Cheez’, is wiping the spaceship window as an asteroid knocks the ship off course, the user feels as if he or she becomes ‘Mac’ in a moment of extreme tension, through organic, emotional connection. In short, as the emotional connection between the characters and the user is woven into the storytelling, the user’s emotional connection with the content is maximized, and the user’s actual emotional interaction is triggered as well.

Meteora, Greece by Jason Blackeye

Making VR More Realistic than Reality beyond Uncanny Valley through Users’ Emotional Interaction

Since VR is experienced through a device, whether it can provide users with a seamless experience is critical to the success or failure of the VR market. In particular, a seamless experience in VR means being emotionally connected with it beyond the uncanny valley, by interacting with content on the basis of one’s emotions. Here, the uncanny valley refers to the phenomenon in which, as immersive VR technology approaches a certain level of realism, users feel strong eeriness and revulsion, but once the technology passes the point where reality and VR become indistinguishable, users’ favor and affinity toward VR rise again. The success factor of NextVR’s live streaming content and Baobab Studios’ VR animations, introduced above, was likewise getting users fully immersed in VR as participants. To achieve that immersion, both intentionally adopted settings that enable users to feel presence and emotionally connect with the content. And yet several limitations remain in creating emotional connection and interaction between users and content through intentional setup alone. Therefore, making VR more realistic than reality requires not only ensuring an emotional connection between VR content and users but also providing adaptive interaction based on users’ exact feelings. In this sense, Looxid Labs has developed a seamless user emotion recognition system for VR, adopting eye tracking and brainwave analysis as the medium for analyzing users’ emotions with high accuracy. Our goal is to bring users’ emotions into the virtual environment and enable users to emotionally engage with a virtual character that knows their emotional states. Through our emotion recognition system, users’ emotional states can be classified with high accuracy from their eye and brain information, and that emotional connection can then be used as a VR interface.
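
As a hedged sketch of how eye and brain signals might be fused for emotion classification, and emphatically not Looxid Labs’ actual system, consider a simple pipeline over invented features:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Invented fused feature vectors, one per time window:
# [alpha power, beta power, pupil diameter (mm), mean fixation time (ms)]
X = np.array([
    [0.41, 0.12, 3.1, 410.0],   # calm
    [0.22, 0.33, 4.6, 180.0],   # excited
    [0.25, 0.30, 4.2, 220.0],   # excited
    [0.44, 0.10, 3.0, 450.0],   # calm
])
y = np.array(["calm", "excited", "excited", "calm"])

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)
print(model.predict([[0.24, 0.31, 4.4, 200.0]]))  # -> ['excited']
```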


The Sneak Peek into My Brain: Can We Push the Boundaries of Communication in VR Space using Brain…

by Dmitry Ratushny at unsplash

On the second day of Facebook’s annual developer conference F8, held April 18–19, Regina Dugan, Facebook’s vice president of engineering, took the stage as the last speaker and asked a thought-provoking question: ‘So what if you could type directly from your brain?’

Regina Dugan was the first woman to serve as director of the Defense Advanced Research Projects Agency (DARPA), which is responsible for R&D at the United States Department of Defense, before moving to Google as vice president of its Advanced Technologies and Projects (ATAP) group. Last year, Facebook announced that Dugan had joined to lead its secretive R&D team, Building 8, which kicked off in April 2016. She is responsible for making Facebook’s vision of the future real: developing the Brain-Computer Interface (BCI) as a communication tool that interconnects people in the coming VR/AR era. To achieve that vision, Building 8, composed of 60 neuroscientists, machine learning professionals, and systems integration engineers, has been developing a computer interface powered by the human brain that types 100 words per minute by decoding users’ neural activity with optical imaging. In addition to Facebook’s ambitious plan for texting by thinking, she unveiled Facebook’s so-called ‘silent speech interface’, which could enable people to ‘feel’ sound through their skin and understand it in their brain, pushing the boundaries of communication beyond the world’s languages. How can we make these futuristic BCI ideas — transforming our thoughts into text messages and hearing sound through the skin — happen?


Prelude to BCI: Delivering Information Directly to the Human Brain

In the human brain there are 86 billion neurons, each firing up to a thousand times per second (about 1 kHz per neuron). Since these neurons can’t all be active at the same time, divide that firing capacity by 100. On that estimate, our brain produces about 1 TB of data per second, enough to stream roughly 40 HD movies every second. However, when we pull the data out of our brain by converting it into speech, the rate slows dramatically, to something like a 1980s dial-up modem. Herein lies the problem: speech is an inefficient communication tool. Per Regina Dugan, that’s why Facebook has been pursuing futuristic ideas in which people text a friend by having their brainwaves directly interpreted, and hear through their skin, as seamless interfaces for the VR/AR environment. This BCI technology would raise the speed at which we retain and transmit information. To illustrate, direct brain-to-brain communication would let a Chinese speaker think in Mandarin and a Spanish speaker feel it instantly in Spanish, the information extracted from one brain’s waves and transformed into the other language without the use of speech. At F8, Dugan introduced a demo video clip as a recent outcome of Building 8’s initial research: Facebook engineers showed an experiment in hearing through the skin, using special actuators to deliver specific frequencies so that the skin mimics the cochlea in the ear, letting the brain translate actual sound without hearing it directly.
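
The keynote’s arithmetic checks out as a back-of-envelope estimate; the one-byte-per-firing-event assumption below is ours, chosen to reproduce the stated order of magnitude:

```python
neurons = 86e9        # neurons in the human brain
max_rate_hz = 1e3     # peak firing rate per neuron (~1 kHz)
duty = 1 / 100        # only ~1% of neurons fire at any instant

events_per_s = neurons * max_rate_hz * duty     # 8.6e11 firing events/s
bytes_per_s = events_per_s * 1.0                # assume ~1 byte per event
print(f"{bytes_per_s / 1e12:.2f} TB/s")         # 0.86 TB/s, i.e. ~1 TB/s

hd_movie_bytes = 25e9                           # a Blu-ray-sized HD movie
print(bytes_per_s / hd_movie_bytes)             # ~34 movies/s, the talk's "~40"
```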

Brain-Computer Interface at Pixabay

Transforming the World by Thinking and Creating Rapport between Human and Computer

The Matrix, a sci-fi film released in 1999, reflected the worldview that reality is actually an extremely complicated VR program. In the movie, while trapped inside their pods, humans’ brains are connected to computers hosting a virtual world known as the Matrix. Whereas the movie illustrated invasive BCI technology connecting the human brain to the virtual world, recent research trends in BCI favor non-invasive approaches. For example, MIT recently built a robot that can read human thoughts non-invasively through an EEG helmet and then perform advanced tasks, picking up and sorting objects. A team of researchers from the Technische Universität München developed ‘Brainflight’ technology to fly planes by pilots’ thoughts alone, using a cap fitted with EEG electrodes. The BCI lab at Graz University in Austria has also explored controlling and growing a character in the popular video game World of Warcraft with EEG alone. As wearable devices that read users’ brainwaves, such as Emotiv and NeuroSky, keep emerging, the BCI controller function is getting more attention. In particular, people who are disabled or have difficulty communicating want to use such wearables to control other devices or services with their thoughts. Furthermore, this sort of BCI technology can be applied across industries by quantifying states of stress, emotion, mood, concentration, and more.

Matrix at Pixabay

Looxid Labs Provides a Brand-New Interaction in VR Space Using an Eye and Brain Interface

Looxid Labs is seamlessly integrating non-invasive BCI into VR using users' biological signals, including EEG, eye-tracking data, and pupil size. Since VR is a space users enter by stepping from the real world into a digital one, simply placing an extra-large display in front of their eyes does little to set it apart from the conventional PC or mobile experience. Thus, to make users more immersed and enhance their feeling of presence, VR must offer adaptive interaction, not just advanced hardware and compelling content. In other words, 'immersion' and 'feeling of presence' are the most important factors in realizing VR, and the user should be both the main character of the content and a participant in it. Just as the shift from PC to mobile moved user interfaces from keyboard and mouse to touch, VR requires a brand-new interface that reflects the user's real-world experience. To provide adaptive interactivity across the real and virtual worlds, we regard the emotional connection between content and user as the seamless interface for VR. That is why we are developing an emotion recognition system for VR built on an eye and brain interface that directly interprets users' emotions the moment they put on the headset. Our goal is the ultimate BCI: one in which users' eye and brain information is seamlessly transformed into emotional data, letting users engage emotionally with VR content.
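
The sketch below illustrates the general idea of fusing such signals into an emotional state. The field names, weights, and arousal/valence mappings are hypothetical and do not represent Looxid Labs' actual pipeline or API.

```python
from dataclasses import dataclass

# Hypothetical fused sample from a VR headset's sensors; the field
# names are illustrative, not Looxid Labs' actual API.
@dataclass
class HeadsetSample:
    eeg_alpha: float       # normalized alpha band power, 0..1
    eeg_beta: float        # normalized beta band power, 0..1
    pupil_dilation: float  # normalized change in pupil size, 0..1

def estimate_arousal(s: HeadsetSample) -> float:
    """Toy arousal score: beta activity and pupil dilation both tend
    to rise with arousal; the equal weights here are made up."""
    return min(1.0, 0.5 * s.eeg_beta + 0.5 * s.pupil_dilation)

def estimate_valence(s: HeadsetSample) -> float:
    """Toy valence score: relatively more alpha than beta is treated
    as calmer and more positive (a deliberate oversimplification)."""
    return max(0.0, min(1.0, 0.5 + (s.eeg_alpha - s.eeg_beta)))

sample = HeadsetSample(eeg_alpha=0.4, eeg_beta=0.7, pupil_dilation=0.8)
print(f"arousal={estimate_arousal(sample):.2f}, "
      f"valence={estimate_valence(sample):.2f}")
```

In a real system these weights and mappings would be learned from labeled data rather than fixed by hand.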

Interactivity at Pixabay


What emotion recognition is — You may not even notice its importance

By | BLOG

In January 2016, Apple acquired Emotient, an AI startup that reads people's emotions by analyzing facial expressions. Facebook followed in November 2016, announcing the acquisition of FacioMetrics, a startup that developed face tracking and emotion detection technology. Global IT companies, not only Apple and Facebook but also Google and Microsoft, have been devoting themselves to developing emotion recognition technologies based on facial recognition and a variety of human physiological signals. The emotion recognition solution market was worth USD 6.7 billion in 2016 and is estimated to reach USD 67.1 billion by 2021. Recently, even on global crowdfunding platforms such as Kickstarter and Indiegogo, several campaigns have been raising money for wearable devices that help people manage their stress and emotions. What is the value of the human emotions that startups and global IT companies alike are trying to measure?

Definition of Emotions

In general, emotions are a relatively brief conscious experience characterized by intense mental activity and a high degree of pleasure or displeasure. Despite several scientific discourses on the definition of emotions, there is still no consensus on the meaning itself. Psychologist William James and physician Carl Lange proposed the James-Lange theory, in which physiological change is primary: emotion is experienced when the brain reacts to the information received via the body's nervous system. Taken together, recent research on emotions suggests that emotional systems comprise both neural and bodily states that provide immediate means of protecting the individual and maximize adaptation to survival-salient events. In addition, feelings, often used as a synonym for emotions, are mental experiences of body states, and are also regarded as defense mechanisms responding to stimuli generated within an individual's body.

Emotional Response from a Cognitive Neuroscience Perspective

Cognitive neuroscience explains emotions as responses of the human limbic system to stimuli detected by the sensory systems, including the five senses: sight (ophthalmoception), smell (olfacception), touch (tactioception), taste (gustaoception), and hearing (audioception). It points to the amygdala as the primary emotional engine of the brain, the hub of the 'emotion circuits in the brain'. Interestingly, the responses of the neuroanatomical emotion circuits, and of the autonomic nervous system that produces emotions, may differ depending on awareness. An emotional response can occur when an individual consciously perceives the triggering object, but it can also occur unconsciously, when the object is not consciously perceived, and the differences in the autonomic nervous system's response may be even greater under unconscious awareness. The autonomic response to emotional stimuli unfolds in the order Perception-Valuation-Action (PVA), which makes it possible to analyze emotions by measuring the bodily changes that follow them through the antagonism of the sympathetic and parasympathetic nerves.
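
Because this sympathetic/parasympathetic balance shows up in easily measured signals, a common way to quantify it is heart rate variability (HRV). Below is a minimal sketch computing RMSSD, a standard short-term HRV index tied to parasympathetic activity; the RR intervals are fabricated for illustration.

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between heartbeats.
    Higher RMSSD generally reflects stronger parasympathetic
    (rest-and-digest) activity; low values accompany stress."""
    diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
    return np.sqrt(np.mean(diffs ** 2))

# Fabricated RR intervals (milliseconds between successive beats):
relaxed = [820, 850, 790, 860, 800, 845, 815]
stressed = [700, 705, 698, 702, 699, 703, 701]  # low variability

print(f"relaxed  RMSSD: {rmssd(relaxed):.1f} ms")
print(f"stressed RMSSD: {rmssd(stressed):.1f} ms")
```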

Reading Emotions from Physiological Reactions ‘Inside Out’

In the 1970s, the American psychologist Paul Ekman classified basic human emotions into six categories: anger, disgust, fear, happiness, sadness, and surprise. In the movie 'Inside Out', on which Ekman actually consulted, five of these emotions appear, all except surprise. Recently, there have been various attempts to classify emotions by measuring the bodily responses produced through sympathetic and parasympathetic nerve antagonism. To illustrate, when a person is excited, their muscles tense, their palms sweat, and both heart rate and body temperature increase. Three general indicators can capture such responses for emotion recognition. First, electromyography (EMG) measures the electrical impulses of muscles during contraction. Second, the skin conductance response (SCR), also known as galvanic skin response (GSR), is the phenomenon in which the skin momentarily becomes a better conductor of electricity when a person is tense. Third, an elevated heart rate in the electrocardiogram (ECG) signal indicates a state of stress or frustration. In addition to these three indicators, changes in pupil size and reactivity are closely related to emotional changes: pupils dilate slightly in response to any exciting or interesting stimulus, while pupil size tends to decrease in response to an unpleasant one.
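
As a concrete example of one of these indicators, here is a minimal sketch that flags moments of emotional arousal in a GSR trace by detecting skin conductance response peaks. The threshold and the synthetic signal are illustrative only.

```python
import numpy as np
from scipy.signal import find_peaks

FS = 4  # GSR changes slowly; consumer sensors often sample at ~4 Hz

def arousal_events(gsr_microsiemens, min_rise=0.05):
    """Indices of skin conductance response (SCR) peaks. A rise of
    about 0.05 uS above the local baseline is a conventional, though
    adjustable, threshold for counting an SCR."""
    peaks, _ = find_peaks(gsr_microsiemens, prominence=min_rise)
    return peaks

# Synthetic 30-second GSR trace: slow drift plus two arousal bumps.
t = np.arange(0, 30, 1 / FS)
gsr = 2.0 + 0.01 * t
for onset in (8, 20):  # seconds at which responses occur
    gsr = gsr + 0.3 * np.exp(-0.5 * (t - onset) ** 2)

for i in arousal_events(gsr):
    print(f"arousal response near t = {t[i]:.1f} s")
```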

Accuracy of Emotion Recognition Using Brainwaves Remains Insufficient

Among emotion recognition technologies based on human physiological responses, those that use the electroencephalogram (EEG), recorded directly at the surface of the head, offer better accuracy than those that rely on indirect information, such as analyzing the emotions revealed in facial expressions or voice. Existing EEG-based techniques mainly apply basic machine learning algorithms: they extract predefined features from the brainwave signal and map them onto emotion indexes defined by prior research. However, despite its relatively high accuracy, this traditional approach to EEG emotion analytics has limitations in emotion classification. It is difficult to push accuracy past a certain level because of the low quality of EEG signals and the insufficient quantity of data, and the emotion indexes themselves are not easy to define in the first place.
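
Here is a minimal sketch of this traditional pipeline, with random data standing in for labeled EEG recordings. The band-power features and SVM classifier are typical choices from the literature, not a specific published system; because the data here are random, the printed accuracy hovers around chance.

```python
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

FS = 128  # sampling rate in Hz
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def features(window):
    """Hand-crafted features: mean power in canonical EEG bands."""
    freqs, psd = welch(window, fs=FS, nperseg=FS)
    return [psd[(freqs >= lo) & (freqs < hi)].mean()
            for lo, hi in BANDS.values()]

# Random data standing in for 200 labeled one-second EEG windows
# (labels could be, e.g., high vs. low arousal per prior research).
rng = np.random.default_rng(42)
windows = rng.normal(size=(200, FS))
labels = rng.integers(0, 2, size=200)

X = np.array([features(w) for w in windows])
scores = cross_val_score(SVC(kernel="rbf"), X, labels, cv=5)
# With random data this prints chance-level accuracy (~0.5).
print(f"cross-validated accuracy: {scores.mean():.2f}")
```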

Looxid Labs Enhances Emotion Recognition Accuracy by Applying Deep Learning Algorithms to EEG Signals

As a result, supervised learning that matches predefined emotion indexes to well-known EEG features hits a ceiling, no matter how advanced the machine learning behind it becomes. To overcome these limitations, Looxid Labs aims to extract a variety of emotion indexes from many people using deep learning algorithms based on representation learning. In other words, we are developing technology that improves the accuracy of emotion classification by finding hidden patterns in the EEG signals themselves. Changes in pupil size and reactivity are combined with the EEG signals to improve accuracy further. With Looxid Labs' technology, the emotional states of an individual, which are hard to generalize and objectify, can be put to use as business indexes customized to the needs of various industries.
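
To illustrate the representation-learning idea in general terms (this is not Looxid Labs' actual model; the architecture, channel counts, and window length are invented), here is a toy 1-D convolutional network in PyTorch that learns its own features from raw EEG windows instead of relying on hand-crafted band power.

```python
import torch
import torch.nn as nn

class EEGEmotionNet(nn.Module):
    """Toy 1-D CNN that learns features from raw EEG windows rather
    than relying on hand-crafted band power. All sizes are invented."""
    def __init__(self, n_channels=8, n_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(       # learns the representation
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                   # x: (batch, channels, samples)
        z = self.encoder(x).squeeze(-1)     # learned feature vector
        return self.classifier(z)           # emotion-class logits

model = EEGEmotionNet()
fake_batch = torch.randn(2, 8, 256)  # 2 windows, 8 channels, 2 s @ 128 Hz
print(model(fake_batch).shape)       # torch.Size([2, 4])
```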
