
Could VR Interface Enhance Users’ Sense of Immersion and Feeling of Presence in VR?


Last June, it was reported that Apple had acquired SMI, a German firm that is a leader in eye-tracking technology. Embedded into the HTC Vive and Samsung Gear VR, SMI's technology helped establish 'foveated rendering', a method that renders the area around the user's central vision at high resolution while leaving the periphery blurred. With this move, there is little doubt that Apple intends to spur the development of smart glasses built on AR/VR technologies. Since last year there have been three takeovers of eye-tracking firms by global IT companies, all scrambling to buy eye-tracking expertise to improve the VR user experience. Last October, Google acquired Eyefluence, a startup whose technology lets VR users switch screens or trigger specific actions with their eye movements. In the wake of that deal, Facebook's VR unit Oculus acquired The EyeTribe to solidify its dominance in the VR market. Why do global IT companies seeking VR market dominance have their eyes on eye-tracking technology?

TechCrunch | Apple acquires SMI eye-tracking company (Posted Jun 26, 2017 by Lucas Matney)
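
The 'foveated rendering' idea mentioned above can be pictured with a small sketch. This is a simplified, image-space approximation (not SMI's or Apple's actual pipeline): the region around the tracked gaze point keeps full resolution, while the periphery comes from a cheap low-resolution copy of the frame. The gaze coordinates, radius, and downsampling factor are illustrative assumptions.

```python
import numpy as np

def foveated_render(frame: np.ndarray, gaze_xy: tuple, fovea_radius: float = 120.0) -> np.ndarray:
    """Approximate foveated rendering on a rendered frame (H x W x 3).

    The periphery is taken from a cheap quarter-resolution copy of the frame,
    while a disc around the tracked gaze point keeps full detail, with a soft
    transition band so the boundary is not visible.
    """
    h, w = frame.shape[:2]
    # Cheap peripheral image: sample every 4th pixel, then blow it back up.
    low = frame[::4, ::4].repeat(4, axis=0).repeat(4, axis=1)[:h, :w]

    # Blend weight: 1.0 inside the fovea, fading to 0.0 over half a radius.
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
    weight = np.clip((1.5 * fovea_radius - dist) / (0.5 * fovea_radius), 0.0, 1.0)[..., None]

    return (weight * frame + (1.0 - weight) * low).astype(frame.dtype)

# Example: keep full detail around an assumed gaze point near the frame centre.
frame = np.random.randint(0, 255, (720, 1280, 3), dtype=np.uint8)
sharp_where_looking = foveated_render(frame, gaze_xy=(640, 360))
```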

Novel Approaches for VR Interfaces

VR creates an environment that maximizes users' immersion and sense of presence by crossing the boundaries of time and space, and it is characterized by users' interaction with the simulated surroundings. A device that controls VR effectively and conveniently is therefore essential to deepen immersion and close the gap between reality and virtual reality. In other words, it is of great importance to develop interfaces that present visuals indistinguishable from reality at a perceptual level, return physical feedback for a user's actions through a haptic interface, and reflect a user's emotional changes during the VR experience. The reason VR devices stimulate sight most intensively of the five senses is that roughly a quarter of the cerebral cortex is devoted to vision and image formation, which also leaves us vulnerable to optical illusions: a user perceives VR through the same process by which light hits the retina and reaches the brain via the optic nerve. These demands explain why the global 'big players' and startups are so intent on seizing eye-tracking technology for visual interface development. However, since the essence of VR depends on how VR content stimulates the brain and how the brain interprets and responds to that stimulation, interest is growing rapidly in the Brain-Computer Interface (BCI) alongside visual and haptic interfaces. (FYI, just jump back to the previous story 'The Sneak Peek into My Brain: Can We Push the Boundaries of Communication in VR Space using Brain-Computer Interface?')

Anatole Lécuyer (2010) Using eyes, hands and brain for 3D interaction with virtual environments: a perception-based approach. HDR defense

Visual Interface — Visual Attention and Feedback

Camera-based VR interfaces, one type of visual interface, include Leap Motion's controller, which detects a user's hand positions and movements, as well as SoftKinetic (Sony) and Nimble VR (Oculus). A familiar picture of this kind of interface appears in Minority Report: a computer recognizes Tom Cruise's in-air hand gestures and acts on them as inputs. Without such body movements, 'hands-free VR' will become possible in the near future if eye movements can act as inputs, with a camera embedded into the lenses of the VR headset. Tobii, SMI, Eyefluence and The EyeTribe have been the leading developers of eye-tracking technology and, except for Tobii, all of them were recently acquired by Apple, Google and Facebook respectively. Similarly FOVE, with eye-tracking embedded in its VR headset, has shown how a visual interface can deliver immersion and interaction through the previously mentioned 'foveated rendering' technique and direct eye contact between a virtual character and the user. In addition, BinaryVR's technology, which uses a 3D camera to recognize a user's facial expression and drive a 3D avatar in VR, is another example of convergence, combining facial recognition with the visual interface.
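
As a rough illustration of how eye movements could stand in for a controller in 'hands-free VR', the sketch below implements dwell-based selection: if the estimated gaze point stays within a target region for a fixed dwell time, it is treated as a click. The dwell time, radius, and gaze-sample format are assumptions for illustration, not the API of any product named above.

```python
import time

class DwellSelector:
    """Treat a gaze that stays inside a circular target for `dwell_s` seconds as a 'click'."""

    def __init__(self, dwell_s: float = 0.8, radius_px: float = 60.0):
        self.dwell_s = dwell_s
        self.radius_px = radius_px
        self._entered_at = None  # time the gaze entered the target, if it is inside

    def update(self, gaze_xy, target_xy, now=None) -> bool:
        """Feed one gaze sample; returns True once when a dwell selection fires."""
        now = time.monotonic() if now is None else now
        dx, dy = gaze_xy[0] - target_xy[0], gaze_xy[1] - target_xy[1]
        inside = (dx * dx + dy * dy) ** 0.5 <= self.radius_px
        if not inside:
            self._entered_at = None
            return False
        if self._entered_at is None:
            self._entered_at = now
        if now - self._entered_at >= self.dwell_s:
            self._entered_at = None  # reset so the selection fires only once
            return True
        return False

# In a real application the samples would come from the headset's eye tracker.
selector = DwellSelector()
for t in (0.0, 0.5, 1.0):  # three gaze samples on the same target, half a second apart
    if selector.update(gaze_xy=(512, 300), target_xy=(520, 310), now=t):
        print(f"Target selected by gaze dwell at t={t}s")
```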

Haptic Interface — Motor Action and Haptic Feedback

A haptic interface, a touch-based interface through which a user can feel the movement and texture of objects in VR, is indispensable for intensifying immersion and the sense of presence. Ultrahaptics is one of the world's leading companies developing haptic interfaces: its gesture-control technology uses ultrasonic waves to recognize 3D motion in the air and to deliver mid-air tactile feedback. While Ultrahaptics creates tactile sensations with ultrasound, Tactical Haptics attaches its Reactive Grip motion controller to an ordinary VR controller as an add-on that supplies haptic feedback. Gloveone, which previously ran a crowdfunding campaign on Kickstarter, lets users manipulate objects in VR with its haptic glove. DEXMO, a tactile device developed by Dexta Robotics, changes the direction and magnitude of the applied force according to the hardness of an object, giving weak feedback when the user touches something soft such as a sponge or a cake and strong feedback for something hard such as a brick or a pipe. Last but not least, KOR-FX goes beyond hand-based haptic feedback and lets users feel vibrations across their entire body through a vest designed for immersive RPG games.
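
The DEXMO-style behaviour described above, weak resistance for soft objects and strong resistance for hard ones, can be pictured as a simple spring model in which the feedback force grows with how far the finger presses into the virtual object and with the object's stiffness. This is only a conceptual sketch, not Dexta Robotics' actual control law, and the stiffness values are invented for illustration.

```python
def haptic_feedback_force(penetration_mm: float, stiffness_n_per_mm: float,
                          max_force_n: float = 20.0) -> float:
    """Spring-like force feedback: F = k * x, clipped to the actuator's limit."""
    if penetration_mm <= 0.0:
        return 0.0  # the finger is not touching the virtual object
    return min(stiffness_n_per_mm * penetration_mm, max_force_n)

# Illustrative stiffness values (made up): soft objects yield, hard ones barely do.
materials = {"sponge": 0.2, "cake": 0.3, "pipe": 8.0, "brick": 12.0}
for name, stiffness in materials.items():
    force = haptic_feedback_force(penetration_mm=2.0, stiffness_n_per_mm=stiffness)
    print(f"{name}: {force:.1f} N of resistance")
```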

CNN | Devices with feeling: new tech creates buttons and shapes in mid-air (Posted April 1, 2015 by Jacopo Prisco)

Brain-Computer Interface, the Ultimate VR Interface through User Emotion Recognition

The visual and haptic interfaces introduced above each focus on inducing emotional immersion by feeding realistic feedback to a user's vision and touch in VR. Since, neuroscientifically, the brain is the backbone of our senses and perception, the brain-computer interface naturally comes to mind whenever we discuss visual and haptic interfaces, just as 'where the needle goes, the thread follows'. The ultimate version of VR should be able to interpret the perceptions and sensations created by the brain's tens of billions of neurons and take specific actions in VR by reading neural signals, the product of the brain's electrical activity. In this context Facebook announced at this year's F8 developer conference that it is working on BCI technology that can translate thoughts into text messages and sound into tactile information. Like Facebook, Looxid Labs is seamlessly integrating non-invasive BCI with VR, enhancing users' immersion and sense of presence using physiological signals such as the electroencephalogram (EEG), eye movements and pupil size. Looxid Labs aims to carry BCI forward into the realm of VR by developing an ultimate emotion recognition system built on an eye and brain interface, one that lets users emotionally engage with VR content and interprets their emotions directly, simply by their wearing a VR headset.


How to Unlock VR’s Potential


Unleashing Emotional Connection between VR Contents and Users

Golden State Warriors’ Kevin Durant shoots a 3-point basket over Cleveland Cavaliers’ LeBron James for the go ahead basket during the fourth quarter of Game 3 of the NBA Finals (By GIESON CACHO at The Mercury News)

In Game 3 of the 2017 NBA Finals, the Golden State Warriors and Cleveland Cavaliers stayed heated until the final minutes. The best highlight of Game 3 was surely Kevin Durant's clutch pull-up 3-pointer that swung the game back to the Warriors. Watch those highlights in the NextVR app, a streaming app from a leading VR broadcaster of live events, and you can get fully immersed and feel present, as if you had a real courtside NBA seat. NextVR is a powerhouse of live VR streaming technology, best known for beaming live VR footage of the Manchester United vs. FC Barcelona match in July 2015; the vividness of that early content, with passes, offside calls, a ball flying straight at the viewer and a tackle right in front of them, helped it win an exclusive partnership with the NBA to show games in VR. NextVR actually launched as the 3D TV production company Next3D in 2009, changed course toward the VR industry after 3D TV failed in the market, and now produces professional live VR streaming content including sports games and live concerts.

NextVR says its key success factor is giving users a feeling of presence and an emotional experience as participants in the stadium rather than mere spectators of a game in 3D, so that they get a far more life-like experience than other 360-degree and VR videos offer. For example, during the live stream of a sports game, NextVR relies on a few techniques to make users feel as if they were sitting in the front row:

  • capturing the players warming up and the attractions at the stadium from the point of view of courtside seats right next to the players,
  • changing the camera angle with the flow of the game, giving users the far more engaging and dramatic experience of watching plays unfold right before their eyes, and
  • guiding users' visual attention through the announcer's commentary.

According to a CNET interview with NextVR's executive chairman Brad Allen, average viewing time spiked from 7 minutes to 42 minutes as the NBA season progressed. He also hinted at the importance of giving users an incentive to keep wearing VR headsets even though they have every reason to take them off because they are so big and bulky. So, back to basics: how can we create a user experience compelling enough to outweigh the inconvenience of wearing a VR headset?

3D vs. VR: Immersion and Interaction are the Most Competitive Advantages of VR

The global boom in 3D TV and 3D content creation, accelerated by the incredible success of the Hollywood blockbuster 'Avatar', was named one of CNN's top 10 tech 'fails' of 2010 and ultimately collapsed as consumers shunned it. Even though 3D TV was considered a promising next-generation medium, it failed to create an ecosystem because of its evident limitations (people had to buy high-end hardware and wear uncomfortable glasses) and the absence of killer content. From a usability standpoint VR has similar weaknesses: i) users must buy and wear a VR headset, ii) users' adaptation to VR still needs to be supported by more advanced technology, and iii) there are not yet enough VR killer apps. However, VR's most competitive advantage is that it provides immersion incomparable to watching a big-screen 3D TV. Unlike 3D TV, where users watch content from an observer's point of view, VR lets users enter the virtual world as participants and interact closely with the objects in it. Nonetheless, VR market growth still risks stagnating, because current users have not yet experienced sufficient immersion and interaction with VR content.

Baobab Studios’ second entry ‘Asteroids!’

Successful Immersion and Interaction in VR Depend on the Emotional Connection between Content and Users

Among emerging VR content creators, Baobab Studios, a VR animation company whose team includes former Pixar, DreamWorks and Disney employees, is a pioneer of VR storytelling. Baobab Studios' first work, 'Invasion!', is often introduced as a textbook example of VR content that provides immersion and interaction in a VR environment. Its story is an encounter between a bunny and two aliens in a 360-degree view of a snowy field. What sets it apart from other VR videos is that the user enjoys the animation, fully immersed, from the bunny's point of view. When the bunny appears in the first scene, looks straight into the user's eyes and sniffs like a living creature, the user focuses on the bunny's every action, including its gaze and attention, and finally forms an emotional bond, identifying with the bunny. Baobab Studios' second entry, 'Asteroids!', uses a wider variety of devices than 'Invasion!' to create an even stronger emotional connection and interaction between the protagonist characters and the user. First, the pet-robot protagonist catches the user's attention with a growling 'brrr…brrr' and a flickering light, opening an emotional connection. Next, 'Mac', one of the aliens from 'Invasion!', appears and guides the user to turn his or her head left and right while playing ball with the pet robot, throwing the ball right in front of the user's eyes; this helps the user connect emotionally with Mac. Last but not least, when the other alien, 'Cheez', is wiping the spaceship window as the ship veers off course into an asteroid, the user feels, in a moment of extreme tension, as if he or she has become Mac through that organic emotional connection. In short, when the emotional connection between the characters and the user is woven into the storytelling, the user's emotional connection with the content is maximized and genuine emotional interaction is triggered as well.

Meteora, Greece by Jason Blackeye

Making VR More Realistic than Reality beyond Uncanny Valley through Users’ Emotional Interaction

Since VR is experienced through a device, whether it can give users a seamless experience will be critical in determining the success or failure of the VR market. In particular, a seamless experience in a VR environment means being emotionally connected with VR beyond the uncanny valley, by interacting with content on the basis of one's emotions. The uncanny valley here refers to the phenomenon in which, as immersive VR technology reaches a certain level of realism, users feel strong eeriness and revulsion, but once the technology passes the point where reality and VR can no longer be distinguished, users' affinity for VR rises again. The success factor of NextVR's live VR streams and Baobab Studios' VR animations introduced earlier was likewise getting users fully immersed in VR as participants. To achieve that immersion, both deliberately adopted setups that let users feel present and connect emotionally with the content. And yet intentional staging alone can only go so far in creating emotional connection and interaction between users and content. It is therefore essential to make VR more realistic than reality, not only by ensuring an emotional connection between VR content and users but also by providing adaptive interaction based on users' actual feelings. To this end, Looxid Labs has developed a seamless user emotion recognition system for VR that analyzes users' emotions with high accuracy through eye tracking and brainwave analysis. Our goal is to bring users' emotions into the virtual environment and to let users engage emotionally with a virtual character that knows their emotional state. With our emotion recognition system, users' emotional states can be classified with high accuracy from their eye and brain information, and that emotional connection can then be used as a VR interface.


The Sneak Peek into My Brain: Can We Push the Boundaries of Communication in VR Space using Brain-Computer Interface?

Photo by Dmitry Ratushny at Unsplash

On the second day of Facebook's annual developer conference F8, held on April 18 and 19, Regina Dugan, Facebook's vice president of engineering, took the stage as the last speaker and asked a thought-provoking question: 'So what if you could type directly from your brain?'

Regina Dugan was the first woman to serve as director of the Defense Advanced Research Projects Agency (DARPA), the R&D agency of the United States Department of Defense (USDoD), and later moved to Google as vice president of its Advanced Technologies and Projects (ATAP) group. Last year Facebook announced that Regina Dugan had joined the company to lead its secretive R&D team 'Building 8', launched in April 2016. She is responsible for making Facebook's vision of the future real: developing the Brain-Computer Interface (BCI) as a communication tool that keeps people connected with one another in the coming VR/AR era. To achieve that vision, Building 8, a team of 60 neuroscientists, machine learning specialists and systems integration engineers, has been developing a computer interface powered by the human brain that can type 100 words per minute by decoding users' neural activity with optical imaging. In addition to this ambitious plan to text by thinking, she unveiled Facebook's so-called 'silent speech interface', which could let people 'feel' sound through their skin and understand it in their brain, pushing the boundaries of communication beyond the world's languages. How can we make these futuristic BCI ideas, transforming our thoughts into text messages and hearing sound through the skin, happen?


Prelude to BCI: Delivering Information Directly to the Human Brain

The human brain contains 86 billion neurons, each able to fire on the order of a thousand times per second (about 1 kHz). Since these neurons cannot all be active at the same time, divide that figure by a hundred; even so, the brain produces roughly a terabyte of data per second, enough, in Dugan's framing, to stream about 40 HD movies every second. Yet when we pull that data out of the brain and convert it into speech, the effective rate collapses to roughly that of a 1980s dial-up modem. This is what makes speech such an inefficient communication channel. Per Regina Dugan, that is why Facebook has been pursuing futuristic ideas in which people text a friend by having their brainwaves interpreted directly and hear through their skin, as seamless interfaces for the VR/AR era. Such BCI technology would dramatically increase how much information we can retain and transmit. To illustrate, direct brain-to-brain communication would let a Chinese speaker think in Mandarin and a Spanish speaker feel it instantly in Spanish, with the information extracted from the first person's brainwaves transformed into Spanish without the use of speech. At F8 she introduced a demo video as a first outcome of Building 8's initial research: Facebook engineers demonstrated hearing through the skin, using special actuators to deliver specific frequencies so that the skin mimics the cochlea in the ear, letting the person interpret the sound without hearing it directly.
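
Taking the figures quoted above at face value, the back-of-envelope arithmetic behind that claim works out roughly as follows; this is only a sketch of the reasoning in the talk, not a neuroscience result.

```python
neurons = 86e9               # neurons in the human brain
peak_firing_rate_hz = 1e3    # quoted peak rate per neuron (~1 kHz)
active_fraction = 1 / 100    # only about 1 in 100 neurons can be active at once

events_per_second = neurons * peak_firing_rate_hz * active_fraction
print(f"{events_per_second:.1e} firing events per second")
# ~8.6e11 events per second: the same order of magnitude as the
# "terabyte-scale per second" figure quoted above if each event
# carries a few bits of information.
```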

Brain-Computer Interface at Pixabay

Transforming the World by Thinking and Creating Rapport between Human and Computer

The Matrix, a sci-fi film released in 1999, reflected the worldview that reality is actually an extremely complicated VR program. In the movie the humans, trapped inside their pods, have their brains connected to computers hosting a virtual world known as the Matrix. Whereas the film depicted invasive BCI technology connecting the human brain to a virtual world, recent BCI research has leaned toward non-invasive approaches. For example, MIT researchers recently built a robot that reads human thoughts non-invasively through an EEG helmet and then performs advanced tasks such as picking up and sorting objects. A team of researchers at the Technische Universität München developed the 'Brainflight' technology that lets pilots fly a plane by thought alone, wearing a cap connected to EEG electrodes. The BCI lab at Graz University in Austria has also explored controlling and growing a character in the popular video game World of Warcraft using only EEG. As wearable devices that read users' brainwaves, such as those from Emotiv and NeuroSky, continue to emerge, the BCI controller is attracting more attention, particularly from people with disabilities or communication difficulties who want to control other devices and services with their thoughts. Furthermore, this kind of BCI technology can be applied across industries by quantifying states such as stress, emotion, mood and concentration.
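
To make 'controlling a device with thoughts' slightly more concrete, here is a minimal sketch of the simplest possible EEG control scheme: compute the power of one frequency band from a short window of a single channel and trigger an action when it crosses a threshold. The sampling rate, band, and threshold are assumptions; the systems mentioned above use far more sophisticated decoding.

```python
import numpy as np

def band_power(signal: np.ndarray, fs: float, lo_hz: float, hi_hz: float) -> float:
    """Average spectral power of `signal` within the [lo_hz, hi_hz] band."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    in_band = (freqs >= lo_hz) & (freqs <= hi_hz)
    return float(spectrum[in_band].mean())

fs = 256.0                                  # assumed sampling rate in Hz
eeg_window = np.random.randn(int(fs * 2))   # 2 seconds of one channel (placeholder data)

alpha_power = band_power(eeg_window, fs, 8.0, 12.0)
if alpha_power > 50.0:                      # illustrative threshold; real systems calibrate per user
    print("Alpha power is high: issue the 'select' command")
```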

Matrix at Pixabay

Looxid Labs Provides Brand New Interaction in VR Space using Eye and Brain Interface

Looxid Labs is seamlessly integrating non-invasive BCI with VR using users' biological signals, including EEG, eye movements and pupil size. Although VR is a space users enter by stepping from the real into the digital world, simply placing an extra-large display in front of their eyes does little to differentiate the experience from conventional PC- or mobile-based use. So, to make users more immersed and heighten their feeling of presence, adaptive interaction for VR has to be considered alongside comfortable hardware and compelling content. In other words, 'immersion' and 'feeling of presence' are the most important factors in realizing VR, and the user should be both the main character and a participant in the content. Just as the shift from PC to mobile moved user interfaces from keyboard and mouse to touch, VR requires a brand-new interface that reflects users' real-world experience. To provide adaptive interactivity spanning the real and virtual worlds, we treat the emotional connection between content and user as the seamless interface for VR. That is why we are developing an emotion recognition system for VR built on an eye and brain interface, one that interprets users' emotions directly as soon as they put on a VR headset. Our goal is an ultimate BCI in which users' eye and brain information is seamlessly transformed into emotional data, letting them engage emotionally with VR content.

Interactivity at Pixabay


What emotion recognition is — You may not even notice its importance


In January 2016, Apple acquired Emotient, an AI startup that reads people's emotions by analyzing facial expressions. Then, in November 2016, Facebook bought FacioMetrics, a startup that developed face tracking and emotion detection technology. Global IT companies, not only Apple and Facebook but also Google and Microsoft, have been devoting themselves to emotion recognition technologies based on facial recognition and a variety of human physiological signals. The emotion recognition market was worth USD 6.7 billion in 2016 and is estimated to reach USD 67.1 billion by 2021. Recently, even on global crowdfunding platforms such as Kickstarter and Indiegogo, several campaigns have raised money for wearable devices that help people manage their stress and emotions. What is the value of human emotion that startups as well as global IT companies are trying so hard to measure?

Definition of Emotions

In general, emotions are relatively brief conscious experiences characterized by intense mental activity and a high degree of pleasure or displeasure. Despite a long scientific discourse on the definition of emotion, there is still no consensus on its meaning. Psychologist William James and physician Carl Lange proposed the James-Lange theory, in which the physiological change is primary and the emotion is experienced only when the brain reacts to the information received via the body's nervous system. Taken together, recent research on emotion suggests that emotional systems comprise both neural and bodily states, providing immediate means of protecting the individual and maximizing adaptation to survival-salient events. In addition, feelings, often used as a synonym of emotions, are mental experiences of body states and are also regarded as defense mechanisms responding to stimuli generated within an individual's body.

Emotional Response from a Cognitive Neuroscience Perspective

Cognitive neuroscience explains emotions as responses of the human limbic system to stimuli detected by the sensory systems, the five senses of sight (ophthalmoception), smell (olfacception), touch (tactioception), taste (gustaoception) and hearing (audioception), and points to the amygdala as the primary emotional engine of the brain, the core of its 'emotion circuits'. Interestingly, the responses of these neuroanatomical emotion circuits, and of the autonomic nervous system that accompanies them, may differ depending on whether awareness is conscious or unconscious: the former is the emotional response produced when an individual consciously perceives the stimulus, the latter the response produced when the stimulus is not consciously registered, and the autonomic differences can be even larger under unconscious awareness. The autonomic response to an emotional stimulus unfolds in the order Perception-Valuation-Action (PVA), which makes it possible to analyze emotions by measuring the bodily changes that follow emotional shifts through the antagonism of the sympathetic and parasympathetic nerves.

Reading Emotions from Physiological Reactions ‘Inside Out’

In the 1970s, the American psychologist Paul Ekman classified basic human emotions into six categories: anger, disgust, fear, happiness, sadness and surprise. In the movie 'Inside Out', for which Ekman actually served as a consultant, five of these emotions appear (all except surprise). Recently there have been various attempts to classify emotions by measuring the bodily responses driven by sympathetic and parasympathetic antagonism. To illustrate: when a person is excited, the muscles tense, the palms sweat, and both heart rate and body temperature rise. Three general indicators can capture such changes for emotion recognition. First, electromyography (EMG) measures the electrical impulses of muscles during contraction. Second, the skin conductance response (SCR), also known as the galvanic skin response (GSR), is the phenomenon in which the skin momentarily becomes a better conductor of electricity when a person is tense. Third, an elevated heart rate in the electrocardiogram (ECG) signal indicates a state of stress or frustration. Beyond these three indicators, changes in pupil size and reactivity are closely related to emotional changes: pupils dilate slightly in response to any exciting or interesting stimulus, while pupil size tends to decrease in response to an unpleasant one.
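
As a toy illustration of how the indicators above (muscle tension from EMG, skin conductance, heart rate from ECG, and pupil size) might be fused into a single arousal estimate, the sketch below simply normalizes each reading against an assumed resting-to-peak range and averages them. The ranges and the equal weighting are invented for illustration; they are not a validated model.

```python
def normalize(value: float, resting: float, peak: float) -> float:
    """Map a raw reading onto 0..1 given an assumed resting-to-peak range."""
    return max(0.0, min(1.0, (value - resting) / (peak - resting)))

def arousal_score(emg_uv: float, gsr_us: float, heart_rate_bpm: float, pupil_mm: float) -> float:
    """Crude arousal estimate: the average of four normalized indicators."""
    features = [
        normalize(emg_uv, 5.0, 50.0),            # muscle tension (EMG, microvolts)
        normalize(gsr_us, 1.0, 20.0),            # skin conductance (microsiemens)
        normalize(heart_rate_bpm, 60.0, 120.0),  # heart rate from the ECG
        normalize(pupil_mm, 2.5, 6.0),           # pupil diameter (millimetres)
    ]
    return sum(features) / len(features)

print(arousal_score(emg_uv=30.0, gsr_us=8.0, heart_rate_bpm=95.0, pupil_mm=4.5))
```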

Accuracy in Emotion Recognition using Brainwaves Still Remains Insufficient

Among emotion recognition technologies based on human physiological responses, the electroencephalogram (EEG), recorded directly at the scalp, offers better accuracy than indirect information such as facial expressions or voice, which capture only the emotions a person outwardly reveals. Existing EEG-based emotion recognition techniques mainly apply basic machine learning algorithms: they extract predefined features from the brainwave signal and map them onto emotion indexes defined in prior research. However, despite its relatively high accuracy, this traditional EEG analytics approach has limitations for emotion classification. It is difficult to go beyond a certain level of accuracy because of the low quality of EEG signals and the insufficient quantity of data, and the emotion indexes themselves are not easy to define.
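
The 'traditional' pipeline described above, hand-crafted EEG features mapped onto predefined emotion labels with a basic supervised learner, might look roughly like this sketch. The band definitions are common conventions and the SVM is just one typical choice; this is not any specific published method.

```python
import numpy as np
from sklearn.svm import SVC

BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}  # conventional EEG bands (Hz)

def eeg_features(window: np.ndarray, fs: float) -> np.ndarray:
    """One feature per band: log power of the window within that band."""
    freqs = np.fft.rfftfreq(window.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(window)) ** 2
    return np.array([np.log(power[(freqs >= lo) & (freqs < hi)].sum() + 1e-12)
                     for lo, hi in BANDS.values()])

fs = 128.0
# Placeholder training data: 100 two-second single-channel windows with binary labels.
windows = np.random.randn(100, int(fs * 2))
labels = np.random.randint(0, 2, size=100)   # e.g. 0 = low arousal, 1 = high arousal

X = np.stack([eeg_features(w, fs) for w in windows])
classifier = SVC(kernel="rbf").fit(X, labels)
print(classifier.predict(X[:5]))
```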

Looxid Labs Enhances Emotion Recognition Accuracy by Applying Deep Learning Algorithms to EEG Signals

As a result, supervised learning that matches predefined emotion indexes to well-known EEG features has hit a ceiling, even as machine learning technology has advanced. To overcome these limitations, Looxid Labs aims to extract a variety of emotion indexes from many people with deep learning algorithms based on representation learning. In other words, we are developing a technology that improves the accuracy of emotion classification by finding hidden patterns in the EEG signals themselves. Changes in pupil size and reactivity are combined with the EEG signals to improve accuracy further. With Looxid Labs' technology, individual emotional states, which are otherwise hard to generalize and objectify, can be turned into business indexes tailored to the needs of various industries.
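
To make 'finding hidden patterns in the EEG signals themselves' concrete, here is a minimal representation-learning sketch: a small autoencoder compresses raw EEG windows into a latent code, and that learned code, rather than hand-picked band features, would then feed a downstream emotion classifier. This is a generic illustration, not Looxid Labs' actual architecture; the window length and layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

WINDOW = 256  # samples per EEG window (assumed)

class EEGAutoencoder(nn.Module):
    """Compress a raw EEG window into a small latent code and reconstruct it."""
    def __init__(self, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(WINDOW, 64), nn.ReLU(), nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, WINDOW))

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

model = EEGAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

windows = torch.randn(512, WINDOW)   # placeholder EEG windows
for _ in range(10):                  # unsupervised training: learn to reconstruct the signal
    reconstruction, _ = model(windows)
    loss = loss_fn(reconstruction, windows)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# The learned latent codes become the features for a downstream emotion classifier.
_, codes = model(windows)
print(codes.shape)  # torch.Size([512, 16])
```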


How We Can Connect Virtual World to Real World


The key message of the Google I/O conference, held May 17 to 19 in Mountain View, California, was Artificial Intelligence (AI). At the event Google's CEO Sundar Pichai unveiled brand-new services, including 'Google Lens', an AI camera app using computer vision, and 'Google Home', a speaker carrying the AI assistant 'Google Assistant', and declared the transition to an 'AI First' era. The essence of 'AI First' is to develop products and services that interact with people more seamlessly in the real world. 'Google Assistant', a service steeped in this vision, is a representative NUI (Natural User Interface) service: it lets users search for information and launch applications by voice, commanding the computer without a mouse or fingers.

NUI, Next Generation Interface for ‘AI First’ Era

A 'NUI' is a user interface technology that controls digital devices through natural human modalities, sensory, behavioral and cognitive, rather than through dedicated input hardware. Typical NUIs include gesture interfaces that recognize human movements, multi-touch interfaces that recognize various touches, and sensory interfaces that recognize human intention. As VR strives to feel more real than reality itself and its market grows fast, the NUI is becoming ever more important as the interactive element connecting users with VR content and delivering a valuable experience. And since an HMD (Head-Mounted Display) is the main device used to implement VR and AR, the NUI is the interface that must give users a high degree of freedom and optimal usability.

NUI Technology Available as a VR Controller

Users of the Oculus Rift and HTC Vive, the most widely used VR hardware, have traditionally relied on touch controllers shaped for the hands, remote controllers, and motion controllers such as gloves. Recently, one typical NUI for VR control has been the gesture-based interface that recognizes a user's hand movements. Camera-based systems such as Microsoft's Kinect and Leap Motion are the most widely used gesture-interface products, and Myo is a gesture-control armband developed by Thalmic Labs. Leap Motion's technology in particular offers users fast and accurate motion recognition. Voice interfaces have not yet been widely used as VR controllers, but AI assistant services such as Apple's 'Siri', 'Google Assistant', Amazon's 'Alexa' and Samsung's 'Bixby' are built on them. Voice interfaces are growing beyond simple keyword recognition into AI services that include speech recognition, semantic understanding and contextual reasoning, as well as IoT services that let people interact with a wide variety of devices.

World-leading Companies are Working on Brain Interfaces

Recently, global tech leaders such as Elon Musk and Facebook announced plans to develop sensory interfaces that let people interact with computers just by thinking. In March, Tesla's CEO Elon Musk opened the door first, revealing the launch of Neuralink, a company dedicated to connecting human brains with computers, and introducing his 'neural lace' plan: a technology that would let computers understand human thoughts, and upload or download them, by linking the brain to a computer invasively, with a microchip placed in the brain. Then, at Facebook's annual developer conference F8, held on April 18 and 19, Regina Dugan, head of Facebook's R&D division 'Building 8', revealed that Facebook is building a brain-computer interface for 'brain typing' by scanning users' brainwaves. She added that Building 8 has developed bone-conduction-based hardware and software that lets users hear through their skin. For Facebook, which regards VR and AR as the next-generation platform to replace smartphones, pioneering direct brain interface technologies that can control VR and AR environments and mediate interaction through the human brain will be a major challenge.

Looxid Labs Creates an Interconnected Channel between Virtual World and Real World

Looxid Labs is seamlessly integrating a user emotion recognition system with the VR environment by using eye interfaces as well as brain interfaces, the same class of technology now drawing attention from global tech leaders including Elon Musk and Facebook. Our emotion recognition system for VR users consists of a miniaturized, embeddable sensor module that detects eye and brain activity, an emotion recognition API that delivers robust eye and brain signals in real time, and our machine learning algorithm that detects users' emotional states and classifies them accurately into business indexes. We plan to use this machine learning technology to feed users' eye and brain data into VR content and enable VR users to interact with that content across various fields. Our goal is to be the world-leading company that creates an interconnected reality between the virtual and real worlds through emotional interaction.
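
To show how a content developer might consume the output of such a system, here is a purely hypothetical sketch in which a stream of classified emotional states drives a parameter of the VR scene. The `EmotionSample` structure and `read_emotion` function are invented for illustration and do not represent Looxid Labs' actual API or emotion model.

```python
import random
from dataclasses import dataclass

@dataclass
class EmotionSample:
    """One classified reading from the (hypothetical) eye-and-brain pipeline."""
    valence: float  # -1.0 (negative) .. 1.0 (positive)
    arousal: float  #  0.0 (calm)     .. 1.0 (excited)

def read_emotion() -> EmotionSample:
    # Placeholder: a real system would deliver states from the sensor module and API.
    return EmotionSample(valence=random.uniform(-1, 1), arousal=random.uniform(0, 1))

def update_scene(sample: EmotionSample) -> None:
    """Adapt the VR content to the user's state, e.g. pacing or lighting."""
    if sample.arousal > 0.8:
        print("User is highly aroused: slow the pacing and soften the lighting")
    elif sample.valence < -0.5:
        print("User is reacting negatively: switch to a calmer scene variant")
    else:
        print("Keep the current scene")

for _ in range(3):  # in a real app this would run every frame or on a timer
    update_scene(read_emotion())
```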
