Last June, it was reported that Apple is planning to acquire SMI, a German firm and a leader in the field of eye-tracking technology. SMI's technology, demonstrated in headsets such as the HTC Vive and Samsung Gear VR, helped establish 'foveated rendering', a method that renders the central vision area at high resolution while blurring the rest. With this reported move, there is little doubt that Apple intends to accelerate the development of smart glasses built on AR/VR technologies. Since last year, there have been three takeovers of eye-tracking firms by global IT companies, and many are now scrambling to buy eye-tracking companies to improve the VR user experience. Last October, Google acquired Eyefluence, a startup whose technology lets VR users switch screens or trigger specific actions through their eye movements. In the wake of that deal, Facebook's VR unit Oculus acquired The EyeTribe to solidify its dominance in the VR market. Why would global IT companies seeking VR market dominance have their eyes on eye-tracking technology?
Novel Approaches for VR Interfaces
VR creates an environment that maximizes users' immersion and sense of presence by crossing the boundaries of time and space, and it is characterized by users' interaction with their simulated surroundings. A device that controls VR effectively and conveniently is therefore essential to deepen immersion and close the gap between reality and virtual reality. In other words, it is of great importance to develop an interface that provides visuals indistinguishable from reality at the perceptual level, returns physical feedback on a user's actions through a haptic interface, and reflects a user's emotional changes during the VR experience. The reason VR devices stimulate sight most intensively of the five senses has to do with the fact that roughly one quarter of the cerebral cortex is devoted to vision and image processing, which also makes us vulnerable to optical illusions. To illustrate, a user perceives VR through a process in which light hits the retina and reaches the brain via the optic nerve. These demands should explain why the global 'big players' and startups are obsessed with seizing eye-tracking technology for visual interface development. However, since the essence of VR depends on how VR content stimulates the brain's nerves and how the brain interprets and responds to those stimulations, interest is growing fast in the Brain-Computer Interface (BCI) alongside the visual and haptic interfaces. (FYI, jump back to the previous story 'The Sneak Peek into My Brain: Can We Push the Boundaries of Communication in VR Space using Brain-Computer Interface?')
Visual Interface — Visual Attention and Feedback
Camera-based VR interfaces, one category of visual interface, include Leap Motion's controller, which detects a user's hand positions and movements, as well as SoftKinetic (Sony) and Nimble VR (Oculus). A familiar illustration of the visual interface appears in Minority Report: a computer recognizes Tom Cruise's in-air hand gestures and acts on those inputs. Going beyond such body movements, 'hands-free VR' will be possible in the near future if eye movements can act as inputs, using a camera embedded near the lenses of the VR headset. Tobii, SMI, Eyefluence, and The EyeTribe have been the leading companies developing eye-tracking technology and, except for Tobii, all of them were recently acquired by Apple, Google, and Facebook respectively. Similarly, FOVE, whose VR headset has eye tracking built in, has shown the potential of the visual interface to deepen a user's immersion and interaction through the previously mentioned 'foveated rendering' technique and direct eye contact between a virtual character and a user. In addition, BinaryVR's technology, which uses a 3D camera to recognize a user's facial expression and animate a 3D avatar in VR, is another example of convergence, combining facial recognition with the visual interface.
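To make the foveated-rendering idea concrete, here is a minimal sketch of the core logic: given the tracked gaze point, shading detail falls off with angular distance from it. The function name, thresholds, and normalized-coordinate convention are illustrative assumptions, not any vendor's SDK.

```python
import math

def shading_level(pixel, gaze, fovea_radius=0.1, max_level=3):
    """Pick a level of detail for a pixel given the current gaze point.

    `pixel` and `gaze` are (x, y) tuples in normalized screen
    coordinates [0, 1]. Level 0 means full resolution (the foveal
    region); higher levels mean progressively coarser shading in the
    periphery. Constants here are illustrative only.
    """
    dist = math.dist(pixel, gaze)
    # Each additional fovea_radius of eccentricity drops one detail level.
    level = int(dist / fovea_radius)
    return min(level, max_level)

# A pixel under the gaze point is shaded at full detail...
print(shading_level((0.5, 0.5), (0.5, 0.5)))  # 0
# ...while a far-corner pixel falls back to the coarsest level.
print(shading_level((0.0, 0.0), (0.5, 0.5)))  # 3
```

In a real renderer this index would drive per-region resolution or shading rate; the payoff is that most of the frame can be rendered cheaply because peripheral vision cannot resolve the detail anyway.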
Haptic Interface — Motor Action and Haptic Feedback
A haptic interface, a touch-based interface through which a user can feel the movement and texture of objects in VR, is indispensable for intensifying immersion and the sense of presence. Ultrahaptics is one of the world's leading developers of haptic interfaces: its gesture-control technology uses ultrasonic waves to recognize 3D motion in mid-air and deliver mid-air tactile feedback. While Ultrahaptics creates tactile sensation with ultrasound, Tactical Haptics attaches its Reactive Grip motion controller to an ordinary VR controller as an auxiliary channel for haptic feedback. Gloveone, which previously raised crowdfunding on Kickstarter, lets users manipulate objects in VR with its haptic glove. DEXMO, a force-feedback device developed by Dexta Robotics, varies the direction and magnitude of the applied force dramatically with the hardness of an object, providing weak feedback when a soft object such as a sponge or a cake is touched but strong feedback for a hard object such as a brick or a pipe. Last but not least, KOR-FX goes beyond traditional hand-based haptic feedback, letting a user feel vibrations with his or her entire body through a vest designed for immersive RPG games.
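The hardness-dependent feedback described for DEXMO can be sketched with a simple spring model: resistive force grows with how far the finger presses into the virtual object, scaled by the object's stiffness and clamped to what the actuator can deliver. All constants and names below are illustrative assumptions, not Dexta Robotics' actual control scheme.

```python
def feedback_force(penetration_m, stiffness, max_force_n=20.0):
    """Resistive force (in newtons) for a virtual contact.

    penetration_m: how far the fingertip has pressed into the virtual
    object, in meters. stiffness: spring constant in N/m; a sponge
    gets a small value, a brick a large one. Illustrative sketch only.
    """
    force = stiffness * penetration_m      # Hooke's law: F = k * x
    return min(force, max_force_n)         # clamp to the actuator limit

sponge = feedback_force(0.01, stiffness=50)     # soft object: 0.5 N
brick = feedback_force(0.01, stiffness=5000)    # hard object: clamped to 20.0 N
print(sponge, brick)
```

The clamp is the interesting design point: a rigid object's "infinite" stiffness is approximated by saturating the actuator, which is what makes a brick feel immovable while a sponge yields.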
Brain-Computer Interface, the Ultimate VR Interface through User Emotion Recognition
The interfaces introduced so far, visual and haptic, each focus on inducing emotional immersion with lifelike feedback to a user's vision and touch in VR. Since the brain is, neuroscientifically speaking, the backbone of our senses and perception, the brain-computer interface naturally comes to mind whenever we discuss the visual and haptic interfaces, just as "where the needle goes, the thread follows." The ultimate version of VR should be able to interpret the perceptions and sensations that the brain's tens of billions of neurons create, and trigger specific actions in VR by reading neural signals, the product of the brain's electrical activity. In this context, Facebook announced at this year's F8 developer conference that it is working on BCI technology that can translate thoughts into text messages and sound into tactile information. Like Facebook, Looxid Labs is seamlessly integrating a non-invasive BCI with VR, enhancing users' immersion and sense of presence through physiological signals such as the electroencephalogram (EEG), eye tracking, and pupil size. Looxid Labs aims to carry current BCI forward into the realm of VR by developing an ultimate emotion recognition system built on an eye-and-brain interface, one that allows users to engage emotionally with VR content and that interprets users' emotions directly, simply by their wearing a VR headset.
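As a rough illustration of the kind of signal such a system starts from, EEG analysis commonly reduces a raw channel to power in standard frequency bands (alpha, beta, and so on), which then feed an emotion or engagement model. The snippet below is a generic textbook-style sketch, not Looxid Labs' actual pipeline; all names and parameters are assumptions.

```python
import numpy as np

def band_power(eeg, fs, band):
    """Average spectral power of one EEG channel in a frequency band.

    eeg: 1-D array of samples; fs: sampling rate in Hz;
    band: (low_hz, high_hz). A common building block for
    emotion/engagement features; illustrative sketch only.
    """
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    power = np.abs(np.fft.rfft(eeg)) ** 2
    mask = (freqs >= band[0]) & (freqs < band[1])
    return power[mask].mean()

# Synthetic 10 Hz oscillation (alpha band) sampled at 256 Hz:
fs = 256
t = np.arange(fs) / fs
eeg = np.sin(2 * np.pi * 10 * t)

alpha = band_power(eeg, fs, (8, 13))
beta = band_power(eeg, fs, (13, 30))
print(alpha > beta)  # True: the signal's energy sits in the alpha band
```

In practice these band powers, together with eye-tracking features such as pupil size, would be fed into a trained classifier; the hard part is mapping such noisy physiological features onto emotional states reliably.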