Could VR Interface Enhance Users’ Sense of Immersion and Feeling of Presence in VR?

By | BLOG

Last June, it was reported that Apple had acquired SMI, a German firm and a leader in eye-tracking technology. Embedded in headsets such as the HTC Vive and Samsung Gear VR, SMI's technology helped establish 'foveated rendering', a method that renders the central vision area at high resolution while blurring the rest. With this move, there is little doubt that Apple intends to spur the development of smart glasses built on AR/VR technologies. Since last year there have been three takeovers of eye-tracking firms by global IT companies, all scrambling to improve the VR user experience. Last October, Google acquired Eyefluence, a startup whose technology lets VR users switch screens or take specific actions through eye movements alone. In the wake of that deal, Facebook's VR unit Oculus acquired The EyeTribe to solidify its dominance in the VR market. Why would global IT companies seeking VR market dominance have their eyes on eye-tracking technology?

TechCrunch | Apple acquires SMI eye-tracking company (Posted Jun 26, 2017 by Lucas Matney)

Novel Approaches for VR Interfaces

VR creates an environment that maximizes users' immersion and sense of presence by crossing the boundaries of time and space, and it is characterized by users' interaction with the simulated surroundings. A device that controls VR effectively and conveniently is therefore essential to deepen immersion and close the gap between reality and virtual reality. In other words, it is of great importance to develop interfaces that provide a visual experience indistinguishable from reality at a perceptual level, deliver physical feedback on a user's actions through a haptic interface, and reflect a user's emotional changes during the VR experience. Among the five senses, VR devices stimulate sight most intensively because roughly a quarter of the cerebral cortex is devoted to vision and image processing, which also makes us vulnerable to optical illusions: a user perceives VR through light hitting the retina and reaching the brain via the optic nerve. These demands explain why the global 'big players' and startups are obsessed with seizing eye-tracking technology for visual interface development. However, since the essence of VR depends on how VR content stimulates the brain and how the brain interprets and responds to that stimulation, interest is growing fast in the Brain-Computer Interface (BCI) alongside visual and haptic interfaces. (FYI, see the previous story 'The Sneak Peek into My Brain: Can We Push the Boundaries of Communication in VR Space using Brain-Computer Interface?')

Anatole Lécuyer (2010) Using eyes, hands and brain for 3D interaction with virtual environments: a perception-based approach. HDR defense

Visual Interface — Visual Attention and Feedback

Camera-based VR interfaces, one type of visual interface, include Leap Motion's controller, which detects a user's hand positions and movements, as well as SoftKinetic (acquired by Sony) and Nimble VR (acquired by Oculus). A familiar illustration appears in Minority Report: a computer recognizes Tom Cruise's in-air hand gestures and acts on them as inputs. If eye movements can act as inputs instead, via a camera embedded in the lenses of a VR headset, 'hands-free VR' will be possible in the near future. Tobii, SMI, Eyefluence and The EyeTribe have been the leading developers of eye-tracking technology, and all but Tobii were recently acquired by Apple, Google and Facebook respectively. Similarly FOVE, with eye-tracking embedded in its VR headset, has shown how a visual interface can deepen immersion and interaction through the previously mentioned 'foveated rendering' technique and direct eye contact between a virtual character and the user. In addition, BinaryVR's technology, which uses a 3D camera to recognize a user's facial expression and animate a 3D avatar in VR, shows how facial recognition can converge with the visual interface.
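To make foveated rendering concrete, here is a minimal toy sketch in Python with NumPy (an illustration of the general idea, not SMI's or FOVE's actual algorithm): the frame keeps full resolution inside a disc around the gaze point reported by the eye tracker, while the periphery is coarsened by block averaging as a crude stand-in for low-resolution rendering.

```python
import numpy as np

def foveated_render(frame, gaze_xy, fovea_radius, block=8):
    """Toy foveated-rendering pass: keep full resolution inside the
    foveal region around the gaze point, coarsen the periphery by
    averaging block-sized pixel tiles (a crude stand-in for rendering
    the periphery at lower resolution)."""
    h, w = frame.shape[:2]
    out = frame.astype(float).copy()

    # Coarsen the whole frame tile by tile ...
    for y in range(0, h, block):
        for x in range(0, w, block):
            out[y:y + block, x:x + block] = frame[y:y + block, x:x + block].mean()

    # ... then restore full resolution inside the foveal disc.
    ys, xs = np.mgrid[0:h, 0:w]
    gx, gy = gaze_xy
    fovea = (xs - gx) ** 2 + (ys - gy) ** 2 <= fovea_radius ** 2
    out[fovea] = frame[fovea]
    return out
```

In a real renderer the savings come from never shading the peripheral pixels at full rate in the first place; this post-hoc blur only illustrates where the detail budget goes.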

Haptic Interface — Motor Action and Haptic Feedback

A haptic interface, a touch-based interface through which a user can feel the movement and texture of objects in VR, is indispensable for intensifying immersion and the sense of presence. Ultrahaptics, one of the world's leading haptic interface companies, has gesture-control technology that uses ultrasonic waves to recognize 3D motion in mid-air and deliver airborne tactile feedback. While Ultrahaptics creates tactile sensations with ultrasound, Tactical Haptics attaches its Reactive Grip motion controller to an ordinary VR controller as an auxiliary channel for haptic feedback. Gloveone, which previously raised crowdfunding on Kickstarter, lets users manipulate objects in VR with its haptic glove. DEXMO, a tactile device developed by Dexta Robotics, varies the direction and magnitude of the applied force dramatically according to the hardness of an object: weak feedback when a soft object such as a sponge or a cake is touched, and strong feedback for a hard object such as a brick or a pipe. Last but not least, KOR-FX goes beyond traditional hand-based haptic feedback, letting a user feel vibrations across the entire body through a vest designed for immersive RPG games.
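The stiffness-dependent feedback described for DEXMO can be sketched with a simple penalty-based force model, the standard textbook approach in haptic rendering (the stiffness values below are illustrative assumptions, not Dexta Robotics' specifications):

```python
def haptic_force(penetration_m, stiffness_n_per_m, max_force_n=20.0):
    """Penalty-based haptic feedback: the rendered force grows linearly
    with how far the user's fingertip has penetrated the virtual
    object's surface (Hooke's law, F = k * x), capped at the actuator's
    maximum output force."""
    if penetration_m <= 0:  # no contact, no force
        return 0.0
    return min(stiffness_n_per_m * penetration_m, max_force_n)

# Illustrative stiffness constants (assumptions, not device specs):
SPONGE = 100.0     # soft object -> weak resistance
BRICK = 10000.0    # hard object -> strong, almost rigid resistance
```

The same 5 mm penetration thus yields a gentle 0.5 N push against a sponge but saturates the actuator against a brick, which is exactly the soft-versus-hard contrast the paragraph describes.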

CNN | Devices with feeling: new tech creates buttons and shapes in mid-air (Posted April 1, 2015 by Jacopo Prisco)

Brain-Computer Interface, the Ultimate VR Interface through User Emotion Recognition

The visual and haptic interfaces introduced above each focus on inducing emotional immersion through lifelike feedback to a user's vision and touch in VR. Since the brain is, neuroscientifically speaking, the backbone of our senses and perception, the brain-computer interface naturally follows whenever we discuss visual and haptic interfaces, just as the thread follows the needle. The ultimate version of VR should be able to interpret the perceptions and sensations created by the brain's tens of billions of neurons, and take specific actions in VR by reading neural signals, the electrical traces of brain activity. In this context, Facebook announced at this year's F8 developer conference that it is working on BCI technology that can translate thoughts into text messages and sound into tactile information. Likewise, Looxid Labs is seamlessly integrating a non-invasive BCI with VR to enhance users' immersion and sense of presence using physiological signals such as the electroencephalogram (EEG), eye movements and pupil size. Looxid Labs aims to carry BCI forward into the realm of VR by developing an ultimate emotion recognition system built on an eye-and-brain interface, one that lets users engage emotionally with VR content and interprets their emotions directly, simply by wearing a VR headset.
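As a rough illustration of how EEG and pupil signals could map onto emotional states, here is a toy rule-based sketch (not Looxid Labs' actual system; the features, thresholds and labels are assumptions). It leans on two commonly cited heuristics: frontal alpha asymmetry as a proxy for valence, and pupil dilation as a proxy for arousal.

```python
def classify_emotion(frontal_alpha_left, frontal_alpha_right, pupil_dilation):
    """Toy valence/arousal classifier. Assumptions: since alpha power
    inversely indexes cortical activity, relatively higher right-side
    alpha implies greater left-frontal activity, heuristically linked
    to positive valence; pupil dilation above a baseline-relative
    threshold indicates high arousal."""
    valence = "positive" if frontal_alpha_right > frontal_alpha_left else "negative"
    arousal = "high" if pupil_dilation > 0.15 else "low"
    quadrant = {
        ("positive", "high"): "excited",
        ("positive", "low"): "calm",
        ("negative", "high"): "stressed",
        ("negative", "low"): "bored",
    }
    return quadrant[(valence, arousal)]
```

A production system would of course replace these hand-set rules with a classifier trained on labeled physiological data, but the input/output shape, raw signals in, an emotional state out, is the same.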


How to Unlock VR’s Potential


Unleashing Emotional Connection between VR Contents and Users

Golden State Warriors’ Kevin Durant shoots a 3-point basket over Cleveland Cavaliers’ LeBron James for the go ahead basket during the fourth quarter of Game 3 of the NBA Finals (By GIESON CACHO at The Mercury News)

In Game 3 of the 2017 NBA Finals, the Golden State Warriors and the Cleveland Cavaliers kept the game heated until the last minute. The best highlight of Game 3 was surely Kevin Durant's clutch pull-up 3-pointer that turned the game around for the Warriors. Watching those highlights in the NextVR app, the streaming app of a leading VR broadcaster of live events, you can get fully immersed and feel present, as if in a real courtside NBA seat. NextVR is a powerhouse of live VR streaming technology, best known for beaming live VR footage of a Manchester United vs. FC Barcelona soccer match in July 2015; vivid VR moments from that broadcast, such as passes, offside calls, a ball flying straight at the viewer and a front block tackle, helped it win an exclusive partnership with the NBA to show games in VR. NextVR actually launched in 2009 as the 3D TV production company Next3D, changed course toward the VR industry after 3D TV failed in the market, and now provides professional live VR streaming content including sports games and live concerts.

NextVR says its key success factor is giving users a feeling of presence and an emotional experience as participants in the stadium, not mere spectators of a 3D broadcast, so they get a more lifelike experience than other 360-degree and VR videos offer. For example, during the live stream of a sports game, NextVR uses a few techniques to make users feel as if they were sitting in the front row:

  • capturing the players warming up on the field, and other attractions at the stadium, from the point of view of the courtside seats right next to the players;
  • providing far more engaging and dramatic views of the game, played out right before the user's eyes, by changing the camera angle with the flow of the game; and
  • guiding the user's visual attention through the announcer's commentary.

According to a CNET interview with NextVR's executive chairman Brad Allen, average viewing time spiked from 7 minutes to 42 minutes as the NBA season progressed. He also hinted at the importance of giving users incentives to keep wearing inconvenient VR headsets even when they have every reason to take them off because they are so big and bulky. Back to basics, then: how can we deliver a user experience so good that it outweighs the inconvenience of wearing a VR headset?

3D vs. VR: Immersion and Interaction are the Most Competitive Advantages of VR

The global boom in 3D TV and 3D content creation, accelerated by the incredible success of the Hollywood blockbuster 'Avatar', was named one of CNN's top 10 tech 'fails' of 2010 and ultimately collapsed as consumers shunned it. Though once considered the next generation of multimedia, 3D TV failed to create an ecosystem because of its evident limitations, requiring people both to buy high-end hardware and to wear uncomfortable glasses, and because of the absence of killer content. From a usability standpoint, VR has weaknesses similar to 3D TV's: i) users must buy and wear a VR headset, ii) user adaptation to VR still needs more advanced supporting technology, and iii) VR killer apps remain scarce. VR's most competitive advantage, however, is immersion incomparable to watching 3D on a big screen. Unlike 3D TV, where users watch content from an observer's point of view, VR lets users enter the virtual world as participants and interact closely with the objects in it. Nonetheless, VR market growth risks stagnation because current VR users have not yet experienced sufficient immersion and interaction with VR content.

Baobab Studios’ second entry ‘Asteroids!’

Successful Immersion and Interaction in VR Depend on the Emotional Connection between Content and Users

Among emerging VR content creators, Baobab Studios, a VR animation company staffed by former Pixar, DreamWorks and Disney employees, is a pioneer of VR storytelling. Baobab Studios' first work, 'Invasion', is often cited as a textbook example of VR content that provides both immersion and interaction. Its story is the encounter between a bunny and two aliens in a 360-degree view of a snowy field. What sets it apart from other VR videos is that the user enjoys the animation, fully immersed, from the bunny's point of view. When the bunny appears in the first scene, looks straight into the user's eyes and sniffs like a living creature, the user attends to everything the bunny does, its gaze included, and finally forms an emotional bond, identifying with the bunny. Baobab Studios' second entry, 'Asteroids!', uses a wider variety of setups than 'Invasion' to create an even stronger emotional connection and interaction between the protagonists and the user. First, the pet robot protagonist catches the user's attention with a groaning "brrr... brrr" and a flickering light, opening an emotional channel. Next, 'Mac', one of the aliens from 'Invasion', appears and guides the user to turn his or her head left and right by playing ball with the pet robot and throwing the ball right in front of the user's eyes, which helps the user connect emotionally with 'Mac'. Last but not least, when the other alien, 'Cheez', wipes the spaceship window as the ship bumps into an asteroid off course, the user feels like becoming 'Mac' in a moment of extreme tension through that organic, emotional connection.
In short, when the emotional connection between the characters in VR content and the user is woven into the storytelling, the user's emotional connection with the content is maximized, and genuine emotional interaction is triggered as well.

Meteora, Greece by Jason Blackeye

Making VR More Realistic than Reality beyond Uncanny Valley through Users’ Emotional Interaction

Since VR is experienced through a device, whether it can offer users a seamless experience is critical to the success or failure of the VR market. In particular, a seamless experience in VR means being emotionally connected with it beyond the uncanny valley, interacting with the content on the basis of one's emotions. The uncanny valley here refers to the phenomenon in which, as immersive VR technology approaches realism, users feel strong eeriness and revulsion at a certain level, yet once the technology exceeds the point where reality and VR can no longer be distinguished, users' favor and affinity toward VR rise again. The success of NextVR's live VR streams and Baobab Studios' VR animations likewise came from getting users fully immersed in VR as participants: both deliberately adopted setups that let users feel present and emotionally connect with the content. And yet such intentional setups can only go so far in creating emotional connection and interaction between users and content. To make VR more realistic than reality, therefore, it is essential not only to secure an emotional connection between VR content and users but also to provide adaptive interaction based on users' actual feelings. In this vein, Looxid Labs has developed a seamless user emotion recognition system for VR that uses eye tracking and brainwave analysis as media for analyzing users' emotions with high accuracy. Our goal is to bring users' emotions into the virtual environment and let users engage emotionally with a virtual character that knows their emotional states. Through our emotion recognition system, users' emotional states can be classified with high accuracy from their eye and brain signals, and that emotional information can then drive the VR interface.
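What "adaptive interaction based on users' actual feelings" might look like in practice can be sketched as a simple feedback rule: once an emotional state has been estimated, the VR scene adjusts itself in response. The emotion labels and scene parameters below are hypothetical illustrations, not part of any shipping system.

```python
def adapt_scene(emotion, scene):
    """Hypothetical adaptive-interaction rule: adjust VR scene
    parameters according to the user's detected emotional state, so
    the experience responds to the user instead of following a fixed
    script. Emotion labels and scene fields are illustrative."""
    scene = dict(scene)  # avoid mutating the caller's scene state
    if emotion == "stressed":
        scene["pace"] = "slower"                   # ease the tension
        scene["ambient_light"] = min(1.0, scene["ambient_light"] + 0.2)
    elif emotion == "bored":
        scene["pace"] = "faster"                   # raise the stakes
        scene["trigger_event"] = True              # cue a story beat
    return scene
```

Closing this loop, from physiological signals to an emotional state to content that reacts to it, is what distinguishes emotion-driven interaction from the intentional but fixed setups described above.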
