ARTICLE BY J.C. KUANG
The honeymoon has passed; after a glamorous but lumbering start, virtual and augmented reality have secured their footing as new and viable digital media for creating, consuming, and sharing content. Regardless of your personal opinion, some of the brightest minds in the tech space, from the largest conglomerates to the smallest startups, are racing to stake a claim on emergent standards in hardware, software platforms, and new media ecosystems.
As this struggle continues, it can be easy for organizations hoping to enter the XR market to lose sight of important factors in constructing an engaging and accessible experience. When so much concern is devoted to a headset’s PPI or FOV, the ability to create a seamless experience that maximizes presence and efficacy for the user can be lost. After all, what is the point of a headset that delivers a top-of-the-line XR experience if it only works while staring directly ahead, or if an ever-present bundle of cables trips you up every few steps?
To this end, companies such as LEAP Motion create virtual reality interfaces that mitigate the problems traditionally associated with immersive media experiences. For them, presence is key, and manipulating the virtual should be as simple as manipulating the real.
LEAP Motion’s flagship product is, in simplest terms, an extremely compact camera. However, the exceptional engineering behind its platform is the basis for an unprecedented development in human-computer interaction. Using sophisticated image recognition, LEAP Motion tracks users’ hands in real time without the aid of any controller or tracker, mapping each hand to several points of articulation in virtual space and allowing headsets to recognize extremely fine gestures and movements. Comparisons may be drawn to the design philosophy behind the Oculus Touch interface, which uses judiciously placed sensors to make an educated guess at the user’s gesture based on which buttons are pressed and where individual fingers are resting; the effect is somewhat similar, albeit less precise.
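To make the idea of articulated hand tracking concrete, here is a minimal sketch of gesture recognition over tracked joint positions. The data layout, joint names, and threshold below are hypothetical stand-ins, not LEAP Motion's actual API; the point is only that once a tracker reports fingertip positions, recognizing a gesture like a pinch reduces to simple geometry.

```python
import math

# Hypothetical threshold: fingertips closer than this count as a pinch.
PINCH_THRESHOLD_MM = 25.0

def is_pinching(hand):
    """Detect a pinch gesture from tracked joint positions.

    `hand` is a stand-in for one frame of tracking data: a dict mapping
    joint names to (x, y, z) positions in millimeters. A pinch is simply
    the thumb tip and index tip falling within a small distance of each
    other.
    """
    gap = math.dist(hand["thumb_tip"], hand["index_tip"])
    return gap < PINCH_THRESHOLD_MM

# Example frame: fingertips 15 mm apart, so this registers as a pinch.
hand = {"thumb_tip": (0.0, 100.0, 0.0), "index_tip": (10.0, 110.0, 5.0)}
print(is_pinching(hand))  # True
```

A real pipeline would smooth these per-frame decisions over time to avoid flicker, but the core logic stays this small.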
Ambitious concepts such as this often run up against Murphy’s Law, falling short in real-world scenarios due to unforeseen interference or circumstances. However, my experience with the platform was quite promising: LEAP tracks hands accurately within a generous FOV and has little trouble recognizing fine gestures such as pinching or opening and closing the hand, perhaps even more smoothly than current Microsoft HoloLens developer kits. The ability to interface with a computer through familiar skeuomorphic elements that are fundamentally no different from one’s own body (which is, after all, our interface to the world) represents an important breakthrough in how we think about XR.
Current trends have made it abundantly clear that presence is the most powerful tool for creating effective XR experiences, and one of the ways developers can achieve it is by emphasizing intuition in their interfaces. Imparting knowledge of an interface’s inner workings in an intuitive and unobtrusive way should be a bedrock principle of XR design in the immediate future.