Standalone 6DoF represents one of the most important criteria distinguishing high-end VR from other mass-market headsets. However, as 6DoF becomes the norm, what is the next technology that OEMs will race to incorporate? As the window for Oculus to formally announce its new standalone 6DoF headset grows smaller, it will be important for consumers and VR practitioners to examine how interaction with virtual environments will evolve with the next generation of hardware.
Below are three potential technologies to replace 6DoF as the new differentiator of high-end standalone VR:
Current 6DoF technology extrapolates tracking data from IMUs and cameras mounted in the headset itself to determine a user's location in physical space. While this is more than adequate for basic navigation of 3D environments, it offers no way for users to interact directly with virtual spaces beyond the controllers in their hands. Current solutions focus either on intuitive controls via hand gestures or on adding sensors to the user's body and tracking it directly.
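To make the "extrapolation" concrete, here is a minimal dead-reckoning sketch showing why IMU data alone is only part of the picture: double-integrating acceleration into position works, but it accumulates drift, which is why real headsets fuse IMU data with camera-based tracking. The sample values are illustrative, not taken from any shipping headset.

```python
def integrate_imu(accel_samples, dt):
    """Integrate per-axis acceleration (m/s^2) into position (m).

    Pure double integration like this drifts quickly in practice;
    headsets correct it with camera-based inside-out tracking.
    """
    velocity = [0.0, 0.0, 0.0]
    position = [0.0, 0.0, 0.0]
    for sample in accel_samples:
        for axis in range(3):
            velocity[axis] += sample[axis] * dt
            position[axis] += velocity[axis] * dt
    return position

# One second of constant 1 m/s^2 forward acceleration, sampled at 100 Hz
samples = [(1.0, 0.0, 0.0)] * 100
print(integrate_imu(samples, 0.01))  # roughly 0.5 m of travel along x
```

Any small bias in the acceleration samples would compound through both integrations, which is the core drift problem sensor fusion exists to solve.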
As sensor technology and wearable computing power improve, computer vision extrapolation and improved HMD sensors oriented towards the user's body could track a user's limbs in full and enable 1:1 interaction in a virtual environment. Alternatively, HMDs could harness external sensors in other wearable computers, such as pedometers, heart-rate sensors, and smartwatches.
Tracking an entire body, as opposed to a handful of gestures, presents unparalleled potential to increase presence in VR, especially in situations where two or more users interact with one another. In addition, full-body interaction could enable compelling new possibilities for navigating VR, such as vertical traversal or even high-stress, athletic movement in conjunction with omnidirectional treadmills or bespoke physical environments. While such tracking is already fairly well-developed in experimental high-end VR, its arrival in standalone headsets is contingent on reducing the number of external sensors necessary to accurately track the human body and migrating them into a more convenient, less obtrusive all-in-one design.
Applications for this technology would significantly broaden use cases in training. Developers afforded the freedom to track users' bodies could create larger environments and incorporate body movements into training procedures. This would be of particular importance to frontline workers across a variety of verticals, including logistics and engineering.
Foveated rendering remains one of the most popular proposed answers to the rendering cost of eliminating the screen-door effect in current VR displays. By dynamically increasing the visual fidelity of the image in a specific area of the display (usually determined by eye-tracking technology), users experience significantly higher resolution within the scope of their gaze, while the surrounding display area decreases in quality accordingly. Several forms of foveated rendering are currently being explored, from varifocal lenses to custom-built nested panels, but the appeal of the technique lies in its intelligent allocation of existing computing resources to deliver a screen-door-free virtual experience at truly immersive resolutions, eventually approaching the acuity of human vision. While current-generation standalone VR headsets already exhibit marked improvements in display quality over their PC-tethered counterparts, no foveated rendering solution is yet cost-effective for standalone HMDs.
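The resource-allocation idea above can be sketched in a few lines: assign each screen tile a resolution scale based on its angular distance from the eye-tracked gaze point, spending full resolution only at the fovea. The tier thresholds (5° and 15°) and scale factors here are illustrative assumptions, not values from any shipping headset or graphics API.

```python
import math

def resolution_scale(tile_center, gaze, fovea_deg=5.0, mid_deg=15.0):
    """Return a render-resolution multiplier for one screen tile.

    tile_center and gaze are (x, y) positions in degrees of visual angle.
    """
    eccentricity = math.dist(tile_center, gaze)
    if eccentricity <= fovea_deg:
        return 1.0   # full resolution inside the fovea
    if eccentricity <= mid_deg:
        return 0.5   # half resolution in the near periphery
    return 0.25      # quarter resolution in the far periphery

print(resolution_scale((0, 0), (0, 0)))    # 1.0 at the gaze point
print(resolution_scale((20, 10), (0, 0)))  # 0.25 far in the periphery
```

Even this crude three-tier scheme shades most of the frame at a quarter of full resolution, which is where the compute savings come from; production systems use smoother falloff curves tuned to human acuity.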
By improving the effective resolution of VR displays, use cases that depend on fine detail, such as small text, become far more viable. The ability to convey dense textual information opens new possibilities in entertainment, as well as in overall best practices for UI/UX design in virtual environments.
While VR traditionally depends on visual and aural sensory information to generate presence, harnessing other senses has already become an established area of interest for many OEMs and third-party hardware providers. Continued experimentation with haptic feedback has yielded everything from simple controllers to complex full-body force suits, each offering some degree of increased presence in VR. Some high-end location-based VR entertainment (LBVRE) experiences have already implemented similar hardware.
Meanwhile, the use of smell in VR remains a more challenging area, mostly confined to academic research. While strong scientific evidence corroborates the heightened influence of smell on cognition and perception, the technology required to implement olfactory VR at consumer scale has yet to arrive. For standalone VR in particular, the added hardware footprint and compute load make implementation difficult.
"4D" VR represents the fullest potential for immersion in VR, engaging most or all of a user's senses so that they receive no indication of their physical surroundings. Its applications are accordingly broad and would disrupt many XR use cases. However, the requisite technology will need to become far more affordable before coming to market.