VR has become a mainstay for many training programs, but a handful of companies are already imagining what the next chapter of VR will entail: from emotional tracking to understanding the physical world at a molecular level. 

With the move to the Orange County Convention Center West Building this year, I/ITSEC may have been in a new/old location, but the floor was still filled with the augmented, mixed, virtual, and extended reality (A/M/V/XR) headsets and simulators the expo has become synonymous with. Yet among the rows of standard gear were traces of what the next chapter of extended reality will bring.

Toward the back of the show floor, past the impressive large booths with signage spinning above them, almost to the back wall of the expo, sat a small display showcasing science fiction-like hand tracking that allows users to do away with the hand controllers many have found difficult to master. “For it to be so immersive, hand tracking inside a headset needs to be really precise, and it needs to be really good. That's what our solution delivers. That level of tracking is now possible. Now is a great time for hand tracking,” explains Ultraleap Communications Officer Tessa Urlwin. A small band of sensors can snap onto most off-the-shelf headsets and is built into select high-end models, such as the Varjo XR-3.

The push to free the user from controllers and other technological constraints allows for a more authentic XR experience, explains Urlwin. “It really puts the user in their own body, especially if it's a training application, because they're interacting with their own hands. Then they can learn things at a faster pace. They can interact how you would in the real world, which is a fantastic way to interact and learn.”

Resembling a miniaturized Kinect but with far finer motion-capture capabilities, the underlying technology showcased by Ultraleap has been around for over a decade. Still, it has yet to truly catch on with the mainstream. One industry that has seemingly embraced hands-free technologies is the medical field.

While Ultraleap’s journey brought it from the mainstream to the medical and now training industries, another vendor is on a nearly reverse path. “We've been making hardware and software for neuroscience and bioscience for years. Then over the years, we saw our customers combining our existing products with VR devices with increasing frequency. They were using VR in their research, and they wanted to collect physiological data during VR experiences. So, we set out to build Galea,” explains Joseph Artuso, the Chief Commercial Officer at OpenBCI.

“Galea is a combination of multiple types of sensors into a single headset that can be combined with VR. It collects data from the brain, the muscles, the heart, the skin, and the eyes, and allows you to access that inside of Unity or Unreal, and create some really unique kinds of experiences in VR with it.” 

Artuso notes an increasing demand for hands-free, real-world-like interactions within virtual experiences. “People are looking for new ways to control these new types of computers. VR and AR are getting better, and they’re merging with neuro tech. We're seeing companies looking for new modes of interaction and new modes of control that go beyond the keyboard and mouse.” This includes throttling a user’s experience based on their personal stress levels. “You can create a closed loop between the experience itself and the way the individual user is reacting. Imagine a situation where, as you get more stressed, the intensity of the game or the number of tasks being thrown at you may increase if you're trying to train them to be used to this type of high-energy environment, or it could decrease to make it more manageable for different levels of ability.” 
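To make that closed loop concrete, here is a minimal sketch of how a scenario might adapt to physiological data, assuming a generic stream of fused sensor samples. The field names, weights, and thresholds below are hypothetical illustrations, not OpenBCI's or Galea's actual API:

```python
from dataclasses import dataclass

@dataclass
class BioFrame:
    """One hypothetical sample of fused sensor data (names are illustrative)."""
    heart_rate_bpm: float       # e.g. from a heart-rate channel
    skin_conductance_us: float  # electrodermal activity, microsiemens
    eeg_engagement: float       # 0..1 index derived from EEG band power

def stress_index(frame: BioFrame) -> float:
    """Collapse the multimodal frame into a rough 0..1 stress estimate."""
    hr = min(max((frame.heart_rate_bpm - 60) / 60, 0.0), 1.0)
    eda = min(frame.skin_conductance_us / 20.0, 1.0)
    return 0.4 * hr + 0.4 * eda + 0.2 * (1.0 - frame.eeg_engagement)

def adapt_difficulty(current_tasks: int, stress: float,
                     target: float = 0.6, harden: bool = True) -> int:
    """Closed loop: steer the trainee toward a target stress level.

    If `harden` is True, the scenario adds tasks while the user is coping
    (stress below target); otherwise it backs off when stress runs high.
    """
    if harden and stress < target:
        return current_tasks + 1          # raise intensity
    if stress > target:
        return max(1, current_tasks - 1)  # make it more manageable
    return current_tasks
```

Each simulation tick, the engine would read the latest frame, compute the stress index, and feed the adjusted task count back into the scenario, closing the loop Artuso describes.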

OpenBCI’s technology is already available with select higher-end headsets, including the Varjo Aero and the Varjo XR-3. The company is now working to improve its existing standalone, aftermarket units.

At the OpenBCI booth, a group has gathered to gawk at the Galea device. Artuso quickly jumps in with zealous excitement to share OpenBCI’s vision for the future. Pointing to the VR headsets spread across the show floor, he explains, “We believe the future computers are head-mounted wearables that will be personalized based on data from sensors like the ones we're putting in Galea. We view this as a kind of a development kit, a first step towards a new type of computer that’s a wearable that is in sync with our bodies.” For Artuso and his team, this desire to create hyper-realistic virtual worlds where multiple users can work together using their own hands and minds while gathering ultra-refined performance data is more than a sales pitch; it’s a glimpse into a future that’s now, finally, being realized. 

Across the floor, at one of the larger booths, Metrea Simulations returns with a new name (changed after another metaverse-focused tech company took on the Meta name) and with improvements to technologies previously showcased at I/ITSEC. “We're showing our new platform, NOR, and within this is our Air Tactics Trainer. Last year we had three F-16 rigs. This year, we've added the helicopter, the Eurofighter, the jetpack, and a bunch of other improvements. We’re also showing our JTAC setup,” explains Niclas Colliander, Managing Director at Metrea. “Essentially, a ground view of the world with a laser designator that can interact with all the airplanes in this scenario and do close air support training or joint fires training. And all these are networked, running together in the same environment right now.” 

This ability to have multiple users across various aircraft and roles enables a more authentic experience, one that, like the hands-free tech, allows users to train with closer-to-real-world simulations that were previously not easily achievable. 

That authenticity is key for accurate training, according to Dr. Christopher Fink, a Senior Physicist and Chief Technology Officer at JRM Technologies. “There are a lot of image generator technology companies today, mostly for training and simulation. But a lot of those image generators can only show you what the scene would look like to the human eye during the daytime; they're not showing you what it would look like to an infrared camera or through night vision goggles. What good is a training simulator that doesn't allow you to train in a relevant environment?” 

To address this gap, JRM is looking to bring the real world to VR not through cameras, multi-user rigs, or body sensors, but by making the items inside virtual environments more faithful to the real world. By classifying and encoding individual items to reflect their real-world material composition at a molecular level, JRM can create virtual objects and scenes that more accurately reflect their physical counterparts. “Primarily, we're bringing the ability to see that virtual world in a different wave band or with a particular sensor in mind,” Dr. Fink notes, though he is quick to acknowledge that the initial programming can be arduous. 

“Everything needs to be materially classified ahead of time. To calculate the correct amount of reflection at different wavelengths, we need to know what each surface in the scene is made of.” Once this upfront work is done, the game engine can create numerous scenarios using the same items. “We can control different effects: environmental effects such as weather, time of day, and wind speed, as well as sensor effects such as optics or electronics, and adjust them and see the effect on the simulation in real time.” 
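A toy illustration of that workflow is sketched below: each object carries a material classification, and the renderer looks up how that material responds at the wavelength of the sensor being simulated. The material names, reflectance values, and functions are made up for illustration and are not JRM's data or API; a real infrared simulation would also model thermal emission, atmosphere, and sensor electronics.

```python
# Hypothetical per-material reflectance curves, sampled at a few wavelengths
# (micrometers): roughly visible, SWIR, and LWIR bands. Values are illustrative.
REFLECTANCE = {
    "painted_steel": {0.55: 0.30, 1.6: 0.45, 10.0: 0.10},
    "dry_grass":     {0.55: 0.15, 1.6: 0.55, 10.0: 0.95},
    "asphalt":       {0.55: 0.08, 1.6: 0.12, 10.0: 0.93},
}

def reflectance(material: str, wavelength_um: float) -> float:
    """Linearly interpolate a material's reflectance at the requested wavelength."""
    curve = sorted(REFLECTANCE[material].items())
    for (w0, r0), (w1, r1) in zip(curve, curve[1:]):
        if w0 <= wavelength_um <= w1:
            t = (wavelength_um - w0) / (w1 - w0)
            return r0 + t * (r1 - r0)
    # Clamp to the nearest sampled value outside the tabulated range.
    return curve[0][1] if wavelength_um < curve[0][0] else curve[-1][1]

def sensor_radiance(material: str, wavelength_um: float, incident: float) -> float:
    """Reflected signal reaching a sensor band: incident illumination x reflectance.

    Only the reflective term is kept here; emission and atmospheric effects
    are omitted for brevity.
    """
    return incident * reflectance(material, wavelength_um)

# The same classified scene can then be viewed through different sensors:
for band, label in [(0.55, "visible"), (10.0, "LWIR")]:
    print(label, sensor_radiance("dry_grass", band, incident=100.0))
```

Because the material classification lives with the object rather than with any one rendered image, swapping the simulated sensor or the environmental conditions becomes a lookup-and-recompute step instead of new artwork, which is what lets the effects be adjusted in real time.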

While each of these technologies presents its own unique steps forward for the XR training community, when combined, they show a future of training that looks far more like the real world than what virtual worlds have offered up to this point.