One S&T technology provider is working with Maxar, Unreal Engine and other community partners to advance digital twins. Hannes Walter, the company's VP for Product Management, Synthetic Training Solutions, provided UnrealFest 2023 delegates with an interesting data point on the evolution of digital twins as simulation underpinnings during his October 3 presentation, Creating Large-Scale 3-D Digital Twins Using AI and Unreal Engine. Marty Kauchak, MS&T editor, attended the session; an extract of the session follows.

This author hesitates to label simulation and training advancements transformational, much less disruptive. Yet that is the outcome of the rapid maturation of digital twins as S&T enablers – in this case, as large-scale entities.

The executive first presented the case for pursuing this type of digital twin, noting that some simulation scenarios simply require vast scale. While training audiences may need a large-scale twin to replicate a city, an expansive training range or another venue, creating such an enabler is both costly and time-consuming. “Yet, we want to have a complete digital twin. Incomplete 3-D twins yield incorrect simulation results and outdated 3-D twins result in imprecise training of humans. For example, it is very well known that in pilot training, pilots need to be trained on the most current airport data.” Walter added the further imperative of fidelity: making the large-scale digital twin as realistic as possible enhances the effectiveness of the simulator-based scenario in which the trainee is immersed.

The company's focus on large-scale digital twins includes supporting high-fidelity scenarios for pilots (above) and other high-risk training enterprises.

What’s Needed in a Large-Scale Digital Twin

The community subject matter expert then summarized the attributes large-scale digital twins need in order to meet the training audience’s increasingly rigorous requirements noted above. While this digital twin variant can be used in gaming as well as the adjacent S&T market, Walter emphasized that for the latter use case the technology enabler must be consistent with the real world. For S&T purposes, large-scale digital twins must have a vector-based representation that covers all relevant aspects. “We must have raster and terrain data all integrated into this so we can use this for simulation.” Drawing on the company’s competencies in providing and optimizing an AI-based simulation approach, he added, “We then need embedded semantic segmentation and meta-data and it should cover the globe’s infrastructure in 3-D without gaps or missing areas.” The large-scale digital twin must also be highly performant for efficient computation and framerates, “which was also a challenge for us. We were very creative to overcome those requirements,” Walter noted. Large-scale digital twins must also be scalable to support massive-multiplayer and massive-agent applications.
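The attributes Walter enumerates – vector geometry, integrated raster and terrain data, embedded semantic segmentation and metadata, and completeness with no gaps – can be sketched as a simple data model. The sketch below is purely illustrative: the class, field, and function names are this editor's assumptions, not any vendor's API.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the per-tile attributes Walter lists for a
# large-scale digital twin; all names here are hypothetical.
@dataclass
class TwinTile:
    tile_id: str
    vector_features: list = field(default_factory=list)   # buildings, roads as vector geometry
    raster_layers: list = field(default_factory=list)     # integrated imagery/raster data
    terrain_mesh: object = None                           # elevation/terrain data
    semantic_labels: dict = field(default_factory=dict)   # embedded semantic segmentation
    metadata: dict = field(default_factory=dict)          # per-feature meta-data
    lod: int = 0                                          # level of detail, for framerate budgets

def is_simulation_ready(tile: TwinTile) -> bool:
    """A tile is usable for S&T only if no required layer is missing,
    echoing the 'no gaps or missing areas' requirement."""
    return (bool(tile.vector_features)
            and bool(tile.raster_layers)
            and tile.terrain_mesh is not None
            and bool(tile.semantic_labels))
```

A gaming use case might tolerate a tile that fails this check; per Walter's point, an S&T pipeline cannot, because an incomplete twin yields incorrect simulation results.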

The Approach to Employing AI

The company's industry partners in advancing and integrating AI include, in one case, Maxar. The team's SYNTH3D product was reported to seamlessly integrate cutting-edge technology with the robust analytical capabilities of the company's patented generative AI to extract 3-D buildings from Maxar's Vivid imagery basemap. Maxar's satellite data was noted to be 50cm (20in) resolution and color-corrected, among other attributes. SYNTH3D is available on various platforms, including NVIDIA's Omniverse and Epic Games' Unreal Engine.

The company is responding to one training community requirement – in this case, for large-scale digital twins (above).

Fast-Paced Evolution

The company and its industry partners are using AI, more capable platforms and increasingly accurate satellite data to advance large-scale digital twins. This product niche is certain to evolve to meet the ever-more rigorous requirements of high-risk training enterprises.