Soar Technology (SoarTech) won a contract from the U.S. Air Force to develop an automatic speech recognition and cognitive agent training capability in support of the Airborne Warning and Control System (AWACS) mission.
The contract was awarded by the Air Force Life Cycle Management Center's Simulators Division, in partnership with AFWERX, as part of a Direct to Phase II Small Business Innovation Research (SBIR) Pitch Day. SoarTech proposed and demonstrated the effectiveness of a scalable and authorable Automated Speech Recognition (ASR) tool, LinGo, to develop a training and assessment capability for complex tactical communications.
Communication is one of the six core Crew Resource Management (CRM) skills, and the cockpit/CRM program requires aircrews to learn and demonstrate these skills during classroom and simulator training. While ASR is currently used for training in which the trainee communicates with an automated entity, LinGo will expand the training utility of ASR by effectively replacing human teammates with synthetic cognitive agents, and it will capture trainee speech to automatically assess performance, reducing instructor requirements.
“SoarTech is incredibly excited for the opportunity to bring our experience and capabilities in natural language understanding and simulated intelligent agents to bear for the Air Force,” said Dr. Brian Stensrud, director of simulation. “These are areas where we have been researching and maturing technology for over 20 years, and we feel that this combination can provide significant value for Air Force training in a variety of domains and use cases.”
Amanda Bond, SoarTech lead scientist, said that LinGo will allow the Air Force to author its own custom standard and non-standard ASR vocabularies for integration into existing and future training systems. Via the ASR, not only will communication be objectively measured and assessed, but synthetic entities can use the parsed speech data as direct input, allowing them to react to a trainee's commands, both by responding verbally and by taking action within a scenario.
“This enables training to take place with one or many AWACS crew members being played by synthetic entities, providing increased flexibility for mission training while still supporting crew communication and coordination,” said Bond.