The U.S. Department of Defense officially adopted a set of ethical principles concerning the military's use of artificial intelligence (AI). The principles act as a guide for all services to follow as they design, develop, and employ AI processes and capabilities.
According to the principles, the military's use of AI must be responsible, equitable, traceable, reliable, and governable. Army Futures Command's Artificial Intelligence Task Force (AITF) is charged with determining how best to incorporate AI into the Army's modernization enterprise. Attention to each of these areas will help ensure that the Army's future use of AI meets the high standards already adopted by many of its government, industry, and academic partners.
The principles were adopted on the recommendation of the Defense Innovation Board and come as the Army looks to use AI as an enabling technology in all of its modernization priorities.
"Like most high-level principles, they can be subjective and contextual in practice," said Dr. Stephen Russell, Information Sciences Division Chief at the U.S. Army Combat Capabilities Development Command's Army Research Laboratory, and Director of the Lab's Internet of Battlefield Things Collaborative Research Alliance. "For example, responsibility often requires a qualitative assessment and thus would require related processes with quantitative controls that may be challenging to define."
Brig. Gen. Matthew Easley, AI Task Force director, said these principles will help influence and inform the moral and responsible use of AI in the full spectrum of operations within the Army.
"Our nation's laws and values must always be taken into consideration when adopting and investing in the design, development, and deployment of AI technologies within the Army," he said.
As the Army works to make decisions faster and more cost-effectively, or to buy commanders more time to decide, AI is a proven tool to harness. Working under these ethical principles ensures that a common framework for decision-making is applied universally.
Artificial intelligence projects already exist in nearly every cross-functional team in Army Futures Command, but Easley said the Army is looking to take it a step further.
"Projects aren't enough – we need to do more than just a couple AI projects," he said. "We need to build infrastructure so we can help the rest of the Army do its own AI projects."
This infrastructure will be a major step forward in solving what experts call the "Input-Output Problem" – meaning that the Army will be better equipped to process the volume of data it receives and turn it into actionable information. The Defense Science Board found that "given the limitations of human abilities to rapidly process the vast amounts of data available today, autonomous systems are now required to find trends and analyze patterns."
"A lot of our data-sharing is being done with a human in the loop - and rightly so," Easley said. "But we want machines to look at a potential battlefield and identify targets to a human decision-maker faster."