Military AI: High stakes require pragmatic, risk-based approach


During conversations about AI adoption at the Pentagon and military AI implications, something I heard during a seminar years ago often creeps into my thoughts. Matt Waxman, Columbia law professor and expert on the law of armed conflict, tried to throw water on a heated argument between two students by imploring, “Know what you don’t know.”

That eventually led to a détente as one of the students backed off his assumptions about the other student’s expertise. Waxman’s quip was a slight twist on former Secretary of Defense Donald Rumsfeld’s evergreen adage about understanding the known unknowns.

When it comes to the broader ethical and societal implications of AI adoption, the tech-pessimists—as they’re often described in Silicon Valley—sometimes fearmonger about the end of days, but they raise some legitimate concerns. My camp, the tech-optimists, can be overly rosy about the long-term societal problems. Both camps need a deeper dive into AI: examining and identifying what we do and do not know, then analyzing those blind spots for what they imply about how we build for the future.

AI adoption in the military

What we do know: AI adoption at the US Department of Defense appears to fall into three major buckets: efficiency; intelligence and data; and autonomy (though many use cases may fall under multiple buckets).

AI is already transforming administrative tasks, streamlining compliance documentation, contracting, and reporting. This reduces reliance on manual labor, reallocating hours spent on routine tasks toward more complex work.

The downside? Some jobs will disappear. The upside? AI-driven automation can boost productivity, freeing up skilled professionals for higher-value tasks.

AI-driven data integration could also reduce costs in unexpected ways, such as in depot work. Digitizing unstructured data and making it accessible could support maintenance, retrofitting, and reengineering efforts: teams could extend the lifecycle of technology by pulling up previous plans and engineering spare parts more quickly, driving down costs.
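As a rough illustration of what "making previous plans accessible" might mean in practice, here is a minimal sketch of similarity-based retrieval over digitized maintenance documents. The document corpus, the query, and the choice of scikit-learn's TF-IDF vectorizer are illustrative assumptions on my part, not a description of any DoD system.

```python
# Minimal sketch: retrieving past engineering/maintenance documents by similarity.
# The documents and query below are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Hydraulic pump retrofit plan for legacy vehicle, includes spare part tolerances",
    "Depot-level maintenance checklist for turbine blade inspection and replacement",
    "Reengineering notes: substitute alloy for obsolete bracket, updated CAD reference",
]

query = "spare part specifications for obsolete bracket"

# Vectorize the digitized documents and the query into TF-IDF space,
# then rank documents by cosine similarity to the query.
vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)
query_vec = vectorizer.transform([query])

scores = cosine_similarity(query_vec, doc_matrix).ravel()
for doc, score in sorted(zip(documents, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {doc}")
```

A real system would sit on far larger corpora and likely use more sophisticated retrieval, but the basic idea is the same: once legacy plans are digitized, they can be searched instead of reconstructed from scratch.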

While military AI will lead to personnel refocus, the DoD cannot assume workforce shifts will happen naturally. A deliberate transition plan—focused on retraining personnel for higher-value analytical and strategic roles—will be critical so that AI enhances, rather than replaces, human expertise.

This isn’t just about preserving jobs; it’s a strategic necessity. China, for instance, can overwhelm a challenge with the sheer number of people it can assign to a problem. The US must instead rely on a more agile, AI-augmented force to maintain a competitive edge.

Yet, even with these gains, we will have to protect against a major unknown: Will shifting processes to automation for efficiency cause a loss of institutional knowledge and create dependencies on AI processes that prove fragile?

Military intelligence and AI

In the area of intelligence and data, the gains from AI stand to be immense. The technology has the potential to upend the data industry as a whole, but the gains at DoD could be particularly interesting.

First, DoD has an incredible amount of data in varying formats. Current AI advances mean that structured and unstructured data, once digitized, will finally be able to talk to each other. AI-powered data fusion incorporating high-side, OSINT, and foreign-language sources will improve decision-making and situational awareness. This has implications for everything from identifying supply chain vulnerabilities to tracking adversary movements faster and more accurately.

AI-driven data integration has the potential to disrupt traditional data analytics; predictive modeling can improve threat assessments, cyber defenses, and strategic planning by uncovering patterns human analysts might miss.
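To make "uncovering patterns human analysts might miss" slightly more concrete, here is a minimal sketch of one common approach: flagging statistical outliers in fused activity data with an isolation forest so an analyst can look at the exceptions rather than every record. The feature values and the scikit-learn model choice are illustrative assumptions, not a description of any fielded DoD capability.

```python
# Minimal sketch: flagging unusual activity records in fused data as candidates
# for analyst review. The feature values below are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: hypothetical fused features, e.g. [shipment_weight, transit_days, route_deviation_km]
historical = np.array([
    [12.0, 4, 0.5], [11.5, 5, 0.7], [12.3, 4, 0.4],
    [11.8, 5, 0.6], [12.1, 4, 0.5], [11.9, 5, 0.8],
])
new_records = np.array([
    [12.0, 4, 0.6],    # looks routine
    [45.0, 1, 210.0],  # looks anomalous
])

model = IsolationForest(contamination=0.1, random_state=0).fit(historical)
flags = model.predict(new_records)  # +1 = inlier, -1 = outlier

for record, flag in zip(new_records, flags):
    label = "flag for analyst review" if flag == -1 else "routine"
    print(record, "->", label)
```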

AI automation won’t displace intelligence analysts, but will give them the tools to focus on strategic challenges instead of data processing.

The last area of AI advancement will involve increased autonomy of systems. This looks like automated targeting, satellite development for defense against hypersonics, and retrofitting traditional capabilities like tanks into autonomous systems that minimize human exposure and casualties in high-risk breach operations. These are major advances that can save the lives of service members and civilians alike.

Autonomous AI is not actually new or untested. Iron Dome is an AI-based, autonomous defensive system with a 95% success rate at intercepting incoming missiles; requiring a human to approve each individual interception would be impossible.

Mitigating AI intelligence risks

But a major unknown persists: Could an adversary introduce data manipulation or poisoning that creates blind spots, and could the workforce grow so overdependent on AI that it loses the skills it currently has to do complex research?

This is a fear of many tech pessimists, as they imagine fully autonomous lethal drones whose programming may essentially overrule the creators and make decisions that run counter to ethical or legal standards. However, many of these concerns can be mitigated by a thoughtful approach in development and implementation that identifies risks and risk mitigation steps.

Most importantly, we should be pulling in experts at every stage of the system development process, because their guidance to the engineers throughout development is what minimizes risk.

What should that look like? One example would be pulling in JAGs who are experts in the law of armed conflict to talk through the legal standards for targeting, including how to think through standards like necessity and proportionality, ensuring compliance with US law.

Additionally, when a system failure has a high potential for casualties, we can build in human-in-the-loop decisions and pursue semi-autonomous systems.
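As a rough sketch of what a human-in-the-loop gate can look like in software, the snippet below lets an action proceed automatically only when its estimated consequence of failure is low; above a threshold, an explicit human approval is required. The names, threshold, and approval mechanism here are hypothetical illustrations, not an actual DoD design.

```python
# Minimal sketch: gating high-consequence autonomous actions behind human approval.
# Threshold, action names, and the approval prompt are hypothetical placeholders.
from dataclasses import dataclass

CASUALTY_RISK_THRESHOLD = 0.05  # above this, require a human decision

@dataclass
class ProposedAction:
    description: str
    estimated_casualty_risk: float  # model-estimated probability of harm

def human_approves(action: ProposedAction) -> bool:
    """Stand-in for a real review workflow (operator console, two-person rule, etc.)."""
    answer = input(f"Approve '{action.description}' "
                   f"(est. casualty risk {action.estimated_casualty_risk:.0%})? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: ProposedAction) -> None:
    if action.estimated_casualty_risk >= CASUALTY_RISK_THRESHOLD:
        # High-consequence path: the system may recommend, but a human decides.
        if not human_approves(action):
            print(f"Held: '{action.description}' awaiting further review.")
            return
    print(f"Executing: '{action.description}'")

if __name__ == "__main__":
    execute(ProposedAction("reposition sensor drone", estimated_casualty_risk=0.001))
    execute(ProposedAction("breach obstacle with unmanned vehicle", estimated_casualty_risk=0.12))
```

The design choice being illustrated is that autonomy and human judgment are not all-or-nothing: the risk threshold determines where the machine stops and the human starts.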

Building an AI framework for the future

AI will continue reshaping warfare, intelligence, and military operations, and the stakes are too high for either unbridled optimism or alarmist fearmongering. Instead, we need a pragmatic, risk-aware approach—one that acknowledges what we don’t know, integrates legal and ethical expertise early, and builds AI frameworks that align with strategic priorities and moral imperatives.

But there are still unknowns: we need to figure out how to create enforceable rules of engagement for autonomous systems. Can we build AI systems that can explain their decisions transparently enough to ensure accountability in complex environments?

If the US fails to get AI ethics right and answer the big questions, the consequences go beyond system failures or unintended harm—they extend to strategic disadvantage.

Adversaries who do not share our ethical constraints may develop autonomous capabilities faster, deploy them with fewer safeguards, and set global norms that do not align with democratic values.

The challenge isn’t just ensuring AI remains aligned with legal and ethical principles, but also making sure that doing so doesn’t leave the US vulnerable in future conflicts. We must shape AI before it shapes us.
