
From Substitution to Augmentation: Rethinking AI in Warfare

Roughly two thousand years ago, the full combat load of a Roman legionary weighed about 100 pounds (45 kg). Remarkably, modern militaries exhibit a similar pattern: U.S. Army soldiers still carry around 45 kg in full combat gear, while South Korean and British troops typically carry about 40 kg. Despite advances in weaponry and logistics, the total weight of a soldier’s load has remained surprisingly consistent across two millennia.

Today’s soldiers carry nearly the same weight as Roman infantrymen because, as equipment has become lighter, more manageable, and more precise, the scope of responsibility placed on each soldier has expanded, ultimately augmenting the capabilities of the human combatant. This observation offers a crucial insight into AI integration in contemporary warfare: technological advancements such as AI do not replace the role of the individual combatant on the modern battlefield; they augment it.

This blog post builds on the insight that military technologies tend not to substitute for the human combatant but to augment their role. While much of the current discourse focuses on substitution, I argue that the military integration of AI should be understood through the lens of augmentation, a framing that better reflects the historical and functional continuity of human-machine relationships in warfare.

From Substitution to Augmentation

While the integration of AI into contemporary warfare has become a global trend, many scholars and researchers continue to approach the military use of AI through the lens of substitution. For example, some envision using AI to reduce manpower by replacing certain military roles, or to offload decision-making and situational judgment from humans to algorithms during combat. Others anticipate AI taking over tasks that are physically dangerous, cognitively overwhelming, or ethically burdensome for human soldiers.

This substitution-oriented mindset stems from a perspective that treats humans and technologies as separate, even interchangeable, entities. Ongoing debates over who should ultimately exercise command and control in military operations, humans or machines, are often limited to a binary framing that positions the two as competing agents. Similarly, the concept of human-AI teaming, though seemingly collaborative, also treats humans and machines as symmetric entities whose complementary strengths are leveraged to accomplish a given task more effectively.

Historically, technology and humans have not been symmetric or competing entities in warfare. Rather than substituting for the human combatant, technological innovation has consistently empowered and augmented their role, enabling soldiers to perform more diverse and complex tasks than earlier tools allowed. Substitution, strictly speaking, occurs between technologies, not between a technology and the human who uses it. Cavalry units, for example, disappeared from modern warfare and were replaced by tanks. Yet this shift represents a substitution of tools, horses for armored vehicles, not a replacement of the human role itself. The role of the cavalryman instead expanded from penetrating enemy lines to performing more sophisticated tasks, such as providing combined arms support, enabled by the new machine. The cavalryman no longer rides a horse but operates a tank, and in doing so plays a more diverse role.

Let’s examine how this augmentation perspective applies across strategic and tactical dimensions of warfare.

Strategic Level Augmentation

At the strategic level, the substitution and augmentation perspectives converge on the same insight: AI is fundamentally limited in its strategic capabilities. This is one domain where the two find common ground, because AI can neither substitute for human strategists nor meaningfully augment their decision-making.

Strategy, by its very nature, is not about optimization but about meaning-making. It requires an understanding of adversaries as sentient, political agents—capable of deception, interpretation, long-term perspective, and moral reasoning. As Kenneth Payne argues in “I, Warbot” (2021), AI is fundamentally ill-equipped for strategic reasoning because it lacks the cognitive architecture necessary for reflection, intention, empathy, and political imagination.

AI, no matter how sophisticated, is not a mind but a machine trained to extrapolate from past data. It does not form intent, understand context, or grasp the symbolic and narrative dimensions of war. Nor can it reconcile military action with political purpose, which has been an essential feature of strategy since Clausewitz. Strategic reasoning involves uncertainty, ambiguity, and paradox, qualities that resist algorithmic clarity.

While machines excel at bounded problems with clear goals and quantifiable metrics, strategy is a domain of judgment, shaped by human fears, beliefs, and values. These are not domains where machines merely lag behind—they are domains machines cannot enter. AI does not possess a theory of mind; it cannot model the mental states of allies or adversaries. Nor can it understand reputation, signaling, or deterrence in any meaningful way—it can only mimic them based on surface-level correlations in training data. Strategy remains a fundamentally human endeavor. 

Tactical Level Augmentation

At the tactical level, the substitution perspective on AI integration splits into two competing schools of thought. The first argues that AI can effectively substitute for human combatants in various tactical roles. As demonstrated in DARPA’s AlphaDogfight Trials, AI systems have outperformed experienced human pilots in simulated aerial combat by executing tactical maneuvers at speeds and decision rates impossible for humans.

The second school argues that AI cannot effectively replace human combatants because of the uncertainty of the battlefield. Warfare, especially ground combat, is dominated by uncertainty, friction, and chaotic contingencies that Clausewitz famously called the “fog of war.” No matter how sophisticated an AI system might be, it cannot eliminate or fully navigate these conditions. The battlefield remains a space of irrational forces: emotion, confusion, physical exhaustion, and cognitive overload under extreme pressure. AI’s goal-oriented optimization struggles in environments where objectives shift rapidly, ethical considerations must be weighed dynamically, and adaptability to completely unexpected situations is essential.

Both perspectives, however, share a fundamental flaw: they view the relationship between humans and AI through the lens of substitution rather than augmentation. The augmentation framework acknowledges the battlefield’s unpredictability and emotional intensity but reaches a different conclusion. While AI cannot eliminate tactical uncertainty, it can enhance a soldier’s ability to navigate it.

The key insight of tactical augmentation lies in its focus on preparedness rather than performance. Instead of deploying AI to replace human decision-makers in combat, military organizations can use AI-powered systems to prepare soldiers for the uncertainty they will face. This approach transforms AI from a competitor into an enabler of human tactical excellence.

AI-powered training systems offer unique advantages in this domain. Through sophisticated simulation environments, machine-generated scenarios, and randomized adversarial inputs, AI can expose soldiers to a far wider range of tactical challenges than traditional training methods can. For combatants especially, repeated exposure to AI-generated tactical challenges builds recognition and response patterns that can be recalled faster when similar situations arise in actual combat. These training systems also help soldiers develop psychological resilience under extreme conditions and greater confidence under stress. Even when enlisted personnel are simply asked to imagine possible scenarios and discuss responses with peers, morale often improves, and units report stronger confidence in their combat readiness.
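To make the randomization idea concrete, the sketch below shows, in Python, how a training pipeline might sample varied adversarial scenarios instead of replaying a fixed script. It is a minimal, purely hypothetical illustration: every name in it (TacticalScenario, generate_scenario, the option lists) is invented for this post and does not describe any real military training system.

```python
import random
from dataclasses import dataclass

# Hypothetical illustration only: a toy generator of randomized training
# scenarios, showing how varied adversarial inputs could broaden a
# trainee's exposure compared with a fixed, hand-authored scenario script.

@dataclass
class TacticalScenario:
    terrain: str
    visibility: str
    adversary_behavior: str
    disruption: str  # the "unexpected" element injected into the run

TERRAINS = ["urban", "forest", "mountain", "open field"]
VISIBILITY = ["day", "night", "fog", "sandstorm"]
BEHAVIORS = ["ambush", "feint and withdraw", "frontal assault", "infiltration"]
DISRUPTIONS = [
    "comms jammed",
    "civilian presence",
    "friendly unit lost contact",
    "supply route cut",
]

def generate_scenario(rng: random.Random) -> TacticalScenario:
    """Draw one randomized scenario. A real system would condition the
    draw on the trainee's past performance rather than sampling uniformly."""
    return TacticalScenario(
        terrain=rng.choice(TERRAINS),
        visibility=rng.choice(VISIBILITY),
        adversary_behavior=rng.choice(BEHAVIORS),
        disruption=rng.choice(DISRUPTIONS),
    )

if __name__ == "__main__":
    rng = random.Random(42)  # fixed seed keeps a training session reproducible
    for i in range(3):
        s = generate_scenario(rng)
        print(f"Run {i + 1}: {s.terrain}, {s.visibility}, "
              f"enemy uses {s.adversary_behavior}, complication: {s.disruption}")
```

The design point is simply that controlled randomness, seeded so sessions remain reproducible for after-action review, yields far more scenario variety per hour of training than a fixed script, which is the augmentation claim in miniature.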

This seemingly minor psychological shift can produce meaningful improvements in tactical resilience. When a soldier faces an unexpected tactical situation for the first time during a high-stakes operation, the cognitive load is overwhelming. But when that same soldier has encountered dozens of AI-generated variations of similar scenarios during training, their brain has already developed partial solutions and recognition patterns that can be rapidly adapted to the real situation. Moreover, this kind of psychological readiness allows soldiers not only to survive uncertainty but, at times, to exploit it. In many battlefield situations, even if a soldier’s response is not perfect, responding quickly and decisively can be enough to break the enemy’s momentum, sow confusion, and seize the initiative.

In this light, tactical augmentation does not seek to eliminate friction or uncertainty. Rather, it aims to prepare the human mind to operate effectively in spite of these conditions. The question, in other words, is not whether AI can reduce uncertainty on the battlefield, but how it can enhance the human capacity to respond to it.

So how should militaries integrate AI? In the next post, I will explore this question using Jeffrey Ding’s diffusion theory as a guiding framework.
