A certain kind of silence descends on a military training range when something has changed. Not the quiet of inactivity; quite the reverse, in fact. At some exercises in recent years, the usual controlled chaos of coordinated personnel, logistics vehicles, and radio chatter has given way to something leaner and, to an outsider, almost unsettling. Fewer boots on the ground. A smaller footprint. And, humming in the background, a server rack running algorithms that do work people once did. The scope of what is being attempted is genuinely hard to comprehend, and U.S. defense companies are building that future faster than most people realize.
The easiest way to understand what is going on is to look at what CAE and General Atomics have been quietly working on: AI-powered modeling and simulation tools that add adaptive, computer-generated forces to virtual training environments in real time. These opponents are not the cumbersome, predictable ones of earlier simulation programs, which followed predetermined patterns a novice could learn to beat within a few sessions. Brian Stensrud, a technical fellow for AI at CAE USA Defense & Security, explained the change simply: AI lets you create behavioral models that act as teammates and opponents at different levels of complexity, training a single operator with a much smaller footprint of simulated entities. That saves time, logistics, and real money. AI-powered training is expected to scale in a way that human-intensive exercises simply cannot.
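To make the idea of a behavioral model that adjusts its difficulty concrete, here is a toy sketch. Everything in it (the class name, the skill parameter, the adaptation rule) is hypothetical illustration, not CAE's or General Atomics' actual architecture.

```python
import random

class AdaptiveOpponent:
    """Toy computer-generated force that scales its skill to the trainee.

    Hypothetical sketch only: real behavioral models are far more
    sophisticated than a single win-probability dial.
    """

    def __init__(self, skill=0.5, step=0.05):
        self.skill = skill  # probability the opponent wins an engagement
        self.step = step    # how quickly difficulty adapts

    def engage(self, rng):
        """Simulate one engagement; True means the opponent wins."""
        return rng.random() < self.skill

    def adapt(self, trainee_won):
        """Raise difficulty after a trainee win, lower it after a loss."""
        delta = self.step if trainee_won else -self.step
        self.skill = min(0.95, max(0.05, self.skill + delta))

# Run a short training session: the opponent drifts toward a level
# where the trainee wins roughly half the time.
rng = random.Random(42)
opponent = AdaptiveOpponent()
for _ in range(20):
    opponent_won = opponent.engage(rng)
    opponent.adapt(trainee_won=not opponent_won)
```

The point of the sketch is the feedback loop, not the numbers: one simulated entity can track one operator's performance, which is exactly the "smaller footprint" Stensrud describes.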
| Category | Details |
|---|---|
| Topic Focus | AI Integration in U.S. Military Training Simulations |
| Key Government Body | U.S. Department of Defense (DoD) |
| DoD AI Investment (FY2025) | $1.8 billion (up from $874M in FY2022) |
| Projected AI Defense Market | $178 billion by 2034 |
| Key Defense Firms Involved | CAE, General Atomics, Lockheed Martin, Palantir, Anduril, Raytheon |
| Key AI Initiative | Project Maven (launched 2017); Project FUZE (2025) |
| AI Contract Awards (2025) | Up to $200M each to Anthropic, Google, OpenAI, xAI |
| Global Military AI Spending | Estimated $9.2B in 2023; projected $38.8B by 2028 |
| North America AI Defense Market | Projected $78 billion |
| Reference Website | defense.gov |
The financial backdrop matters. According to Frost & Sullivan, the DoD's investment in AI more than doubled, from $874 million in fiscal year 2022 to $1.8 billion in fiscal year 2025. That is a signal, not a line item. The defense AI market as a whole is projected to reach $178 billion by 2034, growing at more than thirty percent per year. Numbers like that draw serious players: in mid-2025, the Pentagon's Chief Digital and AI Office awarded contracts worth up to $200 million each to four AI firms (Anthropic, Google, OpenAI, and xAI) to speed the adoption of frontier AI models across defense operations. The barracks and Silicon Valley have formally joined forces.
The language coming out of Washington has changed in ways that are hard to ignore. Discussions about defense AI used to feel theoretical, hedged with caveats about timelines and constraints. That reluctance has mostly vanished. In July 2025, the White House unveiled its AI Action Plan, with more than 90 policy measures aimed at accelerating innovation, building infrastructure, and securing America's international position. David Sacks, the administration's head of AI policy, said the quiet part out loud: artificial intelligence has the potential to reshape the global power structure, and the United States must prevail. Procurement budgets tend to shift when language like that is used.
General Atomics is using AI-driven simulations not only for troop training but also as a testbed for new drone capabilities, running iterations in virtual environments that would be too costly, and potentially hazardous, to duplicate in real flight tests. Anastacia MacAllister, the company's technical director for autonomy and AI, made what may be the most sensible statement in this entire field: AI is ready, it's here, but what matters is using the right tool for the right job and truly understanding the technology you're deploying. That sounds simple. It is harder than it sounds, especially inside large defense companies still running on legacy infrastructure.
CSIS analysts have called the underlying goal of all this training work a "second Manhattan Project for AI." The original Manhattan Project succeeded in part because the Army built the cities where the scientists lived and worked, secured the facilities, and established the logistical networks that let research move quickly. The argument now is that AI needs similar enabling infrastructure: compute, energy, data integrity, protected networks, and a skilled workforce able to operate at the nexus of machine intelligence and military operations. CSIS scholars Jake Kwon and Benjamin Jensen laid this out in November 2025, noting that securing AI infrastructure requires counterintelligence, cyber protection teams, and cross-functional groups that understand how adversaries try to infiltrate critical systems.
The adversarial pressure is real. China declared in 2017 that it would lead the world in artificial intelligence by 2030 and has pursued that goal systematically ever since, weaving AI throughout its defense sector under a military-civil fusion policy. In 2025, defense technology venture capital hit a record $49.1 billion, nearly double the previous year; U.S. defense-tech startups alone raised $14.2 billion, far more than their European counterparts. Anduril Industries, a California company building AI-enabled military systems, is the poster child: it raised $2.5 billion at a valuation above $30 billion and won a $642 million Marine Corps contract for AI-powered counter-drone systems. This is being pursued with real money and real urgency.
The real question is whether the training simulations being built now will hold up well enough to prepare soldiers for the situations they will actually encounter. MacAllister's caution about data quality is worth sitting with. In tech circles, and increasingly in defense circles, there is a common belief that more data inevitably leads to better results. It doesn't. The quality of the data feeding these simulations determines the quality of the behavioral models operating within them, and getting that right means treating data as a resource rather than a byproduct of existing processes. A training simulation built on incomplete or faulty data may produce soldiers who are highly skilled at winning the simulation. That is not the same as being ready.
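Treating data as a resource rather than a byproduct implies gating what reaches the simulation in the first place. The sketch below shows one minimal form such a gate could take; the field names and rejection rule are invented for illustration, not taken from any actual defense data pipeline.

```python
def validate_records(records, required_fields=("entity_id", "position", "timestamp")):
    """Split simulation input records into clean and rejected sets.

    Hypothetical data-quality gate: a record is rejected if any
    required field is missing or None. Real pipelines would also
    check ranges, units, and provenance.
    """
    clean, rejected = [], []
    for rec in records:
        if all(rec.get(field) is not None for field in required_fields):
            clean.append(rec)
        else:
            rejected.append(rec)
    return clean, rejected

records = [
    {"entity_id": 1, "position": (0, 0), "timestamp": 100},
    {"entity_id": 2, "position": None, "timestamp": 101},  # faulty sensor
    {"entity_id": 3, "timestamp": 102},                    # missing field
]
clean, rejected = validate_records(records)
```

The design choice worth noting is that rejected records are kept, not silently dropped: a rising rejection rate is itself a signal about the health of the data feeding the behavioral models.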
One thing military leaders agree on is that people remain in the loop. Vice Adm. Brad Cooper of U.S. Central Command put it this way: the ultimate decision is still made by a human, but AI makes it possible for that decision to be made at previously unattainable speeds. For training environments, CAE is building what it calls an omnipresent AI observer: a digital system that monitors trainee performance, records actions, and provides customized feedback without replacing the instructor. The aim is augmentation, not replacement. Defense ethicists are taking seriously the question of whether that balance holds as the technology advances, and there is not yet a definitive answer.
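The observer pattern described here (record everything, surface patterns, leave judgment to the instructor) can be sketched in a few lines. This is my own toy illustration of the concept, not CAE's system; the action names and feedback format are invented.

```python
from collections import Counter

class TrainingObserver:
    """Toy observer that records trainee actions and summarizes error
    patterns for a human instructor: augmentation, not replacement.

    Hypothetical sketch, not CAE's actual 'omnipresent AI observer'.
    """

    def __init__(self):
        self.log = []  # (action, succeeded) pairs

    def record(self, action, succeeded):
        self.log.append((action, succeeded))

    def feedback(self):
        """Return the action the trainee most often gets wrong."""
        misses = Counter(action for action, ok in self.log if not ok)
        if not misses:
            return "No recurring errors observed."
        action, count = misses.most_common(1)[0]
        return f"Most frequent error: '{action}' ({count} times)"

obs = TrainingObserver()
obs.record("identify-target", True)
obs.record("comms-check", False)
obs.record("comms-check", False)
summary = obs.feedback()
```

Note what the sketch deliberately does not do: it never acts on the trainee directly. It produces a summary for the instructor, which is the augmentation-not-replacement boundary the article describes.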
What is already evident is that defense companies are incorporating AI into military training simulations with an urgency that differs from typical procurement timelines. The Russia-Ukraine conflict demonstrated how AI-enabled systems (drones, autonomous targeting, logistics optimization) change the nature of warfare. Everyone in the defense establishment watched that war and drew conclusions, and the conclusions generally pointed the same way: the side that integrates AI at scale gains an advantage that conventional force size struggles to overcome. American defense companies heard that message. They are now building the response, one simulation at a time.
