Project Maven, the Pentagon's flagship artificial intelligence programme, has moved from internal controversy to institutional consensus, with former sceptics now among its most committed advocates, according to a book excerpt by journalist Katrina Manson published in Wired.
Launched in 2017 under then-Deputy Secretary of Defense Robert Work, Project Maven was designed to apply machine learning to the analysis of drone surveillance footage — automating a task that had overwhelmed human analysts. From the outset, the initiative divided opinion both inside the Pentagon and in Silicon Valley, most visibly when Google withdrew from a Maven contract in 2018 following a staff rebellion over the ethics of building AI for lethal military applications.
How the Pentagon's AI Sceptics Became Converts
Manson's account, drawn from her forthcoming book, traces how that early turbulence gave way to a more settled consensus within the Department of Defense. Officials who once questioned whether commercial AI technology could be reliably integrated into sensitive military workflows — or whether the legal and ethical frameworks existed to govern its use — now describe Maven as a proven model. The programme has expanded well beyond its original computer vision brief, encompassing a wider range of AI-enabled decision-support tools across the US military.
Former sceptics at the Pentagon are now true believers, a conversion signalling that AI has moved from experimental to foundational in American military doctrine.
The transformation is significant not merely as institutional biography. It reflects a broader strategic calculation: that adversaries, most notably China and Russia, are integrating AI into their own military systems at pace, and that the United States cannot afford principled hesitation. That logic has proved persuasive across successive administrations and defence leadership teams.
What Project Maven Actually Does — and Who Controls It
Project Maven operates under the authority of the Department of Defense and is currently managed by the Chief Digital and Artificial Intelligence Office (CDAO), established in 2022 to consolidate the Pentagon's fragmented AI efforts. The programme is not governed by a single binding legal instrument specific to AI targeting; instead, it operates within existing frameworks, including DoD Directive 3000.09, which requires that weapon systems allow commanders and operators to exercise "appropriate levels of human judgment over the use of force". That directive is departmental policy, not statute, meaning it can be revised administratively without congressional action.
The distinction between binding law and internal policy matters. Critics of autonomous weapons systems argue that DoD Directive 3000.09, while meaningful, lacks the durability and external accountability of legislation or international treaty. The US has resisted calls at the United Nations for a binding treaty on lethal autonomous weapons systems, a position that places it at odds with a growing coalition of states and civil society organisations.
Silicon Valley's Complicated Return to Defence Work
The Google episode of 2018 marked a high-water mark for tech-sector resistance to defence AI contracts. Since then, the landscape has shifted. Microsoft, Amazon, Palantir, and a new generation of defence-focused AI startups have moved aggressively into the space. Google itself has re-engaged with Pentagon AI work, though the company has been selective about the contracts it accepts.
Manson's framing — describing those who shape these programmes as the "gods of AI warfare" — points to a concentration of influence among a relatively small group of technologists, programme managers, and senior officials. The decisions they make about algorithm design, data sourcing, and human-machine interaction protocols carry consequences that extend far beyond any individual procurement contract.
The excerpt does not detail specific operational deployments of Maven-derived systems. The Pentagon has said that Maven capabilities have supported operations in multiple theatres, but it does not confirm the programme's role in particular missions, and specifics remain classified.
Governance Gaps That Linger
Even as institutional enthusiasm for Project Maven has grown, the governance architecture surrounding military AI has not kept pace. The US AI Safety Institute, housed within the National Institute of Standards and Technology, focuses primarily on civilian applications. No equivalent body with statutory authority oversees AI in defence contexts. Congressional oversight exists through the armed services committees, but dedicated legislative frameworks specific to military AI remain limited.
International coordination is similarly underdeveloped. The Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, endorsed by the US and 50-plus states in 2023, is explicitly non-binding: it establishes norms but creates no enforcement mechanism and carries no legal obligation.
This gap between technological capability and governance infrastructure is not unique to the United States. But given the scale of American military AI investment — and the influence US doctrine exerts on allied militaries — the choices made in Washington carry disproportionate global weight.
What This Means
The institutionalisation of Project Maven signals that military AI in the US has passed a point of no return. The policy debate has shifted from whether to deploy AI in warfighting to how, under what constraints, and with whose oversight: questions that binding governance frameworks have yet to fully answer.
