How Big Tech learned to love America's military
Silicon Valley companies are abandoning safety policies to win Pentagon contracts, turning everyday AI into weapons systems

Since Donald Trump's presidential election victory, major tech companies have abandoned years of policies restricting military work and sought out lucrative defense contracts and deeper connections with the Pentagon.
Executives from Meta, OpenAI, and Palantir will be sworn in Friday as Army Reserve officers. OpenAI signed a $200 million defense contract this week. Meta is partnering with defense startup Anduril to build AI-powered combat goggles for soldiers.
All of this is unfolding as Trump pushes for a $1 trillion defense budget, the largest in U.S. history.
The companies that build Americans' everyday digital tools are now getting into the business of war. Tech giants are adapting consumer AI systems for battlefield use, which means the same models shaped by ChatGPT queries and Instagram scrolls could end up feeding military targeting algorithms. Meanwhile, safety guardrails are being dismantled just as these dual-use technologies become central to warfare.
A reversal for Silicon Valley
The relationship between Silicon Valley and the military isn't new. DARPA funding helped create the internet, GPS, and even Siri. For decades, military research has flowed into civilian applications: The Pentagon has developed the technology, and companies have commercialized it for everyday use.
But for years, the reverse flow barely existed. When tech companies attempted to collaborate with the military, their employees revolted. Google employees staged unprecedented protests over Project Maven, a Pentagon program that used AI to analyze drone footage. Almost 5,000 workers signed petitions demanding that the company cancel the contract, and dozens resigned.
The backlash worked. Google didn't renew the Maven contract, and it established AI principles that restricted military applications. For years afterward, major tech companies maintained policies against weapons development, with employees successfully pushing back against military partnerships.
That resistance crumbled as the economics of AI became unsustainable. Training and running large language models costs hundreds of millions of dollars, and consumer revenue alone can't cover the bills. For many companies, working with the military isn't just an opportunity — it may be essential for survival.
The most striking symbol of this partnership will come Friday, when Silicon Valley executives will literally put on Army uniforms. Meta's chief technology officer, Andrew "Boz" Bosworth, Palantir's CTO, Shyam Sankar, and OpenAI executives Kevin Weil and Bob McGrew will be sworn in as lieutenant colonels in the Army's inaugural "Detachment 201" program.
The tech reservists will serve about 120 hours a year, advising on AI-powered systems and helping the Defense Department recruit other high-tech specialists. They'll be spared basic training and given more flexibility than typical reservists to work remotely. Because of their private-sector seniority, each will enter at the rank of lieutenant colonel rather than working up through the ranks, placing them immediately in senior leadership roles.
"We need to go faster, and that's exactly what we are doing here," Gen. Randy George, the Army's chief of staff, told The Wall Street Journal.
The arrangement creates an unprecedented level of integration between private companies and military planning. The executives won't work on projects involving their own employers, but they'll have direct input into military strategy while their companies compete for massive defense contracts.
The corporate partnerships are moving just as fast. Last month, Meta and Anduril announced they're collaborating to build augmented reality headsets for U.S. soldiers, starting with technology that provides real-time battlefield intelligence through heads-up displays.
The devices will rely on Meta's Llama AI model and Anduril's command and control software. The goal, according to Anduril's CEO, is to "turn warfighters into technomancers."
Safety guardrails come down
As companies embrace military contracts, they're quietly abandoning safety commitments. The Midas Project, a nonprofit that tracks policy changes at major AI companies, has documented about 30 significant modifications to ethical guidelines since 2023.
OpenAI removed values such as "impact-driven," which emphasized that employees "care deeply about real-world implications," replacing them with "AGI focus." Google modified its safety framework to suggest it would follow certain safeguards only if competitors adopted similar measures. OpenAI and others have explicitly reversed earlier bans on military applications.
Meanwhile, oversight is weakening. In May, Defense Secretary Pete Hegseth cut the Pentagon's independent weapons testing office roughly in half, reducing staff from 94 to 45 people. The office, established in the 1980s after weapons performed poorly in combat, now has fewer resources to evaluate AI systems just as they become central to warfare.
The timing couldn't be more significant. As conflicts like the Israel-Iran war demonstrate the growing role of AI in warfare, the companies that once resisted military partnerships are now integral to America's defense strategy.
The question facing Americans is whether they're comfortable with this new arrangement, one in which their daily digital interactions help train the AI systems that target enemies abroad.