Google has finalized a controversial agreement with the Pentagon, allowing its Gemini AI models to be used within the U.S. military’s classified networks for 'any lawful purpose.' The deal follows similar arrangements with OpenAI, xAI, Nvidia, Microsoft, and Amazon, and marks a significant shift in the tech giant’s approach to military collaboration.
Employee Backlash Falls Short
Nearly 600 Google employees signed an open letter opposing the deal, echoing concerns raised during the 2018 Project Maven controversy, when employee protests pushed Google to walk away from a Pentagon AI initiative. This time, however, Google’s leadership has remained steadfast, dismissing internal dissent and affirming its commitment to military partnerships.
'Google proudly works with the U.S. military and plans to continue to do so,' the company stated in a memo to staff.
Tech workers’ diminished influence is widely attributed to the industry’s waves of layoffs and cost-cutting, which have eroded employee leverage. Unlike in 2018, petitions and threats of resignation are no longer enough to sway corporate decisions.
Broader Implications for AI and National Security
Critics argue that Google’s stance reflects a broader trend of tech companies prioritizing government contracts over ethical concerns. The removal of a pledge against developing AI for weapons or surveillance from Google’s AI principles in 2025 underscores this shift. Laura Nolan, a former Google employee who resigned over Project Maven, noted that workers today are increasingly uneasy about contributing to military systems.
Meanwhile, Anthropic, the only major AI lab to refuse similar Pentagon terms, faces designation as a 'supply chain risk,' with the military ordered to cease using its products within six months. Anthropic is challenging this designation in court.
As Google doubles down on its Pentagon deal, the tech industry’s relationship with the U.S. military continues to evolve, raising questions about the future of AI ethics and worker accountability.
