Google deletes policy against using AI for weapons or surveillance

The pledge had been in place since 2018.
By Amanda Yeo
The Google logo on a wall, above the slogan "Making AI helpful for everyone."
Credit: Jakub Porzycki / NurPhoto via Getty Images

Google has quietly deleted its pledge not to use AI for weapons or surveillance, a promise that had been in place since 2018.

As first spotted by Bloomberg, Google has updated its AI Principles to remove an entire section on artificial intelligence applications it pledged not to pursue. Significantly, Google's policy had previously stated that it would not design or deploy AI for use in weapons, or in surveillance technology that violates "internationally accepted norms."

Now it seems that such use cases might not be entirely off the table.

"There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape," read Google's blog post on Tuesday. "We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights. And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security."

While Google's post did concern its AI Principles update, it did not explicitly mention the deletion of its prohibition on AI weapons or surveillance. 

When reached for comment, a Google spokesperson directed Mashable back to the blog post.

"[W]e're updating the principles for a number of reasons, including the massive changes in AI technology over the years and the ubiquity of the technology, the development of AI principles and frameworks by global governing bodies, and the evolving geopolitical landscape," said the spokesperson.

Google's AI Principles listing the "Applications we will not pursue" as of Jan. 30. Credit: Screenshot: Mashable / Google

Google first published its AI Principles in 2018, following significant employee protests against Project Maven, the company's work with the U.S. Department of Defense. (Google had already infamously removed "don't be evil" from its Code of Conduct that same year.) Project Maven aimed to use AI to improve weapon targeting systems, interpreting video information to increase military drones' accuracy.

In an open letter that April, thousands of employees expressed a belief that "Google should not be in the business of war," and requested that the company "draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology."

The company's AI Principles were the result, with Google ultimately not renewing its contract with the Pentagon in 2019. However, it looks as though the tech giant's attitude toward AI weapons technology may now be changing.

Google's new attitude toward AI weapons could be an effort to keep up with competitors. Last January, OpenAI amended its own policy to remove a ban on "activity that has high risk of physical harm," including "weapons development" and "military and warfare." In a statement to Mashable at the time, an OpenAI spokesperson clarified that this change was to provide clarity concerning "national security use cases."

"It was not clear whether these beneficial use cases would have been allowed under 'military' in our previous policies," said the spokesperson.

Opening up the possibility of weaponized AI isn't the only change Google made to its AI Principles. As of Jan. 30, Google's policy listed seven core objectives for AI applications: "be socially beneficial," "avoid creating or reinforcing unfair bias," "be built and tested for safety," "be accountable to people," "incorporate privacy design principles," "uphold high standards of scientific excellence," and "be made available for uses that accord with these principles."

Now Google's revised policy has consolidated this list to just three principles, merely stating that its approach to AI is grounded in "bold innovation," "responsible development and deployment," and "collaborative progress, together." The company does specify that this includes adhering to "widely accepted principles of international law and human rights." Still, any mention of weapons or surveillance is now conspicuously absent.
