Google promises it won't use AI to 'cause overall harm,' after employee rebellion

Google wants to apply its "don't be evil" ethos to AI.
By Karissa Bell
In Google's newly published AI principles, CEO Sundar Pichai promises the company won't use AI to cause harm or create weapons. Credit: Getty Images/Justin Sullivan

Google is attempting to apply its "don't be evil" ethos to artificial intelligence.

Today, CEO Sundar Pichai published a lengthy set of "AI Principles" in which he promises the company won't use AI to "cause overall harm" or create weapons.

Meant to address growing employee concerns about how the company approaches AI, the document includes "seven principles to guide our work going forward," as well as four "AI applications we will not pursue."

The latter group includes:

1. Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.

2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.

3. Technologies that gather or use information for surveillance violating internationally accepted norms.

4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

The move comes less than a week after the company announced it planned to sever ties with the Pentagon, after a Department of Defense contract known as Project Maven sparked internal protests. Employees were concerned that the company, once known for its slogan "don't be evil," was using its AI capabilities to help improve U.S. military drones.

"We believe that Google should not be in the business of war," employees wrote. "Therefore we ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology."

Though Pichai's principles don't address Project Maven directly, they do commit the company to not using AI to create weapons or "other technologies whose principal purpose" is to injure people.

Still, the CEO was careful to note that Google plans to continue working with the military "in many other areas."

"We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas," Pichai wrote. "These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue. These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe."

Critics were also quick to point out that the language contains more than a few loopholes, giving Google significant leeway in how it implements its new standards.

But while it's not clear yet whether these principles go far enough to address employees' concerns, the new rules could have an impact that reaches far beyond the walls of Google. Though other tech companies haven't faced the same level of criticism over military contracts, Google's move could pressure other companies to make similar commitments.

Beyond its work with the government, Pichai's principles also address other controversial areas of AI, including promises to "avoid creating or reinforcing unfair bias" and to "incorporate privacy design principles."

Issues like privacy and bias have become increasingly important as tech companies grapple with how to responsibly implement increasingly powerful AI tools. And while many experts have called for some type of AI ethics regulation, most companies have been figuring it out as they go along.

But by publicly committing to standards, even basic ones, Google could set an example for others to do the same.

Karissa Bell

Karissa was Mashable's Senior Tech Reporter, and is based in San Francisco. She covers social media platforms, Silicon Valley, and the many ways technology is changing our lives. Her work has also appeared in Wired, Macworld, Popular Mechanics, and The Wirecutter. In her free time, she enjoys snowboarding and watching too many cat videos on Instagram. Follow her on Twitter @karissabe.
