Anthropic is testing AI’s capacity for sabotage

The Claude developers are eyeing potential misuse in current AI models.
 By 
Chance Townsend
 on 
Anthropic text surrounded by asterik logos
Credit: Mashable Composite; Anthropic / Claude

As the hype around generative AI continues to build, the need for robust safety regulations is only becoming more clear.

Now Anthropic—the company behind Claude AI—is looking at how its models could deceive or sabotage users. Anthropic just dropped a paper laying out their approach.

Anthropic’s latest research — titled "Sabotage Evaluations for Frontier Models" — comes from its Alignment Science team, driven by the company's "Responsible Scaling" policy.


You May Also Like

The goal is to gauge just how capable AI might be at misleading users or even "subverting the systems we put in place to oversee them." The study focuses on four specific tactics: Human Decision Sabotage, Code Sabotage, Sandbagging, and Undermining Oversight.

Think of users who push ChatGPT to the limit, trying to coax it into generating inappropriate content or graphic images. These tests are all about ensuring that the AI can’t be tricked into breaking its own rules.

In the paper, Anthropic says its objective is to be ready for the possibility that AI could evolve into something with dangerous capabilities. So they put their Claude 3 Opus and 3.5 Sonnet models through a series of tests, designed to evaluate and enhance their safety protocols.

The Human Decision test focused on examining how AI could potentially manipulate human decision-making. The second test, Code Sabotage, analyzed whether AI could subtly introduce bugs into coding databases. Stronger AI models actually led to stronger defenses against these kinds of vulnerabilities.

The remaining tests — Sandbagging and Undermining Oversight — explored whether the AI could conceal its true capabilities or bypass safety mechanisms embedded within the system.

For now, Anthropic’s research concludes that current AI models pose a low risk, at least in terms of these malicious capabilities.

"Minimal mitigations are currently sufficient to address sabotage risks," the team writes, but "more realistic evaluations and stronger mitigations seem likely to be necessary soon as capabilities improve."

Translation: watch out, world.

Headshot of a Black man
Chance Townsend
Assistant Editor, General Assignments

Chance Townsend is the General Assignments Editor at Mashable, covering tech, video games, dating apps, digital culture, and whatever else comes his way. He has a Master's in Journalism from the University of North Texas and is a proud orange cat father. His writing has also appeared in PC Mag and Mother Jones.

In his free time, he cooks, loves to sleep, and greatly enjoys Detroit sports. If you have any tips or want to talk shop about the Lions, you can reach out to him on Bluesky @offbrandchance.bsky.social or by email at [email protected].

Mashable Potato

Recommended For You
Anthropic: Chinese AI firms created 24,000 fraudulent accounts for 'distillation attacks'
Deepseek logo is displayed on a mobile phone screen with the flag of China in background

Not so fast: Anthropic and US military might do business after all
Anthropic logo

Anthropic challenges Department of War designation as AI dispute escalates
Anthropic logo on mobile device

Meet Claude Mythos: Leaked Anthropic post reveals the powerful upcoming model
Claude by Anthropic on smartphone

Anthropic used mostly AI to build Claude Cowork tool
Anthropic logo displayed on a phone screen and AI sign displayed on a screen

Trending on Mashable
NYT Connections hints today: Clues, answers for April 3, 2026
Connections game on a smartphone

Wordle today: Answer, hints for April 3, 2026
Wordle game on a smartphone


What's new to streaming this week? (April 3, 2026)
A composite of images from film and TV streaming this week.

You can track Artemis II in real time as Orion flies to the moon
Victor Glover and Reid Wiseman piloting the Orion spacecraft
The biggest stories of the day delivered to your inbox.
These newsletters may contain advertising, deals, or affiliate links. By clicking Subscribe, you confirm you are 16+ and agree to our Terms of Use and Privacy Policy.
Thanks for signing up. See you at your inbox!