OpenAI terminates accounts of confirmed state-affiliated bad actors

The terminations are part of what's being called an "early detection effort."

By Chase DiBenedetto
[Image: The OpenAI and Microsoft logos reflecting each other on a dark screen. Credit: Jonathan Raa/NurPhoto via Getty Images]

OpenAI has confirmed that state-affiliated bad actors are using the company's tech for malicious purposes, a validation of what many have feared since the company's rise to prominence in the generative AI race.

The discovery comes as part of a collaboration with Microsoft Threat Intelligence, a community of thousands of security experts, researchers, and threat hunters who analyze and detect cyber threats.

Using the network's intelligence gathering, OpenAI discovered at least five confirmed state-affiliated actors that were using OpenAI services for querying open-source information, translating, finding coding errors, and running basic coding tasks, the company explained. The actors included two China-affiliated actors known as Charcoal Typhoon and Salmon Typhoon; an Iran-affiliated actor known as Crimson Sandstorm; a North Korea-affiliated actor known as Emerald Sleet; and a Russia-affiliated actor known as Forest Blizzard.

The accounts were said to be relying on OpenAI's services to bolster potential cyber attacks, but Microsoft did not detect any significant attacks employing the LLMs it monitors closely.

"These include reconnaissance, such as learning about potential victims’ industries, locations, and relationships; help with coding, including improving things like software scripts and malware development; and assistance with learning and using native languages," Microsoft explained. "Language support is a natural feature of LLMs and is attractive for threat actors with continuous focus on social engineering and other techniques relying on false, deceptive communications tailored to their targets’ jobs, professional networks, and other relationships."

Microsoft distinguished this announcement as an early-detection effort, intended to expose "early-stage, incremental moves that we observe well-known threat actors attempting."

The collaboration aligns with recent moves from the White House to require safety testing and government supervision for AI systems that could impact national and economic security, public health, and general safety. "While attackers will remain interested in AI and probe technologies' current capabilities and security controls, it's important to keep these risks in context," Microsoft wrote. "As always, hygiene practices such as multifactor authentication (MFA) and Zero Trust defenses are essential because attackers may use AI-based tools to improve their existing cyberattacks that rely on social engineering and finding unsecured devices and accounts."

While OpenAI admits that its current models are limited in their ability to detect cyber attacks, the company has committed to future security investments, including:

  • Investments in technology and teams, including its Intelligence and Investigations team and its Safety, Security, and Integrity team, to detect threats.

  • Collaborations with industry partners and other stakeholders to exchange information about malicious uses.

  • Continued public reporting of security threats and solutions.

"Although we work to minimize potential misuse by such actors, we will not be able to stop every instance," OpenAI wrote. "But by continuing to innovate, investigate, collaborate, and share, we make it harder for malicious actors to remain undetected across the digital ecosystem and improve the experience for everyone else."

Chase DiBenedetto
Social Good Reporter

Chase joined Mashable's Social Good team in 2020, covering online stories about digital activism, climate justice, accessibility, and media representation. Her work also captures how these conversations manifest in politics, popular culture, and fandom. Sometimes she's very funny.
