OpenAI was hacked last year, according to a new report. Here's why it didn't tell the public.

The hacker stole details about how its AI technologies work.
By Kimberly Gedeon
OpenAI suffered a breach last year, according to a new report, but customer data was safe from prying eyes. Credit: T. Schneider / Shutterstock.com

A hacker snatched details about OpenAI's AI technologies early last year, The New York Times reported. The cybercriminal allegedly swiped sensitive information from a discussion forum where employees chatted about the company's latest models.

The New York Times was hush-hush about its sourcing, attributing the news to "two people familiar with the incident." Those sources maintain that the cybercriminal only breached the forum, not the core systems that power OpenAI's models and infrastructure.

OpenAI reportedly revealed the hack to employees during an all-hands meeting in April 2023. It also informed the board of directors. However, OpenAI executives decided against sharing the news publicly.


Why did OpenAI keep the breach under wraps?

According to The New York Times, OpenAI didn't reveal the hack to the public because information about customers was not stolen.

The company also did not share the breach with the FBI or any other law enforcement entities.

"The executives did not consider the incident a threat to national security because they believed the hacker was a private individual with no known ties to a foreign government," the newspaper said.

The New York Times' sources say that some OpenAI employees feared China-based adversaries could steal the company's AI secrets, posing a threat to U.S. national security.

Leopold Aschenbrenner, who led OpenAI's superalignment team (a unit focused on ensuring that AI doesn't get out of control) at the time, reportedly voiced similar concerns, warning that the company's security was lax and left it an easy target for foreign adversaries.

Aschenbrenner said he was fired earlier this year for sharing an internal document with three external researchers for "feedback." He contends the firing was unfair, saying he had scanned the document for sensitive information before sharing it, and adding that it's normal for OpenAI employees to seek a second opinion from outside experts.

However, The New York Times points out that studies conducted by Anthropic and OpenAI suggest that today's AI "is not significantly more dangerous" than search engines like Google.

Still, AI companies should ensure that their security is tight. Legislators are pushing for regulations that would slap hefty fines on companies whose AI technologies cause societal harm.

Kimberly Gedeon
East Coast Tech Editor

Kimberly Gedeon, at Mashable since 2023, is a tech explorer who enjoys doing deep dives into the most popular gadgets, from the latest iPhones to the most immersive VR headsets. She's drawn to strange, avant-garde, bizarre tech, whether it's a 3D laptop, a gaming rig that can transform into a briefcase, or smart glasses that can capture video. Her journalism career kicked off about a decade ago at MadameNoire where she covered tech and business before landing as a tech editor at Laptop Mag in 2020.
