OpenAI supports California's AI-watermarking bill

The ChatGPT maker supports making AI provenance more transparent
By Cecily Mauran
OpenAI is all for a California law requiring invisible watermarks of AI-generated content. Credit: Andrey Rudakov / Bloomberg / Getty Images

OpenAI has expressed its support for a California bill requiring AI-generated content to be labeled as such.

According to Reuters, OpenAI sent a letter of support to California State Assemblymember Buffy Wicks, who authored the bill, titled the California Provenance, Authenticity and Watermarking Standards Act (AB 3211). The legislation, which has passed the state Assembly, would require AI companies to put an invisible watermark on all content made or "significantly modified" by their AI models. The bill next goes to a vote in the state Senate and, if passed, on to California Governor Gavin Newsom for review.

The bill would also require AI companies to provide "watermark decoders" so that users can easily identify whether content is AI-generated or not.
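To make the "invisible watermark" and "decoder" ideas concrete, here is a toy least-significant-bit sketch: it hides a short provenance tag in pixel data with at most a one-level brightness change per pixel, and a matching decoder recovers it. This is only an illustration of the concept; real provenance systems like C2PA manifests or Google's SynthID are far more sophisticated and tamper-resistant.

```python
# Toy "invisible watermark" plus decoder, hiding a tag in pixel LSBs.
# Illustrative only; not how C2PA or SynthID actually work.

def embed_watermark(pixels: list[int], tag: str) -> list[int]:
    """Hide `tag` in the least significant bits of pixel values."""
    bits = []
    for byte in tag.encode() + b"\x00":  # NUL terminator marks the end
        bits.extend((byte >> i) & 1 for i in range(8))
    if len(bits) > len(pixels):
        raise ValueError("image too small for this tag")
    out = pixels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the lowest bit
    return out

def decode_watermark(pixels: list[int]) -> str:
    """Recover the hidden tag from the pixel LSBs."""
    data = bytearray()
    for start in range(0, len(pixels) - 7, 8):
        byte = sum((pixels[start + i] & 1) << i for i in range(8))
        if byte == 0:  # hit the NUL terminator
            break
        data.append(byte)
    return data.decode()

image = [200, 201, 199, 180] * 50          # stand-in for grayscale pixel data
marked = embed_watermark(image, "AI-GEN")
print(decode_watermark(marked))            # -> AI-GEN
```

Because each pixel changes by at most one brightness level, the mark is imperceptible to viewers, which is why the bill pairs the watermark requirement with mandatory decoder tools.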



Image-generating models on the market vary in their photorealism, protective guardrails, and copyright protections. DALL-E 3 is OpenAI's latest text-to-image model. As of February, images generated by the model in ChatGPT contain C2PA metadata, which records the image's provenance. Similarly, Google has its own SynthID tool for watermarking images created by its model, Gemini. Grok-2, released by Elon Musk's xAI, appears to have the fewest restrictions, since it can generate images of public figures and copyrighted works. Midjourney, one of the most advanced image models, is currently embroiled in a legal battle over copyright infringement.

In the letter viewed by Reuters, OpenAI underscored the importance of transparency of the provenance of images and other AI-generated content. "New technology and standards can help people understand the origin of content they find online, and avoid confusion between human-generated and photorealistic AI-generated content," said OpenAI Chief Strategy Officer Jason Kwon in the letter.

The issue of AI-generated deepfakes and misinformation is especially pressing ahead of the upcoming U.S. presidential election. Already, AI-generated images of Kamala Harris speaking at a communist rally and Taylor Swift endorsing Donald Trump have circulated on social media.

Cecily Mauran
Tech Reporter

Cecily is a tech reporter at Mashable who covers AI, Apple, and emerging tech trends. Before getting her master's degree at Columbia Journalism School, she spent several years working with startups and social impact businesses for Unreasonable Group and B Lab. Before that, she co-founded a startup consulting business for emerging entrepreneurial hubs in South America, Europe, and Asia. You can find her on X at @cecily_mauran.
