Adobe users are outraged over vague new policy's AI implications

Is AI peeking at your works in progress?
By Cecily Mauran
Users are not happy about Adobe's new Terms of Service. Credit: Anadolu / Getty Images

UPDATE: Jun. 7, 2024, 9:44 a.m. EDT: Adobe has issued a statement clarifying that it does not train its Firefly AI models on unpublished user content. You can read more about the statement here.

Changes to Adobe's Terms of Service have users confused and outraged that their work — even unpublished and in-progress projects — may be used to train AI models.

Users of various Adobe apps including Photoshop and Substance Painter received a pop-up notice on Wednesday saying "we may access your content through both manual and automated methods, such as for content review."

The updated section of Adobe's Terms of Service, which took effect on February 17, 2024, reads:

"Our automated systems may analyze your Content and Creative Cloud Customer Fonts (defined in section 3.10 (Creative Cloud Customer Fonts) below) using techniques such as machine learning in order to improve our Services and Software and the user experience."

The language is vague, but the specific mention of "automated systems" and of using "machine learning in order to improve our Services and Software" immediately drew concerns that users' creative work would be used as training data for Adobe's AI tools.

Aside from the implication that any and all user content would be fodder for training data without credit or compensation, there's the specific privacy concern for users working with confidential information. "I can't use Photoshop unless I'm okay with you having full access to anything I create with it, INCLUDING NDA work?" posted artist @SamSantala on X.

On a separate page that explains how Adobe uses machine learning, Adobe says it doesn't analyze content stored locally on your device, only content stored in Creative Cloud. Beyond that, content users make public, such as contributions to Adobe Stock, submissions to be featured on Adobe Express, and images used as tutorials in Lightroom, is used to "train [Adobe's] algorithms and thus improve [its] products and services."

Such uses of public content have been in place since Adobe launched its AI model Firefly, which generates images and powers other AI features like Generative Fill. Adobe touts Firefly as commercially safe, but has also said Firefly was trained on public domain data, which includes AI-generated images from its competitor Midjourney — a product that artists allege was the result of copyright infringement.

All that's to say, gathering training data for AI models is a murky issue that has made it difficult for creatives and companies alike to trace copyrighted content and prevent unauthorized works from seeping into model training. And that has undermined Adobe's deployment of purportedly ethical AI features and put customers' trust in jeopardy.

To be clear, Adobe's latest policy change has not been conclusively shown to expose users to privacy invasions, but users are understandably concerned at even a hint that their private work may be accessible to Adobe's AI models. The new Terms of Service make no explicit mention of Firefly or AI training data, but the update says Adobe may need to access user content to "detect, prevent, or otherwise address fraud, security, legal, or technical issues" and to enforce its Terms, which ban illegal or abusive content such as child sexual abuse material. This may mean that Adobe monitors user content for specific violations.

But the language used, including broad allusions to machine learning for "improving" Adobe tools, taps into concepts the privacy-minded have justifiably become wary of at a very sensitive moment.

Mashable has reached out to Adobe for clarification and will update this story if we hear back.

Cecily Mauran
Tech Reporter

Cecily is a tech reporter at Mashable who covers AI, Apple, and emerging tech trends. Before getting her master's degree at Columbia Journalism School, she spent several years working with startups and social impact businesses for Unreasonable Group and B Lab. Before that, she co-founded a startup consulting business for emerging entrepreneurial hubs in South America, Europe, and Asia. You can find her on X at @cecily_mauran.

