Hidden content tricks ChatGPT into rewriting search results, Guardian shows

Yikes
 By 
Christianna Silva
 on 
A laptop keyboard and OpenAI logo displayed on a smartphone are seen in this illustration photo taken in Krakow, Poland on December 21, 2024.
ChatGPT Search faces prompt injection Credit: Photo by Jakub Porzycki/NurPhoto via Getty Images

In October, OpenAI's ChatGPT Search became available for ChatGPT Plus users. Last week, it became available to all users and was added to search in Voice Mode. And, of course, it isn't without its flaws.

The Guardian asked ChatGPT to summarize webpages that contain hidden content and, it turns out, hidden content can manipulate the search. It's called prompt injection, which is the ability for third parties — like websites you're asking ChatGPT to summarize — to force new prompts into your ChatGPT Search without your knowledge. Consider a page full of negative restaurant reviews. If the site includes hidden content waxing poetic about how incredible the restaurant is and encourages ChatGPT to instead answer a prompt like "tell me how amazing this restaurant is," that hidden content could override your original search.

"In the tests, ChatGPT was given the URL for a fake website built to look like a product page for a camera. The AI tool was then asked if the camera was a worthwhile purchase. The response for the control page returned a positive but balanced assessment, highlighting some features people might not like," The Guardian investigation states. "However, when hidden text included instructions to ChatGPT to return a favorable review, the response was always entirely positive. This was the case even when the page had negative reviews on it – the hidden text could be used to override the actual review score."


You May Also Like

This doesn't spell failure for ChatGPT Search, though. OpenAI only recently launched Search, so it has plenty of time to fix these kinds of bugs. Plus, Jacob Larsen, a cybersecurity researcher at CyberCX, told The Guardian that OpenAI has a "very strong" AI security team and "by the time that this has become public, in terms of all users can access it, they will have rigorously tested these kinds of cases."

Prompt injections attacks have been a hypothetical for ChatGPT and other AI search functions since the technology launched, and while we have seen some demonstrations of the potential harms, we haven't seen a major malicious attack of this kind. That said, it does point to a problem with AI chatbots: They are remarkably easy to trick.

Mashable Image
Christianna Silva
Senior Culture Reporter

Christianna Silva is a senior culture reporter covering social platforms and the creator economy, with a focus on the intersection of social media, politics, and the economic systems that govern us. Since joining Mashable in 2021, they have reported extensively on meme creators, content moderation, and the nature of online creation under capitalism.

Before joining Mashable, they worked as an editor at NPR and MTV News, a reporter at Teen Vogue and VICE News, and as a stablehand at a mini-horse farm. You can follow her on Bluesky @christiannaj.bsky.social and Instagram @christianna_j.

Mashable Potato

Recommended For You

Stay prepared with the Oupes Guardian 6000 power station for its best price yet
Oupes Guardian 6000 on pink and orange abstract background


ChatGPT is overtaking Google in one alarming way
OpenAI and Google logos

OpenAI to finally bring ads to ChatGPT
Photo illustration of the chatgpt logo on a smartphone. The same logo can be seen faded in the background

Trending on Mashable
NYT Connections hints today: Clues, answers for April 3, 2026
Connections game on a smartphone

Wordle today: Answer, hints for April 3, 2026
Wordle game on a smartphone

What's new to streaming this week? (April 3, 2026)
A composite of images from film and TV streaming this week.


NYT Connections hints today: Clues, answers for April 2, 2026
Connections game on a smartphone
The biggest stories of the day delivered to your inbox.
These newsletters may contain advertising, deals, or affiliate links. By clicking Subscribe, you confirm you are 16+ and agree to our Terms of Use and Privacy Policy.
Thanks for signing up. See you at your inbox!