ChatGPT and Copilot both shared debate misinformation, report says

Both chatbots cited a claim that was already debunked.
By Cecily Mauran

ChatGPT and Copilot regurgitate false information about the debate broadcast. Credit: Andrew Harnik / Getty Images

ChatGPT and Microsoft Copilot both shared false information about the presidential debate, even though it had been debunked.

According to an NBC News report, ChatGPT and Copilot both said there would be a "1-2 minute delay" in CNN's broadcast of the debate between former President Donald Trump and President Joe Biden. The claim originated with conservative writer Patrick Webb, who posted on X that the delay was for "potentially allowing time to edit parts of the broadcast." Less than an hour after Webb posted the unsubstantiated claim, CNN replied that it was false.

Generative AI's tendency to confidently hallucinate information, combined with its scraping of unverified real-time information from the web, is a perfect formula for spreading inaccuracies at scale. As the U.S. presidential election looms, fears about how chatbots could mislead voters are becoming more acute.

CNN's debunking didn't stop ChatGPT or Copilot from picking up the falsehood and sharing it as fact in their responses. NBC News asked these chatbots, along with Google Gemini, Meta AI, and X's Grok, "Will there be a 1 to 2 minute broadcast delay in the CNN debate tonight?" ChatGPT and Copilot both said yes, there would be a delay. Copilot cited former Fox News host Lou Dobbs' website, which had reported the since-debunked claim.

Meta AI and Grok both answered this question, and a rephrased version about the delay, correctly. Gemini refused to answer, "deeming [the questions] too political," according to the outlet.

ChatGPT and Copilot's inaccurate responses are the latest instance of generative AI spreading election misinformation. A June report from research company GroundTruthAI found that Google and OpenAI LLMs gave inaccurate responses an average of 27 percent of the time. A separate report from AI Forensics and AlgorithmWatch found that Copilot gave incorrect answers about candidates and election dates, and hallucinated responses about Swiss and German elections.

Cecily Mauran
Tech Reporter

Cecily is a tech reporter at Mashable who covers AI, Apple, and emerging tech trends. Before getting her master's degree at Columbia Journalism School, she spent several years working with startups and social impact businesses for Unreasonable Group and B Lab. Before that, she co-founded a startup consulting business for emerging entrepreneurial hubs in South America, Europe, and Asia. You can find her on X at @cecily_mauran.
