AI-Generated Misinformation: When the Fake Quotes Start Rolling In

Bryna Dilman

The New Shape of Misinformation

In the past, a communications team could rely on a few basic truths. 

If a quote appeared in a major outlet, someone had likely said it. If a journalist published a claim, they could usually be contacted for clarification. And if a narrative began to spread, its origin could be traced. 

But as generative AI tools become more widespread, those certainties are beginning to erode.

We have entered an era where public figures are being quoted for things they never said, in articles for which they never gave interviews. These are not malicious forgeries created by bad actors in the traditional sense. They are the byproduct of large language models trained on vast swaths of the internet, capable of producing convincing news-style content on demand. 

What makes this shift uniquely challenging for media teams is not just the speed or scale of misinformation, but the way it can seem believable simply because it looks and sounds like real journalism.

In recent months, several executives and spokespersons have found themselves issuing public corrections for AI-generated content that was never published by any credible source but was shared as a screenshot or excerpt in public discourse. In some cases, the original AI-generated material was prompted by users attempting to summarize a company’s stance on an issue or generate hypothetical responses. In others, the text was treated as genuine and circulated widely before anyone thought to verify it.

Pope Leo XIV is among those who have had to disown AI-fabricated quotes that were attributed to them. (Edgar Beltrán | The Pillar)

This isn’t just a reputational issue. It is a logistical one. Every instance of misinformation demands time, context-gathering, internal confirmation and public clarification. The resource burden is significant, and few media teams have excess capacity for dealing with narratives that originate not from a journalist or outlet, but from a synthetic approximation of one.

The Challenge of Plausible Fabrication

What distinguishes AI-generated misinformation from traditional fake news is the blurred intent. While disinformation campaigns pursue a strategic goal, whether undermining trust, distorting public opinion or creating confusion, AI-generated content is often the result of low-friction, speculative prompting. A user might ask a chatbot what a CEO could plausibly say about a recent event. Within seconds, they have a quote in formal language, attributed to the individual, and styled in a way that mimics legitimate coverage. It is easy to imagine how this could be used unintentionally in a blog post, cited by another language model or passed along in a team message without context. 

The result is the same: A media team is now responsible for correcting a fabricated claim.

Even if the initial quote is not picked up by a reputable outlet, its format and tone give it credibility. Screenshots of AI-generated content can circulate on social media, land in internal briefings or trigger questions from stakeholders. Without context, these claims are easily accepted as true. Once a false quote enters the information ecosystem, it can echo through new articles, influencer commentary and even competitor messaging.

The consequences go beyond temporary confusion. AI-generated misinformation can introduce reputational risk that lingers long after the original content has been discredited. It can prompt journalists to follow up on invented controversies, force executives to address fictional remarks in real interviews and divert communications resources toward cleanup rather than strategy. 

For teams already managing fast-moving news cycles, the added burden of correcting fabricated narratives can be substantial.

The Need for Institutional Memory

Traditional tools like media-monitoring software or search engine alerts are poorly equipped to detect this kind of problem. If the content was never published on a known outlet, it will not appear in most dashboards. If it lives only in screenshot form or circulates via internal chat channels, the signal can be nearly impossible to trace. 

What media teams need is not just monitoring, but memory. The ability to look back at exactly what was said, when, and to whom becomes essential when the task is not responding to real quotes but disproving fabricated ones.

Maintaining a centralized, verifiable archive of communications is now a core requirement for any media team managing reputational risk. Broadsight supports this by providing structured records of journalist interactions, approved statements and source material. When a questionable quote surfaces, teams can quickly verify whether it was ever issued and respond with confidence. While no system can prevent misinformation from being created, having accurate records in place limits its ability to spread unchecked.
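For the technically inclined, the verification step can be illustrated in miniature. The sketch below is purely hypothetical, not Broadsight's actual implementation: it fuzzy-matches a suspect quote against a small archive of approved statements using only Python's standard library, and the statements, threshold and function names are invented for illustration.

```python
from difflib import SequenceMatcher

# Hypothetical archive of approved, on-the-record statements.
APPROVED_STATEMENTS = [
    "We are reviewing the matter and will share findings when our audit concludes.",
    "Our priority remains the safety of our customers and employees.",
]

def best_match(suspect_quote: str, archive: list[str]) -> tuple[float, str]:
    """Return the similarity score (0-1) and closest approved statement."""
    scored = [
        (SequenceMatcher(None, suspect_quote.lower(), s.lower()).ratio(), s)
        for s in archive
    ]
    return max(scored)

score, closest = best_match(
    "We are reviewing the matter and will share findings soon.",
    APPROVED_STATEMENTS,
)

# A low score suggests the quote was never issued in any form on record;
# a high score points to the genuine statement it may be distorting.
# The 0.8 cutoff is an arbitrary illustrative threshold.
if score < 0.8:
    print(f"No close match on record (best: {score:.2f}); treat as unverified.")
else:
    print(f"Closely resembles an approved statement: {closest!r}")
```

In practice a real system would also record dates, audiences and approval status alongside each statement, which is what makes the "what was said, when, and to whom" lookup possible.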

Having a structured archive also improves readiness. Teams can analyze how similar issues were handled in the past, identify preferred spokespeople and track how narratives evolved. If an AI-generated quote mimics the language of a previous statement, having that context readily available allows comms professionals to clarify the differences and protect against misinterpretation.

A Strategic Shift in Comms Posture

The age of AI-generated content requires a shift in posture from media and communications teams. The risk is no longer confined to what was said publicly or leaked accidentally. It now includes what could plausibly have been said, and what an algorithm might assert in a convincing tone. 

Navigating that ambiguity requires more than vigilance. It demands systems that retain institutional memory and give comms teams the confidence to push back.

It also requires a mindset shift. Visibility alone is no longer enough; verification must become the priority. Public messaging isn’t just about controlling the narrative, but about maintaining clarity over what has and hasn’t been said. In the absence of a clear internal record, even a minor fabrication can spiral into a damaging distraction.

What was once a rare crisis scenario is fast becoming routine. The quote might be fabricated. The damage, unfortunately, is not.

To see how Broadsight helps you keep the record straight so you can set the record straight, visit Broadsight.ca.

Sign up below and we’ll be in touch with monthly updates about Broadsight, along with news and insights to keep you on the cutting edge of communications work in an AI era.