In an age when every scroll brings a new headline, a dramatic image, or a chart “proving” something, knowing what to trust has never been harder. Disinformation has always existed—but today, it scales at the speed of AI. And the real danger, as Professor Iryna Gurevych argues, is not merely falsehoods—but misleading truths: statements or visuals that are technically accurate yet intentionally framed to nudge us toward the wrong conclusion.
In her deeply insightful and widely applauded Royal Society Milner Award Lecture 2025, titled “How to Spot and Debunk Misleading Content”, Prof. Gurevych takes the audience on a rigorous but accessible journey through the modern landscape of misinformation: how it works, why it works, why machines fall for it just like we do, and most importantly—what we can do about it.
This blog post introduces her lecture and distills its most potent ideas, examples, and warnings. If you plan to watch the video, consider this your roadmap. If you’ve already watched it, think of this as your field guide to return to.
Why Misleading Content Is So Effective
Misleading content is dangerous because it doesn’t announce itself as false. Unlike blatant lies, these pieces twist or repurpose true information in a way that quietly shifts our assumptions.
Prof. Gurevych offers three striking examples:
1. The Chart That Made GPT-5 Look Better Than It Was
When OpenAI unveiled GPT-5, they showcased a performance chart comparing the model to its predecessors. It looked impressive—dramatically so. But a closer look revealed mismatched numbers, identical bars assigned different values, and even a bar labeled 52 appearing higher than one labeled 69.
A subtle graphical manipulation—no fake numbers needed—was enough to convey a false story of huge improvement.
This single example captures a key insight:
Charts don’t lie, but chart designers can mislead.
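To see how easily this works, here is a purely illustrative matplotlib sketch (the model names and scores are made up, not the real GPT-5 figures). It draws one chart whose bar heights follow the data, and one whose printed labels are real but whose heights are hand-picked, so a bar labeled 52 can tower over one labeled 69.

```python
import matplotlib.pyplot as plt

# Illustrative numbers only; these are not the real GPT-5 figures.
models = ["Model A", "Model B", "Model C"]
scores = [69, 52, 75]          # the numbers printed on the chart
drawn_heights = [40, 60, 75]   # the heights actually drawn (deliberately inconsistent)

fig, (honest, misleading) = plt.subplots(1, 2, figsize=(8, 3))

# Honest version: bar heights are derived directly from the data.
honest.bar(models, scores)
honest.set_title("Heights match the numbers")

# Misleading version: the printed labels are true, the geometry is not.
misleading.bar(models, drawn_heights)
for x, (height, score) in enumerate(zip(drawn_heights, scores)):
    misleading.text(x, height, str(score), ha="center", va="bottom")
misleading.set_yticks([])  # hiding the axis hides the mismatch
misleading.set_title("Labels are true, heights are not")

plt.tight_layout()
plt.show()
```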
2. The Photo That “Proved” Trump Could Draw a Massive Crowd
A viral post declared:
“New Jersey is in play for Donald Trump. Could Joe Biden draw a crowd like this?”
The photo was breathtaking—hundreds of thousands of people filling a coastline. The implied message: Trump commands extraordinary public magnetism.
But the image was neither from New Jersey nor a political rally. It was a concert in Rio de Janeiro.
Misleading, not because the image was fake—but because the caption was.
3. A Scientific Paper Used to “Prove” Hydroxychloroquine Cured COVID-19
A social media post claimed hydroxychloroquine “worked for COVID-19” and attached a real, peer-reviewed study as evidence.
The study was legitimate… but from 2005.
It examined a different coronavirus.
And it reported in-vitro (cell-culture) results, not results from human trials.
All of this was true.
But none of it supported the claim.
This example illustrates a recurring pattern in misinformation:
Facts can be real.
Interpretations can be false.
How Professional Fact-Checkers Actually Work
Many assume fact-checking means “look for evidence that disproves the claim.” But this doesn’t work in real-world scenarios. Often, new claims appear before any counter-evidence exists.
Fact-checkers instead follow a more holistic reasoning process:
- Identify the source. Who is making the claim? Are they credible? What is their agenda?
- Gather context. Political stance, incentive structure, emotional framing.
- Examine cited evidence. Is it from an unrelated disease? An outdated study? A manipulated image?
- Evaluate reasoning, not just evidence. What assumptions are being smuggled in?
  - “Cell cultures have cells → humans have cells → results apply to humans.” → Fallacy of composition
  - “This drug worked on SARS-CoV-1 → it works on SARS-CoV-2.” → False equivalence
These are not trivial.
They require deep domain knowledge, contextual reading, and critical reasoning.
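To make that workflow concrete, here is a minimal sketch in Python. The `ClaimCheck` structure and the `review` logic are invented for this post as an illustration; they are not a tool from the lecture.

```python
from dataclasses import dataclass, field

@dataclass
class ClaimCheck:
    """One claim, examined the way the workflow above suggests."""
    claim: str
    source: str                 # who is making the claim, and with what agenda?
    context: list[str]          # political stance, incentives, emotional framing
    evidence: list[str]         # studies, images, or charts cited in support
    reasoning_flags: list[str] = field(default_factory=list)  # fallacies spotted

def review(check: ClaimCheck) -> str:
    """Toy verdict: a claim is only as strong as the weakest step in its argument."""
    if not check.evidence:
        return "unverifiable for now (common for brand-new claims)"
    if check.reasoning_flags:
        return "misleading: the evidence may be real, but the inference is not"
    return "plausible, pending expert review"

hcq = ClaimCheck(
    claim="Hydroxychloroquine works for COVID-19",
    source="anonymous social media account",
    context=["politically charged topic", "emotional framing"],
    evidence=["2005 in-vitro study on a different coronavirus"],
    reasoning_flags=["fallacy of composition", "false equivalence"],
)
print(review(hcq))  # -> misleading: the evidence may be real, but the inference is not
```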
Teaching Machines to Understand Misleading Content
A major theme of the lecture:
AI suffers from the same vulnerabilities humans do. Often worse.
Prof. Gurevych explains innovative attempts to close this gap:
- Creating datasets of misrepresented scientific claims.
- Training large language models to reconstruct fallacious arguments.
- Evaluating whether AI can detect logical fallacies on its own.
- Testing whether evidence biases AI toward false conclusions (it does).
Her team found:
🟥 Models classify false claims as true when provided with misleading evidence, and they do so with even greater confidence.
Why?
Because current AI systems excel at pattern-matching, not reasoning.
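A tiny, self-contained illustration of that failure mode (invented for this post, not the team's code): a "verifier" that only pattern-matches word overlap between claim and evidence will happily upgrade its verdict when handed the 2005 study.

```python
def overlap_verifier(claim: str, evidence: str | None = None) -> str:
    """A toy 'verifier' that trusts any evidence sharing enough words with the claim."""
    if evidence is None:
        return "unknown"
    shared = set(claim.lower().split()) & set(evidence.lower().split())
    return "true" if len(shared) >= 2 else "unknown"

claim = "hydroxychloroquine works against the covid-19 coronavirus"
misleading_evidence = (
    "a 2005 study found hydroxychloroquine inhibited a coronavirus in cell culture"
)

print(overlap_verifier(claim))                       # -> unknown
print(overlap_verifier(claim, misleading_evidence))  # -> true (surface match, wrong virus)
```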
Image Misinformation: “Image, Tell Me Your Story”
Analyzing deceptive images is even harder:
- Is the image new or reused?
- Where was it taken?
- When was it taken?
- Why was it captured?
- What does the scene actually depict?
To address this, her team created a method for automatically generating a “contextual story” of an image—mirroring how human fact-checkers think.
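As a rough sketch of what such a "contextual story" might look like as data, the snippet below recasts those questions as fields; the field names and structure are assumptions made for this post, not the paper's actual schema.

```python
from dataclasses import dataclass

@dataclass
class ImageStory:
    """The questions above, recast as fields a model would try to fill in."""
    reused: bool | None    # has this image circulated before (e.g. reverse-image search)?
    location: str | None   # where was it taken?
    date: str | None       # when was it taken?
    purpose: str | None    # why was it captured (rally, concert, stock photo, ...)?
    scene: str | None      # what does it actually depict?

# For the viral "New Jersey rally" photo, an honest story would look roughly like:
rio_photo = ImageStory(
    reused=True,
    location="Rio de Janeiro",
    date=None,             # not recoverable from the post alone
    purpose="a concert, not a political rally",
    scene="a very large crowd along a coastline",
)
```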
Key discovery:
🟩 AI models are surprisingly good at identifying locations.
🟥 But they struggle with recognizing events, time, and source credibility.
And crucially:
🟥 Models inherit biases from vision-language datasets (e.g., recognizing European landmarks better than African or Asian ones).
Misleading Charts: The Hidden Frontier
Perhaps the most fascinating part of the lecture involves charts—one of the least studied forms of misinformation.
Her team investigates:
- How easily AI can be fooled by chart distortions (spoiler: very easily)
- Ways to defend against misleading visuals (e.g., converting charts back into tables, then regenerating honest charts)
- Creating datasets of real and synthetic misleading visualizations
- Building detectors to warn users about chart manipulation
One especially powerful experiment:
When LLMs were shown misleading charts, their accuracy dropped below random chance.
Yet, simple transformations—like reconstructing the underlying table—dramatically improved reasoning.
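Here is a minimal sketch of that "chart to table" idea. The hand-typed values stand in for whatever a vision-language model would actually extract, and the numbers are made up for illustration.

```python
# Hand-typed stand-in for values a model would extract from the chart image.
extracted = [("Model A", 69), ("Model B", 52), ("Model C", 75)]  # illustrative numbers

def to_markdown_table(rows: list[tuple[str, int]]) -> str:
    """Render extracted (label, value) pairs as a plain table an LLM can reason over."""
    lines = ["| model | score |", "| --- | --- |"]
    lines += [f"| {label} | {value} |" for label, value in rows]
    return "\n".join(lines)

print(to_markdown_table(extracted))
# Reasoning over this table instead of the distorted image removes the visual
# manipulation: the numbers speak for themselves.
```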
What Makes This Lecture Stand Out
Beyond the technical contributions, Prof. Gurevych offers something rare:
a philosophical and humanistic reflection on truth itself.
In conflict zones, she notes, there may be no universal truth—only competing realities shaped by belief, experience, and emotion. Technology cannot solve that. But individuals can cultivate:
- Open-mindedness
- Curiosity
- Exposure to diverse people, cultures, and perspectives
- A habit of questioning emotional framing
- Awareness of our own cognitive filters
She ends with a call to action:
“Be the change you want to see in the world.”
Even if, she jokes, we can’t be entirely sure Gandhi actually said it.
Why You Should Watch This Lecture
If you’ve ever wondered:
- Why smart people fall for bad information
- Why AI models sometimes hallucinate falsehoods
- How to “read” images, charts, and scientific claims skeptically
- How misinformation evolves in the age of multimodal AI
- What tools researchers are building to defend society
- Why truth can be contextual, and why that matters…
…this lecture is for you.
It is not merely about misinformation.
It is about how we think, how machines think, and how both can fail.
More importantly, it’s about how we can build resilience—at the individual level, the societal level, and the technological level.
Final Thoughts
Prof. Gurevych’s lecture blends rigorous computer science, cognitive psychology, and real-world examples to show that misinformation is not a simple problem of “true vs. false.”
It is a problem of:
- Interpretation
- Context
- Intent
- Framing
- Design
- Bias
—and ultimately, critical thinking.
Before you watch the full lecture, keep one idea in mind:
Misleading content succeeds not by lying, but by guiding your mind to lie to itself.
This talk gives you the tools to fight back.