Sunday, April 26, 2026

When No Information Can Be Trusted: How Societies Learn to Decide in the Age of Disinformation

From the Printing Press to Artificial Intelligence


Introduction: The End of Trust as We Knew It

Every generation believes it is living through an unprecedented crisis of truth. Today’s version is disinformation: algorithmically amplified falsehoods, synthetic media, deepfakes, and coordinated manipulation campaigns that make it seem as if no piece of information can be trusted. News outlets contradict each other. Experts reverse positions. Images can no longer be believed. Even firsthand testimony is suspect. The result is a creeping sense of epistemic despair: if nothing is reliable, how can anyone make informed decisions at all?

Yet history suggests something surprising. Periods of radical information disruption are not anomalies; they are recurring features of human civilization. Each time the way information is produced, distributed, or validated changes dramatically, societies enter a phase of confusion, polarization, and moral panic. But they do not collapse into permanent irrationality. Instead, they adapt. They develop new norms, institutions, heuristics, and decision-making strategies that work despite uncertainty.

To understand where we are going, we must first understand where we have been.

This essay compares our current disinformation crisis to earlier information revolutions—the printing press, the rise of mass newspapers, radio, television, and the early internet—and shows how decision-making evolves when trust erodes. The central argument is this: when information becomes unreliable, societies shift from truth-based decision-making to robustness-based decision-making. Truth never disappears, but it becomes something that must be tested, not simply believed.


I. The Printing Press: When Truth First Went Viral

In the mid-15th century, Johannes Gutenberg’s printing press transformed Europe. Before printing, information was slow, scarce, and expensive. Manuscripts were copied by hand. Errors were localized. Authority rested with institutions—churches, monarchies, guilds—because they controlled knowledge production.

Printing shattered this equilibrium.

Suddenly, pamphlets could be produced cheaply and distributed widely. Literacy expanded. Competing interpretations of scripture circulated freely. Political tracts, satire, conspiracy theories, and religious polemics flooded the public sphere. Martin Luther’s theses spread faster than the Catholic Church could respond. Rumors about Jews poisoning wells or heretics conspiring against society triggered violence.

Contemporaries described the moment in eerily familiar terms: too much information, too fast, from unreliable sources. Theologians worried that ordinary people could not distinguish truth from heresy. Authorities lamented the breakdown of epistemic order.

And they were right—temporarily.

The printing press did not immediately produce enlightenment. It produced chaos. Religious wars, propaganda battles, and mass paranoia followed. But out of this turmoil emerged new epistemic tools: peer review, standardized texts, scientific journals, libraries, and eventually the scientific method itself. Truth became something that required replication, citation, and community verification.

Key lesson: When information abundance overwhelms authority, societies respond by inventing processes for evaluating claims rather than relying on trusted sources alone.


II. Newspapers and Yellow Journalism: The Birth of Mass Manipulation

Fast forward to the 19th century. Industrialization enabled mass-circulation newspapers. Literacy rates soared. Urban populations craved news, and publishers discovered that sensationalism sold better than sober reporting.

The era of “yellow journalism” was born.

Newspapers routinely fabricated stories, exaggerated events, and inflamed public sentiment. The Spanish–American War was famously fueled by misleading headlines and false atrocity stories. Competing papers published mutually incompatible versions of reality. Political actors learned to manipulate narratives at scale.

Once again, observers declared the death of truth.

Yet something interesting happened. Readers adapted. They learned which newspapers exaggerated and which corrected themselves. They compared headlines across outlets. They became skeptical of anonymous claims. Meanwhile, journalism slowly professionalized. Codes of ethics emerged. Fact-checking became a norm—not because publishers were virtuous, but because credibility became economically valuable.

Importantly, no authority imposed truth from above. Instead, truth became a competitive advantage within a noisy ecosystem.

Key lesson: When manipulation becomes widespread, audiences evolve selective skepticism, and credibility becomes a resource that must be earned repeatedly.


III. Radio and Television: Centralized Truth and Its Illusions

Radio and television reversed the decentralization of print. Suddenly, millions heard the same voice at the same time. Governments and broadcasters recognized the power of this medium immediately. Radio unified nations—but also enabled propaganda on an unprecedented scale.

The most extreme example was Nazi Germany, where radio was used to create a closed epistemic environment. But even in democratic societies, television created the illusion of authoritative truth. Anchors spoke with confidence. Images felt real. Trust was outsourced to institutions again.

For a while, this worked. Shared narratives stabilized societies. But the cost was subtle: epistemic passivity. Audiences learned to consume rather than evaluate information. When trust was later violated—by wars based on false premises or scandals covered up by trusted broadcasters—the backlash was severe.

The collapse of trust in institutions during the late 20th century was partly a reaction to this era of centralized epistemic authority.

Key lesson: Centralized trust can stabilize societies temporarily, but it creates fragility. When the authority fails, the epistemic collapse is dramatic.


IV. The Early Internet: Hope, Naivety, and the Myth of Perfect Information

The early internet revived utopian dreams. Many believed that free access to information would automatically produce better decisions. With facts available to everyone, truth would triumph over ignorance.

This belief underestimated two things: human psychology and incentive structures.

The internet did not eliminate gatekeepers; it replaced them with algorithms optimized for engagement. It did not reward truth; it rewarded attention. As social media scaled, emotional, polarizing, and identity-affirming content spread faster than careful analysis. Disinformation did not merely persist—it flourished.

The shock many feel today comes from a mistaken assumption: that access to information equals access to truth. History suggests otherwise. Information abundance without epistemic tools produces confusion, not clarity.

Key lesson: Information access is not enough. Decision-making depends on how information is filtered, tested, and integrated into action.


V. When Truth Becomes Unreliable, Decision-Making Evolves

Across these historical moments, one pattern repeats: when trust in information collapses, people do not stop deciding. They change how they decide.

From Belief to Probability

In stable epistemic environments, people seek certainty. In unstable ones, they adopt probabilistic thinking—often intuitively rather than formally. Instead of asking “Is this true?” they ask “How likely is this, and what happens if I’m wrong?”
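
To make that second question concrete, here is a minimal sketch in Python. The probabilities and costs are invented placeholders, not a real model; the point is only that the comparison runs on consequences rather than on certainty about the claim.

    # A minimal sketch of "how likely is this, and what happens if I'm wrong?"
    # All probabilities and costs below are hypothetical, purely for illustration.

    def expected_cost(p_true: float, cost_if_true: float, cost_if_false: float) -> float:
        """Expected cost of an action, given a rough probability that the claim is true."""
        return p_true * cost_if_true + (1 - p_true) * cost_if_false

    # An uncertain claim (say, "the shipment is delayed") and two possible responses.
    p = 0.7  # rough, intuitive probability that the claim is true

    act_on_it = expected_cost(p, cost_if_true=10, cost_if_false=50)   # take precautions
    ignore_it = expected_cost(p, cost_if_true=500, cost_if_false=0)   # carry on as usual

    print(f"act on the claim:  expected cost {act_on_it:.0f}")
    print(f"ignore the claim:  expected cost {ignore_it:.0f}")
    # Acting wins here even though the claim is far from certain: the asymmetry
    # of consequences, not the truth of the claim, drives the decision.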

This shift mirrors the rise of probability theory itself, which gained prominence during periods of uncertainty in trade, navigation, and science.

From Authority to Convergence

When no single source is trusted, people look for convergence across independent systems. If markets, logistics networks, scientific instruments, and adversaries all behave as if something is true, confidence increases—even if official statements are unreliable.
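
A toy version of this convergence can be written down. Assuming the signals really are independent (a strong assumption), pooling them in log-odds space, naive-Bayes style, shows how several weak indicators add up to strong confidence. The numbers are invented for illustration.

    import math

    # Several independent signals, each only weakly informative on its own,
    # pooled in log-odds space. The signal strengths below are invented.

    def pool_independent(prior: float, signal_probs: list[float]) -> float:
        """Combine a prior with independent probability estimates via log-odds."""
        def logit(p: float) -> float:
            return math.log(p / (1 - p))
        total = logit(prior) + sum(logit(p) - logit(prior) for p in signal_probs)
        return 1 / (1 + math.exp(-total))

    # Four unrelated systems each nudge the odds that an event is real:
    # shipping data, market prices, sensor readings, an adversary's behavior.
    signals = [0.65, 0.70, 0.60, 0.75]
    print(f"pooled confidence: {pool_independent(prior=0.5, signal_probs=signals):.2f}")
    # Individually unimpressive signals, if genuinely independent, converge on a
    # confidence (about 0.95 here) far stronger than any one source provides.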

Reality reveals itself through constraints. You can lie in language, but not indefinitely in physics or economics.

From Narratives to Outcomes

In high-disinformation environments, narratives become cheap. Outcomes become expensive. People increasingly judge claims by their track record rather than their rhetoric. Who predicted correctly? Who adjusted when wrong? Who bore the cost of failure?

This is why traders, engineers, and field practitioners often outperform pundits in uncertain environments.
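
One way this intuition gets formalized is with proper scoring rules such as the Brier score, which rewards forecasters for being right at the confidence they claimed. The forecasts and outcomes below are invented; the only point is that track records can be measured, not just asserted.

    # A toy track-record comparison using the Brier score (lower is better).
    # Forecasts and outcomes are invented for illustration.

    def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
        """Mean squared error between stated probabilities and what actually happened."""
        return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

    outcomes = [1, 0, 1, 1, 0, 0]  # realized events: 1 happened, 0 did not

    confident_pundit = [0.95, 0.90, 0.95, 0.10, 0.85, 0.05]
    cautious_hedger  = [0.70, 0.40, 0.65, 0.60, 0.30, 0.35]

    print(f"pundit Brier score: {brier_score(confident_pundit, outcomes):.3f}")
    print(f"hedger Brier score: {brier_score(cautious_hedger, outcomes):.3f}")
    # Confident rhetoric scores badly when it is confidently wrong (0.392 vs 0.124 here);
    # a scored track record makes that visible in a way narrative never does.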


VI. Why Disinformation Does Not Scale Forever

It is tempting to believe that disinformation will eventually overwhelm society completely. History suggests this is unlikely.

Falsehood spreads easily, but it scales poorly.

Large-scale systems—supply chains, healthcare, infrastructure, ecosystems—must function according to reality. Persistent epistemic failure produces material consequences: shortages, collapses, losses. Groups that repeatedly make decisions based on false information lose power, wealth, or legitimacy.

This creates what might be called epistemic natural selection. Not everyone becomes rational—but systems that align better with reality outcompete those that do not.

Disinformation causes damage, but it also generates feedback. Over time, societies learn which epistemic strategies survive.


VII. The Present Moment: AI, Deepfakes, and Total Uncertainty

Artificial intelligence intensifies every historical pattern described above.

AI can generate plausible content in effectively unlimited volume. Images, audio, and video can no longer be taken at face value. Authority-based verification breaks down. Even experts can be fooled.

This feels unprecedented—but structurally, it is an acceleration, not a rupture.

What changes is speed and scale. What remains is adaptation.

We are already seeing early responses:

  • Emphasis on provenance and cryptographic verification (see the sketch after this list)

  • Greater skepticism toward viral content

  • Renewed interest in slow journalism and long-form analysis

  • Increased reliance on trusted personal networks rather than mass media
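
To make the provenance point less abstract, here is a minimal sketch of integrity checking with a published hash, using only Python's standard library. The filename and digest are hypothetical placeholders, and a matching hash proves only that the bytes are unchanged since publication, not that their content is true.

    import hashlib
    import hmac

    def sha256_of_file(path: str) -> str:
        """Hash a file in chunks so large media files need not fit in memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    # Digest published by the original source; value and filename are placeholders.
    published_digest = "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b"
    local_digest = sha256_of_file("press_briefing.mp4")

    # Constant-time comparison; any mismatch means this is not the published file.
    if hmac.compare_digest(local_digest, published_digest):
        print("digest matches the published value")
    else:
        print("digest mismatch: treat the file as altered or untrusted")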

Most importantly, decision-making is shifting away from epistemic certainty toward resilience under uncertainty.


VIII. The Future: Explicit Epistemic Tools

Historically, societies eventually formalize the strategies they develop informally. The future of decision-making in a disinformation-rich world will likely include:

  • Uncertainty-aware interfaces that present ranges rather than conclusions (a short sketch follows this list)

  • AI systems designed to challenge beliefs, not reinforce them

  • Institutional penalties for large-scale epistemic negligence, even when intent is ambiguous

  • Cultural norms that value updating beliefs over defending positions
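
As one small illustration of the first item, an uncertainty-aware presentation can be as simple as showing the spread across independent estimates instead of a single headline number. The figures below are invented.

    import statistics

    def summarize_range(estimates: list[float]) -> str:
        """Render a low / central / high summary instead of one point value."""
        low, high = min(estimates), max(estimates)
        central = statistics.median(estimates)
        return f"{central:,.0f} (range {low:,.0f} to {high:,.0f}, {len(estimates)} sources)"

    # e.g., independent estimates of attendance at the same event (invented numbers)
    estimates = [12_000, 18_500, 15_000, 21_000, 14_500]
    print("attendance:", summarize_range(estimates))
    # -> attendance: 15,000 (range 12,000 to 21,000, 5 sources)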

Truth will not disappear—but it will become less rhetorical and more operational. What matters will not be what sounds right, but what predicts correctly.


IX. The Paradox of Distrust

There is a final, counterintuitive lesson from history.

Societies do not become more rational because they trust more. They become more rational because they learn when not to trust.

Careful distrust—applied to claims, not people—forces testing, comparison, and humility. Blind trust enables manipulation. Blind cynicism enables paralysis. Between these extremes lies epistemic maturity.

The printing press did not destroy truth. It forced humanity to invent better ways of finding it. The same is happening now.


Conclusion: Truth After Belief

We are not living through the end of informed decision-making. We are living through the end of naïve epistemology—the belief that truth is something handed down by trusted voices.

In its place is emerging a harder, more demanding form of rationality. One that accepts uncertainty. One that values feedback over rhetoric. One that treats truth not as a possession, but as a process.

In a world where information cannot be trusted, truth survives not as belief, but as practice.

And history suggests that those who learn to practice it—slowly, skeptically, and adaptively—will shape whatever comes next.
