Tuesday, February 3, 2026

Post 5 — How Scientific Fields Collapse: Lessons from Psychology, Genomics, Economics, and Cancer Research

One of the most uncomfortable insights in Smaldino & McElreath’s The Natural Selection of Bad Science is that scientific collapse is not an anomaly. It is a recurring, predictable, evolutionary outcome — a form of cultural extinction event triggered by misaligned incentives. Fields don’t collapse because bad people ruin them. Fields collapse because adaptation to the wrong incentives gradually hollows them out from the inside.

This post examines why entire scientific disciplines sometimes enter periods of crisis, retrenchment, or mass retraction — and why these collapses follow predictable patterns. We will walk through four influential examples:

  1. Social Psychology and the Priming Crisis

  2. Early Genomics and the Biomarker Bubble

  3. Macroeconomics and the Austerity Shock

  4. Cancer Biomarkers and the Reproducibility Meltdown

These case studies reveal the same evolutionary dynamics in action:

  • Incentives reward discoverability, not verifiability.

  • Labs evolve toward high-output, low-effort strategies.

  • Hype cycles amplify low-quality discoveries.

  • Replication is too slow and too weak.

  • The field enters an ecological collapse where noise drowns signal.

Understanding these collapses isn’t just historical curiosity — it’s a blueprint for diagnosing the health of scientific ecosystems today.


1. What Does It Mean for a Scientific Field to “Collapse”?

A scientific collapse is not a sudden event. It is a long, slow attrition of reliability.

You know a field is collapsing when:

  • Replication rates fall to levels indistinguishable from chance.

  • Key findings become unstable or contradictory.

  • Statistical tools are misused and normalized.

  • Methodological shortcuts become standard.

  • Top journals reward surprising results over robustness.

  • Industry partners lose trust in the field’s output.

  • Foundational theories must be rewritten or abandoned.

Collapses often culminate in:

  • mass retractions

  • major methodological reforms

  • devastating replication studies

  • an exodus of credibility

  • a shift in intellectual prestige to competing disciplines

The model by Smaldino & McElreath predicts exactly this process:
low-effort strategies multiply faster than corrective mechanisms can contain them.

Eventually, reliability becomes unsalvageable and the field must rebuild from scratch.


2. Case Study 1: Social Psychology and the Priming Collapse

Few scientific collapses are as famous, or as thoroughly documented, as the downfall of social priming research in the early 2010s.

The Incentives

  • Publish cute, surprising results.

  • Use small samples (cheap, fast).

  • Don’t preregister — flexibility helps results appear significant.

  • Maximize media attention.

This created the ideal evolutionary environment for:

  • low-effort experiments

  • flexible analysis (“the garden of forking paths”)

  • inflated false positives

  • publication bias

Psychologists weren’t malicious — they were adapting to their environment.

The Peak of the Bubble

Between 1995 and 2010, dozens of sensational papers emerged:

  • Priming people with the elderly stereotype made them walk slower.

  • Thinking about money made people less social.

  • Subtle cues could alter voting patterns.

Journals loved it. TED Talks loved it.
It was a golden age — for a while.

The Collapse

In 2011–2016:

  • Large-scale replication attempts failed spectacularly.

  • Many priming effects could not be reproduced even with much larger samples.

  • The field entered a crisis of credibility.

In 2012, Nobel laureate Daniel Kahneman warned priming researchers, in a widely circulated open letter, that he saw:

“A train wreck looming.”

Kahneman urged the field to “clean up their act,” but, as the model predicts, replication was too slow and too weak. The incentives remained unchanged for decades, allowing weak methods to evolve unchecked.


3. Case Study 2: Early Human Genomics — The Biomarker Bubble

Before genome-wide association studies (GWAS) imposed large samples and strict significance thresholds, the early 2000s saw a massive surge of “candidate gene” studies.

The Incentives

  • Link any gene to any disease with small samples.

  • Publish novel associations quickly.

  • Use lenient statistical thresholds.

  • Avoid replication because it’s expensive.

Small labs could produce dozens of “gene X predicts trait Y” papers each year.

The Outcome

By the late 2000s, meta-analyses were reaching a stark conclusion:

The overwhelming majority of published candidate gene associations, by some estimates over 90%, were false positives.

Why?

Because the field rewarded:

  • speed over sample size

  • novelty over rigor

  • positive results over null results

  • exploratory p-hacking over confirmatory research
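The scale of that failure rate follows from simple arithmetic. Below is a short sketch of an Ioannidis-style positive predictive value (PPV) calculation; all scenario numbers are illustrative assumptions, not estimates from the meta-analyses above.

```python
# Ioannidis-style positive predictive value (PPV): of all "significant"
# results, what fraction reflect true associations? The scenario numbers
# below are illustrative assumptions, not estimates from any study.

def ppv(prior, power, alpha):
    """Fraction of significant findings that are true positives."""
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

# Assumed scenario: 1 in 1,000 candidate genes truly associated,
# 30% power (small samples), alpha = 0.05, and no p-hacking at all.
v = ppv(prior=0.001, power=0.30, alpha=0.05)
print(f"PPV = {v:.3f}")  # prints: PPV = 0.006
```

Even before any flexible analysis, fewer than 1 in 100 “hits” would be real under these assumptions; p-hacking only inflates the effective alpha and drives the PPV lower still.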

The evolutionary analogy is clear:

  • Labs that produced splashy claims reproduced academically.

  • Labs that insisted on high effort died out.

The Collapse

By the mid-2010s, the field was forced to abandon most of its foundational claims.

GWAS later showed that:

  • Most common complex traits involve hundreds of genes.

  • Individual candidate genes rarely explain meaningful variance.

  • Many early associations were artifacts of population structure.

This collapse led to massive reallocations of funding and prestige.

But the damage was done: a decade of biomedical research had been built on unreliable foundations.


4. Case Study 3: Economics and the Reinhart & Rogoff Shock

Macroeconomics rarely faces replication pressure — yet it experienced one of the most famous data-driven collapses of the 21st century.

The Claim

Reinhart & Rogoff’s massively influential paper “Growth in a Time of Debt” (2010) claimed:

Countries with >90% debt-to-GDP ratio experience sharply reduced growth.

This result justified widespread global austerity policies.

The Incentives

  • Prestigious economists influence policy.

  • Top journals favor macro-wide conclusions.

  • Replication datasets are often restricted.

This created an ecosystem where high-impact results multiplied without robust verification.

The Collapse

In 2013, a graduate student and two economists at the University of Massachusetts Amherst (Herndon, Ash, and Pollin) replicated the analysis and found:

  • A spreadsheet coding error that omitted five countries from a key average

  • Selective exclusion of available data for several countries

  • An unconventional weighting scheme that gave long and short debt episodes equal weight

When the errors were corrected, the sharp growth cliff at 90% disappeared: average growth for high-debt countries was about 2.2%, not the slightly negative figure in the original.

The Fallout

Despite the correction:

  • Policy consequences had already played out.

  • Countries had adopted austerity measures.

  • Billions in economic decisions were made based on flawed evidence.

According to Smaldino & McElreath’s logic:

  • Replication came after the selective advantage (policy impact) was realized.

  • Thus, replication had no evolutionary power.

R&R’s original paper remained highly cited.

The field quickly moved on without structural reform.


5. Case Study 4: Cancer Biomarker Research — A Structural Meltdown

The cancer literature provides one of the clearest real-world confirmations of the model.

A famous 2012 commentary in Nature by Begley and Ellis reported that Amgen scientists had attempted to confirm 53 landmark preclinical cancer studies:

Only 6 of 53 could be reproduced.

A failure rate of roughly 89 percent.

This wasn’t a series of bad apples — it was systemic.

The Incentives

  • Pharma companies reward promising preliminary data.

  • Journals love breakthroughs.

  • Novel biomarkers attract enormous grants.

  • Clinical translation is slow, so failed replication is invisible.

Labs evolved toward maximum publication output:

  • flexible analyses

  • small sample sizes

  • no preregistration

  • selective reporting

  • low methodological effort

The Collapse

When industry groups attempted replication:

  • almost none of the biomarkers validated

  • many were statistical mirages

  • entire avenues of clinical trial design were affected

And yet, even after the collapse:

  • many labs continued to publish low-quality biomarker studies

  • replication remained rare

  • journals continued favoring novelty

Again, replication arrived too late and with too little force.


6. Why Collapses Follow Predictable Patterns

Across these fields, the same evolutionary mechanisms emerge:

(1) Incentives reward rapid, positive, surprising results.

This lowers the “effort threshold” for survival.

(2) Labs evolve low-effort, high-output strategies.

These labs have higher reproductive fitness.

(3) Noise accumulates faster than replication can eliminate it.

False positives proliferate exponentially.

(4) Replication attempts appear late, often after a field matures.

The ecosystem is already saturated with unreliable findings.

(5) Replication has weak punitive power.

Failed replications do not harm lab survival.

(6) The field hits a tipping point where signal-to-noise ratio collapses.

Once noise dominates, theory collapses.

(7) A painful reform period begins.

Reforms often include:

  • preregistration

  • large-scale consortia

  • stricter statistical norms

  • data sharing

  • high-powered studies

  • adversarial collaborations

This is the “ecological reset” phase — analogous to a burned forest slowly regrowing.
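The dynamic in steps (1) through (6) can be made concrete with a toy simulation. This is a deliberately simplified caricature of the Smaldino & McElreath model, with every parameter value invented for illustration: labs have a methodological “effort” level, lower effort buys more (but lower-powered) studies, and selection copies the strategies of the most-published labs.

```python
# Toy sketch of selection on publication count (a simplified caricature
# of the Smaldino & McElreath model; all parameters are assumptions).
import random

def run(generations=100, n_labs=100, seed=1):
    """Evolve a population of labs; return the final mean effort level."""
    rng = random.Random(seed)
    efforts = [rng.uniform(0.1, 1.0) for _ in range(n_labs)]
    for _ in range(generations):
        pubs = []
        for e in efforts:
            n_tests = int(10 / e)             # low effort -> more, cheaper studies
            power = 0.2 + 0.6 * e             # high effort -> better-powered studies
            p_hit = 0.1 * power + 0.9 * 0.05  # assume 10% of hypotheses are true
            pubs.append(sum(rng.random() < p_hit for _ in range(n_tests)))
        # Selection: the 10 least-published labs are replaced by noisy
        # copies of the 10 most-published labs' strategies.
        order = sorted(range(n_labs), key=lambda i: pubs[i])
        for lo, hi in zip(order[:10], order[-10:]):
            efforts[lo] = min(1.0, max(0.1, efforts[hi] + rng.gauss(0, 0.02)))
    return sum(efforts) / n_labs

print(f"mean effort before selection: {run(generations=0):.2f}")
print(f"mean effort after selection:  {run(generations=100):.2f}")
```

Under these assumptions, mean effort drifts steadily toward its floor even though no lab ever cheats deliberately. The qualitative decline, not the specific numbers, is the point.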


7. Why Some Fields Avoid Collapse

Not all fields collapse. Some stay robust.

Fields that avoid collapse tend to have:

1. Strong replication culture

(e.g., physics, some areas of chemistry)

2. Large, expensive experiments where p-hacking is impossible

(e.g., particle physics, astrophysics)

3. Community-wide data sharing

(genetics after 2010)

4. Strict statistical conventions

(e.g., neuroimaging after the “dead salmon” paper)

5. No reward for novelty without rigor

(e.g., clinical trial pipelines)

Fields with these traits experience slow, stable cumulative progress.

They are evolutionarily stable strategies under the model.


8. Warning Signs: Is a Field Approaching Collapse?

Based on the model and history, signs of impending collapse include:

  • rapid proliferation of positive results

  • high publication volume with small sample sizes

  • widespread p-values just below 0.05

  • low rates of data sharing

  • lack of preregistration

  • theoretical fragmentation (each lab has its own model)

  • replication studies consistently failing

  • large gaps between media claims and real effects

  • heavy reliance on “hidden moderators” to explain failures

If multiple signs appear simultaneously, the field may be entering a pre-collapse trajectory.
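One of these warning signs, the pile-up of p-values just under 0.05, can be screened for mechanically. The sketch below is a crude “caliper test” over reported p-values (the example values are hypothetical); it is a screening heuristic, not a substitute for a full p-curve analysis.

```python
# Crude caliper test: compare how many reported p-values fall just below
# the significance threshold versus just above it. A large excess below
# is a red flag for selective reporting. Example p-values are made up.

def caliper_ratio(p_values, threshold=0.05, width=0.01):
    """Ratio of p-values just below the threshold to those just above."""
    below = sum(1 for p in p_values if threshold - width <= p < threshold)
    above = sum(1 for p in p_values if threshold <= p < threshold + width)
    return below / above if above else float("inf")

# Hypothetical literatures: a healthy field shows no pile-up at .049,
# a p-hacked one does.
healthy = [0.001, 0.012, 0.030, 0.044, 0.055, 0.210, 0.470]
suspect = [0.041, 0.044, 0.046, 0.048, 0.049, 0.049, 0.055]

print(caliper_ratio(healthy))  # prints: 1.0 (no excess below threshold)
print(caliper_ratio(suspect))  # prints: 6.0 (pile-up just under .05)
```

A ratio far above 1 across a literature suggests that analyses were flexed until results crossed the line; real caliper tests use much larger samples of p-values and formal statistics.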


9. Lessons for Scientists Today

The collapses above are not moral failures.
They are adaptive responses to maladaptive incentives.

The key lesson:

If a field rewards volume over rigor, it will evolve toward low effort and eventually collapse.

Smaldino & McElreath capture this evolutionary truth with mathematical precision:
selection pressures shape scientific methods as surely as natural selection shapes beak sizes.

To preserve scientific integrity, we must shift fitness away from speed and novelty, and toward accuracy, transparency, and theoretical stability.


10. Coming Up Next

Post 6 — The Role of Hype Cycles: How Media, Journals, and Funding Agencies Accelerate the Spread of Bad Science

This post will explore:

  • Why hype amplifies low-quality discoveries

  • The social psychology of “breakthrough culture”

  • How journalists, TED Talks, and grant committees shape the evolution of scientific methods

  • Historical examples of hype-driven bubbles

  • How hype interacts with the evolutionary model in the paper
