Friday, February 6, 2026

Post 8: The Evolution of Fraud—Why Misaligned Incentives Make Cheating Inevitable

 Based on Smaldino & McElreath (2016), “The Natural Selection of Bad Science”


Introduction: Fraud Doesn’t Begin With Villains—It Begins With Incentives

When people imagine scientific fraud, they picture caricatures:
a rogue scientist faking data to become famous; a malicious figure cooking numbers for personal gain.

Reality is much more subtle—and much more disturbing.

Fraud evolves.

Just as biological traits arise in response to environmental pressures, fraudulent behaviors arise within scientific ecosystems because those behaviors confer a competitive advantage under certain institutional conditions. Smaldino & McElreath’s paper argues that the same incentives that select for poor methodological rigor also select for increasingly bold forms of cheating.

This evolutionary view challenges the idea that individual misconduct is the root of the crisis. Instead:

Fraud is an adaptive response to misaligned incentives, not a personal flaw in an otherwise healthy system.

In this post, we explore:

  • How small methodological shortcuts evolve into systemic fraud

  • Why fraud emerges even among people with good intentions

  • Historical and modern examples of fraud evolution

  • How fraudulent strategies spread within academic lineages

  • Why policing fraud is so difficult in an ecosystem selecting for it

  • And critically: how we can redesign incentives to prevent fraud’s natural selection


1. The Continuum of Cheating: From Innocent Flexibility to Full Fabrication

Fraud rarely begins with outright forgery. It evolves gradually through a dangerous continuum:

Stage 1: Innocent flexibility

Researchers try multiple statistical models because “they want to understand the data better.”

Stage 2: Selective reporting

Negative results are dropped “because the journal won’t accept them.”

Stage 3: HARKing (Hypothesizing After Results are Known)

Researchers rewrite hypotheses post-hoc to match significant results.

Stage 4: Data massaging

Removing outliers that “don’t make sense,”
or reclassifying categories to achieve significance.

Stage 5: Fabrication-lite

Inventing a few missing values, adjusting means slightly, or copying data points to “fix noise.”

Stage 6: Full data fabrication

Creating entire datasets from imagination.

The key insight is this:
At each step, competitive advantage increases while detection risk remains low.
Evolution always favors strategies with the highest payoff relative to cost.

Misaligned incentives—publish or perish, novelty over accuracy, prestige over honesty—act as selective pressures moving individuals along this continuum.
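
The selection logic of this continuum can be made concrete with a toy expected-payoff calculation. Every number below is invented for illustration (it is not from Smaldino & McElreath): as long as detection probabilities stay tiny, each escalation pays more than the last, until the penalty for full fabrication finally bites.

```python
# Toy expected-payoff sketch for the cheating continuum.
# All rewards, detection probabilities, and penalties are hypothetical.
stages = [
    # (stage, career reward, detection probability, penalty if caught)
    ("innocent flexibility", 1.0, 0.001,   2),
    ("selective reporting",  2.0, 0.005,   5),
    ("HARKing",              3.0, 0.010,  10),
    ("data massaging",       4.0, 0.020,  20),
    ("fabrication-lite",     5.0, 0.030,  40),
    ("full fabrication",     7.0, 0.050, 100),
]

for stage, reward, p_detect, penalty in stages:
    # expected net payoff = reward minus expected cost of getting caught
    expected = reward - p_detect * penalty
    print(f"{stage:21s} expected payoff = {expected:5.2f}")
```

With these invented numbers, the expected payoff rises through every stage short of full fabrication; raising the detection probabilities is the only lever that reverses the ordering.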


2. Why Good People Drift Toward Bad Science

People do not enter science as cheaters. They enter as idealists.
But as evolutionary biologists know, behavior adapts to the environment.

Three forces push researchers toward unethical behavior:


2.1 Selection for productivity over accuracy

A researcher who produces 12 papers a year—thanks to flexible methods or data manipulation—is more likely to get:

  • job offers

  • grants

  • tenure

  • speaking invitations

  • media attention

Meanwhile, the careful, slower researcher is deemed “less productive.”

This is pure Darwinian selection, not moral selection.


2.2 Lack of punishment mechanisms

In nature, cheating thrives when policing is absent.
In academia:

  • Fraud detection rates are extremely low

  • Replication rarely occurs

  • Retractions are rare and slow

  • Institutions protect successful researchers

  • Journals avoid scandals to protect their reputation

Low policing + high reward = the perfect conditions for cheating to thrive.


2.3 Cognitive dissonance and rationalization

Once minor cheating yields rewards, researchers begin to rationalize:

  • “Everyone does it.”

  • “The result is basically true.”

  • “This helps me survive until tenure.”

  • “The reviewers won’t understand anyway.”

  • “I know the effect is real—I just need cleaner numbers.”

This psychological lubricant allows unethical behavior to seem justified.


3. Fraud Evolves Because It Works

Smaldino & McElreath’s core argument is simple and devastating:

The system selects for those who succeed—not those who are honest.

Every generation of researchers learns from the successful.

And who is successful?

The ones who:

  • publish frequently

  • produce flashy claims

  • get into prestigious journals

  • secure big grants

  • attract media coverage

If these successes are achieved through questionable practices, then those practices become heritable—not genetically, but culturally, through lab training and mentoring.


4. Fraud Spreads Through Academic Lineages

Just as biological traits spread through reproduction, research practices spread through academic genealogy.

4.1 “Descendants” adopt their mentors’ strategies

If a PI produces statistically improbable results regularly, their trainees absorb:

  • their data methods

  • their publication strategies

  • their analysis shortcuts

  • their attitude toward p-values and significance

  • their tolerance for exaggeration

This creates academic “lineages” with distinct methodological cultures.

The “lineage effect”

Just as some animal behaviors are transmitted socially from one generation to the next, research practices are transmitted from mentors to trainees.
Labs, in other words, have cultural inheritance.

Good practices and bad practices propagate through lineages.

4.2 Fraud clusters geographically and institutionally

Just as infections spread through populations, fraud patterns cluster:

  • similar manipulation techniques

  • same statistical artifacts

  • same impossible distributions

  • same writing styles

  • same figure duplications

These clusters reveal that fraud is not random—it is learned.


5. Historical Examples: Fraud Evolution in Action

Fraud is not new, but it is increasingly detected in clusters, consistent with evolutionary models.


5.1 The Cyril Burt IQ scandal

Cyril Burt published “twin studies” claiming extremely high heritability of intelligence.
Later, investigators found:

  • nonexistent coauthors

  • fabricated correlations

  • copied data patterns

For decades, he thrived.
His fraudulent work shaped education policies.
And his students inherited his methods.

This is classic evolutionary propagation: successful phenotype → expanded lineage.


5.2 Anil Potti (Duke University cancer genomics)

Potti published numerous high-impact cancer biomarker papers.
Later:

  • analyses showed fabricated patient data

  • bioinformatic methods were manipulated

  • clinical trials were influenced

His lab’s success created a generation of scientists trained on toxic practices.


5.3 Diederik Stapel (Social psychology)

Stapel produced extremely clean datasets that were “too perfect.”
His fraud persisted because:

  • he trained students with similar data expectations

  • his results matched reviewers’ theoretical biases

  • replication was rare

The ecosystem protected him.


5.4 Yoshinori Watanabe (Cell biology)

Watanabe’s lab was caught manipulating blots and fluorescence images.
Investigations revealed:

  • systemic training in visual data manipulation

  • multiple students involved

  • institutional reluctance to punish

Fraud had become a lab culture, not individual misconduct.


6. Why Fraud Thrives in the Current Scientific Ecosystem

Fraud spreads because the ecosystem selects for it.
Smaldino & McElreath highlight several systemic pressures:

6.1 Lack of replication removes constraints

Replication is the natural predator of fraud.
But when replication is rare, fraud proliferates.

6.2 High competition intensifies selective pressure

When survival depends on out-producing rivals:

  • statistical flexibility becomes adaptive

  • selective reporting becomes strategic

  • fabrication becomes tempting

This is akin to bacteria evolving antibiotic resistance under selective pressure.

6.3 Journals reward “too good to be true” results

Fraudsters know what reviewers want:

  • large effect sizes

  • perfect curves

  • clean p-values

  • dramatic conclusions

This mirrors sexual selection in nature: whatever trait is preferred, individuals evolve to exaggerate it.

6.4 Institutions protect high-performers

Universities benefit from:

  • prestige

  • funding

  • high-impact publications

  • media attention

They often resist investigating misconduct because the fraudster benefits the institution.

This is group-level incentive misalignment.


7. The Fragility of Policing Mechanisms

Unlike biological evolution, which often has built-in constraints, scientific culture has weak policing:

7.1 Journal peer review rarely checks raw data

Reviewers lack time, expertise, or access.

7.2 Retractions take years

Retraction Watch tracks retractions that took decades.

7.3 Whistleblowers face retaliation

Whistleblowing can destroy careers.

7.4 Detection methods lag behind fabrication techniques

For instance:

  • easy digital manipulation of images

  • generative AI for synthetic data

  • deep statistical obfuscation

  • complex bioinformatic pipelines

The result:
Fraud evolves faster than policing.


8. Can Fraud Ever Be Eliminated? Evolutionary Theory Says No—But It Can Be Minimized

In natural ecosystems, cheating strategies never disappear entirely.
But they can be controlled by making cheating:

  • less rewarding

  • more risky

  • more detectable

The same must be done in science.

8.1 Increase the cost of cheating

  • mandatory raw data and code availability

  • unblinded access to analysis pipelines

  • random replication audits

  • statistical anomaly detectors

  • funding agency spot checks
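
As one concrete example of a “statistical anomaly detector,” the GRIM test (Brown & Heathers, 2016) checks whether a reported mean is even arithmetically possible: the mean of n integer-valued scores must equal some integer divided by n. A minimal sketch (the function name and rounding convention here are my own):

```python
# Minimal GRIM-style check: can a reported mean arise from
# n integer-valued observations? (After Brown & Heathers, 2016.)
def grim_consistent(mean: float, n: int, decimals: int = 2) -> bool:
    total = round(mean * n)                  # nearest achievable integer sum
    achievable = round(total / n, decimals)  # the mean that sum would produce
    return achievable == round(mean, decimals)

print(grim_consistent(3.48, 25))  # True: 3.48 * 25 = 87 exactly
print(grim_consistent(3.47, 25))  # False: no integer sum / 25 rounds to 3.47
```

Checks this cheap scale: they can be run automatically over every mean and sample size in a submitted manuscript, which is exactly how several recent fabrication cases surfaced.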

8.2 Reduce the rewards

  • prioritize quality over quantity

  • reward transparency

  • value incremental progress

  • shift journal prestige toward replicable work

8.3 Enhance policing

  • fast-track retractions

  • strong whistleblower protections

  • specialized forensic-statistics units

  • replication consortia to investigate suspicious papers

8.4 Change cultural expectations

The real transformation begins when lab culture shifts from “show me significance” to “show me validity.”


Conclusion: Fraud Is Not a Disease of Individuals—It Is an Evolutionary Outcome of the System

This is the most sobering conclusion of Smaldino & McElreath’s work:

Fraud is inevitable in a system that rewards fraudulent strategies.

Unless incentives change, the evolution of scientific misconduct will continue—and accelerate.

Fraud is not merely a failure of morality.
It is a failure of ecology.
A failure of institutional design.
A failure of evolutionary pressures.

We can restore integrity only by reshaping the selective landscape so that:

  • honesty becomes adaptive

  • replication becomes central

  • transparency becomes mandatory

  • quality becomes rewarded

Only then will the evolution of fraud slow—and perhaps stabilize at manageable levels.

Thursday, February 5, 2026

Post 7: Replication as the Immune System of Science—and Why It’s Failing

 Based on Smaldino & McElreath (2016), “The Natural Selection of Bad Science”


Introduction: When a Body Stops Fighting Infections

In biology, the immune system acts as the organism’s defense mechanism. It detects pathogens, neutralizes them, and remembers patterns to prevent future infections. A healthy immune system keeps the organism stable despite continuous exposure to new threats.

Science has its own immune system: replication.
Replication checks whether published findings are real or illusions created by noise, bias, or methodological sloppiness. It is one of the core pillars of scientific progress.

Yet today, the immune system of science is malfunctioning.
Replication rates are low. Replication studies are systematically discouraged. Large-scale replication projects expose entire fields where more than half the findings do not replicate. And instead of strengthening the scientific body, the ecosystem appears to be sliding toward chronic immune deficiency and epidemics of unreliability.

In this post, we examine:

  • The biological analogy: What makes replication an immune system?

  • Why the “immune system” is suppressed by modern incentives.

  • What Smaldino & McElreath’s model reveals about the evolutionary decline of replication attempts.

  • Real-world replication crisis examples from psychology, cancer biology, neuroscience, and economics.

  • How academia ends up with “opportunistic infections.”

  • What an immune-restoration program for science might look like.


1. Replication as the Immune System: A Deep Analogy

The immune system’s key functions:

  1. Detect errors and invaders

  2. Neutralize harmful pathogens

  3. Maintain homeostasis

  4. Build long-term resilience

  5. Evolve and adapt as threats evolve

Replication in science performs precisely the same functions:

1.1 Detection

Independent researchers check:

  • Did the experiment produce the same effect size?

  • Was the result driven by noise?

  • Were the statistical methods sufficiently robust?

1.2 Neutralization

If a result fails replication:

  • journals may issue corrections

  • meta-analyses update effect sizes

  • failed ideas lose prominence

  • fraudulent or careless work gets exposed

1.3 Homeostasis

Replication maintains epistemic stability—the idea that science converges on truth over time.

1.4 Memory

Each replication teaches the field something:

  • which methods are reliable

  • which sample sizes are needed

  • what effect sizes are realistic

  • what pitfalls must be avoided

1.5 Evolution

Replication helps the field adapt by promoting better practices.

So why does the immune system seem to be failing?


2. The Immune Suppression: Pressures Against Replication

Smaldino & McElreath’s model shows that incentives suppress replication, making it rare, weak, and strategically unprofitable.

2.1 Replication is slow

A replication attempt may take:

  • months of careful method reconstruction

  • large sample sizes

  • precise controls

  • detailed statistical transparency

Meanwhile, original (and often weaker) studies can be completed faster.

2.2 Replication is low-status

In modern academia:

  • journals seldom publish replications

  • hiring committees value novelty

  • grants rarely fund confirmatory work

  • replication is seen as derivative or uncreative

In other words, replication is treated as menial labor, not as a scientific contribution.

2.3 Replication is risky

If you attempt to replicate another lab’s work:

  • you may antagonize senior scientists

  • you may be labeled confrontational

  • you may face pushback or retaliation

  • you may damage collaborative relationships

Few early-career researchers want to risk such conflicts.

2.4 Replication is costly

Unlike exploratory studies, replication requires:

  • larger sample sizes

  • stricter controls

  • more preregistration

  • more time investment

  • specialized skills in forensic-level methodology

Thus, replication is expensive but undervalued.


3. What the Smaldino & McElreath Model Shows

The model reveals a deadly evolutionary dynamic:

3.1 Labs with low rigor but high output are rewarded

They produce many “positive” findings—even if false.

3.2 Scientists with high rigor produce fewer papers

They lose in grant competitions and job markets.

3.3 Replication becomes too costly

As labs adopt weaker methods, replication attempts become:

  • harder

  • rarer

  • less successful

3.4 The success rate of replication falls over time

A direct prediction of the model:

As bad methods spread, replication rates collapse.

3.5 The ecosystem adapts to noise

The population of labs evolves toward:

  • smaller sample sizes

  • higher flexibility in analysis

  • greater p-hacking

  • lower reproducibility

In evolutionary terms:
The “species” of high-quality research goes extinct.
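
The dynamic sketched in this section can be reproduced in a few lines. What follows is an illustrative toy, not the paper’s actual model: every parameter (population size, output function, mutation size) is invented. Labs carry an “effort” trait, low-effort labs publish more, and each new generation of labs copies mentors in proportion to output.

```python
# Toy replicator-style simulation in the spirit of Smaldino & McElreath's
# argument; all parameters are hypothetical.
import random

random.seed(1)
POP, GENS = 100, 50
labs = [random.uniform(0.2, 1.0) for _ in range(POP)]  # methodological effort

for _ in range(GENS):
    # papers per generation: lower effort means more, faster studies
    papers = [10 * (1.2 - e) for e in labs]
    # next generation: trainees copy mentors in proportion to output,
    # with small mutations in the inherited effort level
    labs = [
        min(1.0, max(0.05, random.choices(labs, weights=papers)[0]
                     + random.gauss(0, 0.02)))
        for _ in range(POP)
    ]

mean_effort = sum(labs) / POP
print(f"mean effort after {GENS} generations: {mean_effort:.2f}")
```

Under these assumptions, mean effort collapses toward the floor within a few dozen generations. That is the qualitative prediction of the model: selection on output alone drives rigor down, with no villain required.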


4. Real-World Evidence: The Replication Crisis Across Disciplines

Smaldino & McElreath wrote before many major replication reports came out. Yet their predictions match reality.

4.1 Psychology

The Open Science Collaboration (2015) attempted to replicate 100 published psychology findings.

Result?

  • Only 39% replicated.

  • Effect sizes were on average half the original.

  • Some foundational theories were undermined.

This was essentially the equivalent of screening an entire population and discovering widespread immune deficiency.


4.2 Cancer Biology

The Reproducibility Project: Cancer Biology attempted to replicate 50 high-profile papers.

Outcome so far:

  • Only 11% fully replicated.

  • Many results showed drastically reduced effects.

  • Some relied on materials or methods that labs refused to share.

Given that cancer biology drives billions of dollars in funding, this is like discovering that most of the medical literature for a disease is unreliable.


4.3 Neuroscience

Button et al. (2013) demonstrated that median sample sizes in neuroscience are too small, creating “power failure” so severe that:

  • effect sizes are inflated

  • false-positive rates skyrocket

  • replication is nearly impossible

This is akin to a diagnostic test with 20% sensitivity being used as the gold standard.
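
The link between low power and unreliability can be made explicit with Ioannidis-style arithmetic: among “significant” results, the positive predictive value is PPV = power · prior / (power · prior + α · (1 − prior)). A minimal sketch, with a hypothetical prior:

```python
# Positive predictive value of a "significant" result, after Ioannidis (2005):
# PPV = (power * prior) / (power * prior + alpha * (1 - prior)).
def ppv(power: float, prior: float, alpha: float = 0.05) -> float:
    true_pos = power * prior          # rate of true effects detected
    false_pos = alpha * (1 - prior)   # rate of null effects "detected"
    return true_pos / (true_pos + false_pos)

# hypothetical prior: 1 in 10 tested hypotheses is actually true
print(f"well-powered (80%): PPV = {ppv(0.80, 0.10):.2f}")  # 0.64
print(f"underpowered (20%): PPV = {ppv(0.20, 0.10):.2f}")  # 0.31
```

Under this prior, at 20% power fewer than a third of “significant” findings are true, before any p-hacking is even considered.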


4.4 Economics

The “Many Analysts” projects showed:

  • same dataset + same question

  • 120 analysis teams

  • wildly different answers

How can we replicate a result if analysts cannot even agree on the method?


4.5 Genomics & Biomedical Sciences

Ioannidis (2005) famously explained mathematically why most published findings are false.

Replication failures in genetics revealed:

  • missing heritability

  • misinterpreted associations

  • population structure artifacts

  • pervasive p-hacking in GWAS

  • difficulty reproducing basic gene-expression studies

Across disciplines, the story is the same:
Replication is sick, and the organism is weakening.


5. Opportunistic Infections: What Happens When Replication Fails

In medicine, when the immune system collapses, opportunistic pathogens thrive:

  • fungal infections

  • latent viruses

  • cancers

  • antibiotic-resistant bacteria

Academia shows similar symptoms.

5.1 Fraud spreads more easily

Fraudulent papers go unnoticed because nobody replicates them.

5.2 Noise becomes indistinguishable from signal

Low-powered studies create a fog of contradictory results.

5.3 Predatory journals explode

They take advantage of weak replication policing.

5.4 Entire fields diverge

Separate subfields evolve incompatible methodologies.

5.5 Incentive-driven false positives become dominant

The ecosystem becomes a breeding ground for low-quality but high-output “pathogens.”


6. Why the Immune System Fails: A Systemic Evolutionary Explanation

Smaldino & McElreath argue that replication declines because the system evolves toward lower rigor.

6.1 Replication costs increase

As methods weaken, replication becomes harder.

6.2 Novelty bias becomes stronger

Early-career researchers must publish flashy papers to survive.

6.3 Institutions mismeasure success

Counting papers instead of verifying impact.

6.4 Labs evolve toward quantity-maximizing strategies

This crowds out replication-focused labs.

6.5 Replication becomes a public good

Like clean air, everyone benefits from replication—but individuals do not benefit from contributing to it.

This is a classic game-theoretic tragedy of the commons.
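
The public-goods structure can be shown with a toy n-player payoff (all numbers invented): each replication costs its author personally, while its benefit is spread across the whole field.

```python
# Toy public-goods payoff for replication work; all numbers hypothetical.
N = 100          # researchers in the field
cost = 1.0       # personal career cost of performing one replication
benefit = 10.0   # total value of one replication, shared by everyone

my_share = benefit / N                      # what any one researcher gains
payoff_if_i_replicate = my_share - cost     # -0.90: individually a loss
payoff_if_i_free_ride = my_share            # +0.10 when someone else does it
payoff_if_all_replicate = benefit - cost    # +9.00 each if everyone contributes

print(payoff_if_i_replicate, payoff_if_i_free_ride, payoff_if_all_replicate)
```

Free-riding strictly dominates, yet universal contribution beats universal defection by a wide margin: the textbook signature of a public good that prestige markets will underproduce.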


7. How to Restore the Immune System: A Treatment Plan

Fixing replication is like rebuilding immune function.

7.1 Mandate data and code availability

A vaccine against method ambiguity.

7.2 Institute replication grants

Fund replication explicitly.

7.3 Create publication incentives for confirmatory work

Journal prestige should attach to quality, not novelty.

7.4 Registered reports as immune boosters

If a study is accepted before data collection, the incentive to p-hack evaporates.

7.5 Large-scale collaborative replications

Economies of scale reduce the cost barrier.

7.6 Penalize non-replicable labs

Introduce metrics for long-term reproducibility.

7.7 Teach statistical literacy rigorously

More immune cells = more protection.


Conclusion: Science Needs Its Immune System Back

Replication is not optional.
It is not secondary.
It is not an afterthought.

It is the immune system of science, required to detect, eliminate, and prevent the spread of false findings. But as the incentives of academia shift toward quantity, speed, and novelty, replication is increasingly suppressed—just as an immune system collapses under chronic stress or malnourishment.

Smaldino & McElreath’s evolutionary model demonstrates that this suppression is not an accident. It is the inevitable outcome of the selective pressures that dominate modern academia.

If we want science to be healthy, we must restore and strengthen the immune system. That means rebuilding replication as a mainstream, celebrated, well-funded, and high-prestige component of scientific practice.

Wednesday, February 4, 2026

Post 6 -- The Ecology of Modern Science: Competition, Cooperation, and Collapse

 Based on Smaldino & McElreath (2016), “The Natural Selection of Bad Science”


Introduction: Science as an Ecosystem—But a Degraded One

If you walk into a rainforest, you witness dynamic interactions: predator and prey, mutualism, competition, niche partitioning, evolutionary trade-offs. Ecology teaches us that systems adapt—but not always toward greater “goodness”. Sometimes they adapt toward survival shortcuts, parasitism, invasive dominance, or collapse.

Modern science behaves very much like such an ecosystem. This is the argument that sits at the heart of Smaldino & McElreath’s 2016 paper: research institutions do not select for truth-finding efficiency; they select for strategies that maximize professional survival, often at the cost of scientific integrity.

In this post, we step away from equations and instead interpret the paper through a broader ecological lens. We ask:

  • What “species” exist in the academic ecosystem?

  • What competition pressures distort adaptation?

  • Why does “cheating” (or corner-cutting) evolve so naturally?

  • How do these pressures produce runaway selection for low-quality research?

Let’s explore.


1. The Scientific Ecosystem: Who Lives Here?

Ecologists categorize organisms by roles—producers, consumers, decomposers. Science has its own functional guilds:

1.1 Explorers (slow, careful, high-quality)

These align most closely with the ideal of science:

  • thoughtful hypothesis construction

  • rigorous statistical reasoning

  • careful replication

  • incremental but robust discoveries

In the analogy, they are slow-growing trees—deep roots, solid wood, long lifespan.

1.2 Exploiters (fast, flashy, low-quality)

These labs or researchers produce:

  • many papers per year

  • flashy statistical significance

  • weakly designed experiments

  • exaggerated statements

  • irreproducible claims

Ecologically, they resemble invasive species—quick growth, low resource investment, rapid colonization.

1.3 Predators (journals, rankings, funders)

Predators shape prey behavior. Journals and funding agencies exert:

  • aggressive selection for novelty

  • “predatory” attention toward surprising results

  • pressure to publish frequently

  • biases toward positive results

They don’t “eat” scientists literally; they consume scientists’ time and energy, and shape their incentives.

1.4 Scavengers (meta-analysts, critics, reformers)

They pick up the remains:

  • replication failures

  • systematic reviews of conflicted data

  • post-mortems of entire research fields

They recycle waste—an essential role, but one overwhelmed by the scale of what must be cleaned.

You can begin to see already why problems emerge: fast-growing invasive species outcompete slow-growing trees when the environment rewards speed over durability.


2. Environmental Pressures: The Selective Forces Distorting Science

In ecology, environmental pressures shape evolutionary direction. In academia, the environmental pressures include:

2.1 Publish-or-perish metrics

This is the strongest selection force.

  • Tenure depends on publication count.

  • Grants depend on publication count.

  • Promotions depend on publication count.

Slow, careful, thoughtful (but fewer) papers lose to fast, frequent, flashy output.

2.2 Journal prestige as habitat quality

Top journals function like patches of high-quality habitat with limited space. The individuals that reach them are often those who:

  • exaggerate novelty

  • optimize statistically for significance

  • oversell or overspeculate

Slow, cautious, nuanced research often cannot thrive in these patches.

2.3 Grant funding as a limiting resource

Like food scarcity in an ecosystem, scarce funding leads to:

  • fierce competition

  • favoritism for risky, sexy, newsworthy ideas

  • penalties for “boring” but necessary replication

2.4 Career bottlenecks: Postdoc → Faculty transition

This bottleneck creates evolutionary sweeps:

  • only the most prolific survive

  • survival probabilities depend on output speed

  • quality becomes less relevant

  • risk-taking (in the statistical sense) is rewarded

Together, these pressures create a landscape where invasive strategies thrive.


3. Evolutionarily Stable Strategies: Why Bad Practices Survive

In ecology, an evolutionarily stable strategy (ESS) arises when a strategy, once common, cannot be outcompeted by alternatives.

In modern academia, the ESS is distressingly simple:

Produce as many statistically significant, novel results as possible using minimal time per project.

This ESS is not in line with truth discovery. But once adopted widely, it is difficult to reverse because:

3.1 Slow science loses competitions

Careful labs never reach the publication numbers of fast labs. So they fail in grant competitions and hiring rounds.

3.2 Reputation becomes decoupled from truth

A lab that publishes 15 papers a year appears more “successful” than one that produces two carefully validated papers.

3.3 The ecosystem becomes “locked in”

When every institution measures success using the same metrics, every participant must adapt or perish. Even well-meaning, careful scientists are forced to play the game or risk extinction.
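
The lock-in can be stated as a textbook ESS check. A minimal sketch with invented payoffs: two candidates compete for one position, and the one with more publications wins.

```python
# Toy ESS check with hypothetical payoffs: publication count decides contests.
pubs = {"fast": 10, "careful": 2}

def payoff(mine: str, rival: str) -> float:
    """Probability of winning a one-slot contest against the rival."""
    if pubs[mine] > pubs[rival]:
        return 1.0
    if pubs[mine] < pubs[rival]:
        return 0.0
    return 0.5  # tie: coin flip

# ESS condition: the resident strategy outscores any rare mutant against itself.
assert payoff("fast", "fast") > payoff("careful", "fast")
assert payoff("fast", "careful") > payoff("careful", "careful")
print("'fast' is evolutionarily stable under publication-count selection")
```

Once “fast” is common, a careful mutant earns strictly less in every contest; in a population where hiring compares raw counts, rigor cannot invade, which is exactly the lock-in described above.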


4. Ecological Collapse: What Happens When Bad Science Takes Over?

When an ecosystem is dominated by opportunistic invaders, you get collapse:

  • soil nutrient loss

  • biodiversity crashes

  • long-term resilience disappears

In science, the analogs are:

4.1 Replicability crisis

Field after field demonstrates:

  • low reproducibility

  • inflated effect sizes

  • contradictory results

  • entire literatures built on fragile foundations

4.2 Epistemic pollution

Low-quality publications accumulate like pollution:

  • meta-analyses become impossible

  • true effects are masked

  • pseudoscience gains legitimacy

  • real progress becomes slower

4.3 Career disillusionment and attrition

Talented scientists burn out when forced to compete on quantity rather than quality.

4.4 Loss of public trust

When the public sees contradictory findings, fraud scandals, and frequent retractions, trust erodes.

This is the scientific equivalent of ecological desertification—once the soil is lost, recovery is extremely hard.


5. Ecological Anecdotes That Mirror Academic Dysfunction

Anecdote 1: The cane toad (Australia)

Introduced to control beetles, the cane toad multiplied explosively and destabilized ecosystems. It adapted well to the incentives but generated harmful outcomes.

Academic parallel:
Introducing the impact factor was like introducing cane toads: it solved one problem but created many more.


Anecdote 2: The chestnut blight fungus (North America)

A fast-growing pathogen wiped out slow-growing, foundational species.

Academic parallel:
Fast-publication labs crowd out foundational, rigorous labs.


Anecdote 3: The tragedy of the commons

Each individual herder benefits from adding more cattle, but collectively they destroy the pasture.

Academic parallel:
Each scientist benefits individually from publishing more—even low-quality papers.
Collectively, academia becomes a wasteland of irreproducible findings.


6. The Paper’s Core Claim in Ecological Terms

To recast Smaldino & McElreath in ecological language:

The incentives of modern academia create a habitat where invasive, fast-replicating research strategies thrive, driving out slow, careful, high-quality science through natural selection.

This is not moral failure, individual laziness, or corruption.
It is ecological inevitability under the current environment.


7. Toward a Restoration Ecology of Science

If we think like restoration ecologists, what interventions help restore ecosystems?

7.1 Change the selective environment

  • reward replication

  • reward transparency

  • reward null results

  • reduce dependence on publication count

7.2 Diversify habitats

  • establish journals that value careful, long-term research

  • create grant categories for incremental or confirmatory work

7.3 Reintroduce apex predators

Predators regulate ecosystems. In science, the predators could be:

  • replicability audits

  • statistical screening tools

  • meta-analytic policing

  • data availability requirements

These would eat away at low-quality work.

7.4 Create refugia for slow science

Institutions like the IAS (Princeton) or EMBL partially serve this role by giving scientists time without pressure to produce.

7.5 Facilitate succession

Allow the ecosystem to shift toward more stable, long-lived scientific strategies.


Conclusion: Science Needs Ecological Thinking

The ecosystem analogy is powerful because it reframes the conversation away from blaming individuals and toward understanding systemic evolution.

In ecology, systems adapt to whatever pressures exist. If the pressures reward destructive behaviors, destructive organisms thrive.
The same is true in academia.

Smaldino & McElreath’s insight is that bad science is not an accident—it is the product of natural selection in a distorted environment.

To fix science, we must change the environment.

Tuesday, February 3, 2026

Post 5 — How Scientific Fields Collapse: Lessons from Psychology, Genomics, Economics, and Cancer Research

One of the most uncomfortable insights in Smaldino & McElreath’s The Natural Selection of Bad Science is that scientific collapse is not an anomaly. It is a recurring, predictable, evolutionary outcome — a form of cultural extinction event triggered by misaligned incentives. Fields don’t collapse because bad people ruin them. Fields collapse because adaptation to the wrong incentives gradually hollows them out from the inside.

This post examines why entire scientific disciplines sometimes enter periods of crisis, retrenchment, or mass retraction — and why these collapses follow predictable patterns. We will walk through four influential examples:

  1. Social Psychology and the Priming Crisis

  2. Early Genomics and the Biomarker Bubble

  3. Macroeconomics and the Austerity Shock

  4. Cancer Biomarkers and the Reproducibility Meltdown

These case studies reveal the same evolutionary dynamics in action:

  • Incentives reward discoverability, not verifiability.

  • Labs evolve toward high-output, low-effort strategies.

  • Hype cycles amplify low-quality discoveries.

  • Replication is too slow and too weak.

  • The field enters an ecological collapse where noise drowns signal.

Understanding these collapses isn’t just historical curiosity — it’s a blueprint for diagnosing the health of scientific ecosystems today.


1. What Does It Mean for a Scientific Field to “Collapse”?

A scientific collapse is not a sudden event. It is a long, slow attrition of reliability.

You know a field is collapsing when:

  • Replication rates fall to levels indistinguishable from chance.

  • Key findings become unstable or contradictory.

  • Statistical tools are misused and normalized.

  • Methodological shortcuts become standard.

  • Top journals reward surprising results over robustness.

  • Industry partners lose trust in the field’s output.

  • Foundational theories must be rewritten or abandoned.

Collapses often culminate in:

  • mass retractions

  • major methodological reforms

  • devastating replication studies

  • an exodus of credibility

  • a shift in intellectual prestige to competing disciplines

The model by Smaldino & McElreath predicts exactly this process:
low-effort strategies multiply faster than corrective mechanisms can contain them.

Eventually, reliability becomes unsalvageable and the field must rebuild from scratch.
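The paper's central prediction can be sketched with toy replicator dynamics. Every parameter below (publication rates, false-positive rates, the payoff function) is an illustrative assumption, not a value from Smaldino & McElreath's actual model; the point is only that when payoff tracks output volume rather than correctness, the low-effort strategy takes over:

```python
# Toy replicator-dynamics sketch of the Smaldino-McElreath dynamic.
# All parameters are illustrative assumptions, not the paper's values.

def publication_rate(effort):
    """Toy payoff: positive results published per year.
    Payoff depends only on output volume, not on whether results are true."""
    base_tests = 10.0                        # hypotheses tested per year
    power = 0.8                              # detection rate for real effects at full effort
    false_pos = 0.05 + 0.45 * (1 - effort)   # flexible analysis inflates alpha at low effort
    true_frac = 0.1                          # fraction of tested hypotheses that are true
    return base_tests * (true_frac * power * effort + (1 - true_frac) * false_pos)

def replicator_step(p_low, w_low, w_high):
    """One generation: strategies reproduce in proportion to payoff."""
    mean_w = p_low * w_low + (1 - p_low) * w_high
    return p_low * w_low / mean_w

w_low = publication_rate(effort=0.2)    # sloppy lab: ~3.85 positives/year
w_high = publication_rate(effort=1.0)   # rigorous lab: ~1.25 positives/year

p = 0.10  # low-effort labs start as a small minority
for generation in range(50):
    p = replicator_step(p, w_low, w_high)

print(f"low-effort share after 50 generations: {p:.3f}")
```

Because the sloppy lab generates roughly three times as many positive results per year under these assumptions, its share of the population grows every generation, regardless of how often its findings are true.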


2. Case Study 1: Social Psychology and the Priming Collapse

Few scientific collapses are as famous — or as thoroughly documented — as the downfall of social priming research in the early 2010s.

The Incentives

  • Publish cute, surprising results.

  • Use small samples (cheap, fast).

  • Don’t preregister — flexibility helps results appear significant.

  • Maximize media attention.

This created the ideal evolutionary environment for:

  • low-effort experiments

  • flexible analysis (“the garden of forking paths”)

  • inflated false positives

  • publication bias

Psychologists weren’t malicious — they were adapting to their environment.
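The "garden of forking paths" effect is easy to demonstrate. The simulation below uses a hypothetical study design (five independent outcome measures, a simplified z-test with known variance) to show how reporting only the best-looking comparison inflates the false-positive rate from the nominal 5% toward 1 − 0.95⁵ ≈ 23%, even when every null hypothesis is true:

```python
import random
from statistics import NormalDist

random.seed(1)
norm = NormalDist()

def z_test_p(sample_a, sample_b):
    # Two-sided p-value for a difference in means, assuming known unit
    # variance (a simplification; a real analysis would use a t-test).
    n = len(sample_a)
    diff = sum(sample_a) / n - sum(sample_b) / n
    z = diff / (2 / n) ** 0.5
    return 2 * (1 - norm.cdf(abs(z)))

def one_null_study(n=20, k_outcomes=5):
    # A study with NO real effect that measures k outcomes and, as in
    # flexible analysis, reports only the smallest p-value.
    ps = []
    for _ in range(k_outcomes):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        ps.append(z_test_p(a, b))
    return min(ps)

trials = 2000
fp_rate = sum(one_null_study() < 0.05 for _ in range(trials)) / trials
print(f"false-positive rate with 5 outcomes and best-p reporting: {fp_rate:.2f}")
```

With more outcomes, more covariates, and optional stopping, the real-world inflation can be far larger than this five-outcome sketch suggests.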

The Peak of the Bubble

Between 1995 and 2010, dozens of sensational papers emerged:

  • Priming people with the elderly stereotype made them walk slower.

  • Thinking about money made people less social.

  • Subtle cues could alter voting patterns.

Journals loved it. TED Talks loved it.
It was a golden age — for a while.

The Collapse

In 2011–2016:

  • Large-scale replication attempts failed spectacularly.

  • Many priming effects could not be reproduced even with much larger samples.

  • The field entered a crisis of credibility.

Daniel Kahneman, the Nobel laureate, warned in a 2012 open letter to priming researchers that he saw

“a train wreck looming.”

Kahneman urged the field to clean up its act through systematic replication, but — as the model predicts — replication was too slow and too weak. The incentives remained unchanged, allowing weak methods to evolve unchecked.


3. Case Study 2: Early Human Genomics — The Biomarker Bubble

Before genome-wide association studies (GWAS) imposed statistical rigor, the early 2000s saw a massive surge of “candidate gene” studies.

The Incentives

  • Link any gene to any disease with small samples.

  • Publish novel associations quickly.

  • Use lenient statistical thresholds.

  • Avoid replication because it’s expensive.

Small labs could produce dozens of “gene X predicts trait Y” papers each year.

The Outcome

A 2009 meta-analysis concluded:

Over 90% of candidate gene associations were false positives.

Why?

Because the field rewarded:

  • speed over sample size

  • novelty over rigor

  • positive results over null results

  • exploratory p-hacking over confirmatory research
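A figure like “over 90% false” falls straight out of base-rate arithmetic, in the spirit of Ioannidis’s “Why Most Published Research Findings Are False”: the positive predictive value is PPV = power·prior / (power·prior + α·(1 − prior)). The prior, power, and threshold below are assumed illustrative values for the candidate-gene era, not measured quantities:

```python
def ppv(prior, power, alpha):
    """Positive predictive value: P(association is real | test is significant)."""
    true_pos = prior * power        # real effects correctly detected
    false_pos = (1 - prior) * alpha # null effects crossing the threshold anyway
    return true_pos / (true_pos + false_pos)

# Illustrative assumptions: ~1% of tested gene-trait pairs are real,
# studies are underpowered, and the threshold is a lenient p < 0.05.
v = ppv(prior=0.01, power=0.5, alpha=0.05)
print(f"PPV = {v:.2%}  ->  {1 - v:.0%} of significant associations are false")
```

Under these assumptions only about 9% of significant associations are real, so a field that publishes every positive result is publishing mostly noise before any p-hacking is even considered.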

The evolutionary analogy is clear:

  • Labs that produced splashy claims reproduced academically.

  • Labs that insisted on high effort died out.

The Collapse

By the mid-2010s, the field was forced to abandon most of its foundational claims.

GWAS later showed that:

  • Most common complex traits involve hundreds or thousands of variants, each with a tiny effect.

  • Individual candidate genes rarely explain meaningful variance.

  • Many early associations were artifacts of population structure.

This collapse led to massive reallocations of funding and prestige.

But the damage was done: a decade of biomedical research had been built on unreliable foundations.


4. Case Study 3: Economics and the Reinhart & Rogoff Shock

Macroeconomics rarely faces replication pressure — yet it experienced one of the most famous data-driven collapses of the 21st century.

The Claim

A massively influential paper, Reinhart & Rogoff’s “Growth in a Time of Debt” (2010), claimed:

Countries with a debt-to-GDP ratio above 90% experience sharply reduced growth.

This result justified widespread global austerity policies.

The Incentives

  • Prestigious economists influence policy.

  • Top journals favor macro-wide conclusions.

  • Replication datasets are often restricted.

This created an ecosystem where high-impact results multiplied without robust verification.

The Collapse

In 2013, graduate student Thomas Herndon, working with economists Michael Ash and Robert Pollin at the University of Massachusetts Amherst, replicated the analysis and found:

  • A spreadsheet coding error that excluded five countries from a key average

  • Selective exclusion of available country-year data

  • An unconventional weighting scheme that gave single bad years outsized influence

  • Summary statistics that could not be reproduced from the published data

When the errors were fixed, the sharp growth cliff at 90% disappeared: average growth above the threshold was modestly lower than below it, not dramatically negative.

The Fallout

Despite the correction:

  • Policy consequences had already played out.

  • Countries had adopted austerity measures.

  • Billions in economic decisions were made based on flawed evidence.

According to Smaldino & McElreath’s logic:

  • Replication came after the selective advantage (policy impact) was realized.

  • Thus, replication had no evolutionary power.

R&R’s original paper remained highly cited.

The field quickly moved on without structural reform.


5. Case Study 4: Cancer Biomarker Research — A Structural Meltdown

The cancer literature provides one of the clearest real-world confirmations of the model.

A widely cited 2012 commentary in Nature by C. Glenn Begley and Lee Ellis reported that scientists at Amgen had attempted to reproduce 53 landmark preclinical cancer studies and succeeded with only 6:

47 of the 53, nearly 89%, could not be reproduced.

Eighty-nine percent.

This wasn’t a series of bad apples — it was systemic.

The Incentives

  • Pharma companies reward promising preliminary data.

  • Journals love breakthroughs.

  • Novel biomarkers attract enormous grants.

  • Clinical translation is slow, so failed replication is invisible.

Labs evolved toward maximum publication output:

  • flexible analyses

  • small sample sizes

  • no preregistration

  • selective reporting

  • low methodological effort

The Collapse

When industry groups attempted replication:

  • almost none of the biomarkers validated

  • many were statistical mirages

  • entire avenues of clinical trial design were affected

And yet, even after the collapse:

  • many labs continued to publish low-quality biomarker studies

  • replication remained rare

  • journals continued favoring novelty

Again, replication arrived too late and with too little force.


6. Why Collapses Follow Predictable Patterns

Across these fields, the same evolutionary mechanisms emerge:

(1) Incentives reward rapid, positive, surprising results.

This lowers the “effort threshold” for survival.

(2) Labs evolve low-effort, high-output strategies.

These labs have higher reproductive fitness.

(3) Noise accumulates faster than replication can eliminate it.

False positives proliferate exponentially.

(4) Replication attempts appear late, often after a field matures.

The ecosystem is already saturated with unreliable findings.

(5) Replication has weak punitive power.

Failed replications do not harm lab survival.

(6) The field hits a tipping point where signal-to-noise ratio collapses.

Once noise dominates, theory collapses.

(7) A painful reform period begins.

Reforms often include:

  • preregistration

  • large-scale consortia

  • stricter statistical norms

  • data sharing

  • high-powered studies

  • adversarial collaborations

This is the “ecological reset” phase — analogous to a burned forest slowly regrowing.
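Mechanisms (1) through (5) can be compressed into a back-of-the-envelope accumulation model. All rates below are invented for illustration; the qualitative outcome, noise swamping signal, is what this section describes:

```python
# Minimal sketch (illustrative rates) of mechanisms (1)-(5): false claims
# enter the literature faster than replication can purge them.
true_pubs_per_year = 20
false_pubs_per_year = 80        # low-effort strategies dominate output
replication_purge_rate = 0.02   # fraction of standing false claims corrected per year

true_claims, false_claims = 0.0, 0.0
for year in range(20):
    true_claims += true_pubs_per_year
    false_claims += false_pubs_per_year
    false_claims *= (1 - replication_purge_rate)  # weak, slow correction

signal = true_claims / (true_claims + false_claims)
print(f"after 20 years, fraction of reliable claims: {signal:.2f}")
```

Even with correction running every year, the literature ends up mostly unreliable, because the inflow of false claims dwarfs the purge rate; this is the tipping point of mechanism (6).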


7. Why Some Fields Avoid Collapse

Not all fields collapse. Some stay robust.

Fields that avoid collapse tend to have:

1. Strong replication culture

(e.g., physics, some areas of chemistry)

2. Large, expensive experiments where p-hacking is impractical

(e.g., particle physics, astrophysics)

3. Community-wide data sharing

(genetics after 2010)

4. Strict statistical conventions

(e.g., neuroimaging after the “dead salmon” paper)

5. No reward for novelty without rigor

(e.g., clinical trial pipelines)

Fields with these traits experience slow, stable cumulative progress.

They are evolutionarily stable strategies under the model.


8. Warning Signs: Is a Field Approaching Collapse?

Based on the model and history, signs of impending collapse include:

  • rapid proliferation of positive results

  • high publication volume with small sample sizes

  • widespread p-values just below 0.05

  • low rates of data sharing

  • lack of preregistration

  • theoretical fragmentation (each lab has its own model)

  • replication studies consistently failing

  • large gaps between media claims and real effects

  • heavy reliance on “hidden moderators” to explain failures

If multiple signs appear simultaneously, the field may be entering a pre-collapse trajectory.
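The “p-values just below 0.05” warning sign has a crude operational check, sometimes called a caliper test: compare how many reported p-values land just under the threshold versus in an adjacent bin. The values below are a hypothetical field sample, not real data:

```python
# Crude "caliper test" sketch: in a healthy literature, reported p-values
# should not pile up just below 0.05. (Hypothetical values; a real check
# would use p-values extracted from published papers.)
reported_ps = [0.049, 0.047, 0.048, 0.031, 0.046, 0.044, 0.012, 0.043,
               0.049, 0.038, 0.045, 0.041]

just_below = sum(0.04 <= p < 0.05 for p in reported_ps)
comparison = sum(0.03 <= p < 0.04 for p in reported_ps)

print(f"p in [0.04, 0.05): {just_below}  vs  p in [0.03, 0.04): {comparison}")
if just_below > 2 * comparison:
    print("warning: suspicious pile-up just below the significance threshold")
```

A real diagnostic would use formal p-curve or caliper methods with far larger samples, but even this toy version captures the intuition: a healthy literature should not cluster its evidence right at the finish line.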


9. Lessons for Scientists Today

The collapses above are not moral failures.
They are adaptive responses to maladaptive incentives.

The key lesson:

If a field rewards volume over rigor, it will evolve toward low effort and eventually collapse.

Smaldino & McElreath capture this evolutionary truth with mathematical precision:
selection pressures shape scientific methods as surely as natural selection shapes beak sizes.

To preserve scientific integrity, we must shift fitness away from speed and novelty, and toward accuracy, transparency, and theoretical stability.


10. Coming Up Next

Post 6 — The Role of Hype Cycles: How Media, Journals, and Funding Agencies Accelerate the Spread of Bad Science

This post will explore:

  • Why hype amplifies low-quality discoveries

  • The social psychology of “breakthrough culture”

  • How journalists, TED Talks, and grant committees shape the evolution of scientific methods

  • Historical examples of hype-driven bubbles

  • How hype interacts with the evolutionary model in the paper