Tuesday, February 10, 2026

Post 11: Can We Fix Science? Restoring Selection for Quality

Based on Smaldino & McElreath (2016), “The Natural Selection of Bad Science”


Introduction:

Evolution Won’t Fix Itself—but We Can Change the Environment

By this point in the series, we’ve seen the core argument of Smaldino & McElreath’s paper: bad science evolves because the academic environment rewards it. Not a single individual needs to be malicious, lazy, or fraudulent. When incentives reward volume over accuracy, methods that maximize volume will dominate—even if they erode scientific reliability.

That brings us to the most pressing question of all:

Can we fix it?

Can we re-engineer the selective environment so that high-quality science once again becomes evolutionarily stable?

This post explores solutions—not surface-level fixes, not motivational posters, but structural reforms grounded in evolutionary logic. We’ll examine what changes might realign incentives, strengthen replication, and restore the long-term health of the scientific ecosystem.


1. Lessons from Evolutionary Biology:

If you want a different trait, change the selection pressure

In natural systems, you don’t get cooperation simply because everyone agrees cooperation is good.
You get it because:

  • cheaters are punished

  • cooperators gain rewards

  • environments favour long-term stability over short-term exploitation

Science is no different.

Smaldino & McElreath’s model tells us plainly:

If academia continues to reward fast, flashy, low-rigour work, the system will produce fast, flashy, low-rigour science.

The only way to restore quality is to induce a different evolutionary equilibrium.

The goal is not to change individual scientists (most are already well-intentioned).
The goal is to change the rules of the game.
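
To make this concrete, here is a toy simulation in the spirit of Smaldino & McElreath’s model. It is a sketch, not their actual implementation, and every parameter value is illustrative. Labs vary in rigour, lower rigour yields more publications, and successful labs get imitated. Add a replication penalty that makes false findings costly, and the same selection dynamic starts favouring rigour:

    # Toy selection model in the spirit of Smaldino & McElreath (2016).
    # Not their implementation; all parameter values are illustrative.
    import random

    def simulate(generations=200, n_labs=100, replication_penalty=0.0):
        # Each lab has a "rigour" trait in [0, 1].
        rigour = [random.random() for _ in range(n_labs)]
        for _ in range(generations):
            payoffs = []
            for r in rigour:
                pubs = 1.0 + 4.0 * (1.0 - r)   # lower rigour -> more papers
                false_share = 1.0 - r          # ...but a larger share are false
                payoffs.append(pubs - replication_penalty * pubs * false_share)
            # Selection: each lab imitates a higher-payoff lab, with mutation.
            new = []
            for i in range(n_labs):
                j = random.randrange(n_labs)
                winner = i if payoffs[i] >= payoffs[j] else j
                mutated = rigour[winner] + random.gauss(0, 0.02)
                new.append(min(1.0, max(0.0, mutated)))
            rigour = new
        return sum(rigour) / n_labs

    print("mean rigour, no replication:  ", round(simulate(), 2))
    print("mean rigour, with replication:", round(simulate(replication_penalty=2.0), 2))

Nothing about the labs’ intentions changes between the two runs; only the payoff structure does, and the population-level outcome flips.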


2. Why Awareness and Training Are Not Enough

Over many decades, scientific organizations have launched initiatives:

  • Responsible Conduct of Research (RCR) training

  • workshops on good statistical practice

  • reproducibility awareness campaigns

  • ethics pledges

  • open science seminars

These are valuable, but alone they fail for one simple reason:

Culture cannot override entrenched structural incentives.

A workshop encouraging careful methodology cannot compete with:

  • your next grant renewal

  • tenure review

  • student throughput

  • impact factor expectations

  • publication quotas

When survival depends on productivity, “good intentions” get selected out.

Fixing science requires modifying the fitness landscape.


3. Strategy 1

Reform the Metrics that Drive Selection

Metrics are the oxygen of academia.
We measure:

  • publication count

  • impact factors of the journals we publish in

  • h-index

  • citation counts

  • grant income

  • student count

These metrics form the environment shaping evolutionary adaptation.
If we want better adaptation, we must change the environment.
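
Of the metrics above, the h-index is the most algorithmic, and a minimal implementation (a sketch, not any database’s official code) shows how much information it throws away:

    def h_index(citations):
        """Largest h such that h papers each have at least h citations."""
        h = 0
        for i, c in enumerate(sorted(citations, reverse=True), start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    # Two very different careers collapse to the same number:
    print(h_index([100, 50, 9, 4]))     # 4: a few highly cited papers
    print(h_index([4, 4, 4, 4, 3, 2]))  # 4: many modestly cited papers

A single integer cannot distinguish these profiles, yet careers are routinely ranked by it.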


3.1 Stop Counting Papers

The simplest intervention:
decouple career advancement from publication quantity.

Instead of:

  • “How many papers did you publish?”

use:

  • “What problems did you solve?”

  • “What uncertainties did you eliminate?”

  • “What reusable resources did you create?”

  • “How robust are your findings?”

Imagine a tenure committee that reviews five representative works in detail rather than skimming sixty.

Countries like the UK have experimented with similar ideas (the Research Excellence Framework assesses a limited number of outputs per researcher), but implementation has been inconsistent.


3.2 Reward Replication Explicitly

A system that values only novel positive results is destined for decay.

Solutions:

  • Dedicated career pathways for replication specialists

  • Top journals publishing high-quality replications

  • Counting replications as equal in prestige to novel discoveries

  • Grant calls specifically for verification work

  • “Registered Replication Reports,” modeled on the Association for Psychological Science’s initiative

Replications are the immune system of science.
We cannot survive long with compromised immunity.


3.3 Emphasize Open Data, Open Methods, and Reproducibility

Quality survives when transparency enforces accountability.

Policies that help:

  • Mandatory availability of data and code

  • Linking datasets to publications via DOIs

  • Enforcement of preregistration for hypothesis-driven work

  • Requiring statistical scripts as supplementary material

  • Automated statistical checks before publication

The key idea:
make it harder for low-rigour research to hide.
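
That last idea is already practical. Here is a sketch of the logic behind automated checkers such as statcheck: recompute a p-value from the reported test statistic and flag mismatches. This illustrates the approach and is not that tool’s actual code; it assumes a two-sided t-test:

    from scipy import stats

    def check_t_test(t, df, reported_p, tol=0.01):
        # Recompute the two-sided p-value from t and degrees of freedom.
        recomputed = 2 * stats.t.sf(abs(t), df)
        return abs(recomputed - reported_p) <= tol, recomputed

    ok, p = check_t_test(t=2.31, df=28, reported_p=0.028)
    print(ok, round(p, 3))   # consistent

    ok, p = check_t_test(t=1.70, df=28, reported_p=0.03)
    print(ok, round(p, 3))   # flagged: the recomputed p is ~0.10

Run over an entire literature, checks like this make misreported statistics visible at scale.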


4. Strategy 2

Change Institutional Incentives from the Ground Up

Metrics are symptoms of a deeper structural issue:
universities themselves are rewarded for volume—of papers, grants, enrollment.

We can address this.


4.1 Redesign Tenure and Promotion Criteria

Faculty evaluation should weigh:

  • originality and depth

  • replicability and rigour

  • quality of mentoring

  • contributions to shared datasets

  • codes, tools, and infrastructure

  • interdisciplinary bridges

  • long-term, foundational projects

Prestigious institutions like EMBL and HHMI already de-emphasize publication counts in their evaluations, and both are widely credited with unusually high-quality output.


4.2 Stabilize Early-Career Positions

Precarity forces young scientists into risky behaviour (p-hacking, haste, hype).
Better structures:

  • multi-year, stable research fellowships

  • protected time for large, rigorous experiments

  • guaranteed minimum funding for newly tenured faculty

  • career bridges between postdoc and faculty that aren’t elimination tournaments

Imagine science as a marathon instead of a gladiatorial arena.


4.3 Fix the Grant System

Grants strongly shape evolution.
Reforms could include:

  • awarding funds partly by lottery after a quality screen (New Zealand’s Health Research Council has trialled this; see the sketch below)

  • funding research programs, not individual “projects”

  • creating long-term grants (7–10 years) for risky but foundational work

  • requiring replication or verification components in proposals

  • capping the number of grant proposals individuals can submit

A system that rewards ideas over products will produce better science.
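
The lottery idea in particular is easy to specify. In a minimal sketch (the scoring scale, threshold, and proposal names below are hypothetical), peer review acts only as a pass/fail quality screen, and funding among the fundable proposals is then random:

    import random

    def fund_by_lottery(proposals, threshold=7.0, n_awards=3, seed=None):
        # Screen first: only proposals at or above the quality bar qualify.
        rng = random.Random(seed)
        fundable = [name for name, score in proposals if score >= threshold]
        # Then draw winners at random from the qualifying pool.
        rng.shuffle(fundable)
        return fundable[:n_awards]

    proposals = [("A", 8.9), ("B", 7.2), ("C", 6.1), ("D", 9.4), ("E", 7.8)]
    print(fund_by_lottery(proposals, seed=42))
    # Prints three of A, B, D, E; C never qualifies, but among the fundable
    # there is no spurious fine-grained ranking of near-identical scores.

The design choice is the point: reviewers still exclude weak work, but nobody’s career hinges on noise in the third decimal place of a panel score.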


5. Strategy 3

Rebuild the Role of Journals and Publishers

Journals are selective environments with enormous influence.

Reforms include:


5.1 Eliminate Novelty Bias

Many journals explicitly prioritize “surprising” results.
This encourages labs to:

  • chase small effects

  • amplify borderline findings

  • massage statistics

  • selectively report

If journals instead asked:

“Is this claim true and demonstrated rigorously?”

we would see a shift in lab strategies within a decade.


5.2 Introduce Results-Blind Review

A powerful approach, already adopted by some outlets under the name “Registered Reports”:

  • Review only the research question, design, and methods.

  • Accept the paper before the results exist.

  • Publish regardless of outcome.

This eliminates publication bias at the root.


5.3 Promote Negative Results

Negative results are essential for knowledge but rarely published.

Solutions:

  • dedicated journals

  • special sections in major outlets

  • citation incentives for null findings (meta-analyses rely on them)

Publishing negatives reduces the false-positive enrichment that fuels bad science.
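
A toy calculation shows why. Suppose 1,000 studies estimate an effect whose true size is exactly zero, and only significant positive estimates are published; the numbers below are simulated, not from any real dataset:

    import random, statistics

    random.seed(1)
    # 1,000 studies of a true effect of zero, each with standard error 0.5.
    estimates = [random.gauss(0.0, 0.5) for _ in range(1000)]

    # Positive-result bias: only significant positive estimates get published
    # (z > 1.96 with SE 0.5 means an estimate above 0.98).
    published = [e for e in estimates if e > 0.98]

    print("mean of all studies:   ", round(statistics.mean(estimates), 2))  # ~0.0
    print("mean of published only:", round(statistics.mean(published), 2))  # ~1.2

A meta-analysis of the published record alone would report a substantial effect that does not exist.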


6. Strategy 4

Strengthen Replication as a Community Norm

Even if journals don’t reform quickly, communities can.


6.1 Collaborative Large-Scale Replications

Examples:

  • the Many Labs replication projects

  • the Center for Open Science’s Reproducibility Project

  • Registered Replication Reports

These build:

  • shared methodology

  • cross-validation

  • community pressure for quality

They also remove perverse incentives—no single lab “owns” the result or benefits disproportionately.


6.2 Automated Replication Pipelines

Fields like computational biology and machine learning can integrate:

  • continuous reproducibility checks

  • containerized workflows

  • pipeline testing (e.g., Nextflow, Snakemake)

  • version-controlled analyses

If code can be re-run automatically, fraud and sloppiness become far harder to hide.
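
As a minimal sketch of such a check (the file names and analysis script here are hypothetical), a pipeline can simply re-run the analysis and compare output hashes against the committed results:

    import hashlib, subprocess, sys

    def sha256(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Re-run the (hypothetical) analysis script, which writes results.csv.
    subprocess.run([sys.executable, "analysis.py"], check=True)

    if sha256("results.csv") == sha256("results_committed.csv"):
        print("OK: the analysis reproduces the committed results")
    else:
        sys.exit("MISMATCH: results changed; investigate before publishing")

Real pipelines would wrap this in a container and a workflow manager such as Nextflow or Snakemake, but the principle is the same: the claim and the computation that produced it travel together.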


7. Strategy 5

Cultivate Cultural Evolution—But Only After Structural Change

Culture alone can’t beat misaligned incentives, but once structural reforms take hold, culture can reinforce them.

Helpful cultural shifts include:

  • valuing slow, deep work

  • celebrating careful null results

  • admiring correct but unglamorous findings

  • encouraging mentorship over output

  • building reputations for “reliability,” not “impact factor”

Science should not feel like a performance—it should feel like a craft.


8. The Key Principle:

Realigning Incentives with Truth

Smaldino & McElreath’s core insight is brutally simple:

If truth-seeking is not rewarded, it will go extinct.

Everything else in this post is downstream from that principle.

We fix science by making truth—not speed, not flash, not novelty—the central axis of academic fitness.

Once the incentives change, evolution will do the rest. Good science will flourish again, not because scientists try harder, but because the environment finally selects for quality.


Conclusion:

Repair Is Possible—But Requires Evolutionary Engineering

Science has survived revolutions, political pressures, ideological hijacking, wars, and paradigm shifts.
It can survive this crisis, too—but not by accident.

We need:

  • new metrics

  • new funding frameworks

  • new journal practices

  • restored replication capacity

  • cultural reinforcement

If we redesign the selective pressures, the scientific ecosystem will reorganize accordingly.

Bad science evolved naturally under misaligned incentives.
Good science can evolve naturally under better ones.
