Wednesday, February 11, 2026

Survival of the Scrupulous: Evolutionary Strategies for Doing Solid Science in a Broken System

Smaldino & McElreath’s The Natural Selection of Bad Science paints a bleak picture: the academic ecosystem selects for speed, flash, and quantity over accuracy.

The result is an environment where low-rigour strategies often dominate.

But here’s the twist.

Even in harsh evolutionary landscapes, niches exist.
Some organisms survive not by mimicking the majority, but by exploiting openings the majority overlooks.

This essay explores evolutionarily stable strategies (ESS) for researchers who want to:

  1. Maintain deep methodological rigour

  2. Avoid questionable research practices (QRPs)

  3. Still build competitive careers

  4. Contribute meaningfully to the reliability and stability of knowledge

These strategies don’t require idealism.
They are practical, adaptive, robust—designed to work within the current environment.

Think of this as an evolutionary survival guide for conscientious scientists.


1. Specialize in Slow, Hard, Defensible Work

Adopt the “tortoise strategy” in a habitat full of hares

In evolution, slow strategies can win when:

  • the environment punishes errors severely, or

  • reliability becomes the bottleneck resource.

Academia is beginning to shift in this direction:

  • Meta-analyses now dominate many fields.

  • Journals increasingly value robustness and transparency.

  • Institutes like HHMI and EMBL reward quality over quantity.

  • Funding agencies have begun emphasizing methodological innovation and replicability.

Strategy:
Become the person whose work is trusted, cited, and used for a decade—not just a news cycle.

Concrete tactics:

  • Build core datasets or reference maps that become foundational resources.

  • Create hard-to-replace expertise in experimental design or statistical methodology.

  • Develop tools, software, pipelines, or protocols that become industry standards.

  • Focus on problems that cannot be solved with low effort or QRPs.

Think:
the approach of Sydney Brenner, Max Perutz, Jennifer Doudna, or Michael Nielsen—problems that require deep conceptual work rather than quick output.


2. Become a Methodological Apex Predator

In nature, organisms survive by being better at detecting deception than their competitors are at producing it.

High-rigour scientists can thrive by developing:

  • strong statistical literacy

  • fluency in experimental design

  • skill in identifying confounds, biases, and artefacts

  • mastery of cutting-edge analytic methods

This creates powerful advantages:

  1. You avoid false-positive traps others fall into.

  2. Your papers withstand heavy scrutiny.

  3. Reviewers eventually learn that “when you say something, it’s probably correct.”

  4. You attract collaborators who need reliability.

  5. You catch errors that would sink weaker labs.

In an ecosystem full of noise, clarity is currency.


3. Choose Problems Where Low Rigour Cannot Compete

An evolutionary trick: select environments where cheaters lose.

Examples:

3.1 Fields that require large datasets

It is far harder to p-hack millions of datapoints.

  • genomics

  • epidemiology

  • structural biology

  • neuroimaging consortia

  • palaeogenomics

  • protein structure prediction

  • computational linguistic corpora

3.2 Fields where reproducibility is built-in

  • physics

  • crystallography

  • materials science

  • mathematical biology

  • certain areas of computational neuroscience

3.3 Fields where experiments take a long time and shortcuts are obvious

Examples:

  • long-read sequencing pipelines

  • high-resolution electron microscopy

  • large animal models

  • field ecology with multi-year data series

Cheating is difficult when the ecosystem itself enforces rigour.


4. Build an “Open Science Shield”

Transparency as evolutionary defense and strategic advantage

You can weaponize openness as a competitive strategy.

Why?

Because:

  • QRPs thrive in darkness.

  • Rigour thrives in daylight.

  • Transparent work attracts collaborators and citations.

  • Reviewers become more lenient when they can verify things.

  • Open pipelines become long-term assets.

  • Public datasets act as continuous advertising.

Practical tactics:

  • Release analysis code on GitHub.

  • Publish preregistered study designs.

  • Share intermediary results and QC plots.

  • Publish negative results on preprint servers.

  • Use notebooks (Jupyter, RMarkdown) that fully document workflow.

  • Build reproducible pipeline containers (Docker, Singularity).

Open science is not charity.
It is reputation insurance and network-building.


5. Use the “Two-Speed Lab” Model

An evolutionary mixed strategy that exploits niche partitioning

Many successful, ethical labs operate with two parallel workstreams:

Workstream A: Deep foundational projects

Slow, careful, rigorous, and high-impact.

Workstream B: Fast, low-risk but high-quality analyses

Examples:

  • method comparisons

  • secondary analyses of public data

  • short perspective pieces

  • data visualization papers

  • workflow automation papers

  • replication studies with open datasets

This creates:

  • a steady stream of publications

  • a consistent CV signal

  • protection from being outcompeted

  • intellectual room for deep work

Think of it like stable foraging:
slow-growing trees + fast-growing shrubs.


6. Win Through Collaboration, Not Competition

In nature, cooperation often beats cheating in stable groups.

QRPs are usually individual strategies.
Rigour often emerges from collaboration, because:

  • more eyes catch more errors

  • reputational risk is shared

  • complementary expertise increases quality

  • interdisciplinary teams produce stronger papers

  • multi-institutional work has higher credibility

If you build a network of trustworthy collaborators, you create an environment where:

  • you gain citations

  • you gain coauthorships

  • you gain visibility

  • you gain methodological support

  • you reduce workload on data cleaning and validation

Nature’s lesson:
coalitions stabilize against cheaters.


7. Leverage Emerging “Rigour-Friendly” Incentives

Evolution shifts. Early adopters of new niches prosper.

Major meta-incentives are changing rapidly:

  • NIH has begun requiring rigor & reproducibility sections

  • funders request data-sharing plans

  • journals offer Registered Reports

  • replication studies are being funded

  • computational pipelines are moving toward full reproducibility

  • AI-assisted QC tools are exploding

Young scientists who master these skills early will have advantages for 10–20 years.

Examples of niche specializations that will be crucial:

  • statistical QC

  • pipeline reproducibility

  • AI-based artefact detection

  • FAIR-compliant data curation

  • robust experimental design

  • preregistration and meta-science expertise

This is adaptive specialization.

You evolve into a niche where selection pressures favour rigour.


8. Build a Reputation for Being Right (Not Just Prolific)

Reputations have evolutionary inertia.

Even in a flawed system, reputational signals matter:

  • reviewers trust you

  • editors recognize your name

  • collaborators seek you out

  • funders remember low-risk applicants

  • your students carry forward the brand

You gain this through:

  • accurate predictions

  • methods that solve real problems

  • papers that are cited for reliability

  • refutations that are respected even when controversial

  • preprints that withstand public scrutiny

  • talks where you critique your own work openly

A reputation for reliability is an evolutionarily stable attractor.

It cannot be outcompeted easily because:

  • it brings long-term fitness

  • it attracts resources

  • it increases survivability in changing environments

This is the “oak tree strategy” of academia.


9. Hide Your Rigour, Not Your Productivity

A counterintuitive strategy borrowed from animal behaviour.

Some animals survive by appearing more aggressive than they are.
Others by appearing more harmless than they are.

A scientist can survive by appearing more productive than the raw output alone suggests.

Examples:

  • post substantial preprint work-in-progress

  • maintain an active GitHub log

  • present at conferences regularly

  • share datasets incrementally

  • post method notes or short technical reports

  • communicate findings on blogs or social media

These increase visible activity without compromising rigour.

This is a signaling strategy.

The key:
signal high engagement while practicing high caution.


10. Choose Your Predator Wisely: Strategic Advisor and Environment Selection

Your evolutionary pressure depends on where you grow.

A supportive PI or institution can offset bad incentives by:

  • valuing quality explicitly

  • offering stable timelines

  • sheltering early-career work

  • rewarding replication and careful design

  • giving students intellectual independence

  • maintaining ethical group norms

Some labs are survival traps.
Others are evolutionary sanctuaries.

Choosing the right environment is itself an evolutionary strategy.


11. Master the Art of Saying “No” to Bad Incentives

Survival often depends on avoiding maladaptive temptations.

You do not have to:

  • chase the latest hype

  • p-hack to survive

  • overstate your conclusions

  • rush sloppy manuscripts

  • inflate your claims in grant proposals

  • run underpowered studies

  • manufacture novelty

  • fight twenty small battles instead of one meaningful one

Adaptive restraint is a real evolutionary strategy.

It conserves energy.
It preserves integrity.
It protects long-term career arcs.


Conclusion:

Rigour Can Survive—If You Evolve Strategically

Bad incentives may dominate the environment, but evolution rarely drives all diversity to extinction.
There are always:

  • niches

  • mixed strategies

  • hidden advantages

  • long-term payoffs

  • coalition-based protections

  • reputational stabilizers

  • structural shelters

You can survive—and thrive—through strategies that evolve around the system’s flaws.

The key principles are:

  • Choose hard problems

  • Become impossible to replace

  • Use transparency as strength

  • Build coalitions

  • Specialize in future-proof skills

  • Signal activity without sacrificing integrity

  • Pick your environment strategically

  • Aim for long-term fitness, not short-term flash

Science may be evolving badly at the systemic level.
But as an individual organism, you can evolve differently.

You can become the kind of scientist whose work sets the foundation that others rely on—even in a noisy, messy ecosystem.

And ultimately, that’s what real success looks like.

Tuesday, February 10, 2026

Post 11: Can We Fix Science? Restoring Selection for Quality

 Based on Smaldino & McElreath (2016), “The Natural Selection of Bad Science”


Introduction:

Evolution Won’t Fix Itself—but We Can Change the Environment

By this point in the series, we’ve seen the core argument of Smaldino & McElreath’s paper: bad science evolves because the academic environment rewards it. Not a single individual needs to be malicious, lazy, or fraudulent. When incentives reward volume over accuracy, methods that maximize volume will dominate—even if they erode scientific reliability.

That brings us to the most pressing question of all:

Can we fix it?

Can we re-engineer the selective environment so that high-quality science once again becomes evolutionarily stable?

This post explores solutions—not surface-level fixes, not motivational posters, but structural reforms grounded in evolutionary logic. We’ll examine what changes might realign incentives, strengthen replication, and restore the long-term health of the scientific ecosystem.


1. Lessons from Evolutionary Biology:

If you want a different trait, change the selection pressure

In natural systems, you don’t get cooperation simply because everyone agrees cooperation is good.
You get it because:

  • cheaters are punished

  • cooperators gain rewards

  • environments favour long-term stability over short-term exploitation

Science is no different.

Smaldino & McElreath’s model tells us plainly:

If academia continues to reward fast, flashy, low-rigour work, the system will produce fast, flashy, low-rigour science.

The only way to restore quality is to induce a different evolutionary equilibrium.

The goal is not to change individual scientists (most are already well-intentioned).
The goal is to change the rules of the game.


2. Why Awareness and Training Are Not Enough

Over many decades, scientific organizations have launched initiatives:

  • Responsible Conduct of Research (RCR) training

  • workshops on good statistical practice

  • reproducibility awareness campaigns

  • ethics pledges

  • open science seminars

These are valuable, but alone they fail for one simple reason:

Culture cannot override entrenched structural incentives.

A workshop encouraging careful methodology cannot compete with:

  • your next grant renewal

  • tenure review

  • student throughput

  • impact factor expectations

  • publication quotas

When survival depends on productivity, “good intentions” get selected out.

Fixing science requires modifying the fitness landscape.


3. Strategy 1

Reform the Metrics that Drive Selection

Metrics are the oxygen of academia.
We measure:

  • publication count

  • impact factor of journals published in

  • h-index

  • citation numbers

  • grant income

  • student count

These metrics form the environment shaping evolutionary adaptation.
If we want better adaptation, we must change the environment.


3.1 Stop Counting Papers

The simplest intervention:
decouple career advancement from publication quantity.

Instead of:

  • “How many papers did you publish?”

use:

  • “What problems did you solve?”

  • “What uncertainties did you eliminate?”

  • “What reusable resources did you create?”

  • “How robust are your findings?”

Imagine a tenure committee that reviews 5 representative works in detail rather than skimming 60 papers.

Countries like the UK have experimented with similar ideas (REF selects a small number of outputs), but implementation has been inconsistent.


3.2 Reward Replication Explicitly

A system that values only novel positive results is destined for decay.

Solutions:

  • Dedicated career pathways for replication specialists

  • Top journals publishing high-quality replications

  • Counting replications as equal in prestige to novel discoveries

  • Grant calls specifically for verification work

  • “Registered Replication Reports” modeled after the Psychological Science initiative

Replications are the immune system of science.
We cannot survive long with compromised immunity.


3.3 Emphasize Open Data, Open Methods, and Reproducibility

Quality survives when transparency enforces accountability.

Policies that help:

  • Mandatory availability of data and code

  • Linking datasets to publications via DOIs

  • Enforcement of preregistration for hypothesis-driven work

  • Requiring statistical scripts as supplementary material

  • Automated statistical checks before publication

The key idea:
make it harder for low-rigour research to hide.


4. Strategy 2

Change Institutional Incentives from the Ground Up

Metrics are symptoms of a deeper structural issue:
universities themselves are rewarded for volume—of papers, grants, enrollment.

We can address this.


4.1 Redesign Tenure and Promotion Criteria

Faculty evaluation should weigh:

  • originality and depth

  • replicability and rigor

  • quality of mentoring

  • contributions to shared datasets

  • codes, tools, and infrastructure

  • interdisciplinary bridges

  • long-term, foundational projects

Prestigious institutes like EMBL and HHMI already de-emphasize publication count, and their research output is widely regarded as exceptionally strong.


4.2 Stabilize Early-Career Positions

Precarity forces young scientists into risky behaviour (p-hacking, haste, hype).
Better structures:

  • multi-year, stable research fellowships

  • protected time for large, rigorous experiments

  • guaranteed minimum funding for newly tenured faculty

  • career bridges between postdoc and faculty that aren’t elimination tournaments

Imagine science as a marathon instead of a gladiatorial arena.


4.3 Fix the Grant System

Grants strongly shape evolution.
Reforms could include:

  • awarding funds partly by lottery after quality screening (New Zealand tests this)

  • funding research programs, not individual “projects”

  • creating long-term grants (7–10 years) for risky but foundational work

  • requiring replication or verification components in proposals

  • capping the number of grant proposals individuals can submit

A system that rewards ideas over products will produce better science.


5. Strategy 3

Rebuild the Role of Journals and Publishers

Journals are selective environments with enormous influence.

Reforms include:


5.1 Eliminate Novelty Bias

Many journals explicitly prioritize “surprising” results.
This encourages labs to:

  • chase small effects

  • amplify borderline findings

  • massage statistics

  • selectively report

If journals instead asked:

“Is this claim true and demonstrated rigorously?”

we would see a shift in lab strategies within a decade.


5.2 Introduce Results-Blind Review

A powerful approach already adopted by some outlets:

  • Review only the research question, design, and methods.

  • Accept the paper before the results exist.

  • Publish regardless of outcome.

This eliminates publication bias at the root.


5.3 Promote Negative Results

Negative results are essential for knowledge but rarely published.

Solutions:

  • dedicated journals

  • special sections in major outlets

  • citation incentives for null findings (meta-analyses rely on them)

Publishing negatives reduces the false-positive enrichment that fuels bad science.


6. Strategy 4

Strengthen Replication as a Community Norm

Even if journals don’t reform quickly, communities can.


6.1 Collaborative Large-Scale Replications

Examples:

  • Many Labs Project

  • OSF’s Reproducibility Initiative

  • Registered Replication Reports

These build:

  • shared methodology

  • cross-validation

  • community pressure for quality

They also remove perverse incentives—no single lab “owns” the result or benefits disproportionately.


6.2 Automated Replication Pipelines

Fields like computational biology and machine learning can integrate:

  • continuous reproducibility checks

  • containerized workflows

  • pipeline testing (e.g., Nextflow, Snakemake)

  • version-controlled analyses

If code can be re-run automatically, fraud and sloppiness become far harder to hide.


7. Strategy 5

Cultivate Cultural Evolution—But Only After Structural Change

Culture alone can’t beat misaligned incentives, but once structural reforms take hold, culture can reinforce them.

Helpful cultural shifts include:

  • valuing slow, deep work

  • celebrating careful null results

  • admiring correct but unglamorous findings

  • encouraging mentorship over output

  • building reputations for “reliability,” not “impact factor”

Science should not feel like a performance—it should feel like a craft.


8. The Key Principle:

Realigning Incentives with Truth

Smaldino & McElreath’s core insight is brutally simple:

If truth-seeking is not rewarded, it will go extinct.

Everything else in this post is downstream from that principle.

We fix science by making truth—not speed, not flash, not novelty—the central axis of academic fitness.

Once the incentives change, evolution will do the rest. Good science will flourish again, not because scientists try harder, but because the environment finally selects for quality.


Conclusion:

Repair Is Possible—But Requires Evolutionary Engineering

Science has survived revolutions, political pressures, ideological hijacking, wars, and paradigm shifts.
It can survive this crisis, too—but not by accident.

We need:

  • new metrics

  • new funding frameworks

  • new journal practices

  • restored replication capacity

  • cultural reinforcement

If we redesign the selective pressures, the scientific ecosystem will reorganize accordingly.

Bad science evolved naturally under misaligned incentives.
Good science can evolve naturally under better ones.

OKRA’S SILK ROAD JOURNEY: HOW A SLIMY AFRICAN POD WON THE MUGHAL HEART AND EVERY INDIAN LUNCHBOX

Few vegetables have a PR challenge like okra — ladies’ finger, bhindi, vendakkai.

People joke about its sliminess.
Children avoid it.
Westerners fear it.

But in India?
It is one of the most beloved vegetables.

What’s even more surprising:
okra is African, not Indian.
And its arrival story is dramatic.


AN AFRICAN TRAVELLER

Okra originated in the regions around:

  • Ethiopia

  • South Sudan

  • Eritrea

It spread along the Nile Valley into Egypt, then through Arab traders into the Middle East.

Evidence suggests okra reached the Indian Ocean world via:

  • Omani traders

  • Yemeni merchants

  • medieval Indian Ocean sailors

By 1100–1300 CE, okra appears in Tamil and Telugu texts.

This makes it one of the earliest African vegetables adopted into Indian cuisine.


THE MUGHALS FALL IN LOVE

The Mughals adored okra — especially Akbar’s court.

Ain-i-Akbari mentions several preparations:

  • bhindi masala with pomegranate seeds

  • fried okra with pepper

  • okra cooked in clarified butter

Okra has a unique trait that Mughal chefs exploited brilliantly:

Its mucilage (slime) thickens gravies naturally.

From Khyber and Kabul to Multan and Delhi, okra became a key ingredient in:

  • qormas

  • stews

  • rich masalas


REGIONAL JOURNEYS OF OKRA

Tamil Nadu

Vendakkai is temple-friendly and appears in:

  • vendakkai poriyal

  • puli kuzhambu

  • sambar

Kerala

Ladies’ fingers are roasted for:

  • theeyal

  • mezhukkupuratti

Andhra

Okra fry with gunpowder masala is iconic.

Karnataka

Bendekayi gojju is a sweet–tangy masterpiece.

Maharashtra

Bhendi bhaaji with goda masala is a weekday staple.

North India

Bhindi masala, crispy kurkuri bhindi, bhindi do pyaza—Delhi dhabas made okra glamorous.


WHY INDIA ADORED OKRA WHILE EUROPE DID NOT

  • Indian cooking uses spices and hot oil, which tame the sliminess

  • The slime helps spices cling to the pods

  • It grows extremely well in India’s heat

  • It requires very little water

  • It yields frequently

  • It is cheap

In every way, okra is a perfect match for India.


THE ECONOMIC POWER OF OKRA

India is:

  • the world’s largest producer

  • the world’s largest consumer

  • a major exporter to the Gulf

In rural markets, the price of okra often dictates the day’s menu.


A FUN ANECDOTE: THE ROYAL BRINJAL VS OKRA WAR

In many villages in North India, elderly people still playfully argue:

  • “Bhindi sabziyon ki rani hai.” (Okra is queen of vegetables)

  • “Nahin, baingan raja hai!” (No, brinjal is king!)

This mock rivalry reveals how deeply okra has entered the cultural vocabulary.


CONCLUSION

Okra may not be indigenous to India, but today, India is the beating heart of global okra culture.

No lunchbox, temple feast, or dhaba menu feels complete without it.

Sunday, February 8, 2026

Post 10: The Mathematics of Declining Research Quality—A Deep Dive into the Model

Smaldino & McElreath’s (2016) The Natural Selection of Bad Science is often discussed for its cultural and sociological insights—publish-or-perish, career pressure, replication failures—but fewer people have actually read the mathematics. Yet the mathematics is the engine that powers the paper’s central claim:

If incentives reward production of positive, novel results, then low-rigour research strategies will evolve automatically—even when no one intends harm.

Today’s post is a full walkthrough of the model: what it assumes, how it works, what it predicts, and why its implications are unavoidable under current scientific incentive structures.


1. Why a Mathematical Model?

Intuition is useful, but evolution does not always behave intuitively.

Sometimes selection leads to unexpected outcomes:

  • Cooperation collapses even when everyone agrees it should exist.

  • A trait that is costly (e.g., low rigour → more errors) can still spread if it brings relative advantage.

  • Populations evolve toward states that are stable, not optimal.

Smaldino & McElreath turned to evolutionary game theory and cultural evolution to formalize these dynamics.
Their model is not about people being evil, ignorant, or lazy.
It is about strategies being selected by an environment shaped by:

  • publication counts,

  • significance thresholds,

  • novelty bias,

  • grant success metrics.

In this sense, it is no different from modelling how bacteria evolve in a petri dish.
The “nutrient agar” here is the academic career structure.


2. What Is Being Modelled?

The model simulates research labs (or research strategies), each defined by two key traits:

2.1 Effort (e)

Effort represents rigour, specifically:

  • careful experimental design

  • large sample sizes

  • proper controls

  • thorough analysis

High effort → higher replication success, slower output.
Low effort → faster output, more false positives.

2.2 Productivity (h)

Productivity is the probability of publishing a paper in a given time step.

In the model, effort and productivity are inversely related:

High effort → low productivity.
Low effort → high productivity.

This captures real-world lab dynamics: the fastest labs are rarely the most careful.


3. The Core Equations

Now let’s walk through the main mathematical components.


3.1 Producing Results

Each lab attempts studies.
For each study:

  • There is a true effect with base probability b (background rate of true hypotheses).

  • The lab’s rigour (effort e) determines false positive/false negative rates.

The probability of obtaining a publishable, positive result is influenced by:

  • the prevalence of true hypotheses (b)

  • the statistical power of the lab (increasing with effort)

  • the false positive rate (decreasing with effort)

Low effort produces many publishable false positives.
High effort produces fewer but more accurate results.
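
In symbols, with statistical power W(e) increasing in effort and false-positive rate α(e) decreasing in effort, one standard way to write the chance of obtaining a publishable positive result (a plausible formalization, not necessarily the paper’s exact notation) is:

P(positive result) = b · W(e) + (1 − b) · α(e)

Only the first term corresponds to a true discovery. A low-effort lab has a higher α(e), so a larger share of its positives are false, and because it also runs more studies per unit time, it floods the literature with them.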


3.2 Publication and Fitness

Labs gain “fitness” (academic success) through publications.

Fitness increases when:

  • many papers are produced

  • papers get published (positive results only)

  • labs obtain grants (which depend on publication counts)

Fitness is a mathematical proxy for:

  • student recruitment

  • resource access

  • prestige

  • survival through competition

Thus:

A lab’s ability to survive and reproduce (i.e., spawn new labs) is directly tied to publication output.

Quality matters only indirectly, through replication.


3.3 Replication as a Purifying Force

A small proportion r of studies attempt replication.

Let:

  • s = success rate of replication (dependent on original lab’s effort)

  • f = failure rate

Replication outcomes affect fitness:

  • Successful replication → boosts reputation

  • Failed replication → decreases reputation or eliminates strategy

But here is the critical insight:

Replications occur too infrequently to counterbalance the huge productivity advantage of low-effort labs.

This is the mathematical foundation of the replicability crisis.


4. Evolutionary Dynamics: Inheritance, Variation, Competition

Labs reproduce (spawn new labs) with probability proportional to their fitness.
Offspring labs inherit parental traits with mutation.

Just like biological evolution:

  • successful strategies proliferate

  • unsuccessful ones go extinct

  • small random variations introduce novelty

The model is iterated over many generations.
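
As a rough illustration of how this loop plays out, here is a deliberately stripped-down JavaScript sketch of the selection dynamics. It captures only the effort–productivity trade-off and the inheritance step; replication is omitted, and all parameters are illustrative rather than taken from the paper:

// Minimal sketch of the selection loop (illustrative parameters only).
function nextGeneration(labs) {
  // Expected output this generation: lower effort means more papers (h ~ 1/e).
  const papers = labs.map(lab => 1 / lab.effort);
  const total = papers.reduce((a, b) => a + b, 0);

  // Found the next generation of labs in proportion to output,
  // with effort inherited from the "parent" lab plus a small mutation.
  return labs.map(() => {
    let r = Math.random() * total;
    let parent = 0;
    while ((r -= papers[parent]) > 0) parent++;
    const mutated = labs[parent].effort + (Math.random() - 0.5) * 0.05;
    return { effort: Math.min(1, Math.max(0.01, mutated)) };
  });
}

let labs = Array.from({ length: 200 }, () => ({ effort: 0.9 })); // start rigorous
for (let g = 0; g < 5000; g++) labs = nextGeneration(labs);

const meanEffort = labs.reduce((s, l) => s + l.effort, 0) / labs.length;
console.log(meanEffort.toFixed(2)); // drifts toward the lower bound over generations

Even starting from a uniformly rigorous population, mean effort drifts toward the lower bound, because every generation the faster, lower-effort labs found a larger share of the next one.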


5. What the Model Predicts

The results are stark.

5.1 Effort declines over generations

Even if the initial population begins with very high rigour, the following happens:

  1. Low-effort labs publish more.

  2. They gain more funding, prestige, and visibility.

  3. They produce more “offspring” labs.

  4. They dominate the population.

Mathematically, effort e declines toward the minimum boundary of the model.

This is not moral failure.
It is adaptive optimisation under current incentives.


5.2 False positives increase

Because low-effort labs proliferate, the overall false positive rate increases dramatically.

The model shows a clear, monotonic rise in:

  • Type I errors

  • exaggerated effect sizes

  • non-reproducible claims

This matches empirical data from psychology, cancer biology, genetics, and economics.


5.3 Replication cannot rescue the system

Even when the replication rate is increased, the system continues declining.

Why?

Two reasons:

5.3.1 Replication rate is too small relative to publication volume

Low-effort labs produce so many papers that the replication system is overwhelmed.

5.3.2 Replications have low prestige

In real life (and in the model), failed replications rarely cause career extinction.
They mostly create noise.

Replication is like a weak immune system facing overwhelming infection.


5.4 The only way to reverse decline is to change incentives

The simulations demonstrate that no amount of moral encouragement or “awareness of good practice” can solve the problem.

Rigour can evolve back upward only if:

  • replication is heavily rewarded,

  • low-effort labs are severely punished, and

  • publication counts stop driving survival.

Without structural overhaul, decline is inevitable.


6. Mathematical Intuition: Why Low Rigour Wins

Let's illustrate the intuition using a simplified scenario.

Suppose two labs:

  • Careful Lab: produces 2 solid papers/year

  • Fast Lab: produces 6 weak papers/year

If careers are evaluated on paper count, then:

  • Fast Lab will obtain more grants

  • Fast Lab will attract more students

  • Fast Lab will expand faster

  • Fast Lab’s descendants will dominate

Even if 60% of Fast Lab’s papers are false:

  • They still outcompete

  • Replications catch only a small fraction

  • The ecosystem floods with noise

  • Careful labs eventually die off

The evolutionary equilibrium favors strategies maximizing short-term output, not long-term reliability.

This mirrors classic tragedy-of-the-commons models.
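
To make the arithmetic explicit (with purely illustrative numbers): if career fitness is simply proportional to annual paper count, the Fast Lab’s relative fitness is 6 / 2 = 3, so under selection proportional to fitness its strategy spreads roughly three times as fast per generation. Even a replication regime that caught and penalized half of the Fast Lab’s false papers (0.5 × 0.6 × 6 ≈ 1.8 papers per year discounted) would still leave it at about 4.2 effective papers per year against the Careful Lab’s 2.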


7. Why the Model Matters

This model is not a toy.

It explains real data:

  • rising retraction rates

  • inflated effect sizes

  • widespread p-hacking

  • the reversals of “established findings”

  • competitive “science bubbles” that collapse

  • the growth of high-output, low-rigour labs

It demonstrates mathematically that the crisis is not accidental.

It is the expected outcome of:

  • rewarding quantity over quality

  • rewarding novelty over verification

  • rewarding significance over accuracy

  • punishing slow, careful work

This is the same mathematics that explains:

  • antibiotic resistance

  • overfishing collapse

  • immune evasion in viruses

  • the evolution of cheating in social species

Where selection pressures go, evolution follows.


8. The Paper’s Most Important Mathematical Lesson

The most important insight is simple yet devastating:

If you reward scientists for publishing a lot, you will get a lot of publications.
But you will not get a lot of truth.

Quality cannot win an evolutionary game where fitness depends on quantity.

Only by changing the payoff structure can the system evolve toward a healthier equilibrium.


Conclusion: Mathematics as a Warning Signal

Smaldino & McElreath’s model is a mathematical smoke alarm.
It quantifies what many intuitively sensed: science is evolving maladaptively.

The system will not self-correct.
Culture cannot fix what incentives break.
The only remedy is to rebuild the environment so that truth—not productivity—determines academic survival.

In the next posts, we’ll explore solutions—structural, cultural, and institutional—that could reverse these evolutionary trends.

Saturday, February 7, 2026

Turning ChatGPT into a One-Click Prompt Power Tool

If you use ChatGPT heavily—for research, coding, writing, or reviewing papers—you’ve probably noticed two things:

  • ChatGPT often suggests useful follow-up actions like “Summarize this paper” or “Critique this answer”
  • You still have to manually copy, paste, and submit those prompts

That friction adds up. This post explains how I built a Chrome extension that turns ChatGPT into a one-click prompt launcher.


What the Extension Does

1. Clickable Assistant Suggestions

Any bullet-point suggestion generated by ChatGPT becomes clickable. One click:

  • Inserts the text into the input box
  • Automatically submits the prompt

No typing. No copy-paste.

2. Advanced Prompt Menu Inside ChatGPT

The extension adds a floating menu with expert-level actions:

  • 📄 Summarize a paper (structured, reviewer-style, methods-focused)
  • 🧠 Critique reasoning or list assumptions
  • ⚙️ Explain or debug code
  • ✍️ Rewrite text for clarity or reviewers

Each item is fully clickable and auto-submits.


Why This Is Non-Trivial

Automating ChatGPT is harder than it looks:

  • Inline scripts are blocked by Content Security Policy (CSP)
  • React ignores synthetic keyboard events
  • ChatGPT no longer uses a <textarea>
  • Content scripts cannot directly control page JavaScript

This extension solves all of these problems properly.


Architecture Overview


content.js      → UI, DOM detection, click handling
page-script.js  → Trusted page-context execution
postMessage     → Secure communication bridge

The content script handles the UI. The page script runs inside ChatGPT’s own context, allowing safe interaction with React-controlled inputs.


Manifest (Chrome Extension Setup)

{
  "manifest_version": 3,
  "name": "ChatGPT Clickable Prompts & Advanced Menu",
  "version": "1.0",
  "content_scripts": [
    {
      "matches": ["https://chatgpt.com/*"],
      "js": ["content.js"],
      "run_at": "document_idle"
    }
  ],
  "web_accessible_resources": [
    {
      "resources": ["page-script.js"],
      "matches": ["https://chatgpt.com/*"]
    }
  ]
}

The web_accessible_resources entry is critical—it allows CSP-safe script injection.


Injecting a Trusted Page Script


function injectPageScript() {
  const s = document.createElement("script");
  s.src = chrome.runtime.getURL("page-script.js");
  document.documentElement.appendChild(s);
}

This avoids inline scripts, which ChatGPT’s Content Security Policy blocks.


Handling ChatGPT’s Input Box

ChatGPT now uses a contenteditable div, not a textarea:


div[contenteditable="true"][role="textbox"]

To insert text in a way React accepts:


const box = document.querySelector('div[contenteditable="true"][role="textbox"]');
box.focus();
document.execCommand("insertText", false, text);
box.dispatchEvent(new InputEvent("input", { bubbles: true }));

Auto-Submitting the Prompt


setTimeout(() => {
  const sendBtn =
    document.querySelector('button[data-testid="send-button"]') ||
    document.querySelector('button[type="submit"]');

  if (sendBtn && !sendBtn.disabled) {
    sendBtn.click();
  }
}, 120);

The delay allows React to enable the Send button.


Bridging Content Script and Page Script

Because the two scripts live in different execution contexts, communication uses window.postMessage.


// content.js
window.postMessage(
  { type: "RUN_CHATGPT_PROMPT", text },
  "*"
);

// page-script.js
window.addEventListener("message", event => {
  if (event.data?.type === "RUN_CHATGPT_PROMPT") {
    setTextAndSubmit(event.data.text);
  }
});

Making Assistant Bullets Clickable


document.querySelectorAll("article li").forEach(li => {
  li.style.cursor = "pointer";
  li.style.textDecoration = "underline";

  li.onclick = () => {
    window.postMessage(
      { type: "RUN_CHATGPT_PROMPT", text: li.innerText },
      "*"
    );
  };
});

A MutationObserver keeps this working as new messages appear.
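
The observer itself is only a few lines. A minimal sketch, where makeBulletsClickable is a hypothetical wrapper around the querySelectorAll snippet above, looks like this:

// makeBulletsClickable() is assumed to wrap the bullet-clickability snippet above.
const observer = new MutationObserver(() => makeBulletsClickable());
observer.observe(document.body, { childList: true, subtree: true });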


Advanced Prompt Menu

The extension defines a reusable prompt library:


{
  title: "Paper actions",
  items: [
    "Summarize this paper",
    "Map claims to evidence",
    "Identify limitations"
  ]
}

Each item appears as a clickable menu option inside ChatGPT.
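
The menu-rendering code is not reproduced in full here, but a rough sketch of how such a library can be turned into a floating, clickable menu looks like the following (buildMenu and the inline styling are illustrative, not the extension’s exact implementation):

// Illustrative sketch: render the prompt library as a floating menu.
function buildMenu(promptLibrary) {
  const menu = document.createElement("div");
  menu.style.cssText =
    "position:fixed;bottom:90px;right:24px;z-index:9999;" +
    "background:#fff;border:1px solid #ccc;border-radius:8px;padding:8px;";

  promptLibrary.forEach(group => {
    const header = document.createElement("div");
    header.textContent = group.title;
    header.style.fontWeight = "bold";
    menu.appendChild(header);

    group.items.forEach(text => {
      const item = document.createElement("div");
      item.textContent = text;
      item.style.cursor = "pointer";
      // Reuse the same bridge as the clickable bullets.
      item.onclick = () =>
        window.postMessage({ type: "RUN_CHATGPT_PROMPT", text }, "*");
      menu.appendChild(item);
    });
  });

  document.body.appendChild(menu);
}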


Why This Is Powerful

  • Turns prompts into UI elements
  • Makes advanced usage the default
  • Eliminates repetitive typing
  • Scales across research, coding, and writing

Instead of remembering “good prompts,” you embed them directly into the interface.


Possible Extensions

  • Ctrl-click to insert without submitting
  • Prompt variables like {TOPIC}
  • Prompt chaining (summarize → critique → experiments)
  • Prompt history and favorites

Final Thoughts

ChatGPT is already powerful, but most users access only a small fraction of its capabilities. By embedding expert prompts directly into the UI, this extension removes friction and enables serious workflows.

Once you get used to one-click prompts, it’s hard to go back.

Post 9: The Cultural Evolution of Questionable Research Practices (QRPs)

 Based on Smaldino & McElreath (2016), “The Natural Selection of Bad Science”


Introduction: How Bad Habits Become Scientific Norms

When we talk about the problems of modern science, we often focus on individual errors:

  • p-hacking

  • selective reporting

  • HARKing

  • small sample sizes

  • forking analytical paths

  • file drawer bias

These are called Questionable Research Practices, or QRPs.

Most researchers agree these practices are harmful.
Most deny that they personally use them.
Most believe “the field” is plagued, but “my lab” is clean.

Yet QRPs continue to spread.

Why?
Because QRPs are culturally transmitted behaviors, shaped by selection pressures that operate not on truth, but on reward structures. Smaldino & McElreath argue that the modern academic environment selects for practices that maximize publication, visibility, and grant acquisition—not accuracy. Over time, these questionable practices evolve into cultural norms passed across labs and generations.

In this post, we explore:

  • Why QRPs evolve even when no one intends to cheat

  • How QRPs spread through academic “memes”

  • The role of mentorship, lab culture, and institutional norms

  • How statistical techniques become cultural artifacts

  • Case studies of QRP evolution in psychology, biomedicine, ecology, and genetics

  • Why QRPs often outperform good science in competitive environments

  • And finally: what cultural evolution tells us about reform


1. QRPs Are Not Just Bad Methods—They Are Adaptive Behaviors

Let’s start with the evolutionary concept:

Traits that increase reproductive success spread, regardless of whether they’re beneficial to the species.

In academia, “reproductive success” means:

  • publishing more

  • getting more citations

  • landing more grants

  • securing prestigious jobs

  • achieving visibility

  • appearing productive

QRPs directly enhance these success metrics:

  • p-hacking increases the chance of significance

  • selective reporting maximizes narrative clarity

  • small sample sizes allow faster publication

  • exploratory methods masquerading as confirmatory work boost novelty

  • dropping null results saves time and reputational cost

Thus, QRPs are evolutionarily fit within the academic ecosystem.

Poor scientific methods are not anomalies—they're successful strategies under existing incentives.


2. The Meme Theory of Bad Science

Richard Dawkins introduced the idea of memes—cultural analogs of genes that replicate through imitation, communication, and institutional reinforcement.

QRPs are memes.

They spread by:

2.1 Apprenticeship

Students learn analysis strategies from mentors.
If a mentor says:

“Try different models until you get a significant one,”

this meme propagates.

2.2 Paper templates

Authors copy statistical approaches from top papers.
If high-impact journals publish flashy underpowered studies, these methods become templates.

2.3 Reviewer expectations

Reviewers reward certain statistical patterns (“clean” results).
Labs adapt to produce them.

2.4 Institutional norms

Funding agencies, departments, and hiring committees prefer volume and novelty.

2.5 Social networks and collaboration

Connected labs exchange methods, scripts, code, and heuristics—good and bad.

Thus, QRPs propagate not because scientists want to be unethical, but because:

They are culturally inherited solutions to survival in academia.


3. How QRPs Begin: From Legitimate Flexibility to Weaponized Flexibility

Every research process involves choices:

  • Which variables to include?

  • Which time windows to analyze?

  • Which transformations to apply?

  • Which outliers to remove?

  • Which statistical test to use?

  • Which covariates to consider?

This flexibility is natural and often necessary.

But under selective pressures, flexibility becomes weaponized:

3.1 Garden of forking paths

A researcher tries many analyses but reports only the “best.”

3.2 Optional stopping

Peeking at the data and collecting more only while p > 0.05, stopping as soon as significance is reached.

3.3 Convenient exclusion of “problematic” participants

When “problematic” means “didn’t fit the hypothesis.”

3.4 Retrofitting hypotheses

Turning exploratory insights into a “predicted effect.”

None of these require malicious intent. They require only:

  • pressure

  • time scarcity

  • ambiguity in methods

  • conventional expectations

Smaldino & McElreath argue that ambiguity is the breeding ground for QRPs.


4. QRPs Spread Through Academic Lineages: Evidence from History

The cultural evolution of QRPs can be observed across disciplines.


4.1 Psychology: A Textbook Case

The pre-2010s psychological literature shows an entire culture shaped by:

  • tiny sample sizes

  • significance chasing

  • underpowered designs

  • flexible stopping rules

  • lack of preregistration

These QRPs were not isolated events—they were evolved norms, taught by senior researchers and rewarded by journals.

The famous Bem (2011) ESP paper—which claimed that people can predict future events—passed peer review because the methods matched the norms of the field.


4.2 Cancer biology: Image manipulation as a cultural practice

In cancer biology:

  • fluorescence images

  • Western blots

  • microscopy panels

were routinely “cleaned up.”

In the early 2000s, this was considered good practice.
Magazines published tutorials on how to “enhance band visibility.”

Over time, these became cultural norms—and only later did the field decide these were QRPs or outright misconduct.


4.3 Genomics: Bioinformatic flexibility

Before preregistration or strict pipelines, it was normal to:

  • choose thresholds post-hoc

  • use multiple assembly parameters

  • cherry-pick alignments

  • apply filters to “remove noise”

These became part of “lab culture,” not individual malpractice.


4.4 Ecology: The “significance hunt”

Ecological field studies often lack sample size control; thus:

  • significance testing became a ritual

  • p < 0.05 became the default “truth filter”

  • QRPs naturally evolved to meet this constraint


5. QRPs Become Cultural Norms Because They Carry Hidden Benefits

QRPs have advantages:

  • Faster publication = quicker CV growth

  • Cleaner narratives = easier acceptance

  • Better-looking results = higher impact journals

  • More significant results = more citations

  • More grants = better resources

  • Less ambiguity = fewer reviewer complaints

Over time, QRPs become “the way science is done.”

This process mirrors cultural transmission in anthropology:
Practices that are rewarded persist.


6. Why QRPs Outcompete Good Science

Smaldino & McElreath’s model predicts an uncomfortable truth:

Labs using QRPs will consistently outperform ethical labs in output metrics.

6.1 Methodological rigor reduces quantity

Large sample sizes = fewer studies per year.

6.2 Transparency slows things down

Preregistration, data sharing, and detailed documentation add labor.

6.3 Statistical integrity reduces positive results

Rigorous analysis → fewer p < 0.05 findings → fewer publications.

6.4 Honest researchers are penalized

In grant review panels and job markets, quantity and novelty dominate.

Thus, ethical labs face extinction within the population of labs—unless the system actively protects them.


7. Cultural Niche Construction: How Fields Shape Their Own Evolution

Smaldino & McElreath point to a key concept: niche construction.

Just as organisms modify their environment (like beavers building dams), scientific communities modify:

  • publication expectations

  • methodological norms

  • peer review criteria

  • statistical conventions

  • training curricula

  • grant requirements

These environmental modifications then shape the next generation of researchers.

Example: The rise of p < 0.05 as a ritual threshold

Originally introduced by Fisher as a rule of thumb, p < 0.05 became:

  • a universal criterion

  • a reviewer expectation

  • a hiring benchmark

  • a grant-writing norm

This cultural niche selected for QRPs that produce p < 0.05 reliably.

Thus:

The system becomes self-perpetuating.


8. The Institutionalization of QRPs: When Practices Gain Official Status

Some QRPs become institutionalized:

8.1 Software defaults

SPSS defaults encouraged questionable analyses for decades, and Excel’s automatic model fitting nudges users toward statistically invalid choices.

8.2 Reviewer norms

Reviewer #2 often demands significance.
This pressure selects for QRPs.

8.3 Journal expectations

Top journals prefer surprising results.
“Surprising” often requires flexible methods.

8.4 Grant success patterns

Funding committees reward bold claims.
QRPs help generate such claims.


9. QRPs Resist Change: Cultural Evolution Creates Inertia

Cultural evolution is resistant to reform because:

9.1 Practices feel natural

QRPs become invisible to those who grow up inside the system.

9.2 Senior scientists defend the status quo

Their reputations and past work depend on outdated methods.

9.3 Fields develop vested interests

Entire theories rest on QRP-dependent findings.

9.4 Institutions reward QRP-driven outcomes

Impact factors, grant income, and productivity metrics are built on shaky foundations.

9.5 Reformers face retaliation

Criticizing QRPs is framed as unprofessional or combative.

Thus, QRPs continue not because they are good methods, but because:

They are good strategies for survival within a maladaptive environment.


10. How Cultural Evolution Can Be Harnessed for Good

The mechanism that spreads QRPs can also spread good practices, if selective pressures change.

10.1 Preregistration becomes a norm

OSF and registered reports create cultural expectations of transparency.

10.2 Open data becomes mandatory

Younger labs increasingly default to open science.

10.3 Large-scale collaborative projects

Participatory replications teach students rigor.

10.4 Teaching meta-science

Universities are beginning to integrate meta-research into their curricula.

10.5 Funding agencies shifting values

NIH, NSF, and ERC now emphasize rigor and reproducibility.

10.6 Social incentives

Researchers on Twitter/X, Mastodon, and YouTube critique bad practices publicly, generating peer-driven accountability.

10.7 New journal models

  • eLife’s consultative review

  • PLOS ONE’s methodological criteria

  • Registered Reports at many journals

Cultural evolution can shift direction if the fitness landscape of academia changes.


Conclusion: QRPs Are Not Detours—They Are Outcomes of Cultural Evolution

QRPs are not deviations from scientific culture—they are products of it.
They flourish because the environmental conditions of academia reward them.

Smaldino & McElreath’s evolutionary lens reveals that:

  • QRPs spread because they increase academic fitness

  • QRPs become norms through cultural transmission

  • QRPs persist because institutions reinforce them

  • QRPs outcompete rigorous methods under current incentives

  • Reform requires changing the selective landscape, not blaming individuals

Science must actively rebalance its internal ecology so that:

  • rigor becomes adaptive

  • transparency becomes normal

  • quantity becomes less important than quality

  • mentorship focuses on good methods

  • early-career researchers aren’t forced into QRPs to survive

Only then will cultural evolution shift from selecting for questionable practices to selecting for robust, reliable, and replicable science.