Saturday, February 7, 2026

Turning ChatGPT into a One-Click Prompt Power Tool

If you use ChatGPT heavily—for research, coding, writing, or reviewing papers—you’ve probably noticed two things:

  • ChatGPT often suggests useful follow-up actions like “Summarize this paper” or “Critique this answer”
  • You still have to manually copy, paste, and submit those prompts

That friction adds up. This post explains how I built a Chrome extension that turns ChatGPT into a one-click prompt launcher.


What the Extension Does

1. Clickable Assistant Suggestions

Any bullet-point suggestion generated by ChatGPT becomes clickable. One click:

  • Inserts the text into the input box
  • Automatically submits the prompt

No typing. No copy-paste.

2. Advanced Prompt Menu Inside ChatGPT

The extension adds a floating menu with expert-level actions:

  • 📄 Summarize a paper (structured, reviewer-style, methods-focused)
  • 🧠 Critique reasoning or list assumptions
  • ⚙️ Explain or debug code
  • ✍️ Rewrite text for clarity or reviewers

Each item is fully clickable and auto-submits.


Why This Is Non-Trivial

Automating ChatGPT is harder than it looks:

  • Inline scripts are blocked by Content Security Policy (CSP)
  • React ignores synthetic keyboard events
  • ChatGPT no longer uses a <textarea>
  • Content scripts cannot directly control page JavaScript

This extension solves all of these problems properly.


Architecture Overview


content.js      → UI, DOM detection, click handling
page-script.js  → Trusted page-context execution
postMessage     → Secure communication bridge

The content script handles the UI. The page script runs inside ChatGPT’s own context, allowing safe interaction with React-controlled inputs.


Manifest (Chrome Extension Setup)

{
  "manifest_version": 3,
  "name": "ChatGPT Clickable Prompts & Advanced Menu",
  "version": "1.0",
  "content_scripts": [
    {
      "matches": ["https://chatgpt.com/*"],
      "js": ["content.js"],
      "run_at": "document_idle"
    }
  ],
  "web_accessible_resources": [
    {
      "resources": ["page-script.js"],
      "matches": ["https://chatgpt.com/*"]
    }
  ]
}

The web_accessible_resources entry is critical: it lets the page load page-script.js from the extension's own URL, a form of injection that works even under ChatGPT's strict Content Security Policy, unlike an inline script.


Injecting a Trusted Page Script


function injectPageScript() {
  // Load page-script.js from the extension package; an external src
  // (rather than inline code) keeps the injection compatible with ChatGPT's CSP
  const s = document.createElement("script");
  s.src = chrome.runtime.getURL("page-script.js");
  document.documentElement.appendChild(s);
}

This avoids inline scripts, which ChatGPT's Content Security Policy blocks.


Handling ChatGPT’s Input Box

ChatGPT now uses a contenteditable div, not a textarea:


div[contenteditable="true"][role="textbox"]

To insert text in a way React accepts:


const box = document.querySelector('div[contenteditable="true"][role="textbox"]');
box.focus(); // execCommand inserts at the focused selection
document.execCommand("insertText", false, text);
box.dispatchEvent(new InputEvent("input", { bubbles: true }));

Auto-Submitting the Prompt


setTimeout(() => {
  const sendBtn =
    document.querySelector('button[data-testid="send-button"]') ||
    document.querySelector('button[type="submit"]');

  if (sendBtn && !sendBtn.disabled) {
    sendBtn.click();
  }
}, 120);

The delay allows React to enable the Send button.


Bridging Content Script and Page Script

Because the two scripts live in different execution contexts, communication uses window.postMessage.


// content.js
window.postMessage(
  { type: "RUN_CHATGPT_PROMPT", text },
  "*"
);

// page-script.js
window.addEventListener("message", event => {
  if (event.data?.type === "RUN_CHATGPT_PROMPT") {
    setTextAndSubmit(event.data.text);
  }
});
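
The listener above calls setTextAndSubmit, which is not shown elsewhere in this post. A minimal sketch of what it might look like in page-script.js, simply combining the insertion and auto-submit steps from the previous sections (the selectors are the same assumptions as above and may need updating as ChatGPT's DOM changes):


// page-script.js: illustrative sketch, not the extension's exact code
function setTextAndSubmit(text) {
  const box = document.querySelector(
    'div[contenteditable="true"][role="textbox"]'
  );
  if (!box) return;

  // Insert text so React's controlled input registers the change
  box.focus();
  document.execCommand("insertText", false, text);
  box.dispatchEvent(new InputEvent("input", { bubbles: true }));

  // Give React a moment to enable the Send button, then click it
  setTimeout(() => {
    const sendBtn =
      document.querySelector('button[data-testid="send-button"]') ||
      document.querySelector('button[type="submit"]');
    if (sendBtn && !sendBtn.disabled) sendBtn.click();
  }, 120);
}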

Making Assistant Bullets Clickable


document.querySelectorAll("article li").forEach(li => {
  li.style.cursor = "pointer";
  li.style.textDecoration = "underline";

  li.onclick = () => {
    window.postMessage(
      { type: "RUN_CHATGPT_PROMPT", text: li.innerText },
      "*"
    );
  };
});

A MutationObserver keeps this working as new messages appear.
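
A minimal sketch of that observer, assuming the decoration loop above is wrapped in a function named makeBulletsClickable (a name introduced here for illustration; the dataset guard is also an addition, used to avoid re-binding handlers on every mutation):


function makeBulletsClickable() {
  document.querySelectorAll("article li").forEach(li => {
    if (li.dataset.promptified) return;   // already decorated
    li.dataset.promptified = "true";
    li.style.cursor = "pointer";
    li.style.textDecoration = "underline";
    li.onclick = () => {
      window.postMessage({ type: "RUN_CHATGPT_PROMPT", text: li.innerText }, "*");
    };
  });
}

// Re-run the decoration whenever ChatGPT streams in new content
new MutationObserver(makeBulletsClickable)
  .observe(document.body, { childList: true, subtree: true });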


Advanced Prompt Menu

The extension defines a reusable prompt library:


{
  title: "Paper actions",
  items: [
    "Summarize this paper",
    "Map claims to evidence",
    "Identify limitations"
  ]
}

Each item appears as a clickable menu option inside ChatGPT.
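
As a rough sketch of how that library could be rendered as the floating menu (the ADVANCED_PROMPTS name, the styling, and the layout are illustrative assumptions, not the extension's actual code), each entry simply routes through the same postMessage bridge:


// content.js: hypothetical sketch of the floating menu
const ADVANCED_PROMPTS = [
  {
    title: "Paper actions",
    items: ["Summarize this paper", "Map claims to evidence", "Identify limitations"]
  }
];

function buildPromptMenu() {
  const menu = document.createElement("div");
  menu.style.cssText =
    "position:fixed;bottom:90px;right:20px;z-index:9999;" +
    "background:#fff;border:1px solid #ccc;border-radius:8px;padding:8px;";

  ADVANCED_PROMPTS.forEach(group => {
    const heading = document.createElement("div");
    heading.textContent = group.title;
    heading.style.fontWeight = "bold";
    menu.appendChild(heading);

    group.items.forEach(text => {
      const item = document.createElement("button");
      item.textContent = text;
      item.style.cssText = "display:block;margin:4px 0;cursor:pointer;";
      // Clicking a menu item sends the prompt through the content/page bridge
      item.onclick = () =>
        window.postMessage({ type: "RUN_CHATGPT_PROMPT", text }, "*");
      menu.appendChild(item);
    });
  });

  document.body.appendChild(menu);
}

buildPromptMenu();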


Why This Is Powerful

  • Turns prompts into UI elements
  • Makes advanced usage the default
  • Eliminates repetitive typing
  • Scales across research, coding, and writing

Instead of remembering “good prompts,” you embed them directly into the interface.


Possible Extensions

  • Ctrl-click to insert without submitting
  • Prompt variables like {TOPIC}
  • Prompt chaining (summarize → critique → experiments)
  • Prompt history and favorites

Final Thoughts

ChatGPT is already powerful, but most users access only a small fraction of its capabilities. By embedding expert prompts directly into the UI, this extension removes friction and enables serious workflows.

Once you get used to one-click prompts, it’s hard to go back.

Post 9: The Cultural Evolution of Questionable Research Practices (QRPs)

 Based on Smaldino & McElreath (2016), “The Natural Selection of Bad Science”


Introduction: How Bad Habits Become Scientific Norms

When we talk about the problems of modern science, we often focus on individual errors:

  • p-hacking

  • selective reporting

  • HARKing

  • small sample sizes

  • forking analytical paths

  • file drawer bias

These are called Questionable Research Practices, or QRPs.

Most researchers agree these practices are harmful.
Most deny that they personally use them.
Most believe “the field” is plagued, but “my lab” is clean.

Yet QRPs continue to spread.

Why?
Because QRPs are culturally transmitted behaviors, shaped by selection pressures that operate not on truth, but on reward structures. Smaldino & McElreath argue that the modern academic environment selects for practices that maximize publication, visibility, and grant acquisition—not accuracy. Over time, these questionable practices evolve into cultural norms passed across labs and generations.

In this post, we explore:

  • Why QRPs evolve even when no one intends to cheat

  • How QRPs spread through academic “memes”

  • The role of mentorship, lab culture, and institutional norms

  • How statistical techniques become cultural artifacts

  • Case studies of QRP evolution in psychology, biomedicine, ecology, and genetics

  • Why QRPs often outperform good science in competitive environments

  • And finally: what cultural evolution tells us about reform


1. QRPs Are Not Just Bad Methods—They Are Adaptive Behaviors

Let’s start with the evolutionary concept:

Traits that increase reproductive success spread, regardless of whether they’re beneficial to the species.

In academia, “reproductive success” means:

  • publishing more

  • getting more citations

  • landing more grants

  • securing prestigious jobs

  • achieving visibility

  • appearing productive

QRPs directly enhance these success metrics:

  • p-hacking increases the chance of significance

  • selective reporting maximizes narrative clarity

  • small sample sizes allow faster publication

  • exploratory methods masquerading as confirmatory work boost novelty

  • dropping null results saves time and reputational cost

Thus, QRPs are evolutionarily fit within the academic ecosystem.

Poor scientific methods are not anomalies—they're successful strategies under existing incentives.


2. The Meme Theory of Bad Science

Richard Dawkins introduced the idea of memes—cultural analogs of genes that replicate through imitation, communication, and institutional reinforcement.

QRPs are memes.

They spread by:

2.1 Apprenticeship

Students learn analysis strategies from mentors.
If a mentor says:

“Try different models until you get a significant one,”

this meme propagates.

2.2 Paper templates

Authors copy statistical approaches from top papers.
If high-impact journals publish flashy underpowered studies, these methods become templates.

2.3 Reviewer expectations

Reviewers reward certain statistical patterns (“clean” results).
Labs adapt to produce them.

2.4 Institutional norms

Funding agencies, departments, and hiring committees prefer volume and novelty.

2.5 Social networks and collaboration

Connected labs exchange methods, scripts, code, and heuristics—good and bad.

Thus, QRPs propagate not because scientists want to be unethical, but because:

They are culturally inherited solutions to survival in academia.


3. How QRPs Begin: From Legitimate Flexibility to Weaponized Flexibility

Every research process involves choices:

  • Which variables to include?

  • Which time windows to analyze?

  • Which transformations to apply?

  • Which outliers to remove?

  • Which statistical test to use?

  • Which covariates to consider?

This flexibility is natural and often necessary.

But under selective pressures, flexibility becomes weaponized:

3.1 Garden of forking paths

A researcher tries many analyses but reports only the “best.”

3.2 Optional stopping

Collecting more data only when p > 0.05.

3.3 Convenient exclusion of “problematic” participants

When “problematic” means “didn’t fit the hypothesis.”

3.4 Retrofitting hypotheses

Turning exploratory insights into a “predicted effect.”

None of these require malicious intent. They require only:

  • pressure

  • time scarcity

  • ambiguity in methods

  • conventional expectations

Smaldino & McElreath argue that ambiguity is the breeding ground for QRPs.


4. QRPs Spread Through Academic Lineages: Evidence from History

The cultural evolution of QRPs can be observed across disciplines.


4.1 Psychology: A Textbook Case

The pre-2010s psychological literature shows an entire culture shaped by:

  • tiny sample sizes

  • significance chasing

  • underpowered designs

  • flexible stopping rules

  • lack of preregistration

These QRPs were not isolated events—they were evolved norms, taught by senior researchers and rewarded by journals.

The famous Bem (2011) ESP paper—which claimed that people can predict future events—passed peer review because the methods matched the norms of the field.


4.2 Cancer biology: Image manipulation as a cultural practice

In cancer biology:

  • fluorescence images

  • Western blots

  • microscopy panels

were routinely “cleaned up.”

In the early 2000s, this was considered good practice.
Magazines published tutorials on how to “enhance band visibility.”

Over time, these became cultural norms—and only later did the field decide these were QRPs or outright misconduct.


4.3 Genomics: Bioinformatic flexibility

Before preregistration or strict pipelines, it was normal to:

  • choose thresholds post-hoc

  • use multiple assembly parameters

  • cherry-pick alignments

  • apply filters to “remove noise”

These became part of “lab culture,” not individual malpractice.


4.4 Ecology: The “significance hunt”

Ecological field studies often lack sample size control; thus:

  • significance testing became a ritual

  • p < 0.05 became the default “truth filter”

  • QRPs naturally evolved to meet this constraint


5. QRPs Become Cultural Norms Because They Carry Hidden Benefits

QRPs have advantages:

  • Faster publication = quicker CV growth

  • Cleaner narratives = easier acceptance

  • Better-looking results = higher impact journals

  • More significant results = more citations

  • More grants = better resources

  • Less ambiguity = fewer reviewer complaints

Over time, QRPs become “the way science is done.”

This process mirrors cultural transmission in anthropology:
Practices that are rewarded persist.


6. Why QRPs Outcompete Good Science

Smaldino & McElreath’s model predicts an uncomfortable truth:

Labs using QRPs will consistently outperform ethical labs in output metrics.

6.1 Methodological rigor reduces quantity

Large sample sizes = fewer studies per year.

6.2 Transparency slows things down

Preregistration, data sharing, and detailed documentation add labor.

6.3 Statistical integrity reduces positive results

Rigorous analysis → fewer p < 0.05 findings → fewer publications.

6.4 Honest researchers are penalized

In grant review panels and job markets, quantity and novelty dominate.

Thus, ethical labs face extinction within the population of labs—unless the system actively protects them.


7. Cultural Niche Construction: How Fields Shape Their Own Evolution

Smaldino & McElreath point to a key concept: niche construction.

Just as organisms modify their environment (like beavers building dams), scientific communities modify:

  • publication expectations

  • methodological norms

  • peer review criteria

  • statistical conventions

  • training curricula

  • grant requirements

These environmental modifications then shape the next generation of researchers.

Example: The rise of p < 0.05 as a ritual threshold

Originally introduced by Fisher as a rule-of-thumb, p < 0.05 became:

  • a universal criterion

  • a reviewer expectation

  • a hiring benchmark

  • a grant-writing norm

This cultural niche selected for QRPs that produce p < 0.05 reliably.

Thus:

The system becomes self-perpetuating.


8. The Institutionalization of QRPs: When Practices Gain Official Status

Some QRPs become institutionalized:

8.1 Software defaults

SPSS defaults encouraged questionable analyses for decades.
Excel auto-fits models that are statistically invalid.

8.2 Reviewer norms

Reviewer #2 often demands significance.
This pressure selects for QRPs.

8.3 Journal expectations

Top journals prefer surprising results.
“Surprising” often requires flexible methods.

8.4 Grant success patterns

Funding committees reward bold claims.
QRPs help generate such claims.


9. QRPs Resist Change: Cultural Evolution Creates Inertia

Cultural evolution is resistant to reform because:

9.1 Practices feel natural

QRPs become invisible to those who grow up inside the system.

9.2 Senior scientists defend the status quo

Their reputations and past work depend on outdated methods.

9.3 Fields develop vested interests

Entire theories rest on QRP-dependent findings.

9.4 Institutions reward QRP-driven outcomes

Impact factors, grant income, and productivity metrics are built on shaky foundations.

9.5 Reformers face retaliation

Criticizing QRPs is framed as unprofessional or combative.

Thus, QRPs continue not because they are good methods, but because:

They are good strategies for survival within a maladaptive environment.


10. How Cultural Evolution Can Be Harnessed for Good

The mechanism that spreads QRPs can also spread good practices, if selective pressures change.

10.1 Preregistration becomes a norm

OSF and registered reports create cultural expectations of transparency.

10.2 Open data becomes mandatory

Younger labs increasingly default to open science.

10.3 Large-scale collaborative projects

Participatory replications teach students rigor.

10.4 Teaching meta-science

Universities are beginning to integrate meta-research into their curricula.

10.5 Funding agencies shifting values

NIH, NSF, and ERC now emphasize rigor and reproducibility.

10.6 Social incentives

Critics on Twitter/X, Mastodon, and YouTube call out bad practices publicly, generating peer-driven accountability.

10.7 New journal models

  • eLife’s consultative review

  • PLOS ONE’s methodological criteria

  • Registered Reports at many journals

Cultural evolution can shift direction if the fitness landscape of academia changes.


Conclusion: QRPs Are Not Detours—They Are Outcomes of Cultural Evolution

QRPs are not deviations from scientific culture—they are products of it.
They flourish because the environmental conditions of academia reward them.

Smaldino & McElreath’s evolutionary lens reveals that:

  • QRPs spread because they increase academic fitness

  • QRPs become norms through cultural transmission

  • QRPs persist because institutions reinforce them

  • QRPs outcompete rigorous methods under current incentives

  • Reform requires changing the selective landscape, not blaming individuals

Science must actively rebalance its internal ecology so that:

  • rigor becomes adaptive

  • transparency becomes normal

  • quantity becomes less important than quality

  • mentorship focuses on good methods

  • early-career researchers aren’t forced into QRPs to survive

Only then will cultural evolution shift from selecting for questionable practices to selecting for robust, reliable, and replicable science.

Friday, February 6, 2026

Post 8: The Evolution of Fraud—Why Misaligned Incentives Make Cheating Inevitable

 Based on Smaldino & McElreath (2016), “The Natural Selection of Bad Science”


Introduction: Fraud Doesn’t Begin With Villains—It Begins With Incentives

When people imagine scientific fraud, they picture caricatures:
a rogue scientist faking data to become famous; a malicious figure cooking numbers for personal gain.

Reality is much more subtle—and much more disturbing.

Fraud evolves.

Just like biological traits arise in response to environmental pressures, fraudulent behaviors arise within scientific ecosystems because those behaviors confer competitive advantage under certain institutional conditions. Smaldino & McElreath’s paper argues that the same incentives that select for poor methodological rigor also select for increasingly bold forms of cheating.

This evolutionary view challenges the idea that individual misconduct is the root of the crisis. Instead:

Fraud is an adaptive response to misaligned incentives, not a personal flaw in an otherwise healthy system.

In this post, we explore:

  • How small methodological shortcuts evolve into systemic fraud

  • Why fraud emerges even among people with good intentions

  • Historical and modern examples of fraud evolution

  • How fraudulent strategies spread within academic lineages

  • Why policing fraud is so difficult in an ecosystem selecting for it

  • And critically: how we can redesign incentives to prevent fraud’s natural selection


1. The Continuum of Cheating: From Innocent Flexibility to Full Fabrication

Fraud rarely begins with outright forgery. It evolves gradually through a dangerous continuum:

Stage 1: Innocent flexibility

Researchers try multiple statistical models because “they want to understand the data better.”

Stage 2: Selective reporting

Negative results are dropped “because the journal won’t accept them.”

Stage 3: HARKing (Hypothesizing After Results are Known)

Researchers rewrite hypotheses post-hoc to match significant results.

Stage 4: Data massaging

Removal of outliers that “don’t make sense,”
or reclassifying categories to achieve significance.

Stage 5: Fabrication-lite

Inventing a few missing values, adjusting means slightly, or copying data points to “fix noise.”

Stage 6: Full data fabrication

Creating entire datasets from imagination.

The key insight is this:
At each step, competitive advantage increases while detection risk remains low.
Evolution always favors strategies with the highest payoff relative to cost.

Misaligned incentives—publish or perish, novelty over accuracy, prestige over honesty—act as selective pressures moving individuals along this continuum.


2. Why Good People Drift Toward Bad Science

People do not enter science as cheaters. They enter as idealists.
But as evolutionary biologists know, behavior adapts to the environment.

Three forces push researchers toward unethical behavior:


2.1 Selection for productivity over accuracy

A researcher who produces 12 papers a year—thanks to flexible methods or data manipulation—is more likely to get:

  • job offers

  • grants

  • tenure

  • speaking invitations

  • media attention

Meanwhile, the careful, slower researcher is deemed “less productive.”

This is pure Darwinian selection, not moral selection.


2.2 Lack of punishment mechanisms

In nature, cheating thrives when policing is absent.
In academia:

  • Fraud detection rates are extremely low

  • Replication rarely occurs

  • Retractions are rare and slow

  • Institutions protect successful researchers

  • Journals avoid scandals to protect their reputation

Low policing + high reward = the perfect conditions for cheating to thrive.


2.3 Cognitive dissonance and rationalization

Once minor cheating yields rewards, researchers begin to rationalize:

  • “Everyone does it.”

  • “The result is basically true.”

  • “This helps me survive until tenure.”

  • “The reviewers won’t understand anyway.”

  • “I know the effect is real—I just need cleaner numbers.”

This psychological lubricant allows unethical behavior to seem justified.


3. Fraud Evolves Because It Works

Smaldino & McElreath’s core argument is simple and devastating:

The system selects for those who succeed—not those who are honest.

Every generation of researchers learns from the successful.

And who is successful?

The ones who:

  • publish frequently

  • produce flashy claims

  • get into prestigious journals

  • secure big grants

  • attract media coverage

If these successes are achieved through questionable practices, then those practices become heritable—not genetically, but culturally, through lab training and mentoring.


4. Fraud Spreads Through Academic Lineages

Just as biological traits spread through reproduction, research practices spread through academic genealogy.

4.1 “Descendants” adopt their mentors’ strategies

If a PI produces statistically improbable results regularly, their trainees absorb:

  • their data methods

  • their publication strategies

  • their analysis shortcuts

  • their attitude toward p-values and significance

  • their tolerance for exaggeration

This creates academic “lineages” with distinct methodological cultures.

Evidence: David L. Stern & the “lineage effect”

Stern’s work showed that even fruit flies inherit certain behavior patterns culturally across generations.
Labs do too.

Good practices and bad practices propagate through lineages.

4.2 Fraud clusters geographically and institutionally

Just like infections spreading in populations, fraud patterns cluster:

  • similar manipulation techniques

  • same statistical artifacts

  • same impossible distributions

  • same writing styles

  • same figure duplications

These clusters reveal that fraud is not random—it is learned.


5. Historical Examples: Fraud Evolution in Action

Fraud is not new, but it is increasingly detected in clusters, consistent with evolutionary models.


5.1 The Cyril Burt IQ scandal

Cyril Burt published “twin studies” claiming extremely high heritability of intelligence.
Later, investigators found:

  • nonexistent coauthors

  • fabricated correlations

  • copied data patterns

For decades, he thrived.
His fraudulent work shaped education policies.
And his students inherited his methods.

This is classic evolutionary propagation: successful phenotype → expanded lineage.


5.2 Anil Potti (Duke University cancer genomics)

Potti published numerous high-impact cancer biomarker papers.
Later:

  • analyses showed fabricated patient data

  • bioinformatic methods were manipulated

  • clinical trials were influenced

His lab’s success created a generation of scientists trained on toxic practices.


5.3 Diederik Stapel (Social psychology)

Stapel produced extremely clean datasets that were “too perfect.”
His fraud persisted because:

  • he trained students with similar data expectations

  • his results matched reviewers’ theoretical biases

  • replication was rare

The ecosystem protected him.


5.4 Yoshinori Watanabe (Cell biology)

Watanabe’s lab was caught manipulating blots and fluorescence images.
Investigations revealed:

  • systemic training in visual data manipulation

  • multiple students involved

  • institutional reluctance to punish

Fraud had become a lab culture, not individual misconduct.


6. Why Fraud Thrives in the Current Scientific Ecosystem

Fraud spreads because the ecosystem selects for it.
Smaldino & McElreath highlight several systemic pressures:

6.1 Lack of replication removes constraints

Replication is the natural predator of fraud.
But when replication is rare, fraud proliferates.

6.2 High competition intensifies selective pressure

When survival depends on out-producing rivals:

  • statistical flexibility becomes adaptive

  • selective reporting becomes strategic

  • fabrication becomes tempting

This is akin to bacteria evolving antibiotic resistance under selective pressure.

6.3 Journals reward “too good to be true” results

Fraudsters know what reviewers want:

  • large effect sizes

  • perfect curves

  • clean p-values

  • dramatic conclusions

This mirrors sexual selection in nature: whatever trait is preferred, individuals evolve to exaggerate it.

6.4 Institutions protect high-performers

Universities benefit from:

  • prestige

  • funding

  • high-impact publications

  • media attention

They often resist investigating misconduct because the fraudster benefits the institution.

This is group-level incentive misalignment.


7. The Fragility of Policing Mechanisms

Unlike biological evolution, which often has built-in constraints, scientific culture has weak policing:

7.1 Journal peer review rarely checks raw data

Reviewers lack time, expertise, or access.

7.2 Retractions take years

Retraction Watch tracks retractions that took decades.

7.3 Whistleblowers face retaliation

Whistleblowing can destroy careers.

7.4 Detection methods lag behind fabrication techniques

For instance:

  • easy digital manipulation of images

  • generative AI for synthetic data

  • deep statistical obfuscation

  • complex bioinformatic pipelines

The result:
Fraud evolves faster than policing.


8. Can Fraud Ever Be Eliminated? Evolutionary Theory Says No—But It Can Be Minimized

In natural ecosystems, cheating strategies never disappear entirely.
But they can be controlled by making cheating:

  • less rewarding

  • more risky

  • more detectable

The same must be done in science.

8.1 Increase the cost of cheating

  • mandatory raw data and code availability

  • unblinded access to analysis pipelines

  • random replication audits

  • statistical anomaly detectors

  • funding agency spot checks

8.2 Reduce the rewards

  • prioritize quality over quantity

  • reward transparency

  • value incremental progress

  • shift journal prestige toward replicable work

8.3 Enhance policing

  • fast-track retractions

  • strong whistleblower protections

  • specialized forensic-statistics units

  • replication consortia to investigate suspicious papers

8.4 Change cultural expectations

The real transformation begins when lab culture shifts from “show me significance” to “show me validity.”


Conclusion: Fraud Is Not a Disease of Individuals—It Is an Evolutionary Outcome of the System

This is the most sobering conclusion of Smaldino & McElreath’s work:

Fraud is inevitable in a system that rewards fraudulent strategies.

Unless incentives change, the evolution of scientific misconduct will continue—and accelerate.

Fraud is not merely a failure of morality.
It is a failure of ecology.
A failure of institutional design.
A failure of evolutionary pressures.

We can restore integrity only by reshaping the selective landscape so that:

  • honesty becomes adaptive

  • replication becomes central

  • transparency becomes mandatory

  • quality becomes rewarded

Only then will the evolution of fraud slow—and perhaps stabilize at manageable levels.

Thursday, February 5, 2026

Post 7: Replication as the Immune System of Science—and Why It’s Failing

 Based on Smaldino & McElreath (2016), “The Natural Selection of Bad Science”


Introduction: When a Body Stops Fighting Infections

In biology, the immune system acts as the organism’s defense mechanism. It detects pathogens, neutralizes them, and remembers patterns to prevent future infections. A healthy immune system keeps the organism stable despite continuous exposure to new threats.

Science has its own immune system: replication.
Replication checks whether published findings are real or illusions created by noise, bias, or methodological sloppiness. It is one of the core pillars of scientific progress.

Yet today, the immune system of science is malfunctioning.
Replication rates are low. Replication studies are systematically discouraged. Large-scale replication projects expose entire fields where more than half the findings do not replicate. And instead of strengthening the scientific body, the ecosystem appears to be spiraling toward chronic autoimmune disorders and epidemics of unreliability.

In this post, we examine:

  • The biological analogy: What makes replication an immune system?

  • Why the “immune system” is suppressed by modern incentives.

  • What Smaldino & McElreath’s model reveals about the evolutionary decline of replication attempts.

  • Real-world replication crisis examples from psychology, cancer biology, neuroscience, and economics.

  • How academia ends up with “opportunistic infections.”

  • What an immune-restoration program for science might look like.


1. Replication as the Immune System: A Deep Analogy

The immune system’s key functions:

  1. Detect errors and invaders

  2. Neutralize harmful pathogens

  3. Maintain homeostasis

  4. Build long-term resilience

  5. Evolve and adapt as threats evolve

Replication in science performs precisely the same functions:

1.1 Detection

Independent researchers check:

  • Did the experiment produce the same effect size?

  • Was the result driven by noise?

  • Were the statistical methods sufficiently robust?

1.2 Neutralization

If a result fails replication:

  • journals may issue corrections

  • meta-analyses update effect sizes

  • failed ideas lose prominence

  • fraudulent or careless work gets exposed

1.3 Homeostasis

Replication maintains epistemic stability—the idea that science converges on truth over time.

1.4 Memory

Each replication teaches the field something:

  • which methods are reliable

  • which sample sizes are needed

  • what effect sizes are realistic

  • what pitfalls must be avoided

1.5 Evolution

Replication helps the field adapt by promoting better practices.

So why does the immune system seem to be failing?


2. The Immune Suppression: Pressures Against Replication

Smaldino & McElreath’s model shows that incentives suppress replication, making it rare, weak, and strategically unprofitable.

2.1 Replication is slow

A replication attempt may take:

  • months of careful method reconstruction

  • large sample sizes

  • precise controls

  • detailed statistical transparency

Meanwhile, original (and often weaker) studies can be completed faster.

2.2 Replication is low-status

In modern academia:

  • journals seldom publish replications

  • hiring committees value novelty

  • grants rarely fund confirmatory work

  • replication is seen as derivative or uncreative

In other words, replication is treated as menial labor, not scientific contribution.

2.3 Replication is risky

If you attempt to replicate another lab’s work:

  • you may antagonize senior scientists

  • you may be labeled confrontational

  • you may face pushback or retaliation

  • you may damage collaborative relationships

Few early-career researchers want to risk such conflicts.

2.4 Replication is costly

Unlike exploratory studies, replication requires:

  • larger sample sizes

  • stricter controls

  • more preregistration

  • more time investment

  • specialized skills in forensic-level methodology

Thus, replication is expensive but undervalued.


3. What the Smaldino & McElreath Model Shows

The model reveals a deadly evolutionary dynamic:

3.1 Labs with low rigor but high output are rewarded

They produce many “positive” findings—even if false.

3.2 Scientists with high rigor produce fewer papers

They lose in grant competitions and job markets.

3.3 Replication becomes too costly

As labs adopt weaker methods, replication attempts become:

  • harder

  • rarer

  • less successful

3.4 The success rate of replication falls over time

A direct prediction of the model:

As bad methods spread, replication rates collapse.

3.5 The ecosystem adapts to noise

The population of labs evolves toward:

  • smaller sample sizes

  • higher flexibility in analysis

  • greater p-hacking

  • lower reproducibility

In evolutionary terms:
The “species” of high-quality research goes extinct.


4. Real-World Evidence: The Replication Crisis Across Disciplines

Smaldino & McElreath wrote before many major replication reports came out. Yet their predictions match reality.

4.1 Psychology

The Open Science Collaboration (2015) attempted to replicate 100 published psychology findings.

Result?

  • Only 39% replicated.

  • Effect sizes were on average half the original.

  • Some foundational theories were undermined.

This was essentially the equivalent of screening an entire population and discovering widespread immune deficiency.


4.2 Cancer Biology

The Reproducibility Project: Cancer Biology attempted to replicate 50 high-profile papers.

Outcome so far:

  • Only 11% fully replicated.

  • Many results showed drastically reduced effects.

  • Some relied on materials or methods that labs refused to share.

Given that cancer biology drives billions in research funding, this is like discovering that most of the medical literature for a disease is unreliable.


4.3 Neuroscience

Button et al. (2013) demonstrated that median sample sizes in neuroscience are too small, creating “power failure” so severe that:

  • effect sizes are inflated

  • false-positive rates skyrocket

  • replication is nearly impossible

This is akin to a diagnostic test with 20% sensitivity being used as the gold standard.


4.4 Economics

The “Many Analysts” projects showed:

  • same dataset + same question

  • 120 analysis teams

  • wildly different answers

How can we replicate a result if analysts cannot even agree on the method?


4.5 Genomics & Biomedical Sciences

Ioannidis (2005) famously explained mathematically why most published findings are false.

Replication failures in genetics revealed:

  • missing heritability

  • misinterpreted associations

  • population structure artifacts

  • pervasive p-hacking in GWAS

  • difficulty reproducing basic gene-expression studies

Across disciplines, the story is the same:
Replication is sick, and the organism is weakening.


5. Opportunistic Infections: What Happens When Replication Fails

In medicine, when the immune system collapses, opportunistic pathogens thrive:

  • fungal infections

  • latent viruses

  • cancers

  • antibiotic-resistant bacteria

Academia shows similar symptoms.

5.1 Fraud spreads more easily

Fraudulent papers go unnoticed because nobody replicates them.

5.2 Noise becomes indistinguishable from signal

Low-powered studies create a fog of contradictory results.

5.3 Predatory journals explode

They take advantage of weak replication policing.

5.4 Entire fields diverge

Separate subfields evolve incompatible methodologies.

5.5 Incentive-driven false positives become dominant

The ecosystem becomes a breeding ground for low-quality but high-output “pathogens.”


6. Why the Immune System Fails: A Systemic Evolutionary Explanation

Smaldino & McElreath argue that replication declines because the system evolves toward lower rigor.

6.1 Replication costs increase

As methods weaken, replication becomes harder.

6.2 Novelty bias becomes stronger

Early-career researchers must publish flashy papers to survive.

6.3 Institutions mismeasure success

Counting papers instead of verifying impact.

6.4 Labs evolve toward quantity-maximizing strategies

This crowds out replication-focused labs.

6.5 Replication becomes a public good

Replication is like clean air: everyone benefits from it, but no individual gains much by contributing to it.

This is a classic game-theoretic tragedy of the commons.


7. How to Restore the Immune System: A Treatment Plan

Fixing replication is like rebuilding immune function.

7.1 Mandate data and code availability

A vaccine against method ambiguity.

7.2 Institute replication grants

Fund replication explicitly.

7.3 Create publication incentives for confirmatory work

Journal prestige should attach to quality, not novelty.

7.4 Registered reports as immune boosters

If a study is accepted before data collection, the incentive to p-hack evaporates.

7.5 Large-scale collaborative replications

Economies of scale reduce the cost barrier.

7.6 Penalize non-replicable labs

Introduce metrics for long-term reproducibility.

7.7 Teach statistical literacy rigorously

More immune cells = more protection.


Conclusion: Science Needs Its Immune System Back

Replication is not optional.
It is not secondary.
It is not an afterthought.

It is the immune system of science, required to detect, eliminate, and prevent the spread of false findings. But as the incentives of academia shift toward quantity, speed, and novelty, replication is increasingly suppressed—just as an immune system collapses under chronic stress or malnourishment.

Smaldino & McElreath’s evolutionary model demonstrates that this suppression is not an accident. It is the inevitable outcome of the selective pressures that dominate modern academia.

If we want science to be healthy, we must restore and strengthen the immune system. That means rebuilding replication as a mainstream, celebrated, well-funded, and high-prestige component of scientific practice.

Wednesday, February 4, 2026

Post 6 -- The Ecology of Modern Science: Competition, Cooperation, and Collapse

 Based on Smaldino & McElreath (2016), “The Natural Selection of Bad Science”


Introduction: Science as an Ecosystem—But a Degraded One

If you walk into a rainforest, you witness dynamic interactions: predator and prey, mutualism, competition, niche partitioning, evolutionary trade-offs. Ecology teaches us that systems adapt—but not always toward greater “goodness”. Sometimes they adapt toward survival shortcuts, parasitism, invasive dominance, or collapse.

Modern science behaves very much like such an ecosystem. This is the argument that sits at the heart of Smaldino & McElreath’s 2016 paper: research institutions do not select for truth-finding efficiency; they select for strategies that maximize professional survival, often at the cost of scientific integrity.

In this post, we step away from equations and instead interpret the paper through a broader ecological lens. We ask:

  • What “species” exist in the academic ecosystem?

  • What competition pressures distort adaptation?

  • Why does “cheating” (or corner-cutting) evolve so naturally?

  • How do these pressures produce runaway selection for low-quality research?

Let’s explore.


1. The Scientific Ecosystem: Who Lives Here?

Ecologists categorize organisms by roles—producers, consumers, decomposers. Science has its own functional guilds:

1.1 Explorers (slow, careful, high-quality)

These align most closely with the ideal of science:

  • thoughtful hypothesis construction

  • rigorous statistical reasoning

  • careful replication

  • incremental but robust discoveries

In the analogy, they are slow-growing trees—deep roots, solid wood, long lifespan.

1.2 Exploiters (fast, flashy, low-quality)

These labs or researchers produce:

  • many papers per year

  • flashy statistical significance

  • weakly designed experiments

  • exaggerated statements

  • irreproducible claims

Ecologically, they resemble invasive species—quick growth, low resource investment, rapid colonization.

1.3 Predators (journals, rankings, funders)

Predators shape prey behavior. Journals and funding agencies exert:

  • aggressive selection for novelty

  • “predatory” attention toward surprising results

  • pressure to publish frequently

  • biases toward positive results

They don’t literally “eat” scientists; they consume scientists’ time and energy, and they shape the incentives scientists respond to.

1.4 Scavengers (meta-analysts, critics, reformers)

They pick up the remains:

  • replication failures

  • systematic reviews of conflicted data

  • post-mortems of entire research fields

They recycle waste—an essential role, but one overwhelmed by the scale of what must be cleaned.

You can begin to see already why problems emerge: fast-growing invasive species outcompete slow-growing trees when the environment rewards speed over durability.


2. Environmental Pressures: The Selective Forces Distorting Science

In ecology, environmental pressures shape evolutionary direction. In academia, the environmental pressures include:

2.1 Publish-or-perish metrics

This is the strongest selection force.

  • Tenure depends on publication count.

  • Grants depend on publication count.

  • Promotions depend on publication count.

Slow, careful, thoughtful (but fewer) papers lose to fast, frequent, flashy output.

2.2 Journal prestige as habitat quality

Top journals function like patches of high-quality habitat with limited space. The individuals that reach them are often those who:

  • exaggerate novelty

  • optimize statistically for significance

  • oversell or overspeculate

Slow, cautious, nuanced research often cannot thrive in these patches.

2.3 Grant funding as a limiting resource

Like food scarcity in an ecosystem, scarce funding leads to:

  • fierce competition

  • favoritism for risky, sexy, newsworthy ideas

  • penalties for “boring” but necessary replication

2.4 Career bottlenecks: Postdoc → Faculty transition

This bottleneck creates evolutionary sweeps:

  • only the most prolific survive

  • survival probabilities depend on output speed

  • quality becomes less relevant

  • risk-taking (in the statistical sense) is rewarded

Together, these pressures create a landscape where invasive strategies thrive.


3. Evolutionarily Stable Strategies: Why Bad Practices Survive

In ecology, an evolutionarily stable strategy (ESS) arises when a strategy, once common, cannot be outcompeted by alternatives.

In modern academia, the ESS is distressingly simple:

Produce as many statistically significant, novel results as possible using minimal time per project.

This ESS is not in line with truth discovery. But once adopted widely, it is difficult to reverse because:

3.1 Slow science loses competitions

Careful labs never reach the publication numbers of fast labs. So they fail in grant competitions and hiring rounds.

3.2 Reputation becomes decoupled from truth

A lab that publishes 15 papers a year appears more “successful” than one that produces two carefully validated papers.

3.3 The ecosystem becomes “locked in”

When every institution measures success using the same metrics, every participant must adapt or perish. Even well-meaning, careful scientists are forced to play the game or risk extinction.


4. Ecological Collapse: What Happens When Bad Science Takes Over?

When an ecosystem is dominated by opportunistic invaders, you get collapse:

  • soil nutrient loss

  • biodiversity crashes

  • long-term resilience disappears

In science, the analogs are:

4.1 Replicability crisis

Field after field demonstrates:

  • low reproducibility

  • inflated effect sizes

  • contradictory results

  • entire literatures built on fragile foundations

4.2 Epistemic pollution

Low-quality publications accumulate like pollution:

  • meta-analyses become impossible

  • true effects are masked

  • pseudoscience gains legitimacy

  • real progress becomes slower

4.3 Career disillusionment and attrition

Talented scientists burn out when forced to compete on quantity rather than quality.

4.4 Loss of public trust

When the public sees contradictory findings, fraud scandals, and frequent retractions, trust erodes.

This is the scientific equivalent of ecological desertification—once the soil is lost, recovery is extremely hard.


5. Ecological Anecdotes That Mirror Academic Dysfunction

Anecdote 1: The cane toad (Australia)

Introduced to control beetles, the cane toad multiplied explosively and destabilized ecosystems. It adapted well to the incentives but generated harmful outcomes.

Academic parallel:
Inventing the “impact factor” was like introducing cane toads. It solved one problem but introduced many more.


Anecdote 2: The chestnut blight fungus (North America)

A fast-growing pathogen wiped out slow-growing, foundational species.

Academic parallel:
Fast-publication labs crowd out foundational, rigorous labs.


Anecdote 3: The tragedy of the commons

Each individual herder benefits from adding more cattle, but collectively they destroy the pasture.

Academic parallel:
Each scientist benefits individually from publishing more—even low-quality papers.
Collectively, academia becomes a wasteland of irreproducible findings.


6. The Paper’s Core Claim in Ecological Terms

To recast Smaldino & McElreath in ecological language:

The incentives of modern academia create a habitat where invasive, fast-replicating research strategies thrive, driving out slow, careful, high-quality science through natural selection.

This is not moral failure, individual laziness, or corruption.
It is ecological inevitability under the current environment.


7. Toward a Restoration Ecology of Science

If we think like restoration ecologists, what interventions help restore ecosystems?

7.1 Change the selective environment

  • reward replication

  • reward transparency

  • reward null results

  • reduce dependence on publication count

7.2 Diversify habitats

  • establish journals that value careful, long-term research

  • create grant categories for incremental or confirmatory work

7.3 Reintroduce apex predators

Predators regulate ecosystems. In science, the predators could be:

  • replicability audits

  • statistical screening tools

  • meta-analytic policing

  • data availability requirements

These would eat away at low-quality work.

7.4 Create refugia for slow science

Institutions like the IAS (Princeton) or EMBL partially serve this role by giving scientists time without pressure to produce.

7.5 Facilitate succession

Allow the ecosystem to shift toward more stable, long-lived scientific strategies.


Conclusion: Science Needs Ecological Thinking

The ecosystem analogy is powerful because it reframes the conversation away from blaming individuals and toward understanding systemic evolution.

In ecology, systems adapt to whatever pressures exist. If the pressures reward destructive behaviors, destructive organisms thrive.
The same is true in academia.

Smaldino & McElreath’s insight is that bad science is not an accident—it is the product of natural selection in a distorted environment.

To fix science, we must change the environment.