Saturday, February 7, 2026

Post 9: The Cultural Evolution of Questionable Research Practices (QRPs)

 Based on Smaldino & McElreath (2016), “The Natural Selection of Bad Science”


Introduction: How Bad Habits Become Scientific Norms

When we talk about the problems of modern science, we often focus on individual errors:

  • p-hacking

  • selective reporting

  • HARKing

  • small sample sizes

  • forking analytical paths

  • file drawer bias

These are called Questionable Research Practices, or QRPs.

Most researchers agree these practices are harmful.
Most deny that they personally use them.
Most believe “the field” is plagued, but “my lab” is clean.

Yet QRPs continue to spread.

Why?
Because QRPs are culturally transmitted behaviors, shaped by selection pressures that operate not on truth, but on reward structures. Smaldino & McElreath argue that the modern academic environment selects for practices that maximize publication, visibility, and grant acquisition—not accuracy. Over time, these questionable practices evolve into cultural norms passed across labs and generations.

In this post, we explore:

  • Why QRPs evolve even when no one intends to cheat

  • How QRPs spread through academic “memes”

  • The role of mentorship, lab culture, and institutional norms

  • How statistical techniques become cultural artifacts

  • Case studies of QRP evolution in psychology, biomedicine, ecology, and genetics

  • Why QRPs often outperform good science in competitive environments

  • And finally: what cultural evolution tells us about reform


1. QRPs Are Not Just Bad Methods—They Are Adaptive Behaviors

Let’s start with the evolutionary concept:

Traits that increase reproductive success spread, regardless of whether they are beneficial to the population as a whole.

In academia, “reproductive success” means:

  • publishing more

  • getting more citations

  • landing more grants

  • securing prestigious jobs

  • achieving visibility

  • appearing productive

QRPs directly enhance these success metrics:

  • p-hacking increases the chance of significance

  • selective reporting maximizes narrative clarity

  • small sample sizes allow faster publication

  • exploratory methods masquerading as confirmatory work boost novelty

  • dropping null results saves time and reputational cost

Thus, QRPs are evolutionarily fit within the academic ecosystem.

Poor scientific methods are not anomalies—they're successful strategies under existing incentives.
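
The p-hacking claim above is easy to check numerically. Below is a minimal simulation (the sample size, alpha, and trial counts are arbitrary choices, not values from the paper): under a true null, a lab that tests several outcome variables and reports only the "best" one sees its false-positive rate climb well past 5%.

```python
# Sketch: multiple outcomes + selective reporting inflates false positives.
# All parameter values here are illustrative assumptions.
import math
import random

def two_sided_p(sample):
    """Two-sided p-value of a z-test for mean = 0 with known sigma = 1."""
    z = sum(sample) / math.sqrt(len(sample))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def false_positive_rate(k_outcomes, n=30, trials=4000, alpha=0.05, seed=1):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # k outcome variables, all pure noise: the null is true for each
        p_values = [two_sided_p([rng.gauss(0, 1) for _ in range(n)])
                    for _ in range(k_outcomes)]
        if min(p_values) < alpha:  # report only the smallest p-value
            hits += 1
    return hits / trials

print(false_positive_rate(1))  # close to the nominal 0.05
print(false_positive_rate(5))  # roughly 1 - 0.95**5, about 0.23
```

With five outcomes and a true null everywhere, roughly one run in four still delivers a "significant" finding to report, which is exactly the fitness advantage described above.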


2. The Meme Theory of Bad Science

Richard Dawkins introduced the idea of memes—cultural analogs of genes that replicate through imitation, communication, and institutional reinforcement.

QRPs are memes.

They spread by:

2.1 Apprenticeship

Students learn analysis strategies from mentors.
If a mentor says:

“Try different models until you get a significant one,”

this meme propagates.

2.2 Paper templates

Authors copy statistical approaches from top papers.
If high-impact journals publish flashy underpowered studies, these methods become templates.

2.3 Reviewer expectations

Reviewers reward certain statistical patterns (“clean” results).
Labs adapt to produce them.

2.4 Institutional norms

Funding agencies, departments, and hiring committees prefer volume and novelty.

2.5 Social networks and collaboration

Connected labs exchange methods, scripts, code, and heuristics—good and bad.

Thus, QRPs propagate not because scientists want to be unethical, but because:

They are culturally inherited solutions to survival in academia.


3. How QRPs Begin: From Legitimate Flexibility to Weaponized Flexibility

Every research process involves choices:

  • Which variables to include?

  • Which time windows to analyze?

  • Which transformations to apply?

  • Which outliers to remove?

  • Which statistical test to use?

  • Which covariates to consider?

This flexibility is natural and often necessary.

But under selective pressures, flexibility becomes weaponized:

3.1 Garden of forking paths

A researcher tries many analyses but reports only the “best.”

3.2 Optional stopping

Checking results as data accumulate, collecting more only while p > 0.05, and stopping the moment significance is reached.
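
This rule can be simulated directly. The sketch below (batch sizes and limits are invented for illustration) checks the p-value of a pure-noise "experiment" after every batch and stops at the first significant result:

```python
# Sketch of optional stopping under a true null.
# All parameter values here are illustrative assumptions.
import math
import random

def two_sided_p(sample):
    """Two-sided p-value of a z-test for mean = 0 with known sigma = 1."""
    z = sum(sample) / math.sqrt(len(sample))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def stopping_false_positive_rate(n_start=10, n_max=100, batch=10,
                                 trials=2000, alpha=0.05, seed=2):
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        data = [rng.gauss(0, 1) for _ in range(n_start)]
        while True:
            if two_sided_p(data) < alpha:  # stop and publish
                hits += 1
                break
            if len(data) >= n_max:         # give up (the file drawer)
                break
            data += [rng.gauss(0, 1) for _ in range(batch)]
    return hits / trials

print(stopping_false_positive_rate())  # well above the nominal 0.05
```

Even though the null is true throughout, stopping at the first significant look pushes the false-positive rate several times above the nominal 5%.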

3.3 Convenient exclusion of “problematic” participants

When “problematic” means “didn’t fit the hypothesis.”

3.4 Retrofitting hypotheses

Turning exploratory insights into a “predicted effect.”

None of these require malicious intent. They require only:

  • pressure

  • time scarcity

  • ambiguity in methods

  • conventional expectations

Smaldino & McElreath argue that ambiguity is the breeding ground for QRPs.


4. QRPs Spread Through Academic Lineages: Evidence from History

The cultural evolution of QRPs can be observed across disciplines.


4.1 Psychology: A Textbook Case

The pre-2010s psychological literature shows an entire culture shaped by:

  • tiny sample sizes

  • significance chasing

  • underpowered designs

  • flexible stopping rules

  • lack of preregistration

These QRPs were not isolated events—they were evolved norms, taught by senior researchers and rewarded by journals.

The famous Bem (2011) ESP paper—which claimed that people can predict future events—passed peer review because the methods matched the norms of the field.


4.2 Cancer biology: Image manipulation as a cultural practice

In cancer biology:

  • fluorescence images

  • Western blots

  • microscopy panels

were routinely “cleaned up.”

In the early 2000s, this was considered good practice.
Methods tutorials even explained how to “enhance band visibility.”

Over time, these became cultural norms—and only later did the field decide these were QRPs or outright misconduct.


4.3 Genomics: Bioinformatic flexibility

Before preregistration or strict pipelines, it was normal to:

  • choose thresholds post-hoc

  • use multiple assembly parameters

  • cherry-pick alignments

  • apply filters to “remove noise”

These became part of “lab culture,” not individual malpractice.


4.4 Ecology: The “significance hunt”

Ecological field studies often lack sample size control; thus:

  • significance testing became a ritual

  • p < 0.05 became the default “truth filter”

  • QRPs naturally evolved to meet this constraint


5. QRPs Become Cultural Norms Because They Carry Hidden Benefits

QRPs have advantages:

  • Faster publication = quicker CV growth

  • Cleaner narratives = easier acceptance

  • Better-looking results = higher-impact journals

  • More significant results = more citations

  • More grants = better resources

  • Less ambiguity = fewer reviewer complaints

Over time, QRPs become “the way science is done.”

This process mirrors cultural transmission in anthropology:
Practices that are rewarded persist.


6. Why QRPs Outcompete Good Science

Smaldino & McElreath’s model predicts an uncomfortable truth:

Labs using QRPs will consistently outperform ethical labs in output metrics.

6.1 Methodological rigor reduces quantity

Large sample sizes = fewer studies per year.

6.2 Transparency slows things down

Preregistration, data sharing, and detailed documentation add labor.

6.3 Statistical integrity reduces positive results

Rigorous analysis → fewer p < 0.05 findings → fewer publications.

6.4 Honest researchers are penalized

In grant review panels and job markets, quantity and novelty dominate.

Thus, ethical labs face extinction within the population of labs—unless the system actively protects them.
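
The extinction dynamic can be caricatured in a toy agent-based sketch. To be clear, this is not Smaldino & McElreath's actual model; every parameter below is an invented illustration of the same selection pressure: low-effort labs run more, sloppier studies, publication counts drive imitation, and rigor erodes.

```python
# Toy sketch of selection on lab "effort" (rigor). Not the paper's model;
# all numbers are invented for illustration.
import random

def evolve_mean_effort(n_labs=100, generations=50, seed=3):
    rng = random.Random(seed)
    efforts = [rng.random() for _ in range(n_labs)]  # 1.0 = maximally rigorous
    for _ in range(generations):
        payoffs = []
        for e in efforts:
            studies = 1 + int(10 * (1 - e))  # low effort -> many quick studies
            # low effort also raises the chance each study yields a
            # publishable "positive" result
            pubs = sum(rng.random() < 0.1 + 0.5 * (1 - e)
                       for _ in range(studies))
            payoffs.append(pubs)
        # new labs imitate the effort of whichever of two random labs
        # published more, plus a little mutation
        next_gen = []
        for _ in range(n_labs):
            a, b = rng.randrange(n_labs), rng.randrange(n_labs)
            winner = a if payoffs[a] >= payoffs[b] else b
            next_gen.append(min(1.0, max(0.0,
                efforts[winner] + rng.gauss(0, 0.02))))
        efforts = next_gen
    return sum(efforts) / n_labs

print(evolve_mean_effort())  # mean effort drifts toward zero
```

Because low-effort labs generate more publishable results per generation, imitation of high-output labs steadily drives mean effort toward zero, echoing the paper's point that selection acts on output metrics, not accuracy.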


7. Cultural Niche Construction: How Fields Shape Their Own Evolution

Smaldino & McElreath point to a key concept: niche construction.

Just as organisms modify their environment (like beavers building dams), scientific communities modify:

  • publication expectations

  • methodological norms

  • peer review criteria

  • statistical conventions

  • training curricula

  • grant requirements

These environmental modifications then shape the next generation of researchers.

Example: The rise of p < 0.05 as a ritual threshold

Originally introduced by Fisher as a rule of thumb, p < 0.05 became:

  • a universal criterion

  • a reviewer expectation

  • a hiring benchmark

  • a grant-writing norm

This cultural niche selected for QRPs that produce p < 0.05 reliably.

Thus:

The system becomes self-perpetuating.


8. The Institutionalization of QRPs: When Practices Gain Official Status

Some QRPs become institutionalized:

8.1 Software defaults

For decades, SPSS’s default settings steered users toward questionable analyses, and Excel’s automatic trendline fitting invites statistically unjustified models.

8.2 Reviewer norms

Reviewer #2 often demands significance.
This pressure selects for QRPs.

8.3 Journal expectations

Top journals prefer surprising results.
“Surprising” often requires flexible methods.

8.4 Grant success patterns

Funding committees reward bold claims.
QRPs help generate such claims.


9. QRPs Resist Change: Cultural Evolution Creates Inertia

Cultural evolution is resistant to reform because:

9.1 Practices feel natural

QRPs become invisible to those who grow up inside the system.

9.2 Senior scientists defend the status quo

Their reputations and past work depend on outdated methods.

9.3 Fields develop vested interests

Entire theories rest on QRP-dependent findings.

9.4 Institutions reward QRP-driven outcomes

Impact factors, grant income, and productivity metrics are built on shaky foundations.

9.5 Reformers face retaliation

Criticizing QRPs is framed as unprofessional or combative.

Thus, QRPs continue not because they are good methods, but because:

They are good strategies for survival within a maladaptive environment.


10. How Cultural Evolution Can Be Harnessed for Good

The mechanism that spreads QRPs can also spread good practices, if selective pressures change.

10.1 Preregistration becomes a norm

OSF and registered reports create cultural expectations of transparency.

10.2 Open data becomes mandatory

Younger labs increasingly default to open science.

10.3 Large-scale collaborative projects

Participatory replications teach students rigor.

10.4 Teaching meta-science

Universities are beginning to integrate meta-research into their curricula.

10.5 Funding agencies shifting values

NIH, NSF, and ERC now emphasize rigor and reproducibility.

10.6 Social incentives

Researchers on Twitter/X, Mastodon, and YouTube publicly critique bad practices, generating peer-driven accountability.

10.7 New journal models

  • eLife’s consultative review

  • PLOS ONE’s methodological criteria

  • Registered Reports at many journals

Cultural evolution can shift direction if the fitness landscape of academia changes.


Conclusion: QRPs Are Not Detours—They Are Outcomes of Cultural Evolution

QRPs are not deviations from scientific culture—they are products of it.
They flourish because the environmental conditions of academia reward them.

Smaldino & McElreath’s evolutionary lens reveals that:

  • QRPs spread because they increase academic fitness

  • QRPs become norms through cultural transmission

  • QRPs persist because institutions reinforce them

  • QRPs outcompete rigorous methods under current incentives

  • Reform requires changing the selective landscape, not blaming individuals

Science must actively rebalance its internal ecology so that:

  • rigor becomes adaptive

  • transparency becomes normal

  • quantity becomes less important than quality

  • mentorship focuses on good methods

  • early-career researchers aren’t forced into QRPs to survive

Only then will cultural evolution shift from selecting for questionable practices to selecting for robust, reliable, and replicable science.
