Here’s a rigorously detailed, start-to-finish playbook you can follow for any scholarly article. It’s designed to help you produce a review that’s incisive, fair, and genuinely useful to both the editor and the authors.
0) Before you accept
- Check fit & conflicts. Are you qualified on the topic/methods? Any conflicts of interest (financial, personal, competitive, prior collaborations)? If yes, decline.
- Skim metadata. Title, abstract, keywords, and cover letter. Confirm the scope fits the journal and your expertise.
- Timebox. Block two focused sessions (e.g., 60–90 min each) for the deep review, plus 30–45 min to write the report.
1) First pass (high-level triage)
Goal: understand the contribution and decide whether a deep dive is warranted.
- Read: abstract → intro first/last paragraphs → figures/tables → conclusion.
- Ask three big questions:
  - What is the claim (novelty/significance)?
  - Is the design plausibly able to test the claim?
  - Are the results clearly in line with the claim?
- Quick flags to note (don’t judge yet): missing controls, unclear sample sizes, weak baselines, ambiguous outcome measures, overclaiming, poor figure readability, ethics or data availability gaps.
- Decision: proceed to the deep dive, or recommend “out of scope”/“insufficient for journal” with constructive rationale.
2) Deep read (section-by-section audit)
Keep notes in a structured worksheet (see §11 template).
A) Title & Abstract
- Accuracy & specificity. Does the title reflect the main finding without hype? The abstract should state the question, method, key quantitative results (with effect sizes/CIs), and limitations.
B) Introduction/Background
- Why now? Is there a clear gap in the literature?
- Positioning. Are the most relevant, recent works cited (not just the authors’ own)? Are claims about prior art accurate?
- Testable objectives. Are hypotheses or research questions stated and operationalized?
C) Methods (reproducibility core)
- Design: Is the study design appropriate (RCT, cohort, case-control, experiment, simulation, qualitative, etc.)?
- Population/Sample: Inclusion/exclusion criteria, sampling frame, power/sample-size justification, preregistration (if applicable).
- Variables: Clear definitions of outcomes, predictors, covariates; measurement validity/reliability.
- Procedures/Interventions: Enough operational detail to replicate? Randomization, blinding, allocation concealment (if relevant).
- Data & Code: Availability statement, repository links, versioning, licenses; analysis scripts or pseudocode; computational environment (libraries/versions, seeds).
- Ethics: IRB/ethics approval, consent, animal welfare, data privacy, trial registration.
D) Statistical/Analytical checks (quick but sharp)
- Model appropriateness: Why this model? Are assumptions checked (normality, independence, linearity, proportional hazards, etc.)?
- Effect sizes & uncertainty: Reported alongside p-values? CIs/HDIs? Practical vs. statistical significance.
- Multiple testing: Corrections or a principled modeling approach? Pre-specified primary endpoints?
- Controls & confounders: Is confounding addressed? Sensitivity analyses?
- Robustness: Alternative specifications, outlier handling, missing-data strategy (MCAR/MAR/MNAR; imputation details).
- Validation: Train/validation/test splits, cross-validation, external validation; leakage avoidance.
- Visualization quality: Axes labeled, units shown, meaning of error bars stated (SD/SE/CI), readable legends, consistent color/scale, no 3D clutter.
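The effect-size-plus-CI check above is easy to verify directly. Here is a minimal sketch (all data and variable names are hypothetical) that computes Cohen's d with a normal-approximation 95% CI, useful for sanity-checking numbers a manuscript reports:

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d for two independent samples, with an approximate 95% CI."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    # Pooled standard deviation across both groups
    sp = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    d = (a.mean() - b.mean()) / sp
    # Approximate standard error of d, then a normal-theory 95% CI
    se = np.sqrt((na + nb) / (na * nb) + d**2 / (2 * (na + nb)))
    return d, (d - 1.96 * se, d + 1.96 * se)

# Simulated example: a true standardized difference of 0.5
rng = np.random.default_rng(0)
treated = rng.normal(0.5, 1.0, 40)
control = rng.normal(0.0, 1.0, 40)
d, ci = cohens_d(treated, control)
print(f"d = {d:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```

If a paper reports only a p-value for its primary comparison, a request in this shape ("please report d with a 95% CI") is concrete and feasible for the authors.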
E) Results
- Alignment: Do the results directly answer the stated questions/hypotheses?
- Clarity: Logical order; tables/figures referenced; key numeric values visible (not buried in the supplement).
- Consistency: Numbers consistent across text/tables/figures; denominators and units stable.
- Negative/null findings: Transparently reported?
- Replicability: Enough detail to reproduce key figures/metrics from the shared data/code?
F) Discussion & Conclusions
- Causality language: Claims match the design (avoid causal verbs for observational data, etc.).
- Limitations: Specific, not boilerplate; threats to validity discussed (internal/external/statistical/construct).
- Positioning: Comparison to the best prior baselines; incremental vs. substantial advance clearly framed.
- Future work: Concrete next steps; real-world or theoretical implications credible.
G) References & Reporting Standards
- Coverage: Balanced, not self-referentially narrow; recent relevant work included.
- Standards: CONSORT/PRISMA/STROBE/ARRIVE/CARE/SRQR/TRIPOD/MIAME/PRISMA-ScR/COREQ/etc., as applicable.
- Citation hygiene: Correct formatting; no padding.
3) Discipline-specific addenda (use what fits)
- Clinical trials: Registration ID, CONSORT flow diagram, protocol deviations, adverse events, allocation concealment/blinding, ITT vs. per-protocol, primary vs. secondary outcomes.
- Systematic reviews/meta-analyses: PRISMA, search strategy, inclusion criteria, risk-of-bias tools, heterogeneity (I²/τ²), small-study bias, pre-registration.
- Observational studies: STROBE; directed acyclic graphs (if used), confounding, selection/measurement bias, sensitivity (E-values, tipping point).
- Bench/omics: Blinding, replication (biological vs. technical), batch effects, QC, preregistered analysis if applicable, data deposition (GEO/SRA/PRIDE).
- ML/AI papers: Data provenance & licenses, train/val/test splits, leakage checks, baseline comparisons, ablations, calibration, fairness metrics, compute budget & carbon reporting, code+models released, reproducible seeds.
- Qualitative research: Methodology (phenomenology/grounded theory/ethnography), sampling & saturation, reflexivity, coding scheme, triangulation, member checking, SRQR/COREQ.
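For the "reproducible seeds" item on ML/AI papers, one quick check is whether the shared code pins all common sources of randomness. A minimal sketch of what such pinning looks like (framework-specific calls such as torch's are omitted and would be an assumption about the authors' stack):

```python
import os
import random

import numpy as np

def set_seed(seed: int = 42) -> None:
    """Pin the standard sources of randomness so reruns are comparable."""
    random.seed(seed)            # Python's built-in RNG
    np.random.seed(seed)         # NumPy's legacy global RNG
    os.environ["PYTHONHASHSEED"] = str(seed)  # hash randomization
    # Framework seeds (e.g., torch, tensorflow) would be set here as well.

set_seed(42)
first_run = np.random.rand(3)
set_seed(42)
second_run = np.random.rand(3)
assert np.allclose(first_run, second_run)  # identical across reruns
```

If the submitted code lacks anything equivalent, a reviewer can reasonably ask for seeds and environment versions as part of the reproducibility request.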
4) Decide your recommendation (for editor)
Align to journal bar; separate editor-only rationale from author-facing tone.
- Accept: Only tiny language/format fixes remain.
- Minor revision: The core is sound; clarifications/additional analyses are feasible without new data/experiments.
- Major revision: Potentially publishable but needs substantial analysis, new controls, or restructuring.
- Reject (or transfer): Flawed design for the claim, inadequate novelty for the journal, irreparable validity issues, or ethical/data availability problems. Offer transfer suggestions if appropriate.
5) Write the report (structure + tone)
Use clear headings; number comments and tag [Major] vs [Minor]. Keep paragraphs short.
A) Opening summary (1–2 short paragraphs)
- One sentence on what the paper does.
- One sentence on why it matters.
- Two to four bullets on strengths (novel dataset, rigorous design, strong baselines, elegant theory, etc.).
- One sentence on the overall assessment (publishable after X; or not suited to this journal).
Example opener:
This manuscript investigates [topic] by [method], aiming to test whether [hypothesis]. The work is timely given [context]. Strengths include [S1–S3]. However, I have several concerns regarding [design/statistics/interpretation], detailed below. I believe the paper could be suitable after major/minor revision.
B) Major comments (actionable, ranked by impact)
Each item: Issue → Why it matters → Specific, feasible fix.
- Bad: “Statistics are weak.”
- Good: “[Major] Power & effect sizes. The primary outcome shows p=0.049 without an a priori power analysis. Please report effect sizes with 95% CIs, provide a power calculation (or precision justification), and clarify whether the analysis plan was preregistered.”
C) Minor comments (clarity, presentation, small checks)
Style, figure readability, missing references, typos, unit consistency, legend clarity, data/code link formatting.
D) Confidential comments to the editor (optional but valuable)
- Fit with the journal and audience; novelty vs. the bar.
- Any undisclosed conflicts you suspect; overlap with other literature; ethical/data concerns.
- A crisp recommendation and risk profile (e.g., “sound but incremental,” “methodologically ambitious but under-validated”).
6) Polite phrasing bank (copy-ready)
- Constructive hedging: “The data appear consistent with…”, “Consider tempering claims of causality…”
- Actionable asks: “Please report [X] with [units/CI].”, “Add a sensitivity analysis varying [assumption].”
- Praise with specificity: “The [dataset/assay] is a notable strength, particularly the [feature].”
- Scope control: “This request is optional and intended to improve clarity rather than alter the study’s aims.”
7) Common red flags & how to handle them
- P-value fishing / undisclosed flexibility: Ask for preregistration, the full analysis plan, and correction for multiplicity; suggest moving exploratory results to the supplement.
- Data not available (without reason): Request an availability timeline or justification; suggest depositing de-identified data/code.
- Ambiguous outcomes/metrics: Ask for precise definitions and units; request operationalization details.
- Overclaiming: Quote the sentence and suggest neutral wording consistent with the design.
- Figure integrity concerns: Request high-res originals, raw image data for critical blots/micrographs, or image-processing details.
- Ethics gaps: Request IRB/animal approval numbers, consent wording, or anonymization details; if unresolved, flag to the editor confidentially.
8) Fast statistical sanity checklist (one-glance)
- Effect sizes + CIs reported for all primary outcomes.
- Assumptions checked; diagnostics plotted (residuals, collinearity VIFs, proportional-hazards tests).
- Multiple comparisons addressed.
- Missing data quantified; method justified.
- Robustness/sensitivity analyses present.
- Pre-specification vs. exploration clearly labeled.
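On the "multiple comparisons addressed" item, the gap between corrected and uncorrected results is often easy to demonstrate to authors. A minimal Benjamini–Hochberg FDR sketch (the p-values below are illustrative, not from any real paper):

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg FDR: boolean mask of hypotheses rejected at level alpha."""
    p = np.asarray(pvals, float)
    m = len(p)
    order = np.argsort(p)
    # Step-up thresholds alpha * k / m for the k-th smallest p-value
    thresholds = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])  # largest rank meeting its threshold
        reject[order[: k + 1]] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
# Unadjusted at alpha=0.05, five of these tests would "pass";
# after BH correction only the two smallest survive.
print(benjamini_hochberg(pvals))
```

Quoting a contrast like this makes the request ("please correct for multiplicity or pre-specify a primary endpoint") concrete rather than abstract.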
9) Figure & table audit (quick wins)
- Every figure answers a question; legends stand alone; axis labels and units present.
- Consistent sample sizes (n) across panels; error bars defined.
- Tables don’t duplicate figures; key numbers in the main text, not only the supplement.
- Color choices accessible; font sizes readable.
10) Ethical & reproducibility checklist
- Competing interests disclosed.
- Funding sources stated and their role clarified.
- Data/code availability with persistent identifiers (DOIs).
- Participant privacy protections described.
- Animal welfare and the 3Rs addressed (if applicable).
- For AI: dataset consent/licensing, bias/fairness discussion.
11) Reviewer worksheet (you can paste into your notes)
Paper ID/Title:
Claim in one sentence:
Primary contribution (method/data/theory/application):
Why it matters (2 bullets):
Key strengths (3 bullets):
Top risks/limitations (3 bullets):
Decision (provisional): Accept / Minor / Major / Reject
Major comments (numbered):
- Issue → Impact → Specific request.
- …
Minor comments (numbered):
- …
Standards to check: [CONSORT/PRISMA/STROBE/etc.]
Data/Code availability: Links & notes
Stats checklist: [tick boxes]
Ethics: Approval IDs / consent / privacy
Editor-only notes: Fit/priority/risks
12) Final polish before submission
- Prioritize. Lead with the most consequential issues; keep total major points to ~3–7.
- Be specific. Quote line numbers/figure panels where possible.
- Be respectful. Assume good faith; thank the authors for their contributions and transparency.
- One-page rule. Aim for ~¾–2 pages for most papers; longer only if necessary and structured.
- Self-check: If the authors did everything you asked, would the paper meet the journal’s bar?
Ready-to-use review template (copy, then customize)
Summary
- This manuscript [examines/introduces/tests] [topic] using [methods] to address [question].
- Strengths: [S1], [S2], [S3].
- Overall: I recommend [Accept/Minor/Major/Reject] because [one-sentence rationale].
Major comments
- [Short label] — Issue. What’s the concern, and why does it matter (validity/interpretation/reproducibility angle)? Request. A specific, feasible fix (analyses, clarifications, data/code, revised framing).
- …
- …
Minor comments
- Clarity: Please define [term] on first use.
- Figure readability: Increase font size in Fig. [X]; specify what error bars denote.
- References: Add [key recent work] to contextualize [claim].
- …
Data, code, and ethics
- Please provide [repository link/DOI]; include environment details (versions/seeds).
- Confirm [IRB/animal ethics/consent] details and data privacy steps.
Confidential comments to the editor (not shared with authors)
- Fit/priority:
- Risks/concerns:
- Recommendation: [Accept/Minor/Major/Reject], with key reasons.