
Which disciplines are most vulnerable to publication bias?
MiroThinker
MiroMind Deep Analysis (10 sources, multi-cycle verification)
Deep Reasoning
Publication bias—systematic distortion of the literature because positive, novel, or statistically significant results are more likely to be published than null or negative findings—has been documented across many fields. Recent meta‑research and theoretical work suggest that vulnerability varies by discipline due to differences in norms around statistical significance, study design, power, and openness practices [2][4][7][10]. Understanding where this vulnerability is greatest helps prioritize reforms and interpret evidence cautiously.
Disciplines with High Vulnerability
1. Psychology and Behavioral Sciences
Evidence and mechanisms
Large‑scale analyses of publication outcomes show high rates of positive findings and selective reporting in psychology and related behavioral sciences, reflecting strong “publish or perish” incentives and historically heavy reliance on null‑hypothesis significance testing with p‑value thresholds [2][10].
Optional stopping, flexible analysis decisions (researcher degrees of freedom), and underpowered studies have been shown to inflate reported effect sizes in the literature; a 2025 paper on optional stopping quantifies how these practices, combined with publication bias, skew effect size distributions [10].
Replication crises in social and cognitive psychology highlight the downstream consequences: a non‑trivial fraction of high‑profile findings fail to replicate when large, pre‑registered studies are conducted.
Why vulnerability is high:
Strong incentives for novel, counter‑intuitive findings.
Historically weak norms around pre‑registration and data sharing (though this is improving).
Many small‑sample studies and flexible analytic pipelines.
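The effect of optional stopping described above can be illustrated with a minimal, stdlib-only simulation (all parameters here are illustrative, not taken from the cited paper): a researcher runs a one-sample z-test with the null hypothesis true, peeks after every small batch of observations, and stops as soon as the test crosses the conventional threshold. Repeated peeking inflates the false-positive rate well above the nominal 5%.

```python
import math
import random

def optional_stopping_fpr(n_sims=500, batch=10, max_n=100, z_crit=1.96, seed=1):
    """Simulate a one-sample z-test with the null hypothesis true
    (observations ~ N(0, 1)).  The 'researcher' peeks after every
    `batch` observations and stops as soon as |z| > z_crit.
    Returns the realized false-positive rate across simulations."""
    rng = random.Random(seed)
    false_positives = 0
    for _ in range(n_sims):
        total, n = 0.0, 0
        while n < max_n:
            total += sum(rng.gauss(0, 1) for _ in range(batch))
            n += batch
            z = total / math.sqrt(n)  # z-statistic when sigma is known to be 1
            if abs(z) > z_crit:
                false_positives += 1
                break
    return false_positives / n_sims

# With up to ten peeks, the realized error rate lands well above the nominal 5%.
print(optional_stopping_fpr())
```

A single fixed-sample test at the same threshold would hold the error rate near 5%; the inflation comes entirely from the repeated looks.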
2. Biomedical and Clinical Research
Evidence and mechanisms
Clinical and preclinical biomedical research often involves multiple endpoints, subgroup analyses, and large datasets, creating many opportunities for selective reporting and multiple testing without adequate correction [7][10].
Judicial and policy analyses warn that courts often rely on research with poor control for multiple testing and publication bias, leading to unreliable evidence in legal settings [7].
Observational studies and early‑phase clinical trials are particularly susceptible to positive‑outcome bias and suppression of negative or inconclusive results, which can distort meta‑analyses and clinical guidelines.
Why vulnerability is high:
High financial and reputational stakes, including industry sponsorship.
Complexity of datasets and analytic decisions.
Until recently, limited incentives to publish negative or “no effect” studies.
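The multiple-endpoints problem noted above follows from a textbook probability identity (not drawn from the cited sources): with k independent null endpoints each tested at level alpha, the chance of at least one false positive is 1 - (1 - alpha)^k.

```python
def familywise_error(alpha: float, k: int) -> float:
    """Probability of at least one false positive among k independent
    true-null endpoints, each tested at significance level alpha."""
    return 1 - (1 - alpha) ** k

# 10 independent endpoints at alpha = 0.05: ~40% chance of a spurious "finding".
p_uncorrected = familywise_error(0.05, 10)       # ≈ 0.401

# A Bonferroni correction caps the familywise rate near the nominal level.
p_bonferroni = familywise_error(0.05 / 10, 10)   # ≈ 0.049
```

Real trial endpoints are rarely fully independent, so the exact numbers differ, but the qualitative risk of uncorrected multiplicity remains.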
3. Preclinical animal research and basic life sciences
Animal studies and other preclinical experiments often involve small sample sizes and many potential outcome measures, yet statistically significant positive findings are more likely to be published and cited.
Historically, fewer requirements for trial registration or detailed methods reporting than in human clinical research, increasing the scope for selective reporting and non‑publication of null results.
Why vulnerability is high:
Limited pre‑registration and trial registries for animal studies.
Strong novelty bias in high‑impact journals.
Perception that “clean” positive results are needed to justify costly clinical follow‑up.
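A small simulation sketches the core mechanism behind the preclinical pattern above (all parameters are illustrative): when studies are small and only statistically significant estimates are "published", the surviving estimates average several times the true effect.

```python
import math
import random

def mean_published_effect(true_d=0.2, n_per_arm=20, sims=5000, seed=2):
    """Draw effect-size estimates for two-group studies with a small
    true standardized effect, 'publish' only the statistically
    significant ones, and return their mean."""
    rng = random.Random(seed)
    se = math.sqrt(2 / n_per_arm)          # approximate SE of a standardized mean difference
    published = []
    for _ in range(sims):
        d_hat = rng.gauss(true_d, se)      # sampling distribution of the estimate
        if abs(d_hat / se) > 1.96:         # the significance filter acts as a publication filter
            published.append(d_hat)
    return sum(published) / len(published)

# With a true effect of 0.2 and 20 animals per arm, the mean published
# estimate comes out far above 0.2.
print(mean_published_effect())
```

This is why meta-analyses built only on published small studies can badly overestimate effects even when every individual study was conducted honestly.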
4. Economics and some social sciences
Similar to psychology, parts of economics and other social sciences have relied heavily on statistical significance and multiple testing in large observational datasets (e.g., policy evaluations, econometrics).
The optional stopping and researcher flexibility issues identified in general meta‑research apply here too, with incentives for policy‑relevant, significant results [10].
Why vulnerability is moderate to high:
Strong reliance on p‑value thresholds and significance framing.
Historically less openness about full analytic pipelines, though data and code sharing are improving in some subfields.
Disciplines with Particular Structural Risks
Fields with rapidly evolving, high‑dimensional data (e.g., AI in medicine, digital health)
A 2026 analysis of new large language models in healthcare notes that large, complex datasets and multiple outcome measures make it easy to find apparently impressive results that are not robust, especially if negative findings are not published [8].
Without strong reporting and evaluation standards, there is a risk of “success stories” being overrepresented relative to failed or neutral deployments.
Fields dominated by predatory or low‑quality journals
A 2025 scoping review (discussed in guidance on predatory journals) notes that articles from predatory outlets can infiltrate systematic reviews and meta‑analyses, introducing bias because such journals may disproportionately publish poorly designed, unregistered, or selectively reported positive studies [3].
Disciplines with high pressure and less stringent journal gatekeeping (some applied sciences, complementary and alternative medicine, certain educational subfields) are particularly exposed.
Countervailing Trends and Discipline-Specific Nuances
Discipline‑specific ethics and questionable research practices (QRPs):
A 2026 survey of researchers’ attitudes toward research misconduct and questionable research practices finds that norms and perceived acceptability of behaviors like p‑hacking or selective reporting differ by discipline [5]. This means vulnerability is not uniform—even within a field—as subcommunities adopt different standards.
Open practices in arts, humanities, social sciences:
A 2026 report on open practices in arts, humanities, and social sciences highlights efforts to strengthen rigor and trust via open data, pre‑registration, and transparent methods [6]. These moves can reduce publication bias over time but are not yet universal.
LLMs and “failure worth publishing”:
A 2026 preprint argues that large language models may lower the cost of analyzing and writing up negative or null results, potentially making “failure worth publishing” and helping counter traditional publication bias by making such results easier to document and disseminate [2].
Implications
For evidence synthesis (systematic reviews, meta‑analyses):
Reviews in psychology, biomedical research, and other high‑risk fields must explicitly account for publication bias (e.g., using funnel plots, trim‑and‑fill, selection models) and be wary of incorporating low‑quality or predatory‑journal studies.
Courts and policymakers relying on scientific evidence should understand that published literature may overestimate effect sizes, especially in areas with strong incentives and complex datasets [7][10].
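As a sketch of one such diagnostic, Egger's regression test for funnel-plot asymmetry can be implemented in a few lines. This is a minimal version that returns only the intercept (a full implementation would also test the intercept for significance), and the example meta-analysis data below are hypothetical.

```python
def egger_intercept(effects, ses):
    """Egger's regression test for funnel-plot asymmetry: regress the
    standardized effect (effect / SE) on precision (1 / SE).  An
    intercept far from zero suggests small-study effects consistent
    with publication bias."""
    x = [1 / s for s in ses]                   # precision
    y = [e / s for e, s in zip(effects, ses)]  # standardized effect
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
             / sum((xi - mean_x) ** 2 for xi in x))
    return mean_y - slope * mean_x

# Hypothetical meta-analysis in which smaller studies (larger SEs)
# report larger effects: the intercept comes out clearly positive.
biased = egger_intercept([0.2, 0.3, 0.5, 0.8], [0.05, 0.1, 0.2, 0.4])
```

When effects are homogeneous across study sizes, the intercept sits near zero; the asymmetric pattern above pushes it well away from zero.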
For researchers:
In highly vulnerable disciplines, rigorous pre‑registration, data sharing, and publishing of null results are crucial to improve the reliability of the field and your own reputation.
Actively choosing reputable journals, transparently reporting all prespecified outcomes, and making all analyses (including null results) available in repositories can help shift norms.
For institutions and funders:
Targeted support for open practices and registered reports in the most affected fields can yield outsized benefits.
Encouraging or mandating the publication (or at least registry reporting) of all funded studies—regardless of outcome—reduces hidden negative findings.
MiroMind Reasoning Summary
I combined broad meta‑research on publication bias and optional stopping with discipline‑specific discussions of questionable research practices, predatory journals, and the reliability of research used in courts. These converge on psychology/behavioral sciences and biomedical/clinical research as especially vulnerable due to a mix of statistical practices, incentives, and historically weaker openness norms, with significant vulnerability also in preclinical and some social science domains. Recent moves toward open practices and the potential for LLMs to make “failures” easier to publish suggest partial mitigation, but current evidence still supports classifying these disciplines as most at risk.
Deep Research: 6 reasoning steps
Verification: 3 cycles cross-checked
Confidence level: High
MiroMind Verification Process
1. Identified meta-research and theoretical work quantifying publication bias across disciplines. (Verified)
2. Reviewed discipline-specific discussions of questionable practices and ethics norms. (Verified)
3. Considered evidence on predatory publishing and its influence on meta-analyses. (Verified)
4. Incorporated analyses of how unreliable research affects courts and policy. (Verified)
5. Evaluated emerging factors (LLMs, open practices) that could mitigate or exacerbate bias. (Verified)
6. Synthesized to identify disciplines with the highest current vulnerability. (Verified)
Sources
[1] a framework for understanding the complexity of misinformation use (Vulnerability and Value). Nature, 2026-05-04. https://www.nature.com/articles/s44260-026-00079-x
[2] LLMs Have Made Failure Worth Publishing. arXiv, 2026-04-04. https://arxiv.org/html/2604.06236v1
[3] Predatory Journals in 2026: Red Flags, Risks and Safe Choices. Thesify AI Blog, 2025-12-01. https://www.thesify.ai/blog/predatory-journals-2026-guide
[4] Gender differences in submission behavior exacerbate publication bias. PNAS (PMC article), 2026-02-20. https://pmc.ncbi.nlm.nih.gov/articles/PMC12923207/
[5] Is research ethics discipline-specific? A survey of researchers' and students' attitudes. Research Policy, 2026. https://www.sciencedirect.com/science/article/pii/S0048733326000260
[6] Launch of a major new report on open practices that strengthen rigour and trust in arts, humanities and social sciences. UK Reproducibility Network, 2026-03-11. https://www.ukrn.org/2026/03/11/launch-of-a-major-new-report-on-open-practices-that-strengthen-rigour-and-trust-in-arts-humanities-and-social-sciences/
[7] When Courts Rely on Unreliable Science. National Association of Scholars, 2026-04-03. https://www.nas.org/report/review-of-federal-judicial-center/full-report/
[8] New model, old risks: sociodemographic bias and adversarial hallucinations vulnerability in GPT-5. npj Digital Medicine, 2026-04-04. https://www.nature.com/articles/s41746-026-02584-8
[9] Top AI ethics and policy issues of 2025 and what to expect in 2026. AIhub, 2026-03-04. https://aihub.org/2026/03/04/top-ai-ethics-and-policy-issues-of-2025-and-what-to-expect-in-2026/
[10] The Consequences of Optional Stopping on the Research Literature. Collabra: Psychology, 2025-09-12. https://online.ucpress.edu/collabra/article/11/1/143711/213260/The-Consequences-of-Optional-Stopping-on-the