In a recent lecture Chris Whitty, England’s chief medical officer, talked about how setbacks driven by misinformation can be temporary and how evidence and data can rebuild confidence. He was speaking at the London School of Hygiene and Tropical Medicine on the history of misinformation and health (doi:10.1136/bmj.r2338).1 Those sentiments might be applied to new research published this week—a rapid review that finds no support for Donald Trump’s sensational views on paracetamol (acetaminophen), known by the brand name Tylenol in the US. Paracetamol taken in pregnancy, claimed the US president, was responsible for the exceptional rise in autism diagnoses seen in recent decades (doi:10.1136/bmj.r2025).2
A team of international researchers has now examined the evidence on exactly this question. They judge that it comprises low quality or preliminary observational studies and that Trump’s claims don’t stack up. Their advice is that paracetamol should not be avoided in pregnancy (doi:10.1136/bmj-2025-088141; doi:10.1136/bmj.r2368).3 4 One of the interesting considerations is that studies claiming such a link may be “confounded” by indication (doi:10.1136/bmj.f6409).5 In other words, even if any increase in risk were real rather than speculative, the illness for which women take paracetamol in pregnancy might well be the cause rather than the drug itself.
But can Trump’s megaphone that misinformed the world be silenced by a carefully conducted study published in a medical journal? The odds appear slim. Part of the problem is that the world of medical publishing has a crisis of credibility. A system that once operated on the basis of transparency and trust (doi:10.1136/bmj.329.7472.0-g),6 and still relies on those principles, must now be bolstered by pre-submission integrity checks and content integrity teams. Journals are deluged with suspect research from every corner of the world.
Adding scrutiny
Like misinformation, misconduct is nothing new. But artificial intelligence and “paper mills” have made misconduct easier for authors to commit and, given the sheer volume of potential cases, harder for publishers to contend with. Our capacity to manage potential research misconduct is being overwhelmed. Automated pre-submission checks help but are far from infallible, which leaves responsible journals considering how best to bolster peer review, both before and after publication, and how to force greater transparency.
The BMJ publishes about 2% of the research submissions it receives. Around 85-90% of papers are rejected or transferred to other journals without peer review. The papers that are sent for peer review are scrutinised by editors, statisticians, peer reviewers, and external advisers. Despite all of this, we accept that we can’t verify every detail in any research paper, and we rely on the good faith of authors and their institutions. We also accept that we can get it wrong. The BMJ, like any other scientific journal, can never honestly claim otherwise. This is where transparency and trust are crucial but also have their limits.
On top of this we added a layer of post-publication scrutiny over 25 years ago: our rapid response service. This enables rapid critique of any article that we publish—although the critique itself must not be insulting, legally problematic, or factually incorrect. We urge readers and authors to use our rapid response service, as we’re unable to keep track of criticism that can appear in many places online or on social media—and we make no promise to do so.
Last year, to further increase our post-publication scrutiny, we introduced a policy of mandatory code sharing for all studies that we publish and mandatory data sharing for clinical trials (doi:10.1136/bmj.q324).7 Our policy is strict. We have already refused to publish some papers because the authors were unable or unwilling to share the data, as harsh as that might seem.
Data sharing
In one respect, our mandatory data sharing policy has worked. Post-publication scrutiny of a recent research paper, about stem cell injection to prevent heart failure after myocardial infarction (doi:10.1136/bmj-2024-083382),8 quickly provoked serious questions about the integrity of the underlying data. The authors say that an incorrect dataset was posted in error. We have published an expression of concern about this paper (doi:10.1136/bmj.r2388).9 Our content integrity team is investigating the wide-ranging issues brought to our attention that go beyond the data, and we will decide what further action to take once that investigation is complete. Due process is important and time-consuming, especially when working with institutions and regulators.
In another respect, our data sharing policy has failed. One purpose of mandatory data sharing was to act as a deterrent. That hope now seems misplaced. A reasonable conclusion from this episode is that transparency and data sharing are not, by themselves, enough to create trust. We’re therefore examining how to prevent a similar situation from arising again. It’s clear that greater scrutiny of the underlying data is required before publication. In all of this it’s also clear, and always has been, that this is a battle that journals can’t win on their own.
We can demand more of editors, but their skills generally lie in managing peer review and appraising research methods rather than in evaluating primary data. We could demand more of reviewers and statisticians, but how would they find the time? We do demand more of institutions, but they are conflicted and weighed down by arcane processes, even as we require greater responsibility from them. Perhaps we should work more closely with regulators, although the regulatory landscape is hard to navigate.
Might we, in a new departure, harness the talents of internet sleuths who are expert at identifying flawed data and corruption? Might we abandon peer review altogether (though it’s hard to see how that would help)? The vast majority of preprints, for example, are not peer reviewed by anybody before their submission to a journal. And what about the overwhelming majority of studies whose data are unavailable (doi:10.1136/bmj-2023-075767),10 when most authors refuse to share data even on reasonable request?
We’re in this together, and we welcome your ideas. The goal is to act in the best interests of the public and to devise more robust processes and new solutions, so that evidence and data can indeed rebuild confidence.
