Fake news in academia
- May 14, 2021
To an impartial reader there was always something odd about Rosenhan's famous paper "On being sane in insane places". It didn't read like a report of high-quality research of the sort you might expect in Science, and the enthusiasm with which it was received (it has been cited >4000 times according to Google Scholar) seemed out of proportion to its form and content as a single-author, patchily written account of an unreplicated study involving 8 subjects.

Susannah Cahalan's book The Great Pretender goes further along the road of questioning the study, making the case that the whole thing was a fraud. She was unable to find any research records related to the reported results, and when she eventually traced two of the pseudo-patients they confirmed that they had not made the detailed observations reported in the paper and indeed hadn't been asked to. One of the two was dropped from the final report, perhaps because his experiences were too positive, and yet in the rewritten final paper (now with 8 pseudo-patients) all the other numbers were identical to the earlier draft with 9 pseudo-patients. And most damningly, when the clinical records of Rosenhan's own admission surfaced they showed that he had lied about what he told the admitting psychiatrist, in fact retailing fairly standard symptoms of schizophrenia.
No doubt the study proved so popular because it suited people to take it at face value. Psychiatry was in a dire state, with diagnosis almost meaningless and the abusive nature of inpatient care widely recognized. Cahalan quotes Chief Bromden: "It's the truth even if it didn't happen". Rosenhan had provided a stick with which to beat the system. Cahalan reports that even Robert Spitzer knew some of the facts – he had apparently seen the notes of Rosenhan's admitting psychiatrist but kept quiet because he was keen to force through a more standardised approach to diagnosis. Cahalan's overall conclusion: "The messages were worthy; unfortunately the messenger was not". Yet although she apparently approves of the report's function in highlighting the need for change in psychiatry, she recognizes the problems with the way the paper was used. It formed part of the rationale for an aggressive deinstitutionalisation which has had catastrophic effects on care of the severely mentally ill in the USA, and it helped feed the growth of the DSM behemoth.
The ambivalence of Cahalan's conclusion is at odds with her book's unequivocal title, and it started me thinking. If fraudulence and integrity exist in science, how shall we know them? My preliminary take is that researchers who misrepresent research fall into four categories, with blurry boundaries of course. It isn't entirely clear that we know what to do about any of them.
The out-and-out fraudsters. Burt, Wakefield, and now Rosenhan are infamous. They just made stuff up. The list of members of this group is long and will grow. Of course we call out individual studies, but what of the individual academics? Is everything they ever researched to be negated? Lance Armstrong's Tour de France wins are no longer acknowledged, but what of every other race he won?
The embellishers. Perhaps not everything they wrote is made up, but there is no great clarity about which bits were true and which weren't. To my mind the most under-acknowledged member of this class is Oliver Sacks. He was coy about it, but clear enough: "I mean, perhaps it's a case that I seized on certain themes, imaginatively intensified, deepened, and generalized them. But still". Or again: "I don't tell lies, though I may invent the truth".

And yet Sacks has never, to my knowledge, been called a fraud. Why not? Perhaps because some of the content of his cases would be familiar to many clinicians in the right specialties, and therefore everything he wrote clearly wasn’t made up. Even more so than for the out-and-outers, we can’t be entirely sure how much of the output of people in this group is trustworthy, and therefore what we should do about it.
The spin doctors. Here the behaviour is nothing like making up results; it's not fraud. Most typically it involves secondary research, reviewing and synthesising the findings of others. This is a difficult skill to get right (see my fourth group), but the efforts of the members of this class are so wilfully wide of the mark, their failure to raise uncertainties or to consider biases so glaring, their findings so predictable from their pre-existing position on the question at hand, that you have to question motives rather than competence. These are academics who should know better. Heneghan's approach to COVID transmission during the pandemic is one contemporary example. In my own field I would place Kinderman's outrageous claim that there is no more evidence for the efficacy of ECT than there is for that of homeopathy. Here the answer seems to be attempted rebuttal rather than looking the other way, tempting though that is. Not that it'll influence the people involved, but perhaps it'll influence their standing in the academic and wider communities.
The academics who produce flawed evidence. This class includes pretty much all the rest of us. We try but often get things wrong or present results in a biased way. A personal example: some years ago I led a programme, part of which involved a case-control study exploring whether life stress might precede the onset of stroke. The answer looked like a tentative Yes and we published in a leading stroke journal. Much later I was approached to consider giving expert evidence in a case involving somebody who had suffered a stroke after a shocking event that was apparently a third party's fault. My study was being cited. An expert for the defence had produced an eight-page critique pointing out the flaws in my study, and all I could say to the plaintiff's team was "fair cop". The main answer here is rigorous peer review and an academic climate that encourages serious adult debate about uncertainty.
This is a muddy field. Lack of rigour merges into recklessness with the facts, and that merges into complete disregard for the facts. There are parallels in the wider public discussion about fake news and online media. A tighter regulatory environment may be one answer but is unlikely to be achievable at scale and with sufficient meticulousness. We therefore need a better, that is more critical, approach to engaging with academic and related sources. Perhaps critical appraisal skills teaching needs to include skills in appraising authors as well as their outputs…