Journalism should take a zero-tolerance approach to publishing false or unverifiable claims
I often ask friends – when you read an article in mainstream media about a topic in which you have some expertise (about health if you are a doctor for example), how often do you notice that it contains incorrect information? The majority of answers fall at the frequently/very frequently end of the Likert scale we’d be looking at if I were polling rather than chatting. I’m not talking here about serpent-headed aliens, microchip-containing vaccines or stolen elections. But I am talking about mundane examples of misrepresentation through partial presentation of the facts and fabrication.
I give illustrations from the Guardian newspaper, not because it’s a major culprit but because it isn’t. If the problem is present even in the best, it’s present everywhere. I am a long-time reader of the Guardian and subscriber to its online edition. I value its balanced coverage and regard it as standing head and shoulders above all other daily newspapers in the UK for its reliability and lack of bias. But at times I am left wondering, even in this newspaper, about a particular piece – is this true? How would I know?
On the surface the examples I will give may seem like minor infringements, but unreliable reporting in any part of the paper can lead to lack of trust in the reporting of every part of it; and we are storing up trouble for the future if journalists following examples such as these come to believe that writing a good story takes precedence over writing an entirely accurate one. There is a fairly simple solution to the problem, but before considering it, a few examples.
An article in January 2023 described a survey which was said to have “…found that one in five LGBTQ+ people and more than a third of trans people in the UK have been subjected to attempted conversion…”. As part of an online survey, respondents were asked whether they had ever experienced someone taking any action (my italics) to try to change, cure or suppress their sexual orientation or gender identity. Describing the findings, the phrase “subjected to” appeared in the article headline, in the final sentence and three times in the text. There was no link to the survey report, but when I found one it revealed that the campaigning group that commissioned the survey has a particular take on what “subjected to” means.
“There must be no ‘consent’ loophole… Conversion practices are abuse and it is not possible to consent to abuse… The definition of conversion practices should include religious practices…”. So examples of what respondents were “subjected to” included “I saw a counsellor…” and “My partner ended our relationship because of God and then the people from church prayed for us to become straight.” For sure, there were quotes about much more unpleasant experiences but even there the reframing was unusual: being beaten up because you’re gay is wrong, but it’s a stretch to call it a conversion practice. There was no indication of a typology of practices or the prevalence of various practices – anything and everything goes towards the headline figure. This strikes me as a long way from what most people understand by the sort of conversion therapy that might be banned by legislation, but you wouldn’t know it from the way the survey was reported.
An article in June last year headed “Brain damage claim leads to new row over electroshock therapy” reported that electro-convulsive therapy (ECT) “…is now the focus of a huge row – which erupted last week – over claims that it can trigger brain damage, that guidelines covering its use are weak and that it is used disproportionately on women and the elderly.” Again there was no reported evidence of a huge row, just a link to a five-year-old Guardian article retailing the same criticisms from the same source as the 2022 article. The bust-up seems to have been imagined into life to act as a hook for an otherwise non-story.
Something from the pandemic. An article from January 2021 reported that the “Prince’s Trust happiness and confidence survey produces worst findings in its history”. Three accompanying comments linked the findings to the impact of the pandemic. The findings as reported were literally true (just), but a reading of the whole report gives quite a different picture. In 2021 just 56% of respondents said they were happy about, and 64% said they were confident about, their emotional health. Certainly the lowest on record, but the corresponding figures for 2018 were 57% and 65%. In 2021 56% said they were always or often anxious. Again the highest on record, but the figures for the preceding years 2018–2020 were 53%, 54% and 55%. The really big changes have come since 2010, when more than 70% said they were happy and confident about their emotional health and fewer than 20% said they felt anxious or depressed all or most of the time. So a study that shows a decade-long decline in the emotional health of young people is reframed as a story about the impact of the pandemic by the simple expedient of not reporting most of its findings.
In a piece from January this year promoting assisted dying and entitled “Today, 17 people will likely die in unimaginable pain…”, regular contributor Polly Toynbee writes, after a warm-up about torture chambers, excruciating pain, horror and humiliation, that “On average 17 people a day die in terrible pain that can’t be relieved by even the best palliative care.” The claim is based upon a review undertaken by the Office for Health Economics which, like the research it reviews, refers nowhere to the severity of pain but only to “unrelieved pain”; much of that pain, as anybody familiar with the clinical scenarios would recognise, will not match the descriptions offered. Toynbee’s account of unimaginable pain in end-of-life care comes, in fact, from her own imagination.
Much of this would be avoided if journalists put a bit more work in – didn’t just recycle press releases and did some of their own fact-checking, aided by basic critical appraisal skills. How would we know if they were doing that? Online encyclopedia Wikipedia, in facing its own questioning about reliability, has developed a policy it describes as Verifiability, not truth. “Verifiability” means that material must have been published previously by a reliable source, cited by the writer and consulted by them. Sources must be appropriate, must be used carefully, and must be balanced relative to other sources.
Citing reliable sources, with a clear statement that the journalist has consulted them, gives readers the chance to check for themselves that the most appropriate authorities have been used, and used well. In fact none of the four examples I give here would be compliant with such a policy. If respectable and respected mainstream media are to maintain their reputation for trustworthiness they need to demonstrate how they manage reliability in their reporting and not just assert that they do. An explicit, and explicitly followed, verifiability policy would be a good start.
An article in the Observer 26 June 2022, “Brain damage claims lead to new row over electroshock treatment”, by Science Editor Robin McKie, is typical of its type.
The first version displayed McKie’s dismaying ignorance of the difference between psychology and psychiatry, describing ECT as “one of the most dramatic treatments employed in modern psychology” and suggesting that its greater use in women is likely to indicate a bias on the part of psychologists. Somebody must have pointed out fairly soon after publication that ECT isn’t a treatment employed in modern psychology: it is a medical treatment administered under the auspices of psychiatrists and isn’t used at all by psychologists. Indeed, psychologists rarely work in the acute inpatient environments where most people with severe or psychotic depression are treated. At any rate, the online version was changed within the week.
The article aired Professor John Read’s well-known views on ECT (which he and almost nobody else refers to as electroshock), claiming that the treatment “…is now the focus of a huge row – which erupted last week – …”. I can find no evidence in the professional or mainstream media to support the existence of this row. In fact no evidence of it is provided in the article, which consists only of Read’s views linked to responses from two senior psychiatrists who were presumably invited to comment on them. It just seems to be made up as the excuse for re-hashing an old story.
There is a strong implication that something new has emerged to fuel this so-called row and indeed it is called a “new row” in the article’s headline. I can’t find any evidence that’s true – there is no mention of recent reports about ECT on the home pages of the Royal College of Psychiatrists, the British Psychological Society or the Care Quality Commission. And a search of Google Scholar reveals no new research to back up the claims made by Read. The one relevant link in the article (trailed as a “recent study”) takes us to a 5 year-old piece in the Guardian highlighting the observation that more women than men are given the treatment.
“We know it causes brain damage”, says Read, despite there being no consensus that this is true, before going on to make the bizarre claim that psychiatrists use ECT because they don’t know the difference between psychotic depression and loneliness or bereavement.
McKie seems not to have got around to asking Read a rather obvious question. If we are going to ban ECT completely then what are we going to do instead? Awaiting spontaneous improvement won’t do for somebody who isn’t eating or drinking; psychological therapy isn’t an option for somebody who can’t sustain a conversation; medication can help with delusions and hallucinations but it is not always effective. I’m guessing he has no idea what sort of depression is actually treated with ECT and didn’t try to find out. Why bother if you can write an article based entirely on recycling what other people say?
The only thing missing was a picture of Jack Nicholson in One Flew Over the Cuckoo’s Nest.
Discussions about the treatment of severe mental illness deserve better journalism than this.
To an impartial reader there was always something odd about Rosenhan’s famous paper “On being sane in insane places”. It didn’t read like a report of high-quality research of the sort you might expect in Science, and the enthusiasm with which it was received (it has been cited more than 4,000 times according to Google Scholar) seemed out of proportion to its form and content as a single-author, patchily written account of an un-replicated study involving eight subjects.
Susannah Cahalan’s book The Great Pretender goes further along the road of questioning the study, making the case that the whole thing was a fraud. She was unable to find any research records related to the reported results, and when she eventually traced two of the pseudo-patients they confirmed that they had not made the detailed observations reported in the paper and indeed hadn’t been asked to. One of the two was dropped from the final report, perhaps because his experiences were too positive, and yet in the rewritten final paper (now with 8 pseudo-patients) all the other numbers were identical to the earlier draft with 9 pseudo-patients. And most damningly, when the clinical records of Rosenhan’s own admission surfaced they showed that he had lied about what he told the admitting psychiatrist, in fact retailing fairly standard symptoms of schizophrenia.
No doubt the study proved so popular because it suited people to take it at face value. Psychiatry was in a dire state, with diagnosis almost meaningless and the abusive nature of inpatient care widely recognized. Cahalan quotes Chief Bromden: “It’s the truth even if it didn’t happen”. Rosenhan had provided a stick with which to beat the system. Cahalan reports that even Robert Spitzer knew some of the facts – he had apparently seen the notes of Rosenhan’s admitting psychiatrist but kept quiet because he was keen to force through a more standardised approach to diagnosis. Cahalan’s overall conclusion: “The messages were worthy; unfortunately the messenger was not”. But while apparently approving of the report’s function in highlighting the need for change in psychiatry, she recognizes the problems with the way the paper was used. It formed part of the rationale for an aggressive deinstitutionalisation which has had catastrophic effects on care of the severely mentally ill in the USA, and it helped feed the growth of the DSM behemoth.
The ambivalence of Cahalan’s conclusion is at odds with her book’s unequivocal title, and it started me thinking. If fraudulence and integrity exist in science, how shall we know them? My preliminary take is that researchers who misrepresent research fall into four categories, with blurry boundaries of course. It isn’t entirely clear that we know what to do about any of them.
The out-and-out fraudsters. Burt, Wakefield, and now Rosenhan are infamous. They just made stuff up. The list of members of this group is long and will grow. Of course we call out individual studies, but what of the individual academics? Is everything they ever researched to be negated? Lance Armstrong’s Tour de France wins are no longer acknowledged, but what of every other race he won?
The embellishers. Perhaps not everything they wrote is made up, but there is no great clarity about which bits were truth and which bits weren’t. To my mind the most under-acknowledged member of this class is Oliver Sacks. He was coy about it, but clear enough: “I mean, perhaps it’s a case that I seized on certain themes, imaginatively intensified, deepened, and generalized them. But still”. Or again: “I don’t tell lies, though I may invent the truth”.
And yet Sacks has never, to my knowledge, been called a fraud. Why not? Perhaps because some of the content of his cases would be familiar to many clinicians in the right specialties, and therefore everything he wrote clearly wasn’t made up. Even more so than for the out-and-outers, we can’t be entirely sure how much of the output of people in this group is trustworthy, and therefore what we should do about it.
The spin doctors. Here the behaviour is nothing like making up results; it’s not fraud. Most typically it involves secondary research, reviewing and synthesising the findings of others. This is a difficult skill to get right (see my fourth group) but the efforts of the members of this class are so wilfully wide of the mark, their failure to raise uncertainties or to consider biases so glaring, their findings so predictable from their pre-existing position on the question at hand, that you have to question motives rather than competence. These are academics who should know better. Heneghan’s approach to COVID transmission during the pandemic is one contemporary example. In my own field I place in this category Kinderman’s outrageous claim that there is no more evidence for the efficacy of ECT than there is for homeopathy. Here the answer seems to be attempted rebuttal rather than looking the other way, tempting though that is. Not that it’ll influence the people involved, but perhaps it’ll influence their standing in the academic and wider communities.
The academics who produce flawed evidence form a class that includes pretty much all the rest of us. We try but often get things wrong or present results in a biased way. A personal example. Some years ago I led a programme, part of which involved a case-control study exploring whether life stress might precede the onset of stroke. The answer looked like a tentative Yes and we published in a leading stroke journal. Much later I was approached to consider giving expert evidence in a case involving somebody who had suffered a stroke after a shocking event that was apparently a third party’s fault. My study was being cited. An expert for the defence had produced an eight-page critique pointing out the flaws in my study and all I could say to the plaintiff’s team was “fair cop”. The main answer here is rigorous peer review and an academic climate that encourages serious adult debate about uncertainty.
This is a muddy field. Lack of rigour merges into recklessness with the facts and that merges into complete disregard for the facts. There are parallels in the wider public discussion about fake news and online media. A tighter regulatory environment may be one answer but is unlikely to be achievable at scale and with sufficient meticulousness. We need therefore to have a better, that is more critical, approach to engaging with academic and related sources. Perhaps critical appraisal skills teaching needs to include skills in appraising authors as well as their outputs…