What exactly is empathy in clinical practice?

  • February 19, 2024

In a recent British Journal of Psychiatry editorial summarising key points from the latest NICE guideline on the management of self-harm, the authors led with a section headed “The need for empathy” – a rallying call that seems, on the face of it, unproblematic. Empathy is, however, underspecified in their article (as it so often is in psychiatry), both in terms of its defining features and in relation to what exactly the interventions should be to ensure it happens.

Interestingly the word empathy appears for the first time only in the supplement to the OED. It has gained currency since then but with a rather blurry feel-good meaning. The OED defines empathy as: “The power of entering into the experience of…emotions outside ourselves”. An early use referred to the experience of a work of art, the ability to “feel oneself into it”. These definitions too are rather hard to grasp but they suggest a state that it is unrealistic to expect a clinician to achieve – especially in relation to somebody met only briefly and in unusual circumstances, whose emotions and responses to them are likely to arise from experiences well outside the clinician’s own.

Some people claim that they can teach healthcare professionals how to be empathic. What it seems to boil down to is usually something like being attuned to implicit or non-verbal expressions of emotion, and acknowledging and responding to them. This of course isn’t empathy but a sort of social practice. For that reason the more sophisticated versions of acknowledging and responding to patients have been likened to method acting.

Instead of continuing with this (frankly rather pretentious) focus on empathy, there are some less nebulous aspects of sensitive and non-aversive care that can be taught.

First is courtesy and professionalism. These, unlike empathy, can be taught and supervised by attention to behaviour – how to introduce yourself, asking how the patient wants to be addressed, paying attention to privacy and confidentiality, and so on. And if that fails, there is a question about lack of professionalism, for which there are other remedies.

Second is being well-informed about causes and consequences. My own experience suggests that many clinicians are not au fait with what is now known about reasons for self-harm – in the sense of what its functions might be. One indication is the frequency with which discussions centre around diagnosis, which is, except in a minority of cases, of no help in understanding what is going on. Another is the persistence of stereotypes about self-injury. It is difficult to undertake a sensitive and meaningful psychosocial assessment if you don’t know what you’re looking for. I wouldn’t downplay the importance of person-centred care or the value of service user involvement in training, but clinically oriented postgraduate education also needs to develop in this space. This sounds too fact-based to have anything to do with “empathy”, but then I have always thought that a better word than empathy is sympathy – the sense of feeling onside with somebody that comes from a shared understanding of the situation. And how to elicit that shared understanding can be taught.

Third is the question of competence in practice and here there is a real challenge. Hardly anybody provides comprehensive psychological or psychosocial treatment services in the post-acute period – not in liaison psychiatry where most acute presentations are seen, nor in clinical psychology, nor in CMHTs. Good management involves, for sure, a professional attitude and behaviour and sound knowledge both about therapies and about the specific problem being tackled – but also the generic (transferable) skills, behavioural and emotional repertoire and expertise that come from experience. How can we enhance care in this area if we don’t provide the services within which all this can be developed?


If we are to improve how we treat people we need to go beyond rather general appeals to good practice. We need to develop self-harm services and to specify the curriculum for education and supervised training that will develop those working in such services to act professionally and sympathetically, as well as with a competence derived from education and from experience in practice.  

Social media harms: Mark Zuckerberg’s evidence-based practice

  • February 6, 2024

At the recent Senate judiciary hearing on “Big Tech and the Online Child Sexual Exploitation Crisis” Meta CEO Mark Zuckerberg said to parents present: “I’m sorry for everything you’ve all gone through, it’s terrible… No one should go through the things that your families have suffered and this is why we invest so much and we are going to continue doing industry-wide efforts to make sure no one has to go through the things your families have had to suffer.” This statement to an unimpressed-looking audience was widely, and perhaps rather generously, reported as an apology (“a regretful acknowledgement of an offence or failure”) and it received a great deal of press coverage.

Less heavily reported, but to my mind much more surprising, is something else…

At the start of the hearing, Zuckerberg said, “The existing body of scientific work has not shown a causal link between using social media and young people having worse mental health.  I think it’s important to look at the science, I know people widely talked about this as if that is something that’s already been proven, and I think that the bulk of the scientific evidence does not support that.”

On the face of it this statement is a flat contradiction of much of what has been taken as read in recent public debates about harms from social media. But…Zuckerberg’s remarks are, like his apology, carefully worded. In particular the second of his assertions isn’t wildly out. It is true that the bulk of evidence doesn’t support a causal link because it is cross-sectional and shows only associations. And it is also true that some of the case has been overstated – for example there is little robust evidence that social media “cause” depressive disorders or suicidal thinking in people not already struggling with those problems.

On the other hand, the first assertion seems shakier. It claims, more sweepingly, that there is no evidence across the whole area of mental health. And yet any reasonable conclusion based upon what we know about how the online world works (some of it reviewed by experts in our recent book on social media and mental health) is that harms are likely to flow from addictive patterns of social media use, from online bullying and harassment, from malicious use of personally-shared photographs, from solicitation of contact by predatory adults, and the like.

Unfortunately a “reasonable conclusion based upon what we know about how the online world works” is not the same as rock-solid proof that is going to stand up against hotshot lawyers from Team Zuckerberg arguing about what “shown a causal link” and “worse mental health” mean.

I can’t help thinking that fulminating about the bad faith of the tech CEOs is about as productive as grumbling about the ignorance and bias of academic peer reviewers. There may be political and legal avenues to follow in pursuit of a safer online world, but for researchers the next step must be to revisit how we undertake studies in a methodologically and logistically challenging area: what exactly is the research we need to do that will produce irrefutable proof that certain sorts of remediable social media experiences are bad for the mental health of those experiencing them? And perhaps as importantly – how can we explain the findings to politicians and legislators?

Marketing medical assistance in dying and the privileging of personal choice

  • November 29, 2023

One of the defining features of a functioning state is usually taken to be that the state holds a monopoly of violence – especially violence towards its own citizens. There are, however, a number of situations in which the state sanctions killing, or at least does not punish it. Putting aside the activities of the armed forces and police, the other common situations fall under the umbrella of what might (surprisingly) be thought of as public health. They include, in jurisdictions where they are allowed (not criminalised): termination of pregnancy, suicide and euthanasia.

I discern a number of influences at play when decisions are being made about the exact circumstances under which the state will allow ending of life. There may be others, but as a starter my list is:

  • The degree to which the life under consideration is regarded, by the person living it, as so burdensome as to no longer be worth living;
  • The social value of the life under consideration – that is, how others view the individual per se or as a result of any condition from which they suffer;
  • The practical, technical or emotional difficulties inherent in the process of ending the life – what we might call barriers to implementation;
  • The degree to which differences of opinion have to be taken into consideration in arriving at a final decision.

These sorts of ideas are encapsulated in enabling legislation, with each, of course, being given different weight according to the exact circumstances.

I was recently listening to a talk about medical assistance in dying (MAID) given by the historian Kevin Yuill during a meeting in Jersey. Yuill introduced me to another angle on this, which is thinking about the motivational influences: not how the decision is made but something like why it is being made at all. Yuill suggests that in the history of euthanasia and assisted suicide, the main motivating factors have been ideas about utility and compassion. Utility has featured most brutally in pro-euthanasia and eugenicist ideas about “ballast existence”, and while compassion could be instrumentalised in support of these endeavours, it also has a more altruistic side. More recently, however, an appeal to autonomy has come to dominate: euthanasia and suicide (assisted or not) are argued to be, in the right circumstances, individual choices which should not be blocked by authoritarian government actions.

This proposition struck a chord – when I speak with friends and colleagues about MAID, the response of those who support it is that it is a personal choice that they would like to be able to make for themselves if they were in a position where it was relevant. If they have thought further, it is only to consider briefly (and dismiss as avoidable) the slippery slope argument.

And the appeal to autonomy describes neatly how the campaign for medical assistance in dying (which in the UK means physician-assisted suicide) is being framed. What is going on in the public debate about MAID legislation is not that supporters are contributing to a dispassionate assessment of the pros and cons of a momentous change in the law – with potential direct and indirect harms being considered as well as potential individual benefits. Instead the idea is promoted (marketed) that MAID is evidently something people will want to choose and that preventing that choice is wrong. The emotive case histories/horror stories, celebrity endorsement and misrepresentation of the alternatives as the quiet dignity of assisted suicide versus giving oneself up to the horrors of end-of-life or palliative care, are all designed to arouse anger about a right denied.

When the exercise of personal choice becomes the paramount consideration we are in a society where the individual’s wishes overshadow questions of the public good. Now how could that not strike a chord…

The Online Safety Bill is supposed to protect young people with mental health problems: how will we judge if it has any effect?

  • October 19, 2023

After a long public and political debate about what form legal regulation of social media should take, the UK’s Online Safety Bill (2023) has passed into law. One of its highly-publicised aims is to protect young people from harmful exposure to content likely to lead to lowering of mood and an increased risk of self-harm and perhaps suicide. Now that we have moved to the stage of implementing the measures outlined in the Bill, how will we know if it is achieving its aim of reducing severe mental health harms to young people?

Our research and that of others, published in a multi-author book this month, suggests that the answer to this question will not be easy to establish. Preoccupation with the need to suppress harmful content has not led to great precision in the definition of what constitutes harmfulness, or of what we can think of as the social in social media – including the ways in which social media are used and by whom. Little attention has been paid to the problem of unintended consequences, and especially the possibility that regulation might lead to loss of positive aspects of social media use. And we are unclear what measures of outcome will be feasible.

In the early years of this debate we were working with a doctoral student whose thesis involved an analysis of more than 600 images posted on social media with a tag that included self-harm. Our student’s findings suggested a more interesting, in some ways surprising and more complicated picture than was reflected in the public debate. While communication of distress was common, so were stories of recovery, and many of the associated comments were encouraging and supportive. The posts identified were by no means restricted to explicit discussion of self-harm, and in more than half of them the accompanying image did not represent self-harm directly – labelled with the self-harm tag were discussions of a range of other topics, including the nature of gender and the female body and concerns about identity and belonging. Even when tagged as “self-harm”, the space was being used to discuss these other matters of emotional concern to young people.

We decided to follow this single study with a review of the research literature to explore the issues further. The review was undertaken on behalf of the mental health charity Samaritans and explored the relation between social media use and mental health, and in particular the effect of accessing content about self-harm and suicide. We found that the nature of this content was diverse. There was content that would universally be considered harmful, such as detailed description or video streaming of methods and active, explicit encouragement to act. However, there was little evidence that much of the content in isolation could be considered unambiguously harmful.

When looking at outcomes of exposure to self-harm and suicide content, we found that previous research studies have indeed identified negative consequences – the reliving of distressing personal experiences, a sense of pressure to present oneself in certain ways or to offer help to others when one was not in a position to do so, sometimes a stimulus to further, perhaps more severe, self-harm. But research also identified positive aspects of the social media experience – a feeling of reduced isolation and support from a community of people sharing similar experiences in a non-judgmental way, the opportunity to achieve some self-understanding through recounting personal experience online, and for some people access to practical advice such as details of helping agencies or guidance on hiding scars.

It was also clear that an important influence on outcomes was not just the content of social media but the way in which they were being used – such as the intensity of interaction with other posts and the amount of time spent online, and the interactions with and reactions from others to content posted. At least as important as harmful content is whether social media use leads to connection but to an unhelpful online community, to trying for connection but failing to find a community with which to identify, to being harangued for sharing experiences, or to asking for help that isn’t forthcoming. It is unclear how such experiences could be regulated or their effects mitigated except by the individual online.

There are formidable challenges in researching this area, not least that social media are valued by many people because of their anonymity, and it is difficult to apply high-quality research methods to unbiased samples. For this reason we decided that it would also be valuable to gain a wider understanding of expert opinion across this field. In other words, we wanted to know if there is a consensus among experts studying the relation between social media use and mental health about what can and cannot be considered harmful, and what would be the most desirable responses to this relatively new feature of the social landscape. We approached academics known for their interest in the area, and the result is the multi-author book edited by us – Social Media and Mental Health, published by Cambridge University Press.

Some of the issues raised include not just the content of postings but the great diversity in who accesses or posts, how they use social media and how they respond to specific content: outcomes cannot readily be attributed either to content alone or to the person alone – they are likely to arise from the interaction between content, person and context. While a central issue is algorithmic pushing which increases duration and intensity of exposure, there remains no specific definition of degree and type of exposure when it comes to this social aspect of a regulatory framework.

Another aspect of social media use that was under-explored in earlier public debate about the Online Safety Bill was the role of social media as a source of positive help. At the time of our own review into online resources for self-harm, we found that most sites were extremely limited in what they offered as practical help to people seeking it. Positive resources need to move beyond encouragement to take care and to seek professional help. What our contributors describe is their involvement in programmes of work that serve as a pointer to the next generation of online resources – developed on sound theoretical grounds and principles of practice and involving young people in determining format and content.

In addition to these challenges in monitoring the form and content of online experience, there is a question of how to assess outcomes. Rates of distress, of self-harm or of suicide in young people are likely to fluctuate, but how would we know if any improvement could be attributed to the recent legislation? An associated reduction in accessing certain social media content might be taken as evidence, but correlation is not proof of causation and there are other interpretations of such an observation.

For all these reasons, we are left uncertain whether it will prove possible to evaluate the effect of the Online Safety Bill on the mental health of young people. That is, in terms of processes, whether we will be able to identify changes that incontrovertibly represent reduction in harmful content and harmful types of social media use, that do not have the unintended consequence of reducing access to helpful online interactions, and that increase the availability of genuinely helpful resources. And in terms of outcomes, to identify changes in rates of mood disturbance, self-harm or suicide that can be attributed to the effects of legislation.

If you only read one book about identity, class and gender…is this the one?

  • August 28, 2023

In 1996 the bookshop chain Waterstones launched a poll of the reading public asking for views on the greatest books of the 20th century. They published a list of the top 100 as a pamphlet, The Books of the Century, invited Germaine Greer to review them for the house magazine and offered customers the chance to buy titles from the list at four for the price of three. The Waterstones survey was based on an idea from the New York Public Library’s Books of the Century, a list produced in 1995, and perhaps not surprisingly the two lists shared 50 titles.

Since 1996 a steady trickle of similar Lit Lists has come to my attention. In 1999 the French retailer FNAC collaborated with Le Monde in a survey that asked the question “Quels livres sont restés dans vos mémoires?” (“Which books have stayed in your memory?”) and published a list of the top 100 (this book-listing exercise seems pretty much always to produce a list of 100). Of the 16 lists I have before me the BBC tops the charts with four, starting with The Big Read Top 100 and now 100 Books Everyone Should Read, the 50 Greatest Books Of All Time and the top 100 Books You Need To Read Before You Die. Newspapers and periodicals like the genre: Time Magazine’s All Time 100 Novels is joined by offerings from the Guardian, the Daily Telegraph, the Times Educational Supplement and Reader’s Digest.

Who gets to choose? Crowd-sourcing has been popular: typical is the Daily Telegraph’s poll of its readers to suggest a top 100 books for World Book Day 2007 and the Modern Library’s request that its readers nominate both the 100 best books and the 100 best novels. Experts aren’t as disdained as you might imagine: the Norwegian Book Clubs asked 100 noted writers from 54 countries; in 2002 the Times Educational Supplement asked teachers, and on the same theme the Guardian in 2014 asked contemporary writers to suggest set texts for English school children. The Modern Library asked its editors as well as its readers, and Time magazine asked two of its resident critics for a list, idiosyncratically suggesting titles only from 1923 to the present. The Guardian has published the list of just one person (Robert McCrum), and in 2013 David Bowie published his own list of 100 must-read books, later made the basis for an online book club launched by his son Duncan Jones.

The criterion for inclusion in these lists is not fixed – good reads, great books, all-time great books, 20th-century books only. Most consist only of works of fiction, although Waterstones’ original list included two nonfiction books – Nelson Mandela’s A Long Walk To Freedom and Delia Smith’s Complete Cookery Course. The Bible appeared along with 99 novels in the Telegraph’s top 100 books.

The cinema has a role to play and accounts for some of the more implausible entries. Jurassic Park? Trainspotting? Really? The film link probably also explains why Gone With The Wind features on more than one list including Le Monde’s. On the other hand some famous films were made of books that were already widely read and would have been here anyway – Lord of the Rings, the Harry Potter books, Catch-22, Rebecca. Film doesn’t account for all the outliers:  I wonder how many people really voted for Jacques Lacan’s Écrits?

Two entries made me laugh out loud. The teachers’ list for the TES was utterly persuasive and included Pride and Prejudice, Anna Karenina, and The Very Hungry Caterpillar. And in the writers’ suggestions for school set texts I enjoyed Hanif Kureishi’s proposal of his own book, The Black Album. I awarded it a Schwarzkopf prize (for which it is the only contender from all the lists), named for the famous soprano who, in her appearance on Desert Island Discs, nominated seven of her own recordings.

Does a consensus emerge? Nine of the top ten books in the list of Most Begun but Unfinished Books Ever from the Goodreads website also feature in at least one of the 16 best-books-ever lists. Even so, certain books and authors feature across several lists: Jane Austen, George Eliot, the Brontës – you can guess the other recurrent appearances in a predominantly Anglo-centric portfolio.

It is tempting to try to merge all the listings to produce an outright winner, but heterogeneity among the lists makes it difficult to manage a pooling exercise. If you are keen to pick a short list you could choose books on the lists that have been written by Nobel Prize winners or that have won the Man Booker Prize; you could go with the wisdom of crowds and select books from the lists that also feature in the best-sellers-of-all-time lists, in which case you’re likely to be reading Tolkien, Rowling, Blyton and Dan Brown; or you could remove the arrivistes, the fashionably popular, and stick with those that have proved enduring – leaning on the wisdom of crowds without much money, or perhaps crowds that use lending libraries.

I’ll save you the trouble. There is only one book that appears on all 16 lists that I have collected. It is a novel, a good read, published in the 20th century, approved by teachers, literary critics, rock stars and the reading and book-buying public. It has been made into a film more than once. It touches on some of the great themes of the long 20th century – identity; the relation between money, social class and respectability; the treatment of women.  It is The Great Gatsby. Who knows if it is the book of books but it is the undisputed book of book lists.
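The tallying behind this verdict is simple enough to sketch in code. A minimal illustration, assuming each list is represented as a set of titles (the titles below are invented stand-ins, not the actual contents of the 16 lists):

```python
from collections import Counter

# Stand-ins for the 16 "best books" lists; the contents here are illustrative.
lists = [
    {"The Great Gatsby", "Pride and Prejudice", "Catch-22"},
    {"The Great Gatsby", "Rebecca", "Middlemarch"},
    {"The Great Gatsby", "Lord of the Rings", "Jane Eyre"},
]

# Count how many lists each title appears on.
tally = Counter(title for booklist in lists for title in booklist)

# Titles that appear on every list.
on_all = [title for title, n in tally.items() if n == len(lists)]
print(on_all)  # → ['The Great Gatsby']
```

With the real lists loaded the same tally would also yield the "features across several lists" counts mentioned above, though the heterogeneity problem (variant titles, translations, series counted as one entry or several) would still have to be resolved by hand.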

Who are The Autists?

  • July 22, 2023

I was prompted to write this post after reading a review of a book about autism in the Times Literary Supplement – James Cook writing about Clara Törnvall’s The Autists (TLS 23 June). So, it isn’t a review of the book, or even a review of the review. It’s a note about a common problem I see with the popular discussion of autism.

Törnvall is described as a successful journalist and TV and radio producer who has been diagnosed in adult life as having autism. Her description of what that means will be reasonably familiar to anybody who has encountered recent coverage of so-called high-functioning autism among people in the public eye – she doesn’t pick up on subtext/banter, she struggles with eye contact, she doesn’t like being interrupted, she has a few (but intense) interests, she has a long-standing proneness to anxiety. Her book is presented as myth-busting but it has to be said some of the myths are rather underwhelming – women can be autistic; autism isn’t a disease but a (neurodevelopmental) condition; autism’s causes are biological, with an hereditary component. Still, apart from the occasional lapse into hyperbole most of it seems uncontentious enough.

What I found problematic, here and more widely, is the merging of two different ways of talking about autism – justified no doubt by the spectrum metaphor. Two because current discussion of autism treats it in two ways – as a disorder, often disabling, or as a manifestation of natural variation in human behaviour. To my mind these require different vocabularies. In the former case it is appropriate to talk about diagnosis and to regret the lack of a cure (as Cook does); in the latter we should use the vocabulary of recognition of (neuro)diversity and regret the inflexibility of modern social life. This distinction calls for clarity about what defines the boundaries of autism as a disorder (qualitatively different manifestations and associated severity of disability), but that is missing from Cook’s review and from much public discussion of the topic. Instead the language of diagnosis is used across the spectrum, and we are invited to be optimistic that more and more people (and especially women and girls) are receiving the diagnosis.

I think this merits pause for thought. Overenthusiastic application of diagnostic labels to historic figures may be merely silly (Beatrix Potter? Really?), but we should be careful not to categorise all sorts of people (especially the young) as so-called Autists on the basis of eccentricity, social awkwardness or a tendency to quirky infatuation with objects. Labels can be damaging; maybe not everybody will benefit when they’re growing up from being told (or others being told) that they’re an Autist. And you don’t need a diagnosis to try noise-cancelling headphones.  

At the same time, there is something potentially damaging to the interests of the severely disabled if we appropriate the language of disorder to describe normal variation. The person with a severe disabling autistic disorder – who may be prone for example to impetuous running, persistent hand-flapping or screaming meltdowns – is not recognizable in accounts of people who are, if you like, on the other end of the spectrum. So, there is little evidence that contemporary popular discussion of autism, concentrated as it so often is on those often described as high-functioning, does much for those who have highly disruptive (sometimes called challenging) accompanying behaviours and are unable to care for themselves or live independently. Awareness isn’t raised nor campaigns triggered for those most in need, and as a result it remains the inadequacy of our provision for people with a severe disabling autistic disorder that constitutes our society’s main failing of those with autism.

Postscript: an abbreviated version of this comment has been published as a letter in TLS 21 July 2023.

Adult Human Female – a documentary that’s about more than gender.

  • May 23, 2023

An unremarkable film that nonetheless raises some far-reaching questions.

I recently watched the documentary Adult Human Female.  It gives an outline of the main arguments raised in what we might call the gender critical response to gender identity theory as it is applied by trans activists. I tuned in after seeing an article in the Guardian newspaper reporting on a row taking place at Edinburgh University about whether or not it should be shown on campus. It isn’t difficult to find – it’s readily available on YouTube for those wanting to try and understand what the dispute is about. For me, the film raised three questions, only one directly related to the Edinburgh story.

The first question is, I suppose, why does anybody think the film should be banned? It takes the form of inter-cut pieces to camera from various people with some claim to relevant expertise – in law, medicine, and philosophy for example. There’s nothing here that would be novel to anybody who has taken even a non-specialist interest in trans debates in the last few years. The style is, in places, challenging but there’s nothing remotely illegal in it, or anything that could be reasonably described as aimed at inciting violence or hatred towards trans people. The main objection I can find online is the usual one that any opposition to the gender identity theory proposed by trans activists must necessarily be offensive and transphobic.

Two other questions came to mind while I was thinking about this: neither at all original but prompted, I think, by watching the case being presented on film rather than by reading about it.

One of those questions is – I can see what’s gained by tackling single issues as case studies, but what’s lost? The specific example raised by the film is sexual violence in prisons. For sure, I think most people can see that putting violent sexual offenders with male genitalia into female prisons is not a great idea. But the focus on this issue can easily overshadow a wider problem of sexual violence in prisons. Not long after I watched the film I read a newspaper report indicating that in the past 13 years there have been nearly 1000 rapes and more than 2000 sexual assaults in our prisons. The main lesson is that overcrowding and understaffing mean that it’s all but impossible to make prisons safe for those in them. Rather lost, then, in the furore about the threat from trans women is that sexual violence in prisons is best viewed not through the lens of identity politics but as an indictment of national government policy, and especially that pursued by successive Tory governments in the name of austerity.

And the third question – gender reassignment, sex and sexual orientation are all protected characteristics under the Equality Act 2010, but what exactly is the point of protected characteristics? They are difficult to define (see e.g. Malleson, K., “Equality law and the protected characteristics”, Modern Law Review 81(4), 2018, 598–621) and, despite now extending to nine, they still don’t include obvious targets for discrimination like body weight, socio-economic status or non-disabling mental disorder. When Diane Abbott MP wrote an ill-considered, incoherent letter to the Observer newspaper about racism and prejudice, she was accused of trying to argue for a hierarchy of discrimination where none exists. And yet that feels like the whole point of protected characteristics – they are labels for sorts of discrimination we want to legislate against, and therefore implicitly they label by omission states that don’t merit legislation.

Maybe I misunderstand, but it strikes me that these aren’t easy decisions to make, and they are made harder by the style of public debate that involves striking positions and affecting certainty – with the direction of travel seeming to be away from nuance and acknowledgement of uncertainty. What a shame that Edinburgh University students can’t lead the way in modelling what a proper debate might look like.

In an age of misinformation mainstream media need a verifiability policy

  • February 28, 2023

Journalism should take a zero-tolerance approach to publishing false or unverifiable claims

I often ask friends – when you read an article in mainstream media about a topic in which you have some expertise (about health if you are a doctor, for example), how often do you notice that it contains incorrect information? The majority of answers fall at the frequently/very frequently end of the Likert scale we’d be looking at if I were polling rather than chatting. I’m not talking here about serpent-headed aliens, microchip-containing vaccines or stolen elections. But I am talking about mundane examples of misrepresentation: partial presentation of the facts, and outright fabrication.

I give illustrations from the Guardian newspaper, not because it’s a major culprit but because it isn’t. If the problem is present even in the best, it’s present everywhere. I am a long-time reader of the Guardian and subscriber to its online edition. I value its balanced coverage and regard it as standing head and shoulders above all other daily newspapers in the UK for its reliability and lack of bias. But at times I am left wondering, even in this newspaper, about a particular piece – is this true? How would I know?

On the surface the examples I will give may seem like minor infringements, but unreliable reporting in any part of the paper can lead to lack of trust in the reporting of every part of the paper; and we are storing up trouble for the future if journalists following examples such as these come to believe that writing a good story takes precedence over writing an entirely accurate one. There is a fairly simple solution to the problem but before considering it, a few examples.

An article in January 2023 described a survey which was said to have “…found that one in five LGBTQ+ people and more than a third of trans people in the UK have been subjected to attempted conversion…”. As part of an online survey, respondents were asked whether they had ever experienced someone taking any action (my italics) to try to change, cure or suppress their sexual orientation or gender identity. Describing the findings, the phrase “subjected to” appeared in the article headline, in the final sentence and three times in the text. There was no link to the survey report but when I found one it revealed that the campaigning group commissioning the survey has a particular take on what “subjected to” means.

“There must be no ‘consent’ loophole… Conversion practices are abuse and it is not possible to consent to abuse… The definition of conversion practices should include religious practices…”. So examples of what respondents were “subjected to” included “I saw a counsellor…” and “My partner ended our relationship because of God and then the people from church prayed for us to become straight.” For sure, there were quotes about much more unpleasant experiences but even there the reframing was unusual: being beaten up because you’re gay is wrong, but it’s a stretch to call it a conversion practice. There was no indication of a typology of practices or the prevalence of various practices – anything and everything goes towards the headline figure. This strikes me as a long way from what most people understand by the sort of conversion therapy that might be banned by legislation, but you wouldn’t know it from the way the survey was reported.

An article in June last year headed “Brain damage claim leads to new row over electroshock therapy” reported that electro-convulsive therapy (ECT) “…is now the focus of a huge row – which erupted last week – over claims that it can trigger brain damage, that guidelines covering its use are weak and that it is used disproportionately on women and the elderly.” Again there was no reported evidence of a huge row; just a link to a 5 year old Guardian article retailing the same criticisms from the same source as described in the 2022 article. The bust-up seems to have been imagined into life to act as a hook for the otherwise non-story.

Something from the pandemic. An article from January 2021 reported that the “Prince’s Trust happiness and confidence survey produces worst findings in its history”. Three accompanying comments linked the findings to the impact of the pandemic. The findings as reported were literally true (just) but a reading of the whole report gives quite a different picture. In 2021 just 56% of respondents said they were happy about, and 64% said they were confident about, their emotional health. Certainly the lowest on record, but the corresponding figures for 2018 were 57% and 65%. In 2021 56% said they were always or often anxious. Again, the highest on record, but the figures for the preceding years 2018-2020 were 53%, 54% and 55%. The really big changes have come since 2010, when more than 70% said they were happy and confident about their emotional health and fewer than 20% said they felt anxious or depressed all or most of the time. So a study that shows a decade-long decline in the emotional health of young people is reframed as a story about the impact of the pandemic by the simple expedient of not reporting most of its findings.
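Laid out as numbers, the change attributed to the pandemic looks marginal next to the decade-long trend. A minimal sketch, using the percentages as quoted above (the 2010 figure was reported only as “more than 70%”, so 70 is used as a conservative floor):

```python
# Prince's Trust figures as quoted above (approximate survey percentages).
happy = {2010: 70, 2018: 57, 2021: 56}              # % happy about their emotional health
anxious = {2018: 53, 2019: 54, 2020: 55, 2021: 56}  # % always or often anxious

pandemic_dip = happy[2018] - happy[2021]  # change plausibly attributable to the pandemic
decade_drop = happy[2010] - happy[2021]   # change since 2010 (at least this much)

print(pandemic_dip, decade_drop)  # 1 vs 14 percentage points
```

The anxiety figures tell the same story: a steady one-point rise each year from 2018 onwards, not a pandemic-year jump.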

In a piece from January this year promoting assisted dying and entitled “Today, 17 people will likely die in unimaginable pain…” regular contributor Polly Toynbee writes, after a warm-up about torture chambers, excruciating pain, horror and humiliation, that “On average 17 people a day die in terrible pain that can’t be relieved by even the best palliative care.” The claim is based upon a review undertaken by the Office for Health Economics which, like the research it is reviewing, refers nowhere to the severity of pain but only to “unrelieved pain”, much of which, it would be clear to anybody familiar with the clinical scenarios, will not match the descriptions offered. Toynbee’s account of unimaginable pain in end-of-life care comes in fact from her own imagination.

Much of this would be avoided if journalists put a bit more work in – didn’t just recycle press releases and did some of their own fact-checking, aided by basic critical appraisal skills. How would we know if they were doing that? Online encyclopedia Wikipedia, in facing its own questioning about reliability, has developed a policy it describes as Verifiability, not truth. “Verifiability” means that material must have been published previously by a reliable source, cited by the writer and consulted by them. Sources must be appropriate, must be used carefully, and must be balanced relative to other sources. 

Citing reliable sources, with a clear statement that the journalist has consulted them, gives readers the chance to check for themselves that the most appropriate authorities have been used, and used well. In fact none of the four examples I give here would be compliant with such a policy. If respectable and respected mainstream media are to maintain their reputation for trustworthiness they need to demonstrate how they manage reliability in their reporting and not just assert that they do. An explicit, and explicitly followed, verifiability policy would be a good start.

Medical assistance in dying is another name for physician assisted suicide

  • February 8, 2023

The rebranding should not blind us to the risks involved.

The argument for what is now called ‘assisted dying’ is often framed in terms of personal autonomy – the right to choose the time and mode of one’s death.

Individuals included in media reports as pressing for that right are typically mentally competent, educated, and supported by a partner or family member who affirms their desire to die. Campaigners pressing for change suggest (at times in strikingly gothic terms) that if their wishes are denied, the likely alternative is a difficult death during which pain is inadequately treated and distressing symptoms are mismanaged. ‘Assisted dying’ is thereby positioned as a form of patient-centred care – a death with ‘dignity’.

Put like this, the case can seem incontrovertible. Who wouldn’t want a ‘dignified death’ in which their own wishes were central to any decisions about their treatment? But this is a narrow and unbalanced way of framing the discussion; it fails to communicate the full range of questions that arise when thinking about serious illness. ‘Assisted dying’ is a euphemism for physician assisted suicide; it involves prescribing lethal drugs to somebody who will then self-administer them to end their life. Framing the practice like this gives a different perspective, one that is masked by the rebadging as assisted dying: what we know about suicide more widely becomes relevant in informing what we think about doctor-assisted suicide.

People living with severe, persistent physical illness can of course feel that their condition is intolerable. Indeed, research shows that about one in ten describe having thoughts that their life is not worth living, or that they might be better off dead. And suicide rates in people with a severe health condition are double those of the general population. Even so, recent data from the Office for National Statistics suggest that in absolute terms fewer than 10% of suicides are in people with a severe health condition. Some of the study findings come as a surprise; for example, of 17,195 suicides identified from 2014 to 2017, only 58 (0.3%) were in people with what the study called low survival cancer. This is about three times the general population suicide rate but accounts for only 3 in every 10,000 of those recorded as having low survival cancer in the study period. In other words the great majority of people (more than 99%) with negative thoughts about their circumstances do not take their own lives.
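The arithmetic behind those figures is worth making explicit. The 17,195 and 58 are the ONS study figures as quoted above; the implied cohort size is my own back-calculation from the “3 in every 10,000” rate, not a number reported in the study:

```python
# Figures as quoted from the ONS study, 2014-2017.
suicides_total = 17_195
suicides_low_survival_cancer = 58

# Share of all suicides occurring in people with low survival cancer.
share = suicides_low_survival_cancer / suicides_total
print(f"{share:.2%}")  # 0.34% - the "0.3%" quoted in the text

# "3 in every 10,000" of those with low survival cancer implies a cohort of roughly:
implied_cohort = suicides_low_survival_cancer / (3 / 10_000)
print(round(implied_cohort))  # ~193,000 people (an inference, not a study figure)
```

Either way you cut it, suicide remains a rare outcome even in this group, which is the point the paragraph above is making.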

What does research into suicide in the wider population suggest might make suicide more likely? Many of the leading risks are social – loneliness, living alone, low income and lack of employment, and a lack of social support. A history of problems with alcohol or drugs is also common, especially in men. So is a history of mental health problems – typically not psychotic illness but recurrent episodes of depression. More than half of those who take their own lives have a history of previous self-harm. These risks are also prominent when suicide occurs in the setting of severe physical illness, even among those who are simultaneously in contact with mental health services.

Suicide associated with severe physical illness occurs most commonly in the first year after diagnosis, especially in the first six months. This observation is in line with research showing that rather than intolerable and untreatable symptoms it is concerns about the future and loss of independence that motivate many requests for physician-assisted suicide.

US psychologist Thomas Joiner has outlined an influential interpersonal theory of suicide that makes much sense of these findings. He outlines three risks for suicide – thwarted belongingness (closely related to the idea of lack of social connectedness), perceived burdensomeness, and acquired capability (overcoming the fear of death). Thinking about suicide in this way helps us to be clearer about the nature of suicide in the physically ill, and therefore about ‘assisted dying’: it poses a risk to exactly those people whose suicide we are used to working to prevent, because it actively helps them to “acquire capability”.

The response to these concerns rests upon assurances that only carefully selected cases will be accepted into a programme of assisted suicide. We can have no confidence that such “safeguards” will be adhered to. For example, in one study from the Netherlands, 12% of those accepted failed to meet the criterion of there being no alternatives for palliative treatment and 7% were not reported as experiencing unbearable suffering. I have yet to see a statement from supporters of medical assistance in dying about their opinion on what is an acceptable error rate in the system.

There is another reason for concern about doctor-assisted suicide – less tangible perhaps but with far-reaching consequences. It fundamentally changes our approach to suicide. Under the Suicide Act 1961 an act “intended to encourage or assist suicide” is a criminal offence. There are no exclusions – it is an all-encompassing approach that is reflected in our National Suicide Prevention Strategy. What is proposed is a radical overhaul of the way we approach suicide – a move away from trying to prevent all instances to a world in which we attempt to prevent suicide except when we decide to make it easier.

We are facing in medical assistance in dying a privileging of personal preference over social concern. It represents not just a modification of individual clinical practice but a societal intervention designed to change how we think about and respond to suicidal wishes. I find it hard to believe that the longer-term consequences, intended or otherwise, will be of universal benefit to those most in need of our care.

Do categories help us embrace diversity?

  • January 8, 2023

Their proliferation suggests that at least some people think so

Reading the New York Review of Books recently my eye fell upon an advertisement for a book about “Caring for LGBTQ2S People”. I was intrigued because although I read my (non-expert) share about gender debates I had not come across the 2S tag before.

I discover it stands for a (contentious) neologism, Two-Spirit, that has been applied only to gender identity in indigenous people – initially in the USA and Canada, which explains why it doesn’t have much currency in the UK. And browsing about the meaning of this term I came across another unfamiliar initialism: LGBTTQQIAA.

What this got me thinking about was partly how off the pace I am about terminology and gender identity. But also about a familiar question that arises from the use of categories – the value of lumping vs splitting. On the face of it the LGBTTQQIAA string looks like an example of splitting; after all it contains ten tags and that’s without 2S. On the other hand it represents a sort of lumping – based upon the assumption that all these things share something that means they belong together. This lumping isn’t universally supported, and in particular there has been some questioning of the idea of putting sexual orientation and gender identity into a single category – even, it turns out, from some trans quarters.

I call this a familiar question because it is to me; it has featured for years in debates about psychiatric diagnostic labelling – most recently prompted by the latest editions of the DSM and, to a lesser extent, ICD classificatory systems, which lump (they’re all mental disorders), split a bit (single figures for numbers of chapters) and split again (dozens and dozens of individual diagnostic terms within each chapter).

The examples of gender and psychiatry illustrate one problem with categorisation. At the start a few simple categories look useful – highlighting important differences that deserve our attention. But it soon becomes clear that a few simple categories don’t cover the ground, so more categories are generated in an attempt to fill the gaps and make the system comprehensive. For example the DSM experience is of increasing numbers of categories, each iteration coming at a shorter interval from the last but never achieving the aim of exhaustive coverage – reminding me of Zeno’s paradox. Personality disorder bucks the trend: it’s still lumped in (who you are as a mental disorder) but at least in ICD-11 the downstream splitting into multiple subtypes has been, to some extent at least, resisted.

Do these categorising systems help to nuance discussion and thereby combat rigid attitudes, improve research, policy and practice, or do they lead us in the wrong direction and encourage pathologizing by diagnosing difference? In other words – categories are the embodiment of discriminating decisions: do they encourage positive or negative discrimination?

Using descriptive categories can be useful, and not just in reminding us not to be solipsistic. They help us make sense of and navigate a complex environment, and they can inform important decisions in, say, healthcare, policy, legislation or education. However categorising also has risks, of reifying and essentialising differences and – depending upon the specific vocabularies employed – of creating and pathologizing a sense of otherness of those categorised. Away from gender and psychiatry this concern is often raised in relation to debates about racism. To quote a recent newspaper article: “We live in an age saturated with identitarian thinking and obsessed with placing people into racial boxes.” The article trails the writer’s new book, which he describes as “…a retelling of the history both of the idea of race and of the struggles to confront racism and to transcend racial categorisation,…”.

I find opinion divided in my personal network. Some think that, especially in relation to gender, the proliferation of categories/labels is no bad thing – reminding us that we live in a far from homogeneous world.

Others are less convinced, although perhaps not right there with Adorno in agreeing that “…the desire to construct types was itself indicative of the potentially fascist character”.

Perhaps the answer is something like – categories are useful, adjectives are useful, let’s not turn every adjective into a category.