Medical assistance in dying is another name for physician assisted suicide

  • February 8, 2023

The rebranding should not blind us to the risks involved.

The argument for what is now called ‘assisted dying’ is often framed in terms of personal autonomy – the right to choose the time and mode of one’s death.

The individuals featured in media reports as pressing for that right are typically mentally competent, educated, and supported by a partner or family member who affirms their desire to die. Campaigners pressing for change suggest (at times in strikingly gothic terms) that if their wishes are denied, the likely alternative is a difficult death during which pain is inadequately treated and distressing symptoms are mismanaged. ‘Assisted dying’ is thereby positioned as a form of patient-centred care – a death with ‘dignity’.

Put like this, the case can seem incontrovertible. Who wouldn’t want a ‘dignified death’ in which their own wishes were central to any decisions about their treatment? But this is a narrow and unbalanced way of framing the discussion; it fails to communicate the full range of questions that arise when thinking about serious illness. ‘Assisted dying’ is a euphemism for physician-assisted suicide: it involves prescribing lethal drugs to somebody who will then self-administer them to end their life. Framing the practice in these terms restores a perspective that the rebranding as ‘assisted dying’ obscures. What we know about suicide more widely then becomes relevant in informing what we think about doctor-assisted suicide.

People living with severe, persistent physical illness can of course feel that their condition is intolerable. Indeed, research shows that about one in ten describe having thoughts that their life is not worth living, or that they might be better off dead. And suicide rates in people with a severe health condition are double those of the general population. Even so, recent data from the Office for National Statistics suggest that in absolute terms fewer than 10% of suicides are in people with a severe health condition. Some of the study findings come as a surprise; for example, of 17,195 suicides identified from 2014 to 2017, only 58 (0.3%) were in people with what the study called low survival cancer. This is about three times the general population suicide rate, but it accounts for only 3 in every 10,000 of those recorded as having low survival cancer in the study period. In other words, the great majority of people (more than 99%) with negative thoughts about their circumstances do not take their own lives.

What does research into suicide in the wider population suggest might make suicide more likely? Many of the leading risks are social – loneliness, living alone, low income and lack of employment, and a lack of social support. A history of problems with alcohol or drugs is also common, especially in men. So is a history of mental health problems – typically not psychotic illness but recurrent episodes of depression. More than half of those who take their own lives have a history of previous self-harm. These risks are also prominent when suicide occurs in the setting of severe physical illness, even among those who are simultaneously in contact with mental health services.

Suicide associated with severe physical illness occurs most commonly in the first year after diagnosis, especially in the first six months. This observation is in line with research showing that, rather than intolerable and untreatable symptoms, it is concerns about the future and loss of independence that motivate many requests for physician-assisted suicide.

US psychologist Thomas Joiner has outlined an influential interpersonal theory of suicide that makes much sense of these findings. It identifies three risks for suicide – thwarted belongingness (closely related to the idea of lack of social connectedness), perceived burdensomeness, and acquired capability (overcoming the fear of death). Thinking about suicide in this way helps us to be clearer about the nature of suicide in the physically ill, and therefore about ‘assisted dying’: by actively helping people to “acquire capability”, it poses a risk to exactly those people whose suicide we are used to working to prevent.

The response to these concerns rests upon assurances that only carefully selected cases will be accepted into a programme of assisted suicide. We can have no confidence that such “safeguards” will be adhered to. For example, in one study from the Netherlands, 12% of those accepted failed to meet the criterion of there being no alternatives for palliative treatment, and 7% were not reported as experiencing unbearable suffering. I have yet to see supporters of medical assistance in dying state what they regard as an acceptable error rate in the system.

There is another reason for concern about doctor-assisted suicide – less tangible perhaps, but with far-reaching consequences. It fundamentally changes our approach to suicide. Under the Suicide Act 1961, an act “intended to encourage or assist suicide” is a criminal offence. There are no exclusions – it is an all-encompassing approach that is reflected in our National Suicide Prevention Strategy. What is proposed is a radical overhaul of the way we approach suicide – a move away from trying to prevent all instances towards a world in which we attempt to prevent suicide except when we decide to make it easier.

In medical assistance in dying we are facing a privileging of personal preference over social concern. It represents not just a modification of individual clinical practice but a societal intervention designed to change how we think about and respond to suicidal wishes. I find it hard to believe that the longer-term consequences, intended or otherwise, will be of universal benefit to those most in need of our care.

A book about suicide research and a suicide researcher

  • November 2, 2022

Rory O’Connor is a health psychologist who has published extensively on suicide. He is also active in discussions about suicide aimed at the general public and about suicide prevention policy, especially in Scotland where he lives and works.

His book When it is Darkest: why people die by suicide and what we can do about it is divided into four parts, covering the main facts (and misconceptions) about suicide, its main causes, what preventive interventions might be effective, and how to support people who are suicidal or who are living in the aftermath of the suicide of somebody close. Further resources are mentioned throughout and there is a list at the end. The emphasis, especially when considering causes, is on the psychology of suicide; this includes a review of the author’s own framework for organising the disparate associations with suicide into what he calls the Integrated Motivational-Volitional Model.

O’Connor’s aim is to combine personal and professional perspectives. The style is informal and written in the first person. Interspersed throughout are anecdotes about his personal experiences, his contacts with people who have felt the impact of another’s suicide or have felt suicidal themselves, and his career in suicide research. At the same time it is in parts quite technical, and it ends with 48 pages of academic references, with a leaning towards his own research.

The book covers a lot of ground without being exhaustive or exhausting, especially of course in its review of prevailing psychological theories. And it offers a sustained attack on fatalism in the face of suicide and the apparent impossibility of eradicating it: we can come to understand more and develop effective prevention strategies.

No book like this can be entirely comprehensive, but there are some important gaps. There is too little on the personal and social impact of drug and alcohol misuse, either as a risk for the individual or as part of the reason people become isolated or alienated from social support. Mental disorder and its treatment may not be the most important part of suicide prevention, but even so they deserve more consideration than they get. Many of those who die have been in contact with helping agencies – GPs, mental health services, university counselling services or whatever – and there is not much here about how such services might do better, or about what bereaved families feel about whether the tragedy might have been prevented. Suicide also needs to be seen in its social and cultural context if public health interventions are to be well targeted: psychology can’t explain the wide regional variation in rates, and some of these wider issues feel undercooked. Finally, in a laudable attempt to combat negativism, the effectiveness of suicide prevention interventions is overstated.

What about readership? The book requires high levels of general and scientific literacy, and that will limit its utility. The presentational style will not suit everybody. I personally didn’t like the idea of calling suicide The Big S. I also wasn’t keen on the idea that suicide is not usually about the desire to die but about the desire to end suffering: after all, “suicide” means death as the result of an act intentionally designed to end life, so this is a paradox that on close inspection just doesn’t make sense. There are few accessible books on suicide for the general reader (Mark Williams’ Cry of Pain is one, and the Help is at Hand booklet for those bereaved by suicide is excellent); this text will therefore find a place as a useful review for the interested and well-educated non-specialist.

Social media and mental health: we need much more attention to the detail of what regulation might entail

  • October 25, 2022

Coverage in the mainstream media of the findings of the Molly Russell inquest concludes that the case is now made for direct action on regulation. However, in these and other similar pieces there has been little discussion of what specifically such regulation might entail or of the challenges of implementation. Here is a sample from just one newspaper:

Why is it so hard to say specifically what should be done? For sure there will be resistance from the tech companies, but an additional dilemma is that much of the content under consideration (about depression, self-harm and suicidal thinking) is seen as helpful by those who use social media – valued for its 24/7 availability and anonymity and for the supportive nature of sharing and viewing user-generated content. The challenge therefore is to eliminate the negative impact of social media without blocking access to its helpful elements.

Although the main emphasis in discussions about regulation has been on harmful content, that is only one of three aspects of the problem to be considered.

A central issue is algorithmic pushing, which increases the duration and intensity of exposure. We know that people with existing mental health problems are more likely to spend long periods online and more likely to use sites with content related to self-harm, and there is some evidence that such extended exposure makes matters worse. So what limits should be set on these quantitative aspects of social media viewing?

The question of what to do about algorithmic “recommendations” is confounded with one about content. It is generally accepted that it would be no bad thing if searches for key terms (self-harm, suicide and so on) were to trigger responses offering links to helpful resources, which raises the question of how to identify specific content as helpful (OK to recommend) or harmful (not OK). In relation to moderation of content, harmfulness is usually defined by terms like glamourising, normalising and encouraging. These words are used without definition and yet proposed as the main criteria upon which any duty of care will be judged. How are they to be defined and identified in ways that don’t just rely on individual opinion?

Monitoring and responding to problematic patterns of use is a key issue in debates about online gambling: how is it to be achieved without driving away those who resent the idea of surveillance and the loss of privacy?

Journalists may not see it as their job to grapple with these questions. Here are three suggestions from non-journos whom we might consider to have something important to say:

The coroner in Molly Russell’s case issued a prevention of future deaths report, in which he said:

“I recommend that consideration is given by the Government to reviewing the provision of internet platforms to children, with reference to harmful on-line content, separate platforms for adults and children, verification of age before joining the platform, provision of age specific content, the use of algorithms to provide content, the use of advertising and parental guardian or carer control including access to material viewed by a child, and retention of material viewed by a child.”

The government’s plans are to be found in its Online Safety Bill. At the point at which it published its most recent factsheet on the topic (April 2022), this is what it had to say:

“Platforms likely to be accessed by children will also have a duty to protect young people using their services from legal but harmful material such as self-harm or eating disorder content. Additionally, providers who publish or place pornographic content on their services will be required to prevent children from accessing that content.

The largest, highest-risk platforms will have to address named categories of legal but harmful material accessed by adults, likely to include issues such as abuse, harassment, or exposure to content encouraging self-harm or eating disorders. They will need to make clear in their terms and conditions what is and is not acceptable on their site, and enforce this.”

The coroner’s report covers all three bases in some ways, but it shares an important feature with the much more limited Online Safety factsheet: undesirable content is identified by the single word “harmful”, which is not defined, apart from a suggestion in the factsheet that it is likely to include “encouraging” – which isn’t defined either.

Multiple charities with an interest in the mental wellbeing of young people wrote a letter to the then prime minister Liz Truss in October, in which they attempted to unpack the idea of harmfulness in a constructive way:

“We are writing to urge you to ensure that the regulation of harmful suicide and self-harm content is retained within the Online Safety Bill…[defined as]

  • Information, instructions, and advice on methods of self-harm and suicide  
  • Content that portrays self-harm and suicide as positive or desirable
  • Graphic descriptions or depictions of self-harm and suicide.”

The first two criteria look as if they ought to be amenable to careful definition; “graphic” is more problematic. One person’s clear and vividly explicit detail is another’s matter-of-fact account. Or does it mean any image (depiction) at all that is not shaded or pixelated? And what about descriptions of, for example, how to look after or conceal your wounds?

There seem to me to be two risks here. The first is that decisions will be left to Ofcom and (presumably) to the courts. The second, perhaps less likely, is that the tech companies will decide they can’t be bothered with all this and will go for some variant of blanket suppression of interactions about self-harm and suicide. Neither is desirable: the former if it leads to the sort of adversarial debate that failed to clarify these questions during the inquest, the latter if it ends up denying access to helpful content. There is emerging research that can contribute, and health professionals should lead in arguing for its inclusion in decision-making, so that a realistic balance is struck between the risks and benefits of social media in an important area of public health policy.

