The Government White Paper: Online Harms has been out for consultation for the past three months. Its main proposal is to establish a regulator charged with ensuring that a duty of care is exercised by all those who produce, host or distribute potentially harmful online material.
This sounds like an idea with which few could disagree, and the consultation questions are mainly about how to make it work properly. There are, however, some real problems raised by the inclusion of online content about self-harm. The other topics covered by the White Paper include incitement to terrorist activities, dissemination of child pornography, and drug dealing on the dark web. While it’s difficult to imagine a socially desirable component to the online presence of any of these activities, the same doesn’t apply to self-harm. Here’s my own reply to the consultation’s Question 8…
Q8: What further steps could be taken to ensure the regulator will act in a targeted and proportionate manner?
In relation to self-harm, the main need is for a clear and specific definition of
the nature of harmful content. It is
responsible management of such content that constitutes the duty of care to be
imposed on those who make self-harm content available online. The White Paper
talks about “content and behaviour which encourages suicide and self-harm” (para
7.32) and “content that provides graphic details of suicide methods and
self-harming” (para 7.34). Neither definition is specific enough to inform
practice and without a tighter definition the regulator is at risk of
idiosyncratic or inconsistent intervention.
What’s the challenge in coming up with a workable definition of harmful self-harm content? There are three issues here:
First, examination of online material about self-harm reveals substantial diversity in form and content. Those who post and those who respond to posts are engaged in conversations about more than the manifest topic of self-harm and suicide, even when the relevant posts are explicitly tagged as self-harm: content is also about emotional problems more generally, about relationships, fitting in or belonging, and about attractiveness, sexuality and body image. The mixture of textual and visual messaging leads to communication whose ambiguity and irony can be missed by reading one without the other.
Second, much of this content is regarded as helpful by those who access it, and that includes direct communication about self-harm, including images of self-injury. Such images can help an isolated person feel less alone (up to half of people who self-harm don’t confide the fact to anybody in their personal life). The images may come with messages about self-care or harm minimisation. It is reasonable to conclude that content some people find unhelpful is found helpful by others, and that whether a particular piece of content is found helpful or unhelpful by a particular individual depends upon the immediate circumstances in which it is accessed.
Third, it isn’t clear what the pathway to harm is following exposure to self-harm
material online. Words like graphic, explicit or glamorising are in themselves
not tightly defined but they imply that the underlying mechanism is an
invitation to copy the behaviour. Linking this argument to suicidal behaviour
is problematic – for example most online images of self-harm are of self-injury
(cutting or burning) and yet these are extremely rare methods of suicide,
especially in young people. If the putative pathway to suicide isn’t copying
then presumably it is by exposure leading to low mood and hopelessness – in
which case it isn’t clear that images of self-injury are more problematic than
other mood-influencing content.
What’s the risk of disproportionate or untargeted action?
For most of the content covered by the White Paper, there really isn’t much doubt about what’s bad and needs to be suppressed – drug dealing, distributing child pornography, inciting terrorism. In the case of self-harm, however, there are risks of going too far in suppressing content. Those risks reside in the diversity of material that comes under the online rubric of self-harm; the likelihood of blocking access to material experienced as beneficial by isolated and unhappy people; and the uncertainty about what’s genuinely harmful in self-harm form and content. Clumsy, excessive or inconsistent intervention – in the name of reducing harmful exposure and (by implication) habituation or normalisation – may have the unintended damaging consequence of increasing the sense of disconnectedness and burdensomeness experienced by people with mental health problems who self-harm.
What steps should be taken?
If the White Paper is to include action on self-harm content, then the regulator needs
expert and specific advice on what content should be regulated and limited
immediately – even taking into account the considerable uncertainty outlined
above. That is, the advice should identify that material for which we can be
confident that harm is likely to accrue from accessing it and the risk of harm
obviously outweighs the possibility of benefit. This advice should be provided by an expert
panel that consults a diverse range of academics and mental health specialists.
Its recommendations should be for limited immediate action given how little we
know about harms.
As a second step, the regulator should seek regular reports on emerging research
findings that support changes to its practice – to ensure that practice is
evidence-based rather than opinion-based.