The Online Safety Bill is supposed to protect young people with mental health problems: how will we judge if it has any effect?

  • October 19, 2023

After a long public and political debate about what form legal regulation of social media should take, the UK’s Online Safety Bill (2023) has passed into law. One of its highly-publicised aims is to protect young people from harmful exposure to content likely to lead to lowering of mood and an increased risk of self-harm and perhaps suicide. Now that we have moved to the stage of implementing the measures outlined in the Bill, how will we know if it is achieving its aim of reducing severe mental health harms to young people?

Our research and that of others, published in a multi-author book this month, suggests that the answer to this question will not be easy to establish. Preoccupation with the need to suppress harmful content has not led to great precision in the definition of what constitutes harmfulness, or of what we can think of as the social in social media – including the ways in which social media are used and by whom. Little attention has been paid to the problem of unintended consequences, and especially the possibility that regulation might lead to loss of positive aspects of social media use. And we are unclear what measures of outcome will be feasible.

In the early years of this debate we were working with a doctoral student whose thesis involved an analysis of more than 600 images posted on social media with a tag that included self-harm. Our student’s findings suggested a more interesting, in some ways surprising and more complicated picture than was reflected in the public debate. While communication of distress was common, so were stories of recovery, and many of the associated comments were encouraging and supportive. The posts identified were by no means restricted to explicit discussion of self-harm, and in more than half of them the accompanying image did not represent self-harm directly – labelled with the self-harm tag were discussions of a range of other topics, including the nature of gender and the female body and concerns about identity and belonging. Even when tagged as “self-harm”, the space was being used to discuss these other matters of emotional concern to young people.

We decided to follow this single study with a review of the research literature to explore the issues further. The review was undertaken on behalf of the mental health charity Samaritans and explored the relation between social media use and mental health, and in particular the effect of accessing content about self-harm and suicide. We found that the nature of this content was diverse. There was content that would universally be considered harmful, such as detailed description or video streaming of methods and active, explicit encouragement to act. However, for much of the content there was little evidence that it could, in isolation, be considered unambiguously harmful.

When looking at outcomes of exposure to self-harm and suicide content, we found that previous research studies have indeed identified negative consequences – the reliving of distressing personal experiences, a sense of pressure to present oneself in certain ways or to offer help to others when one was not in a position to do so, sometimes a stimulus to further, perhaps more severe, self-harm. But research also identified positive aspects of the social media experience – a feeling of reduced isolation and support from a community of people sharing similar experiences in a non-judgmental way, the opportunity to achieve some self-understanding through recounting personal experience online, and for some people access to practical advice such as details of helping agencies or guidance on hiding scars.

It was also clear that an important influence on outcomes was not just the content of social media but the way in which they were being used – such as the intensity of interaction with other posts and the amount of time spent online, and the interactions with and reactions from others to content posted. At least as important as harmful content is whether social media use leads to connection with an unhelpful online community, to seeking connection but failing to find a community with which to identify, to being harangued for sharing experiences, or to asking for help that isn’t forthcoming. It is unclear how such experiences could be regulated or their effects mitigated except by the individual online.

There are formidable challenges in researching this area, not least that social media are valued by many people because of their anonymity, and it is difficult to apply high quality research methods to unbiased samples. For this reason we decided that it would also be valuable to gain a wider understanding of expert opinion across this field. In other words we wanted to know if there is a consensus among experts studying the relation between social media use and mental health about what can and cannot be considered harmful and what would be the most desirable responses to this relatively new feature of the social landscape. We approached academics known for their interest in the area, and the result is the multi-author book we have edited, Social Media and Mental Health, published by Cambridge University Press.

Some of the issues raised include not just the content of postings but the great diversity in who accesses or posts, how they use social media and how they respond to specific content: outcomes cannot readily be attributed either to content alone or to the person alone – they are likely to arise from the interaction between content, person and context. While a central issue is algorithmic pushing, which increases the duration and intensity of exposure, there remains no specific definition of the degree and type of exposure that matters when it comes to this social aspect of a regulatory framework.

Another aspect of social media use that was under-explored in earlier public debate about the Online Safety Bill was the role of social media as a source of positive help. At the time of our own review into online resources for self-harm, we found that most sites were extremely limited in what they offered as practical help to people seeking it. Positive resources need to move beyond encouragement to take care and to seek professional help. What our contributors describe is their involvement in programmes of work that serve as a pointer to the next generation of online resources – developed on sound theoretical grounds and principles of practice and involving young people in determining format and content.

In addition to these challenges in monitoring the form and content of online experience, there is a question of how to assess outcomes. Rates of distress, of self-harm or of suicide in young people are likely to fluctuate, but how would we know if any improvement could be attributed to the recent legislation? An associated reduction in accessing certain social media content might be taken as evidence, but correlation is not proof of causation and there are other interpretations of such an observation.

For all these reasons, we are left uncertain whether it will prove possible to evaluate the effect of the Online Safety Bill on the mental health of young people. That is, in terms of processes, whether we will be able to identify changes that incontrovertibly represent reduction in harmful content and harmful types of social media use, that do not have the unintended consequence of reducing access to helpful online interactions, and that increase the availability of genuinely helpful resources. And in terms of outcomes, to identify changes in rates of mood disturbance, self-harm or suicide that can be attributed to the effects of legislation.

Social media and mental health: we need much more attention to the detail of what regulation might entail

  • October 25, 2022

Coverage in the mainstream media of the findings of the Molly Russell inquest concludes that the case is now made for direct action on regulation. However, in these and other similar pieces there has been little discussion of what specifically such regulation might entail or of the challenges of implementation. Here is a sample from just one newspaper:

Why is it so hard to say specifically what should be done? For sure there will be resistance from the tech companies, but an additional dilemma is that much of the content under consideration (about depression, self-harm and suicidal thinking) is seen as helpful by those who use social media – valued for its 24/7 availability and anonymity and for the supportive nature of sharing and viewing user-generated content. The challenge therefore is to eliminate the negative impact of social media without blocking access to its helpful elements.

Although the main emphasis in discussions about regulation has been on harmful content, that is only one of three aspects of the problem to be considered.

A central issue is algorithmic pushing, which increases the duration and intensity of exposure. We know that people with existing mental health problems are more likely to spend long periods online and more likely to use sites with content related to self-harm, and there is some evidence that such extended exposure makes matters worse. So, what limits should be set on these quantitative aspects of social media viewing?

The question of what to do about algorithmic “recommendations” is confounded with one about content. It is generally accepted that it would be no bad thing if searches for key terms (self-harm, suicide and so on) were to trigger responses offering links to helpful resources, which raises the question of how to identify specific content as helpful (OK to recommend) or harmful (not OK). In relation to moderation of content, harmfulness is usually defined by terms like glamourising, normalising and encouraging. These words are used without definition and yet proposed as the main criteria upon which any duty of care will be judged. How are they to be defined and identified in ways that don’t just rely on individual opinion?

Monitoring and responding to problematic patterns of use is a key issue in debates about online gambling – how to achieve it without driving away those who resent the idea of surveillance and loss of privacy?

Journalists may not see it as their job to grapple with these questions. Here are three suggestions from non-journos whom we might consider to have something important to say:

The coroner in Molly Russell’s case issued a prevention of future deaths report, in which he said:

“I recommend that consideration is given by the Government to reviewing the provision of internet platforms to children, with reference to harmful on-line content, separate platforms for adults and children, verification of age before joining the platform, provision of age specific content, the use of algorithms to provide content, the use of advertising and parental guardian or carer control including access to material viewed by a child, and retention of material viewed by a child.”

The government’s plans are to be found in its Online Safety Bill. At the point at which they published their last factsheet (April 2022) on the topic, this is what they had to say:

“Platforms likely to be accessed by children will also have a duty to protect young people using their services from legal but harmful material such as self-harm or eating disorder content. Additionally, providers who publish or place pornographic content on their services will be required to prevent children from accessing that content.

The largest, highest-risk platforms will have to address named categories of legal but harmful material accessed by adults, likely to include issues such as abuse, harassment, or exposure to content encouraging self-harm or eating disorders. They will need to make clear in their terms and conditions what is and is not acceptable on their site, and enforce this.”

The coroner’s report covers all three bases in some ways, but it shares an important feature with the much more limited Online Safety factsheet: undesirable content is identified by the single word “harmful”, which is not defined, apart from a suggestion in the factsheet that it is likely to include content “encouraging” self-harm – and “encouraging” is not defined either.

Multiple charities with an interest in the mental wellbeing of young people wrote a letter to the then prime minister Liz Truss in October, in which they attempted to unpack the idea of harmfulness in a constructive way:

“We are writing to urge you to ensure that the regulation of harmful suicide and self-harm content is retained within the Online Safety Bill…[defined as]

  • Information, instructions, and advice on methods of self-harm and suicide  
  • Content that portrays self-harm and suicide as positive or desirable
  • Graphic descriptions or depictions of self-harm and suicide.”

The first two criteria look as if they ought to be amenable to careful definition; “graphic” is more problematic. One person’s clear and vividly explicit detail is another’s matter-of-fact account. Or does it mean any image (depiction) at all that is not shaded or pixelated – descriptions of how to look after or conceal your wounds, for example?

There seem to me to be two risks here. The first is that decisions will be left to Ofcom and (presumably) to the courts. The second is (perhaps less likely) that the tech companies will decide they can’t be bothered with all this and will go for some variant of blanket suppression of interactions about self-harm and suicide. Neither is desirable: the former if it leads to the sort of adversarial debate that failed to clarify these questions during the inquest, the latter if it ends up denying access to helpful content. There is emerging research that can contribute, and health professionals should lead in arguing for its inclusion in decision-making, so that a realistic balance is struck between the risks and benefits of social media in an important area of public health policy.

