Social media and mental health: we need much more attention to the detail of what regulation might entail
- October 25, 2022
Mainstream media coverage of the findings of the Molly Russell inquest has concluded that the case for direct action on regulation is now made. However, in these and other similar pieces there has been little discussion of what specifically such regulation might entail, or of the challenges of implementation. Here is a sample from just one newspaper:
- Molly Russell died while suffering negative effects of online content, rules coroner – Dan Milmo, 30 September
- How Molly Russell fell into a vortex of despair on social media – Dan Milmo, 30 September
- Now we know for sure that big tech peddles despair, we must protect ourselves – Zoe Williams, 7 October
- Instagram still hosting self-harm images after Molly Russell inquest verdict – Shanti Das, 8 October
- The Guardian view on digital dangers after Molly Russell: MPs must act – Guardian opinion, 4 October
- Molly Russell was trapped by the cruel algorithms of Pinterest and Instagram – John Naughton, 1 October
- The Molly Russell inquest damns Silicon Valley: there can be no more excuses – Peter Wanless and Beeban Kidron, 30 September
- Social media firms face a safety reckoning after the Molly Russell inquest – Dan Milmo, 5 October
Why is it so hard to say specifically what should be done? For sure there will be resistance from the tech companies, but an additional dilemma is that much of the content under consideration (about depression, self-harm and suicidal thinking) is seen as helpful by those who use social media – valued for its 24/7 availability and anonymity and for the supportive nature of sharing and viewing user-generated content. The challenge therefore is to eliminate the negative impact of social media without blocking access to its helpful elements.
Although the main emphasis in discussions about regulation has been on harmful content, that is only one of three aspects of the problem to be considered; the others are algorithmic pushing and problematic patterns of use.
A central issue is algorithmic pushing, which increases the duration and intensity of exposure. We know that people with existing mental health problems are more likely to spend long periods online and more likely to use sites with content related to self-harm, and there is some evidence that such extended exposure makes matters worse. So what limits should be set on these quantitative aspects of social media viewing?
The question of what to do about algorithmic “recommendations” is confounded with one about content. It is generally accepted that it would be no bad thing if searches for key terms (self-harm, suicide and so on) were to trigger responses offering links to helpful resources, which raises the question of how to identify specific content as helpful (OK to recommend) or harmful (not OK). In relation to moderation of content, harmfulness is usually defined by terms like glamourising, normalising and encouraging. These words are used without definition and yet proposed as the main criteria upon which any duty of care will be judged. How are they to be defined and identified in ways that don’t just rely on individual opinion?
Monitoring and responding to problematic patterns of use is already a key issue in debates about online gambling: how can it be achieved without driving away those who resent the surveillance and loss of privacy it entails?
Journalists may not see it as their job to grapple with these questions. Here are three suggestions from non-journalists who might be expected to have something important to say:
The coroner in Molly Russell’s case issued a prevention of future deaths report, in which he said:
“I recommend that consideration is given by the Government to reviewing the provision of internet platforms to children, with reference to harmful on-line content, separate platforms for adults and children, verification of age before joining the platform, provision of age specific content, the use of algorithms to provide content, the use of advertising and parental guardian or carer control including access to material viewed by a child, and retention of material viewed by a child.”
The government’s plans are to be found in its Online Safety Bill. When it published its most recent factsheet on the topic (April 2022), this is what it had to say:
“Platforms likely to be accessed by children will also have a duty to protect young people using their services from legal but harmful material such as self-harm or eating disorder content. Additionally, providers who publish or place pornographic content on their services will be required to prevent children from accessing that content.
The largest, highest-risk platforms will have to address named categories of legal but harmful material accessed by adults, likely to include issues such as abuse, harassment, or exposure to content encouraging self-harm or eating disorders. They will need to make clear in their terms and conditions what is and is not acceptable on their site, and enforce this.”
The coroner’s report covers all three bases in some way, but it shares an important feature with the much more limited Online Safety factsheet: undesirable content is identified by the single word “harmful”, which is not defined, apart from a suggestion in the factsheet that it is likely to include “encouraging” – which is not defined either.
Multiple charities with an interest in the mental wellbeing of young people wrote a letter to the then prime minister Liz Truss in October, in which they attempted to unpack the idea of harmfulness in a constructive way:
“We are writing to urge you to ensure that the regulation of harmful suicide and self-harm content is retained within the Online Safety Bill…[defined as]
- Information, instructions, and advice on methods of self-harm and suicide
- Content that portrays self-harm and suicide as positive or desirable
- Graphic descriptions or depictions of self-harm and suicide.”
The first two criteria look as if they ought to be amenable to careful definition; “graphic” is more problematic. One person’s clear and vividly explicit detail is another’s matter-of-fact account. Or does it mean any image (depiction) at all that is not shaded or pixelated – and would it cover descriptions of how to look after or conceal your wounds, for example?
There seem to me to be two risks here. The first is that decisions will be left to Ofcom and (presumably) to the courts. The second is (perhaps less likely) that the tech companies will decide they can’t be bothered with all this and will go for some variant of blanket suppression of interactions about self-harm and suicide. Neither is desirable: the former if it leads to the sort of adversarial debate that failed to clarify these questions during the inquest, the latter if it ends up denying access to helpful content. There is emerging research that can contribute, and health professionals should take the lead in arguing for its inclusion in decision-making, so that a realistic balance is struck between the risks and benefits of social media in an important area of public health policy.