How could governments design a regulator for social media? Martin Stanley offers his thoughts on how this could be achieved.

The suicide of Molly Russell and the live streaming of the Christchurch massacre are only the latest examples of the harms laid at the door of social media. The companies, responding to pressure from advertisers as well as wider society, are self-censoring by deploying AI as well as large numbers of moderators, and by refusing to host material from the likes of Alex Jones and Tommy Robinson. But many feel that Big Tech should be forced to do much more, and should in particular be made subject to some form of duty of care – see my earlier blog Regulating in a Digital World.
There is, however, much force in Graham Smith’s argument that it would be difficult, verging on impossible, to draft a duty of care with sufficient certainty and precision that it could be enforced through the courts. The answer might be a regulator, but Graham argues that regulatory “flexibility, discretionary power and nimbleness … are a vice, not a virtue … when freedom of individual speech is at stake … it is unclear how Parliament delegating policy matters to an independent regulator is any more acceptable than delegating them to a policeman, judge or jury.” These are real concerns but… the HFEA and NICE do a pretty good job in their politically and ethically tricky areas of human fertility and NHS medicine availability respectively. So here are some thoughts about how to design a social media regulator.
To begin at the beginning, HMG should publish a Green Paper – a range of ideas (especially concerning the scope and nature of the regulator’s duties) intended to elicit views and information from interested individuals and organisations – not a White Paper containing firm proposals. (I understand that HMG does in fact intend to proceed straight to a White Paper. If so, let us hope that it has a substantial green tinge to it.)
In particular, it must be for politicians, following consultation, to tell the regulator where the “acceptable/unacceptable line” is to be drawn between various potential harms.
Many would agree that the regulator should work hard to minimise the harm that might be done to vulnerable users, and to others via copycat behaviour.
- But should regulatory prohibition extend to the dangerous nonsense peddled by anti-vaxxers?
- And nonsense cancer cures?
- What about homeopaths, and climate change deniers?
- Should ‘Big Tech’ be forced to weed out ‘bots’?
The regulator’s subsequent policy decisions should be taken by its externally appointed part-time Commissioners (or similar designation) following extensive consultation as well as discussion with, and advice from, staff. The regulator should therefore be free-standing and not part of Ofcom, for instance. Its Commissioners must be given the time to think about and debate the issues, meet stakeholders, approve consultation and decision documents, and so on. Staff, including technical and other experts, should have important roles, but they must not be allowed to mark their own homework in these highly contentious areas.
The regulator should follow good consultation practice by publishing issues statements and ‘minded to’ provisional findings before taking important policy decisions. This will mean that it might take four or five months to develop a new or updated policy, but the result will then be much more likely to be respected by stakeholders, and upheld if judicially reviewed.
The resultant policies must be designed and communicated so that they can easily be understood by the general public, and by staff charged with their implementation. But there should be an appeals process – though probably only for those who feel that their freedom of speech is being unreasonably constrained.
The regulator should also follow good regulatory practice by avoiding regulatory capture and being pretty sceptical about the arguments and information submitted by Big Tech. Despite their protestations to the contrary, the companies have shown themselves perfectly capable of changing their algorithms when required to do so by advertisers or federal law – for instance, to stop advertisers targeting only those of a certain race, gender or age group. See also Dan Hett’s article in the Guardian, 20 March 2019.
It may therefore make sense for the regulator to focus on an organisation’s algorithms and systems, rather than on, say, Instagram’s decision-making in individual cases.
Last, but not least, both the government and the regulator itself should be prepared – keen even – to adapt the legislation and policies in the light of experience. Digital regulation has totally failed to keep pace with the rate of change in platform design and platform operators’ ethics and behaviour. (Facebook’s proposed ‘pivot’ will add another challenge.) Thought needs to be given to how the regulator might be empowered to make rapid and immediately enforceable decisions (a bit like legal injunctions?) subject to subsequent amendment or withdrawal following debate with affected parties.
It is absolutely right to be concerned about freedom of speech.
“If you don’t stand up for the stuff you don’t like, when they come for the stuff you do like, you’ve already lost”
Neil Gaiman
But it is also right to be concerned about the significant damage being done by certain activity on social media. It will not be easy to get the balance right, but an intelligent, truly impartial and properly empowered regulator, willing to learn from experience, may be the best way forward.
Read further articles by Martin Stanley on Understanding Regulation.
The views and opinions expressed in this post are those of the author(s) and not necessarily those of the Bennett Institute for Public Policy.