Platforms are public spaces, so let’s treat them that way
Applying a ‘duty of care’ provides flexibility as well as protection
When considering the regulation of the internet, and social networks in particular, two issues have challenged policymakers trying to reduce harm: whether platforms should be treated as publishers (and so liable for the content of others) or as mere intermediaries, and how content can be monitored at scale. There is, however, another approach: to focus on what service providers are directly responsible for – the platform that each provides.
Platforms can be seen as forms of public space. Each carries its own expectations of appropriate behaviour and its own risks of participation, for which owners or controllers have a responsibility to their users. The developer of the platform controls the environment: the code they produce enables everything that happens within it. Actions on a platform flow from corporate decisions about the software, the terms of service and the resources available to enforce those terms. So, just as ‘persuasive design’ can nudge users to act one way or another, a service provider can design in safety features that reduce the risk of reasonably foreseeable harm to its users.
At Carnegie UK Trust, we suggest that service providers should be under a statutory ‘duty of care’ to their users to provide an appropriately safe space. This does not mean that service providers must eradicate all harm. But they should take care to minimise it by adopting a precautionary principle focused on the risk of types of harm arising, rather than on the details of causation.
This approach has a number of advantages. Specifying high-level objectives for safeguarding the public leaves service providers room to decide how to act, taking account of the type of service they offer, the risks it poses (particularly to vulnerable users) and the tools and technologies available at the time. The approach builds on the sector’s knowledge base and allows for future-proofing: the steps taken are not prescribed and can change as their effectiveness becomes clear and as technologies and their uses develop. A final advantage concerns the content itself. Although the duty of care does not directly regulate content (though some national rules doing so may well remain in place), users might respond to a safer environment by changing their behaviours. Problematic content and behaviours (e.g. misogynistic abuse) might then not become normalised and, perhaps, not arise in the first place.
In its emphasis on the architecture of the platform, this proposal resembles the ‘by design’ approach seen in data protection and information security (for example, in the EU General Data Protection Regulation). That approach is often thought of as ‘baked in’ at the start of service or product design and not revisited thereafter. The statutory duty of care is different. While some decisions and structures are fundamental to the service and will be left unchanged, others may need to be revised, adapted or abandoned altogether. The statutory duty of care is therefore not a one-off action but an ongoing, flexible and future-proofed responsibility that can be applied effectively to fast-moving technologies and rapidly emerging new services.
Professor Lorna Woods,
University of Essex, in conjunction with Carnegie UK Trust
- Thursday, 14 February 2019