Published on 12 October 2023

What will it take for AI to be used responsibly?

Two things are critical to achieving Responsible AI in practice, write Ayantola Alayande and Lucy Hampton: a shared understanding of what R-AI means, grounded in broader societal values, and an outline of the roles required of different stakeholders.

Rapid advancements in Artificial Intelligence (AI) – such as the development of the foundation models powering ChatGPT – and the resultant explosion in usage have sharpened the focus on Responsible AI (R-AI): in other words, what will ensure that these new capabilities are used safely and for the benefit of the economy and society? Yet recent layoffs of ethics teams at big tech firms and stalling corporate investment in R-AI point to its de-prioritisation. The very idea of developing a ‘responsible’ framework sits uneasily with tech companies’ desire to ‘move fast and break things’.

There is also a lack of collaboration among seemingly ‘different’ stakeholder groups; for instance, between academia and policymakers, between the arts and humanities and technical or computing research groups, and between tech innovators and civil society.  

Closing this divide is the focus of new research initiatives such as the Bridging Responsible Artificial Intelligence Divide (BRAID) programme, newly launched by the Arts and Humanities Research Council (AHRC) and the University of Edinburgh, and the Responsible AI programme led by the University of Southampton. This is a large and pressing area of work, and more initiatives are in the pipeline.

Two things are critical to achieving R-AI in practice: a shared understanding of what R-AI means based on broader societal values, and an outline of the roles required of different stakeholders. 

What is Responsible AI?

The phrase ‘Responsible AI’ and its variants ‘Ethical AI’ and ‘Trustworthy AI’ encompass the process of designing, training and deploying AI models in a way that is ethical and safe. However, some have argued that such terms risk overgeneralisation and place the emphasis on ‘not doing harm’ with AI rather than actively ‘doing good’ with it. As a result, scholars now call for a move away from R-AI as a set of generic technical principles towards a philosophical conceptualisation of ‘what is good AI?’ in each given context. One good example is Abeba Birhane’s proposition for relational ethics, i.e. crafting AI ethics with attention to the unequal relations between groups, while prioritising those most likely to be disproportionately affected by the technology.

What does an R-AI approach look like in practice?

Primarily, R-AI in practice means going beyond the basic guardrails already being implemented by tech companies. Companies must do more than the minimum and embrace R-AI as an ongoing practice rather than a one-off project. This requires senior management in both AI-producing and AI-adopting organisations to own the process of integrating organisation-wide R-AI practices into every link of the value chain. Additionally, R-AI discussions beyond the companies themselves must cover not only safety practices, but also concerns around human rights, social justice, and the balance of power: between the public and the few private firms that control the data and computing infrastructure, and between privileged and marginalised groups.

Why is this approach needed now?

This kind of multistakeholder approach to R-AI is important for two reasons. The first is the rapid pace of AI innovation. With new forms of AI emerging every day, stakeholders must constantly revisit what constitutes ‘ethics’. For example, initial concerns about the technology were largely about job displacement, but the advent of openly accessible Large Language Models (LLMs) has brought terms such as ‘safety’, ‘fairness’ and ‘robustness’ into currency. Beyond ‘explainability’ and ‘de-biasing’ models, some experts are also beginning to grapple with the potential existential risks of more advanced AI technologies.

Secondly, existing frameworks such as ECCOLA, while comprehensive, do not apply in the same manner across sectors and demographics. For instance, applying LLMs to sexual health education or gender-based violence prediction in low-income communities poses far greater risks, and demands more explicability, than, say, using them for marketing or content creation in a consulting firm.

What gaps need to be bridged to achieve this?

To achieve all of this, three crucial gaps must be bridged. The first is within academia, where R-AI is currently framed as a purely technical or scientific pursuit. Yet arts and humanities scholars are better equipped to highlight the dynamic and creative aspects of human life that computation may be blind to, and to bring a historical lens to our understanding of technology’s impact on society. Blending big data with the humanities’ ethnographic insights can also enrich the AI policymaking process.

The second gap is between academia and policy. A study of how policymakers and academics in the US, UK and Europe perceive AI use in the education sector reveals a divergence in priorities: academics are concerned about the methodological and pedagogical impact of the technology, while policymakers focus on its economic impacts. Policymakers also sometimes conflate distinct AI governance issues (e.g. values versus standards), or confuse AI-specific concerns with those about other digital technologies (see, for example, US lawmakers’ line of questioning of TikTok CEO Shou Zi Chew during his appearance before Congress earlier this year). More collaboration between policymakers and a broad community of academics would ensure that policy benefits substantially from research.

What is the role of civil society?

The third gap is the exclusion of the rest of society. Current conversations about AI are dominated by industry, which develops almost all cutting-edge models and has a growing role in AI research in general. For example, 91% of benchmarks (used for measuring progress in AI) originate from industry, while research papers with one or more industry co-authors at leading conferences doubled to 38% in 2020.

The field of AI is wealthy, white, Western and male, yet the definition of responsible AI is contested and context-sensitive. It is therefore urgent that a broader swathe of society is involved in these conversations, to ensure diversity of perspectives and experiences. This matters not only in the more abstract discussions about values, but also in anticipating unintended consequences of deployment. Marginalised voices are particularly crucial, both because there is less data available on these groups, resulting in lower model accuracy, and because AI can entrench existing inequalities.

Public consultation is made all the more pressing by the increasing use of automated decision-making in service provision, such as in determining welfare eligibility, policing, credit ratings and healthcare. In theory, these systems have the potential to reduce bias by nullifying the influence of irrelevant characteristics. But the consequences of faulty ‘suspicion machines’ can be dire: in one high-profile example, an automated system caused more than 40,000 people to be wrongfully flagged for unemployment insurance fraud in Michigan between 2013 and 2015. 

The technical nature of AI can indeed be a barrier to productive conversations. In a 2022 survey by the Centre for Data Ethics and Innovation, only 13% of respondents could give a definition of AI, and many are unaware of how prevalent automated decision-making systems already are: in a 2018 survey by the Royal Society of Arts and YouGov, only 32% of respondents were aware of their use in healthcare. So how can the public be consulted meaningfully?

Successful public engagement can take many shapes. At one end of the spectrum are algorithmic bias bounty contests, pioneered at Twitter by Dr Rumman Chowdhury and Jutta Williams, where participants compete for cash to create tools to identify and mitigate biases in AI models. It is hoped that these will help develop a capable and independent ‘third party’ of researchers for auditing models. 
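To make the idea of an algorithmic bias audit more concrete, here is a minimal, hypothetical sketch in Python of the kind of check a bounty entrant’s tool might run: it measures how much a model’s positive-prediction rate differs across demographic groups. The data, group labels and choice of metric are illustrative assumptions, not details from the Twitter contest.

```python
# A minimal, illustrative bias-audit check: compare a model's positive-prediction
# rates across demographic groups (the "demographic parity" gap).
# The predictions and group labels below are hypothetical.

from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return (largest gap in positive-prediction rate between groups, per-group rates)."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit data: 0/1 model outputs plus each person's group membership.
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap, rates = demographic_parity_gap(preds, groups)
print(f"Positive rates by group: {rates}")
print(f"Demographic parity gap:  {gap:.2f}")  # a large gap would flag the model for review
```

Real bounty entries target subtler harms, such as the image-cropping bias examined in Twitter’s original contest, but the underlying principle is the same: quantifying disparities so that they can be examined and contested by independent researchers.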

At the other end are citizens’ assemblies, where randomly selected citizens meet to deliberate on a topic; Germany recently convened one on AI in health. The functioning of such assemblies can itself be improved by AI, as the use of the digital platform Pol.is in Taiwan shows: it uses machine learning to identify points of consensus in large-scale conversations. OpenAI has also expressed enthusiasm for the concept and is launching a programme to award ten $100,000 grants for experiments in democratic decision-making on AI.
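For a sense of how such consensus-finding works mechanically, the sketch below is a rough illustration of the general approach, not Pol.is’s actual pipeline: it clusters a hypothetical participant-by-statement vote matrix into opinion groups and surfaces statements that every group tends to agree with.

```python
# Rough sketch of machine-learning consensus finding (illustrative, not Pol.is itself):
# participants vote agree (+1) / disagree (-1) / pass (0) on short statements;
# clustering the vote matrix reveals opinion groups, and statements with positive
# average votes in every group are surfaced as cross-group consensus.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical vote matrix: rows = participants, columns = statements.
votes = np.array([
    [ 1,  1, -1,  1,  1],
    [ 1,  1, -1,  1,  0],
    [ 1,  1, -1,  1,  1],
    [-1, -1,  1,  1,  1],
    [-1,  0,  1,  1,  1],
    [-1, -1,  1,  0,  1],
])

# Reduce to 2D, then group participants into opinion clusters.
coords = PCA(n_components=2).fit_transform(votes)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)

# A statement counts as "consensus" if its mean vote is clearly positive in every cluster.
for s in range(votes.shape[1]):
    group_means = [votes[labels == g, s].mean() for g in np.unique(labels)]
    if all(m > 0.5 for m in group_means):
        print(f"Statement {s} looks like cross-group consensus: {group_means}")
```

In practice such platforms work with thousands of participants and recompute groups as new votes arrive; the value lies in making points of cross-group agreement visible rather than amplifying the loudest voices.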

The broader regulatory ecosystem 

Yet for all their merits, public engagement initiatives must exist as part of a broader regulatory ecosystem. Governments are best equipped for representation at scale. Local governments, for example, can voice the specific concerns of their communities, especially prior to the rollout of new systems, and communicate clearly to citizens when and why automated decision-making systems are being used. National governments can help set out standards and regulations for the ethical use of AI (including the use of automated decision-making systems), provide funding for or access to the compute required for research, clarify liability for AI harms, conduct or facilitate model audits, and attract talented researchers from around the world to build R-AI.

International coordination is essential, given the danger of a ‘race to the bottom’ on standards and the diffuse nature of the AI supply chain. For example, it is important to ensure that advances in model safety or better worker protections in wealthy countries do not come at the expense of people in the Global South, who are often employed on low wages to label toxic content in training data and who bear the brunt of the climate impacts of AI. Yet apart from several EU initiatives (including the AI Act, proposed in 2021), international proposals are still mostly in their infancy. Some of these, set out in a recent paper by Ho et al. (2023), include a commission to establish scientific consensus on risks, a standards-setting organisation, a public-private partnership to provide access to cutting-edge models so that their benefits are widespread, and a CERN-style research project for AI.

Conclusion

In the context of the rapid diffusion of AI and the changing nature of the risks, it is urgent that we create and deploy AI responsibly. Yet we should not rely on big tech firms to do this. The public, academia, industry and government all need to be involved in ongoing conversations about what ‘responsible’ means as well as what we want from AI more broadly.  


This blog builds on the launch of the Bridging Responsible Artificial Intelligence Divide (BRAID) research programme held in London on 15 September 2023. 

Image: Wikimedia Hackathon 2013 by Sebastiaan ter Burg. Creative Commons Attribution 2.0 Generic


The views and opinions expressed in this post are those of the author(s) and not necessarily those of the Bennett Institute for Public Policy.

Authors

Ayantola Alayande

Research Assistant

Ayantola is a Research Assistant on the Digitalisation of the Public Sector project, which explores various aspects of digital government in the UK and other countries. Prior to joining the...

Lucy Hampton

Research Assistant

Lucy is a Research Assistant working on the Sectoral Productivity project, which investigates the drivers of productivity in different sectors. Her research interests include the economic impacts of artificial intelligence, research...

