Blog

Written on 31 May 2021 by Professor Diane Coyle and Verity Harding

G7 must discuss the future of AI

"The future of artificial intelligence should be discussed at the G7 Summit and the UK should establish itself as a world leader when it comes to supporting ‘strict but permissive’ AI innovation," say Professor Diane Coyle and Verity Harding.

Next month, if we’re lucky, we might get a chance to see something no one has seen before: a gaggle of world leaders eating a Cornish pasty. The United Kingdom, currently holding the presidency of the prestigious ‘group of seven’ (G7) rich democratic nations, will host the most powerful people in the world at a hotel in Carbis Bay, Cornwall. Inviting them to the British seaside might seem risky, not least because of our typical summer weather.

But the G7 presidency has thrown the UK a huge piece of good fortune, offering the chance to get President Joe Biden - lukewarm about the UK - on our soil and listening to our priorities. While climate change and the coronavirus pandemic pose the most pressing and prominent challenges, the Prime Minister and his team should not neglect the need for international agreement concerning the future of Artificial Intelligence (AI).

The UK has always made an outsized contribution to AI research and has a strong history of technological leadership dating from Alan Turing. The famous ‘Turing Test’ is still cited by scientists across the globe as a measure of how powerful AI might become. Our universities are some of the best in the world, attracting thousands of international students every year and producing world-leading AI research. Partly as a result of this concentration of talent and creativity, the UK has also produced several cutting-edge AI companies. Crucially, given our nation’s mixed record of commercialising its scientific prowess, AI is widely seen as a sector where a technological lead can translate into economic success.

Just as importantly, the UK also has a proud history of regulating new technology effectively. In the 1980s we led the world in the governance of what was at the time a new and exciting but also challenging capability - human embryological research. The birth of Louise Brown in 1978, the first baby to be born using IVF techniques, was an extraordinary medical breakthrough. But the ability to create life in a laboratory raised unexplored legal and ethical questions. Under the skilful chairing of the moral philosopher Baroness Mary Warnock, a committee of experts including scientists, theologians, social workers and legal scholars reached a consensus. The technology would be allowed to flourish, bringing immeasurable joy to those who benefitted from it, but it would be carefully scrutinised, reviewed, and managed by a professional regulatory body created by an Act of Parliament in 1990. The regulatory environment thus established is known as ‘strict but permissive’, and it is one of the most successful in the world.

As AI develops and becomes ever more powerful, who will determine how and when this technology is integrated into, or imposed on, the lives of ordinary citizens is a critical issue. There are important choices to be made with lasting consequences. Will we allow the use of AI to determine our ‘social credit scores’, tying together all the data the government holds about us to test how good a citizen we are? Are we comfortable with algorithms searching through all of our Internet traffic to pick out criminals? Is facial recognition technology racist? How should this be evaluated, and what would its status as evidence be? The Post Office Horizon scandal is a powerful reminder that technology can fail with appalling human consequences when there is no scrutiny or accountability.

And what about the positives too: how can AI be steered to deliver better public services or innovations that benefit society? One of the most promising areas of AI application is in tackling the climate crisis through better weather prediction and renewable grid management. The development of AI does not exist in a vacuum. Political and regulatory choices will be critical in influencing what gets funded, what the scientists build, and what people buy.

The UK has indeed been active in these global conversations as one of the first countries to start grappling with the questions raised by artificial intelligence. In 2018 the government set up a new Centre for Data Ethics and Innovation, an independent advisory body tasked with analysing the potential benefits and opportunities of data-driven technology, as well as its ethical quandaries. The UK was also one of the first nations to support the Global Partnership on AI, an initiative itself born of previous French and Canadian G7 presidencies, which aims to ensure international cooperation in the governance of AI.

But now the future of AI, and its governance, is at a crucial inflection point. Two of our allies, Europe and the United States, are poised to leap forward in determining how this technology will manifest in all of our lives, and we risk being left stuck in the middle. A third power, China, is investing heavily in AI research and development, but is unsurprisingly distant from conversations about international regulatory co-operation.

In the US a newly recharged White House is filling vacancies in the science and technology departments gutted by President Trump. The Trump team offered support for AI development and indeed signed up to the OECD’s principles. Unfortunately, it did so through an ‘America First’ prism, focused mainly on competing with the Chinese and creating American jobs. The result was an unnecessarily heated and polarised international environment. By contrast, President Biden has appointed the social scientist Dr Alondra Nelson, known for her research on how science, technology and social inequality interact. Under her leadership the new American government is much more likely to lean into the social, economic and political issues of AI. Meanwhile there are several powerful members of Congress, supported by a large national security infrastructure, who are pushing for American leadership in the face of a declared threat from China, and who in doing so frame AI development through a militaristic lens.

The EU, on the other hand, recently released its own AI proposals that will likely become legislation in the near future, categorising AI development based on grades of risk. Low-risk applications have no restrictions; “limited-risk” activities such as chatbots are permitted but with transparency obligations, so that a user knows they aren’t talking to a human. The higher-risk categories are defined as materially interfering with people’s lives, behaviour and jobs - everything from automated CV filtering to deploying “subliminal techniques beyond a person’s consciousness” - and may be subject to strict restrictions or outright bans. These are complex and significant decisions that the UK has no formal way to influence.

The British government is right to focus attention at the G7 on the urgent twin crises of climate and pandemic. But as we attempt to forge a new role for ourselves in the world, our technological know-how and regulatory experience must not be forgotten in shaping AI governance. We should assert that the UK will champion international dialogue, fairness, caution and deliberation. We will support our allies in shaping AI to meet our shared democratic values, while avoiding the zero-sum game of competing nationalism. Offering a vision for AI governance focused neither on national dominance nor on risk-averse limitations, we can demonstrate the type of nation we want to be as we forge a new global identity: one that is innovative and exciting, supports and values science, but is unwilling to be blinded by techno-hype into putting our citizens, especially the most vulnerable, at risk.

The UK can and should be a world leader when it comes to supporting ‘strict but permissive’ AI innovation. We have proven before that we can do it. The beautiful beaches of Cornwall are an auspicious place to start.


Image: "3KB15708a_C" by Kernowfile is licensed under CC BY-NC-ND 2.0

  • About the author

    Professor Diane Coyle, Bennett Professor of Public Policy

    Professor Coyle co-directs the Institute with Professor Kenny. She is heading research under the progress and productivity themes.

  • About the author

    Verity Harding, Visiting Fellow

    Verity Harding is a Visiting Fellow at the Bennett Institute. She is on secondment from DeepMind where she was Head of the Policy and Partnerships team, a research and engagement unit established to explore the real-world impacts of artificial intelligence so that it works for the ...