Published on 3 August 2020

Lessons from history: what can past technological breakthroughs teach the AI community today?

Verity Harding discusses the importance of geopolitics in governing artificial intelligence and the integration of advanced technologies for better global health and prosperity.

In 1967 the UN Outer Space Treaty was signed, in the middle of the Cold War and just two decades after the end of World War Two. Before any human being had ever walked on the moon, the Treaty declared space exploration “the province of all mankind”. An astronaut, itself a relatively new concept, was to be an “envoy of mankind” first, a representative of their nation state second.

More than fifty years later, the (admirably short) text remains the underpinning of global space law and has been called “a shining example of multilateralism”[1]. At a time of peak global volatility, with superpowers seemingly on the edge of nuclear war, this was a remarkable achievement, and it serves as an inspiration to all of those today attempting to steer new technological advancements towards global benefit.

Yet this does not mean that it was perfect. Indeed, the Treaty’s limitations mean that it is unlikely to last as a governance framework for the future. The proliferation of private satellite companies, the concerning trend of space debris, the United States’ desire to launch a ‘Space Force’ and the UK’s intention to prepare for ‘satellite wars’ are all issues with which the Treaty cannot help. Its authors did not anticipate that the technology would become so much cheaper, nor that non-state actors would ever have the resources to achieve space exploration. The capabilities of private companies such as SpaceX and Blue Origin are not accounted for, nor are the general activities of commercial organisations, such as the acquisition and sale of assets.

Nevertheless, it stands as a highly successful example of geopolitical cooperation on an emerging and uncertain technology against a backdrop of global instability. Its constraints do not detract from its ingenuity, and its inability to predict every future policy concern does not render the effort pointless. In fact, it is the Treaty’s limits which in some ways define its strengths: its limited nature was a feature, not a flaw. Not letting the best be the enemy of the good meant that by the time man landed on the moon we had a global political framework as a foundation on which to build, binding the world together in a new endeavour. The authors and signatories recognised the value in this, rather than allowing the impossibility of exact prediction to hold them back from putting any measures in place at all. And what was seemingly a very simple concept – that space exploration and exploitation belonged to the world and not to any one nation or company – turned out to be quite revolutionary.

From this there is much to learn, particularly for those focused on the global governance of increasingly powerful Artificial Intelligence (AI). There are evident similarities, from the potential magnitude of the scientific advance to the uncertainty about when and how it will be achieved. There are concerns about all that could go wrong, and international tensions form the backdrop; in fact, the unnerving posturing between global superpowers feels all too familiar. These factors often combine into a scepticism within the AI community that anything approaching global agreement, even on general principles, is possible today.

And yet the UN Outer Space Treaty, which paved the way for safe innovation in a nascent area of science, was achieved at a time at least as unsafe and complicated as today – if not considerably more so. This surely shows us that regulatory challenges on the scale of AI have been overcome before, and that there is much we can learn from the past to enhance our endeavours today.

‘Lessons from History’: a research project

So what is different this time? What can we, the community of AI researchers, scholars, activists and practitioners who want to see a peaceful and beneficial future, learn from this and other examples of big scientific breakthroughs from the past?

These are the questions that I will spend the next year researching at the Bennett Institute, attempting a comparative analysis of recent transformative inventions.

The research is based on a workshop I co-hosted with Prof Diane Coyle in 2019, designed to test the hypothesis that the AI industry and wider community had much to learn from earlier examples of innovative public policy vehicles. I plan to investigate several transformative technologies from the recent past, which were all received very differently by wider society, and draw out the key themes and lessons for AI policy today.

From each of the examples I will be looking at – including the UK’s Warnock Committee and the Human Fertilisation and Embryology Act 1990, the Internet Corporation for Assigned Names and Numbers (ICANN), the UN Outer Space Treaty and the European ban on genetically modified crops – it is possible to glean strikingly relevant lessons to take with us in the quest to ensure AI remains safe, beneficial and under meaningful human control.

Clear themes emerge which help paint a picture of what a successful model of societal engagement, acceptance and regulation can be. And there is much to be learned from the differences, from the characteristics of the technology itself to the business models and the nature of the scientific community involved in its creation.

These themes and the resulting recommendations for the AI community are what I’ll be writing about during my time with the Bennett Institute, but – at a time when the world is trying to pull together in search of a vaccine for Covid-19 – one of them feels especially urgent.

Early conclusions: the importance of geopolitical leadership

Most obviously, the presence or absence of strong geopolitical leadership shines through. To take the case of the UN Outer Space Treaty, President Kennedy’s determination to put a man on the moon “before this decade is out” undoubtedly galvanised policy responses to a nascent technological capacity. But this kind of leadership can also be seen in the UK’s approach to human embryology research and in the regulation of domain names at the advent of the Internet.

In the former case, the government’s initiation of a committee of inquiry led by the philosopher Mary Warnock proved critical to building both a public and a scientific consensus around the ethical principles which would govern new scientific possibilities relating to IVF. In the latter, the Clinton administration’s desire to find a framework to govern the early Internet sat neatly with the President’s Third Way philosophy. In both cases, the guiding hand of government was necessary for the facilitation and decision-making that led to authoritative regimes, on the basis of which further innovation was possible.

Any national government is, of course, guided by its own set of localised values and realities. But geopolitics, I would argue, always underlies any such ambition. While national concerns dominate communication with the electorate, it is rare for a government’s strategy to be imagined without the practical constraints of geopolitics. In some cases it is a drive to create, for example President Clinton’s hope of forging a global centrist agenda buttressed by American technical leadership. In others it is a drive to prevent, such as a strategy for domination or one to limit the ascendance of another power or bloc. In all cases, geopolitics is ever-present and entwined with a country’s technology strategy.

The same will undoubtedly be true of the direction that AI takes, and as such it is important that we understand what interests and imperatives persuade governments to act, or not to act, when it comes to scientific and technological innovation.

Applying the lessons: the geopolitics of AI

In the contemporary discourse surrounding AI, geopolitics serves primarily to add sensationalism to an already over-hyped debate. The narrative is one of an ‘arms race’ between the US and China, with other nation states and international bodies largely ignored. The fight for the best talent, the ethics of private companies, the integrity of projects, research and people: all of it, when geopolitics features at all, is viewed through the prism of a ‘new cold war’.

To be sure, AI has featured prominently within increasingly fractured US-China relations, and that dynamic will have consequences for the development of the industry and the scientific community. This context is incredibly important for understanding the current environment in which any product is built or research conducted, and it will be critical to forging a path towards safe and beneficial AI.

But while relevant, this geopolitical prism alone is not sufficient for those in public policy hoping to craft a responsible approach to governing AI, either now or in the future. The immaturity of the ‘geopolitics of AI’ field leaves the picture incomplete and unclear. The significance of strengthening relationships between the UK, Canada and France is under-played. The potential influence of the post-Brexit European Union is under-analysed. And the important role of other global players, such as Japan and the countries of Africa, is under-valued. An arms-race narrative pitting China against the US is too reductive to encompass these subtleties, and it will lead to conclusions which are, if not wrong, then deficient in their analysis.

The success of the 1967 UN Treaty shows us that we must dig deeper to understand various geopolitical motives and use them to encourage greater cooperation and ambition for our shared world. Most of our historical examples show that political leaders who focus not only on national success but also on global health and prosperity can galvanise multilateral cooperation, and that this cooperation in turn leads to a healthier integration of advanced technologies.

Pioneering in AI policy

The challenges are great, and the lessons of the past cannot simply be superimposed onto the present. What is possible geopolitically, however, is one area where AI scientists, practitioners and policymakers can take heart from historical precedent.

We all want to see responsible AI, innovative law and responsive, adaptable public policy. Before we get too deep or too technical, analysing what has and hasn’t worked in the past will give us the best foundations upon which to build.


Reference:
[1] P.J. Blount and Mahulena Hofmann (eds.), Innovation in Outer Space: International and African Legal Perspectives – Lessons Learned: 5th & 6th Luxembourg Workshops on Space and Satellite Communication Law.


The views and opinions expressed in this post are those of the author(s) and not necessarily those of the Bennett Institute for Public Policy.

Authors

Verity Harding

Director of the AI and Geopolitics Project (AIxGEO)

Verity Harding is a globally recognised expert in AI, technology and public policy. She is currently Director of the AI and Geopolitics Project (AIxGEO) at the Bennett Institute for Public...
