Published on 20 June 2018

Developing AI for Government: What role and limits for the private sector?

As technology firms contribute to building artificial intelligence capacities for Government across the globe, we ask an international panel of policymakers and experts 'What should the role and limits for the private sector be in developing AI for Government?'

Last week the newly announced UK Government Centre for Data Ethics and Innovation issued a public consultation. The Centre sought views on governing not only private-sector data usage and technology development but also the use of artificial intelligence (AI) in the public sector, including for the ‘accurate targeting of public services to those most in need and more effective distribution of public resources.’

The UK is not alone in seeking to develop its AI capacity for public service provision and policymaking. The most ‘AI ready’ governments are arguably spread across Europe, Australasia, the Americas, and the Middle East. These governments depend on strong relationships with the technology sector, a dependency likely to intensify as technology giants realise the global scope of the AI business opportunity and governments continue to rely on external technological know-how. Governments are also beginning to approach start-ups as potential providers of AI solutions for the public sector, with state-backed ‘GovTech’ (technology for government) funds and competitions nurturing the sector’s growth.

For all the potential benefits of public-private AI collaborations, the relationship is complicated from the outset. The primary duty of government is to its citizens. The primary responsibility of private companies, by contrast, is to their shareholders. These priorities can clash: a problem that is not new, but may be magnified as the global GovTech market courts trillion-dollar valuations. Who will protect and control citizens’ data? And who will capture the economic value of the insights that can be gleaned from them? The answers to these questions are crucial to government legitimacy, particularly in regions where public-private partnerships have historically doubled as hotbeds of corruption and crony capitalism.

Citizen-centric governments around the world must ensure that they have sufficient in-house technological understanding to be cognisant and demanding clients. Now more than ever, governments also require skilled ‘translators’ equipped to navigate the different languages, cultures, priorities and ambitions both across the technological and policymaking dimensions of government and between tech firms and the state.

As technology firms increasingly contribute to developing AI for Government, at varied paces and in diverse political cultures across the globe, we asked an international panel of policymakers and experts ‘What should the role and limits for the private sector be in developing AI for Government?’

Dr Tanya Filer, Bennett Institute for Public Policy, University of Cambridge

‘Large organisations and governments are poorly equipped to deal with such disruption.’

Over the past couple of months, many governments around the world have announced or upgraded their digital strategies, often complementing them with an AI chapter. France, the UK, the USA, Russia and China have all expressed a clear will to take some leadership in this field, often with very different approaches. What distinguishes this case from the traditional process of policymaking is that governments have little in common with disruptive technologies, even less so with largely distributed ones. That most key developments are open source (on GitHub and in frameworks such as TensorFlow) is only one of the complexities that governments have to deal with. Large organisations and governments are poorly equipped to deal with such disruption. Disruptive projects for states have tended to end badly, with the exception of DARPA and a very limited number of other cases.

So, how should states use digital technology and AI in order to reap the productivity gains that they can provide? First, they must focus on principles rather than on the details of delivery, respecting and providing public services to the citizen rather than the consumer; second, they must enthusiastically subcontract to the private sector so as to strengthen their expertise in project and operations management; third, this delegation of public services must be strongly regulated, and that regulation enforced, where possible, through regional agreements with extraterritorial status, like the GDPR. Should they do this, governments may become twenty-first-century institutions; otherwise they risk becoming obsolete, progressively replaced by private actors over which they would have little control.

Gilles Babinet is France’s Digital Champion to the European Commission.

‘The future relationship with the private sector needs to be one of true partnership, not pliant submission.’

It’s time for the public sector to take control of its digital destiny. Since the 1980s successive governments have relied on the private sector for digital innovation. Inflexible, costly IT contracts; poor user experience; and an erosion of in-house expertise were the consequence. Since the early 2010s the Government Digital Service (GDS) has partially reversed this trend for central government, but vast swathes of local government and the NHS remain dependent on a small number of suppliers.

The government’s AI Grand Challenge risks perpetuating the mindset that only the private sector can innovate and drive economic growth. The government’s role appears to be limited to stimulating investment and modest digital infrastructure improvements. Yet, as the economist Mariana Mazzucato has shown, state investment and research have heralded startling technological developments: GPS, touchscreens, search algorithms, and the Internet all owe much to state nurturing.

The state should play a leading role in the future of AI, and the future relationship with the private sector needs to be one of true partnership, not one of pliant submission. Three policy changes would help. First, AI capability should be built up at the heart of government: AI researchers should be funded and seconded into government departments, and senior policymakers should receive AI training. Second, state R&D investment and funding should challenge AI start-ups to focus on core public service issues, such as A&E and social care. The universal nature of public services means that “lift and shift” models from the private sector will only work up to a point, often narrowly focussing on “cheap to serve” transactions and leaving the state to pick up the costly “edge cases”. The new Office for Artificial Intelligence could co-ordinate departmental “use case” requests. Finally, the state should maximise its digital assets. There is no reason why rich, machine-learning-friendly datasets from public services should be free for AI companies to access, and the state should look to take equity in start-ups to which it provides critical support. For example, Oxford University Hospitals recently took a £5m stake in Drayson Technologies.

The government is right to prioritise ethical AI, and a framework for state-private relationships is needed; data protection and control issues must be addressed. Citizens are more likely to favour the use of their data if they know there will be a public benefit, for instance through finding cures for diseases that they or their loved ones may have.

Antonio Weiss is a director at 2020 Delivery, a specialist consultancy advising UK public services on digital transformation. 2020 Delivery co-hosts ‘How Soon will Robots Run UK Public Services?’ on June 21.

‘Mexico’s new government will have to develop a comprehensive National AI Policy that can boost Mexico’s economic potential and promote inclusive social development.’

There is no better place to prove AI’s potential to drive transformative social change through poverty and inequality reduction than in the Global South. However, national and local governments in these regions have to understand the urgency of investing in AI and of developing comprehensive AI policies that consider ethical constraints and ensure that the benefits are shared by all.

One of the emerging economies leading change in this field is Mexico. Its government has made great progress in recent years on open data policies and digital infrastructure, two key foundations for an effective AI policy. It also recently published a national AI strategy for 2018, positioning Mexico as one of the first ten countries to have one. Civil society, industry and academia are also driving important change by laying the foundations for effective intersectoral collaboration through the recently created AI coalition, AI2030. Their aim is to help develop a long-term country agenda for AI, with ethics and sector co-responsibility as its core components. Furthermore, local governments, such as those of Jalisco and Mexico City, are pushing conversations and actions forward and helping accelerate the growth of local data and AI ecosystems.

There is no doubt that Mexico is heading in the right direction; nonetheless, many challenges must be addressed to ensure that AI drives genuinely inclusive social and economic impact. Mexico’s new government, to be elected in July 2018, will have to develop a comprehensive National AI Policy that can boost Mexico’s economic potential and promote inclusive social development. Among many key actions, it will need to continue efforts to close the digital skills gap, prioritise the digital agenda, invest in digital infrastructure, keep ensuring the publication of good-quality open data, and create a National AI Center to help incentivise academic and industry partnerships, define the areas with the highest potential, and help commercialise the best products. If leveraged correctly at the national and local level, AI can be a key tool to help improve Mexican public services for the lowest-earning 80% of citizens.

Constanza Gomez Mont is co-author of ‘Towards a national strategy for AI in Mexico’ (download at www.cminds.co/ai) and Founder & CEO of C Minds.


‘Given the high demand for experts in this area, many if not most machine learning systems in government will be built and maintained by the private sector.’

As we start to use ‘Artificial Intelligence’ in government we need to remember an obvious truth: governing is fundamentally a normative exercise. There is no ‘objectively right’ way of doing many of the things government does. This, of course, has always been true. What’s new is that when we use algorithms we are forced to transform these value judgments into code.
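A purely hypothetical sketch in Python makes the point concrete: even a toy eligibility rule hard-codes normative choices, from which factors count as ‘need’ to where the cut-off line falls. Every name and number below is invented for illustration, not drawn from any real system.

```python
# Toy, invented example: a rule for prioritising access to a public service.
# Every constant is a value judgment expressed as code: which factors count
# as 'need', how they are weighted, and where the eligibility line is drawn.

def priority_score(household_income: float, dependants: int) -> float:
    DEPENDANT_WEIGHT = 0.7         # why 0.7 rather than 0.5? a normative choice
    INCOME_NORMALISER = 25_000.0   # why this figure? another normative choice
    return DEPENDANT_WEIGHT * dependants - household_income / INCOME_NORMALISER

def is_eligible(household_income: float, dependants: int) -> bool:
    THRESHOLD = 0.0                # where the line falls is itself a value judgment
    return priority_score(household_income, dependants) > THRESHOLD

print(is_eligible(20_000, 2))  # True  -- under these particular choices
print(is_eligible(40_000, 1))  # False -- change a weight and the answer flips
```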

Given the high demand for experts in this area, many if not most machine learning systems in government will be built and maintained by the private sector. We are, however, not set up to oversee and control how firms decide to encode such value judgments when they build AI systems. Sometimes there is only a limited appreciation that these value judgments are being made in the first place.

Government needs to take responsibility and ensure that the normative choices embedded in machine learning systems are conscious and legitimate rather than arbitrary or unaccountable.

Consider COMPAS, the recidivism risk algorithm. The algorithm calculates a score indicating the likelihood of a defendant re-offending, which is used to inform judges’ decisions. The company had built a particular idea of fairness into the algorithm: its scores were equally predictive of re-offending for black and white defendants. The false positive rate, however, was roughly twice as high for black defendants.
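To see how such metrics can pull apart, here is a minimal sketch using made-up confusion-matrix counts rather than actual COMPAS data: when two groups have different underlying re-offending rates, a classifier can be equally predictive for both while its false positive rates diverge sharply.

```python
# A minimal sketch with invented numbers (not actual COMPAS data) showing
# how a binary risk classifier can satisfy one fairness notion -- equal
# predictive value across groups -- while false positive rates diverge.

def rates(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "PPV": tp / (tp + fp),  # of those flagged high risk, share who reoffended
        "FPR": fp / (fp + tn),  # of those who did not reoffend, share flagged
        "TPR": tp / (tp + fn),  # of those who reoffended, share flagged
    }

# Hypothetical confusion matrices for two groups with different base rates.
group_a = rates(tp=60, fp=40, fn=40, tn=60)    # base rate of re-offending: 0.5
group_b = rates(tp=36, fp=24, fn=24, tn=116)   # base rate of re-offending: 0.3

print(group_a)  # {'PPV': 0.6, 'FPR': 0.4,   'TPR': 0.6}
print(group_b)  # {'PPV': 0.6, 'FPR': ~0.17, 'TPR': 0.6}
# Equal PPV and TPR, yet group A's false positive rate is more than twice
# group B's. With unequal base rates these metrics cannot all be equalised
# at once (Chouldechova 2017; Kleinberg et al. 2016) -- so choosing which
# to equalise is itself a value judgment.
```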

As part of our research into the use of AI in government we have spoken both with the data scientists who created this algorithm and with the individuals who rely on its results. The trouble with COMPAS wasn’t only that it had this particular, arguably problematic notion of fairness encoded in it. Such choices are inevitable, and health visitors, social workers, teachers and civil servants make similar judgments all the time.

The problem was that the judges and parole officers relying on the results of the algorithm had little appreciation that this particular notion of fairness was being used. The problem was also that the private-sector data scientists who built the algorithm were not conscious that they were building certain normative views into it.

When it comes to human decision making we have, over time and partly by trial and error, built governance mechanisms which allow us to scrutinize the value judgments behind a decision, appeal against it and have it changed. As imperfect as these mechanisms may be, at least they exist.

We need those same mechanisms for “Artificial Intelligence” systems in government. Unless we build these controls and unless we do it in a way that is seen as legitimate by citizens, the great promise of AI in government will go unfulfilled.

Danny Buerkli is Programme Director at the Centre for Public Impact.


The views and opinions expressed in this post are those of the author(s) and not necessarily those of the Bennett Institute for Public Policy.

Authors

Dr Tanya Filer

Affiliated Researcher

Dr Tanya Filer leads the Digital State Project at the Bennett Institute for Public Policy. Her work focuses on GovTech (government technology) innovation ecosystems, and on digital government more broadly....
