Published on 26 June 2023

Generative AI in low-resourced contexts: considerations for innovators and policymakers

Generative AI use for social impact and business in developing countries is not getting sufficient attention in the current global discourse, writes Ayantola Alayande. There needs to be more collaboration between individual national governments and AI innovators, as well as between national stakeholders and the international community.

Since the release of ChatGPT in November 2022, generative artificial intelligence (AI) has grown in scale across several economic and social sectors, including health, agriculture, education, and research.  Much of the policy discussion around its use has been driven by perspectives from developed economies — notably the US, Europe, and China — where most AI innovations are domiciled.

There are, however, numerous cases of generative AI use for social impact and business in developing countries that have not garnered sufficient attention in the current global discourse. Whether for natural disaster preparedness in Mozambique, farm management and advisory practices in India, or serving as an HIV management peer coach and aiding more accurate diagnosis in South Africa, generative AI is seeing rapid adoption in several low-resourced contexts. AI solutions developed in Africa are also being widely used outside the continent. For example, minohealth AI Labs' large language model (LLM) for radiological diagnosis was developed in Ghana but is also used by clients in the Philippines, China, and elsewhere.

AI usage in low- and middle-income countries (LMICs) presents opportunities as well as risks that are distinct from those found elsewhere. Certain challenges in the product development process are also peculiar to LMICs, such as access to quality, representative data and data models, talent acquisition and development, and access to finance. As such, discussions around generative AI must draw attention to these barriers to innovation as well as to the policy challenges.

Innovation challenges and societal risks

The biggest challenge for AI innovators in LMICs is structural. Implementing new technology within a complex sociotechnical system such as public services or education is difficult in itself, and it is compounded by huge infrastructure deficits: poor power supply and internet access, a dearth of data storage centres, and a lack of access to cloud computing systems. These conditions demand constant iteration during and after the product development stage.

Innovators also face institutional dysfunction and a lack of trust in new technologies among governments and wider societal stakeholders. Relatedly, weak government investment in technology has provided little incentive for AI innovation to flourish in LMICs.

Talent shortage is another prominent challenge. As of 2019, almost 60 percent of all top-tier AI researchers resided in the US, roughly six times the share in China or Europe (about 10 percent each), while India was home to approximately 386 of the 22,000 PhD-educated AI scientists globally. Africa, meanwhile, faces a brain drain of data scientists, with US and European big tech companies hiring many of its top talents.

A lack of quality, representative data is another barrier to AI innovation in developing countries. Innovators from low-resourced contexts often rely on models developed by big tech companies based in the US, China, and Europe, yet the training data for such models might be inappropriate for LMIC contexts. Moreover, if such models are not open access, LMIC innovators incur huge financial costs to use them.

When it comes to risks, Dr Rachel Adams of Research ICT Africa proposes that generative AI risks in LMICs should be viewed at three levels:

  1. System-level harms — the technology can have inherent bias, data privacy issues, hallucination (in other words, the models' confident invention of incorrect answers), and security risks.
  2. Structural-level harms — the possibility of AI amplifying existing societal or economic inequalities, such as marginalisation and job displacement.
  3. Harms to social values — AI's ability to distort democratic or social views through misinformation, polarisation, and deepfakes.

All of these risks could materialise at the individual, collective, and societal levels.

Considerations before further deployment

Adams argues that evidence is needed to better understand the impacts, implications, risks, and opportunities that generative AI technologies present in low-resourced contexts such as Africa.

First, the social impact of existing generative AI use needs to be assessed. Policymakers and AI innovators should gauge public perceptions of the technology, including where, how, to what extent, and at what scale the public wants it to be used. This would create a positive overlap between government priorities and end-user goals.

Second, stakeholder buy-in needs to be fostered across the various sectors where AI is used. This is especially important in the public sector, where, for instance, the use of AI for budget forecasting could be frustrated by the budget-padding practices of corrupt officials. At the same time, more technology is not always the solution, and clear guidelines for AI procurement in the public sector could help identify where its use is not needed.

Third, regulatory agencies and innovators must collaborate to create mechanisms that validate the quality and inclusiveness of the datasets and models that go into the general deployment of AI. One example is the approach of Nigeria's National Information Technology Development Agency (NITDA), which co-developed its National AI Policy with a team of 63 industry experts selected from over 350 applications to its call for volunteers at home and abroad.

Additionally, as opposed to the current regulatory approach of generally restricting specific AI models, policy should focus on sector-specific uses of AI. This way, policymakers, with the help of the tech community, can identify which models are too risky to deploy in certain sectors but pose little to no harm in others. Importantly, AI innovations must specify their own human moderation mechanisms (for example, generative media platforms should have clear provenance procedures before they are deployed in newsrooms).

One challenge with advancing an extensive regulatory framework is that many developing countries do not have government units that oversee AI development; as of 2022, only four of 54 African countries had finalised a national AI strategy.

Finally, a few dilemmas emerge with respect to governing AI in LMICs. It is important to address whether developing countries should adopt the US's 'innovation-first' approach to AI governance, Europe's 'privacy-first' model, a hybrid of the two, or their own context-specific regulations. When adopting international standards, notes Paul Plantinga, policymakers must also be mindful of sector-specific protocols such as the United Nations World Health Organisation (WHO) and International Telecommunication Union (ITU) Benchmarking Process for Artificial Intelligence in Health. Whichever governance leaning is adopted, local context is crucial when developing regulatory frameworks: for instance, in designing digital rights policies, Halifet Ayemohammed Yusuf argues that African governments might find the African Charter on Human and Peoples' Rights a better fit than the Universal Declaration of Human Rights.

Conclusion

Generative AI use in LMICs presents distinct opportunities and challenges, but we have been here before. Developing countries can draw immense lessons from the general evolution of digital technologies since the late 1990s. While generative AI technology is peculiar, its emergence is not sudden, and the sociotechnical considerations surrounding it are similar to those Global South policymakers are already acquainted with. Rather than reinventing the wheel, efforts should be geared towards amplifying collaboration between individual national governments and AI innovators, as well as between national stakeholders and the international community.


This blog is based on a roundtable held by Research ICT Africa and the Bill and Melinda Gates Foundation, as part of the AfricAI Conference in Kigali, June 2023.


The views and opinions expressed in this post are those of the author(s) and not necessarily those of the Bennett Institute for Public Policy.

Authors

Ayantola Alayande

Research Assistant

Ayantola is a Research Assistant on the Digitalisation of the Public Sector project, which explores various aspects of digital government in the UK and other countries. Prior to joining the...
