Leaders from academia, tech businesses and the media took part in a Bennett Institute event to discuss AI’s impact on this year’s many elections, the regulation of AI, and the media’s role in combating deepfakes and misinformation. The speakers highlighted safe AI development, corporate responsibility, public perception, and the need to balance regulation and innovation to protect electoral integrity and enhance democracy.
On 21 May 2024, experts in AI from academia, business, and policy convened to explore the interplay between AI and electoral processes, the regulation of AI, and the media’s role in combating deepfakes and misinformation. The public event, hosted by the Bennett Institute for Public Policy, provided a platform for two panels of experts to discuss how best to navigate these critical intersections in a year when 68 countries are holding elections. Here are summaries of the main points to emerge.
Global perspectives on AI’s electoral impact
The impact of AI on elections varies significantly by country. In the United States, the focus is on AI’s influence on political discourse; in India, the primary concern is the proliferation of disinformation and misinformation; and in the UK, there are calls for political parties to establish a cross-party agreement on the responsible use of generative AI ahead of the general election. This is an opportunity to set a precedent for how elections should be run in democracies, said Prof Gina Neff, Executive Director of the Minderoo Centre for Technology and Democracy.
Commitment to safe AI development
Prof Neff reported that earlier that day, 16 leading tech companies, including OpenAI and Meta, had committed to the ‘Frontier AI Safety Commitments’. Under these commitments, leading AI developers will work with governments and AI Safety Institutes to set thresholds at which they would consider risks unmanageable. The initiative reflects a growing recognition of the need for robust safety measures in AI development, with regulation beginning to catch up to technological advancements.
Corporate responsibility and the role of SMEs
So far, the onus has been on companies to put safety measures in place, and regulation is starting to catch up, with an appetite for government to get more involved, said Tom Mason, former Chief Technology Officer of Stability AI. Tech companies such as Adobe, which co-founded the Content Authenticity Initiative, are building their own guardrails for content provenance and authenticity to help people judge what is trustworthy online, but inevitably there will be a bias. He also made the critical point that when output comes from machines rather than from people or institutions, the responsibilities, limitations and regulation of AI-generated content demand closer attention, particularly in the context of political campaigns.
Prof Neff agreed that alongside big tech companies, there should be greater involvement from small and medium-sized enterprises (SMEs), policymakers, and citizens in discussions on responsible AI. A recent report by the Bennett Institute, the Minderoo Centre, and AI@Cam on “Generative AI” (2023) recommended that the UK should pursue becoming a global leader in applying generative AI to the economy. Rather than using public support to build new foundation models, the UK could support the growing ecosystem of startups that develop new applications for these models, creating new products and services.
Public perception
Dr Ella McPherson, Associate Professor of the Sociology of New Media and Digital Technology, expressed her concern that the public’s anxiety over AI threats is overshadowing other pressing issues, such as the state of education. This could be seen as a political tactic of distraction. She questioned who benefits from the prevailing anxieties surrounding AI-driven misinformation and deepfakes, given that extensive research on propaganda suggests it has limited effect. For now, there is no high-quality evidence that what people see on social media changes their behaviour in elections, but research does show that the very uncertainty about this is slowly chipping away at pro-democracy sentiment and at shared notions of what is “right” and “wrong”.
Detection and disclosure of deepfakes and misinformation
In the second panel discussion, Henry Ajder, Founder of Latent Space, discussed existing tools for identifying deepfakes: detection tools that scan images for signs of manipulation, watermarks that are invisible to the human eye, and cryptographic metadata that travels with a piece of media to show how, where and when it was created. While detection is currently the most advanced of these methods, he cautioned that the technologies on the market are not yet foolproof.
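To make the provenance idea concrete, here is a minimal Python sketch of how cryptographic metadata can travel with a piece of media: the creator signs a hash of the file plus some context, and anyone holding the public key can later check that the file is unaltered. This is an illustration only, not the actual Content Authenticity Initiative / C2PA format; the manifest layout and the “creator” field are invented for the example, and it assumes the third-party “cryptography” package.

import hashlib
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()  # held by the media creator
public_key = private_key.public_key()       # published for verifiers

def sign_media(media: bytes, creator: str) -> dict:
    # Bind a claim (content hash plus context) to the media and sign it.
    claim = json.dumps({
        "sha256": hashlib.sha256(media).hexdigest(),
        "creator": creator,  # hypothetical field, for illustration only
    }).encode()
    return {"claim": claim, "signature": private_key.sign(claim)}

def verify_media(media: bytes, manifest: dict) -> bool:
    # First check the signature, then check the media still matches the claim.
    try:
        public_key.verify(manifest["signature"], manifest["claim"])
    except InvalidSignature:
        return False  # manifest was altered or signed by someone else
    claim = json.loads(manifest["claim"])
    return claim["sha256"] == hashlib.sha256(media).hexdigest()

photo = b"...raw image bytes..."
manifest = sign_media(photo, "Example Newsroom")
assert verify_media(photo, manifest)             # untouched media passes
assert not verify_media(photo + b"x", manifest)  # any edit breaks the check

The point of the design is that the manifest proves nothing about whether the content is true, only who signed it and that it has not been edited since; that is why Ajder presents provenance as a complement to detection rather than a replacement for it.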
Ensuring quality information
Kenneth Cukier, Deputy Executive Editor of The Economist, said that misinformation and hoaxes are nothing new, and highlighted the rigorous fact-checking processes his publication employs to combat them. However, he acknowledged that not everyone around the world has access to high-quality journalism, and that misinformation created by rogue actors could influence electoral engagement.
Dr Kerry McInerney, Research Associate at the Leverhulme Centre for the Future of Intelligence, emphasised that big tech firms are the main gatekeepers of information, controlling which political movements gain prominence on platforms and which platforms are accessible globally. This dynamic is particularly evident in countries like Myanmar and India, where platforms like Facebook are crucial for news dissemination, business operations and government announcements.
Balancing regulation and innovation
Dr McInerney also noted the potential harms of deepfakes, such as distress and reputational damage, even when they are known to be fake, highlighting the need for comprehensive regulation and public awareness. While regulation alone cannot eliminate rogue actors, it is a critical component of a broader strategy to protect electoral integrity.
AI to improve democracy
That said, Prof Neff highlighted the importance of leveraging social, cultural, political, and technical tools to improve information quality and enhance electoral processes. She said this is an opportunity for parties to use AI to spark interest in voting and to demonstrate how democratic elections should be conducted responsibly in the digital age.
Thank you to Walter Pasquarelli, Affiliated Researcher at the Bennett Institute for Public Policy, for chairing the panel discussions and to Prof Diane Coyle for hosting.
Watch the event: How will AI impact the year of elections?
The views and opinions expressed in this post are those of the author(s) and not necessarily those of the Bennett Institute for Public Policy.