The data trust framework promises to relaunch the global data economy by making it more accountable and privacy-focused. For the EU and the several national governments that have moved to adopt public data trusts, there is a delicate balance to strike: data stewards are expected to protect individual privacy while also fuelling national AI industries, writes Anna Artyushina.
Over the last four years, the term “data trust” has become a policy buzzword as the European Union, United Kingdom, Canada, and Australia experiment with data stewardship frameworks as a means to counter the surveillance practices of the US platform companies and boost national artificial intelligence (AI) industries.
Across the UK, Europe, North America, Japan, and Australia, the landscape of data trusts has greatly diversified, with providers offering novel data management solutions for individuals (personal data spaces), communities (data cooperatives), and companies (commercial data trusts). As a set of technologies and legal frameworks, data trusts promise to give more leverage to individuals keen to protect their privacy online, as well as offering the potential to make public services more efficient, help advance collective interests for communities that are largely overlooked by existing regulations, and help companies that wish to use personal data in good faith.
I spoke to policymakers, data governance practitioners, and privacy experts about the prospects and limitations of this emerging framework, as well as the likely privacy implications of the new policies for the general public. This blog post, and a subsequent one, set out the findings.
What is a data trust?
By definition, a data trust is a steward that manages someone’s data on their behalf. For clarity, it is useful to distinguish between trusts that store one’s data and those that manage individual and collective rights of access to the data. A library is a useful analogy here: there are physical repositories that store books, and there are digital libraries like JSTOR, which allow a person to access texts regardless of their physical location. Both the library and JSTOR serve a designated community and protect the texts from unauthorised access.
In the UK, trust law is widely applied to manage public and common resources. No wonder, then, that the British government was the first to embrace data trusts as a means of boosting the national AI industry by creating a framework for data access.
In terms of technology, the majority of the currently operating data trusts owe their existence to Solid, the open-source technology invented by Sir Tim Berners-Lee, who envisioned a way to demonopolise the Internet by giving users the option to store their data on a private server. If, for instance, Facebook does not itself store a person’s data, it becomes significantly easier for that individual to control which parts of their digital life they share with the company.
Looking for a legal framework to support Berners-Lee’s innovation, Jack Balkin and Jonathan Zittrain came up with the concept of information fiduciaries. Just like lawyers and medical professionals, tech companies have access to personal or potentially sensitive information about their clients, and they can be legally required to act in the best interests of these clients. However, the concept of information fiduciaries has been criticised for putting too much trust in the platform companies.
Instead, Sylvie Delacroix and Neil Lawrence, the authors of the ‘bottom-up’ data trust framework, argue for a variety of independent data stewards that would mediate the relations between individuals and communities on the one hand and the tech companies on the other. Relatedly, Sean McDonald has explored the concept of civic data trusts.
Public data trusts and the global race to dominate AI
Platform companies have long recognised and captured the value of digital data, creating a rentier economy where the data posted online or collected across different platforms is controlled by, and economically benefits, the data collector. Alongside the Big Tech companies that treat user data as the cornerstone of digital innovation, a growing number of policymakers consider citizens’ data and data analytics indispensable for good governance. Lucy Bernholz traces the origins of evidence-based policy and public data trusts as far back as the Roosevelt era, when US public agencies acquired the funding to store and reuse data across departments, and the Second World War fuelled investment in vast new computing power.
In recent years, the concept of the public data trust has brought new aspirations, promising to make public services more efficient, but also to redistribute the value produced in the data economy by making privately and publicly held data more available to civic actors, businesses, and the public. The European Union, UK, Canada, and Australia consider data stewards to be a way to achieve leadership in artificial intelligence. As governments boost their AI budgets and pledge to automate more public services, there is a growing need for large-scale, quality datasets to train these algorithms. Ahead of the curve, the European Commission’s Data Governance Strategy and the AI Roadmap propose establishing public pools of personal and nonpersonal information collected from European residents and institutionalising an industry of professional data stewards, which will help individuals protect their privacy and realise their rights under the new regulations.
The United Kingdom recognises the lack of public trust in the data economy as the main impediment to its global AI leadership. The University of Cambridge has just launched the Data Trust Initiative, which examines the many ways data can be employed to extract monetary and non-monetary value. Following years of exploratory work on data trusts at the Open Data Institute (ODI), the Ada Lovelace Institute recently published a report proposing possible legal frameworks for data stewardship in the UK and abroad. Some proposals explore the delicate balance between the civic and commercial applications of sensitive data.
Across the Atlantic, Canada’s Bill C-11 (Digital Charter Implementation Act) proposes establishing public data trusts to allow de-identified data to be reused for “socially beneficial purposes.” Over the last two years, research on data trusts in Canada has been conducted by the Office of the Privacy Commissioner, MaRS, and Element AI. More Canadian public sector actors are considering adopting AI systems; for example, the province of Ontario is experimenting with a public health data platform that gives researchers working on Covid-19-related issues access to patient data.
Similarly, in 2020, Australia announced its new AI and data stewardship policies after two years of public consultations on the national AI strategy. In April 2021, Belgium launched the AI Institute for the Common Good (FARI).
My second blog discusses some practical implications of these policies for the general public, and the privacy potential of existing and future data trusts.
The views and opinions expressed in this post are those of the author(s) and not necessarily those of the Bennett Institute for Public Policy.