Published on 10 June 2021

Privacy in a sustainable data economy: lessons from existing and failed data trusts

The implementation of data trusts in the European Union and beyond will likely lead to more data sharing across the public and private sectors, which challenges our conventional definition of privacy. European policymakers argue it is important for individuals to accept their role as “data donors” who willingly share information with trustworthy organisations for collective benefit, writes Anna Artyushina.

Shoshana Zuboff’s bestselling book The Age of Surveillance Capitalism argued that the mass digital tracking conducted by platform companies to sell advertising has had a significant detrimental impact. In his widely read critique of Zuboff’s work, Cory Doctorow argued that data monopolies constitute a much bigger threat than the business model of targeted advertising. The debate shows that, as the gatekeepers of digital markets and public discourse, the Big Tech companies have come to regulate our economies and political institutions.

Breaking up the data silos amassed by the monopolies is one of the biggest promises of data trusts. Services like Cozy Cloud, Inrupt, and polypoly allow users to store the data from their devices and choose what information to share with each application or service provider. The same intention, albeit on a much larger scale, drives the European Cloud Initiative. In October 2020, an open digital ecosystem project, GAIA-X, was announced. When the European digital infrastructures are ready, international platform companies will not be permitted to move data outside the European Union (EU), and the extent of their engagement with the data collected from Europeans will be closely monitored through new programming interfaces.

It is important, though, to avoid a techno-determinist stance towards data trusts. Like any technological solution, they are only as good as the policies that govern them. Sidewalk Labs’ proposal for an Urban Data Trust in Toronto, Canada was abandoned amid heated public controversy. Legal scholars and privacy advocates argue that the goal of the trust may have been to exempt the data collected in the city from Canada’s privacy laws. Some critics of personal data spaces see them as another manifestation of surveillance capitalism, because they urge individuals to participate in the very data economy that they distrust.

The most profound criticism of the new European policies is that they essentially aim for more data sharing than is permitted under current privacy laws. For instance, some of the proposed European data pools will reuse sensitive data collected by the public sector. For privacy advocates, the prospect of increased state surveillance in Europe is also a major concern.

For the European Commission, the goal of the new policies and technical interventions is to create a sustainable data economy where companies that operate in good faith gain access to large, high-quality data pools. For individuals, this means having more control over their digital lives—one can manage what kind of data credit agencies, employers, and law enforcement agencies obtain about them, or demand that this information be corrected or removed. To avoid creating a digital panopticon, the EU’s data pools will contain niche data: financial and health information would belong in different pools and be governed by different sets of regulations. Professional data stewards will help individuals navigate this new privacy landscape, whether they want to restrict the movement of data collected about them, be reimbursed for their data inputs, or donate data to a charitable cause.

Recently released regulations on facial recognition technology once again place the human rights framework at the centre of the EU’s data governance policies. High-risk algorithmic systems will be banned from the continent, while medium- and low-risk systems will be allowed to operate subject to strict integrity requirements.

Technology after ‘surveillance capitalism’

On a global scale, the shift to a sustainable data economy will not be fast or cheap. It will likely require large investments in public digital infrastructures, international regulatory cooperation, efforts to make privacy a matter of personal concern, and significant changes in perspective on the part of businesses and governments.

European policymakers argue it is important for individuals to accept their role as “data donors” who willingly share information with trustworthy organisations for collective benefit. This data governance model is sometimes likened to European biobanks, where patients agree to share their information for research purposes. This solidarity-based approach to data governance may be uniquely European in manifesting a significant amount of trust in public institutions and favouring collective interests over individual privacy.

The slow-burning crisis of democracy in North America spills over into the issues of data governance and individual privacy. Between the fear of state surveillance reinforced by Edward Snowden’s revelations about the illegal practices of the U.S. National Security Agency and the privacy concerns raised by the Cambridge Analytica scandal, it is difficult for many people to accept the idea of willingly sharing data with the government or anyone else. However, attitudes do change. A recent study from the Pew Research Center shows that Americans are ready to share their health data with public agencies and researchers working on issues related to Covid-19.

Existing data trusts demonstrate great potential for facilitating responsible data management. One example of a data trust that serves a civic purpose is the Silicon Valley Regional Data Trust, operated by the University of California in partnership with several district school boards and local social services. The trust is a non-profit cooperative and shares data only among the organisations that donate it. Data feminism scholars make the case for an alternative data governance in which communities employ data to fight economic and social inequalities. The data commons approach is best illustrated by data cooperatives: the Driver’s Seat Cooperative and services like WeClock help gig workers gain insights into private algorithmic systems and give unions new tools to fight for fair working conditions.

As more people move to work for platform companies or have to deal with arcane algorithmic systems in different areas of their lives, self-management of data through the use of data stewards will soon be a necessary skill. The professionalisation of data stewardship will hopefully contribute to the data economy becoming more transparent and manageable at the individual, collective, and supra-national levels alike. Similarly, privacy and data security businesses can greatly improve our collective well-being by creating market alternatives to the existing exploitative business models.

As more national governments expect to build large-scale AI systems and benefit economically from fast-evolving digital innovation, it is very easy to sideline the interests of privacy. Wherever possible, individuals and communities should have the option to opt out of data collection and automated processing.

The threats of mass surveillance and of data being exploited by public and private actors without benefit to its subjects are significant. As Julie Cohen argues, in the absence of explicit legal frameworks to secure data ownership, the companies that store or manage data have de facto control over it. Given the global scale of these challenges, they can only be addressed through international dialogue and regulatory cooperation.


The views and opinions expressed in this post are those of the author(s) and not necessarily those of the Bennett Institute for Public Policy.
