That Hideous Strength: Maintaining Public Trust in the Era of Big Data Policy
In this week’s blog post, the All-Party Parliamentary Group on Data Analytics’s Jack Tindale explores the subject of big data and public trust.
In his 2016 book on the Fourth Industrial Revolution, Klaus Schwab, the Founder and Executive Chairman of the World Economic Forum, noted that “One of the greatest individual challenges posed by new information technologies is privacy. We instinctively understand why it is so essential, yet the tracking and sharing of information about us is a crucial part of the new connectivity. Debates about fundamental issues such as the impact on our inner lives of the loss of control over our data will only intensify in the years ahead.”
For the first time in many years, we face a very real risk that even democratic countries will backpedal on individual rights and liberties – with that pressure coming not simply from state actors, but from transnational corporations, terrorist groups, and criminal organisations alike.
In May this year, the All-Party Parliamentary Group on Data Analytics launched our first substantive research project on data – Trust, Transparency and Tech: Building Ethical Data Policies for the Public Good. It comes at a crucial time for the subject, as Government, Academia and Industry are starting to grapple with the risks and opportunities that big tech presents.
The practical applications of data in public policy are not a recent innovation. Police forces have drawn predictions from crime records since the 1920s, John Snow achieved a landmark breakthrough in medical science when he used data to trace a cholera outbreak to a contaminated water pump in Soho, and Charles Booth’s maps showing the rates of poverty in London played a major role in social policy at the end of the 19th century.
However, the sheer volume of data that exists today, and our ability to process it, are having a tremendous impact on the ethical considerations surrounding the subject.
If we take the rise of big data as a given, our concepts of individual liberty, the right to privacy, and the role of the state in protecting citizens and regulating both itself and private bodies must all change.
However, one should seek to avoid pessimism. Human society does not tend to march unconcernedly towards dystopia. Aided by the transformation in computing power, we are already seeing the positive impact of big data, from its use in industry to predictive applications in fields such as medicine.
The increasing reliance on the collection, storage, and use of data must be accompanied by governance improvements and the development of mechanisms that protect individual rights and privacy, so as to improve public trust and retain support for technological change.
The UK has been at the forefront of numerous innovations – ranging from the recognition of Artificial Intelligence and Data within the Industrial Strategy, to initiatives such as NHSX to facilitate closer interaction between the health service, patient groups, and the life sciences sector. This is vital to improving public engagement with new technologies and should provide a model for other government departments to follow.
The UK is also very much ahead of the curve in understanding the role of government in this area. The modern Industrial Strategy rightly recognises the importance of big data across the whole of public life, as does the establishment of bodies such as the Office for AI and the Centre for Data Ethics and Innovation.
Nevertheless, policy makers must work across a range of different areas to improve public engagement and trust.
Rules made with little or no public engagement have led to avoidable errors which could contribute to public distrust in data use. The growing role of data in everyday life has in many cases occurred without consultation or public agreement, and this lack of engagement has compounded the damage caused to public trust by data breaches and misuse. The comparative ease of big data processing, and the range of ways in which data can be used, mean that public understanding of – and consent to – the follow-on uses of collected data is often partial and uninformed. There are different assumptions across public services about levels of ‘consent’, whether informed or implicit, and about the extent to which the ‘common good’ test can and should be used to justify data collection, analysis and sharing.
The public should be engaged through a wide variety of methods – including open consultations, town-hall meetings, industry outreach, and other ways of directly engaging with members of the public and relevant stakeholders.
Ethical considerations must be tackled from the very start of developing or implementing transformational technologies to prevent a loss of public confidence and a withdrawal of the public “licence to operate”. There are significant tensions between intellectual property on the one hand and accountability and public trust on the other: the commercial intellectual property rights of technology firms make algorithm-based decisions particularly opaque.
The engagement of commercial organisations in the delivery of public services gives those organisations access to significant amounts of data. Third-party use of data is a particular concern: citizens may feel differently about data-sharing with commercial bodies, and this carries the potential for a loss of trust in the public service concerned.
Finally, it is vital that citizens can be confident that they are receiving a consistent and impartial experience. On current trends, there is an inherent risk to the future cohesion of the body politic that citizens and industry will have divergent experiences across policy and geographical areas. Technology and data-driven investment could undermine broader national and devolved environmental and social policy objectives, particularly given the devolved nature of decisions on service delivery.
Bodies such as the Centre for Data Ethics and Innovation will need to work very proactively across government – its role in developing a rules-based system must be clarified, as this will support joined-up working between it and other bodies. However, the Centre can only do this effectively if each policy area has a single national focus on data ethics.
One of my favourite authors, Ursula Le Guin, once said that it is only when science asks why, instead of simply describing how, that it becomes more than technology.
When it asks why, it discovers relativity. When it only shows how, it invents the atom bomb.
Policy makers and industry figures must be able to explain why the public need data-driven technologies, rather than simply developing them for the sake of doing so. When we talk about big data and public policy, it is vital that we do not lose sight of the importance of protecting natural rights and personal responsibility against the risks that these technologies pose.
About
The All-Party Parliamentary Group on Data Analytics brings together a range of parliamentarians, industry bodies, and academics to discuss the impact that big and open data are having on society, and the public policy implications that emerge from them.
It was established to provide an open forum for politicians and civil servants to gain a firmer understanding of the challenges and risks associated with big data, as well as the best way for Government, Parliament and the wider body politic to respond.
The Group was founded in October 2016 by the Labour MP, Daniel Zeichner, and since then we have held a number of events and roundtables around Parliament on a range of topics pertaining to this subject.
The APGDA is organised by the cross-party think tank, Policy Connect.
Photo source: https://www.policyconnect.org.uk/appgda/sites/site_appgda/files/report/454/fieldreportdownload/trusttransparentcyandtechreport.pdf