
Time to Act – Translating Algorithm Ethics into Practice

In this week’s guest post, two researchers from University College London reflect on translating algorithm ethics into practice.

With the seemingly ubiquitous adoption of new digital technologies, in particular big data analytics and artificial intelligence systems, the current epoch is being described as revolutionary. We believe this is no exaggeration. Indeed, we are already witnessing the use of such technologies in public service delivery (medicine, policing and welfare), finance, social media and advertising, education, the military, and beyond. Alongside this, a rapid increase in research, investment and policy debate is taking place; all of which is epitomised by ‘Artificial Intelligence and Data’ being flagged as one of the four Grand Challenges of the UK’s Industrial Strategy.

Crucially, through high-profile cases of harm (e.g. voter manipulation; bias in facial recognition, recidivism prediction and recruitment; medical misdiagnosis; surveillance), a public consciousness has emerged calling for action on the ethical impact of such technologies. The response by governments, NGOs, academic groups and other relevant stakeholders has predominantly been to call for, and propose, sets of digital ethics guidelines. This can be considered the first phase. Following this, a drive to implement such guidelines and proposals was observed: a call to action through engineering processes, i.e. ethics-by-design, which resulted in the publication of a number of practical AI ethics manuals. This can broadly be construed as the second phase. It has been followed by a third phase, typified by an awareness that no one community (government, industry, academia, engineers, etc.) can fully address the problem of the ethics of new digital technologies.

There is often a perceived division between the humanities and the sciences. It is clear that no such division can be allowed to persist in the field of digital ethics; a frequent weakness of critiques and comments originating from the humanities and legal communities is that they are uninformed about the technology itself. This ‘distance’ can be addressed through upstream collaboration, something we are actively engaged in and which will mature and evolve over the coming decades.

Indeed, our work is a response to this call for acute interdisciplinarity: Adriano’s specialist engineering knowledge in building artificial intelligence systems is complemented and cross-fertilised by Emre’s background in philosophical ethics.

More concretely, we approach the problem of digital ethics through two complementary strands: technological tools and discursive analysis.

  1. AI Impact Assessment Technology

This is led by Adriano’s research and development of an ‘Algorithmic Impact Assessment Toolkit and Framework’, which aims, among other things, to ‘police’ algorithms by monitoring and trapping rogue algorithm behaviour and by setting the boundary, usage and shelf-life of a system. The tool is envisioned as part of a governance structure that builds trust between the stakeholders of a system, and as a mechanism for accountability and auditing.
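To give a flavour of what such monitoring might involve, below is a minimal sketch in Python of a wrapper that enforces a shelf-life and traps out-of-bounds (‘rogue’) outputs. The class name, fields and boundary logic are illustrative assumptions, not the toolkit’s actual interface.

```python
# A minimal sketch of the kind of runtime check such a toolkit might perform.
# All names (BoundedModel, expires, output_bounds) are illustrative assumptions.
from datetime import date


class BoundedModel:
    """Wraps a prediction function with a shelf-life and an output boundary."""

    def __init__(self, predict_fn, expires: date, output_bounds: tuple):
        self.predict_fn = predict_fn
        self.expires = expires                # the system's agreed shelf-life
        self.low, self.high = output_bounds   # the agreed usage boundary

    def predict(self, x):
        # Refuse to run past the agreed shelf-life of the system.
        if date.today() > self.expires:
            raise RuntimeError(f"Model expired on {self.expires}; re-assessment required.")
        y = self.predict_fn(x)
        # Trap 'rogue' behaviour: out-of-bounds outputs are stopped (here, by
        # raising an error) rather than silently passed on.
        if not (self.low <= y <= self.high):
            raise ValueError(f"Output {y!r} outside agreed bounds [{self.low}, {self.high}].")
        return y


# Usage: a toy scoring model bounded to [0, 1], valid until end of 2025.
model = BoundedModel(lambda x: 0.5 * x, expires=date(2025, 12, 31), output_bounds=(0.0, 1.0))
print(model.predict(1.2))   # 0.6, within bounds
```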

  2. Discursive Analysis

This is led by Emre’s research into the ethical, legal (including regulatory and legislative frameworks) and policy debates in the field of digital ethics. This touches on themes such as transparency, privacy, bias, safety and automation, as presented in the table below:  

| Theme | Issues covered |
| --- | --- |
| Transparency | The black-box problem; interpretability; disclosure of information; mode of disclosure (technical, non-technical). |
| Privacy | Data protection and the GDPR; data security; privacy and freedom; data stewardship. |
| Justice and fairness (bias) | Discrimination; respect for diversity, inclusion and equality; right to redress and remedy; fair access to AI, to data and to the benefits of these digital technologies; diverse and accurate training data. |
| Safety | Intentional misuse (e.g. cyberwarfare, hacking); risk-management strategies; discrimination; privacy; loss of trust; “radical individualism”; the risk that technological progress outpaces regulatory measures; negative impacts on long-term social well-being; an ‘arms race’. |
| Automation and agency | Moral self-determination; cognitive shifts; loss of skills. |
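To make one of these themes concrete, the following is a minimal sketch of how a bias audit under the ‘justice and fairness’ theme might quantify disparity, using the demographic parity difference on a toy set of decisions. The data, group labels and metric choice are illustrative assumptions rather than part of our toolkit.

```python
# A minimal sketch of one fairness check: the demographic parity difference,
# i.e. the gap in positive-decision rates between groups. Toy data only.

def demographic_parity_difference(decisions, groups):
    """Largest gap in positive-decision rates across the groups present."""
    rates = {}
    for g in set(groups):
        selected = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(selected) / len(selected)
    values = sorted(rates.values())
    return values[-1] - values[0]


# Toy audit: 1 = loan approved, 0 = refused, by applicant group A or B.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_difference(decisions, groups)
print(f"Approval-rate gap between groups: {gap:.2f}")  # 0.50 on this toy data
```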


Policy Implications

Algorithmic ethics has been the subject of numerous national and international government reports and white papers, with implications for stakeholders across sectors (from industry, academia and NGOs to regulators and government). Indeed, this is an area of intense policy activity. Below we note three themes that we see emerging within the policy domain:

  1. AI Regulation

In the immediate short term, in the UK context, we do not foresee the creation of an AI regulator or legislation specific to AI itself. One reason is that, notwithstanding public calls for more regulation and/or action, there remains a lack of coherence in the ethical principles and guidance published to date. Indeed, this lack of consensus on exactly which ethical principles are appropriate is well recorded in the academic literature.

Instead, what is far more likely is the appropriate application and enforcement of existing legislation, in particular the GDPR and the Equality Act 2010. Within this stream, the build-up of case law is likely to direct future (possible) legislation, as opposed to a top-down legislative force.

We do, nonetheless, anticipate increased competency and capacity building within government, enabling it to consult from an informed perspective. The Centre for Data Ethics and Innovation (part of the Department for Digital, Culture, Media and Sport) and the Office for AI (part of the Department for Digital, Culture, Media and Sport and the Department for Business, Energy and Industrial Strategy) are examples of this, as are recent calls for a ‘Regulatory Assurance Body’.

It is worth noting the increasing debate within legal scholarship concerning the legal status of AI/algorithms. Whether algorithms will, like companies, come to hold rights and obligations, and whether AI systems will be granted artificial personhood, will have a significant impact on how these systems are designed and on the nature of their oversight.

  2. Impact Assessment and Robust Governance

A vibrant area of activity that we foresee maturing is how an AI impact assessment requirement can be incorporated into existing legislative, governance and oversight mechanisms. Such assessments are likely to become standard, and possibly a legal requirement with mandatory publication. We welcome this, as it provides a practical and auditable mechanism by which AI systems and their implementation can be structured. Indeed, a standardised version is likely to emerge eventually, both through legislation/regulation and through the transfer of best practice.
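As a sketch of what such a standardised, publishable assessment might look like as a machine-readable record, consider the following; the field names and values are illustrative assumptions on our part, not an existing or proposed standard.

```python
# A minimal sketch of a machine-readable impact assessment record that could
# be published alongside a system. All fields and values are hypothetical.
import json

assessment = {
    "system": "benefit-eligibility-screener",        # hypothetical system name
    "owner": "Example Department",
    "purpose": "Prioritise manual review of benefit claims",
    "legal_basis": ["GDPR Art. 22", "Equality Act 2010"],
    "risks": [
        {"theme": "bias",
         "finding": "approval-rate gap of 0.07 across groups",
         "mitigation": "re-weighted training data; quarterly audit"},
        {"theme": "transparency",
         "finding": "model not interpretable to claimants",
         "mitigation": "plain-language explanation of decision factors"},
    ],
    "shelf_life": "2026-06-30",     # date after which re-assessment is due
    "published": True,
}

print(json.dumps(assessment, indent=2))   # the publishable artefact
```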

  3. AI Literacy

A significant policy implication concerns training, education and knowledge transfer. An increase in AI literacy is crucial for informing auditors, legislators, regulators, practitioners, developers and civil society. As for the kind of training, a highly interdisciplinary approach is needed: a holistic education and training agenda that integrates discursive and heuristic disciplines, from policy, ethics and the social sciences to engineering and data science. We therefore call for, and anticipate, further strategic central-government funding in these fields, as well as expertise-building exercises through interdisciplinarity in practice. Indeed, we anticipate that the need to (re)educate and inform will grow in parallel with the importance of industrial AI to the UK economy, as discussed in the AI Sector Deal.

Emre Kazim is a digital ethicist based in the Department of Computer Science, University College London. His research interests include algorithmic assessment, the impact of new digital technologies on the structures of the state, and informed policy making. He has a track record of interdisciplinary work and knowledge exchange through community and consortia building. Emre can be contacted at: ekazim@cs.ucl.ac.uk

Adriano Koshiyama is a researcher in the Department of Computer Science at University College London. His research interests include machine learning, finance and algorithmic assessment. He specialises in designing trustworthy autonomous systems. Adriano can be contacted at: a.koshiyama@cs.ucl.ac.uk