Launch of the UK Centre for Emerging Technology and Security
This week on the PaCCS blog, the Alan Turing Institute’s Alexander Babuta shares his reflections on the launch of the new UK Centre for Emerging Technology and Security.
The Alan Turing Institute is delighted to have launched the UK Centre for Emerging Technology and Security (CETaS), based within the Turing’s Defence and Security Programme. The Centre’s mission is to inform UK security policy through interdisciplinary research and analysis on emerging technology issues.
Recent years have seen rapid developments in fields such as data science, artificial intelligence (AI), cloud computing and privacy-enhancing technologies. These advances are transforming all aspects of our lives, and most UK citizens now interact with cutting-edge data-driven technology on a daily basis, whether consciously or otherwise. With data continuing to grow in volume and complexity, this progress shows no sign of slowing.
While emerging technologies are opening up exciting new opportunities to enhance and enrich our lives, they also bring with them new risks for UK security. Those who wish to do us harm will inevitably seek to use emerging technologies to attack the UK in new ways. Recent years have seen a proliferation in the use of digital information operations to manipulate public opinion and interfere with democratic processes – and the information domain remains a key battleground in Russia’s war in Ukraine. In the coming years, improvements in ‘deepfake’ technology – enabled by increasingly sophisticated machine learning and cloud computing architectures – will create new tools for those seeking to engage in targeted influence operations, requiring a concerted effort from the UK government and its partners to detect, disrupt and mitigate these threats.
Improvements in AI will also give rise to new cyber threats and vulnerabilities that malicious actors can exploit. AI-enabled malware could continually change its identifiable characteristics to significantly reduce the risk of detection, posing challenges for traditional, signature-based cyber defence systems. As more products and services come to depend on AI, new vectors for adversarial attack open up, whether directed at autonomous vehicles, IoT and smart home technology, or connected infrastructure and networks. Cybersecurity research will need to progress hand in hand with AI research to ensure the UK’s future hyperconnected digital ecosystem remains protected from this full spectrum of new and emerging threats.
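To make the ‘changing characteristics’ point concrete, here is a minimal and deliberately harmless Python sketch of why exact-match signatures fail against code that rewrites itself. Everything in it is invented for illustration: the ‘payload’ is an inert string, the mutation is trivial random padding, and the AI that would drive real-world mutation is not modelled.

```python
# Toy illustration (not real malware): why static, exact-match signatures
# fail against code that rewrites itself. All names here are hypothetical.
import hashlib
import random

def signature(payload: bytes) -> str:
    """A signature-based detector reduces a sample to a fixed fingerprint."""
    return hashlib.sha256(payload).hexdigest()

def mutate(payload: bytes) -> bytes:
    """Simulate a polymorphic rewrite: same behaviour, different bytes.
    Here we just append random padding; real malware re-encodes itself."""
    return payload + bytes(random.randrange(256) for _ in range(8))

original = b"do_something_malicious()"     # inert stand-in for a known sample
blocklist = {signature(original)}          # defender fingerprints that sample

variant = mutate(original)
print(signature(original) in blocklist)    # True  -- the known sample is caught
print(signature(variant) in blocklist)     # False -- a trivial mutation evades
```

Even this trivial rewrite defeats an exact-match signature, which is one reason defenders are moving towards behavioural and model-based detection – and why those models in turn become targets for the adversarial attacks described above.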
But emerging technologies also present opportunities for the security community to innovate in new ways with a wider range of partners. Developments in big data and AI are already delivering operational benefit for UK security, for instance to map out complex international networks, analyse large-scale chains of financial transactions, or provide geographical information on illegal activity. The Turing’s ongoing research at the intersection of data science and cybersecurity has set out a research roadmap for applied AI in active cyber defence, describing how fundamental AI research could help develop the technologies required to protect against future cyber threats at national scale. Recently, the importance of big data analytics for understanding and responding to emerging threats has come into sharp focus in the context of Russia’s war in Ukraine: these tools enable open-source analysts and investigators to rapidly derive insights from vast and disparate datasets, signalling a step-change in how the UK security community leverages the data and skills of its external partners.
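As a flavour of what ‘analysing large-scale chains of financial transactions’ can look like in practice, below is a minimal sketch using Python and the networkx graph library. The accounts, amounts and the 9,000 threshold are all hypothetical; real investigative pipelines operate at far greater scale and combine many more signals.

```python
# Minimal sketch: modelling payments as a directed graph and flagging
# chains of high-value transfers. Data and threshold are invented.
import networkx as nx

# Each record is one payment: (sender, receiver, amount)
transactions = [
    ("acct_a", "acct_b", 9500),
    ("acct_b", "acct_c", 9400),
    ("acct_c", "acct_d", 9300),
    ("acct_x", "acct_y", 120),
]

g = nx.DiGraph()
for sender, receiver, amount in transactions:
    g.add_edge(sender, receiver, amount=amount)

# Flag chains where every hop exceeds a (hypothetical) threshold,
# e.g. repeated just-under-reporting-limit transfers.
THRESHOLD = 9000
for path in nx.all_simple_paths(g, "acct_a", "acct_d"):
    amounts = [g[u][v]["amount"] for u, v in zip(path, path[1:])]
    if all(a > THRESHOLD for a in amounts):
        print("suspicious chain:", " -> ".join(path), amounts)
```

The design choice that matters here is representing payments as edges in a graph: once that is done, questions like ‘is there a chain of high-value transfers between these two accounts?’ become standard path queries rather than bespoke analysis.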
These new technologies also present challenging legal and ethical questions. The UK national security community operates within a strict legal framework, and is subject to additional oversight and scrutiny regarding its use of technology – restrictions to which the private sector is not subject. This means that technologies widely available in the commercial sector may not be readily transferable to the national security context, or may require extensive policy, governance and legal safeguards to be developed before they can be deployed operationally.
This complex framework of legislation and policy serves to ensure that new technologies are used in a way that protects personal freedoms rather than restricting them. However, it can put us at a disadvantage compared with adversaries who are not subject to the same constraints. We must recognise that adversaries will seek to use technology in ways that do not respect the rights and freedoms of citizens, making it all the more important that we innovate at pace to defend against these threats. We must do so in a way that upholds our democratic values and the rule of law, the cornerstones of our open, liberal society.
Ensuring new technologies are used in an ethical way that is consistent with our values requires engaging with diverse voices from across the research and policy ecosystem. The national security community recognises that innovation and diversity go hand in hand – a wide range of backgrounds, perspectives and life experiences is required to help this community think in new ways, and deliver new solutions that help keep the UK safe. But doing this in practice is hard, especially within the necessarily closed environment of national security.
But protecting the UK is not only the responsibility of state institutions. Academia, the private sector, civil society and individual citizens all have an important role to play, both in helping to create new tools and capabilities to protect the UK, and in ensuring that policy and regulation develop in a way that is consistent with our values and respects the rights and needs of all citizens. An open and inclusive dialogue is essential to ensure that all voices are represented in this process.
As the national institute for data science and artificial intelligence, The Alan Turing Institute is proud to play a central role in convening diverse voices from across the policy, research and technology ecosystem – building new partnerships and helping to move the public debate forward on these difficult societal questions. The launch of CETaS is an important milestone in the development of the Institute. The Centre will help to ensure that future policy is informed by evidence-based, interdisciplinary analysis on the risks and opportunities presented by emerging technologies. We will keep challenging ourselves to think differently and question our assumptions, and we ask you to do the same.
***
About the author
Alexander Babuta is Head of the Centre for Emerging Technology and Security at the Alan Turing Institute. His research interests include the applications of artificial intelligence and data science for UK security and policing, the regulation of investigatory powers, and the psychology of criminal offending.
Prior to joining the Turing in January 2022, he worked within the UK Government as AI Futures Lead at the Centre for Data Ethics and Innovation, and before that as a Research Fellow at the Royal United Services Institute (RUSI), where he led the institute’s research programme on national security, technology and digital policing. He has given evidence to various parliamentary inquiries at UK and EU level, and his work has featured in mainstream media outlets including the BBC, the Financial Times, The Guardian and The Telegraph.
He is Chair of the Essex Police Data Ethics Committee, Associate Fellow at the University of Bristol, and Research Associate at the National Centre for Gang Research (University of West London). He holds an MSc with Distinction in Crime Science from University College London (UCL), where his research explored the use of data science methods for police risk assessment of missing children. He also holds a Bachelor’s degree in Linguistics from UCL.