Blog

Researching Online Harms

PaCCS Communications Officer Kate McNeil sat down with the University of Bristol’s Professor Awais Rashid to discuss his work on online harms, and the launch of the new National Research Centre on Privacy, Harm Reduction, and Adversarial Influence online (REPHRAIN). This conversation has been edited and condensed for clarity and concision.

Kate: Would you mind getting started by telling me a little bit about your research background, and how you ended up leading this new National Research Centre on privacy, online harms, and influence online?

Awais: I am currently a Professor of Cybersecurity at the University of Bristol, but my background is quite eclectic and interdisciplinary. My first degree was in electronics engineering, and I then went on to study software engineering and computer science. I have worked on security and privacy in various shapes and forms, including detecting mass marketing fraud online, exploring how users struggle with privacy in large-scale online social networks, detecting the sharing of illegal sexual material on peer-to-peer networks, and an ongoing project studying cyber-criminal behaviours on darknet markets. Over the last 15 years I have also done a lot of work on protecting vulnerable users online, and my contributions to this new National Research Centre build on those experiences.

What prompted you to start working on protecting vulnerable users online?

That work started with an EPSRC project which was designed specifically around protecting children in online social networks. That project gave us the opportunity to develop techniques to understand when adults were masquerading as children online to gain their trust and engage them in abusive activities. That project involved developing language analysis techniques which could identify with a high degree of accuracy when someone was pretending to be a different age or gender. The project has gone on to become a successful start-up.

Can you tell me about the Centre you are now launching, the National Research Centre on Privacy, Harm Reduction, and Adversarial Influence online? What will fall under your centre’s mandate?

The centre was established in response to a call for research advances in protecting citizens online. Fundamentally, the centre is being created because we live in a digital economy that is driven by large-scale data sharing. That sharing creates the potential for harm while at the same time delivering the socio-economic benefits of a sharing-driven economy. It is not a zero-sum game, and we need to consider the social and behavioural implications, the regulatory and policy responses, and where the balance lies between data sharing and the harms that come from it.

We want to find ways to support citizens in understanding and managing the value of their data and the threats that come from it. So, we aim to develop new techniques to deliver privacy at scale while ensuring that we can mitigate the risk of people inflicting harm while hiding behind privacy-enhancing technologies. We also want to make data ecosystems more transparent and self-explainable, to find privacy-protecting ways of sharing information in crises, and to explore the balance between individual agency and social good. As part of this work, we will be looking at online harms arising from privacy violations, including personal violations and organisational-scale violations where organisations capture data at large scale. The latter is, for example, Cambridge Analytica territory. As a society, we need to tackle issues around adversarial influence, disinformation, and the micro-targeting of individuals.

It is also worth noting that harms are not always experienced equally, and addressing the harms experienced by one group may create the potential for harms to another. We will be exploring the online harms that arise from information sharing, and how they may impact different user groups in different ways.

What can you tell me about your initial projects?

We plan to start by building a map of current issues and approaches in this research area, which we can use as a yardstick to measure our progress. The projects will also contribute to a toolbox, a shared set of resources integrating outcomes across the centre. At a technical level, that will include shared datasets people can work on, automated challenges to test tools against, and the development of a privacy testbed which will allow people to deploy techniques to see whether they deliver the privacy and harm reduction that has been claimed. 

In the medium term, we hope to develop programs such as a researcher or practitioner in residence programme, and to run events and activities such as sandpits, hackathons, and master classes which will bring knowledge about privacy and harm reduction to practitioners and policymakers. 

What forms of expertise will the centre be relying on?

We have 25 inaugural projects which involve non-academic stakeholders, policy professionals, citizen groups, third-sector NGOs, and 31 academics from several UK universities. It is a large programme of work involving people from traditional computer science and technology research as well as people who work in fields such as psychology, sociology, criminology, and ethics. What is exciting about the centre is that it takes an interdisciplinary view of the problems we are facing.

We plan on going beyond what you would think of as traditional ‘ivory tower’ academic research, through a whole programme of working with stakeholders. There is a set of streams that will run through the centre, including streams focused on policy and regulation, responsible innovation, and design and engagement. The design and engagement stream, for example, will work to understand citizens’ perspectives on technology, because you cannot predict how citizens will act without understanding what they need out of technology.

You have mentioned that this centre will take an interdisciplinary and cross-sectoral approach. What sort of value-add do you suspect that will bring?

Technology does not live in isolation, and the challenges come when humans, technologies and organisations intersect in complex ways. The only way to address this problem properly is to take an interdisciplinary view. Otherwise, we would end up building technology which tries to understand the problem, but when it goes out into the world that technology would be used in ways we do not understand.