Dec 10, 2021
In Episode 1 of Series 7 of The Rights Track, Todd is in conversation with Ben Lucas, Managing Director of the University of Nottingham's Data-Driven Discovery Initiative (3DI).
Together they discuss the threat to human rights posed by aspects of a digital world and the opportunities it can create for positive change.
Todd Landman 0:00
Welcome to The Rights Track podcast, which gets the hard facts about the human rights challenges facing us today. In series seven, we're discussing human rights in a digital world. I'm Todd Landman, and in our first episode of the series, I'm delighted to be joined by Ben Lucas. Ben is Managing Director of 3DI at the University of Nottingham, a hub for world class data science research and a funder for this series of The Rights Track. To kick off the series, we're talking about some of the challenges and opportunities created in a data driven society, and particularly what all that means for our human rights. So welcome on this episode of The Rights Track.
Ben Lucas 0:37
Thank you so much for having me.
Todd Landman 0:38
It's great to have you here, Ben. And I guess I want to start with just a kind of broad, open question. We've been living with the internet for a number of years now. When I first came to the United Kingdom, we barely had the internet, and suddenly the web exploded, and it is a wonderful thing. It's transformed our lives in so many different ways. But it's also created major challenges for human rights, law and practice around the world. So my first question really is, what are the key concerns?
Ben Lucas 1:04
I think that the internet is perhaps not bad in and of itself, and in that regard, it's very similar to any other new and emerging technology. If we look at something like the automobile, there's obviously dangers that having cars on roads introduced into society, but there's also a lot of good as far as a boost in quality of life and economic productivity and so forth. I think the central challenge, and one that's perhaps getting exponentially more challenging, is the fact that, often more now than ever, digital technologies are moving a lot faster than the regulatory environment can keep up with. And also, very importantly, faster than humankind's ability to fully understand the potential consequences of misuse or what happens when things go wrong.
Todd Landman 1:50
So in some ways it is interesting. You could look at Moore's Law, for example - technology increases exponentially - and this point you're making about the inability of the regulatory environment to keep up with that. I think that's a crucial insight you've given us, because human rights in a way is a regulatory environment. We have international standards; we have domestic standards.
We have de jure protection of rights, de facto enjoyment of rights, but oftentimes there's a great tension or gap between those two things. And when new issues emerge, we either need a new standard, or we need a new interpretation of those standards to be able to apply to that new thing. So, we're going to call the Internet a new thing for now. And actually, this dual use of technology is also interesting to me. When barbed wire was invented, it was a great thing, because you can suddenly close off bits of land and keep animals in one place. And it's wonderful for agriculture, but it's also a way to control property. And as we know, the enclosure laws in this country led to quite a lot of political conflict. But if we get back to the questions then about, you know, positive and negative aspects of the Internet, what else can you share with us?
Ben Lucas 2:50
There are examples such as work that colleagues in the Rights Lab are doing, for example, on the use of the Internet, and in particular social media, for exploitation. So, child exploitation, for example. There's also terrible examples of migrant exploitation - people who join groups thinking it's going to be a community to help them to get a job in another place, and that turns out to be quite dodgy. So there's examples that are just blatantly, you know, bad and terrible things that happen on the internet. But then there are other examples that are, I think, much more complicated, especially around the transmission of information and new emergent keywords we're seeing around misinformation and disinformation. The power that user generated content can have to help mobilise activists and protests for good, for example, to get information out when journalists can't get in. Then the flip side of that is the potential exploitation by nefarious actors who are obviously spreading information that potentially damages democracies and otherwise stable and important institutions around the world. The other thing I would sort of cite here would be work by our colleague, Austin Choi-Fitzpatrick, with his book, The Good Drone. There's a really interesting contrast there. So, a book about the use of UAVs, where on the one hand we can think about a UAV that's armed.
Todd Landman 4:12
That's an Unmanned Aerial Vehicle for our listeners.
Ben Lucas 4:14
Yeah, Unmanned Aerial Vehicle. And if we think about one of those drones that's armed, and also potentially autonomous moving forward, that's potentially, you know, very, very scary. On the other hand, this same basic sort of technology platform could provide cheap and accessible technology to help mobilise social movements, to help journalists, for example. And so I think in any debate around the good and bad of technology, there's some really interesting and very complicated contrasts involved.
Todd Landman 4:43
And you know, you see drones being used for beautiful visual displays over you know, presidential inaugurations, for example.
You see this big, colourful display, but that same swarm technology of UAVs can actually be used for combat, for warfare, etc. And we know from the work on human rights, modern slavery and human trafficking that, you know, taking pictures of the Earth using swarms of satellites is very good, but then that can also be used for ill as well, and I think that challenge of the dual use of technology will always be with us. I wonder now if we could just turn to another set of questions, which is the difference between life online and life offline. Do we think that human rights rules are different for online and offline life, or roughly the same?
Ben Lucas 5:25
A lot of people argue that online is a mirror of offline, although there are those potentially really negative amplification effects involved in the bad stuff that happens in the real world, so to speak, when you move it online, because you can take something that's very small and suddenly make it very big. I think there's a degree of it really just being a mirror, and potentially an amplifier, for the offline. Again, I think the central problem when we talk about human rights and the general protection of users of the Internet is really this fact that the technology is just moving so fast. Regulation - you know, how it's developed, initiated, interpreted going forward - the tech just moves so much faster. And then I think what we're seeing now is really kind of a shock that internet users get after the fact, but it's maybe the sort of Newton's third law effect. You know, tech moved so fast, was so aggressive and so free - there was sort of a wild west of how we, you know, captured and used data - and now we're just experiencing the backlash that you would expect. One other sort of complicated dimension here is that we really need regulation to protect users of the internet, but of course that's then balanced against examples we see around the world of the way the internet's regulated being used to oppress and suppress populations. There's a really important balance that we need to achieve there. We need to protect everybody online. We need to preserve freedom of access to information, freedom of speech. We don't want people to get hurt online, but we also don't want to do that in an oppressive way. Maybe one thing that's really different as far as human rights online and offline will emerge in the future around artificial intelligence.
The big question, I think, that researchers in artificial intelligence are dealing with - be they folks who are working on the algorithmics, or be they the colleagues in law who are working on the ethics and the legal side of it - the really big question is around sort of transparency and tractability: what's actually happening in this magic algorithmic box? Can we make sure that people can have appropriate checks and balances on what this new class of machines is doing?
Todd Landman 7:32
Well, it's interesting, because there is this observation about people who use AI and design those algorithms, that the AI solution and the algorithm that's been designed reflects many of the biases of the coder in the first place.
And who are these coders? Well, they come from a particular social demographic and therefore you're replicating their positionality through AI, yet AI is presented as this neutral machine that simply calculates things and gives you the best deals on whatever platform you might be shopping.
Ben Lucas 7:58
Precisely. And a lot of these - you know, if we think about machine learning in general, where we're training an algorithm, essentially a type of machine, to do something, it involves a training data set. Where is that coming from? Who's putting it together? Exactly what biases are present in that? And now - and this is probably one of the most pronounced differences when we think about sort of human rights offline and online - I think a really big issue going forward is going to be that of AI discrimination, basically. And we're seeing that in everything from financial services - you know, a machine is making a decision about does somebody get a loan, does somebody get a good credit score - to applications in facial recognition technology. Who are they trying to find? What are they trying to do with that tech? And this AI discrimination issue is going to be one of the key things about that online/offline contrast.
Todd Landman 8:50
Yeah, you know, running right through all of our human rights law discourses is one about, you know, non-discrimination, right - that there should not be discrimination by type of person.
And yet, we know in practice there's lots of discrimination already. And in a way AI can only amplify or maybe accelerate some of that discrimination. So it's a good cautionary tale about, you know, the, shall we say, naive embrace of AI as a solution to our problems. I wonder if I might just move forward a little bit to the cross-border nature of the internet. One of the promises of the internet is that nation state boundaries disappear, that people can share information across space and time. We've just lived through a pandemic, but we're able to talk to each other in meetings all around the world without having to get in any kind of form of transport. But what sort of things should we be thinking about in terms of the cross-border nature of the internet?
Ben Lucas 9:38
I think that I would encourage all listeners today to go back to Alain de Botton's book, The News: A User's Manual, and also some of the talks he gave around that period, I think around 2014. We can have a totally new interpretation of some of those very relevant ideas where we are now in the present, and I'm talking about what some people are calling the threat of the post-truth era. We've seen a completely unprecedented explosion in the information that we have access to - the ability to suddenly take somebody's very small idea, good or bad, and project it to a massive audience. But with that come, you know, the vulnerabilities around misinformation and disinformation campaigns and the threat that that leads to, you know, potentially threatening democracies, threatening, you know, various populations around the world. And another important branch of work that we're doing is studying campaigns and user generated content, and actually studying what's being said, at scale, within these large audiences. We've done quite some work - Todd and I, with the Rights Lab, for example - looking at analysing campaigns on Twitter. And this really comes down to trying to get into, exactly as you would study any other marketing campaign, how do you cut through clutter? How do you achieve salience? But then also through to more practical, functional matters of campaigns, such as, you know, driving guaranteed region awareness, policy influence, donations - but we're just doing that at a much larger scale, which is facilitated, obviously, by the fact that we have access to social media data.
Todd Landman 11:16
It's unmediated supply of information that connects the person who generates the content to the person who consumes it.
Earlier you were talking about the media; you're talking about academia and others - you know, there's always some sort of accountability, peer review element to that before something goes into the public domain. Whereas here you're talking about a massive democratisation of technology, a massive democratisation of content generation, but actually a collapse in the mediated form of that, so that anybody can say anything, and if it gains traction, and in many ways if it's repeated enough, enough people believe it's actually true. And of course, we've seen that during the pandemic, but we see it across many other elements of politics, society, economy, culture, etc. And yet, you know, there we are in this emerging post-truth era, not really sure what to do about that. We see the proliferation of media organisations; the collapse of some more traditional media organisations - broadsheet newspapers and others have had to change the way they do things and catch up. But that peer review element, that kind of sense check on the content that's being developed, is gone in a way.
Ben Lucas 12:18
Yep, and it's potentially very scary, because there's no editor in chief for, you know, someone's social media posts. On top of that, they probably have, or could potentially have, a far greater reach than a traditional media outlet. And I think the other thing is, I mean, we were kind of forewarned on many of these issues. The NATO Review published quite some interesting work on disinformation and propaganda in the context of hybrid warfare, starting, or ramping up, I think around 2016, which is, you know, also a very fascinating read. And then the flip side again of this connectivity that we have now - I guess the good side, you know - is when user generated content is used in a good way. And again, that's examples like, you know, examples we've seen around the world with the mobilisation of protests for good causes or fighting for democracy, grassroots activism, and in particular that ability to get information out when journalists can't get in.
Todd Landman 13:15
You know, it's interesting - we did a study years ago, colleagues and I, on the mobilisation against the Ben Ali regime in Tunisia, and we were particularly interested in the role of social media and the Facebook platform for doing that. And it turned out that, (a), there was a diaspora living outside the country interested in the developments within the country, but within the country, those who were more socially active on these platforms were more likely to turn up to an event, precisely because they could work out how many other people were going to go. So it solves that collective action problem - you know, my personal risk and cost associated with protesting is suddenly reduced because I know 100 other people are going to go. And you know, we did a systematic study of the motivations and mobilisation of those folks, you know, trying to oust the Ben Ali regime, but it gets to the heart of what you're saying - that this, you know, user generated content can have a tech for good or a social good element to it.
Ben Lucas 14:08
Exactly. And I think another important note here, that's maybe some sort of upside, is that, you know, there are a lot of academics in a lot of different fields working on understanding this massive proliferation of connectivity as well. A kind of, I guess, strange silver lining to many of the new problems that this technology may or may not have caused is that it's also given rise to the emergence of new fields - so we're talking about infodemiology now, and we've got some amazing studies happening on the subjects of echo chambers and confirmation bias and these types of themes. And I think it's really given rise to some really interesting science and research, and I have some confidence that, even if we don't have those, again, editors in chief on social media, we certainly have some, you know, wonderful scientists coming at this scenario from a lot of different angles, which I think also helps to sort of moderate and bring some of the downsides to the public attention.
Todd Landman 15:04
Yeah, and let me jump to research now, because I'm really interested in the type of research that people are doing in 3DI here at the university. Can you just tell us a little bit about some of the projects and how they're utilising this new infodemiology as you call it, or new grasp and harnessing of these technologies?
Ben Lucas 15:23
Yeah, so 3DI, as the Data-Driven Discovery Initiative, we're basically interested in all things applied data science. We have, I think, quite a broad and really wonderful portfolio of activity that we represent here at the University of Nottingham, in our Faculty of Social Sciences. This is everything from economics, to law, to business, to geography, and everything in between. We take a very broad, exploratory approach to the kinds of questions that we're interested in solving, I would say. But we do tend to focus a lot on what we call local competitive advantage. So we're very interested in the region that we operate in - Nottinghamshire - and sectors and industry clusters where they have questions that can be answered via data science.
Todd Landman 16:08
What sort of questions? What sort of things are they interested in?
Ben Lucas 16:11
This is everything from the development of new financial services, to really driving world class new practice in digital marketing, to developing and sort of advancing professions like law, where there is a very big appetite to bring new sort of tech and data driven solutions into that space, but a need to achieve those new sort of fusions and synergies. So that side is obviously very, you know, commercially focused, but, very importantly, a big part of our portfolio is SDG focused - so, Sustainable Development Goal focused - and we've got, I think, some really fascinating examples in that space. My colleagues in our N-Lab, which is a new demographic laboratory based in the business school, are working on food poverty, for example. And they're doing this in what I think is a really exciting way. They've teamed up with a food sharing app. So, this is very much driven by the start-up world; it's very much a marketplace offering. The platform is set up to combat, hopefully, both hunger and also food waste. So, we're talking SDG 2, zero hunger, and we're talking SDG 12, sustainable production and consumption. And they've then been able to expand this work beyond just understanding the platform and how it works, beyond just helping the platform work and function better - they've been able to take that data from the private sector and apply it to questions in the public sector. So, they are doing a lot of wonderful work.
Todd Landman 17:37
So, people have a bit of surplus food, and they go on to the app and they say I've got an extra six eggs, and someone else goes on the app and says I need six eggs and then there's some sort of exchange, almost like an eBay for food.
But as you say, people who are hungry get access to food for much less than going to the shop and buying it and.
And people with the extra six eggs don't chuck them out at the end of the week. They've actually given them to somebody right?
And then from that you generate really interesting data that can be geo-located and fed into maps, because then you can work out where the areas of deprivation are - where people have, say, a higher probability of seeking less expensive food.
Ben Lucas 18:15
Precisely. Yeah. And I think that's also a good segue into, you know, one of the other flagship projects we have in 3DI, which is tracktheeconomy.ac.uk, where we've been looking at, again, taking data from the private sector, but also government data, and looking at how economic deprivation might have been exacerbated or not, or how it changed - in particular focused on COVID and what sort of shocks that brought about, but with the intention of taking that forward. And the biggest sort of revelations that we've had working on that project have been really around the need for better geographical granularity. A lot of our national statistics, or, you know, marketing research assessments that are made by companies, are based on bigger geographical chunks. Actually, if we can get more granular and get into some of that heterogeneity that might exist at smaller geographical levels, you know, that's really, really important. That really changes a lot of the policy formulation scenarios and questions that policy makers have.
Todd Landman 19:19
One of the big problems when you aggregate stuff is that you lose that specificity in precisely the areas that are in most need. So I wonder, in this research that your colleagues have been doing and that you've been doing, you know, what's the end game? What are we working towards here? And how is that going to help us from a human rights perspective?
Ben Lucas 19:41
I think, speaking from a personal perspective, when I was a student, when I was first taught economics, I was taught in a way that really highlighted that, you know, economics was just something that everyone as a citizen should know - even if you don't want to become an economist or an econometrician, you need to know it as a citizen. The same now very much applies when we talk about technologies that might not be familiar to all folks, like AI and data science. I think there's a lot to be said - as far as what I would say is a big sort of mission for 3DI - for really boosting the accessibility of technical skills, to really benefit people in terms of prosperity, but also just in terms of understanding, as citizens, what's actually going on. You know, if machines are going to be making decisions for us in the future, we have a right to understand how those decisions are made. Also, if we think about other challenges in the sort of AI and automation space - around, you know, potentially people losing jobs because the work has become automated - I think we have a right to know how and why that is. Another big point, an extension of that point, is really in learning and getting technical skills out there to people for, you know, potentially benefiting prosperity and the labour market. We really need to keep that very tightly paired with critical thinking skills. You know, we're very good as academics at thinking about things and breaking them down and analysing them, especially we as social scientists. You know, coding is probably going to be the language of the future, to borrow your quote, Todd, but who's going to use that coding, and what for? So I think we need to keep people in a good mindset, using this technology and this power for good.
And then the last point would be - and it's something that's been done very well on this podcast in the past - getting people, both researchers and, again, definitely citizens, to think about the inextricably intertwined nature of the Sustainable Development Goals. You know, so for us at 3DI, we're looking for those problems at scale, where we have measurements at scale, where we can do data science and crack big challenges. But I think whether you're doing, you know, much more focused work or work with the SDGs at scale, it's all really interconnected. An obvious example: what is climate change going to do in, you know, potentially displacing populations, and the horrible flow-on effects that's going to have? So I really think that's, yes, sort of our mission, I would say, moving forward.
Todd Landman 22:07
That's fantastic. So you've covered a lot of ground, Ben - it's been a fascinating discussion - you know, from the dual use of technology and this age-old question of the good and the bad of any kind of new technological advance. You've covered all things around the, you know, the mobilisational potential, the problems of the post-truth era, the expanse and proliferation of multiple sources of information in the absence of that mediated or peer reviewed element, and this amazing gap between the speed of technology and the slowness of our regulatory frameworks - all of which have running right through them major challenges for the human rights community. So we're really excited about this series, because we're going to be talking to a lot of people around precisely the issues you set out for us, and many more. In the coming months we've got Martin Scheinin, who is a great human rights expert and former UN Special Rapporteur, now a British Academy Global Professor at the Bonavero Institute at the University of Oxford, working on precisely these challenges for human rights law and this new digital world. And that's going to be followed by a podcast with Diane Coyle, who's the Bennett Professor of Economics at the University of Cambridge. It's interesting because she wrote a book in 1997 called The Weightless World, which is about this emerging digital transformation coming to the economy, and has now written a new book called Cogs and Monsters, a great take on the modern study of economics and the role of digital transformation. But for now, I just want to thank you, Ben, for joining us. It's exciting to hear about the work of 3DI, and we appreciate the support of 3DI for this series of The Rights Track. We look forward to the guests, and I think by the end of the series we would like to have you back on for some reflections about what we've learned over this series of The Rights Track.
Ben Lucas 23:50
Happy to. Thank you for having me.
Christine Garrington 23:53
Thanks for listening to this episode of The Rights Track, which was presented by Todd Landman and produced by Chris Garrington of Research Podcasts with funding from 3DI. You can find detailed show notes on the website at www.RightsTrack.org. And don't forget to subscribe wherever you listen to your podcasts to access future and earlier episodes.