Future Ethics: from modern ethical theory to practical advice

Cennydd Bowles | Founder and Director, NowNext
We talk about Future Ethics, and how we can use methods from disciplines such as speculative design and futures studies to stimulate moral imagination. We also discuss the role of ethical guidelines, bringing those into practice, and how you can start having more conversations within your team or organisation around ethics.

Nathalie Post  
Today I'm joined by Cennydd Bowles, a designer, futurist and director of NowNext, where he advises companies on ethical technology and responsible innovation. Cennydd is also the author of Future Ethics, a book that takes modern ethical theory and transforms it into practical advice for designers, product managers, and software engineers. In this episode, we talk about his book, and how we can use methods from disciplines like speculative design and futures studies to stimulate our moral imagination. We also talk about the role of ethical guidelines, what it means to translate those into practice, and how you can start having more conversations within your team or organisation around ethics. And finally, Cennydd shares his view on the field of ethics and tech as it is today, and the changes he's noticing within it. So without further ado, here is the episode.

Hi Cennydd, and welcome to the Human Centred AI podcast. I'm actually really, really excited to be talking to you today, because I love your work and I've been following it for a while, and I cannot wait to pick your brain on all things ethics and technology. But for our listeners who may not know you yet, could you start by giving a bit of an introduction about yourself and your background?

Cennydd Bowles  
Sure thing. Glad to be here, thank you for having me on. So yes, my name is Cennydd Bowles. I'm a designer and a futurist based in London, and I now run an ethical design and futures studio called NowNext. I've also written a book called Future Ethics, which was two years ago now, god it's flown by. My background is in interaction design, digital product design, 20 or so years of experience in that across a range of sectors: government, ecommerce, social media. I was heading up the UK design team at Twitter for three years, and since then I've been consulting and moving ever more into the space of responsible technology and ethical innovation. So that's my full-time gig. Yeah.

Nathalie Post  
Yeah. So what motivated you to focus on, you know, ethics and technology? What caused that direction?

Cennydd Bowles  
Well, I think I had a sort of an interest in it as a younger man. I don't have any background in philosophy or anything like that; I actually have a physics degree, which is quite rare, I think, for a designer. But I've been lucky enough to have been giving talks at conferences for 12 years or so, something like that. And I was introduced at one event, and they had a system whereby it was your peers who were introducing you. And so one of the attendees was introducing me and said, one thing I like about Cennydd is that he has this ethical theme running through his talks. And I was like, I do? I hadn't really realised that that was something that had seeped into my professional worldview, I suppose. After I left Twitter, so this would be 2015, it's not that I witnessed any practice that immediately caused sort of ethical horror or anything like that. But I felt that ethics was a topic that really wasn't getting the airtime it needed within the field; we were getting to a level of maturity that suggested we needed to be much more sophisticated about how we thought about our duties, and that discussion wasn't really happening. And I had the luck and privilege of having made a little bit of money off my time at Twitter, you know, sold some stock, not enough to utterly transform my life, not Silicon Valley myth money, but enough that I didn't have to rush into the next thing. And so I thought, this is a topic that I need to learn more about, because there's something I can perhaps do to further that conversation. And then I started researching it and was blown away by the depth of work that was happening at the time, but that practitioners really had no idea about. You had academics, you had artists, you had futurists, all sorts of writers, talking about the implications, the social impact of technology, and we weren't listening to these people. So I thought, okay, I've got to spend some time on that, and took two years learning about it, researching the book and so on. And fortunately it became a field that gets more fascinating the deeper I get into it. So it sort of put the hook in me from that point and hasn't let go since.

Nathalie Post  
Yeah, that's amazing. And it's funny what you say, because right now, especially in tech, it feels like everyone is talking about ethics, and I definitely think a lot of that also came from movies bringing this to the broader audience. But I'm very curious to hear more about why you chose to write a book. Why that format? And why Future Ethics? Why that title?

Cennydd Bowles  
I think it was James Baldwin who said, no one sets out to be a writer, you discover you are a writer. And I think that's the case for me. Like, writing is horrible, no one enjoys writing, but I love having written. And having written a book previously, I knew I was capable of doing it, so I thought this would be the best method to try to bridge that gap, if you like, between the work that was happening and the world of practitioners. For me, the particularly interesting implications of ethics were around emerging technology. There's certainly a lot to be said about the ethics of existing technology, but it's a bit of a fait accompli: these things are built, they're in the market, they're unleashing their harms onto people already. And the ethical stakes are only going to get higher with time, as tech companies ask for more trust and intrude further, potentially, into people's lives, and we rely upon those fundamental infrastructures more deeply. And so of course that means the time to have those conversations is now, before those tools are commercialised, before they're designed, before we know exactly what form they'll take. So for me, it felt that it was going to be most useful to focus on what was around the corner; I'm talking mostly your sort of five to 10 year horizon rather than your 50 to 100 year horizon. And so that was really the genesis of the book. The way the book is structured is you start with some of the more proximate threats, things that are with us right now around data and algorithmic bias, for example, but then we end up in the more far future: autonomous weaponry, or all these wonderful theories about superintelligence and the ethical challenges that might pose. So it's essentially a little bit of a projected future history, if you like, of where technology might head next.

Nathalie Post  
And what I found really interesting about your book is that you talk about using all these tools and methods that exist in speculative design and futures studies, kind of for moral imagination purposes. And I wonder what made you make the connection between those fields.

Cennydd Bowles  
Initially, it was an accident. I was working with the BBC on a project; in fact, it was a project around ethics, bringing ethics into a real-world project that they had, building a prototype using ethical methods. So I brought some approaches from value sensitive design, and tried to apply them to a design sprint model that, I mean, is pretty common within design practice. And we got to a point where we realised the prototype we were working on really wasn't potentially harmful; it was quite a benign use case we were investigating. And so we could push things. Essentially, we could sort of say, well, what if we test some prototypes, not ones that we think are viable products, but ones essentially built to push, to see how far we can go? You know, if we turn this dial up all the way, do users have a problem with it, that kind of thing. As I say, we'd looked at it and there really wasn't any damage we could do doing this. And so we realised what we were creating was essentially prototypes to provoke rather than prototypes to test product validity. And I even said, well, maybe we need a name for this, so I said 'provocatype', with tongue firmly in cheek. And of course, as a designer, I was aware there was a field of critical design, of speculative design, that sort of looked at design fiction. I saw this as slightly different, but that was, frankly, a little bit pedantic. And the more I looked into it, the more I deepened my knowledge about speculative and critical design, the more I said, well, you know what, actually, this is pretty much the vehicle we were playing with anyway. So let me learn more about that and draw on some of those ideas. And that point about moral imagination is key. One of the tech industry's bigger problems, I think, is that it's actually quite limited in its imagination, or its imagination only happens through a pretty constrained channel. And that imagination is often a very optimistic, utopian perspective of how technology is going to improve the world and change the world for the better. And that means that, from my experience, technologists aren't particularly good at imagining future impacts on people who are not like them, and future impacts that don't proceed according to plan. And speculative design is a very good way to make potential futures feel real; I think it's particularly strong at making morally ambiguous futures feel real. I think there's far too much leaning on sort of easy utopias or tired dystopias, particularly in mainstream science fiction. I mean, look at Black Mirror: it's a very capable piece of speculative fiction, but it leans heavily dystopian. And I think it's much more interesting when you carve out that sort of morally ambiguous middle ground. And so it just felt like a natural fit, a shortcut we can take. Rather than make people participate in thought experiments about the future social impacts of technology, let's create the artefacts, let's bring people directly into that world by bringing that world to them. And then they can have conversations about it. And that applies not just for project teams, but also for users, for communities, for other stakeholders whose views we might need.

Nathalie Post  
Yeah, that's amazing. Could you actually talk a bit more about those different types of methods? I mean, you mentioned the provocatypes, and I love that word, by the way, really, really great. But I know you also talk a bit about unintended consequences and externalities in your book, and the types of methods you use to uncover those. Could you talk a bit about those?

Cennydd Bowles  
Sure. Well, there's quite a lot I could say on that. Let's talk about externalities first. I'm sure your listeners know, but an externality is essentially an effect that falls upon someone outside of the system. Passive smoking, for example: you don't buy the cigarette, you don't smoke the cigarette, but if you're next to a smoker, you inhale. So my views on this have actually shifted a little bit, or maybe become more sophisticated, perhaps, as I've learned more, and things are starting to change a little bit in the space. I use something called the actor triangle, which I sort of adapted from Nordkapp's Actionable Futures Toolkit, which is a way to essentially tease out who might be some of the stakeholders we were ignoring. Again, maybe a weakness of technologists, or particularly designers, is they focus on users to the extent that they really overlook the non-users of a system who are still impacted by that technology. And they overlook systemic impacts as well, impacts on sort of abstract concepts like democracy, freedom of the press, justice, whatever you might want to throw in there. And of course non-human life and the impact on the environment; those are generally overlooked too. So the point of the triangle is to try and tease out some of those extra stakeholders, those indirect stakeholders who might be impacted by our work. But I think there's a change going on now, a sort of subtle shift in what's happening, particularly in the tech ethics space. Some people have said we're actually in the third wave of tech ethics already; it's only taken sort of four years to get to this point. We've had a sort of philosophically driven wave, we've had a technically driven wave, mostly around algorithmic bias and mitigation methods for data scientists and so on. But then a third wave, which is predicated much more upon inclusion, diversity and justice, and so there's overlap with the social justice movements, plural. It's looking at not just ensuring we listen to people from underrepresented groups in the design process, but actually actively including them in our product strategies, in our design sessions, bringing them into the design studio, into the room where the decisions are made. And so I've learned a lot even in the last couple of months from people working in product inclusion and design justice; I'll mention Annie Jean-Baptiste here, and Sasha Costanza-Chock, folks who are really helping to push that forward. So actually, on the topic of externalities and ensuring that we understand a more diverse group of people and communities, I would refer to them as the experts on that, and I'm trying to learn what I can to update my own practices and to bring those to my clients. With the unintended consequences, this is where I lean mostly on the futures thinking and strategic foresight methods. I use things like the futures wheel, sometimes called impact mapping; I use that quite a bit. What else would I use? I actually even just recently, with a client, started using the Gartner hype cycle as a way to give a framework for predicting potential timelines for technological developments and put them in a future historical context. Okay, this might hit a trough of disillusionment in 2024. Why might that be? What might be the social impacts of it that caused technologists to lose faith in this technology?
So you sort of attempt to preempt where that technology heads next. And this is where the link to all the valuable work of people working in foresight and futures comes in. I've just started to read Scott Smith and, you know, Changeist, that crew; they just brought out a book called How to Future, which gives a really good overview of some of those techniques. So yeah, plenty of wisdom contained within those.

Nathalie Post  
Yeah, great. So I want to talk a little bit more about the work that you do right now with organisations and putting ethics into practice. Because I think we're all quite aware that a lot of organisations set out these ethics guidelines or codes of ethics. First of all, I really want to hear your thoughts on those. And then I'm wondering how you take those guidelines or codes of ethics, or even an established oath, and how you go about bringing them into practice.

Cennydd Bowles  
My thoughts on codes of ethics or principles have changed a little bit; I was quite hostile originally. And my thinking on it has developed because of the role that people place them in. To be honest, it's almost a default first step now for a lot of people. Pretty much every time I engage with a company, I mean a potential client, they say, I think what we need is some kind of guidelines. And by themselves, they aren't a solid thing, right? Guidelines and principles can be very useful, but they are the result of proper investment into responsible innovation and ethics. And I think a lot of companies see them as the solution: publish some guidelines, problem solved. And this, of course, is where we get into the idea of ethics washing. A lot of companies want the performative aspects, you know, they can publish something to say we're taking ethics seriously, but then when tricky decisions are taken, those get very quickly routed around. So I think they can have utility, but I'm absolutely not a big fan of the sort of codes of ethics or principles that are created by one enlightened expert. This has blighted the field of design the last few years: you've had senior, experienced designers saying, well, I've diagnosed the problem for our industry, and if we just follow my 10-step plan, then we will now practice ethically. And that's not how it works, because whoever these people are, they don't recognise, of course, that those principles are riddled with their own biases. And who on earth appointed them the ethical arbiter? That, for me, is not a good way to go about it. I'm much more of a fan of the consultative codes of ethics you get from the IEEE, for example, who have Ethically Aligned Design, or the ACM professional body's code of ethics, which has been updated recently. It's very strong. Anyway, translating those into practice is terrifically important; I think they're a necessary but not sufficient tool. So I'm working with companies and saying, yes, okay, we will create something like that, because we need to document what we mean by certain values, by certain principles. We need to have a north star that we navigate toward, but that needs to be translated into practice through tools and techniques and an ethos of responsibility. And so typically I'm marrying that up with some kind of playbook or some kind of process changes, recommendations, maybe some ethical infrastructure, as I would call it, some more structural changes, to ensure that it's not just checklist ethics, right? Where you do your usual design and development process and then, just before you hit the button, you say, oh, let's just look at the guidelines. Have we violated any of these major harms, or these promises that we've made? No? Good. Okay, let's release it. That isn't the kind of work I'm interested in doing, because it's not going to effect any positive change. So yeah, you have those as overarching definitional things, and, assuming you publish them publicly, which I think you should, they have a useful role in making sure that you're accountable to the public. Because you can put out a statement saying, here's how we operate. But you've got to give people tools to actually do their work differently.
And you've got to have the incentives and the infrastructure in place to ensure that process is fully applied.

Nathalie Post  
Yeah. And so I'm curious to hear how you make sure that that process is fully applied. So how do you make sure it's embedded rather than this afterthought checklist?

Cennydd Bowles  
I mean, this is the tricky thing, because any project will always grind up against the reality of what's happening inside any company. In 15 or so years of consulting, maybe a bit less, but plenty of years of consulting, I've yet to be on a project that's perfectly aligned to change everything that it ought to be able to change. Projects typically have a mandate: you can look at this, but you can't look at that. And that can be tricky, right? It can cause disillusionment among the team when they realise the boundaries they've been given, when they start to bump up against the walls and say, well, here's the change we can effect, but while these other practices are going on, or while these other incentive systems or business models or whatever it is are in place, and we can't challenge them, then the efficacy of what we do is going to be low. Nevertheless, I'm never too keen on being told things are out of bounds. As a consultant, and as an ethics person, I suppose, I think I have a moral duty to say, well, I understand that, but from my professional perspective, these issues are also impacting the success of what you want to do here. So you have to question them. But you also have to be realistic and pragmatic, and know that you're not going to be able to go into Google, for example, and say, you need to end the advertiser funding model for Google, because you'd ruin that company. And so it takes a good balance of ambition and optimism, but a bit of sort of pessimistic pragmatism as well. A phrase I find myself repeating frequently is 'pull every lever'. At the moment there are so many things we need to work on that we need some micro changes, some little tweaks to processes; we need some designers to start moving buttons to different places so they're not dark patterns, things like that. And then we need people looking at: is capitalism an ethical system? Huge political questions that I'm not really equipped to ask. So you have to understand those boundaries, and then you have to push gently against them and try to mobilise a deeper discussion than you were originally given licence for, I think.

Nathalie Post  
So for people who are listening to this, and are maybe having these types of questions or wanting to facilitate discussions around ethics within their companies or within their teams, how would you recommend they go about this? How do you start having these conversations? And also, what is a suitable frame, let's say, within an organisation to have these conversations in?

Cennydd Bowles  
The easiest way to start having these conversations is just with simple prompt questions, and there are plenty of those published. Most of them, frankly, are not very good, but there are a couple that I would recommend. I would look at something like the ODI's Data Ethics Canvas, which is a very nice PDF, but really it's a lot of prompt questions under certain categories, and they're well constructed. And I've looked at Ethical Explorer as well, which is a toolkit that was released about a month or two ago. A very nice sort of starter pack for what I would call risk mapping: looking at the potential damage that you could do, or the potential ethical issues that might arise in your work, again by category, so you have disinformation, surveillance, I think there might be one for physical harm. Those are good starting points. So yeah, arm yourself with a good set of prompt questions just to begin that conversation. Because you'll find some people won't have considered them, but you'll also be surprised, I think, that some people in your team will say, you know what, yeah, that's been bugging me too. What should we do about that? So you start that discussion very simply and very accessibly. You don't need to lead in with huge amounts of ethical theory, you don't need to hire a philosopher on day one. Start those conversations, and then see where that takes you, see what your next steps should be. And in terms of the right point to interject that kind of conversation, the short answer is whenever, right? There's never a bad time to do it, even after you've launched. Okay, it's better to ask those questions before, but if you ask them after you've launched, at least you're asking them, at least you're being alert to the possibility that you might be able to do better next time. But the most obvious spots are either within design critique, so my background is design, so that's a ritual I know well, where you essentially tear each other's designs to shreds in a respectful manner. You're already asking difficult questions about: is this actually the right solution? And what might this cause that we didn't expect? So design critique, if you're a designer or if you're lucky enough to be in those rooms and those sessions. Or the other one is probably sprint demos. I'm assuming a lot of tech teams are going to be working in an agile manner, they're going to be working in sprints, and then every week, or every two weeks, whatever it is, you will have a demo: here's what we've built. That's a great time to ask those questions, right? Say, okay, what if people use this particular tool in a way we hadn't anticipated? Could they use it to harm other people? Could they use it to harass other people? What might be some of those negative consequences or externalities that might result from this? So there is something to be said for sort of piggybacking on existing rituals in companies and just dropping in those questions. That's where I'd always recommend starting off.

Nathalie Post  
Yeah, yeah. It's really interesting that you say that, actually, about piggybacking on these agile rituals; I hadn't even thought about it in that way. But it's a really nice way to actually use existing structures, rather than creating entirely new meetings and sessions and so on to have these conversations. I think that's a really, really nice way. And I wonder a bit more about the type of work you do with organisations. Because I can imagine it's quite a process as a consultant to embed yourself within an organisation and their context. But how do you go about creating that learning journey for the companies you work with, to start asking these questions and start embedding it in their process?

Cennydd Bowles  
Typically, when I have a potential client reach out, it's usually because they've come across my work, but there's also internal appetite. It's sometimes driven by a senior practitioner, so not a manager, but someone who's respected and listened to within the company, who has said, you know, I think we need to listen to Cennydd, we should get him in for a talk, or get him in for some training, or just have a chat with him about how we handle this, as a response to building appetite and mobilisation inside the company. And we've seen a lot of major tech firms, very large tech firms, have significant amounts of employee activism and mobilisation around ethics and responsibility. Google are the most obvious example, but that's now spread: pretty much every major tech firm has had that, and it's now trickling down to mid-sized tech companies as well. So usually there's that sort of appetite: right, well, let's talk to Cennydd and see what he would recommend. A lot of the time, then, there are existing people you can draw on. So what I tend to recommend is trying to assemble an internal project team, just a virtual team; you don't need to hire a chief ethics officer right off the bat. Find people who are interested in this, try to immerse them in a little bit more theory, a little bit more of a contextual understanding of what's happening in the space. And then my job as consultant is much more as a facilitator, to help that team be as effective as possible, by giving them some of the knowledge, giving them some of the tools and techniques, and just kind of corralling that work and helping to shape it in the right way. But it has to be contextual. I don't have the luxury, and it would be easier as a consultant, if I could just come in and say, here's my patented process that's been proven to work in 25 different sectors. But it doesn't work like that; every company has their own context and culture. So you have to tailor it accordingly, which is why the power has to rest internally with that group and not with me as the external person. So with clients I'm working with at the moment, we're moving from deep immersion and training and upskilling to a process of: okay, what are the interventions we want to make within the wider company? What are the right techniques for that? Do we need a set of principles? Do we need a set of tools and playbooks and things like that? Do we need risk mapping exercises? Do we need internal training? Do we need a comms strategy, and so on? So there's no sort of template. But typically I find companies start to engage with me because they want a talk or they want some training, and then I sort of start questioning: what's the change you actually want to effect? It might be that a talk or training is a good start, but it might be that actually that's not going to give you the sort of transformation you really want, in which case, let's talk about what that deeper relationship looks like.

Nathalie Post  
What have been your most significant learnings? Because you talk a bit about how your view has either changed or expanded a bit over these last few years working in ethics and tech. But what have been those learning moments where you were like, oh hey, this is eye-opening, let's look at it differently?

Cennydd Bowles  
I would certainly say, back to what I was mentioning earlier about the product inclusion and the design justice perspectives, something I've identified as a sort of historic weakness in my work, for sure, but actually, frankly, the work of a lot of the field, is there's been too much technocracy. It comes from an almost totally understandable conclusion: we've got ourselves into kind of a mess here as a field, and so it's on us to get ourselves out of it. Which, okay, at least you're owning up to some moral responsibility there, and you're trying to say, let's change things. But these decisions are too important to be left to just, you know, well-educated or technically literate people. They should also include a far wider cross-section of society. So that's been one of my significant learnings this year: we've been failing on that, and we need to involve the public in these decisions, we need to involve underrepresented groups and communities in these decisions. I'm trying to think what else has shifted in my own thinking. I think I've been surprised by the rising literacy around these issues in a lot of companies. The conversations I'm having are becoming much more sophisticated, which is fantastic, obviously; that means hopefully we're progressing, and it makes my life easier as well. We are somewhat getting past the point of, well, I think a lot of companies frankly hoped this ethics thing would blow over. And it's not going to; that ship has sailed. Tech workers have realised they are powerful in aggregate. They're expensive, they're hard to replace, they directly affect what gets built and what doesn't. And so that course is not going to be reversed. So now I think we're seeing companies saying, okay, well, we can't just placate them by publishing some guidelines, we do actually need to make some kind of change. And they're also connecting that to consumer demand. There's now a lot more evidence that consumers are starting to see the tech industry in a less positive light than they did, going back five years even. And so there are stronger commercial reasons to make those kinds of changes: it's not just to keep your talent happy, it's not just to avoid journalists and newspapers writing terrible articles about you; there's actually a legitimate positive benefit. So the conversation has shifted a little bit away from just a pure risk-based mindset to one of saying, okay, this is actually a benefit: we can positively differentiate against competitors, we can create better quality products, we can use this as a constructive influence, if you like, within a company. There's a long way to go before that's the default mindset, of course, but I suppose that's been one of the biggest shifts that I've seen in how clients think about this stuff.

Nathalie Post  
Yeah, yeah. And so I'm also wondering, this third wave of ethics: is climate change a part of that? Do you consider climate change an ethical problem, and part of these ethical dialogues, so to say, that you're having?

Cennydd Bowles  
I don't think climate is anywhere near well enough represented in these conversations. While I think this third wave, if we're calling it that, is terrifically important, perhaps there's a fourth wave that's more climate-orientated as well. I'm not saying we do away with the current wave, because as I say, it has a hell of a lot to tell us. But climate is the moral issue of the century. And if we solve climate, as much as we're ever going to solve climate, if we overcome the challenges the climate crisis has thrown at us, then frankly it's very likely that along the way we also scoop up a lot of the other moral problems of society and improve things along those axes as well. That's not guaranteed; there are climate solutions that are not just and not equitable. But if we ignore the spectre of climate crisis, then, to be honest, a lot of these smaller fixes... you know, it's important to fix biased algorithms, for example, but if a third of the planet is uninhabitable by the end of the century, then frankly that doesn't matter quite so much compared to that, right? So I do worry about what we've been focused on. Actually, in the long term, the threats that get the attention are, frankly, quite daft: worries about superintelligence and all this sort of stuff. They're interesting thought experiments, and there's lots of interesting blog posts to be written about them, but it's like worrying about cholesterol while you're tied to the train tracks, right? I mean, there's an enormous threat that is certain. There is no backing out of the climate crisis now; it's going to happen. The question is how bad things are going to get, and I think we need to recognise the severity of that. Now, there is some very good mobilisation, there's very good work happening within tech around the climate crisis, climate action groups within companies and among practitioners, but it's not really being connected to the ethics movement in any meaningful way. And that's a damn shame, because there's a lot we should learn from each other. So I've tried my best to learn more about those communities and how we sort of tie them together, but I'm just one person. So we've got a hell of a lot more work to do on that.

Nathalie Post  
Yeah. So is that what you think is next in ethical tech, let's say? Or what is next?

Cennydd Bowles  
Yeah, I mean, possibly. I don't think it's likely that that's going to be the next significant shift. I think more likely is a pretty obvious trajectory: now there's a lot of interest in this field, it's starting to attract some people who are less capable, less knowledgeable about it. The question for those people is how they address that. Do they want to address it? Are they going to take the time to actually learn about these issues and consult the literature that exists, for example? Or are they going to just practice charlatanry, because there's money available? And my hunch is it's probably going to be the latter. So there's going to be a crisis of credibility at some point within this field, a sort of techlash-lash, if you like, where underqualified people are found out. Look at the mega consulting groups, for example: so many of them are scrambling to implement responsible innovation practices and departments. Some of those I respect, some of them are actually doing it the right way, and some of them I do not respect. So the results are going to be very mixed, and that's going to cause this sort of problem. So I suspect we're going to have discussions about how we tell those groups apart, how we actually determine someone's capabilities and competence in this field. There's going to be talk of certification and licensing and chartership, as there always is with these kinds of things: do we need, like, a certified ethical technologist, that kind of stuff. And it's still going to hit unresolved questions about balancing the priorities of the short-term needs of industrial growth capitalism and the longer-term social impact that technology causes. So maybe the fourth wave is also even more explicitly political. You could argue this third wave is getting pretty strongly political, or politicised. But on the assumption that we actually start to invoke green new deals and that sort of more profound economic change in the coming years, then I think that sort of political interface is going to become more explicitly important as well.

Nathalie Post  
So as we're nearing the end of our time here, I'm curious to hear: knowing all this, and what you just said about what is actually next in ethical tech, and what you see changing in terms of maybe those ethics licences or the politicisation of these things, what can you recommend people do with this information? How should people take this forward?

Cennydd Bowles  
Well, it's always about listening. You know, I think I'm fairly knowledgeable in this field now; I certainly feel a lot more knowledgeable than I did six months into it. Six months in, I was realising just how little I knew, and so had to pick that up. And that's what, five years into that journey? I feel like I'm picking up things, but this community is shifting quite rapidly these days. You have to carve out time and space to learn from experts in this field. What we don't need is more of this kind of, almost sort of, tech neocolonialism. Let me have a little rant: Silicon Valley has this infuriating habit of believing themselves to be the first brave explorers on any new shore. And all these experts have been watching from the sidelines, having dedicated their entire lives to it, and seeing us just run into these same old walls again and again. So take the time, carve out some space, to listen to these people, to bring them into your brain, essentially. But we also desperately need to translate this stuff into practice, and we need to start creating case studies; there really aren't enough case studies for this stuff. So we need companies, we need practitioners, to start making some of these changes, even in a small way, inside their own companies, to listen to each other, to take action collectively. Because it's very hard to say no to a thousand people, and it's easy to say no to one person. So find allies within your community, within your organisation, and say, we can do something and we can push some kind of change, and then tell the world what you did. Tell the world about the effect that it had, because that then bolsters the entire movement. So I'm on a thing right now where I'm trying desperately to find more case studies, so that we can use precedent to convince people this is worth it. So don't just get wrapped up in improving your knowledge; put it into action as well.

Nathalie Post  
I actually really love what you're saying about these case studies. Because I think anyone who's ever looked into the kind of ethical questions that we're facing, and things like bias and what is happening in the tech world, you often see things that went wrong, and articles about how it went wrong, but not a case study of how you can take an ethical question, have these conversations with moral imagination, and get to a certain solution. I think that's a really good point you're making, that we need more of those, rather than 'this is what went wrong'.

Cennydd Bowles  
Yeah. And that focus on what went wrong is one of the things that keeps us in that risk mindset, that this is about mitigating risks and avoiding all the bad things that could happen. And if we end up concluding that that's all ethics offers us, that's kind of a sad loss. That really isn't the real potential of this body of work.

Nathalie Post  
So to close this conversation: if people want to learn more about you, your work, your book, where should they go?

Cennydd Bowles  
Sure. Well, I am very easy to find, because I have an unusually spelled name, so I sort of dominate the search rankings for it. Just Google Cennydd and you'll find my website or my Twitter; that's a good place to start. My book is called Future Ethics, and you can find more about that at future-ethics.com.

Nathalie Post  
Great. Thank you so much, Cennydd, for this conversation. Really, really enjoyed it.

Cennydd Bowles  
Great. Thanks a lot for having me.
