Dynamics of AI Principles

Cansu Canca | Founder and Director, AI Ethics Lab
We talk about the work Cansu does at AI Ethics Lab, and in particular their latest initiative, “Dynamics of AI Principles”. We also discuss the role of AI ethics principles, the difference between core principles and instrumental principles, and how organisations can go about operationalising AI ethics.

Nathalie Post  
Today I'm joined by Cansu Canca, philosopher and founder of AI Ethics Lab. Cansu has a PhD in philosophy and has specialised in applied ethics. At AI Ethics Lab, she leads teams of computer scientists, philosophers and legal scholars to provide ethics analysis and guidance to researchers and practitioners. She also serves as an ethics expert on a range of ethics, advisory and editorial boards. In this episode, we dive into the work that Cansu does at AI Ethics Lab, and in particular their latest initiative, Dynamics of AI Principles. In this context, we talk about the role of AI principles, the difference between core principles and instrumental principles, and how organisations can go about operationalising AI ethics. We also discuss the flagship model of AI Ethics Lab, puzzle-solving in ethics, which sounds fun, right? So let's get into the episode.

Hi Cansu, and welcome to the Human Centred AI podcast. It is really, really great to talk to you today. I have been looking forward to this for a while, so I'm really glad we finally get the opportunity to record this. But for our listeners who may not know you yet, could you give a bit of an introduction about yourself and your background?

Cansu Canca  
Hi, Nathalie, it's great to talk to you, finally, after all that back and forth. I'm Cansu Canca. I'm a philosopher by training, and I run AI Ethics Lab, which is an initiative based in Boston working on questions of AI ethics, both on the research side and on the consulting side, helping practitioners in academia and in industry deal with the ethical questions they face in their everyday work.

Nathalie Post  
How did you actually end up focusing on AI ethics? Because you're a philosopher by training. So what did that journey look like?

Cansu Canca  
So I started off as a philosopher working on applied ethics. My main area from the very start was ethics and health. I had been working on questions related to healthcare: the patient-doctor relationship, ethical questions that arise between patients and doctors, decisions within hospitals, but also the policy level, how health policy should be organised and how decisions in healthcare should be made. I started looking into technology while I was working at a medical school, actually, because I started realising that we get a lot of new technologies, especially technologies with AI systems, and when we talk about all these different ethical questions, one aspect, namely what kind of ethical decisions go into the technology itself, is always lost. The technology is treated as given, and what you do as a physician or a healthcare provider is what we discuss, which misses a huge part of the conversation, increasingly so as technologies become a bigger part of healthcare. So healthcare was the gateway for me to the ethics of technology and the ethics of AI. From there, I realised that there are so many interesting questions that are very specifically about AI ethics, so I moved a little bit away from health and started focusing on the ethics of AI. But I still love the health questions, and the sweet spot for me is where healthcare ethics and AI intersect: that's a lot of fun, interesting, exciting questions.

Nathalie Post  
Yeah. And so can you tell us a little bit more about AI Ethics Lab, what you do there, and your approach?

Cansu Canca  
Yep. So, basically, the idea of AI Ethics Lab started really early. I started working on these questions in early 2016, and towards late 2016 I had to figure out: what step do I need to take? Should I get a job in academia working on AI ethics questions, or should I go to industry? But it turns out that was really early, because when I talked to leading academic institutions and leading companies, no one really wanted to talk about AI ethics. Seriously, the word was slightly getting around, but it was not taken seriously yet. So AI Ethics Lab basically did the thing that I thought should have existed back then but didn't: a centre where you have interdisciplinary research and, because it's applied ethics, where you apply this research and these questions. When somebody who is not a philosopher has a question, you can help them, give them advice, help them with ethics analysis, so that they can quickly integrate it into their work and move on. So instead of slowing down the process, you make ethics really dynamic and fast, and a part of the innovation process. That's the idea behind AI Ethics Lab, and that's what we do right now as well.

Nathalie Post  
Yeah. And I find it so fascinating that you really started in 2016, 2017. Because right now, definitely everyone in the AI space knows about AI ethics and AI ethics guidelines; it's much talked about. But it was definitely very different a couple of years ago, when it was really contained within a smaller research community, I would say. It's really, really interesting.

Cansu Canca  
Yeah. And in fact, I think a lot of the time we forget this. Because we talk about AI ethics so much, it sounds like we've been talking about it for a long time, but it has only been in the spotlight for the last three years or so; 2016 was not very busy in that sense. But of course, let me make it clear that there have always been people in academia and in industry concerned about these questions and doing great work on them. Maybe they did not call it AI ethics but machine ethics, computer ethics, ethics of the internet; it was more or less the same questions, just evolving as the technology evolves.

Nathalie Post  
Absolutely. What I do think is quite a recent development of the last couple of years are all those AI principles being published by organisations. What started with a couple became many. And you recently did research on that topic, Dynamics of AI Principles. Could you tell us a little bit more about that, and why you decided to create the toolbox?

Cansu Canca  
Sure. So the very first idea: we started this actually quite early, it just took us a while to fully complete and achieve what we wanted to put out. We started, I actually don't know exactly when, but it's been easily more than a year, perhaps two years already. The main idea was that these documents kept coming out, and the question was: how do they fit together? How many are there in total? What are they talking about? Because you get to see them one by one, but you don't actually get a sense of how they all fit together. So the initial idea was: how do you make sense of this, and can we do something that helps us? We wanted to gather all of the existing documents, add a little summary for each so that we can quickly understand where they are going, and see them on a map, because there was also the question of where this conversation is more prominent, where it is happening more often. But then, as we were continuing, we realised we actually want to be able to differentiate between organisations, differentiate between regions, and compare all the documents, so we kept adding new functions. And at some point we also realised something about the principles themselves. There's one thing that's very interesting for anyone who comes from ethics and health, from bioethics, which is that the original four principles were published in 1979, and they have been used in health and health ethics all this time with no real change. And suddenly, when it becomes AI ethics, everyone publishes their own principles. So it was weird: okay, but what's different? And when we looked through, we realised that there are actually a lot of conceptual problems. The main conceptual problem is that there are principles that are just instrumental to reaching certain ethical purposes. Like privacy: we don't value privacy for itself, we value it because it allows us to make autonomous decisions, it allows us to control our lives, and it might protect us from harm or unfair treatment. So it is instrumental. But when you look at these documents, you realise that a lot of them just list these instrumental principles, privacy, transparency, explainability, without working out how they fit together. And this is very important, because when it's a hard question, there's always a trade-off involved, and in order to be able to make the trade-off, you have to understand what serves what purpose. So for that reason, I wrote an article on this, which is now published, and we added another tool, called the Box, to this whole system, where you can see how these different principles fit together conceptually and you can use them. It was a little bit of a long explanation.

Nathalie Post  
No, but I think it actually gives a good understanding. Because, being in this space, I constantly see all these principles being published, and I'm always very divided in my thoughts about them. Because in the end, it's obviously about operationalising them and putting them into action. So what are your thoughts around that, around the value of these principles?

Cansu Canca  
I think the principles are very useful. But if you try to make them do more than they can offer, they can become harmful. And we've seen this in bioethics as well, so it's not special to AI ethics. Basically, principles are sort of like a checklist: they remind you of the important concerns, but there is no hierarchy between principles. And the hard questions are, for example, cases where the core principles conflict. For example, you might have a case where you need a lot of personal data in order to create a healthcare tool, and you cannot get consent, and you cannot anonymise, let's say. So there you have a clear problem with autonomy. But if you drop the project, and you think this healthcare technology would be extremely useful, you are now giving up the benefits. That's a hard question, and the principles don't give you a way out of it. But until you hit this hard question, the principles, both core principles and instrumental principles like privacy, transparency and accountability, are a great way of checking what you should think about. And when you hit the hard questions, there you have theories, because theories are complete: they give you a hierarchy, they tell you which one to choose in which situation. So I think the problem happens when these principles are used beyond their purpose, almost like a checkbox justified by taste. That's not a good idea.

Nathalie Post  
Yeah. And so when you hit those hard questions, can you elaborate a bit more on the theories that you just mentioned, and how you actually bring that theory into practice as well?

Cansu Canca  
Yeah. Let me actually start with something I missed in my answer to your last question, which is: how do you operationalise? That's where we wanted the Box to come in. The Box that we created as part of our toolbox, not just any box (not the most creative name, I suppose, but it's a simple one), gives you a checklist, an interactive checklist. As you think things through, you can evaluate the project or the product by looking at how it endorses and implements certain instrumental principles. What did you do about privacy? What did you do about transparency, and so on? It's a checklist. That doesn't mean you have to fulfil all of them, but if you haven't fulfilled one of them, you should think about it: is that a bad thing, does it have consequences, and can you make it better? So it starts with that. And if certain instrumental principles are not, or cannot be, fully implemented, then the Box also helps you think about which purpose, which actual ethical goal, they serve. Is it about autonomy, is it about harm and benefit, is it about justice? Which of these core principles is at risk? So that's the starting point for operationalising the principles. But then, as you say, when you hit the hard question, when you have a real conflict, especially between the core principles, what do you do? In ethics, which is a part of philosophy as a discipline (I say this because it sometimes gets confused), we have a number of moral theories that give you ways of reasoning about this. A utilitarian theory will tell you to take into account all the consequences and weigh them in terms of how much harm they could possibly do, and how to reduce that harm to a minimum. That would be the utilitarian way to go about it. A Kantian theory, on the other hand, will focus more on individual autonomy and people's control over their lives. Sometimes they overlap, so you can say: okay, this is the answer according to all the theories. And we have other theories, of course; these are just two. Justice, for example, is tricky, because we have multiple theories of justice. So sometimes all the theories align and you know what the right thing to do is. Maybe it wasn't obvious at first, but once you do the ethical work, you get to that answer. A lot of the time, though, if the question is really hard, they will not align; they will point in different directions. And oftentimes people think that's a big problem, because it seems to mean that ethics just doesn't give you answers. I argue that it is not a big problem, because even if you choose one of those equally justifiable answers, you are way better off than choosing any other answer. You are still narrowing down your decision set drastically, to maybe two or three options, and choosing from among them is much better than making any other decision. So the way to think about ethics is: it can give you a very clear answer, or it can give you options. And when you have the options, you can go back to your principles and ask: what kind of a company are we? What do we actually prioritise when things conflict, when there is no way out?
Do we always prioritise privacy, because we prioritise individual control? Or do we always prioritise the well-being of society, because that's what we are going for? That is when the principles make sense: you are endorsing them as a company, so what do they mean for you?

Nathalie Post  
Yeah, yeah, I think it's really interesting what you're saying here, because I'm kind of wondering: do you generally see that organisations are already prioritising their principles? Or are the principles left unprioritised, and is this something they only face the moment they hit such a question?

Cansu Canca  
So far, I have not seen any organisation that does proper prioritisation. And oftentimes it's even the case that with these sets of principles that they publish... having a set of principles should mean that you're not endorsing the others, right, whatever is excluded. But I don't think that's even what they mean. I think it's more like whatever word cloud feels closest to the company, that's what the principles usually are, rather than a very systematic way of thinking and saying: here are the ways they can conflict, and when that happens, we are going to go with this. So I have not seen this yet, in any company, myself.

Nathalie Post  
Okay. Yeah. And so I'm kind of wondering, based on that: do you think organisations should set out their own principles for AI or tech, or ethical principles generally? And should they do that based on these core principles and the distinction between those and the instrumental ones? How should they go about it? What is a good way to do this?

Cansu Canca  
Yeah, I think this is a good question to start with. And the way to do this is, yes, first of all, they should have principles, because, again, if you use them well, I think principles are useful. Oftentimes they are just window dressing, and then there is no meaning, of course. But a good way of using them is to start with a really good discussion in the company, trying to figure out what you most strongly endorse. By the way, it's clear that all of us should want to endorse all of these principles; it should not be "Oh, I actually don't believe in accountability." That's why, when you look at the Box, we include all of them, because all of them matter. But within a company, the process should be that you start by understanding the principles and seeing how they fit together, and from there you apply them in your everyday work. And as you do this, it becomes more and more clear, with use cases, what decision you make. Once you make a decision on a particular case, you have already made a decision about which principle you prioritise, and that becomes your precedent. Once you have that precedent, the next time you have a similar question you will have guidance, but you will also have a chance to look back and check whether that was the right decision, because it could always turn out that the decision was wrong and now you want to make it right. So it gives you this back-and-forth way of strengthening your principles and understanding them more and more, with a variety of cases, real cases, not just hypothetical ones, that you have been struggling with within the company. This systematic way of doing it will eventually give you a really good playbook: for the developers, to know what this company stands for; for the ethicist, to help the developers or the leadership make decisions; and for the leadership, to say, "This is what we've been advocating, so either let's go with this or, if we are going to change course, let's do that carefully."

Nathalie Post  
Yeah, yeah. And so is this the type of work that you're also doing with organisations, as in being embedded within their teams and working with them very actively on these matters?

Cansu Canca  
That is the type of work that we are pushing for. I cannot in all honesty say that this is something we do all the time; what we do is more like portions of it. Companies might come and ask for our feedback on their principles, or on their general strategy about AI ethics, or on a specific case, a project or a product. So this whole thing, which should happen in a much more systematic way, we currently get to do in bits and pieces with each company, because it's still not standard practice within the industry to really go from your ethics principles to your ethics strategy, to analysing the projects or products when you have a hard question, to talking to the experts. All of these things are not clearly defined within companies yet. And by the way, I should not single out industry; in academia as well, you would very rarely hear of a computer science department reaching out to the philosophy department with ethics questions. So this is not standard yet. It is something that we are very, very strongly trying to change. Because this is not just "it should be this way"; we now have a chance to make it such that ethics becomes a meaningful part of the structure. You could say that it was never a meaningful part of business structure, but if it can become a part of technology, it's going to become a part of business as well. So, if you can manage it, it's a way of making ethics stronger in society, in some sense.

Nathalie Post  
Yeah. Yeah. And I must say, I really appreciate the honesty, or transparency at least, about the work that is going on, because if you look at what is being published, that often gives you a very different view from what is actually happening in organisations. And the same holds for emerging technologies in general: there's quite a big gap between what is being published and what's actually happening. But I'm very curious about one other piece of work that I would love to discuss with you: your puzzle-solving in ethics model. I love the name, because it just sounds fun. So I'm curious what led you to create this model. And, well, first of all, what is it actually?

Cansu Canca  
So the idea is, let me start with what it is not. I think a lot of the time, when people in practice hear "ethics", they hear something like a policing system: you have to make sure they say yes, that they approve, and if they don't approve, you're in trouble. Somebody is always pointing fingers at them and telling them they've been wrong. That is the structure we want to get away from. Because moral philosophy, philosophical ethics, is really about having a hard question where you don't know what the right thing to do is, and trying to figure out what the right decision is, what the right action to take is. And you try to do this taking into account everything, including the business cost, the time, everything. It's not thinking about ethics endlessly at your leisure; it's real life: everything considered, how can I make the best decision? And that is like puzzle solving. I love ethics; it is very exciting, very logical and analytical. A lot of people don't think of it that way; they think of ethics as something more literary, which is odd, because it's very logical, and logic is basically the basis of a lot of computer code. So it's very systematic and very interesting, because you are honestly trying to figure out what the right answer is in a given situation. And that is like puzzle solving. The reason I wanted to put it in the name is that this is how I want people to approach it. When you have a problem, you don't go for approval; when you have a problem, you go for help to solve it. We are there to help you figure out what the right thing to do is. And when I say "help", I mean it literally. I'm not trying to downplay it; it's not that we simply tell you and call it helping. No, because I have to get feedback from you on what is technologically possible. I can tell you what you should do ethically, but then let's talk about what can be done technically, what the costs are, and so on, and incorporate that feedback back into our reasoning. So it's very much teamwork, and it really works when both parties are trying to find an answer, find a solution. And it's so much fun.

Nathalie Post  
Can you maybe give some examples of how this worked in practice, maybe in some of the work you've done? Is there any example that you can share?

Cansu Canca  
Let me just give you an example that was mostly at the idea-creation phase, so it's easy to share. The question that was raised was: can you have a wearable that's connected to a lot of IoT devices, and therefore gets a lot of information about your life, about your bodily functions and all of those things, so that eventually it has a very good understanding of you, your body, your lifestyle, how you react to things? And the purpose of this wearable was not just health; the idea behind it, when the question came to us, was to help you live a good life. So that's the question that came to us, and it was a big and messy puzzle. Because, first of all, you have all the questions about what it means to lead a good life. What do we want to help people with? What kind of services do we want people to get from such a device? Presumably this changes a lot from person to person because of your preferences and values. It could be unhealthy for you to work long hours, but if your purpose in life is, let's say, to find a vaccine for COVID-19, that is going to take long hours; should this thing keep beeping and telling you to go to sleep? So we had these different ideas: should this thing ask you a lot of questions about your preferences and your values? But then what kind of system would judge those values and say what is worthwhile? Because it should not just go, "Okay, this is your goal, I'm going to help you achieve it", since your goal could be bad for society. Your goal could be, I don't know, "I want to defraud people in the most effective way"; preferably this thing should not help you do that. So we had a lot of discussions about what a product like this should be like, and basically we helped them change the goal to a more well-being-oriented one, because anything beyond that was not feasible, and possibly harmful. Doing that also introduced questions about who would analyse the data and give health-related advice, because there you have a question of gatekeeping. Think about what happened with newspapers: should anyone be able to post news? Should anyone be able to tell you what is good for you? Should it be just anybody giving you, say, diabetes advice, or should we make sure that it's at least physicians? And they came up with ideas for some gatekeeping, for expertise and safety, basically. So all of these small things were long discussions, where we told them what the ethically justifiable options are, they told us what they can do, and from that, which routes to take and which ones not to take.

Nathalie Post  
I think it's super interesting that you shared that example, because it gives a better idea of what kind of questions actually arise and how you deal with them. I'm also wondering, now that you've wrapped up your Dynamics of AI Principles initiative, what's next for you? Are there other topics that you feel you're going to dig into? What's on your radar or roadmap, let's say?

Cansu Canca  
Yeah, so there are always so many extremely exciting questions that we cannot go fast enough to finish one project and jump into the next. So the only problem is how much we can keep up with; it's never that there's nothing interesting left to do. We already have a project running on research ethics for AI research with human subjects. We have already written online teaching modules for this, for the CITI training programme in the United States: we wrote three modules on the topic and did a webinar. Those modules and the webinar are much more introductory, and we are now detailing this account and writing a white paper, a policy paper, about what should be done. Here's a mini explanation of what's going on. We have research ethics, and research ethics is one of the most widely used areas of applied ethics. It is very strong: it has its rules, it has its regulations, and it has its established ways of practice. The main purpose there is to ensure that the research does not harm society, and most importantly, if there are human subjects involved, that it does not harm those humans. AI research is interesting because a lot of the time the human subjects are not right in front of you, as they are in a clinical trial. You use similar ideas, but those ideas become ineffective. For example, regular informed consent in research ethics would require you, as a researcher, to make sure that the person has read the consent form, understood it, and voluntarily and rationally agreed to it. Terms and conditions on a website do none of that, while pretending to be informed consent; they are not. So then what should happen? Should we require full informed consent, in which case things do slow down, of course, because how are you going to do that when the data is online? So we are pointing out these questions and at least offering some ways of thinking about them. We are not a large enough group to say "here are the answers", but I think we have enough experience among us to say: here are the major problems, here is where the existing systems fall short, and here are ways to go forward. So that's a project that's currently running. We are going to start another one in January, which I'm very excited about and which I'm not going to tell you about yet, but it's something about design and ethics, and that makes me very excited. And a lot of the other projects really come as things evolve, from what we face, from the kinds of questions we keep seeing repeatedly on the consulting side. Because as we see more and more issues, we realise that this actually demands research, that we should be working on it in a much more detailed manner, so that we can help ourselves and the field in furthering these questions. So some of these questions will be coming from our existing collaborations with practitioners.

Nathalie Post  
Great. Yeah, that's amazing to hear. I'm definitely going to stay up to date with everything that you're going to share and publish, because I think you're doing really great work here. And given that we're getting towards the end of our time, for some closing words: if people want to learn more about you or AI Ethics Lab, where should they go? Where do you want to send them?

Cansu Canca  
I want them to come to our website: aiethicslab.com. You'll find it very easily. And on the website you will easily find the toolbox that we've been discussing; it's on the front page, because it's new and we are still promoting it. But you can also find all sorts of videos of the talks we've given, details about the trainings that we offer to practitioners, webinars, and articles, so a lot of the things that come from the lab or that we do in collaboration with other partners; it's not necessarily just our own work. You can find them all on the website. For more information about me, you can google me and the lab, "Cansu Canca AI Ethics Lab", and you'll find my page. And we are trying to keep active Twitter accounts, both for AI Ethics Lab and for me as well.

Nathalie Post  
Great. Thank you so much. This was a pleasure. Thank you!

Cansu Canca  
You're very welcome. It was my pleasure. Thank you very much for having me.
