Predicting the unintended consequences of AI

Niya Stoimenova | PhD Researcher, TU Delft
We discuss Niya's research on uncovering the unintended consequences of AI. We also talk about her recently published paper on designing with and for artificial intelligence, and finally, about adaptive organisations in the context of artificial intelligence.

Nathalie Post  
Today I'm joined by Niya Stoimenova, who is a PhD researcher in artificial intelligence and design at TU Delft. In this episode, we talk about her research, which is focused on uncovering the unintended consequences of AI, and the importance of being conscious of how we can prevent and prepare for the things that might happen, instead of being reactive, both from an individual and an organisational perspective. We also talk about her recently published paper on designing with and for artificial intelligence. And we talk about adaptive organisations in, you guessed it, the context of artificial intelligence. Without further ado, enjoy this episode.

Hi, Niya, and welcome to the human centred AI podcast. It's really great to have you here today. I think I just heard your dog there in the background.

Niya Stoimenova  
He hates me speaking English. I don't know what's going on. Sorry.

Nathalie Post  
No worries, no worries at all. But for our listeners who might not know you, or your dog, yet, could you give a bit of an introduction about yourself and your background?

Niya Stoimenova  
Alright, this question is always a little bit weird for me, because when I try to explain my background, it always sounds super weird. But I studied a lot of mathematics. In Bulgaria, we have these high schools where you really study a lot of mathematics, so I did a lot of that. And I also competed in philosophy, which is a thing, apparently. And then, around the time I was graduating, I had an existential crisis, and I went to study design, which I had never really thought about before; I never really wanted to do design. So officially, I am trained as an industrial design engineer. Then I went on to study strategic design, which is kind of an MBA with a design twist, sort of. And while I was studying, I also worked for several very big companies, both in Europe and in the US. Then, around the time I was graduating, the whole Cambridge Analytica thing happened, and I started looking more into everything that's going on with AI, and I felt like what I was doing was not really significant. After I graduated, I had really, really nice offers, very flattering ones, but I couldn't really make a choice. So my parents suggested, okay, take some time off. I took some time off in the mountains, and while climbing I thought I was going to die. I didn't, obviously, but I had a big existential moment, and I decided to reject all my offers and start working on research with AI. So I jumped into something I had no idea what to do with. Now I have my own consultancy, working with organisations on organisational design and innovation processes, and for a year now I've been doing a PhD in AI. The whole idea behind the PhD is that I'm trying to figure out how we can identify, anticipate and mitigate the unintended consequences of AI-powered solutions. I have a theory, and I'm now testing it out. So I think that was a very lengthy explanation of what I've been doing, but it kind of combines everything under the topic of AI, I think.

Nathalie Post  
Yeah, no, I don't think that was a very lengthy explanation at all, that's completely fine. I'm actually curious: you said that you went into AI, or studying AI, not really knowing all that much about the subject. Can you expand a bit more on that, and on how you actually started that learning journey into such a massive domain?

Niya Stoimenova  
So as I told you, I studied mathematics, well, not a lot, but I studied some in high school, and I've always liked technology, so I've always been interested in everything that's going on. And when I decided that I was going to do something with AI, I kind of had the idea that there was probably a theory I could use, because the initial idea of my work with AI was that I wanted to create a different type of AI. But the theory that I wanted to use didn't exist. That's why I'm doing a PhD. What I did, essentially, is start reading a lot and talking to a lot of different people to really understand what's going on and how. I started following some courses on linear algebra and calculus to really understand the fundamentals of how machine learning works, which I liked a lot because it brings me back to, you know, mathematics. And then I started from the foundation of the things I know. I'm working with a theory of synthesis, which we use a lot in design, so I tried to translate that theory to the problems we are currently facing with AI, especially when it comes to identifying its unintended consequences.

Nathalie Post  
Yeah, can you explain a bit more about that theory of synthesis, just for our listeners who might not be familiar with it?

Niya Stoimenova  
It is actually very basic, we all do synthesis. It's kind of the opposite of analysis. Analysis, as we all know, and I'm now entering a little bit of a teacher mode, sorry, is basically the idea that you have a situation and you pick it apart to see what's happening. And synthesis is: you have a lot of information, and you need to make sense out of that information. So that's very straightforward, what synthesis is. The theoretical background of that is that there is a specific way of reasoning that allows you to create a synthesis, and it's called abductive reasoning. Abductive reasoning is the idea that you can come up with hypotheses. We all know what inductive reasoning and deductive reasoning are, more or less, because this is what, you know, the "real" sciences do. The easiest way for me to explain what abductive reasoning is: you go out in the morning, you see that the street is wet, and you think, oh, it must have rained. That is an example of abductive reasoning. Maybe a pipe burst and that's why the street is wet, maybe someone decided to clean the street, you don't know. But the more logical explanation would be that it must have rained. So abductive reasoning essentially creates explanations, or at least one part of abductive reasoning creates explanations. There are many different types of abductive reasoning, but we're getting into very theoretical ground here, so probably we should stay away from that.
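To make the wet-street example concrete, here is a minimal, hypothetical Python sketch (not from the interview) of abduction as inference to the best explanation: given an observation, a handful of candidate explanations are scored by assumed plausibility and the most likely one is picked.

```python
# Toy illustration of abductive reasoning: infer the most plausible
# explanation for an observation. The candidates and their plausibility
# values are made up purely for the wet-street example.

OBSERVATION = "the street is wet"

# Candidate hypotheses and an assumed plausibility that each would
# produce the observation (illustrative numbers, not real data).
CANDIDATES = {
    "it rained overnight": 0.70,
    "a water pipe burst": 0.05,
    "a street-cleaning truck passed": 0.25,
}

def abduce(candidates: dict) -> str:
    """Return the hypothesis that best explains the observation."""
    return max(candidates, key=candidates.get)

print(f"Observation: {OBSERVATION}")
print(f"Most plausible explanation: {abduce(CANDIDATES)}")
```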

Nathalie Post  
So how does that theory of synthesis play into your research in AI and design? Like, how does that work?

Niya Stoimenova  
So I told you that I started looking at what the big problems with AI are. One of the problems is that, first of all, it's sort of a black box. Of course, we understand the data behind it, because that's kind of straightforward, but we don't understand why certain decisions have been made, because of the sheer number of neurons and the sheer number of layers that are interacting with each other. Because of that, it's very difficult for us to imagine what kind of unintended stuff will come up. And we've all seen the different examples, right, with fake news and biases and face recognition apps misidentifying certain people and working better for white people than for black people. All these kinds of things were arguably unintended, right? But for us, being able to detect that is actually very difficult. So there's a lot of research going on into the fairness of AI, there's a lot of research going on into identifying and reducing biases, and there's a lot of research going on into explainable AI, which is the opening of the black box, really creating algorithms and models that can explain themselves to us in a language or a reasoning understandable by humans. And I thought, these are all really great and nice, but we have a lot of imperfect elements interacting within the model itself, and every app is interacting with humans. Once a model has been implemented in a very complex system like society, it becomes even more difficult, because you don't know how the system will react to everything you're doing. So I thought, what if, instead of trying to validate whether what we think will go wrong will go wrong, we try to come up with hypotheses of what might go wrong before we actually build the model? That's what my theory is trying to do; that's what the synthesis and this assumption-creation approach bring to the fore. Essentially, the idea is that we can create an inventory of potential consequences that might happen if we implement that type of model within a societal or complex system, before we actually build the model, before we apply it, before we deploy it. So it's an anticipatory activity of sorts.
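As a rough, hypothetical sketch of what such an inventory could look like in practice (my own illustration, not something taken from Niya's published work), each hypothesised consequence could be recorded as a structured entry, tied to the group it affects and the assumption a prototype should later probe:

```python
from dataclasses import dataclass, field

@dataclass
class Consequence:
    """One hypothesised unintended consequence of deploying an AI model."""
    description: str      # what might go wrong
    affected_group: str   # who would be affected
    assumption: str       # the assumption a prototype should probe
    severity: int = 1     # illustrative scale: 1 (minor) to 5 (severe)
    probed: bool = False  # has a prototype tested this yet?

@dataclass
class ConsequenceInventory:
    """Inventory of hypothesised consequences, built before deployment."""
    entries: list = field(default_factory=list)

    def add(self, consequence: Consequence) -> None:
        self.entries.append(consequence)

    def unprobed(self) -> list:
        """Entries that still need to be probed with a prototype."""
        return [c for c in self.entries if not c.probed]

# Example with a made-up entry for a hypothetical health-advice app.
inventory = ConsequenceInventory()
inventory.add(Consequence(
    description="Users follow dietary advice that contradicts their stated preferences",
    affected_group="app users",
    assumption="users will ignore advice that conflicts with their own input",
    severity=3,
))
for item in inventory.unprobed():
    print(f"[severity {item.severity}] {item.description}")
```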

Nathalie Post  
Yeah, yeah. So now your LinkedIn headline makes a lot more sense to me, because it's something like "predicting the unintended consequences of AI". By explaining this, I think that actually makes a lot of sense in terms of the work that you're doing, really that pre-phase. So what landed you on focusing on that part, out of the entire massive scope you could have chosen? Why did you zoom in on the prediction of unintended consequences?

Niya Stoimenova  
It was both practical and something that fit well with my own understanding of how things should be done. Before I started doing my PhD, I worked a lot with organisational processes; I would help companies create innovation processes that allow them to innovate very quickly and in a human-centred way. The practical part of the reason why I went for this is that there's a lot of theory in design that could actually be applied very well to it, which is the theory of synthesis and how we formulate hypotheses about a very complex system. And the second part is that I like to know what might happen so I can be prepared for it. It's sort of a personal preference, I think. And I find it a little bit stupid, that's maybe not the right word, but I cannot think of a better one, to think, oh, let me first build it and spend so much money on it, and then put it into the context, and then all hell breaks loose and all sorts of weird things come up. So what if we acted a little bit more in a preventive way instead of a reactive one?

Nathalie Post  
Yeah, do you think that most of the unintended consequences, of AI but also of technology in general, could have been predicted? Or do you think that there are always going to be some that are just impossible to foresee, given that their effect on the world could also alter the world as it is? How do you look at that?

Niya Stoimenova  
I think so. Unintended consequences are not new. We've been talking about unintended consequences in the context of AI, or in technology, for the past three or four years or so, but this has been an expansive topic in academic research for a very long time. Actually, the term was first coined back in the 1930s. It's not new, and none of the things that we are experiencing are new: fake news is not a new thing, we've had fake news before, we've had disinformation before. A lot of the things we're seeing now are, I believe, repetitive patterns that we've seen before in history. Of course, these patterns have evolved, and they've become a bit more difficult to detect and deal with, but on a very abstract level, we see similar things happening all over again. So I think some of them can be predicted, because it's not that we don't know this might happen; it's just very difficult to imagine in which ways it will happen. The role that I'm trying to play is to imagine in which ways these kinds of unintended consequences can manifest themselves. And then, I'm going to be a little bit theoretical again, there is a difference between unintended consequences and unanticipated consequences. Unintended consequences don't mean that we didn't expect them to happen; we just don't want them to happen. The unanticipated ones, I don't think we'll ever be able to anticipate, just by their nature. So yes, I think there are a lot of things we cannot anticipate, because once our solution starts interacting with the system, it produces a lot of things. That's why, with my theory, I try to interact with the system as soon as possible to see what kind of effects happen, and I do that with a lot of prototyping, for instance. Prototyping plays a very central role in it. The way I'm using prototyping is a bit counterintuitive, because usually when we think about prototypes, or MVPs, or whatever we want to call them, we already have assumptions, we already know what we want to test, and then we build a prototype and go and validate our assumptions. What I'm trying to do is say, okay, we can use prototyping, this sort of MVP type of thing, not to validate stuff, but to create assumptions based on how the system reacts to whatever intervention we present to it.
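As a loose, hypothetical illustration of that reversal (my sketch, not Niya's published method), a prototyping loop could record how people actually react to an intervention and turn every unexpected reaction into a new assumption to probe, rather than checking reactions against a predefined list:

```python
# Hypothetical sketch: use a prototype to *generate* assumptions from
# observed reactions, instead of validating a fixed set of assumptions.

def generate_assumptions(reactions: dict, expected: str) -> list:
    """Turn every reaction that differs from the expectation into a new
    assumption worth probing in the next prototype iteration."""
    return [
        f"Probe next: why did {person} respond with '{reaction}' "
        f"instead of '{expected}'?"
        for person, reaction in reactions.items()
        if reaction != expected
    ]

# Made-up observations, echoing the seafood example discussed later on.
observed = {
    "participant 1": "followed the seafood recipe despite disliking seafood",
    "participant 2": "rejected the advice",
}
for assumption in generate_assumptions(observed, expected="rejected the advice"):
    print(assumption)
```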

Nathalie Post  
And I'm curious, do you also work with industry? How are you researching this? Can you tell us more about that?

Niya Stoimenova  
Because my dissertation is very much theoretical, since this is a new theory we are going to use, I am now setting up a case with industry. I cannot tell you who the industry partner is, because it's not finalised yet. But I also draw a lot on my experience and on the things I've done for industry before. I worked a lot with the aviation industry in Europe, so a lot of the examples come from there, because I worked there on creating an innovation process that allows you to deal with fast-changing situations. For instance, if you're at the airport, how can you prototype something when you have live flights, real passengers and real employees going on all the time? So there's a lot that comes from industry, and a lot of consideration of how industry works, because I think one of the big problems with academia and all the models and theories we develop is that it's very difficult for the theory to actually be applied, because, and maybe I'm exaggerating, we don't usually have a general understanding of how industry works and what its primary interests and drivers are. Whether we like it or not, industry is there to make money, so you need to think about how much time something is going to take and where that theoretical part would fit into the overall innovation process. That's where I bank a little bit on my previous experience with innovation processes, and on knowing where that would fit in.

Nathalie Post  
Yeah, I understand. And I'm curious in general, I don't know if you're part of a research group that is exploring these topics more broadly, but how much do you actually see in industry that links to the work you're doing? Because obviously, on the unintended consequences part, there's a lot of news about things when they go wrong or have gone wrong, but I'm curious if you see other links happening right now.

Niya Stoimenova  
So first of all, I'm not really part of any group; I have a professor who likes to tell me that I'm a loosely coupled element to everything else. As I told you, this research came out of my own understanding of what something has to be and why we should do what we should do. There are groups within the university itself that are very actively trying to think about what they call meaningful human control, which is the idea that humans can actually exercise control over the machine and understand the control that they're exercising. I haven't seen exactly what I'm suggesting directly anywhere in industry yet, and I haven't seen it in academia either, which is, I think, to an extent good, because it validates the theoretical propositions I'm making. But I'm trying really hard now to start translating things to industry. I've already run some projects with students, and we had very weirdly interesting outcomes, where people behave in a completely different way, even if they say they will not, when they are faced with an AI proposing something to them. For instance, I had a student team that designed a health app that provides you with exercise advice and meal advice, and helps you keep track of your calories and all these kinds of things. The AI asked the participants what kind of food they would like and how much they would like to exercise, and then the team intentionally made it so that the AI provided the users with advice that contradicted their stated preferences. So if a person said, I really hate seafood, the AI would suggest recipes with seafood. And you'd be surprised how many of them actually took the advice of the AI and started eating meals with seafood.

Nathalie Post  
Oh, wow. That's very counterintuitive. But isn't that like automation bias? Could you basically attribute it to that, or do you think there are other things that caused it?

Niya Stoimenova  
The thing is, I'm not a psychologist, so I don't know exactly what causes it, right? I also have a slightly more specific opinion about biases and whether they're actually that bad, because I think biases are not always bad. But what I'm trying to do is deal with the outcomes of what's happening. The idea is that if we can create these types of outcomes, if we know how the system would react and how people would react to it, then I can take that into consideration when I start designing the real thing. If I know that people tend to go with the advice of the AI, then I can think of ways to prevent that, if I think that's important for the autonomy of human beings. And I personally find it very concerning if people take the advice of AI blindly, so I would probably try to come up with different concepts for how this can be prevented.

Nathalie Post  
Understood, yeah. And so there's another research paper that you recently wrote about exploring the nuances of designing with or for artificial intelligence. I was wondering if you could talk a little bit about those nuances, and about what you found in doing that research.

Niya Stoimenova  
The idea is that we have AI models, machine learning models, that have been designed with very clear variables in mind. So it's a very clearly defined problem: we need to know how many calories somebody has used up, or we need to track somebody's heartbeat. That's a very well-defined problem, and we know more or less how to do that. However, this thing is going to play a role within a bigger system where people are involved. And when people are involved, the problems suddenly become much more fuzzy and much more ill-defined. By ill-defined, I mean that there's not really a right or wrong answer; suddenly there's only a good or a bad solution. When that happens, there's a lot of tension and a lot of friction between the model and the system, and I believe that creates a lot of the unintended consequences we're seeing right now. So if you're trying to deal with that social, ill-defined situation that we're observing, then you also have to take a very different approach. And that's why I'm now trying to work out the way in which we can identify these before they actually happen. This paper was actually written way before the whole PhD came to be; academic writing and academic publication take years to pursue. The beauty of academia.

Nathalie Post  
So you wrote this paper before you actually enrolled in the PhD? Was it part of your master's then?

Niya Stoimenova  
It was more transitional. I graduated in 2017, and I wrote the paper in 2017. So it's a very, very old paper, which only came out a few months ago.

Nathalie Post  
Yeah, that is actually crazy. Wow, okay, I did not realise that, because I feel like even right now it's still very relevant, and what you wrote there is not outdated. So I think it also almost emphasises, or highlights, that while I do feel like the industry is changing and evolving, sometimes, with everyone talking about these things all the time, it might feel like it's evolving a lot quicker than it actually is on certain topics. I think this is quite an example of that, where these issues were there three years ago, and they're still here today, and I think a lot of them are still unsolved, or in the process of being solved.

Niya Stoimenova  
I think that's one of my biggest problems. I think a lot of the things that are happening are sort of white noise, and then there's this dominant narrative that everything is happening so quickly and everything is changing so quickly, but I don't think that's actually true. We're all talking about the exponential acceleration of technological progress, but I don't think that's actually the case. During the Christmas holiday I've been reading a lot of, you could say, old books from the 80s. There's a very nice book called Amusing Ourselves to Death, and another one from the early 90s called The Age of Propaganda. If you read these books, the situation they're describing, way before Google or AI was as prominent in normal people's lives as it is today, is pretty much the same as the one we're currently in. They're talking about disinformation; they're talking about "factoids", as they call it, instead of fake news, but it's the same thing. And the instance they blame for it is US television. There's a very fascinating argumentation on why that happens, so I would highly recommend both of these books to anybody who has a little bit of time. When you read something from the 30s, or from the 40s, or from the 80s, you see, oh yeah, okay, that didn't really change that much. And there's another really interesting book from the 70s called Future Shock. It's relatively easy to read, and even the examples it gives are very understandable, very close to what we have today. So things are not really moving that fast.

Nathalie Post  
That's super interesting, I think. Well, actually, it also links to another topic I wanted to discuss with you, which you mentioned very briefly: adaptive organisations in the context of AI. And you said you previously also did some work on the organisational design side of things. I'm very curious, could you give a bit of an intro to that part of your research and what it entails?

Niya Stoimenova  
I'm not an organisational scholar, and anybody who is an organisational scholar might think of this as complete bullshit. But the basic idea is that, especially now, when we have this corona situation where you really don't know what's going on, it becomes extremely important for your organisation to react very quickly to what might happen. Essentially, the idea is that you have an organisation that acts proactively instead of reactively. In most of the cases we saw when corona broke out and all the lockdowns were initiated all over the world, organisations were very reactive. They had the situation, they had to figure out what to do, and then they did something, and it didn't work, or it created all sorts of weird outcomes. That actually reminds me of an example from very early on, I think it was in March or April or so, when we saw all the algorithms governing supply chains and procurement suddenly go crazy, because their predictions became completely unusable: suddenly, people started ordering hand sanitisers and gloves and masks, and they hadn't really predicted that. So the idea is that you essentially prepare yourself and create your organisational structures and processes in such a lean way that you can quickly shift. And I think the quick shift comes from being able to kind of predict, and prediction is a very big word, so rather anticipate, what might happen. The theory I was explaining about anticipating unintended consequences actually comes a lot, as I said, from my previous work, where we use a lot of prototyping to sort of predict how a system might react to certain things. Does that answer the question, or am I talking super abstractly?

Nathalie Post  
No, I don't think you're talking abstractly. I'm just curious about more: how does that piece of adaptive organisations fit into that broader question of how organisations should adapt? And what does AI actually change about the way they need to adapt, compared to without AI?

Niya Stoimenova  
Okay, so the thing is, usually we've had software that more or less stays the same, right? The software doesn't suddenly decide to act in a completely different way. Now, with machine learning algorithms embedded in most of the solutions we use, this thing suddenly starts to act a little bit of its own accord. Of course, I don't mean that it has a mind of its own, but based on the data you generate, it learns and it might act in a way you didn't predict. So it is more important than ever not to see the solution we have as a finished solution, but to continuously keep track of everything that's going on. Essentially, you move from the idea that you have a solution to the idea that you have a prototype that you constantly have to check and constantly use to create new assumptions about what might go wrong. You constantly have to adapt to everything that's going on: you have to observe what is happening, you have to observe how something is being used, and you even have to very proactively probe the system, poke the system, to see how it might react. Based on that, you make your solution better. So it is very different from most organisational processes, where we assume, okay, the solution is done, even if it's a software solution, the solution is more or less done, and we just roll out, you know, updates. But I think the big part there is that we need to be prepared for constant updates and constant change when we have a tool we don't really understand properly, because most of the time when we have a solution, we understand more or less how it will work and how it will react, and we can test a lot of things. And also, because it's so personalised and so attuned to every single person, there are so many different possibilities. For an organisation to deal with this many possibilities, you need a different way to look at it and a different way to think about it. That shift, from constantly trying to validate all the things we are doing towards, okay, we can use what we are doing to continue to learn and understand our situation better, that comes with adaptability, I think, and I think it is much more suitable for what we're dealing with right now.

Nathalie Post  
And do you think the larger organisations, the ones that are kind of driving this whole AI industry right now, are already changing the way they adapt? Or is this something that you feel is only in the years to come?

Niya Stoimenova  
You mean the Googles and the Facebooks of the world?

Nathalie Post  
Yeah, the Googles, the Amazons, exactly.

Niya Stoimenova  
I don't know, I don't have that much insight, because these are all internal processes, and very rarely do we get to see the internal processes of something, right, unless they decide to publish a book about it. So I don't really know exactly what they're doing. I have some insights on that, but they're not easily shareable, because I've obtained them in ways that are not public. But look, especially in the past few years, we've all seen that the Googles and Facebooks have been vilified a lot, and we think these are the big bad wolf that wants to make money and wants to, you know, scrap democracy and create all these bad things in the world, because they just want to make money. And I think that's become, not really the dominant narrative, but a very consistent narrative over the years. I don't think these people are bad; I don't think these people, or the organisations themselves, are intentionally trying to create fake news. I think the situation is just so complex that if we don't start addressing that complexity, we are going to create even worse things. And to address the complexity of the situation, most of the existing things we have should not go away; they work for a reason. But we should add to them a little bit, we should enhance the processes we have so that we get a better grasp on what's going on. Of course, most of these software organisations work primarily with agile, and agile is a very goal-oriented approach. When you have a goal, it is very good for achieving it. But what if you don't know what your goal should be? What if you don't know what kind of things you should be prepared for, or what kind of things you might create? We do need something upfront to help us understand that. So I'm not saying scrap agile, scrap all the innovation processes, design thinking, whatever; these are all good things. I think they should be seen more as a kind of toolbox we can draw from, but even the organisational process should be adapted based on the situation, the context, and the thing you want to achieve. For a lot of things, agile is perfectly fine.

Nathalie Post  
Hmm, I think this is actually quite a nice note to end on, because we're almost through our time here. But maybe for some closing words, I'm curious: based on what you've learned, and on your experience moving from industry to doing a PhD, what would you recommend to others who are interested in similar subjects, or who are also frustrated with things happening in the world in terms of AI and technology and feel like they want to learn more and change things?

Niya Stoimenova  
I think the best thing we can do is educate ourselves, and educating ourselves will not entirely fix the problem, because even though you know that you're being manipulated, you can still be manipulated; that's how the human brain works. But I think we also need to be very conscious and very active in thinking about how we can prevent things, or how we can prepare for things that might happen, instead of just blindly reacting to what is happening. I think just reacting to stuff is never a good strategy to bank on. You need to have some kind of foresight about what might happen. And I think that's also very useful for organisations, so they will know what to prepare for and what they have to adapt to. Of course, you will always have a kind of wild card, I think they're called black swan events, that no one could have predicted. But if you put effort into making your process and your organisation adaptable and attuned to what might happen, then the shift will be easier.

Nathalie Post  
Yeah, I think those are some beautiful closing words. Maybe one final thing: if people want to learn more about you or what you're doing, or stay up to date with your work, where should they go?

Niya Stoimenova  
Oh, I am all over the place. Probably add me on LinkedIn, because I post stuff there. My name is very difficult to spell, so you will see it in the title of the podcast, but it's Niya Stoimenova. You can also Google me; I'm pretty much the only person with that name. I think there's one other person with the same name, but you will know which one I am. So it's very easy to find me if you want to. Even if you search for "Niya TU Delft", you will still find me.

Nathalie Post  
Great. Amazing. Thank you so much.

Niya Stoimenova  
Thank you. That's great. Thank you.

Nathalie Post  
And that was it for this episode of the human centred AI podcast. If you liked this episode or have any feedback, do not hesitate to reach out to us at deus.ai. Thank you for tuning in, and see you next time.

