Bias on the web

Ricardo Baeza-Yates
Director of Data Science, Northeastern University
We talk about bias, which has been a central topic in Ricardo’s research for the last 12 years. We discuss the many different types of biases that are out there, zoom in on bias on the web and bias in recommender systems. Beyond that, Ricardo shares valuable advice about what we can all do to become more aware of our own biases.

Nathalie Post  
In this episode, I am joined by Ricardo Baeza-Yates, who is the director of data science at Northeastern University in Silicon Valley. He was previously the CTO of NTENT, a semantic search and natural language understanding technology company. Prior to that, he was VP of research at Yahoo Labs. In this episode, we talk about bias, and in particular bias on the web, which has been a central topic in Ricardo's research for the last 12 years. We discuss the many different types of biases that are out there, zoom in on bias on the web, and in particular bias in recommender systems. And finally, Ricardo shares his valuable advice about what we can do to become more aware of our own biases. I hope you will enjoy this episode. So let's get into it.

Hi, Ricardo, and welcome to the Human Centred AI podcast. I'm very excited to be talking to you today. But for our listeners who may not know you yet, could you perhaps start by giving a little bit of an introduction about yourself and your background?

Ricardo Baeza  
Yes. First of all, thanks, Nathalie, for the invitation — I'm excited too. So I'm a computer scientist; I did my PhD a bit more than 30 years ago, in computer science, at the University of Waterloo in Canada. I started working in search there, and most of my research career has been on search technologies, but then, fairly fast, I moved to data mining and to anything related to what is today called data science. So I think, really, I have been a data scientist for the last 30 years, even when the term didn't exist yet.

Nathalie Post  
Yeah, wow. And so can you talk a little bit more about your current research? Because you've really focused on researching bias — what has motivated you to take that direction?

Ricardo Baeza  
That's a bit of a difficult question, because I think I started working on bias even before I realised that I was working on bias. My first paper on bias was about 12 years ago, and I think the motivation at that time was more like trying to make systems fair, and really aware of the real world, not just of the data they receive. So I was basically looking at feedback loops; at the time I didn't know that would become so important later, because my main work today is also on feedback loops. But I would say only about four years ago I realised that a lot of the things I had done were about bias in different ways — bias of content, bias of interaction, bias in data coming from people. So I put everything together and started to talk about bias on the web, because there are many biases that we are not aware of that are basically tampering with our behaviour, and people should know about that. And I'm not talking about the standard biases that you have in society, which are also very important, although people are working on them — like gender bias, race bias, sexual orientation bias, or religious bias. I'm talking about things that are more subtle, that happen between systems and people when they interact.

Nathalie Post  
Yeah, yeah. And as we're going to be talking about bias today, maybe a good place to start is by expanding a little bit on the definition of bias. So could you perhaps start by giving a bit of an explanation of what bias is?

Ricardo Baeza  
Yes. So I think there is even bias in defining bias, because depending on whether you talk to a statistician or to a sociologist, they will define bias as different things, and for some people something that we call bias is maybe not bias — it's just the way things are. First of all, I have to say that bias is not necessarily negative; it can be positive. But we have a bias to talk only about the negative cases, because those are the ones that hurt people. The truth is that bias is a completely neutral term, but when it's positive we don't use it much. In some sense, bias is any systematic deviation from some reference point of view. So one problem is: what is the right reference point of view? That is the first problem with bias. For example, when we have this podcast live, what should be the right percentage of women listening to it? Should it be half and half? I don't know; certainly some topics interest men more, and other topics interest women more. So there are many reference points that we don't know — the first problem with bias is that we don't know the right answer. But most of the time, we do know that we are in the wrong place. For example, if you look at the percentage of women in STEM, in science and engineering, we know that we need more, so we know in which direction we need to walk. That is important: we need to go in that direction, and later we can decide what the right point is. And then we can do affirmative action, which is another kind of positive bias, to change that — for example, reserving some places only for women in engineering, which has been done in some countries, and has been a big success. Because not only do you get more women because of these places, you also get more women because of the perception that you're doing something for women, so more women apply. And because intelligence basically doesn't depend on gender, you get more women accepted. This is the powerful thing that affirmative actions do: they change people's perception that it's easier to get in, and then more people apply, in this case women. So this would be more like the statistical definition. Then there are cultural biases, which basically depend on your education, your religion, your parents, your language. For example, a word like accountability that we have in English we don't have in Spanish, and I think it's not so easy to find in other languages — maybe because we don't want to be accountable. But that's a bias in the language. In some languages you have many ways to say snow, and in other languages they don't have a word for snow, because they have never seen it. The differences that we have are encoded in the language. And the last class of biases, maybe the most dangerous one, is cognitive bias. These are your own cognitive biases: if you have a gender bias, that is your own cognitive bias; if you're racist, or xenophobic, or homophobic, those are your cognitive biases. And one problem with that, and with bias in general, is that people are not aware of it.
So awareness is the first thing you need in order to find bias, because some people don't realise that they're saying something racist until someone says: well, you know, when you said that word, or when you said chairman instead of chairperson, you were showing gender bias — and then people realise it. And then some people make the conscious effort to change that: to change the way they talk, to change the things they say, and so on.

Nathalie Post  
Yeah. So what would you recommend people do to become more aware of their biases?

Ricardo Baeza  
Yeah, this is, I guess, the key thing that you have to do. The first thing is that you need to be open to learning, open to disagreement, open to hearing other opinions, because then you can check yourself and say: okay, is what I think really fair? For example, I have this bad feeling about immigrants — is that fair? Many people don't want to do that. But in some sense, you need to get out of the famous bubble that you have, your cognitive bubble. If you are exposed to more knowledge, or to more travel, for example, you see the diversity of the world and your bubble expands. Someone said that if you read, you lose a lot of your biases, because you read things that maybe you don't agree with, but then you read good arguments about why you are wrong. And the same with travelling: you live a wonderful life until you go to India and see how many people are living a terrible life. So you need to be aware of the reality, and not of your perception of reality. And sadly, many people today have a perception of the world that is not the right one.

Nathalie Post  
Yeah, clear. And so maybe to go back to bias and the different types of biases that are out there, because you already touched upon a number of them: what are the most prominent ones?

Ricardo Baeza  
So the most prominent ones, as I said, are the cognitive ones; they are basically intrinsic to every person. For me, the most dangerous is confirmation bias. Confirmation bias is the one where, when you see something that is aligned with your beliefs, you readily believe it, because it basically agrees with you. And this is what fake news uses: information that is false, but that agrees with what people want to hear — for example, that we don't want more immigrants, or that we don't want more women at work, things like that. And people agree: okay, this person is finally saying the right thing, because he's thinking the same as me. It's like: I'm right, everyone else is wrong. This is a basic bias, but there are more than 100 cognitive biases; there are catalogues you can find on the web with more than 100 of them. This is the work of psychologists, who have been finding all these things, and there are many more — for example, anchoring, or information bias. There are so many that they are even classified in taxonomies, so it's very hard for a normal person to use them all, because there are too many. But I would say: at least work on confirmation bias, by trying to listen to other opinions. For example, if you read a news item, try to do some fact-checking, or see what other people think, and then contrast your thinking with those other people. One problem today is polarisation: we don't hear the other side, we don't want to hear the people that disagree with us. And that's a problem, because then we don't know the size of the universe, we don't know how diverse our thinking is. You may really believe that all people think like you, and in many elections we have seen that what you think is not what happens.

Nathalie Post  
Yeah. And I think especially right now, as we're spending a lot of our time online, I'm really curious to hear more about how bias gets amplified in our present-day digital society.

Ricardo Baeza  
Yes. So most of the bias today comes with the data, from the interaction with people: the data that comes from people implicitly encodes the bias of those people. Let me give you an example that is not online, and then you will see how in the online world this can get amplified. A few years ago, there was a study on bail decisions in New York. A judge has to decide whether you get bail or not when you are accused of a crime, basically a felony. They took all the data they had, from millions of cases, and gave it to a machine learning model (machine learning is a type of artificial intelligence). The only information the model got about the person was the age. It didn't get names, and it didn't get gender — although most of the people that get to court are men, so there was little to learn from gender anyway. It also didn't get where the person lives, because from where a person lives in the US you can often guess the race quite well. So the only personal attribute was the age, plus all the information from the case. And the model that learned from all that was more racist than the judges: the racism was encoded in the case data, and it was amplified — not by much, a little bit, but amplified. But then something interesting happened, which I think is important to what we're talking about: the model, even though it was more racist, was more just than the judges. Why did that happen? Because there's another very important thing that exists when people take decisions, which is not bias, and which is called noise. Humans are noisy in the sense that in the same situation they don't always do the same. For example, there are studies showing that if you see the judge right after lunch, maybe the sentence will be worse, and if you see the judge very early in the morning, after breakfast, maybe the sentence will be better, because the judge is happy. A lot of the sentence depends on the mood of the judge. The interesting thing about algorithms is that they always do the same thing in the same situation: they don't have noise. Algorithms may have bias, but they don't have noise, and sometimes the noise is worse than the bias. In the case of justice, noise is what makes justice unreliable, and unjust: for exactly the same felony, you shouldn't go to prison for double the time only because of the judge's mood. So this is a very good example of how things get amplified, but at the same time of how algorithms can be more fair, in the sense that they do the same for the same situation. In some sense: okay, it's racist, but at least it's consistent across cases. Of course, you don't want to have racism, so you can always mitigate that bias, and then you have something better than any judge: not racist, and also without any noise.
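
To make the bias-versus-noise distinction concrete, here is a minimal, purely illustrative simulation in Python (the sentencing rule, the noise level, and the offset against a hypothetical "group B" are all invented for the sketch, not taken from the New York study): the noisy human judge gives different sentences for identical cases, while the deterministic model gives the same, systematically shifted answer every time.

```python
import random
from statistics import mean, pvariance

random.seed(42)

def noisy_judge(severity):
    """A human judge: the same underlying rule, plus mood-dependent noise."""
    mood = random.gauss(0, 1.5)            # varies from case to case (noise)
    return 2 * severity + mood             # sentence in years

def biased_model(severity, group):
    """A deterministic model: zero noise, but a systematic offset (bias)."""
    offset = 0.5 if group == "B" else 0.0  # invented bias against group "B"
    return 2 * severity + offset

# Judge the *same* case 1,000 times: the human varies, the model never does.
human = [noisy_judge(3) for _ in range(1000)]
model = [biased_model(3, "B") for _ in range(1000)]

print(f"judge: mean={mean(human):.2f}, variance={pvariance(human):.2f}")  # noisy
print(f"model: mean={mean(model):.2f}, variance={pvariance(model):.2f}")  # biased, no noise
```

The model's mean is shifted (bias) but its variance is zero (no noise); the judge is unbiased on average here, yet two identical defendants can get quite different sentences.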

Nathalie Post  
So does bias affect people in general? Or are there very specific groups of people that are generally affected?

Ricardo Baeza  
That's a great question, because that's really my line of research: I'm interested in biases that affect all people, and there are many. For example, there is what is called nudging — digital nudging: how small details on the screen where you interact with digital systems modify your behaviour. In the Western world, we always look first at the top left corner, because that's the way we read. So if you put something prominent there, you will see it first. I can make a page where I can basically predict in which order you will read it, and if I can do that, then I can put the important things in the right places to make you look there. That's how advertising on the web works: you put it in the right places. But then you have more subtle things. Typically, your interaction data is used in real time by the system to personalise your experience, together with your historical data; the system basically tries to know what you want. But that means it can only use the data that you put there, and that creates something called a filter bubble, or an echo chamber: you only get the kinds of things that you have already seen, and it is very hard to show you something new, because it's not in the data. You haven't seen it, but if you saw it, you might like it. That's why recommender systems use the idea of collaborative filtering: they use things that people similar to you like, to try to break this bubble. However, the problem is still there, because you can only click on things that you see on the screen, and those things are decided by the algorithm. And that creates one of the worst biases, popularity bias: you're only seeing what most people like, but maybe you would like to see things that are not what most people like.
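
As a rough illustration of the collaborative-filtering idea, here is a minimal user-based sketch in Python (the users, items, and ratings are invented): items a user has never seen are scored through the ratings of similar users, which is how a system can surface something outside the user's own history — while still being limited to what other users have interacted with.

```python
import math

# Toy interaction data: user -> {item: rating}. All values are invented.
ratings = {
    "ana":  {"a": 5, "b": 3, "c": 4},
    "ben":  {"a": 4, "b": 2, "c": 5, "d": 4},
    "cris": {"b": 5, "d": 1},
}

def cosine(u, v):
    """Cosine similarity between two users' rating dictionaries."""
    common = set(u) & set(v)
    num = sum(u[i] * v[i] for i in common)
    den = math.sqrt(sum(x * x for x in u.values())) * \
          math.sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

def recommend(user, k=2):
    """Score items the user hasn't seen by similarity-weighted ratings."""
    seen = ratings[user]
    scores = {}
    for other, their in ratings.items():
        if other == user:
            continue
        sim = cosine(seen, their)
        for item, r in their.items():
            if item not in seen:
                scores[item] = scores.get(item, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("ana"))  # items ana never saw, surfaced via similar users
```

Note that nothing outside the ratings table can ever be recommended: the exposure limits Ricardo describes next are built into the data itself.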

Nathalie Post  
Yeah, so what is really the impact of bias in those recommender systems? Like, how does that...

Ricardo Baeza  
So the first bias there is what is called exposure bias. Basically, you can only click on things that are exposed to you; all the other things are not there, so you cannot click on them. The first effect is not something that affects the user, but it may affect the publishers, the producers, the suppliers of whatever you are interacting with. For example, maybe a store that sells a given product will never be shown to you because, in some sense, it never had enough clicks, and the system believes it's not good. But maybe it was just never presented to anyone, or presented too little, and so it never had enough exposure. So the first thing this bias does is create unfairness in the market itself — a platform is a digital, economic market, and you make it unfair: you're hurting the small producers with respect to the big, popular producers. The second problem is for the users. This is called a two-sided market, because you have the market of the producers and the market of the people. For the people, you basically see popular things, and that means the people selling popular things get richer: the rich get richer and the poor get poorer — the Matthew effect. And that may also affect your user experience, because, as we said before, you are seeing things that many people like, but maybe not exactly what you like. Personalization doesn't fix that, because the data doesn't contain everything you want, everything you like; the data is already biased towards what the system showed you, not towards everything you might like. And of course this is very dynamic: there are new items, new users, users change tastes, so it's very hard for a system to keep learning all that. To learn it, systems do something called exploration: they try to explore the tastes of people, sometimes showing random things, or other things, to see if people like them. But doing that loses money, because the probability that a click ends in a sale is lower, so systems don't do much of it. And that may mean that you are not really learning the world — we can go back to the beginning: you are learning your own perception of the world. These markets are basically stuck in their own huge bubble, a large bubble which is the union of all the bubbles of the people interacting with that market.
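
A minimal sketch of the exploration idea Ricardo mentions — an epsilon-greedy strategy, with item names, click-through rates, and the 10% exploration budget all invented for illustration. Without the random 10%, an item that starts out unlucky would never be shown again, which is exposure bias in miniature; the exploration traffic is exactly the "invested" (short-term losing) traffic he describes.

```python
import random

random.seed(7)

# Hypothetical true appeal of each item (unknown to the system).
true_ctr = {"popular": 0.10, "niche_a": 0.12, "niche_b": 0.03}

shows = {i: 0 for i in true_ctr}
clicks = {i: 0 for i in true_ctr}
EPSILON = 0.1  # fraction of traffic invested in exploration

def choose():
    """Epsilon-greedy: mostly exploit the best-looking item, sometimes explore."""
    tried = [i for i in shows if shows[i] > 0]
    if not tried or random.random() < EPSILON:
        return random.choice(list(true_ctr))   # explore: show a random item
    # exploit: show the item with the best observed click-through rate
    return max(tried, key=lambda i: clicks[i] / shows[i])

for _ in range(50_000):
    item = choose()
    shows[item] += 1
    if random.random() < true_ctr[item]:
        clicks[item] += 1

for i in true_ctr:
    print(i, shows[i], round(clicks[i] / shows[i], 3))
```

With exploration on, the system eventually discovers that a niche item actually outperforms the popular one; set EPSILON to 0 and it can stay locked on whichever item got lucky first.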

Nathalie Post  
Well, then how do we counteract this? And for the people who are building these systems, or working in organisations that are building them: what can we do to counteract this and make it a fairer place?

Ricardo Baeza  
Yeah, so I have the hypothesis — and I think I already have some partial proof — that knowing your world better, let's say knowing exactly what your users want, leads to a better system. Not only for the people, because you know exactly what they want, but also for you, because you can sell more, because you have more information. In the long term, if you really learn your world well, and how the world is changing, you will have larger revenue. But what happens today is that companies have short-term goals: typically the stock market asks, what is this year's revenue, what is next quarter's revenue, and you need to make predictions. But to explore the world, to understand it, you need to explore more, and that means investing traffic — losing money today to earn more money tomorrow. So systems should do more exploration until they finally catch up with the world and learn everything; then they will be in a position to do a better optimization, to not only optimise revenue but also optimise user experience. I think systems shouldn't be greedy, and they have to optimise the user experience. Why? Because if people don't like the system, they leave, and then the revenue is hit too. Sometimes I think there's not much common sense here: user experience has to be number one, because if traffic goes down, you lose money much more rapidly than whatever you gain by improving your revenue 1% today.

Nathalie Post  
Yeah, so talking about the users and the user experience, to what standard should we as users of the systems hold those systems?

Ricardo Baeza  
Yeah, this is another key question. For example, if you have a system to search for people, on LinkedIn or any other place for hiring, we should be able to audit it and see if the results are fair. Do you have enough women in the answers? Do you have enough minorities, at least in the percentage in which they should be represented? If you have 10% immigrants, do you get 10% of immigrants in the answer — of course with a valid work permit, we always have to follow the law. These things are not easy to do today: to audit a system and be able to validate claims, or simply to validate laws, because some places have laws about, for example, gender parity. One thing I believe is that in the future, fairness of algorithms will be like organic food: people are willing to pay more to build a better world, to contribute to what is called the common good. Systems should do the same: they should contribute to the common good and not be greedy. I feel that today many systems are too greedy. And you know, there have been systems like that which lost their traffic and died — things like Myspace, or large search engines that existed in the past and don't exist today — because they really didn't worry about the user.
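
As a sketch of what such an audit could look like in code (the data, the metric, and the function name are invented for illustration, not an established auditing standard): compare each group's share of the top-k results against its share of the candidate pool.

```python
# Hypothetical fairness audit: does each group's share of the top-k results
# match its share of the candidate pool? (All names and data are invented.)

def exposure_ratio(results, pool, group, k=10):
    """Share of `group` in the top-k results divided by its share in the pool.
    1.0 means proportional representation; below 1.0 means under-exposure."""
    top = results[:k]
    share_top = sum(1 for c in top if c["group"] == group) / len(top)
    share_pool = sum(1 for c in pool if c["group"] == group) / len(pool)
    return share_top / share_pool if share_pool else float("nan")

# 90 synthetic candidates, one third of them women.
pool = [{"id": i, "group": "women" if i % 3 == 0 else "men"} for i in range(90)]
results = sorted(pool, key=lambda c: c["id"])  # stand-in for a ranker's output

print(f"women exposure ratio in top-10: {exposure_ratio(results, pool, 'women'):.2f}")
```

A real audit would also need agreement on the reference population — exactly the "right reference point" problem from the beginning of the conversation.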

Nathalie Post  
Yeah, no, exactly. And maybe to segue a little, because we're talking about the users, and Myspace and that sort of thing: one thing that I found really interesting about your work was the research you conducted around activity bias, and the different websites you analysed in that context. Could you explain a bit more about that research, and how activity bias affects content?

Ricardo Baeza  
Yes. So activity bias, which can also be called engagement bias, is basically how we behave as a group on a website. We know, because of the law of least effort, coined by George Zipf in the first half of the last century, that not all people are active all the time. If you take any website that has, say, 1 million users, maybe only 10% of them will be active in a given window of time. That's why it's so important when websites say: I have 10 million users, but 1 million active users per month — basically, to know how much activity there really is. Especially in social networks, most people are just lurking, as Jakob Nielsen put it. He has his participation rule for the internet: 90% of the people are passive, 9% basically react to other people, and just 1% are actually doing things — posting new content, giving opinions. These are the influencers today. And the truth is that this is not something new; it doesn't happen only on the web, it has always been like that. If you go to any country, very few people are public faces, because only some people want to be involved in politics or some kind of activism, and most people prefer: no, I want to live my life, I don't want to be exposed. So a lot of people are passive, and very few are active. So we looked at a related problem: how many people produce half of the content? For me, it's like the democracy of content. We studied a small dataset from Facebook, and we found that 7% of the people wrote half of the posts — a small percentage, less than 10%, are the active ones. Then we looked at a very large collection hosted at Stanford that anyone can go and use, and which I think is still being collected: reviews on Amazon. There we found that 4% of the people wrote half of the reviews. And I thought: this can't be right — 4% of the people have time to write half of the reviews? We checked the results, and it was right. Then I said: well, the only way this can happen is if they are being paid to do it, if some of the reviews are fake. And by coincidence — I cannot know if it was because of my work — one month after we published the result, in 2015, Amazon started to sue fake reviewers, and they are still doing that today. There are many reviews that are fake, because someone pays for a better number of stars. We even checked review quality: if you take the quality of the reviews into account, using some proxies for review quality, the 4% goes down to 2.5% — basically, 2.5% of the people wrote the best half of the reviews. Then we had the chance to run our algorithms on a very large Twitter collection from 2009; at that time Twitter gave the whole collection to a German institution, and it was not as big as today, but the interesting thing is that it was everything. There we found that 2% of the people produced half of it, which is even more skewed.
But I think here ego plays a role, and Nielsen's participation rule works because some people want to be leaders and be very visible and have many followers — either because they want to have impact, or because they're insecure and want to feel that they have many followers, I don't know; that depends on the cognitive biases of each person. We also checked all the Wikipedia data — all the Wikipedia traffic is completely public, so anyone can use it, and it's a pity that it isn't used more. There we found that about 2,000 people, which is less than 0.1% of the editors, wrote the first half of the English Wikipedia. And those people were not paid, of course. Maybe thanks to them we have Wikipedia today, because most people will not start contributing to something empty: if we say to someone, write in this encyclopaedia that is empty, no one will do that. You need to have something that is at least half full, and then all the optimists start contributing. That's what happened, and that's why we have Wikipedia today.
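
A small sketch of how this "how many produce half" measurement works (the activity counts are synthetic, drawn from a Zipf-like rule in the spirit of the law of least effort, not from the Facebook, Amazon, or Twitter datasets): sort users by activity and count how many of the most active ones it takes to accumulate half of all content.

```python
# Synthetic Zipf-like activity: the k-th most active of 10,000 users
# makes roughly 1000/k posts. All numbers are invented for the sketch.

def half_content_share(posts_per_user):
    """Fraction of users (most active first) needed to produce half the posts."""
    counts = sorted(posts_per_user, reverse=True)
    half, running = sum(counts) / 2, 0
    for n_users, c in enumerate(counts, start=1):
        running += c
        if running >= half:
            return n_users / len(counts)

activity = [1000 // k for k in range(1, 10_001)]
print(f"{half_content_share(activity):.1%} of users produce half the content")
```

Under this toy distribution, well under 1% of the users account for half of everything — the same order of magnitude as the 2% Ricardo found on Twitter and the 0.1% on Wikipedia.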

Nathalie Post  
But in the end, what are the implications of this for the content that we consume on the web as users? What does this mean, very concretely?

Ricardo Baeza  
More concretely, it means that we are reading the content of the people that try to influence, but we are not reading the content of all the people. It's not really the wisdom of crowds — at some point I was a big defender of the crowds, but the truth is that it's more like the wisdom of a few. For example, how many popular people on Twitter are followed by most of the people? I think there was a number like 50% of the people following 1% of the people, something like that: it's very skewed. And this shows how our world works: the politicians, the leaders we have, are a very small percentage of the world, and they decide for us, and many times they don't decide in the right way. You might think that democracy solves this problem, but if you look at many elections today, the abstention is very high, even when there are important things to decide — like the referendum two days ago in Chile, where 49% of the people didn't vote. A lot of people, especially young people, feel that the system is not working, and they don't even want to vote to change it. Partly, I think, the reason is that they don't have the right leaders; they don't find the right person who says: I want to change things, and I want to make a better world. I guess in Sweden that may happen when Greta becomes old enough to be a politician, I don't know. But we don't have many of these people in the world to change things. A lot of young people are seeing climate change, xenophobia, racism, and they are different from the previous generation: they want change, they want a fairer world, but they cannot get a voice. And this is something good about social networks: they provide a platform for voices to be heard. In many countries — for example somewhere close to you, in Turkey — social networks are used a lot because they are the only way for people to express themselves, since the media is controlled by the dominant groups in most countries.

Nathalie Post  
No, absolutely. So we're nearing the end of our time here — that went really fast. But I'm wondering if there are any closing words, or even a call to action, that you want to give to people, knowing all the information that you just shared.

Ricardo Baeza  
So maybe I can say a few things that people can do — you asked this question before, but maybe we can expand on it. The first thing I would say is: don't be pessimistic and believe that nothing can be done, because that would be like saying we cannot solve the problem. A lot of people have seen The Social Dilemma and then they don't know what to do. There are many things we can do, but we need to be strong, to have a lot of will. The second thing I would say is: be aware of your personal bubble. For example, Apple now gives you, every week, the percentage of time that you use every device, which is something very good, because it makes you more aware of how much you are using the system. We have to do what we did with kids in the past — how many hours of TV you could watch — here we need to do the same. Also, for example: how many places do you go? If you always go to the same places, it means you have no diversity. What do I publish, and what is the goal of my publishing — is it just to pass the time, like laughing, or am I trying to contribute something important to other people? Do I interact with the content of my network, of my friends, and so on? Be aware of what you're really doing, and then at least use your time well. If you realise that you are spending too much time, you have to go on an internet diet. An internet diet means: okay, I will restrict the time I use the internet every day — and you really have to resist the temptation. My first mobile phone was a BlackBerry, and I was looking at that BlackBerry all the time; I was very happy to be connected to the internet in my hands. But at some point I said: I cannot live in function of this device. So I decided I would not look at the device until 10 minutes had passed after every time I looked. And I did that. And now it's not even 10 minutes — it's not an addiction anymore; sometimes I don't look at my phone for half an hour. If you see some young people who cannot put the phone down, they have a problem, because basically they cannot live without the phone, and that is an addiction, like any drug. Another important thing: don't fool yourself. If you are really looking at radical content — radical political content, or that the earth is flat, or conspiracy theories, or that COVID doesn't exist — try to say: okay, let's look at another opinion, just in case I'm becoming radical. The problem is that radical people don't realise they're radical until it's too late. You can always use search engines like DuckDuckGo and Qwant that care more about privacy; there the results are not personalised, and this is good, because personalization shrinks your bubble — your echo chamber gets smaller because your historical data is used. When you look at the screen, try to fight against the nudging: read everything, look at everything, or look at other places. And whatever you read, try to be a sceptic: not everything that aligns with your beliefs is correct. Sometimes I see news where I say, this is impossible, this is not real — and it is real. And sometimes people say, oh, finally it happened — and it's false.
So we need to be more careful, because now, with deep learning, even videos can be fake, and they can look real: oh, this politician said that — and it was fake. And finally — I already said it, and I think this is the most important one — the only way to solve this problem is to be aware of it. Just as you need to be aware of the time you spend on the internet, you need to be aware of your cognitive biases, because that's the only way to be a better person: to be aware of your deviations. There are some deviations that are not good, and you should fight against them. When I realised that I was working on bias, I made the conscious effort of checking, for example, how I say things, and I realised that many times I was using words that I shouldn't use, because for me they were normal to use — but if you think about the origin of the word, it had the wrong origin. There are still many words in English that, for example, end with "man", and some of them are very hard to change. But, for example, instead of talking about mankind, you should talk about humankind, or a human being. There are many ways to fix that which people don't realise, because they have been using those words for 50 years. I don't blame them — but someone should tell them: look, maybe you should change and use these different words. So awareness, I think, is the first thing, the most important thing.

Nathalie Post  
Thank you so much for that, Ricardo, that was really great and insightful. And maybe to close off: if people want to learn more about you, or read more about the research that you're doing and have done, where should they go?

Ricardo Baeza
Yes, so I have a website, Baeza.cl, because I'm originally from Chile. You can go there, and there are many publications. I also write on Medium — you can search the web and find my Medium page — and I write both in English and in Spanish, for people that read Spanish. Also, I'm now quite active on Twitter. I was not that active before, but because of work on COVID data science I became active, and now I'm using it for other purposes, like research. My handle is polarbearby: at the beginning I wanted to be a lurker, to be passive, and then I became active, but I keep polarbearby because I'm an explorer at heart, and I guess polar bears are always exploring the Arctic. So those are the three places, and of course you can always check my LinkedIn, where I also post things about artificial intelligence and data science. Twitter, LinkedIn, my website and Medium are all completely public, so you can read about my work.

Nathalie Post  
Great. Well, thank you so much, Ricardo. It was a pleasure.
