AI's impact on labor and the economy

B Cavello | Program Lead, Partnership on AI
We dive into the work B does at the Partnership, particularly around the topics of responsible sourcing and the AI and Shared Prosperity Initiative. Within this conversation, we explore big questions such as 'How can you steer technology in a direction that increases economic prosperity for the many rather than the few?'

Nathalie Post  
In this episode, I am joined by B Cavello, Research Program Lead at the Partnership on AI, a nonprofit multi-stakeholder initiative that addresses the most important and difficult questions on the future of artificial intelligence. B is a technology and facilitation expert, passionate about creating social change by empowering everyone to participate in technological and social governance. And this is what B does at the Partnership on AI, leading multi-stakeholder research with leaders at organisations including Google, Microsoft, Intel, Facebook, and the ACLU to guide the responsible development and deployment of artificial intelligence. In this episode, we dive into the work that B does at the Partnership, in particular around the topics of responsible sourcing and the AI and Shared Prosperity Initiative. Within this conversation, we explore big questions, such as how you can steer technology in a direction that increases economic prosperity for the many rather than the few. I'm very excited to share this episode with you, so let's get into it.

Hey B, and welcome to the Human Centred AI podcast. Great to have you here today. I'm really excited to talk to you. But for the people who may not know you yet, could you start by giving a bit of an introduction about yourself and your background?

B Cavello  
Sure thing, my name is B Cavello. I am a research programme lead at the Partnership on AI, an organisation that is really about steering the course of artificial intelligence development in a way that benefits people and society, and tries to create greater equity in the world. And the work that I do at the Partnership on AI, or PAI, is really focused on the ways in which artificial intelligence technologies are intersecting with labour and the economy.

Nathalie Post  
Yeah, that's really great. Can you tell us a bit more about, you know, your background and what got you to join the Partnership on AI?

B Cavello  
My background is pretty strange. I have taken a very circuitous path into this space. There's a podcast that I like to listen to sometimes called Talking Machines, and they always ask people how they got where they are. Everyone says they have a weird background, but I feel like they don't know what weird is. So my journey takes me through studying economics and working in a nonprofit arts and education organisation that ended up developing toys to teach people how to code. I then entered the games space with a record-breaking Kickstarter campaign called Exploding Kittens, where we raised almost $9 million to make a silly card game, which was an absolute adventure. But in that journey, I realised I really wanted to get back closer to technology, and I ended up joining IBM in the Watson division, their AI group. There I got the opportunity to work cross-industry, cross-sector, cross-geography, really seeing the applications of AI all over the world in all different industries, which was fantastic. However, I also had this strong feeling that there's a real need and an opportunity to think really concretely about how AI can be something for public benefit. And that took me to the Assembly fellowship programme, run jointly by the MIT Media Lab and the Harvard Berkman Klein Centre for Internet and Society. And ultimately, here I am at the Partnership on AI, getting to do what is just incredible work from an intellectual perspective and a community perspective. There are so many brilliant people that I get to work with every day. So I'm grateful that my journey through economics and card game startups and all these silly diversions has ultimately brought me back to something that I really care about, which is the relationship between people and each other, and between people and technology, and how we can try to make that something really positive and powerful in the world.

Nathalie Post  
Yeah, that definitely is an interesting pathway to where you currently are. I must admit I do hear more of these stories, but this one definitely stands out. I mean, the link from Exploding Kittens to what you're doing now is, like, a mind-blowing one, should I say?

B Cavello  
Appropriately!

Nathalie Post  
Exactly. So, I'm kind of curious. You mentioned already a little bit about what the Partnership on AI does. Can you tell us more about why the Partnership on AI was founded and what it aims to achieve?

B Cavello  
Yeah, the Partnership on AI as an organisation is a kind of coalition nonprofit. It's a multi-stakeholder organisation, meaning that the "partnership" in the name refers to all of these partner organisations that come together to work towards our mission. And our mission is really about bringing together diverse voices in the AI development community, as well as the communities impacted by AI's development; serving as a forum for conversation amongst these 100-plus partner organisations and other communities; and creating recommendations, resources, and strategies for steering that development. The partners in the Partnership on AI are many and quite diverse, across academic institutions like the MIT Media Lab and Berkman Klein, as well as, you know, the big industry players, the Facebooks, the Googles, the Amazons of the world. And the bulk of our partnership is actually made up of nonprofit members. Here in the US, folks might be familiar with the ACLU, or folks might know about the Electronic Frontier Foundation; there are all of these different nonprofit organisations, like Human Rights Watch, that bring this mindshare to the partnership to help ensure that the recommendations and resources we develop really take into consideration the breadth of impact that technology can have. We were founded in 2016, so not quite four years ago, and the work that we've done has really ramped up in the last year or two. We've had such incredible energy from our partners, and that's something that really drew me to this space: getting to see AI, and the impact that it has, from so many different vantage points and so many different perspectives.

Nathalie Post  
Yeah, yeah. And so what are you currently working on within the Partnership on AI? What kinds of research activities do you take on? Can you tell us a bit more about that?

B Cavello  
Yes, the research team does a lot of different things. In a pre-pandemic world, I would joke that at the office I would always experience FOMO, the fear of missing out, because every conversation you would overhear was just so cool and so interesting. You were like, wow, I wish I was working on that project, whatever it was, every single project. The work my colleagues do engages with things like media integrity: thinking both about how information is served up to you, for instance in your news feed on social media, as well as deepfake detection and the different ways that AI is used to create synthetic media. My colleagues are also engaged in thinking about the intersections of AI and fairness as it relates to criminal justice as well as hiring decisions. And I've had the pleasure of supporting some work we're doing on high-stakes research, AI development that might have dramatic impacts on the world, and how that should actually be produced and published. There are considerations we can learn from the fields of biomedical research as well as nuclear research: when we're developing something this powerful, how can we exhibit the responsibility needed to ensure that we're being good custodians of this really incredible, powerful technology? There are so many different things happening at the Partnership on AI that are really, really exciting. The work that I do in particular, with my colleague Katya Klinova, is focused on AI's intersection with labour and the economy. In particular, we have two different work streams happening right now. We just wrapped up a workshop series, and I'm really excited about all the learnings we've been able to compile from partners and collaborators, thinking about what I think of as the upstream of AI development. In order to build machine learning models, you need lots and lots of data; that's just the way it is right now. The majority of industry applications of AI are based on what we would consider labelled data: data sets that have, for instance, an annotated image, so you might have a photo that has a box around the car or the cat or the aeroplane; or sometimes a clip of audio paired with its transcript; or a sentence in one language and its translation into another language. This is really the bread and butter of most industry applications of deep learning in AI. And the reality is that a lot of people think about labelled data as just a thing, a commodity, a data set that you can purchase or acquire, which in some ways is true. However, what goes unacknowledged is that there are hours and hours of human work and intention that go into curating and creating those labels, those annotations. We broadly describe this as data enrichment.
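To make "labelled data" concrete for readers, here is a minimal sketch of what such records can look like. The schema is hypothetical, loosely modelled on common object-detection and parallel-text annotation formats; the field names and values are illustrative, not from any specific dataset discussed in the episode.

```python
# Illustrative only: a minimal sketch of labelled-data records of the kinds
# B describes. Schema and values are hypothetical, not a real dataset.

# An annotated image: a human data-enrichment worker drew each box.
image_annotation = {
    "image": "street_scene_0042.jpg",
    "labels": [
        {"category": "car",       "bbox": [34, 120, 310, 260]},  # [x, y, width, height]
        {"category": "aeroplane", "bbox": [400, 15, 180, 90]},
    ],
}

# A parallel-text pair for translation models is labelled data too:
# a human produced (or verified) the target-language sentence.
translation_pair = {
    "source_lang": "en",
    "target_lang": "fr",
    "source": "The cat sat on the mat.",
    "target": "Le chat était assis sur le tapis.",
}

if __name__ == "__main__":
    # Every field here represents human judgement and working time.
    print(len(image_annotation["labels"]), "human-drawn boxes in this record")
```

Each of those small fields stands in for seconds or minutes of human work, which is what makes the working conditions of annotators a supply-chain question rather than a footnote.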
So one of our areas of work, as we think about what feeds into the development of AI and how labour and human work go into these systems, is really around the working conditions of the people who are doing data annotation, and how we as the AI community can make more intentional choices about uplifting those workers: making sure that they're paid for the work that they do, that their working conditions are reasonable and fair, and that they have insight into what they're labelling. You know, sometimes people might be asked to label a data set and only find out afterwards what it was used for, and that feels really horrible. This also intersects, of course, with things in the AI and media space around content moderation, and with how we decide what kinds of information we want to train our models on. So that's one area of work: responsible sourcing, the AI supply lines that feed into these systems. The other area of work we do is on the downstream impacts of AI: how does AI actually impact our economies? How does it impact our labour markets? That's the AI and Shared Prosperity Initiative, which really challenges us to think of technology development not as a deterministic course, where technology just unfolds and we're along for the ride, but rather to say: the way in which technology is developed, and the kinds of jobs it produces or eliminates, is something that we get to make decisions about, and we should make decisions about it. We should really be thoughtful, as the developers of technology, about what we're building, who it's going to impact, and how we can think more concretely, both upstream and downstream, about whose work is valued or not. So it's a really, really interesting space. I could go on about it for hours. I'll stop here. But yeah, really cool stuff.

Nathalie Post  
No, it's super interesting. And I think especially the whole responsible sourcing part is really not talked about enough, so I think it's definitely a very good research area. I'm kind of curious how you go about this research, and what kinds of things you're learning so far in both those areas, the upstream and the downstream side of things.

B Cavello  
The way that we conduct research at the Partnership on AI does vary a little bit by issue area. But in this work, for instance, we've had the tremendous pleasure of having extraordinary collaborators. When it comes to the data enrichment work, the responsible sourcing work, if you want to get a background on this important, untold story in the AI world, one of the great books to read is Ghost Work by Mary Gray and her collaborator Siddharth Suri. We've been so fortunate to actually work with Mary on this, and to work with various data enrichment providers, as well as, of course, the companies that are consumers or purchasers of labelled data. So part of the way that we conduct our work is in conversation, in interviews: being able to actually talk through the processes and understand where the breakdowns are, with the people with lived experience on the ground who are doing that work today. One of the mechanisms through which the Partnership on AI conducts its research is convening people, hosting conversation. Another thing that we do is, of course, the more traditional research: bringing together all of the previous work in the space and trying to pull that together into something that can give people an entry point to understand an issue. So in the responsible sourcing work, for instance, we are drafting a white paper with recommendations. That includes references to materials already produced, as well as the kinds of tools and resources people might need. For instance, as a kind of experimental case study, Caroline Sinders, who's a researcher and creator, built a tool for testing how long it takes you to do certain annotation tasks, which shows how much you would have to price each task in order to hit a living wage or a minimum wage. Tools like that can be really powerful and eye-opening. So our work is in bringing together all of these many voices and producing outputs that help to guide people and make concrete recommendations. However, I will note: when I worked at IBM, I worked with a lot of product managers and machine learning engineers and folks who were doing amazing stuff, but the reality at the end of the day is that very few of them, if any, were going to read a white paper or a recommendation. So part of our work as well is in translating the academic research, the white papers, the things that might speak more to the policy teams or the parts of an organisation that take the time to read these things, into something a little bit more actionable: into tools and resources for the folks on the ground, the data scientists, the ML engineers, the product managers who on a day-to-day basis are actually making really important decisions about the path of technology, even if they don't know it.
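As a concrete aside, the arithmetic behind a task-pricing tool like the one described is simple enough to sketch. The snippet below is an illustrative reconstruction under assumed numbers (a hypothetical 45-second task and a $15/hour wage target), not Caroline Sinders' actual tool.

```python
# A minimal sketch of the arithmetic behind a wage-aware task-pricing tool.
# This is not Caroline Sinders' implementation; the numbers are placeholders.

def price_per_task(seconds_per_task: float, target_hourly_wage: float) -> float:
    """Return what one annotation task must pay for the annotator to earn
    the target hourly wage, assuming no unpaid time between tasks."""
    tasks_per_hour = 3600 / seconds_per_task
    return target_hourly_wage / tasks_per_hour

if __name__ == "__main__":
    # Example: if one bounding-box task takes 45 seconds and the target is a
    # $15/hour living wage, each task must pay roughly $0.19.
    rate = price_per_task(seconds_per_task=45, target_hourly_wage=15.00)
    print(f"Minimum price per task: ${rate:.2f}")
```

Timing yourself on a handful of real tasks and plugging the average into a calculation like this makes the gap between typical per-task rates and a living wage immediately visible.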

Nathalie Post  
Yeah, yeah. And I really think it's great that you are making that translation into practice. I'm wondering, is there any advice you can give in that sense? Because, especially in this field, well, I think in any academic field, there is so much being published, but a lot of it actually stays within the academic community. Is there any practical advice you can give for making that translation into practice?

B Cavello  
So the work of translating is something I'm deeply passionate about, but I certainly wouldn't say I'm the foremost expert on it, in that there are a lot of unanswered questions in the AI field about how to get this done in a repeatable, scalable fashion. I think no one has quite cracked that nut entirely yet. However, we are so lucky at the Partnership on AI to have Jingjing Yang, who's our head of product development, and she's really guiding us on this path of thinking concretely about a user-oriented experience. That said, there are some high-level learnings. I think you and I actually first crossed paths through a project called Closing Gaps in Responsible AI, which was really about asking the community: what do you need? What's stopping you from fulfilling these promises, from following through on these recommendations? And others, including Jingjing and Bobi Rakova and other researchers, have also been really interrogating this question of translating principles into practice. One high-level learning, unfortunately or fortunately, is that having organisational leadership bought in is really, really important for folks. And part of what that means as an opportunity is that it's a smaller audience size, right? So on the one hand, that can be a powerful thing. Another thing that a lot of folks find challenging is that the recommendations made in frameworks, and the principles shared by organisations, tend to be rather high level; they don't necessarily give a concrete recommendation for what to do in the day to day. And then another high-level learning is that a lot of time can be spent deliberating the best way to do something, which is certainly an endeavour worth undertaking, and we need to keep that conversation going; it's what is fueling so much of the conversation in the AI and ethics space, really in pursuit of something other than what we have now. But there's also a reality that sometimes this can be perceived by people as a bunch of rules and limitations and hindrances in the path to making great products, or whatever it might be, and then it gets cast aside. So part of the challenge is in reframing the conversation around responsibility and ethics as things that are empowering and productive and valuable to society: don't you want your business to be doing things that are valuable for society? And part of that is also just a practice of bringing together, like the Partnership on AI does, the researchers and the academic folks with the product designers and the people who are out in the field building products. Closing that gap in conversation and in language and values is actually really powerful. Being able to speak the language of the person whom you want to actually enact the change that you're recommending is really, really important.

Nathalie Post  
And I really loved what you said about those high-level guidelines, like "be transparent" or whatever, that we see so often, where you're left thinking: okay, so what does this mean? To what degree? How does this apply to me? How am I going to embed this in my process? And so you mentioned, of course, the Closing Gaps in Responsible AI work that you did, and I personally really love that project. I'm curious if you could tell a little bit more about that work, and also where it has ended up right now, because it has been a while since it was published.

B Cavello  
Yeah, thank you so much. The Closing Gaps in Responsible AI work was inspired by the reality that I was just having a lot of conversations about this topic, and I think many of us who are steeped in this space suddenly find ourselves really engaged in this principles-to-practice conversation. In particular, what I recognised in conversations with people was that folks did not always feel comfortable sharing publicly the kind of criticism or confusion they were experiencing in their company; sometimes people were just straight-up confused, but they didn't want to reveal that. So part of what Closing Gaps sought to do was give people a space to participate in an anonymous fashion, in what I would consider an asynchronous, distributed ideation session. At IBM, I did a lot of design thinking workshops, and, you know, say what you will about them, but there's something really powerful about getting into this generative, creative mind space. Closing Gaps in Responsible AI turned out to be quite prescient, because it was conceived just before the pandemic hit, and suddenly we were running this thing in the March timeframe: taking what I would have loved to host as a multi-thousand-person design thinking workshop in a giant convention hall, and instead translating it into an online asynchronous activity that people around the world, negotiating all of the time zones and things we have to deal with, could participate in and be in conversation with each other. We had a few hundred different participants, and many, many ideas shared, challenges identified, and potential opportunities. We've been synthesising that knowledge to inform the work we're doing, especially the work I'm doing now in responsible sourcing, recognising, for instance, that a white paper isn't what's really going to resonate for folks. We also intend to share more of those learnings as we produce more work going forward. And a fun spillover is that the process of taking that giant-convention-hall, design-thinking-style ideation and turning it into a digital activity also created an artefact, right? It generated a new way of engaging with each other online to have a conversation around problem solving. I've actually had the pleasure of being reached out to by folks who work in entirely different fields: nonprofits that are focused on food delivery for folks who can't get their own groceries, or elderly people who can't go out and get meals. There are other organisations that aren't thinking about responsible AI, but they're trying to close gaps in their community as well and make community-informed decisions. So it was really cool to have something that both informs our work at the Partnership on AI and also yields this artefact, this resource, this tool or system of engagement that others could use for all kinds of different problems I would never have run into.

Nathalie Post  
Yeah. What I personally also love about the title is that it's very, let's say, solution-focused; it's not just about identifying gaps, it's really about closing them and taking action on what you're seeing. So I'm curious: what were some of the main gaps you identified that really stood out, that maybe you hadn't thought about before? And how are you working towards closing those right now?

B Cavello  
Yeah, it's a great question. So, some of the takeaways that we had. Maybe before we dive into that, although I think you're right to frame it as something very solutions-oriented, I also just want to highlight a couple of the gaps, the challenges that people were facing, so that we can contextualise the kinds of solutions people were coming up with. The gaps take the format of: someone needs something, which is difficult because of some reason. So, you know, we want to do this, but unfortunately it's hard because of this. For example: the product team needs to meaningfully engage external stakeholders, like advocacy organisations and affected communities, which is difficult because we don't have a clear picture of which communities and people are affected by our work. And one of the solutions generated by the community was an opt-in directory of advocacy and community organisations interested in being consulted for responsible AI work, which could be organised by category of technology, or by type of impact. Another example of the types of gaps people are facing is something like: our chief ethics officer needs a comprehensive view of the AI products being used in our organisation, which is difficult because most teams don't even know that they're using an AI product. And part of the challenge here, of course, which many of us working in this space have encountered, is that what counts as AI can be a little bit wiggly. So one of the solutions recommended by the community was to develop an internal definition of which types of tools and algorithms should be deemed AI. These different solutions, you know, some of them are really strong, some of them are kind of silly; there are hundreds of different examples. But some of the big themes that came out of all of this were around incentives: who is incentivised to do what, and how can people in organisations have better alignment between the incentives they have as practitioners and the responsible AI principles or goals of their organisation? Another theme was around education: having a shared language and familiarity with the space, the goals, and how we're evaluating success. And the third category that arose from the collection of insights was around tools: it's great to have recommendations, it's great to have guidance, but it's also really powerful to have actual tools and resources that people can use and build into their workflow, so that they can really embody the goals and mission of their responsible AI practice.

Nathalie Post  
Yeah, yeah. And with those tools, I'm actually curious to hear your view, because I feel like there's an overwhelming amount of tools out there that are all doing something, but then you get your hands on a project, you're facing certain questions, and it's often kind of an afterthought: hey, maybe I should have looked for these tools for the questions I'm having in this part of my process. Is this something that you're actively looking at, as in, where in the process practitioners can embed certain tools, let's say?

B Cavello  
100%. Using responsible sourcing as an example, our practice is, frankly, product design for the resources that we're developing. Of course, first we have to create the recommendations themselves, and that's a research undertaking, and an episode in and of itself. But in addition to that, we need to translate them into practice. So the next phase of our development in this work is actually running product design workshops with practitioners. We've had practitioners provide input into the recommendations we're making, to make sure that they're grounded and that they're going to be feasible. But now it's really about where in the process such a tool or resource is valuable, and also understanding that there are many, many different types of practice, that different organisations have different approaches for doing things, and that there might not ever be a one-size-fits-all tool. So while there is a proliferation of different tools and resources out there, it's not enough to build something and say, "if we build it, they will come." We really have to sit side by side with the practitioners whom we hope to impact and have pragmatic conversations with them about what's necessary and what it's going to take to actually make that possible.

Nathalie Post  
Hmm. Yeah, yeah. So we talked a lot about the Closing Gaps initiative, and I want to refer back to something you said at the start, which is more of your current work and research around shared prosperity. I'm curious: how do you steer technology in a direction that actually increases this economic prosperity? And how do you go about that?

B Cavello  
Yeah, the AI and Shared Prosperity Initiative is such an exciting area of work. As I think about what we've discussed already, in terms of the way people do work to develop AI systems, and the people who are building the AI systems and what resources and practices they need to make their process more responsible, then there's the question: whether they did that or not, what is the impact on the world going to be? I sometimes joke, you know, if you go and talk to a random person on the street about artificial intelligence, they're not going to be thinking about criminal justice algorithms; they're not even necessarily going to be thinking about their media consumption in their newsfeed. When they hear "artificial intelligence", people tend to think, at least here in the States, about two things: they think of the Terminator, and they think about jobs. Now, I'm not working on the Terminator side of things, but jobs are top of mind for so many people, and we've heard again and again and again so many conversations about the way technology is going to change society. The truth of the matter is, nobody knows for sure. But there are a couple of really telling trends that we see. Contrast this with the Industrial Revolution, where things were mechanised and we had so many processes automated through the assembly line. What was happening there was that there were skilled craftspeople, and I use the example of a furniture maker, who were developing things, and what the mechanisation of the Industrial Revolution allowed for was to chop up that task, that work of making furniture, from the skilled craftsperson, and turn it into something that had all of these tiny component processes. That actually made the barrier to entry a lot lower. It may have devalued some of the skill and craftsmanship of making furniture, but on the other hand, it made it so that people who didn't have that experience could walk in off the street and basically start learning how to do a piece of that process. What's different today is that the way we see automation tending to take place is, rather than taking these big, high-skilled, highly educated roles and breaking them into component pieces, more often the component pieces that have already been broken up and isolated are now being turned over from people to automated systems. And this is having a really concerning effect. Maybe some people say there's going to be a net increase in the number of jobs; maybe that's optimistic, but maybe there will be an increase in the number of jobs. But the types of jobs that exist are going to be ones that require a lot more training, a lot more education. These are oftentimes, in economic terms, called high-skilled jobs, which is language I don't love, because it really misstates the amount of skill required to do so many different types of work.
What I really think of it as is work that requires a lot of pre-training and prerequisite experience in order to do. And what this means is that people who are oftentimes already in some of the lower-paid jobs, the call centre workers, the factory assembly line workers, are now put in a position where they may have much less opportunity to actually find work. And the work that they're doing is oftentimes, quote unquote, "de-skilled", meaning that they may actually be training the system that's about to replace them. And yes, we might be creating all kinds of newfangled opportunities, machine learning engineers and all these cool roles, but those roles are just not accessible to folks. Historically, the response to this has been upskill, upskill, upskill: we need to take all these folks who are working in the call centre and give them a data science course, and then they can join the market as machine learning engineers and data scientists and all of these other things. But there are a couple of problems with that narrative. For one, we don't know yet that there's net space for that. And two, the reality is that the folks experiencing the greatest negative impact from automation are the people who are least resourced to upskill and to change, and in the meantime there's this rough transition period. Additionally, it's just not realistic, honestly. There are so many people who preach the upskill narrative, but if you look at the realities of the world, and the amount of time and investment and attention it takes, a lot of people, especially those experiencing economic precarity, aren't in a position to devote even the mindshare to taking on these new tasks when they're worried about their family's wellbeing and paying rent and making sure they'll have adequate medical care. So these narratives around upskilling, although we certainly should continue to invest in upskilling, are not the complete picture. In reflection of this, some people have said, well, we need things like universal basic income; we need some sort of guaranteed way. And here in the US, I think we're a lot further behind on these kinds of ideas; we need more of a social safety net. But the reality is, even those things present issues, because when we think about how something like a universal basic income might work, it is funded ultimately by taxes. And by whom are those taxes paid? Well, maybe they're paid by the companies benefiting from all this automation, which sounds like a great idea. But in the long term, these companies are amassing more and more power; more and more of the income share is going to capital, to the technology owners, and less and less to labour, such that we're getting to a situation where those companies might not feel such a responsibility to uphold that. Maybe they do for the first few years or the first generation, but we don't know that it will carry on. And so, in reflection of this, there's really a need to think about what technologies we're building, and what types of change we actually want to see come about.
And right now there's such a race to the bottom in terms of cost and performance, in terms of being able to apply automation to work. But this neglects the opportunity we have to really re-evaluate how we design technologies and tools that might actually invite more people into the conversation, that might bring more people into the workforce by creating more opportunities for folks who want to have them. It's not to say that we shouldn't have guaranteed income or that we shouldn't have upskilling, but we also need to really interrogate, as technologists and as innovators: what can we do to steer this technology in a direction that doesn't put us in the position of having to make these tough choices, but instead asks, how can this be a tool of empowerment? Some people call it "pre-distribution", as opposed to the redistributive effects of things like UBI. And I just find that so exciting and filled with so much opportunity. Part of why I joined the Partnership on AI comes from working in a tech company and realising that there are so many good, well-intentioned people who want to do really good stuff in the world, and to me this is an opportunity to interrogate that and say: how can we make that happen? How can we build in a way that makes this possible? And just to give an example for reference, one thing that I think is sometimes underrepresented in conversations about upskilling is how much of the world is still experiencing illiteracy, and the idea that somebody is just going to go online and take some online course and become a data scientist doesn't recognise the reality of having unstable electricity, let alone internet access; of having resources available in your language; of having all of the different preconditions that make upskilling possible. A tool that I think is interesting, and here I'm really informed by a lot of incredible activism from folks in the disability community, is to ask: what can we think of as a tool for increasing access? Maybe folks who can't read could use audio interfaces, or maybe there are opportunities to design mobile browsers or things like this that might be more accessible, that are really adaptive to people's personal contexts. There are even a lot of really interesting pre-distributive conversations around data rights, bringing us full circle from the upstream to the downstream: the people who are actually informing and generating the content, the blog posts that ultimately feed our massive natural language processing and language generation models. How can those people actually have a share in the benefits, in the prosperity, that comes from those technologies? So there's so much to explore, and the work that we're doing is really just at the beginning of what I think is going to become a much bigger conversation. We've been shifting from the kind of "Fourth Industrial Revolution" upskilling framework into a recognition that, wow, we might need to do more than that.
And in this burgeoning conversation, we've had the pleasure of convening a steering committee of brilliant experts from labour, from technology, from civil society, from all of these different perspectives, to ask: how can we actually think about this? What are the questions we need to answer first in order to even help a technologist decide, well, I've got a choice between making this product and that one; which one is going to have a better impact on economic justice? We don't actually know how to answer that question yet. And that's the really juicy, exciting research work happening now: how do we even know what the impact of a technology is going to be?

Nathalie Post  
Yeah, I mean, this is amazing. I'm really fascinated by the type of work that you're doing. And like you said, we did kind of go full circle here. So what I really wanted to ask you: given that ongoing conversation you're facilitating, what can people do to become part of it?

B Cavello  
Thanks. We'd love to have people join us on this journey, and there are a couple of things coming up. One, on the responsible sourcing front: I mentioned that we are planning a product design workshop series for next year, and in that work we're really looking to connect with product managers, data scientists, ML engineers, folks who in their day-to-day life are in the position of saying, hey, I need some data, I need some labelled data. If you find yourself in that position, we want to talk to you, so definitely reach out to us at the Partnership on AI. Another way to get engaged is with the Shared Prosperity Initiative that I mentioned: please go to our website, partnershiponai.org/shared-prosperity, where you can learn a lot more about the initiative and see our steering committee, but also sign up for updates. The folks who sign up through that page, we will call on for input and comment as we have drafts and things to share. So those are two ways to get engaged. Certainly follow us, Partnership AI on Twitter and Partnership on AI on LinkedIn, and participate in the conversation with us, because the goal of the partnership is to bring together the diversity of voices and perspectives that need to be considered, even if they haven't been so far, in the development of AI technology. If that's something that's important to you, we'd love for you to engage with our work.

Nathalie Post  
Thank you so much, B. I think those were some amazing closing words as well. I really want to thank you for your time here on the podcast. You shared some inspiring things, and I'm definitely encouraged to dive deeper into all the work that the Partnership on AI is doing. Thank you.

B Cavello  
Thank you so much. I get so pumped talking with you about this, so it was a pleasure to be here.
