This episode seeks to explore the challenges surrounding ethics and AI and examine how these tools can benefit humanity while mitigating potential harm. Enjoy the conversation as they highlight the need for a robust and inclusive dialogue on these issues and invite you to share your thoughts and experiences related to AI and its ethical implications.
Guests
Ravit Dotan, AI Ethics Expert and Director of the Collaborative AI Responsibility (CAIR) Lab at the University of Pittsburgh
On LinkedIn | https://www.linkedin.com/in/ravit-dotan
Website | https://www.ravitdotan.com/
Hosts
Sean Martin, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining CyberSecurity Podcast [@RedefiningCyber]
On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/sean-martin
Marco Ciappelli, Co-Founder at ITSPmagazine [@ITSPmagazine] and Host of Redefining Society Podcast
On ITSPmagazine | https://www.itspmagazine.com/itspmagazine-podcast-radio-hosts/marco-ciappelli
____________________________
This Episode’s Sponsors
Devo | https://itspm.ag/itspdvweb
CrowdSec | https://itspm.ag/crowdsec-b1vp
Bugcrowd | https://itspm.ag/itspbgcweb
___________________________
Episode Introduction
In this episode, Ravit Dotan, Sean Martin, and Marco Ciappelli delve into the complex and timely topic of ethics and artificial intelligence (AI). Dotan, a respected figure in the field of AI ethics, talks about her work co-founding a lab at the University of Pittsburgh that seeks to improve the governance of AI systems. With the goal of developing responsible AI and reducing unintended consequences such as discrimination and privacy violations, the lab aims to create a more equitable and just ecosystem. Dotan emphasizes the importance of alignment across all parties involved in the development and deployment of AI tools, including investors and those who purchase these tools.
Throughout the podcast, the speakers engage in lighthearted banter on various topics, from fishing to Ryan Reynolds' voice, and even make reference to their previous podcast episode featuring mentions of the Wizard of Oz. They also encourage listeners to check out their previous conversation on advanced technology, which delved into more philosophical and ethical aspects of the field.
The development of AI has rapidly progressed in recent years, with its potential to revolutionize many industries and aspects of daily life. However, this rapid advancement has also raised concerns over the potential unintended consequences and ethical implications of these technologies. As AI tools become more prevalent in our society, it is crucial that we consider the impacts they may have on individuals and society as a whole.
Through the discussion in this podcast episode, listeners gain insight into the current landscape of ethical considerations in AI development and learn about efforts to create more responsible AI. With the hope of generating more interest in the topic and promoting greater awareness of its ethical dimensions, the speakers encourage listeners to engage in dialogue and share their perspectives on these important issues.
In a world where AI technologies are increasingly becoming an integral part of our lives, it is essential that we approach their development and deployment with a critical eye toward ethical considerations. The discussion in this podcast provides an opportunity for listeners to engage in this important conversation and consider the implications of AI for the future of humanity.
____________________________
Resources
____________________________
To see and hear more Redefining Technology content on ITSPmagazine, visit:
https://www.itspmagazine.com/redefining-technology-podcast
Are you interested in sponsoring an ITSPmagazine Channel?
👉 https://www.itspmagazine.com/sponsor-the-itspmagazine-podcast-network
Please note that this transcript was created using AI technology and may contain inaccuracies or deviations from the original audio file. The transcript is provided for informational purposes only and should not be relied upon as a substitute for the original recording as errors may exist. At this time we provide it “as it is” and we hope it can be useful for our audience.
SPEAKERS
Marco Ciappelli, Ravit Dotan, Sean Martin
Sean Martin 00:01
Marco, ask me a question and I'll give you an answer.
Marco Ciappelli 00:08
Are you gonna give me an answer, or are you gonna use, I don't know, some chatbot that is going to produce the way that you would answer, if you were going to use that?
Sean Martin 00:21
I would do it in the voice of Ryan Reynolds, with one curse word and one joke. I don't know if you saw that yet; he had some technology create an ad for him with those parameters. So, are you ready for the F-bomb? I won't drop that, right?
Marco Ciappelli 00:41
No, no, don't. Don't do that. Don't do that. It's unethical. We're not going to do an
Ravit Dotan 00:47
F-bomb. And in our preliminary chat, we talked about your history with fishing. I'm thinking, is it the fishing? Tuna fish? Is it about octopus? Octopi? How do you say it?
Sean Martin 01:04
Well, if it's fishing, it could be the "ph" bomb. That'd be bringing cybersecurity into play. But now we're going way, way off track here. Now, obviously, people listening are hearing a third voice that isn't you or I, Marco. That's Ravit Dotan, and for those watching, I can actually see her. And she has been on the show before; she was on a panel for Redefining Society. So good to have you back.
Ravit Dotan 01:32
Thank you. So good to be back. I had so much fun last time. And I'm really looking forward to the conversation.
Marco Ciappelli 01:39
Yeah, and it was a good conversation. It was a great conversation. And of course, I'm going to do a little plug here and tell people to go and check Redefining Society on ITSPmagazine and find that conversation, which was with a bunch of academics, really interesting people. We went very philosophical, very ethical, around advanced technology. We enjoyed it so much, and there was so little time and so many topics that we decided... there you go, Sean just dropped the camera; for the people listening to the podcast, those watching are probably thinking there's an earthquake there. So real quick, we're happy to have you back. And we would like for you, though, to introduce yourself and share with the audience of this podcast slash webcast what you're up to,
Sean Martin 02:39
because there's something new even since the last time we spoke.
Ravit Dotan 02:42
Yes. I also want to add, about the previous podcast that we did, that we had a game that I enjoyed so much. The game was: let's try to incorporate mentions of the Wizard of Oz.
Marco Ciappelli 02:57
That's right. That's right.
Ravit Dotan 02:59
Now I wonder, do you want to mention the Wizard of Oz again, or should I go for something else? I'll put this in the background as I'm introducing myself, and maybe by the end of that I'll decide. Okay, yeah, so my name is Ravit. I work in the field of AI ethics. That means working on how do we get AI technologies, and related technologies that involve big data, to work for the benefit of humanity. So that means: how do we get people to create AI tools that push humanity forward where it needs to go, for example on climate change? And also, how do we get the people who develop these technologies, who fund them, who buy them, to do so in ways that mitigate unintended harms? These tools are already notorious for creating discrimination, violating privacy, all those great things. How do we get less of that? That is what I want; that is the goal of my career: to get these technologies to really help us as humanity, and also to minimize unfortunate side effects. So because I have this cause, the recent development, since the last time we spoke, is that I co-founded a lab at the University of Pittsburgh. There's a center called the Center for Governance and Markets; that is where I am, I'm a postdoc there. I co-founded this lab with others in the center, and the goal is really: how do we get that outcome of better, more responsible AI? And more specifically, we want to see how we get that outcome by improving the governance of AI systems. Both in tech companies, but also supporting everyone who finances tech companies, like investors, procurement, and those who buy the AI, insurance companies. I think everyone needs to be aligned in this ecosystem. And that is my goal in my work in the lab, and also generally in my career. In addition to my work in the lab, I also work in the private sector.
So I work with startups, investors, all of those actors that I mentioned before. I work with them hands-on to see what we can do, to see how we can improve.
Marco Ciappelli 05:38
Sounds easy peasy, done in no time.
Ravit Dotan 05:42
I will be done by next week. That's my theory.
Marco Ciappelli 05:45
Yeah, I'm gonna kick off with this, because before we started recording we were talking about where we should focus the conversation, and we talked about pointing fingers. And when I think about pointing fingers here, a lot of people refer to technology as if technology were something, or someone, an entity that can just think. And, you know, I think they're already onto the next development, a general AI that is controlling us or controlling itself. The truth is that right now, I always feel like AI, and technology in general, does what we tell it to do. So I feel like what we need to change is our attitude toward what we do with technology. So I think this could be a good way to start: what motivates you? What is the moral... what is the, you know... are you doing tech for good? Are you trying to sell something? Are you making money, and wherever it goes, it goes? I mean, it's a very technical problem that is, to me, a very human problem. What's your take on that?
Ravit Dotan 07:00
Yeah, I agree. Some people really anthropomorphize AI, so to speak humanize it, right? They say, oh, the AI made a bad decision. And I understand where this is coming from, because one thing that makes this technology different from other technologies is that the creators of an AI system have less control over what happens than with other technologies. They're not necessarily going to be able to say why an AI system made the decision that it did, the recommendations it made, the predictions it made; they don't know. That can make it too easy for them to avoid their responsibility, because they could say: I didn't make this decision, I didn't mean for it to make this decision, it's the AI's fault. And so we're pointing the finger at the AI system, humanizing it. I think that's the wrong way to go, because it is indeed people who created this AI system. It is also not a single individual; it is a company. And it's even more than a company, it's the entire ecosystem around it: someone invested money in it, someone insured it, someone bought it, someone regulated it or did not regulate it, someone enforced or did not enforce laws that have to do with it. And so it's a collective responsibility.
Marco Ciappelli 08:35
Let's talk about training it in a certain way.
Ravit Dotan 08:40
Yes. Because often what happens is that when a tech company develops an AI system, it may sound like they did everything from scratch, but some things they took off the shelf, right? So they don't even necessarily know what went into the training. And that can also be a factor that people use to shirk responsibility.
Sean Martin 09:05
Boy, I was gonna say let's talk about the ecosystem a bit, but now I'm thinking maybe a different tack to start, or to keep going from here, is what it's used for, because you kind of gave a few examples at a high level. We spoke to some companies earlier that are very purpose-built for specific things, and they're leveraging algorithms and AI to help them achieve a certain outcome. Many of those can help solve individual problems for humans and society. And then there's the grand, more general AI, where I want to build something that can do a lot of things, for a lot of reasons, for a lot of people. And to me that scale, which I think we're getting close to being possible, gives us less control, right? So what are your thoughts on purpose-built versus general AI? Are there different choices in how we govern them? And eventually we want to talk about governance here.
Ravit Dotan 10:20
Okay, so let me see if I understand the question. Sometimes we build AI for a specific task; I might build an AI system to sort through resumes. That is a specific goal. But other times we build AI systems for more generic purposes, such as a chatbot: I can chat with it about many things, maybe about sorting resumes, maybe about something else. And I think you're asking, tell me if I'm understanding you correctly: is there a difference, and should there be a difference, in our expectations of governance when developing this or that system?
Sean Martin 11:04
Yes, yeah. Does the risk level to humanity change? And does that change how we need to define and then manage its development?
Ravit Dotan 11:15
Okay, um, yeah. So now we have another interesting question: is the risk level different, and should the governance be different? This is a question I need to think more about, but my knee-jerk reaction is that the principles of governance should be the same almost no matter what the risk level is, because part of the job of governance, when it's good governance, is to monitor the risks and always think about what the appropriate measures are for that risk level. So in a way, at a very high level, it's always the same. And the way that I think of governance, there are three elements that I want to see in good governance of AI systems, no matter what kind of AI system it is, whether it's generic or specific. The first is knowledge. I'm expecting the company to have knowledge about AI ethics, and to be curious and always learning about the kinds of risks that its technology poses. I'm expecting the company to consult diverse stakeholders about those risks, and I'm expecting the company to educate its employees. Now, the format in which that happens may be different, and the difference will depend on many things; the risk will be one of them, but they will have to find out what the risk is. So that's part of knowledge. The second pillar is workflow. It's great that the company knows about the risks its AI poses, but what is it doing about it? Here are three things I'm expecting from companies: have a plan and a strategy; have measures and standards, things that you're going to measure and commit yourself to, along with processes in your day-to-day work to make sure you meet the goals you set for yourself; and last, have incentives. If employees' KPIs or goals are about other things, they will never get to this work.
And again, these are things that companies should do regardless of what kind of AI they are developing. And then the third thing is oversight. What does the company do to keep itself accountable? Is it reporting internally about ethics progress? Is it reporting externally? Is it having external audits? And so to me, when I look at my expectations of a tech company, the expectations are around these pillars. Now, the question about the risks will go into that, right, because a company will have to figure out, given the technology it is developing, what it should be doing. And so whether it's generic or specific is just one aspect of it. Other aspects are, even if it's specific, what is it doing? Is it a healthcare thing? Is it something to assist in brain surgery, for example? That is very high risk. So I wouldn't necessarily divide the risk levels based on whether it's generic or specific; the risk-level assessment would need to be more sensitive to the particularities of the case.
Sean Martin 14:45
Interesting. And Marco, you and I have had many conversations around risk and cybersecurity, and perhaps even privacy, and we often look at the automobile industry for some insight into how we may have overcome some of these challenges in the past. A car can only be built to go so fast, right? But they continue to push the envelope there. Yet on most roads it's 50 or 60 miles an hour, or 120 kilometers an hour; you can't go beyond that anyway, even if the car can go three or four times faster than that. And then alongside those vehicles there are motorcycles, right? Similar technology, different format, similar purpose, but built in a different way. So I'm just looking at different systems, built in different ways, using different parts, for different outcomes. They are all still governed as part of the automotive industry, right? They have to meet certain emissions standards, and the users have to abide by certain rules and laws. Seatbelts are the example we often refer to when we talk about cars and this type of thing. So it's an observation. I'm just wondering, are there other areas like that that we can draw upon, or have drawn upon, for AI as a technology, to say, you know, we've done this before? Why are we reinventing the governance wheel here, just because we can, like everything else with technology?
Ravit Dotan 16:25
Yeah, that's a fantastic question, because there is a tendency to reinvent the wheel with AI. Some questions come up as if they're totally new questions, but actually we've dealt with these questions before, starting with questions about governance. People are asking questions about AI governance, but sometimes they ask them as if we have never needed to govern tech before. So one thing that I've been doing: there is this field called responsible innovation. It is an academic field of research, so I actually started by looking at what is going on in that field, what we already know about governing technology responsibly. Unfortunately, there isn't a lot; it's also an emerging field. When I think of comparisons, one area that I think is helpful is actually sustainability. I forget if we talked about it in our pre-show conversation, before we recorded, or after, but climate change is also something that we're dealing with as a society. Companies impact climate change a lot, and we're dealing with this question of how we motivate companies to do better in terms of sustainability. And I think that is something to learn from, because in recent years we see more and more awareness, with companies at least claiming to try to do things, and we can criticize how successful it is, but we see that this is a movement now. And the question is, how can we learn from that? So that is one side for comparisons: for me, the sustainability movement, and also the DEI movement, diversity, equity, and inclusivity. How did that happen? How did we come to have awareness of this, and what can we learn to apply to the field of AI? Another area to learn from: I actually also think of the vehicle industry, but from a slightly different angle. You know, going back to the theme of people shirking responsibility in AI because they feel it's not my fault, I didn't do it: that is also not something new.
When something goes wrong, every person in the chain wants to say it's someone else's fault. So when a car crashes into a tree, it could be very easy for the company to say, that's not actually our fault, it's the driver. It's the driver who ran into a tree, so don't blame us. I think there's some analogy to companies saying about an AI system: we didn't make this discriminatory decision, it's the AI. The AI drove us into a tree. And we can learn from the tree example to see why that is shirking responsibility. Because even though the car engineer wasn't in the car during the crash, if the reason for the crash was that they didn't check the brakes, then it is the company's fault, even if they are not the ones who were driving the car at the moment. So even if there was another entity who was the more immediate cause of the crash, they had their responsibility as a manufacturer to do something in the background to prevent that, or at least to mitigate the harm. If they didn't install safety belts, that is also their fault. And so another place where I learn from the past and past technologies is where we do put responsibility on companies for consequences that do not seem like they are the direct outcome of something an engineer in that company did.
Marco Ciappelli 20:37
So, okay, we're going a lot of places, but all I can think is that we have a human problem more than a technology problem. So I'm gonna go back to that. It's a very philosophical issue. Let's just say, to define it: ethics is doing things following moral behavior, and morality is doing the right thing. But what is the right thing for me, or the right thing for you? What about the cultural influences that come into that? I mean, we all assume we're all driven by the same sense of good and evil, right? But at the end of the day, when it comes especially to a company, we are defending market share, we're defending money, we're trying to make profits. So what drives what? And the reason I keep bringing this up is because I think we need to agree on what it means for technology to be ethical. To me it means that it improves our human condition: either by helping, say, in the medical field, or helping so that people don't get killed on the road, if we're gonna go with the car example, or the street, or the motorcycle, but also the environment, because ultimately that's gonna kill us. So is it about preservation, is it about improving our life? If we could just follow those basic principles, and we could all agree on a list of, I don't know, two, three, four, five of those, then in my opinion those decisions could be applied to the way that AI should behave: it should behave like a responsible human being.
Ravit Dotan 22:25
Okay, there are many interesting issues to address here. So, my background is philosophy; that's why I went there. If there's one thing that I know, it is that we do not know. We do not have universal agreement on what is ethical. Yep. And I don't expect us to have that anytime soon. But we feel like we want to have this answer, because we want to somehow say what is okay and what is not okay with these AI systems. And that, I think, drives us to say: but what are the ethical rules? I think that is a less helpful way to go, pragmatically, because we do not have a universal ethical theory. These are not only ethical questions; these are political questions, social questions. We are going to have a diversity of thought. This is not about relativism or absolutism. Maybe there is an absolute moral truth; I'm not going to say anything about that. But even if there is a universal moral truth, we don't know what it is, or at least different people think different things about what it is. And that is the situation we're in with companies generally: we have many ethical issues that we would want companies to address. What happens pragmatically is that companies decide on their own values, and sometimes they tell us what their values are, and in the good scenarios they act to realize those values that they have decided for themselves. In a way, it's how individuals do it: I have my values, I try to act on them, sometimes it's difficult. We talked before about vegetarianism: some people are vegan, some people are vegetarian, some people eat meat. To an extent we are open to different perspectives, but we expect people, or companies, to say what their policies are, to have some kind of policies, within the bounds of law.
As consumers, we can also decide not to buy from a certain company if it's misaligned with our values. And so what I'm driving at is that, when I think about AI systems and ethics in the workplace, what I want from companies is not necessarily to align with my own ethics, but rather to have transparency about what their values are, and then to act to abide by those values. And it sounds maybe trivial, but it is not. Because what we see often in the AI space is that companies are just not even thinking about it. And that is a real problem: to just release some solution onto the world, a really powerful tool that can have severe unintended consequences, and not even stop to think, what will this do? What is the safety belt that I need to install? So the conversation is not even, do they rely on the right values or not. The conversation is: are they even thinking about this? Are they doing anything to stop unintended harm? That is where we're at. And because we're in a way so far behind, we're not even there yet. I don't want to say it's splitting hairs to have this debate on what is ethical or not, but companies are not even stopping to think about the consequences and how to stop them.
Marco Ciappelli 26:24
Alright, so Sean, let me add something before you go, because I'm thinking, I like that idea: you're giving us a list, let's put it out there, let's write what our values are, and then you can align with my values or not. And that could go even deeper into a philosophical conversation, like freedom: what the hell is that, freedom to do what? I have the freedom to do what I want until I harm someone else, blah, blah. So to bring back the example of technology, and I'm going to connect it to the car again, because this was a conversation we had many years ago, Sean, on a podcast or whatever, where you brought up, I don't remember, that sociological experiment where the car needs to decide, or the driver needs to decide, who it is going to kill. Is it going to kill the driver to avoid killing somebody else when it cannot brake at a crossing? Does it kill an old person? Does it kill a mom with two kids in the carrier? You know, there is an ethical decision there. And I remember the question was, who makes that decision about what the car will do if it's driving itself, right? And I remember somebody said, well, if I buy the car, I want a car that protects me. So if Mercedes or another brand comes and says, I'm going to protect the driver no matter what, some people are gonna buy that car; some others may say, no, I want to protect other people's lives, and maybe buy another car that decides that. So in a way, we do agree that there are different values, and we respect that, and we choose what we want. We're never going to really apply a regulation that we're all gonna agree on.
Ravit Dotan 28:25
I want to give another example. Yeah, I mean, I agree with what you said. I want to give another example that could help crystallize the point. When we think about equality, right, we say we want an AI system to be fair. There's a question of what that means, and there are many open questions about it: which groups count? Which groups count more? Do we want to be equally fair to all of them? And what does that mean in practice? Recently I read a really good paper about discrimination in mortgage loans that I want to use to illustrate. Mortgage loans are really important, because the ability to buy a house is super important for social mobility. I live in the US, and in the US there is a sad history of discrimination against black people when it comes to purchasing houses. In the past, they were simply not allowed to buy some houses, which was terrible. Today they are allowed to buy houses; however, there is still discrimination against them when buying a house. Today, most people need to get a mortgage loan, and if they cannot get that mortgage loan, they cannot buy the house, which leads to practically being prevented from buying a house. In this study, they wanted to see how AI impacts loan approvals for minorities, especially black people. I can send you the link later, with the names of the authors. So they started with: let's go to the historical data about loans, because loan applications have to be reported, and let's identify discrimination there. And they found a very sad reality: black people are 54% less likely to get their loans approved. That is just the situation right now in the US; they are more discriminated against than any other group. And so then the authors of the paper said, great — or not great, terrible — but this is the situation that we are in now.
Okay, now let's take an AI system, a generic loan-decision AI system, and run it on the same data; let's see what kind of discrimination we get. The result is that it's much worse: now black people were 67% less likely to get their loans approved. And so imagine you're a loan company and you want to become more efficient, so you decide, well, let's get this AI system, and you apply it without thinking. You just apply this AI system, because you want to save money, whatever, and you're just going to get much worse discrimination. So the question of responsibility is: when you buy this AI system and you decide to use it, sit down and think about the possible negative implications of doing so. Some of the questions we touched on earlier, when we talked about ethics, come up here: what are the desired outcomes? Which groups in the population would we like to equalize? Do we care about black people, and also Hispanic people, and also Indigenous people, and also European people? Who are the groups we are going to pay attention to, and what are the thresholds? Do we want equal approval rates for everyone? Do we want equal loan amounts for everyone? These are the complexities that the company needs to think about, and that is where the values come to the surface: you're going to need to decide, what do we mean by fairness? Who do we care about? What aspects do we care about? But that is a question that comes after first realizing that, having chosen to use AI, you now have this responsibility to sit down and make some difficult decisions. And so that's the distinction I'm making: the difficult ethical decisions come after realizing that you have this responsibility. Some variation is inevitable in how we're going to answer those questions; it's a political question.
But the thing that we should not compromise on is expecting companies to ask those questions, make those difficult choices, and be transparent about them.
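[Editor's aside: the comparison Dotan describes, looking at approval rates per group and how far apart they are, can be made concrete with a few lines of Python. This is a minimal sketch for illustration only; the group names and decision data below are invented, not taken from the study discussed in the episode.]

```python
# Illustrative sketch of a group-fairness check on loan decisions.
# All data here is made up for demonstration purposes.

def approval_rates(decisions):
    """decisions: dict mapping group name -> list of booleans (True = approved).
    Returns each group's approval rate."""
    return {group: sum(d) / len(d) for group, d in decisions.items()}

def disparity_ratios(rates, reference):
    """Ratio of each group's approval rate to the reference group's rate.
    A ratio well below 1.0 flags a possible disparate impact."""
    return {group: rate / rates[reference] for group, rate in rates.items()}

# Hypothetical decisions for two groups (8 applicants each):
decisions = {
    "group_a": [True, True, True, False, True, True, True, False],    # 6/8 approved
    "group_b": [True, False, False, True, False, False, True, False], # 3/8 approved
}

rates = approval_rates(decisions)                       # group_a: 0.75, group_b: 0.375
ratios = disparity_ratios(rates, reference="group_a")   # group_b: 0.5
```

A ratio well below 1.0 (US regulators sometimes use 0.8, the "four-fifths rule," as a rough screening threshold) would flag a disparity worth investigating. Which groups to compare, which metric to use, and which thresholds to accept are exactly the value judgments Dotan says companies must make explicitly and transparently.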
Sean Martin 33:06
Yeah, possibly. Certainly political, but perhaps even more economical. I'm just thinking, unless you're a nonprofit, and perhaps even nonprofits cross the line to care only about how much money they can get to do whatever they're doing, I find it hard to believe that any company is going to have any attribute or goal that sits above making money, so they can stay in business and make more money. Underneath that they may have some altruistic motives and things like that. So I put that out there first. And then, even if second is to be moral and ethical and have a good view toward society and humanity... let's just take the food industry, right? We don't always know what's in food, though we have some labels for some things, and for medications. And I know it's very hard to get a view into what's in the stuff that we're consuming; obviously, regulation has come in to put labels on food products to give us some insight there. We've talked about this before, Marco, in the context of IoT: what's in our IoT networks and devices? Now we're talking software and AI, same thing here. So even just a statement that says, we want to drive equal outcomes in the decisions this AI is making, in this instance for loans, in this instance for job hiring, in this instance for school admissions, whatever it is, letting a group in or a group out... I don't know if we're going to have visibility into that unless somebody audits it. So I guess I want to go back to the example you mentioned. I presume that technology was built by someone, perhaps as a service that's used by other loan companies that then offer loans, and then those loans probably get sold. There's an ecosystem here. And of course we can't forget the people using it, applying for the loans. So within that ecosystem, who, how, and where does governance take place?
So I want to talk about the flow of governance: where does it fit in, and how do we keep an eye on this stuff if we can't see it?
Ravit Dotan 35:42
Excellent question. I'm hearing three topics that I want to address. Incentives is one of the topics we talked about. In the business world, I hear that people care about money; it's a rumor that I've heard. And there's this perception that companies are not going to want to think about responsibility, because it is in tension with their profits, and so that's why they're not going to be incentivized to do this. I'll start there, and then I'll get to where the governance happens and the role of various actors in the ecosystem, because, for me, I really focus on this incentive question and on how it brings different stakeholders together in an ecosystem. Okay, so I think responsibility sometimes conflicts with profitability, but not always. And that is a point that is important to understand: developing AI responsibly can actually be a competitive advantage for a company. I'll just mention five reasons briefly, and then we can get into them later. Let's start with compliance. We already have a lot of laws that AI is subject to, and they're not enforced as much as they could be, but I think it's just a matter of time. And we also have laws that are coming specifically for AI. We don't have too many of them yet, but it's increasing, and again, it's just a matter of time. So if a company is not prepared for that, it's just going to cost more money to prepare for it later. So compliance is one thing. And of course, when I say prepare, I also mean pay for lawsuits, which is already happening. A second really obvious thing is reputation. It really does matter. First of all, no one wants their name in the news with some AI scandal. But it also impacts adoption of the product, because people are more inclined to purchase from and be more loyal to brands that are ethical.
Reputation also influences talent acquisition, which is especially important to startups, because sometimes they feel they can't offer the same salaries, so they have trouble attracting talent. We have enough statistics showing that the ethical perception of the leadership really matters for talent. And then I'll also mention the quality of the product. Developing AI responsibly can be really helpful for having a better product. For example, if you think about fairness, what does that mean? Typically, it would mean that you want the product to work equally well for more user profiles. Well, then you have a better product that works for more people. Or if you're thinking about transparency and explainability and those kinds of things, it gives the company more information about its own product, a better understanding of what its own product is doing, so now it can improve the product. Having said all of that, sometimes responsibility does conflict with profit, and yeah, we do have to make difficult decisions, which brings us to incentives in the ecosystem. Some of the low-hanging fruit are companies and investors who already realize that sometimes we do have to make these difficult choices, and that's what we're going to do, especially the ESG movement. For those who are not familiar, ESG is an acronym for environmental, social, and governance. So today we have a lot of companies and investors who think that, yes, we do want to make profits, but also we don't want to destroy the world, and so we're looking for that balance. Since we already have this group that accepts that profit should be made while minimizing harm, it's only a question of: let's now also add AI. If you're going to do ESG, fantastic, keep that going, but also think about AI in that context.
Another really important group that we have, a smaller group, is called impact investors. They, in a way, go the extra mile. They say, not only do we want to minimize harm, but we also want to have positive social impact. We do want to make a profit, yes, but we want that impact alongside the financial returns. And so when I'm thinking of the tech ecosystem, some actors are going to do what they're going to do; we're not going to be able to reach them at this point. But that is okay, because there are many others who are willing to make those difficult choices when it comes to other things, such as sustainability. That's one of the places where I'm learning from sustainability, because in the ESG movement, diversity and inclusion and climate are things that are already highlighted. So let's just add AI. Back to this question of where governance sits and how the different players in the ecosystem play together. Here's an ideal situation, how we would like things to work. It starts with a tech company. The tech company is developing an AI and looking for an investor. The investor is asking the tech company questions as part of the due diligence. So when the investor is deciding whether to invest in the company, the investor is also evaluating the responsibility of the governance in that company. The investor is asking: what does the company even know about ethics? What does it actually do? And what kind of accountability structures does it have? So the investor diagnoses where things stand and uses that information to inform the investment decision. Maybe it's so bad that the investor says no, and they say to the company: actually, you're too much of a risk. We evaluated your governance of AI, we also evaluated the risk of your technology, the gap is too big, it's risky, and you're doing nothing about it. So we're not going to invest.
What does the company learn? You need to shape up when it comes to responsible AI. Now, suppose that investor does choose to invest in the company. Now they can help the company grow: they can set goals for the company, they can ask the company to report on its progress, they can introduce advisors to the company. Just like an investor would do with other things, like marketing or finance, do it with ethics as well. So where does the governance sit? It's the responsibility of the company to figure out how to govern its technology, but the investor can really help from the outside. Same goes for those who buy the AI, right? If you have a company that wants to sell a product, then as a customer, just ask questions. Where does your data come from? What do you do for fairness? And if you're not satisfied by the answers, you can either say, actually, that's not good enough, or you can say, I will only buy from you if you improve on, say, privacy, fairness, yada yada. So then again, it's the responsibility of the company, the governance is in the company, but part of the incentive comes from the outside. Same goes for the insurance company. So that's a slightly long answer to your question.
Sean Martin 43:35
That's good. We could probably do a whole episode on that by itself. But...
Marco Ciappelli 43:39
I think we're getting close to calling it off, but I don't want to, because I still have to quote the Wizard of Oz. I know, I was just thinking, we'll find a way. And a lot of us know that there's a lot of ethics in there, too, in the technology of the wizard. I mean, this is a never-ending conversation, and I think there are a lot of valuable points to make people think. One of the things that you just said, I think, is really important, and the bottom line, I think, is that we cannot always think it's either perfection or nothing, because otherwise we're not going to do it. So at a minimum, evaluate things. I'm a big fan of, you know, tech for good: you make money, you run a company for good, make money, do the right thing, okay? It's a good balance. And I think, as you say, investors that come in with that mindset can really help. And with that in mind, we can look at a lot of things that will be more specific for each different industry, to connect with Sean's question about the general vision versus the applied. Or do we stop? Do we use AI to do research, but not to actually make a decision? That would be another good thing to do: give us all the tools, but then humans are going to make the decision. But is the human really less or more ethical than the AI, once it's fed the right data? It's a never-ending question that we have here. But at the same time, I do think that there are certain areas, certain industries, where AI is already making a difference, and we need to appreciate that. It's saving lives. It's searching for, you know, cancers. I drive my car, but I appreciate the fact that if it finds me distracted, it's going to brake. Not that I will let it make every decision, but helping. Yeah, that's where that balance can be. At least this is me, hoping also to have you back soon.
And to keep going with this conversation, because I feel like there are so many more places we should go. So...
Ravit Dotan 46:23
Yes, like the Land of Oz. Exactly.
Marco Ciappelli 46:28
There is not just one yellow,
Sean Martin 46:30
Sticking out from under the house.
Marco Ciappelli 46:34
And, you know, that's a very simplistic way to put it, just follow that yellow brick road. But there are so many different yellow brick roads here that we can take, and it's not just one, that's for sure. Well, I think we need to commit to a follow-up conversation, if you're up for that. I would absolutely love to. Yeah, maybe we pick a more specific industry and we dissect that one. I know we went into cars, we went into a few other areas, but there's so much more to...
Ravit Dotan 47:14
Yeah, and that is my approach also in my work. I like to focus on specific sectors. Two sectors that I've focused on are the world of fintech, so financial services that are using AI, which is why I have this mortgage example in mind. And then another sector that I've been focusing on is generative AI. Generative AI is like ChatGPT or DALL-E; it's AI that generates content, such as texts and images, and also manipulates content. And both of those industries have fascinating questions. Yeah.
Sean Martin 47:55
Throughout this whole thing, I think, Marco, at one point you said "good and evil." And I don't know if it's even that black and white, one and zero. If it's not good, does that make it evil? If it's not evil, does that make it good? Because to me, it's an advantage or a disadvantage, and who's using it, and against whom? That, to me, is the question I have.
Ravit Dotan 48:24
Yeah, or even that. I think a lot of it is really about these unintended consequences, because the vast majority of people do not go to work every day saying, "Today I'm going to discriminate a little bit." No one says that. "Today, I look forward to coming to the office so that I can violate some privacy." That's not how it works. How it works is...
Sean Martin 48:49
...that everyone wants to get further with less effort. And...
Ravit Dotan 48:55
Yeah, yeah, and that is exactly, to me, the problem that needs to be tackled. First of all, we have to realize that with AI the potential for harm is humongous, which is why we really need to be on our toes. And also, there isn't as much tension between developing AI responsibly and just having a better product as people think. I think there are some modifications that are really not that big of a deal, to me.
Marco Ciappelli 49:36
Yeah, I totally agree.
Sean Martin 49:38
Agreed. And my brain is gone.
Marco Ciappelli 49:41
I know. So I'm going to draw the line here and invite everybody to stay tuned and to catch up on that episode I mentioned on Redefining Society, which you can find at itspmagazine.com. And of course, this one is going to be in Redefining Technology. And Sean, we always have so many issues deciding, does this go to Redefining Technology or Redefining Society? Because the bottom line is that it's very intertwined; it's a synergy of the two. And I'm going to close with this, and maybe you can make some comments if you agree or disagree: maybe this whole conversation about technology and artificial intelligence and ethics is an opportunity we should take, and we are going to understand our human condition better, because I think we've never spoken about ethics as much as we're doing now, when we look at artificial intelligence. So...
Ravit Dotan 50:37
I agree, and as a person with a philosophy background, I think this whole debate about AI is an opportunity for philosophers to think about how they can connect with issues that come up in the world, because there's a tendency in academia to insulate, to isolate, sorry. And suddenly, ethics is becoming a topic of conversation for a lot of people, and a lot of philosophical questions are coming to the fore.
Marco Ciappelli 51:14
Yep, we'll leave it right there, for people that are not philosophers to kind of think about these things. And that will...
Sean Martin 51:21
There are robots that are not philosophers.
Marco Ciappelli 51:24
And aliens. I mean, I can't wait to have an alien on this podcast and see what they think about their ethics and their morality.
Sean Martin 51:32
Or other channels might do that. You never know.
Marco Ciappelli 51:34
But that's, yeah, that's true. But that's a topic for another time. So thanks, everybody. There will be notes, and Ravit, if you can share some of those resources with us, we'll put them in the notes so people can check them out. And of course, people can check out what you do, connect with you on social media, and maybe ask you a question that maybe you have answered, or maybe not. So stay tuned, and please come back again. We would really appreciate it.
Ravit Dotan 51:59
Thank you so much for having me. Bye, everyone.
Marco Ciappelli 52:02
Bye bye!