
AI Ethics: AI Guardian's Responsible AI Leaders Series

Charting the Course of AI Ethics: Insights from Abhinav Raghunathan on Building a Safer AI Landscape


Watch the full video or read the recap or transcript below. Check out the other interviews in the Responsible AI Leaders Series here.





Interview Recap

(Summarized using ChatGPT)

In this insightful discussion, AI Guardian’s Chris Hackney delves into the challenges and necessities of governing AI technologies with Abhinav Raghunathan, founder of the Ethical AI Database (EAIDB) and a data scientist at Vanguard. The conversation explores the differentiation between companies that use AI to solve problems and those addressing the problems inherent in AI.

Raghunathan emphasizes the urgency of implementing ethical and responsible practices in AI development. He compares AI technology to driving a car - stressing the importance of having 'brakes' or guardrails to prevent potential societal harm. The EAIDB was established to research and categorize companies actively working towards constraining AI's potentially rampant effects. Initially tracking around 100 companies, the database now follows nearly 300 organizations dedicated to promoting responsible, transparent, and safe AI.

How do we make sure that there are brakes set up at every point of the pipeline? And that, to me, is governance.

Raghunathan expresses concern over the misuse of advanced AI technologies, such as generative AI, for inappropriate applications. He cites examples where these technologies have been used in contexts that do not require their complexity, leading to unintended and potentially harmful consequences. This reckless adoption, driven by a fear of missing out, could lead to chaos and a rapid escalation of problems.


Discussing geographical variations, Raghunathan notes that European countries prioritize ethical AI more than the profit-driven approach prevalent in the U.S. However, he anticipates upcoming legislation in the U.S. that may incentivize ethical AI practices. Developing countries are also approaching AI adoption with a focus on responsibility, having observed the repercussions of irresponsible development elsewhere.


The conversation also touches upon AI governance. Raghunathan defines it as a system of checks and balances at various stages of the AI development pipeline, including data governance, model governance, and implementation. These measures ensure safety, trustworthiness, and compliance. He stresses the importance of building organizational policies that define what is safe and trustworthy, involving diverse perspectives to avoid echo chambers.

I think at this stage it comes down to one thing, and that's just build your organizational policy for what you define as safe and what you define as trustworthy.

The interview concludes with Raghunathan's forecast for AI adoption in the coming year. He predicts a continued scramble to adopt AI but warns of potential black swan events due to the unsustainable pace of current developments. Finally, Raghunathan shares his vision for the Ethical AI Database, aiming to expand its influence to developing nations and continue advocating for safer AI practices.



Interview Transcript

(edited for length and clarity)


Hackney: I'm excited to have Abhinav Raghunathan here with us today. It's a pleasure to have you here, and I appreciate your time. The reason we have Abhi here is that he is the founder of the Ethical AI Database (EAIDB), alongside his day job as a data scientist at Vanguard. The Ethical AI Database hit our radar because it's one of the interesting efforts to provide transparency into the developing areas of AI and the vendors and partners trying to influence it, such as ourselves at AI Guardian. What was really interesting to me was how you described the Ethical AI Database - in particular, the differentiation between companies that solve problems with AI and companies that solve the problems of AI. Can you talk about that a little bit from your perspective? And what was the impetus behind the EAIDB?

 

Raghunathan: AI is not new. ML is not new. There have been years and years of track records in which ML has caused real damage. And that's only carrying over into the current day and age with AI and all these more powerful technologies. So, the thought behind it [the EAIDB] was … you can't really drive a car unless you know that there are brakes. You can't accelerate AI unless you know that there's some kind of guardrail. There have to be some brakes on the system so that everything isn't going to just melt and collapse. And I think that system didn't get much focus when AI and all this generative AI stuff came out. Especially now, given that the stakes are a lot higher, there's even more need. An uncontrollable technology that interacts with society on a near-daily basis … it's scary. Part of the effort with the Ethical AI Database was to categorize and research the companies that were actively trying to constrain the rampant effects of AI and ML as we know it. The effort started out really small. The space itself is still new. When we started, there were only around 100 companies that were actively in this space. We're currently tracking close to 300 companies that are all trying to enable more responsible, more transparent, and safe AI. And I think that's a testament to how much it's needed in the current day and age. And it's a testament to how difficult the problem really is.


Hackney: That makes a ton of sense. I've been around SaaS for a long time, and I remember there was a company out there called Luma who would do these Luma-scapes. It would start in a SaaS space with like 10 companies, and then 100, and then eventually you'd just get to this map that looks like a galaxy of stars. I assume we'll get to the same place with AI governance, and other areas as well.


You mentioned the example of a car without brakes… we've used that example as well. It's an amazing invention - the automobile - but early on, faulty brakes, no seatbelts, no safety devices … you had a lot of accidents along the way. 


I'm curious, as someone who's worked in these spaces, what are you most worried about as it relates to AI nowadays? 


Raghunathan: I think this is one that I probably share with a lot of other people as well. I think the most dangerous thing is applying generative AI and all these really complex technologies to problems that don't require them. The simplest example is a dealership that recently announced that they're going to start selling cars through a ChatGPT-enabled interface. And then people found out how to get the chat interface selling cars for $1, just by prompt attacking and things like that. The question is, did you really need ChatGPT to sell people cars? Is that really a proper use case for that technology? I've talked to a lot of consulting firms that work on the same problem - advising companies on how to use generative AI - and their biggest fears are the same. Companies are still not sure what appropriate use cases are for generative AI. I think that's a very scary thing, because if you don't know what it's supposed to be used for, you tend to think, "let me use it for everything I can." It's a "fear of missing out" problem, too. Then you get a lot of chaos along the way. I feel like, given the pace of technological innovation in that space, when you start throwing things at a problem, especially generative AI, things can cascade very quickly. So, that's my biggest nightmare. And it's already happened a little bit. I'm just hoping it doesn't continue the way it is currently.


Hackney: So, you don't think fast food companies doing pilots of ChatGPT at the drive-thru window are the right exercise?


Raghunathan: No. I think that's a really interesting use case from a research perspective, but I would never think that would work in practice. But hopefully they use one of the vendors in the Ethical AI Database to hold their whole system together.

 

Hackney: What do you anticipate is the pace of generative AI when you think about how generative AI is a part of our lives, say by the end of this year? Where do you think this is headed and the pace by which it's going to be there?

 

Raghunathan: I think when we talk about the generative AI hype cycle, we're at a point now where the technology is still advancing very rapidly, but all of the innovation is on the extremely technical side. We won't see something like ChatGPT - some new version of ChatGPT - coming out and completely blowing everyone away. I think everything that comes out this year is going to be a more nuanced version of that. People are going to have to level-set their expectations on generative AI this year. Companies are already coming to that realization. People are just figuring out that it's a lot harder to actually build these systems properly than it seemed at first. You can't just connect to OpenAI and solve all your problems, right? It's not like that.


Hackney: You just killed half the business plans of Y Combinator with that comment.

 

Raghunathan: I build generative AI for Vanguard as my day job (completely separate from the database) where I’m a practical applied person, a student of generative AI. Then, when it's nighttime, I come back and look at everything holistically and I'm like, “there's no way that this should be used the way it is.” So, it's an interesting place to be. But I do think that companies in general (I'm not speaking for Vanguard, completely personal opinion), are realizing it takes a lot of effort and a lot of people - a whole village - to raise generative AI within an organization.


Hackney: That makes perfect sense. So let me ask you about the ethical side. Obviously, creating a landscape of companies through the Ethical AI Database helps facilitate the ethical adoption of AI. But when you talk to organizations, how are you seeing them approach and prioritize the ethical component of AI at this stage? Are they taking it seriously yet? Are they thinking about it? As you go out with this landscape, what are your conversations like on this subject?


Raghunathan: For the most part, it depends on where you are geographically. You'll notice that in Europe, the prioritization of doing things properly is definitely up there. The EU is always a leader in policy, legislation, and regulation. The U.S. tends to be more profit-focused. It's not the easiest thing to demonstrate that being responsible is profitable. But in Europe, a lot of the focus is on how do we build these systems properly. Think about the Nordics and the Nordic culture. The whole Nordic system is about society. And in that regard, they treat this very seriously. But organizations in the U.S. … there's a lot of legislation that's coming. And I think that's going to be the incentive that pushes corporations to start caring about it. There's no intrinsic motivation in the U.S., but there will be extrinsic motivation. I think that's going to be the turning point. Europe is a little bit more of a balance.


And what's interesting is, a lot of developing markets, like Australia and Asia, are building in tandem with being responsible, because they've seen what building without being responsible can do. They're treating these things with seriousness. So I think it's a mix, and it depends on where the corporation really sits.


Hackney: It's an interesting dichotomy for me, looking at the space. You've got Europe coming out of the gate strong. Obviously, certain states in the U.S., like California, will move early and get ahead, like they have with data privacy. But it's interesting. Every one of these pieces of legislation is focused on what we see as the societal harms, like you mentioned, but every one of them carves out government and military use from its regulation. And I don't know if that's just because they know they can't actually regulate it or not. But there's a distinct fear of a company going sideways and creating Skynet. And there's a distinct fear of every government in the world trying this at the same time. Even OpenAI last week pulled back their language around banning military or government use cases. (They adjusted their terms of service to pull some of that language out.)


So why do you think ethical and responsible AI adoption is critical for most organizations right now? They're obviously running forward as fast as they can. But if you're going in front of them, and you're saying, “Hey, here's the real benefits of it. I know you want to run fast, you want to go and break things the Silicon Valley Way.” When you talk to them, what are the benefits that you think are important here, as they adopt AI?

There's very little work being done by the big AI providers on that trustworthiness aspect. That's not a part that is really profitable to them, and so they don't do it.

Raghunathan: I think it comes down to customer trust, and literally just trust. I think no business can ever operate without trust from consumers. And I think when you run any kind of generative AI application, you run the risk of something happening that makes customers lose trust in your product. And even if you're internal, even if you're at a massive institution and you're trying to sell a product to internal teams, you have to demonstrate that there's a reason to trust the product. And I think that if you use generative AI, there's a very high likelihood that you can't trust the output. If you just plug and chug with ChatGPT … "there's a hallucination right there. It's marginal, but it's still there." And the question is, how do we know? How can we tell? The whole system of trust around generative AI has not been built. There's very little work being done by the big AI providers on that trustworthiness aspect. That's not a part that is really profitable to them, and so they don't do it. You'll see a lot of companies in the Ethical AI Database that have added to their product mix or pivoted slightly to compensate for generative AI applications. They have added capabilities to provide that trust to the Gen AI pipeline, because it doesn't exist currently. So that's why I think it's important. I mean, if you can't trust the product, if you can't trust the output, then what's the point of the product, right? It's going to die if you try to sell it internally. So that's the biggest thing. And from a profit perspective, in general, you should try to be responsible regardless, because it's the right thing to do. That's intrinsic motivation. We've talked about that already.

  

Hackney: I love that answer about trust because I agree with you 100%. I've been through a few of these cycles in tech, and one of the oldest ones is nuclear power. You had, I think, a perfect parallel on trust with nuclear power in the U.S. You had an event with Three Mile Island that scared the country back in the 70s. Looked at logically, nuclear power is one of the safest, cleanest energy sources you could employ. You've seen countries like France employ it for almost all of their grid. But we don't do it in the U.S. because as a country we got scared of it and we walked away from it. My biggest fear for AI adoption is not whether we move fast enough, but that we move too fast and don't think about what happens if there's a Three Mile Island event in our space - a black swan event that scares our nation away from it. And I think it comes down to trust. You've got this reservoir of trust that can get eroded really quickly. There's really responsible regulation out there, but there could be stuff that shuts it down because our country or the world gets scared by it.


So, how do you think about AI governance or responsible AI? How do you define it? How do you think about this, as it applies to ethical use cases?

... the way I define it goes back to that brake system. If there's a way to place trust, a way to place checks and balances at any point during the pipeline - some sort of management, some sort of governance.

Raghunathan: It is very hard to define what AI governance really is, because there are tons of different stages. That whole pipeline has governance attached to it in various locations. So, for example, there's data governance, there's model governance, there's general governance after implementation - there's a whole spectrum of companies that are … but the way I define it goes back to that brake system. If there's a way to place trust, a way to place checks and balances at any point during the pipeline - some sort of management, some sort of governance. Speaking from the Ethical AI Database perspective, you'll see companies that try to govern your data - what data goes into a model? How safe is that data? How much PII is in that data? Can we redact it? Can we treat it differently? All that stuff. And also, where are the touch points of your data across your organization? How much of that is dangerous and sensitive data? Then you get to model governance … models can be very tricky, can oftentimes be biased, and can make bad decisions. So the model governance pieces include … given this model, how do I make sure that this is a safe system? How do I make sure it's making the right decisions? Is it trustworthy? Is it safe? All that stuff. And then there's the implementation piece that ties everything together … given my data position, given my model, how compliant am I? How safe am I on a very holistic basis across an enterprise? But all of this is just about - how do we make sure that there are brakes? How do we make sure that there are brakes set up at every point of the pipeline? And that, to me, is governance. That, to me, is knowing where exactly the brakes are across your enterprise. And knowing that they are there is the important part.

 

Hackney: That makes sense. Yesterday, the World Economic Forum AI Governance Committee came out with some good frameworks. And embedded in it, they had the "shift left" mentality - "take everything you should be doing on governance and shift it left one step." Get in earlier in the model journey with the things you think you should be doing. I think that absolutely strikes a chord right now.


When you think about AI governance efforts, I think so many companies are struggling with what to do now, how to get started on it. What do you think companies should be implementing now, in this regard? Should they be focusing at the model level? The operational level? Both? And specifics? If you have to tell them, “hey, at least do these three things,” what do you think they should be doing at this point?


Raghunathan: I think at this stage it comes down to one thing, and that's just build your organizational policy for what you define as safe and what you define as trustworthy. And don't make it an echo chamber within your own organization - bring in people from outside, diverse perspectives. Figure out what it means for your company to be safe and trustworthy. Determining how that's done is a different process. But I think it's important to figure out how exactly you want your organization to position itself, and what things are necessary to make sure that you instill trust in your consumers. And I think that itself is a very challenging task, because it requires commitment, first and foremost. And so that's what I'd say. In response to the EU AI Act, that's what people are doing in Europe now, right? Corporations are looking at the act as an incentive to say, "Okay, here's what we need to do. We can start forming a whole framework around how our organization is going to operate." I'm not an expert in frameworks or anything like that, but at the base level, if you're an organization that uses AI on a wide scale, you have to know what you want to do in order to make that first step toward implementing solutions that solve your governance problems. That's what I would say.


Hackney: I did like what you said earlier, too, about the use cases. You have to determine what's use-case worthy or not - what are the things we should be applying it to, or not, at this stage, given the inherent complexities. It's honestly dangerous how quickly it amplifies certain issues. That makes a lot of sense.

 You can't do GRC if you don't know what your internal policy is going to be around GRC. And I think it's one of the prerequisites to doing GRC properly.

Raghunathan: One more point there. The way a lot of these AI governance tools work is they give you a way to comply with your own framework. (And with the larger regulatory scale.) If you don't have that internal framework, there's nothing to really automate that compliance for. You can't do GRC if you don't know what your internal policy is going to be around GRC. And I think it's one of the prerequisites to doing GRC properly.

 

Hackney: We do say, “here's five things you should do.” And policies are one of those. And it's funny how many of them, when we talk to partners - their only policy is that “thou shalt not” policy right now. Like, “don't do.” That's all they've got. And so other business leaders are coming to the risk management groups and going alright, but I've got … forget models … just “I have a third party vendor that's now got AI in it. Are we allowed to turn that on?” And there's no path to yes, yet. It's a whole bunch of paths to no, because they don't know how to get to yes, because they don't have a policy that's really defined. 


We've mentioned the EU AI Act and some other things. Do you have a viewpoint on what role legal and regulatory bodies should be playing in AI at this point? 

  

Raghunathan: I think it's going to take a long time for regulatory agencies to ramp up on the space. It's a very abstract technology, and really understanding how it works is a big part of that. I don't expect most regulatory authorities to step in in a big way at this point. I was very surprised at the pace of the EU AI Act. I don't think the U.S. is going to be quite as fast to do something similar, because it requires scaling up on that knowledge first, and I think that's a difficult task. I do think legal is starting to get more and more involved. We saw that New York Times lawsuit that's still ongoing. And there are a lot of other lawsuits around IP and copyrights. That whole battle is still being fought. A lot of the more successful companies in EAIDB, at least in the governance aspect, do have some kind of legal background as well. I think about Enzyme Technologies out of Ireland; there are a couple of others that have a legal background. And that helps in a lot of ways, right? Because companies, at the end of the day, want to know how to be compliant and avoid lawsuits. And I think that is a very incentivizing notion for a lot of companies.


Hackney: So here's my question - how do they avoid lawsuits? The regulations are here, but the case law is all gray area, and it's not going to be settled for years. How do you navigate that right now?

But it's about knowing how to defend yourself: given the things you've tried to do, can you demonstrate that you actively tried to mitigate the disasters they're mentioning in the lawsuit?

Raghunathan: That's a good question. I wish I had an answer - that's the million-dollar question. Someone with a legal background might be able to answer that one. But I think you're right - the law is a gray area. But it's about knowing how to defend yourself: given the things you've tried to do, can you demonstrate that you actively tried to mitigate the disasters they're mentioning in the lawsuit? I just feel like having a diverse set of perspectives when you do GRC [is important]. When you do build an organization using AI, legal shouldn't be one of those concerns, because that's one of the larger regulatory bodies, if you will, that plays a part in the whole thing.


Hackney: I think you're 100% right on the defensibility piece. Courts in the U.S. have historically been deferential when they know there's no established case law; they look and ask, "Did you try to do the right thing as you went along, even though the legal system hadn't caught up to this new innovation?" And if you can do that through AI governance - you can show that you set these policies, laid out approvals properly, did testing as you should. Even if Getty comes and sues you for IP issues, or a music artist does, you can say, "I was trying to act responsibly in an environment where the rules weren't set." That gives you a lot of leeway sometimes early in that process.


It's interesting what you say on the U.S. I'm divided on the U.S. at this point. At a federal level, you're probably not going to see a lot, especially with an election coming - beyond what can be done through the executive order. But I do think you're going to see states rise up, particularly California, and push just like they did with CCPA. Emissions regulation is a great example out of California. They dictated car emission standards back in the 60s and 70s. And they were like, "We don't know combustion engines. We don't know how to get there. But we're just going to say it has to be x by x date, and then let the car companies figure it out." I wouldn't be surprised if you see some of that here, where it's like, "I don't know how to figure out transparency for LLMs or their judgments either. But we're going to throw down that gauntlet and make them conform to it."


Raghunathan: It's funny you mention that. I think it's a lot easier to give a quantitative measurement of what safe means to the EPA for cars, because there's an amount of emission that's measurable. How do you measure transparency for a large language model? I think it's a really tricky problem. But maybe there's a way to quantify it - maybe money spent on it? But that doesn't seem like a good option …


Hackney: When it comes to regulation, I don't know if there's ever going to be a perfect way. But you can see some big hammers coming in some regard. 


What's your forecast for the year ahead on AI adoption and ethical practices? Do you see a really fast year, one with bumps in the road? A black swan event coming? When you think about this, where do you see the year ahead?

I think this year might be one where we are going to see a couple of black swans. And I say that because the pace at which we're going cannot be sustained forever without something breaking.

Raghunathan: I think everyone's still scrambling to adopt AI. That's always going to be the case. I think this year might be one where we are going to see a couple of black swans. And I say that because the pace at which we're going cannot be sustained forever without something breaking. And I think something eventually will break. We've already seen little tiny breaks in AI - not with generative AI quite yet, but you can kind of feel it coming. It's only going to take one big company to slip up, and then all of a sudden it's going to be a huge fiasco. And Google, Facebook - they haven't had the best track records with these kinds of things. When they started using ML, there were tons and tons of examples of how their systems broke on a regular basis. But because everything wasn't under the microscope at that time, they got away with a lot of things. And I think that's not going to be the case with generative AI. So I think this year, everyone's trying to scale up on AI and Gen AI because of the whole "fear of missing out" idea. But I think there will be a point at which something bad happens. Unfortunately, I think it's going to take that to really turn the tables on how organizations look at how they're governing their own software and systems. If you look at things like the AI Incident Database, you'll see that the number of incidents increased pretty significantly from last year to this year. There have been more cases of AI breaking, and more cases of traditional ML breaking as well. So everything has a tipping point. And I think that tipping point is coming very soon.


Hackney: The funny thing about black swan events is you don't know what they are. That's why they're black swans, but you actually can feel the odds of it going up over time. And it feels like that right now. 


Abhi, last question for you … what's your vision for the Ethical AI Database this year? What do you hope to get to in the year ahead?

 

Raghunathan: Good question. It's still an evolving project. In the last couple of years, I've been consulting a lot with governments to profile the companies in their jurisdictions and make sure that they're aware of who's actively doing work in this space. I'd like to continue doing that. And spreading out to more developing nations is a part of the project. I want to get more involved in African initiatives that are coming out, Australian initiatives, South Asian initiatives, things like that. There are a lot of developing markets, and I think that if we enable and build safer AI systems as they're scaling up, that's going to have a very dramatic effect later down the road. So that's ideally what I'd like to do. I think it's a hard task, because while you're scaling on AI, AI is all you can think about. And the whole "build fast, break things" mentality - that's what we're trying to change. So, I think that's what's in store. And, of course, we're going to keep releasing reports, tracking more companies, updates, all that stuff. So stay tuned.

 

Hackney: That type of inclusiveness, and particularly the material impact it can have on those countries, is ginormous. So I can see exactly why you'd be focused there. Abhi, I really appreciate you coming on today and joining the discussion. All the best in building out the Ethical AI Database in the year ahead.


Raghunathan: Thanks. You too. Should be a good one for us. Thank you.


