
The Criticality of AI Governance for Responsible AI: Perspective from Christopher Stevens

Watch the full video or read the transcript below (edited for length and clarity using ChatGPT). Check out the other interviews in the Responsible AI Leaders Series here.





Chris Hackney: Welcome to AI Guardian's Responsible AI Leadership Series. Today I have the really good fortune of bringing to this series Christopher Stevens. He is a privacy and cyber risk analyst at King & Spalding. He's also an adjunct professor at Drexel University's College of Computing and Informatics, where he teaches courses in data protection, information privacy and information security. He has a long, long, distinguished record in these spaces that goes beyond that, but as a quick intro, I think that works. Chris, welcome to this edition. I'm glad to have you here.


Christopher Stevens: Chris, thank you. First and foremost, I want to thank you and AI Guardian for giving me this opportunity to talk about 2 topics that are near and dear to my heart - privacy and AI. 


Hackney: I think the nexus of those 2 coming together is going to be a really interesting area to talk about now. How are these things coming together? You've got a ton of legal expertise in both areas, and you've got your feet in both of them. Where are you seeing flash points between the 2 spaces today? Where do you see this coming together and getting interesting?

 
We're seeing companies going back and including privacy officers - the CPO and others - in these AI adoptions because it's clear that not only do they impact individuals, they impact organizations from a privacy perspective.
 

Stevens: I think companies today have rushed to embrace AI technologies to solve a business problem. Then later, in retrospect, they're finding that there are ramifications for that quick rush to these technologies. We're seeing companies going back and including privacy officers - the CPO and others - in these AI adoptions because it's clear that not only do they impact individuals, they impact organizations from a privacy perspective. One of the things we have to look at now, as we're using these training sets, these data sets, is: where do they come from? What type of data are we looking at? Is it personally identifiable information? Because then you have to go back and look at privacy laws and their principles - transparency, accountability. We can go to the AI side - explainability, interpretability. What happens if I am a person that says, "I don't consent to you using my personal information in your training data sets"? Do you break the algorithm? Do you break the model? It's not as simple as "I send a letter to a company and say, 'Hey, I don't like your practices. I wanna opt out of your sharing and selling my information.'" You can't readily do that with AI.


Hackney: That's an interesting flash point we're seeing - retroactive contract issues. For example: I'm building a private AI model … let's take our customer data and put it in there. Someone finally raises their hand and asks, from a legal perspective, "Do we have the consent of our customers to use their data this way?" And I think, from a personal perspective with PII, you're gonna see the same thing, right? We've gotten consent to have this information, but not necessarily for this use case. And what does that mean for us having to go back [and modify the AI model] at this point?


Do you see folks just nibbling around these issues right now - bringing in the chief privacy officer or chief data officer? Or do you see them trying to learn at this stage?


Stevens: I think they're doing both. I think mature organizations have realized that because you have these huge, impactful data privacy and data protection laws, you've got to have that privacy expert at the table in your AI governance. And so they are quickly integrating at the executive level - they're bringing in those senior leaders. Also at the action level - they're bringing them in there as well, because we're at the point now where we almost have saturation. Generative AI has scared the heck out of all of us. Now, you made a great point when you talked about consent, but what happens if you have a generative AI model out there generating data, whether it's images or text, that's taken from authors without consent? We have the copyright issues, we have the intellectual property issues - that's becoming more and more significant. But, from my perspective, we need to look more broadly. You have AI components embedded in things like Microsoft Azure and Google's search engine, and a lot of data is being generated and used for training. But where's the consent? And not just from you and me, the average consumers - what about those individuals from a copyright perspective? Trademarks? IP? Again, we're going back and looking at consent. Recently the New York Times sued OpenAI for using its content without approval. We're going to see that more and more. You've got to have that privacy person there. 


You've done a great job since you and I first met developing this pretty awesome platform that's going to help organizations understand: 

  • What business problem am I trying to solve? 

  • What is AI gonna do for me that traditional capabilities can't do? 

  • And then going through that process of governance, risk assessments, ethical assessments, policies, procedures, guidelines and standards, and training.

You have to apply those steps as part of a pragmatic approach to AI adoption. 


And, to your original question, some people are doing it gradually, some people have learned from prior mistakes so they're using a more structured, pragmatic approach to AI adoption and governance. 

 
People have slowly started to understand the value of their data - of who they are - but I don't know if they understand the value of their digital presence and how that's represented as a persona across digital channels. - Hackney
 

Hackney: Yeah, that's absolutely fair. And it's been interesting - most of what we've talked about to date has been about inputs: data privacy as an input, what's in the training model, what the consent is. But an interesting one that's going to come up is the outputs - misrepresentation of you as a person through AI. I don't think it's one we've thought about as much. People have slowly started to understand the value of their data - of who they are - but I don't know if they understand the value of their digital presence and how that's represented as a persona across digital channels. And so, as AI comes out with outputs - visual outputs of you, representations of your PII, whatever it might be - whether it [AI] can do it, and whether it's factually correct, is going to be an interesting piece. It's something most celebrities and public figures have had to deal with for a long time, but at this pace of AI, I think everyone's going to have to deal with it. The idea that the Chris Stevens represented on the Internet is not actually who he is, because of AI, is going to get really interesting for you, me and everyone else as well. 


Stevens: You're exactly right, Chris. We've got those adversarial models out there. You can see AI is wreaking havoc with social engineering and deepfakes. And you hit on something extremely important. We talked about the model itself, the technologies. We talked about the training data, the inputs. We've talked about the system itself, the models. But the output is extremely important. And that's why you need a human presence constantly evaluating. You need to do those repeatability assessments to make sure that those outputs are consistent with the desired outcome. What we're finding, especially with generative AI, is that you have these hallucinations where it starts to walk away from the truth. And now you have output generated solely by the model that's inaccurate. Another thing you mentioned, that we don't realize, is that we become overly reliant on the technology. It must be true because the model produced it. So we don't question it until we have that human input. 


Another cogent point you made was when we talked about those inputs. I don't know if you're familiar with when Samsung used its proprietary data with these tools. It didn't realize that that data becomes a part of the global data set. Everyone sees it. And that's a no-no, Chris. That's why you've got to help them. 


Hackney: Yeah, that's why they had to do what a lot of companies have done - the "thou shalt not" policy of "don't use ChatGPT as an employee." But I think you're gonna have to get smarter over time. You have to understand how to use these tools, but in a way that doesn't bring your proprietary information into those systems where it can be used for training or other purposes.


I will give OpenAI credit - they've done a good job being explicit about the controls around that. I saw an interesting poll last week: all these companies are saying it, but when you look at global populations, almost no one believes it. And that's a hard thing to overcome - once you've lost trust on issues like that, if people don't believe it even when you say it, how do you still operate as you move forward?


Stevens: I can remember growing up and watching "2001: A Space Odyssey". Looking at "I, Robot" with VIKI. Of course we all know about Skynet from "The Terminator". There's always that fear, although we trust the technology, that somehow it is going to harm us. 


You mentioned this earlier when you were talking about some of these laws. Local Law 144 in New York City says that before you can even introduce these technologies and use them for employment hiring purposes, you've got to have a bias audit within a year of use.


Hackney: Honestly, I agree with that. Our legal system operates one way, but I think AI can be so powerful that it has to reverse what we do in the legal system. AI systems should be treated, in a lot of ways, as guilty until proven innocent. OpenAI and all these folks are going to have to learn how to be open and transparent. It can't be a black box. People have to test it as third parties. And I think you're going to see governments, through the EU AI Act and others, start to push for that because of how quickly these can scale and the impact they can have. 


Stevens: China has banned ChatGPT. They're trying to grow their own startups, and so they want you to use those instead of other technologies. 


Hackney: Let me ask you a question, since you're also so deep in the data privacy side. Over the years I've dealt with data privacy probably as much as or more than AI, and you already had a lot of companies really struggling with governance and compliance around data privacy. Now you've got AI on top of that. Where are you seeing companies, in your overall experience, doing well from a governance and compliance perspective in these areas? Where are they still struggling? What are the good and bad sides of this right now?

 
First you have to understand the law. Then you have to translate that law into your own internal policies and procedures, guidelines and standards.
 

Stevens: Let's go back to the elephant in the room - the EU GDPR in its true essence. Introduced in 2016, enforced May 25th of 2018 - and look at the struggles. Even today, 5 years after the fact, we're seeing organizations struggle to comply with it from a governance perspective. Because first you have to understand the law. Then you have to translate that law into your own internal policies and procedures, guidelines and standards. I think that some companies, especially the smaller ones, debate "do I pay for this pound of compliance, or do I just skirt the edges of that compliance until I get caught?" I think more and more companies are going to take a different stance when it comes to this AI Act, when you look at the penalties. We just made it through the trilogue; we have this provisional agreement. Germany and France said, "Hey! You can't hamper our own national industries," and so we'll have a compromise. But when they start rolling it out, look at the fines [a worked example follows this answer]:

  • 7% of global annual turnover or 35 million euros for prohibited practices

  • 3% or 15 million euros for lower infractions

  • Even just not giving the right information comes out to a significant amount - 1% or 7.5 million euros

That's gonna drive compliance. Why do you think we're looking around the globe like we did with the GDPR? Now we're seeing governments starting to pass their own laws to drive compliance. They don't want to be led by the nose by the EU AI Act. It's going to be extremely important from a governance standpoint. The big companies are already well positioned: if you've learned to comply with the GDPR and other laws, you're going to be able to comply with the EU AI Act. But it's not only the EU AI Act - you've got the proposed AI Product Liability Directive. You are going to have to look at your organization, look at the inventory, your AI technologies, their capabilities, governance - you've got to have a governance structure. And not only for the big companies. You're also looking at the small and medium-sized ones, because we have to carry them along with us. And so, you get that governance structure in place. You write a policy. You train people on the policy. You're going to have to have third-party oversight. You've got to look at the model. Look at the algorithm through its life cycle - repeatability assessments, algorithmic risk assessments and the like throughout this life cycle - if you're going to make it.
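[Editor's note: to make the scale of those penalties concrete, here is a minimal sketch of how the tiered fines Stevens lists are generally calculated - the greater of a fixed cap or a percentage of worldwide annual turnover. The tier figures are the ones quoted above; the function name and the sample turnover are purely illustrative.]

```python
# Illustrative sketch of the EU AI Act's tiered penalty structure described above.
# Tiers: 7% / EUR 35M (prohibited practices), 3% / EUR 15M (other obligations),
# 1% / EUR 7.5M (supplying incorrect information). For large companies the fine
# is generally the higher of the fixed cap and the turnover percentage.

TIERS = {
    "prohibited_practice": (0.07, 35_000_000),
    "other_obligation": (0.03, 15_000_000),
    "incorrect_information": (0.01, 7_500_000),
}

def max_fine_eur(tier: str, worldwide_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a tier and turnover (illustrative only)."""
    pct, cap = TIERS[tier]
    return max(pct * worldwide_annual_turnover_eur, cap)

# Hypothetical company with EUR 2 billion in global annual turnover:
print(f"{max_fine_eur('prohibited_practice', 2_000_000_000):,.0f}")  # 140,000,000
```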

 
You are going to have to look at your organization, look at the inventory, your AI technologies, their capabilities, governance - you've got to have a governance structure.
 

Hackney: You mentioned a lot of things we talk to our partners about - policies, basic project tracking, transparency, training, etc. What we saw with GDPR was a race to standardization. There's a reason why you see the same consent button on every website you go to. Do you see a race towards some sort of standardization? Or do you think it's going to be fragmented for a while? 


Stevens: I think it's going to be fragmented, because you have to understand it. Remember, we still don't have the final language. At least we're better off than we were before December. We know it's coming. People knew the GDPR was coming, but they took a wait-and-see approach. We're talking about 24 months, like any other EU regulation adoption. But if you are developing prohibited or high-risk AI, we're talking about as little as 6 months.


Hackney: That's the most interesting part of this. They upped the amount of the fines. They realized they wanted to be serious about this, so the fines are a sledgehammer. They copied the playbook from GDPR, which made it extraterritorial - by definition, it covers the globe. You have to pay attention no matter where you are. But what was most interesting was the timelines. The full piece goes into effect in 2 years, but it's 6 months for prohibited use cases. 6 months is almost nothing in terms of regulation. That's extraordinary. Do you think there's a reason why they're driving so quickly on the timeline here?

 
This is going to be far more impactful than the GDPR.
 

Stevens: Harm. Potential harm. Because when you go back and you look at the EU AI Act in its current form, you've got articles that talk about those high-risk systems, the general-purpose and foundational AI, and also those prohibited ones. They have to go after those systems to drive compliance; otherwise everything else falls by the wayside. That's why we have 4 tiers. So you take your prohibited systems. And then you give them 24 months to really start working towards those high-risk systems. And then, for everyone else, like any other regulation, you've got 24 months. But this is going to be far more impactful than the GDPR. It's so broad in scope, and I don't even think the European Union truly understands the impact that this Act will have on us globally.


A wall of dartboards representing conformity
Learn more about the Conformity Assessments required by the EU AI Act

Hackney: I think they have an aspiration, because the EU does not want to be ignored on this, even though most of the innovation is coming from the US and Asia. But what's interesting about that - and we've had this conversation with a lot of our partners - is, like you said, there's the prohibited and the high risk, etc. Most of the partners we talk to say, "Well, we're not doing the prohibited stuff - that's biometrics, facial scanning and other things." But what's interesting about the preliminary language is you still have to submit conformity assessments and impact assessments to prove that you're not [in those categories]. That's the whole point. We've had a lot of partners who are like, "Well, that's high-risk stuff that we don't do." I'm like, "Yeah, but you still have to show that your models aren't that." There's no free pass in the early language - there might be later. And I think what's most interesting in our conversations is most people not realizing that they still have to go through the certification gauntlet, even if they're not [high risk], to show they're not.

Person walking through an electronic doorway
Learn the essentials about Fundamental Rights Impact Assessments (FRIA), required by the EU AI Act

Stevens: Just because it's not mandated doesn't mean you shouldn't do it. When you get to minimal and limited risk, it's not mandatory, but they want you to adopt those practices voluntarily. And what's going to happen is, this is just an initial iteration. It doesn't mean that this Act won't be revised in a year or two. And so everyone should take the high-water mark. You don't have to comply with a lot of these laws, but it behooves you at least to adopt the spirit of them. You should be giving people the right to opt out of having their information sold. Dealing with these data brokers and things like that - yeah, you don't have to, but you should. We've got this impactful Act that's coming. But look around the globe. I think in your questions we looked at Bill C-27 in Canada; one component of that is the Artificial Intelligence and Data Act. Around the globe, they don't want to be left behind - they want to have a voice in this AI regulation. You're going to have to deal with it if you're a multinational organization. So what? You may be able to skirt the EU AI Act, but these other laws are built on it. So you're in a catch-22. Another great question you asked earlier was when you talked about these flash points: companies are gonna be caught in a catch-22. How do I comply with this data privacy law from the standpoint of transparency, lawfulness, fairness - all of those principles - and then with the principles laid out in the EU AI Act, or the OECD's, or Singapore's Model AI Governance Framework? How do I comply with all that? That's costly. You're already struggling to deal with compliance with these different data privacy acts. Now you've got to comply with these AI regulations, and that's why you have to start with governance. 

 
You may be able to skirt the EU AI Act, but these other laws are built on that. So, you're in a catch 22.
 

You’ve got to ask yourself - from a project management perspective - why the heck do I need AI? What business problem am I trying to solve that I'm not already addressing? And then structure that approach. You’ve got to get buy-in from your senior leadership. You’ve got to have a governance structure. Otherwise you're going to have the wild wild west of AI in your organization. 


Hackney: That's what we've seen a lot of. It's interesting. When we talk to CEOs and boards and ask them the basic questions - "Is AI important to you? Are you doing something?" - you get, "Yes, yes, of course." But then you ask, "Are you sure you know all the AI projects in your own company?" Every one of them says, "I do not know all the AI projects in my own company," and that's the frightening part, and why governance is so important. Governance can seem complicated at some level, but at a basic level, just asking a question like that is governance. Do you have your eyes and the right people looking at the things that could either be great for your company or terrible, depending on how you implement them? 


Stevens: And the shocking part is the disconnect. You want this technology because it gives you cost advantage, differentiation, focus - so you've got to have it because your competitors have it. But you haven't answered the question: why do you have it? Where do we have it? We've got systems out there, from the legal perspective… I'm in a great course over at MIT, and it's forcing us to develop this business strategy based on all these technologies. And when you look at it, you can do an assessment on a technology and find it has an embedded AI component - they use supervised or unsupervised machine learning to train that system.


Hackney: Most of it we used to call ML - machine learning - and now generative AI is different because it works on a different neural network architecture. But you're blending these things together very quickly.


Stevens: You've got to account for machine learning. It's not gone. You've got to account for NLP. Those machine learning models are being incorporated into technologies today. And so you get Copilot with Microsoft, where there is an AI component integrated. And those should be on your inventory: the type of ML technology you use, what type of training data. Did you do a repeatability assessment? Did you do an algorithmic risk assessment? And that's an iterative process. 


Hackney: It's gonna be constant. But I'd say, just like with consumer data, the reason companies didn't walk away from that with GDPR was because it was so valuable to them as a company. I think you're gonna have the same situation here - there are gonna be hurdles to overcome, but as overall governance they are good hurdles to have. You can't walk away from AI, right? And that's going to be the tough thing for a lot of companies: how do you do this as best you can in a rapidly changing environment?


Stevens: We need it, Chris, you're exactly right. I was asked to take an organization and look at the way it operates today, and the benefits and disadvantages of adopting ML, the types of ML, NLP and robotics. I'm a big RPA guy, so I'm looking forward to writing that paper. But when you look at those types of technologies - we need them. We can't be Luddites. We can't be smashing computer screens because we're scared of VIKI or scared of Skynet. But you need a measured approach to adoption. 


I've known you now for several months and I've looked at your platform. These companies (not the biggies, but the small and medium-sized ones) need that pragmatic, structured approach to AI adoption. They can't figure it out because they don't have time. So they need companies like you to come in and do an assessment of where they are today - current state versus desired state. Let's look at the gap and let's address that through governance.


Hackney: … and get to the right spot that works for your company. I think that's going to be important by company type - it's a very different solution by company type. Outside of the regulation that's going on, what's best for you? 


White House surrounded by artificial intelligence icons
Everything you need to know about Biden's AI Executive Order

We talked a lot about the EU AI Act. Since it's your home base, I'm curious about your take at the US federal government level. We had Biden's executive order, and we have agencies working on that now, like OMB. Where do you see the US federal government fitting into regulation? What leadership role, or not?


Stevens: Remember, we still don't have a national data privacy law. We got close, but it's an election year, so nobody's going to do anything with that. But what I see goes back to what we talked about earlier - the impetus and the impact of the EU AI Act on other jurisdictions. I think the President made a good start with his Blueprint for an AI Bill of Rights. It's got its 5 principles that he wants the Federal Government to follow. (We're only talking about the executive branch. I learned quickly when I worked in the Office of Cybersecurity at the legislative branch that the House helps pass these laws, but it doesn't have to comply with them - it's just the spirit of the laws. That's an executive branch thing.) I think the President really stepped forward in late October with the executive order on the safe, secure and trustworthy development and use of AI. It goes well beyond the AI Bill of Rights. It gives specific guidance to those agencies, but he's done it in a way that also benefits the private sector. I'm a big proponent of NIST - the NIST Privacy Framework, the Cybersecurity Framework, which is now in its second iteration. But the government got ahead of this back in 2021 or so, when NIST was directed to develop the NIST AI RMF (National Institute of Standards and Technology Artificial Intelligence Risk Management Framework), which I like a lot: Govern in the center, then Map, Measure and Manage, with subcategories. But they didn't stop there. NIST also built crosswalks that map back to the RMF. It's got a roadmap. You can call NIST up - private sector or public sector - and they will help you. 


Hackney: Can you actually contact them? 




Cover of the NIST Artificial Intelligence Risk Management Framework
Learn the essentials of the NIST AI RMF

Stevens: Yeah, that's the purpose of the roadmap - it's a guidance component. What the language says is that NIST is there to help you. They're not going to do it for you: "We're just going to help you get there." And they've given us the playbooks - and I love the playbooks. As a company, you can go to the playbook, and it deconstructs the RMF down to these subcategories. If it fits for you, you adopt it. Now, here's where we talked about governance: governance is at the center of the RMF. You've got to get governance right before you can map it, measure it and manage it. 
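[Editor's note: for readers unfamiliar with the framework Stevens is describing, here is a minimal sketch of the four NIST AI RMF core functions. Only the function names (Govern, Map, Measure, Manage) come from the framework itself; the example activities and the toy gap-check helper are illustrative assumptions, not official subcategories.]

```python
# The four core functions of the NIST AI RMF (GOVERN, MAP, MEASURE, MANAGE).
# The example activities are illustrative placeholders, not official subcategories.

AI_RMF_FUNCTIONS = {
    "GOVERN": ["Charter an AI governance committee", "Publish AI use policies"],
    "MAP": ["Inventory AI systems and vendors", "Document intended use and context"],
    "MEASURE": ["Run algorithmic risk and repeatability assessments", "Track output drift"],
    "MANAGE": ["Prioritize and treat identified risks", "Re-review third-party AI components"],
}

def uncovered_functions(functions_with_activity: set[str]) -> list[str]:
    """Toy gap check: which RMF functions have no activity logged yet?"""
    return [fn for fn in AI_RMF_FUNCTIONS if fn not in functions_with_activity]

print(uncovered_functions({"GOVERN", "MAP"}))  # ['MEASURE', 'MANAGE']
```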


Hackney: Exactly - governance is your basic hygiene. NIST is something we've put a lot of effort into. I'm curious … you and I are breathing this every day; it's an extension of cybersecurity and other areas. Do you see a lot of people you work with having an understanding of NIST and what they can use as frameworks at this point, or is that still in a learning / training mode?


Stevens: They understand the NIST Cybersecurity Framework in the private sector. Many of them have already adopted it and tailored it to the organization's needs. They've done that with the Privacy Framework. You already see companies adopting the NIST AI RMF. I think what we need as a driver is a law. A lot of the privacy pundits out there said, "Hey, let's get this data privacy law right and then we can talk about AI. Let's not just jump into AI." Because that's what many companies are doing: Hey, I got my new bauble, my AI bauble. I'm gonna use it. I don't know why I have it, but I'm gonna use it to get impact. If you've got the stick, and the stick can move you in that direction, that's when you have adoption. That's when you have compliance. Without the stick, I might get to it. It's the fifteenth thing on my list, and nobody's caught me yet or fined me. So when you catch me, I'll comply. 


Hackney: You mentioned folks knowing NIST on cybersecurity and data privacy. Usually that's centered more in the CISO area or the chief data office. We are seeing AI integration and implementation, especially AI risk management and governance, managed almost all over the place across companies. In a lot of cases there's no consistent, centralized person. Where do you think AI risk management should live at a company? Should it fold into traditional areas of risk management like the CISO or CDO? Should it be its own special role? What's your thought on that right now? 


Stevens: It depends on the organization. If you have a mature organization and you want to build upon existing practices, then you should have the CIO own it. That entity is already doing risk assessments. It's already doing the periodic auditing and monitoring. It exists within that construct. And so what you do is make that the point person - the CIO or the CISO. But you must have the other business units there - legal, compliance, perhaps procurement. When you talk about governance: before I buy that AI technology or capability, someone should have already assessed that technology. Do we have reuse? Am I buying this just for one small unit, or is there enterprise use of this capability? It goes back to having a structured approval process. So for me, I would say, as a start, create an executive-level AI governance forum or committee. Have it chaired by the CIO. Where you have balance is, you have the CPO there, you have the CDO there, the CTO is there. These are the officers that are gonna make those decisions on AI adoption. And then it can cascade down to those action-officer levels where we're actually going to do the assessment.


AI Governance committee meeting, Chief AI Officer and AI Ethics committee meeting
Leverage these free resources to help form your governance committee


Hackney: The thing I always like to leave these discussions on is one we started to touch on just now, which is companies really want to strengthen their privacy and AI governance efforts but they don't know where to start. “Where do I start?” is the number one question we get. You mentioned the AI committee … Do you have advice around what they should be doing? Where should they start on this journey for themselves right now?


Stevens: The first thing that they must do is define the purpose of AI. It goes back to: what problem am I trying to address with AI? You can go out and buy a shiny bauble, but how is it going to help you achieve a competitive advantage? How is it going to make you better than your nearest competitors? You've got to answer that question before you devote resources, time and effort. Once you do that - and that's why the AI Risk Management Framework has governance at the center - you've got to establish that governance structure. And then it will define for us: 

  • Have a charter

  • Start looking at what AI means to this organization

  • A structured process for proposing the adoption of a technology. What will it do for us? What does it cost? It has to be aligned against that purpose - we don't all have deep pockets.

  • Risk assessments (external and internal)

  • Third-party vendor assessments - as part of your third-party risk assessment, you've got to have an AI component [a sketch of what such an inventory and vendor record might capture follows below].
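[Editor's note: here is a minimal sketch, in Python, of the kind of AI system inventory and vendor-assessment record the checklist above points toward. All field names, the vendor, and the helper method are hypothetical illustrations, not part of any specific framework or of AI Guardian's platform.]

```python
# A minimal sketch of an AI system inventory record of the kind Stevens describes:
# what the system is, who supplies it, what data trains it, and which assessments
# have been done. Field names are illustrative, not drawn from any standard.

from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    business_problem: str            # why the organization needs it at all
    vendor: str | None = None        # None if built in-house
    model_type: str = "unknown"      # e.g. "supervised ML", "NLP", "generative"
    training_data_sources: list[str] = field(default_factory=list)
    contains_pii: bool = False
    risk_assessment_done: bool = False
    algorithmic_risk_assessment_done: bool = False
    third_party_assessment_done: bool = False

    def open_items(self) -> list[str]:
        """List governance steps still outstanding for this system."""
        checks = {
            "risk assessment": self.risk_assessment_done,
            "algorithmic risk assessment": self.algorithmic_risk_assessment_done,
            "third-party vendor assessment": self.third_party_assessment_done or self.vendor is None,
        }
        return [item for item, done in checks.items() if not done]

# Hypothetical example record:
record = AISystemRecord(
    name="Contract-review assistant",
    business_problem="Speed up first-pass contract review",
    vendor="ExampleVendor Inc.",     # hypothetical vendor
    model_type="generative",
    training_data_sources=["vendor-provided corpus"],
    contains_pii=True,
)
print(record.open_items())
```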


Hackney: That's an interesting one. I can't tell you how many folks we've talked to who are really wound up about governance of their own model, which is great. But 90-95% of your use of AI is through third-party vendors, right? It's applications; it's not building your own model. And so, yeah, I think you're exactly right. How do you look at third-party risk assessments and incorporate that? We already have areas like SOC 2 starting to bring that into their viewpoint as well. So it's gonna get interesting, real quick, in those regards. But, Chris, that has been extremely insightful - really, really appreciate your time. Hopefully, folks who are watching this get a lot of insight from it. I always say it's a rapidly changing landscape right now, so I'm sure half of what we said is valuable today, and we're gonna just keep coming back with more as that landscape continues to develop. But, Chris, thank you so much for your time today. 


Stevens: You're very welcome, and I wanna thank you again for this opportunity. And I want AI Guardian to continue doing what you're doing, because organizations need you. You said, "How do we do it?" Well, what you can offer them is a pragmatic, structured approach to how we introduce governance. Once you get that done - I'm not saying it's easy - the rest will fall into place. But it doesn't happen without a governance program. 


Hackney: I really appreciate the sincere comments there. And we hope a lot of companies start to take this really seriously as they implement AI in their own organizations.


Stevens: Thank you, Chris. Either they're riding the paradigm shift or they're going to get left behind, definitely. 


Hackney: If you’re not riding on this, you’re going to get run over by it. So we hope to work with people who want to ride on it.




