
Navigating Responsible AI before AI-specific Regulation

Baker Donelson Legal and Business Advisor, Justin Daniels offers insights and tips to manage AI risk.


Chris Hackney and Justin Daniels tackle President Biden's Executive Order, the NIST AI RMF and general advice from Justin on responsible AI. Watch the full video or read the transcript below (edited for length and clarity). Check out the other interviews in the Responsible AI Leaders Series here.



Hackney: Hey, this is Chris Hackney. Welcome to the next session in AI Guardian's Responsible AI Leadership Series. I have the distinct pleasure this week of having Justin Daniels joining us. He's currently a legal and business advisor at Baker Donelson, and his practice areas are a great fit for the subject we're about to talk about: he has distinct experience and focus in cybersecurity, blockchain, and now AI. He's the co-chair of Baker Donelson's blockchain and digital assets practice. He's also one of the hosts of the "He Said, She Said" podcast. So welcome, Justin, to the discussion today.


Hackney: It's a pleasure to talk to you. We've had a lot of great guests on, mainly on the enterprise side / the product side, so this is really our first conversation with someone from the legal side. I'm excited to get your perspective on responsible AI. So, first question: as a legal expert, where are you seeing companies excel or struggle with general AI integration and usage today?


Daniels: Let me start with struggle - that's a little easier. I think companies are struggling in the best sense of the word. When I'm talking to security teams or privacy teams, they're really working hard to figure out, "What does the risk management plan really look like for AI?" because they're getting a lot of pressure from the business team, which says, "Hey, everyone's using this, we need to get out there." They're trying to figure out how to get the business out there, but also put something in place that isn't scattershot, that isn't narrow-minded. They're really coming at this holistically, so I commend them for that. But having said that, the challenge is there really aren't any regulatory or other kinds of guardrails out there. So companies are kind of all over the place; I don't really talk to any company that has a similar approach to any other company. And part of that is because AI is evolving so quickly. ChatGPT, when it came out last November, was like a thunderclap. And now the race is on, and companies are trying to figure out how to go to market, some in different ways than others. There's really no rhyme or reason to the approach.


Hackney: When you talk about enterprises coming to you for risk management framework advice, what are the basics you're telling them at this stage? It's still early days for a lot of them. But what should they absolutely be doing at this stage in your mind?

 

AI risk management is a team sport, you need to get a variety of different perspectives.

 

Daniels: Well, I would say the answer varies. Let's start with some of the larger enterprises. At this point, much like cybersecurity and privacy, AI risk management is a team sport, you need to get a variety of different perspectives. It could be from your security folks, your privacy folks, your marketing folks, legal, business risk management. Then the other thing I've been telling companies is a lot of them are putting together these committees, but no one person is responsible. So if you have a committee that is trying to do something together and no one is responsible, you're going to end up with problems, because nobody's driving the bus. It's just meandering wherever.


[Image: Three side-by-side illustrations - an AI Governance Board, a Designated AI Officer, and an AI Ethics Committee]
AI Guardian offers free roles & responsibilities documents for both committees and leaders in the Resources section of the site.

Hackney: That is the tough part. We've seen a lot of companies putting committees together - at least someone is raising their hand to drive. But their roles and responsibilities (in this area) at the company are not defined.


When you think about an AI committee, or one of these committees coming together, what are the core things they should own as a group for that organization right now?


Daniels: What are the core things that they should own? At the end of the day, you have to figure out: what is the risk profile of our company when it comes to artificial intelligence? I have companies who have said they outright want to ban it. I have other companies who said, "Hey, we really want to move forward, because we want to be known as a leader in this space." So those are the extremes, but what I'm really saying is, if I'm a company and I want to put this committee in place, or put someone responsible, what does that look like?

  • What is our company's culture around understanding AI?

  • What are the benefits we want to get out of AI?

  • What are the risks? And how does AI impact individuals in our company, the company itself, our industry, and society as a whole?


[Image: Cover page of the NIST AI RMF]
Learn more about the NIST AI RMF in this primer.

And that's why, in a lot of the conversations that I have with companies, I point them to the National Institute of Standards and Technology with their AI risk management framework because right now, nobody has all the answers, I certainly don't. But the idea is to start asking the right questions, to start to frame up the issues so that you can come up with a holistic, comprehensive approach to “how do we try to maximize the benefits?”, but also acknowledge, be aware of, manage and try to mitigate the risks.


Hackney: That makes sense. I'm curious … when you're talking to companies, are there particular legal blind spots they have today? Certain areas that you're seeing more consistently? On our side, we've seen it on IP infringement and some of those IP issues but I don't know if you've seen it in certain areas yourself.


Daniels: I think where I see it is not connecting the dots. And what I mean by that is, the first large AI deployment that I worked on was in the employment space - a company wanted to use AI to match up employees with job descriptions. When you get into that particular case, there are issues of privacy, cybersecurity, employment, and discrimination. And connecting all of those dots for one use case can be a challenge, because a lot of in-house counsel may be an expert in one specific space but not appreciate how to connect the dots among these other spaces.

 

That, to me, is where I think companies from a legal perspective are missing it - failing to connect the dots about how cross functional AI is across various different legal disciplines.

 

And you point out a great example with IP - not realizing that when AI generates something, that in and of itself may not create intellectual property, because according to the Copyright Office, AI can't. But where you create something and have AI help you augment or embellish it, that could be a different thing. And so that, to me, is where I think companies from a legal perspective are missing it - failing to connect the dots about how cross-functional AI is across various different legal disciplines.


Hackney: That makes a lot of sense. And I think the tough spot for a lot of companies we deal with right now is that you're in this middle zone. You'd love to have case law that's been decided that says "do exactly X," or a law that says "do exactly Y," but laws and regulations are still coming. The case law is not going to be established for a number of years, and you have to interpret a gray zone, which is always tough for a general counsel or risk manager at a company.


Daniels: That's what I get paid to do - figure out the contours of the gray. But the thing you bring up, Chris, is that without this regulation, it's important to come up with "what is your risk?" How do you look at the ethics? Because there's the letter of what the law is, and then there's the spirit of it. And you might say, "Okay, is this going to be deceptive? Is this going to treat a customer in a way that they don't anticipate?" I don't think you have to be a rocket scientist to start to say, "Look, we want to create a program even though there's not regulation. But maybe we take a look at what the EU AI Act says, and what some of the principles are from that and from NIST," and start to come up with, "Hey, this isn't in place yet, but here's the work that we did to figure this out." So if a regulator or someone does come calling, you're able to show your work, you're able to take them through a thought process. They may not agree with it, but you're probably going to get hit a whole lot less hard than if you had nothing - if you just said, "We decided we'd figure it out whenever you came calling," as opposed to saying, "Here's what we did, here's the analysis we went through, this is why we put these guardrails in place." As a lawyer, I like that approach a whole lot better than "I will just figure it out when the regulator shows up because something has blown up."


Hackney: Exactly. Waiting till the last minute is never a good legal exercise. It's been an extremely busy last week on the regulatory front. Major announcements coming out, both here in the US and then at the conference in the UK. So I'm curious for your perspective. You've seen this happen in other areas like data privacy and cybersecurity, where do you see AI regulation going? Both in terms of what entities are going to drive it, and what areas they'll focus on?


[Image: The White House]
Learn more about The White House's Executive Order on AI.

Daniels: The Biden administration issued an executive order. That's helpful, if for no other reason than it keeps AI at the forefront. Where I think we're going to have a bit of a challenge is that it took us three weeks to figure out who the Speaker of the House was going to be. And now AI regulation sits next to privacy regulation and cybersecurity regulation.


I think the FTC might be able to be a driver of regulation, because under Section 5 of the FTC Act, which covers unfair and deceptive trade practices, that law is already on the books. With the EEOC, there's law already on the books around discrimination, and there's no AI exception for any of that. So to me, there are certain laws that are out there already. I think the SEC's new cyber rules will be a backdoor way to regulate AI from a security perspective. So I think you'll see some of that - the regulators just have to have the will.

 
Under Section 5 of the FTC Act, covering unfair and deceptive trade practices, that law is already on the books. With the EEOC, there's law already on the books around discrimination, and there's no AI exception for any of that. So to me, there are certain laws that are out there already.
 

Where I think we're in a bit of a tougher place is - I personally think California will have an AI law before the US Congress does anything. And the thing that makes this so hard is, if you look at the Senate, let's be honest, it's like an old person's home. A lot of those folks are in their 60s. And now you're going to ask them to try to understand artificial intelligence, which very smart people who've been in the tech industry for a long time are honestly still struggling with (in a good way). So I think it'll be kind of tough; I don't know that we'll have regulation anytime soon that the US Congress passes. I think you could see a replay of what you've seen with data privacy laws, where states pass laws one by one by one. And while I think that's helpful, it also has a hidden cost, which is that if you're an innovative company and you have to comply with this patchwork of different privacy laws or AI laws, it makes it very difficult and very costly. And ultimately, I think it stifles innovation. This would be much better if it were done on a federal level.


Hackney: I think you're 100% right. Especially, like you mentioned, with the FTC tacking it onto existing laws. You've already seen the statements coming out of California that they feel like they've got the authority under the CCPA to tack AI regulation onto the data privacy regulation that's already there.


Daniels: At its core, for AI to work, it needs terabytes of data. When you see the regulatory approach from a GDPR country versus the US, which has nothing … ChatGPT had to halt operations in Italy because it violated the GDPR, the European privacy law. In order for AI to work, and to really regulate it, you have to get at the root issue, which in my mind is the data collection and use practices that we have in this country, because that directly impacts how we regulate AI. And to date, we've had this state-by-state approach. You see the difference between that and how Italy reacted under the GDPR to ChatGPT.

Hackney: That's an interesting point for me on the EU AI Act. Are we just set up for the same thing that happened with GDPR, where Europe really leads the way and we follow because we can't get past a quid pro quo approach to it? They're actually building something with the EU AI Act that's going to be a comprehensive view with uniformity to it.

Daniels: I think part of the issue is the way our government is set up. Using privacy and security as an example, because AI obviously impacts both … you have all these different federal regulators - we have the FTC, we have the SEC, we have the EEOC. All of them have different purviews when it comes to regulating these various things. In the aftermath of 9/11, we created the Department of Homeland Security. Maybe AI is big enough that we need to create some type of regulator or government entity that is focused on artificial intelligence. I've seen ideas bandied about - we have licensing, so we're going to require an AI license. But the challenge with that is that getting the license gives a company a competitive advantage that is hard for more innovative companies to surmount. I guess my point in all this is - it'll be interesting to see where it goes. There's a lot of stuff out there, but I just really struggle with getting a Congress that's so divided, and not really tech savvy, to come up with a comprehensive approach. It really requires government (dare I say it, Chris? Try not to laugh) to be innovative with regulation.


Hackney: That's the conundrum (or Catch-22) for almost any company, especially US companies. I think US companies would largely like the US to lead on this. I think that's why you've had so many large players really try to work as much as they can with the administration. I'm not sure the US is situated to move quickly on regulation and legislation like the EU can. At the end of the day, they have to pick a horse, right? They have to go, "Hey, at least there's some global standard that we can all rally behind." My guess is that's going to happen with the EU AI Act just because it's a horse they can pick.


Daniels: To a degree, and I'll use privacy as an example. The GDPR came first in 2018. And then California took parts of the GDPR but put their own slant on it, because one of the things to appreciate (and I didn't really get this until I'd spent time in Europe and spoken at conferences there) is the different cultural approach in Europe. Privacy is viewed as a fundamental right. AI is just another technology that can either enable or undermine that. And in the US, culturally, we take a different approach: it's business, then government, and the consumer last. And so to me, when you have that cultural and public policy bent, I think that's why you will get a different result, because the culture in our country is just different than what it is in Europe.


Hackney: I would agree 100%. The EU's GDPR became the model legislation, with other laws spinning off of it. I think you'll see the same here. This one's more interesting to me from an industry perspective because so much of the AI industry is coming out of the US and Asia. Look at Europe's contributions from a corporate perspective - there's not much. There's not much for them to lose in handicapping the industry, as much as there might be somewhere else.


Daniels: I think it's funny you say that, because if you really want to ask what Europe has been innovative about, it's probably regulation.


Hackney: They had the Bletchley Declaration on AI safety last week. Do you put much stock in that? Obviously, they got a lot of countries, including the US and China, to agree to a statement, which is hard enough on its own. But what do you think about something like that?


Daniels: Again, it's helpful because it's a bunch of countries documenting and talking about the risks with AI. But in my view (I'm a lawyer), the devil is in the details. One of the things that concerns me (as Justin, the consumer) is, when I watch all of those tech executives meet with the Biden administration … they know regulation is coming, so they can either fight it or be the ones to try to shape it. I look at the example of what happened with FTX and Sam Bankman-Fried in the crypto industry. I have a healthy degree of skepticism when you're allowing the companies who need access to terabytes of data to make AI work to come in there. I think they should have a seat at the table, but I don't think it's a good idea when they have meetings with the Biden administration that aren't public, because I don't know what gets discussed behind closed doors. As a person, I'm very concerned about how AI can weaponize certain things that go on in our social media - misinformation, lies - and how that could be exacerbated if we don't put the right guardrails around AI.


Hackney: It's like that song in Hamilton - I want to be in "The Room Where It Happens." Yes, that needs to be in the public square at this point.


Hackney: Let's talk NIST a little bit. We talked about the regulatory environment. Much of what the Biden administration came out with - the executive order and others - is really pointing to the NIST AI Risk Management Framework as the foundation for how agencies and regulation should be pinned together, as a standards framework. Can you give a little bit of background on what NIST is, and then your thoughts on how it can be helpful right now?

 
What I love about the framework is not that it has all the answers, but that, like what I do for a living, it's all about asking the good questions that help you frame up the issue so that you get to the answers that work for your organization.
 

Daniels: NIST is the National Institute of Standards and Technology. It's an agency within the US Department of Commerce. It's nonpartisan. They put out all types of frameworks to help guide companies in various areas. The one I'm most familiar with is the Cybersecurity Framework. They have one for privacy, and on January 26 of this year, they came out with one for artificial intelligence. It's called the AI Risk Management Framework, or RMF. NIST is incredibly well respected across the aisle and in the industry. And you see that in the SEC cyber regulations - you can read parts of that regulation that might have been written right out of NIST.


What is this AI RMF? It's a voluntary framework, meaning you're not legally required to comply with it. But I think for now, in my view, it is the best practice around how to create trustworthy AI, and also promote - as the framework does - responsible design, deployment and use of artificial intelligence. Okay, those sound like lofty words. I can sound like another expert and use the words trustworthy and responsible, but what does that really mean? What I love about the framework is not that it has all the answers, but that, like what I do for a living, it's all about asking the good questions that help you frame up the issue so that you get to the answers that work for your organization. Delta Airlines may have a very different risk profile than another company that is just a SaaS product.


It comes with four core functions - govern, map, measure, and manage. Within those four functions, there are a bunch of sub-functions. The whole point is that you go through the whole framework, and the various analyses and questions it asks you, to come up with a holistic risk management approach that hopefully maximizes the benefit of AI, but also gives you a good understanding of what trustworthy and responsible AI is. So that is infused in the whole process, from design, to deployment, to continued maintenance of AI systems.
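As a rough illustration of that structure (not taken from NIST itself - the framework prescribes questions, not a schema, and the class and field names below are purely hypothetical), here is one way a team might capture a single AI use case against the four functions in Python:

```python
# Illustrative sketch only: one hypothetical way to record an AI use case
# against the four NIST AI RMF functions (Govern, Map, Measure, Manage).
# NIST does not prescribe this schema; all names here are made up.
from dataclasses import dataclass, field


@dataclass
class RmfAssessment:
    use_case: str
    # Govern: who is accountable and which policies apply
    accountable_owner: str
    policies: list[str] = field(default_factory=list)
    # Map: intended benefits, affected people, and identified risks
    intended_benefits: list[str] = field(default_factory=list)
    affected_parties: list[str] = field(default_factory=list)
    risks: list[str] = field(default_factory=list)
    # Measure: how success and acceptable error are quantified
    success_metrics: dict[str, float] = field(default_factory=dict)
    # Manage: mitigations and how often they are revisited
    mitigations: list[str] = field(default_factory=list)
    review_cadence_days: int = 90

    def open_questions(self) -> list[str]:
        """Flag gaps a review committee should resolve before deployment."""
        gaps = []
        if not self.accountable_owner:
            gaps.append("No single accountable owner - nobody is driving the bus.")
        if not self.risks:
            gaps.append("No risks mapped for this use case.")
        if not self.success_metrics:
            gaps.append("No quantitative definition of acceptable performance.")
        if not self.mitigations:
            gaps.append("No mitigation plan on record.")
        return gaps


if __name__ == "__main__":
    draft = RmfAssessment(use_case="Resume-to-job matching", accountable_owner="")
    for gap in draft.open_questions():
        print("-", gap)
```

The point of a sketch like this is simply to make the framework's questions concrete enough that a governance committee has something to fill in, review, and assign an owner to.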

Hackney: What I love about it is that it's systemic common sense. It's just, "Have you thought about all these things?" And it addresses one of the biggest problems we've seen with most enterprises as they've implemented, or at least started to implement, AI, which is:

  • How do you even measure what risk you have right now?

  • How do you do an assessment around it?

  • Once you know what risk you have, how do you build a mitigation plan for it?

  • What are the things you can be doing to bring the risk levels down?


When you talk to clients today, what are you telling them, especially around NIST? What are the things they should be thinking about in its application to what they're doing today?


Daniels: The best way I can answer that question is to go back to an example. Say you want to implement an AI tool at a Fortune 500 company to figure out, "How do I take resumes and match them up with the right jobs?" - or maybe even match an applicant with a different job they weren't familiar with. So let's talk about how that might look under NIST. In particular, what does successful AI look like there? Is it that the right applicant is matched up with the right job 80%, 90%, 95% of the time? And understanding that AI is not going to get it right all the time - what does that margin for error look like? Then, from a risk mitigation standpoint, one of the biggest things I have to be concerned about, at least with that application, is getting sued under - pick your - discrimination law. So then the question becomes, "Okay, as I look at the different risks for this particular application, I'm thinking about the litigation risks of violating federal or state law." (New York has an AI law specific to employment.) But then there's the ethics behind it - what kind of company culture am I creating if people are getting hired and realize they never even talked to a human being before some machine learning model screened them? How do you feel about that as a person? And so, when you start to ask those questions, you say, "Okay, how do I rank those risks?" Based on how we rank them as a company, I can start to put together what the risk mitigation plan might look like for that particular use case, because this ran me through a series of questions that helped me create my company culture around that responsibility. Once I've done all that, I can look at this use case and come up with how to measure it - what the quantitative and qualitative figures for success look like. And then, how am I managing the risk? Which in this case, I think, is litigation risk from discrimination, as well as the ethics and how your company is going to treat that employee on down the line.
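Purely as an illustration of the ranking exercise Justin describes - the risks, scores, and mitigations below are hypothetical examples for the resume-matching scenario, not legal advice - a minimal sketch might look like this:

```python
# Illustrative sketch only: rank hypothetical risks for an AI resume-matching
# use case and attach example mitigations. Scores and categories are made up.
from dataclasses import dataclass


@dataclass
class Risk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (minor) .. 5 (severe, e.g. litigation or harm to people)
    mitigation: str

    @property
    def score(self) -> int:
        return self.likelihood * self.impact


risks = [
    Risk("Discrimination claim under federal or state employment law", 3, 5,
         "Bias testing before launch; human review of every rejection"),
    Risk("Candidate experience harm (no human contact before screening)", 4, 3,
         "Disclose AI screening; offer a human escalation path"),
    Risk("Match quality below target (e.g. under 90% correct matches)", 3, 3,
         "Define an accuracy threshold; monitor and retrain on misses"),
]

# Highest-scoring risks get mitigated (and budgeted) first.
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.score:>2}  {r.name}\n    -> {r.mitigation}")
```

However a company chooses to score things, the design choice that matters is the one Justin keeps returning to: the ranking reflects the company's own risk profile and culture, and someone specific owns the resulting mitigation plan.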


Hackney: One of the things I really do like about the EU AI Act and some of the early proposals is the delineation between high-risk and low-risk AI applications. And you just mentioned one that shows up in the high-risk category, which is HR and personnel decisions.


Most of the EU AI Act and others are focused on "national defense high risk," but when you think "litigation high risk," are there areas you're getting more nervous about in the application of AI when you talk to clients, like the HR side?


Daniels: The HR side would be one. Another one that I've been involved in is when you start to use facial recognition, or other ways to identify people, with cameras. Particularly - and I'll just use Atlanta as an example - there's been a real uptick in crime with snatch-and-grabs, where they'll go into stores and just grab merchandise. One of the responses to that, and I've seen it now, is that they want to come up with ordinances that require cameras. I told them, "You can't have that conversation without talking about facial recognition." So that is another one. The EU AI Act would put that in, and I think that one would be at the unacceptable level. But in this country, that's not well thought out. When I talk to companies and I bring this up, it's either "Oh, we didn't think about that," or "Oh, we want to mitigate that." But to me, you can't avoid that kind of conversation about facial recognition. Think how you'd feel, Chris, if you go into a store and you're buying something for your wife, and then it's, "Hey, Mr. Hackney, we found out you've been in four other stores and we'd like to have a chat with you" - because the facial recognition saw you and thought you were some guy who had knocked off three Lululemons. You kind of have to think about that.


Hackney: I think understanding the categorizations is very important. There are a lot of things you can go quick on - no one really cares about your accounting software and the AI in it - but once you get to human outcomes, it gets really hard and you need to take a close look at it.

 
Looking for the policy reasoning behind the EU AI Act can help companies now, absent regulation, come up with how they want to view it in a way that is very defensible.
 

Daniels: I think, Chris, one other point you bring up about the EU AI Act is that they're not really regulating the AI itself. They're regulating what they perceive to be its impact. So they have different types of regulation for different perceived impacts and their significance. Where I think that's important for the audience is that even in the US, without the guardrails around AI yet, you should be thinking about each particular use case of AI - like maybe the chatbot that talks to Chris when Chris wants to go and get his Delta ticket. Well, that might not be too bad, but maybe you need to let Chris know that he's talking with an AI chatbot, so at least he's aware - as opposed to facial recognition, judged on its impact. And that's what I'm saying: looking for the policy reasoning behind the EU AI Act can help companies now, absent regulation, come up with how they want to view it in a way that is very defensible. So, if you're trying to move forward in this, and it's a gray area, the rationale for how you got to where you got to makes sense.
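To make that impact-based idea concrete, here is a toy sketch in the same spirit - the tier assignments below are simplified examples for discussion, not legal classifications under the Act:

```python
# Illustrative sketch only: the EU AI Act regulates by perceived impact rather
# than by the technology itself. This toy mapping mirrors that idea; the tier
# assignments are simplified examples, not legal classifications.
from enum import Enum


class Tier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "strict obligations (risk management, human oversight, documentation)"
    LIMITED = "transparency obligations (e.g. tell users they are talking to AI)"
    MINIMAL = "no specific obligations"


EXAMPLE_USE_CASES = {
    "Real-time facial recognition in public spaces": Tier.UNACCEPTABLE,
    "AI screening of job applicants": Tier.HIGH,
    "Customer-service chatbot for ticket changes": Tier.LIMITED,
    "Spell-check in accounting software": Tier.MINIMAL,
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```

Even absent US regulation, sorting your own use cases into rough impact tiers like this gives you the defensible rationale Justin describes if a regulator ever asks how you decided what guardrails to apply.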

Hackney: I think that's a great point. I think in most of these regulations, they are absolutely telegraphing where they're going to go. And you should be listening to that right now.


Last question: You're obviously giving a lot of advice to folks who are trying to bring AI into their enterprises. What are the one or two things you're hammering with them right now that they should be thinking about or doing around AI and responsible AI? If you leave them with one or two things, what are they?


Daniels: Number one, you have to have a team approach, but somebody has to be responsible - because if somebody is not responsible, then the effort is going to be diffused and you're not going to make an impact. And then I guess the other thing I'd tell the audience is, you really have to connect the dots, as we've been talking about. Understand what the different risks are, and really be thoughtful about understanding what the impact would be if AI goes wrong in your use case, and who it might impact. If the accounting software goes bad and you have some bad numbers, that's pretty bad. But if you start arresting people, or imprisoning them, or making hiring decisions off of it, that is an even bigger impact. I think a lot of business professionals tend to look at the opportunity and not see what's going on with managing the risk. And after what we've seen with social media, with AI that's just not a plausible approach.


Hackney: That makes perfect sense. Justin, we'll leave it on that note. I really appreciate you coming by and talking a little bit with us on this. It's going to be an important subject. And trust me there's gonna be new news every week on the compliance side of this. So thank you very much, Justin. Much appreciated. Thank you.


 
I think a lot of business professionals tend to look at the opportunity and not see what's going on with managing the risk. And after what we've seen with social media, with AI that's just not a plausible approach.
 




