
Navigating AI Legalities and Responsible AI: A Dialogue with Austin Mills

Morris, Manning & Martin, LLP partner joins AI Guardian's Responsible AI Leaders Series


Watch the full video or read the transcript below (edited for length and clarity using ChatGPT). Check out the other interviews in the Responsible AI Leaders Series here.



Hackney: Welcome to AI Guardian’s Responsible AI Leaders Series. Today, we are very fortunate to have Austin Mills with us. He is a partner in Morris, Manning & Martin, LLP's Corporate Technology practice and the chair of both the firm's Artificial Intelligence Group and its Blockchain and Cryptocurrency Group. So it is probably fair to say: he's on the cutting edge of a lot of legal issues right now. Pleasure to have you, Austin, on the show.


Mills: Thanks, Chris. I guess I'm finding out that I'm attracted to shiny things in technology. Happy to be here.


Hackney: As a legal expert, where are you seeing companies excel or struggle as they integrate generative AI into their operations right now?


Mills: One major theme on the struggle side is the fear of the unknown. There's been a noticeable shift since OpenAI's release of GPT-3.5 about a year ago, which marked the beginning of the recent wave of attention and activity. Initially, conversations with clients were dominated by this fear of the unknown, questioning what this new technology is and what its potential impact might be.


On the success side, the early achievements we've seen involve people becoming more comfortable with internal use of AI, rather than employing it as an external-facing product or service. Its application has been particularly beneficial in creating marketing materials and other content. The advantage lies in doing things faster: creating more, refining, and adapting content into different forms, while human oversight ensures the end product is not particularly risky or mission-critical.


The next significant trend among our clients has been the use of software coding tools, such as GitHub Copilot, Tabnine, and other major offerings. These tools have their own legal issues and risks, but overall they have been very useful to clients, and they are generally used in ways that mitigate those risks.

We're starting to see the integration of large language models (LLMs) and other new AI products into clients' product stacks, including directly in their software, where this functionality becomes part of a broader set of offerings. Over the next 12 to 18 months, we expect to see significant movement in this area. These initial struggles and successes have shaped our current understanding and use of AI.


Hackney: On our side, we've seen both fear of missing out and fear of the unknown all at once, and it probably depends on the role of the company. When you're dealing with legal issues and helping companies at this stage, which role within the company is really leading that right now?


Mills: It depends on the size of the organization. With our clients, it's almost always the C-suite involved first and foremost, particularly the CEO, COO, and often the CTO or someone in a similar role. Some of the more advanced organizations have formed committees or working groups, involving a broader range of stakeholders within the organization.


Our clients are predominantly in the technology sector, with a focus on software-based technologies. There's a strong recognition of the significance of this area at multiple levels. This is why we're seeing initiatives being driven in a top-down approach.


Hackney: I'm curious, from a legal perspective, are there any particular blind spots that you're having to help a lot of clients with at this stage?


Mills: There are a few major buckets. They're not necessarily blind spots for all of our clients, but they're important, and they're where a lot of our conversations are happening:

  • IP Ownership Implications:

    • Concerns about ownership and rights to AI-generated output. In the United States, purely AI-generated output generally lacks copyright protection.

    • Reliance on AI for significant parts of software could lead to challenges in copyright claims, though trade secret claims might be possible.

  • Data Analysis:

    • AI tools enable data analysis that wasn’t previously possible.

    • Existing contracts with customers and third parties often restrict the use of desirable data.

    • The use of AI on customer data without proper approvals can lead to legal challenges.

    • Example: A couple of our larger clients have sent us letters reminding us to get the use of new technology approved before use and specifically highlighting that the use of generative AI in connection with their account is not approved.

    • You should consider the same when it comes to your vendors and the data they hold for you.

  • Data Privacy Regulation Implications:

    • Overlap of AI use cases with data privacy laws, including data brokerage laws and biometrics.

    • Possible implications of AI use on data privacy regulations that were previously not a concern.

  • Risks of Dependency on Third-Party AI Model Providers / LLMs:

    • Contracts with major vendors like OpenAI tend to give the vendor lots of flexibility to get out of the contract.

    • What happens to your product if the whole product stack is built around an AI tool and that tool is withdrawn due to regulatory changes or contract cancellation (e.g., the vendor becomes uncomfortable with your use case)?

Hackney: The last one you mentioned was the public LLMs (ChatGPT, etc.). A lot of them have been issuing indemnification policies. Do you see value in that? How do you see that as a help for companies that you work with?


Mills: I think that's mostly on the IP infringement front. That's such a big issue on the data input side. There's a lot of mass litigation going on where the major sources of the data these models were trained on are now suing, arguing that the model providers couldn't use their data. Screen scraping has been a hot area of technology litigation and debate for the last decade, and it's still not well settled. What are third-party rights to public data? Can you just take all the data off LinkedIn and use it for your own training, benefit, or background use? Or can LinkedIn protect that? Different courts are giving different answers.


There's notable confusion over what constitutes fair use, which often leads to legal issues. That confusion will persist, even among copyright lawyers, until the Supreme Court makes a decision, a rare occurrence that might happen only once a decade.

Regarding the commercial aspect, the goal is to sell enterprise licenses. The terms of service for software coding tools have undergone significant changes. Initially, these terms gave more rights to the provider to use and retain data. However, a market-driven shift occurred as larger corporations were reluctant to agree to broad data usage rights by providers. This shift is evident in practices like IP indemnification. For example, Adobe offers its customers protection against IP infringement claims for AI-generated content. This practice, highly valued by customers, has set a precedent in the industry, indicating a trend toward more protective measures for users of AI tools and services.


Hackney: Are there certain things you're telling clients now that they should be doing as best practices in those areas of IP to protect themselves, even though the case law isn't settled yet?


Mills: Let me answer that with an example. You want to create a one-page marketing brochure about a product. You tell ChatGPT, 'Here are the core themes that I want to be clear, here's the functionality of the product that I want described. Please create marketing materials with this tone.' Then, you could have DALL-E create an image that goes along with it. Just that raw output would not be copyrightable or protectable. But if you go in and make sufficient modifications, once there's a significant amount of human involvement, it goes from not copyrightable to copyrightable, and not just the part you change but the whole thing. That's one way to increase the probability of IP protection.


On the software programming front, it's a bit tougher. Ideally, there should be active involvement and manipulation by the developer; they shouldn't just rely entirely on the tool for hundreds of thousands of lines of code. There are steps you can take to improve the likelihood of a good outcome. Going back to the example, that image would not be copyrightable. The question is, do you care if someone steals it? If you don't, then it's not much of an issue.


Hackney: I think the hard part is when that image becomes something like your company logo and then your company takes off. And then how do you protect your own company IP?


Mills: In that scenario, you'd have trademark protection, if you could get it, which wouldn't turn on the same questions about AI-generated content.


Hackney: That's fair. We have a lot of partners we work with who get the gist of what you're saying but are asking, “How do I safeguard in the interim period?”


Mills: I think that’s the hardest part for most folks in the C-suite right now. It might be a bit unusual for an IP-heavy attorney to say this, but not everything needs IP protection. I've used AI to generate output myself. In one of my recent client alerts, about IP rights related to AI-generated content, I added a bit of a snarky comment at the end. The article included an image that I created by copying the article into a chatbot and asking it to make an image to go along with it. What it produced is what I used. It's not copyrightable, so anyone can take it if they want. There's such an obsession with IP protection, but not everything has to be protected. Of course, this doesn't apply to things like source code, but sometimes there's an over-sensitivity to IP issues.

 


Hackney: Changing gears just a little bit here… the last few weeks, we've had a lot of news on the regulatory front: the White House Executive Order and the big confab in the UK. As a firm, and as a leader at that firm, how do you view the regulatory path ahead right now for AI governance and compliance?


Mills: It's probably going to be the craziest wave of regulation I've ever seen in my career. AI is going to touch every aspect of our lives, including society, business, and government. It's such a general-purpose application, and it's understandable that everyone, including regulators, wants to talk about it. There's so much going on it's impossible to keep up with all of it. Fortunately, I'm not a lobbyist, so I don’t have to. I focus on what the regulations actually are when they're published, and we're still early in that process. The White House executive order and the OMB guidelines are the inevitable and appropriate first steps, acknowledging the major issues this new technology presents, from the safety and soundness of AI to its implementation, including fairness and bias issues.


There's an appreciation that those who excel in AI will be more competitive, but operating out of fear and banning things until we're comfortable leads to different problems. Walking that line is probably an impossible task because there are always trade-offs. Protecting privacy and civil liberties while managing the government's own use of AI are all themes that make sense, but the question is how they'll get implemented and what the unintended consequences will be. How will these things be used for regulatory capture in a way that benefits the large stakeholders over the public in general? That's always a risk when it comes to these sorts of things. So I think it's a good first step.


This is a hard area to regulate. Take the EU, for example. I think they will likely lead, and they'll probably go too far. They'll probably do things that have worse trade-offs than the things they're protecting against. I also think one big theme of a lot of the regulation and legislation is that it seems disproportionately focused on the kinds of things politicians care about (e.g., manipulation of elections). But I'm more worried about the person in his basement who can use GPT-7, when it rolls out in four years, to basically design and create a superweapon or whatever. Nuclear weapons require massive infrastructure costs; no one can build one in their basement. We're now creating a powerful tool without meaningful barriers to entry.


There's a focus on fairness and bias, which matters, but it's also an incredibly hard problem to solve, with a lot of subjectivity. What one person sees as objective fact, another might see as bias. This makes it an easy political issue. However, I question whether that's where the focus should be, or whether these are the biggest risks.


So, while regulators are doing their job, I acknowledge that this is an impossible task. There is no perfect line to walk that maximizes the good and minimizes the bad. Trade-offs are rooted in subjective differences. Some people might prioritize bias and fairness so much that they're willing to sacrifice other net gains, while others believe we should push forward, accepting some messiness for the greater good.


Hackney: That's the hard part: technology is moving so fast now that it moves faster than governments can process things. We need to accelerate it, right?


Mills: In fact, our systems are probably just getting more cumbersome, bogged down, and slower. It's such a polarized environment. That's another part of the problem. By the time there's some alignment on regulations, we're going to be three generations ahead in technology. We'll have all kinds of new, unanticipated things to deal with.


Hackney: It's funny, because I'm old enough to remember previous tech waves… I always say, “tech waves attack the weakest parts of the regulatory or legal environment.” It's the same thing that happened with the Internet and years of Amazon not collecting taxes. Why? Because we have a patchwork sales tax environment in the US, so it's impossible to collect taxes at that level.


Mills: Same with mobile.


Hackney: Yeah, the idea that suddenly everyone in the world had a phone with a camera in their pocket. What are the legal implications of recording and videotaping on the fly? I think you're going to have the same things here. It's going to be really, really interesting to see how the US responds to the EU leading with the EU AI Act, and how states like California respond, because they already feel like they've got power with their privacy laws to extend into this space. It's going to get real interesting.


Do you have a perspective on timeline? When do you think things are going to start to really hit, where you have to prepare clients not only for good governance now, but also to start thinking about compliance as things come down the pike?


 



Mills: In part, there are already regulations that have to be navigated, like the New York City AEDT (Automated Employment Decision Tool) law, which regulates the use of AI and similar technologies in hiring and promotion practices. We have several clients in that space with software tools designed to help companies automate the sorting of candidates, narrowing them down to a top percentage for interviews. These tools are now expressly regulated in New York. They've had to find auditors to audit their bias, register, and comply with these regulations. So there are already small pieces of regulation in place, and we'll continue to see this, especially at the state and local level. We'll see incremental changes every month, and eventually, it could be every day.


As for the bigger picture, I mostly represent early-stage and middle-market tech companies, not mega-tech companies like Google or OpenAI. For those larger companies, lobbying for their interests is part of their job. However, for an early-stage tech company, that's a waste of time and resources. You want to know the general direction of where regulations are headed, especially how they might affect your product development roadmap. For example, if you're developing hiring tools and New York is working on a bill that impacts your market, you want to be prepared for that launch. Most regulations, especially those requiring significant time and investment, give you a good amount of lead time to become compliant. I don’t recall the exact lead time for the AEDT law, but I believe it was probably a year or more before companies had to be in compliance with it. So, for most companies, it's better to wait and see, then react when the regulation is enacted. Trying to be too proactive might not be the best approach.


I think you'll probably see something start to hit in Q1. But they'll have those moratorium periods.


Hackney: It's a matter of how quickly you have to start to get in line during that moratorium period once it's established.


Underpinning a lot of the regulatory effort, at least in the US, is the NIST AI Risk Management Framework. What is your view of those standards? And should clients be thinking about them as they enact their own responsible AI processes?


Learn more about the NIST AI RMF in this primer

Mills: Admittedly, I've not read the full NIST standards. However, from what I have read, I like them. NIST, as an organization, generally produces higher-quality standards and outputs than most others, providing a good framework. In terms of what's currently available, I would regard the US NIST model as one of the best resources to look toward. It will almost certainly be heavily factored into how legislators and regulators implement their initiatives. So, I think it makes a lot of sense and is certainly more sensible than most alternatives.


Hackney: When you think about responsible AI and advising your clients on other best practices, are there certain frameworks you point clients to as they start to integrate AI?


Mills: No, there aren't any specific external frameworks or resources, apart from NIST, that we're focusing on at the moment. But we do encourage our clients to look at their use cases on a case-by-case basis and consider things like internal AI usage policies. This includes looking at:

  • Responsible use,

  • Compliance with laws,

  • Respect for privacy and data security,

  • Fairness and inclusivity,

  • IP rights,

  • Transparency, and

  • Accountability.

For example, when considering something like a marketing brochure, there might not be much risk management needed. But for more complex applications, like using ChatGPT or GPT-4 as the backend of a chatbot in a mobile banking app with the ability to transfer money, caution is advised. There's a risk-reward balance to maintain. You can try to avoid all risk, but then you might end up with a product that's too safe and not competitive in the market.


As for risk management, it's about considering existing legislation and what's clearly moving towards implementation. However, it's hard to say there's a one-size-fits-all approach, as our clients have very different risk frameworks compared to larger tech companies like Google or Apple.


Hackney: Last question … about what actions can be taken now. You have many clients seeking guidance to position themselves correctly regarding AI implementation and responsible AI. While you've mentioned committees, policies, and such, what are the key things, maybe three or five, that you advise your clients to do right now in this area?


Mills: I'd say the first step is to take inventory and analyze your desired potential use cases: what do you want to do with AI, theoretically? Even before that, it's crucial to engage the appropriate stakeholders. C-suite participation is essential for the same reason the government focuses on AI at the top. For example, the OMB AI guidelines mirror how I encourage our clients to think about their internal AI use: AI governance involves managing the risks from AI use, acknowledging at the federal level that AI has implications across every agency, just as it has implications across every part of an organization. Every agency needs to consider AI, and the same applies to companies. You want a diverse set of stakeholders involved across the organization, including marketing, development, compliance, and more. These stakeholders will likely have ideas about how AI can be leveraged.


Next, assess the risk implications for each AI use case. Once you're doing anything interesting with AI, an AI usage policy likely makes sense. This would include guiding principles for the organization, protocols, procedures, and how you think about AI use. You might want to enact approval rights for any new use cases or tools. For instance, if your software programmers have diligenced tools like GitHub Copilot and Tabnine and are comfortable with them, you can use those; if you want to use another tool, it should be approved by a committee.


These are the major first steps for organizations as they manage AI going forward.


Hackney: Very, very helpful. I asked that question at the end because I think it's the number one thing that stymies most folks we talk with: how do I get started on this journey? So I really appreciate your perspective there. And Austin, I really appreciate your time today. Thank you so much for joining us. Always a pleasure to talk to you.

