Watch the full video or read the recap or transcript below. Check out the other interviews in the Responsible AI Leaders Series here.
Interview Recap
(Summarized using OtterAI)
In our interview with Kelly Koscuiszka from Schulte Roth & Zabel, we discuss the evolving use of AI in financial firms, with an emphasis on the need for flexible compliance policies due to rapid technological change. Kelly highlights the importance of firms addressing AI use proactively, including developing policies, educating staff, and ensuring accuracy. We discuss proprietary information risks, the need for human oversight to prevent misuse, and the importance of collaboration between technical and compliance teams in AI usage. Kelly stresses the significance of accurate marketing and the potential legal risks associated with AI-generated content. Our conversation also touches on frequently raised concerns about AI transcription tools at financial firms, the importance of understanding record-keeping obligations, and the potential impact on compliance and litigation.
Interview Transcript
(transcribed by OtterAI, edited for length and clarity)
Layton: Joining me, we have Kelly Koscuiszka from Schulte Roth & Zabel. Kelly comes to us with a background focused on the intersection of SEC regulations and technology, including cybersecurity, privacy, and modern technologies such as AI.
Koscuiszka: You covered it really well. I'm co-chair of the regulatory group here, where we're focused on private funds, and I tend to cover what I still call emerging technologies. AI has been really exciting for us for the past two years, and I'm just interested to see where else it's going to go, but we're making sure the private fund community does it in a way that the SEC thinks is compliant.
Layton: And just to help folks put themselves in the shoes of a prospective client of yours: if I were a hypothetical client, what are some concerns that I might bring to you? What issues might I have, and how would you help me?
Koscuiszka: If you were one of my clients, it wouldn't have been surprising if you called me in March of 2023. Obviously, a number of our clients had been using machine learning well before that, but it felt like OpenAI had exploded... and usually when there's something everyone's buzzing about, the regulators are laser focused on it. So: what should I be doing to make sure that I'm doing it in a compliant way? At that early time, the answers ranged from some clients who put out notices saying, until further notice, don't use this for work purposes, to others who wanted to move quickly, if they didn't have one already, to get an enterprise solution and build an LLM. So really, for us, it was making sure you said something, because what we always want to be careful of is employees doing something related to their work outside of the firm, and that's one of the things with OpenAI and some of its competitors: they make the technology so accessible. Previously, this type of technology had to go through a contract, or there was some kind of easy gatekeeping function, but suddenly it could be on everybody's phones and everybody's laptops. For some clients who wanted to embrace it early, we were writing policies in March, and we've rewritten them 12 times since, and we don't view that as a bad thing. As a compliance team, we want to make sure that we're keeping up with what's changing, and with something like AI, which feels like it's changing weekly, if not daily, we want to make sure that policies are flexible enough. So if you have a firm that's embracing the technology, and we've got great policies and procedures today, and next week there's something new that the policies haven't considered, or we're unintentionally stopping something that really should be okay with some guardrails, we want to move that forward. And the only way to do that, really, is to work closely with the people who understand the technology. So you really see this partnership between the compliance team, which is there not trying to be the department of no but trying to enable the technology and keep the firm and the individuals safe, and the investment professionals and scientists who buy into that, understand they need to do it in a compliant way, and are trying to raise their hands and help shape the policies so that they're responsive to the needs of the firm.
Layton: That's really well put, and I want to highlight a couple of things there that I think are important takeaways for everyone, just recapping, if I'm hearing you correctly. Number one, get out there and say something. This is not something we can be silent about; if your firm doesn't yet have an AI policy, it needs one now. It needed one yesterday. A quick answer could be just a blanket no; however, that is very much a temporary stopgap, because these tools offer so much value and are so prevalent and so available that a blanket no is just going to drive use underground, and you can still be held accountable for that shadow AI use regardless of the policy that's been formally stated. So it's really important to have that policy adapt and adjust to the reality that these tools are going to be used. I think a recent study from LinkedIn and Microsoft, probably not even that recent anymore, and I'm sure this number has gone up, indicated that 58% of folks are bringing their own AI tools to work... so a blanket no is just not feasible beyond that stopgap. And then, really keying in on what you're saying about iterating: not only is it important to iterate from point zero to the next step, this is something that's constantly changing. Every day, it seems, there's a new tool, a new use of the technology, a new model or competitor using a trillion more parameters than the day before. So I think those three points are the key takeaways for our audience here. Thank you for sharing that. You've already shared a good bit about the common uses you're seeing. What are some other specific ways you see financial firms using AI today?
Koscuiszka: Sure. We represent a wide client base, so it's really different at different firms. There are some firms that are just not using it; it's five people, they've checked in with all five people, and it's just not meaningful to their business yet. And there are other firms that have had some form of machine learning for over a decade, with different use cases. Certainly with the language models, there was a desire to use some of it in the research process, for example in the healthcare space: being able to get through the literature on a disease in a way that we couldn't before without lots and lots of humans, and probably couldn't accomplish even then. You're looking for a reference to a particular drug, doing research at a different scale because of the computing power and how quickly it can go. In the early versions, it was kind of a joke how it couldn't do math, but if you took comfort in that, you're well behind, because that's just how quickly the technology is moving. At other places, there's some experimenting with it on prose, like putting out marketing materials or investor letters. There, you have to be really careful, because there's a lot of important data in there that's heavily regulated by the SEC. Are you using it in a way that's going to be problematic or cause the information to be wrong, or are you using it to take this sentence and just make it a little bit punchier? Unlike other technologies, which I think were a little bit more limited, the uses are as varied as the number of people who might be using it, and so that was one of the challenges; it required some creativity in designing policies and procedures. If you just have one use case in mind and are very prescriptive, it might work really, really well for that use case, but not more broadly around the firm. So what we tried to do is both, on the front end, ask what the use cases are today and the foreseeable ones, and then also, even if we thought we had some good policies in place, have a working group or some kind of governance team, or just an open dialog depending on the size and culture of the firm, so that we don't take false comfort that we had a great policy for our needs today when the needs changed dramatically over a matter of weeks or months, and now you have this mismatch between the policy and what's actually happening. Our clients are subject to routine examination by the SEC, and those exams can go very deep and be very thorough.
When the SEC comes in, they're very focused not just on what your policies and procedures look like on paper, but also on how they're implemented.
So you could actually have really good systems: you know exactly what your people are doing, they're constantly checking in with you. But if you didn't write that down and put it on paper, you're not going to get full credit from the SEC. The analogy I always use is math class in school, where you only got half credit if you got the right answer but didn't show your work. It's very similar on an SEC exam. The other side of that is, as a compliance officer, you could design a beautiful policy on paper, but it's so impractical that nobody's actually following it, and then you have a ton of violations of your own policies. So it's really trying to find that happy medium, and I think with something like AI, it only works if you're in constant dialog with the people using it.
Layton: I think that's a great metaphor, showing your work. Given how illegible my handwriting can be at times, I'm not sure it would have counted as showing my work. But that's beside the point.
Koscuiszka: I put a lot of time and effort into my penmanship classes in school. Now I type all day long, so it doesn't really matter. Better to have spent that time on math and technology instead of handwriting.
Layton: Well, for those of us who are maybe earlier on in our use of AI: I know some might be at the point where this is old hat, with, as you were saying, a decade or more of machine learning model use already, while others might have scattered use cases throughout the organization, where maybe HR is considering using it for resume scanning and marketing might be using it, as you mentioned, to improve the delivery of a particular message. For folks who are on the earlier side and not using the more advanced in-house development capabilities, what are some of the first things they should do beyond what we've covered earlier, with their core policy and procedure work up front?
Koscuiszka: Yeah, so I think it's important to be open to educating yourself and to ask lots of questions. It doesn't help anyone to pretend you know everything about it on day one. And actually, when people are nervous about technology because they don't think of themselves as technology people, they don't want to ask a stupid question, and that's where we can end up in a bad place. I always joke that I'm a history major and an English minor, but I happen to focus on things that are very relevant to tech. I've been doing electronic communications, AI, alternative data; whatever the case is, you become the expert in that thing, and you learn how to rely on the actual experts, ask the right questions, and figure things out so you can handle your cases and litigate them. I bring that same energy and philosophy to any problem I'm trying to tackle. So if I'm sitting there with, let's say, a data scientist, it works both ways. I'm not sitting there saying, I know everything about the law, just defer to me and explain the tech to me. We're going back and forth. And I particularly like working with data scientists and people in technology, because they really care about how things work; not to make sweeping generalizations, but it makes sense that you'd be interested in how things are designed. So I'm often sitting there explaining, this is where the law stands today, this is what we're trying to solve for, and they're using analogies to explain to me how the technology works, because we're trying to get to the right place through that kind of brainstorming. So even if somebody calls me an expert, I'm not afraid to ask the stupid questions, or what might sound like a stupid question, and say, hey, can you explain that to me? When we're talking about fine-tuning, what exactly do you mean by it? There are already a number of terms of art in AI, but you shouldn't be afraid to ask what they are. Not to make any negative analogies to crypto, and I did a lot of work in crypto, but I did feel like there was a time in crypto where if you asked questions, the attitude was: don't ask; either you're a genius or you're not. It had this weird kind of gatekeeping approach that actually stopped people from getting involved or made them feel like they didn't know what it was. AI, I think, is a little bit different, because it's not a small group of people doing it; it was just everywhere, all at once. And you can find lots of different ways in: there are great professors putting out really interesting stuff that's very accessible, and I think that lack of intimidation really makes a difference. What we've seen, at least in my field, is that historically, when compliance officers said, oh, that's a little too technical, I'll leave it to somebody else, that hasn't gone well on their SEC exams, where there's an expectation that compliance, whether it's fair or not, is involved in all aspects of the business. But when you see it as, look, I'm here to protect you, but if I don't understand what you're doing, then I can't do that for you or for our firm, I think that fosters a culture where you're working together.
Layton: I really appreciate that; even in our interactions, I think the collaborative approach has been very apparent, so as a client I can only imagine. That collaborative approach you're describing, not just in your interactions with clients, but in your clients' interactions amongst each other, between compliance leaders, technology leaders, and members of the different business units, is so essential, because there's so much that we cannot possibly understand about the entirety of the realm that is AI, the entirety of the realm that is the law, the entirety of the realm that is a given business. We cheat ourselves when we assume we can know all there is to know and not ask those questions. It's so important to tap into leadership and expertise like yours, and if you're a technical person, to work closely with your compliance counterparts and vice versa, because there is so much to learn there, and we're ultimately all marching toward the same goal. So, moving on to the key risks firms face when it comes to using AI: you've hinted at a number of these already, but are there particular areas you'd like to highlight as the greatest areas of risk?
Koscuiszka: Sure. Firms need to consider where they're putting their proprietary information. Do they feel comfortable on the front end, from both the cyber perspective and the terms and conditions? Are you putting information into a system that's going to train on it or share it more broadly? That might be okay for certain types of information, but you really have to understand that before you approve it for a certain use. And so some of our clients will take what I call a bifurcated approach. If we're using something in the public domain, where we have less control over it, where it might train on our data or we can't be sure, and we can only trust the cyber controls so much, we might say, okay, no proprietary information can go in there; we're just doing simple queries. On the other hand, if we're building something internally, we're comfortable that it's as secure as any of our internal systems, and we've negotiated contracts or there are very clear terms and conditions (we're not going to train on your data, we have no rights in your inputs, we have no rights in the outputs), we'll feel very differently about using that. So to me, that's very important on the front end. Then, once you're comfortable with the systems you're allowing your people to use, I think the key piece is ownership of accuracy. We hear a lot about hallucinations and the human in the loop. Most of our clients are using it in an experimental way, or very much in parallel with other systems, and we just want to make sure you have those checks on it. That requires integrity from your people: they should never be passing something off as something they worked on for 10 hours over two weeks if it's really something that was spit out of AI in two minutes. And some of the results you get from AI, at first blush, just seem so accurate and so confident, but if you don't really know the underlying material, you don't realize that, okay, AI changed a few words to make that sentence tighter, but it actually changed the meaning. So while it's going to be a great tool at all levels, I think at the junior levels you still really have to reinforce the underlying skills, because if you don't, you'll have a group of people who got by the first couple of years, because AI did what first-, second-, and third-years at law firms or investment firms used to do, but who don't actually have the underlying skills. So you see it as a tool, and you really focus on how to ensure accuracy. That can look different: at some firms, it's being very clear about who the owner is; at others, it's labeling things internally, because I might think a little bit differently about a document if it was created using AI versus if it wasn't. And what I like about using things in parallel is that you start to realize: where am I better, where is the AI better, and how can we put those two together so that we end up in a much better place? I think clients are also developing processes where things are done in parallel and there's always a human checking it, because that's one of the easier ways to get comfortable. Over time, we won't want the human involved as much, right? We're supposed to get much more efficient with this.
So the developing, the testing, the oversight, the monitoring: that makes sense, because when that first mistake happens that impacts markets or is really disastrous for an investment firm, the SEC is going to be in there real fast, and it's going to be second-guessing everything. You want to be able to say, 'Look, I know this happened, but I had the right policies. I had the right training of my people. We were trying to do everything right. We even had testing in place. Yes, there was an error, but it's not because we didn't try.' Versus a firm that has something bad happen but can't point to any of that, because they were just so excited to embrace the technology, or didn't have the right controls in place, and now they're on their back foot trying to explain why they didn't have that before launching this type of technology. And then the last piece for investment firms: as we've seen, the SEC has brought so-called AI washing cases, where you're talking about your use of AI and it turns out it's not accurate. You always want your statements to be accurate. The SEC can come after you under the anti-fraud provisions, or even the marketing rule, depending on what it's in, but particularly in hot areas where you have people excited and promoting it, you're maybe losing a little bit of control of the narrative. That's where you really want to bring your people in: we know you're excited about it, but everything we say has to be accurate. So either you have a tight group of people who are allowed to talk about it, or you have very clear talking points, and you really train people: look, I know you're excited, I know this is coming from a good place, but getting too far out and misleading investors or the public could be a real problem for our firm. We have to think carefully about what we say and make sure we can back it up and substantiate it.

Layton: It's interesting thinking about the hype around AI, and even the hype AI produces itself. I think it's fascinating when you think about accuracy and how we talk about it. On the one hand, it's so important, as you mentioned, to have an accurate portrayal of what you intend to do with AI. At the same time, I think it's also fascinating that AI is like the most confident student in class, but not necessarily the best student in class. There's a lot of information there, and it always portrays things as the right answer; that's what it's been trained by humans to do. So it is so easy for us, as you've noted, to get caught up in the hype, because we see something presenting answers so confidently that we maybe trust them a little more than we should. That's why it's so important, as you've noted, to have that human supervision component, particularly with more junior resources, where we might be inclined to take the AI's enthusiasm as a sign of accuracy. And similarly, getting caught up in the fervor: we have positive results delivered by AI ten times, and we say, oh, we must market this. It's very important that we are careful in how we're marketing our capabilities there. So I appreciate those highlights. Thank you.

Koscuiszka: Yeah, and I agree, it's about what you're solving for. So when I use it: I'm a very factual writer, right? That's what I do all day long; I'm just writing in bullet points or in emails. So if I'm going to do a client alert, or I have to do something like my own bio, I can recognize good, somewhat creative writing, but it's very, very hard for me to do it myself.
So whereas I used to rely on a colleague and say, hey, can you take this paragraph, I know the content is accurate, can you make it punchier, I will now do that with AI. And that's where I've seen it work: I know the content really well, and I kind of know where I want to get it. It used to take me two or three hours; now I put in a prompt saying, could you make this sound a little more professional, or, I think these two sentences are redundant, and it's really helpful because I know the underlying content. Sometimes it changes my words in a way that looks really good, but I'm like, actually, that makes the sentence mean something different, and I can catch that because I've been practicing for a long time. So it's talking to associates and saying, what are you using it for at work? If it's going to take the place of all the research you used to do, that's not going to work. We had this with Google years ago, when I always said, start and end your research with Google. You can't exclusively rely on Google, but it can sometimes point you in the right direction, and if there was something you could find in a 30-second Google search that all your other research didn't turn up, you don't want to be embarrassed by that. So I think it's figuring out where it fits in, and especially when people you're supervising are using it, how are they using it? Part of the feedback can be, when you get work product and you're wondering how we got here, understanding not just what the thought process was, but where AI came in. That can be a good teaching moment as to why AI worked well here or didn't work well there.

Layton: I will say, one of the topics that has come up repeatedly in previous conversations we've been a part of is AI transcription. Ironically, we are using it currently, but recognizing it's been such a hot topic in finance and in legal, just a couple of thoughts: what are some of the main concerns that folks should be considering when they are evaluating potential use of an AI transcription tool?

Koscuiszka: Yeah, I think it would have been an important topic in any environment, but the texting sweep by the SEC against broker-dealers and investment advisors happening at the same time created extra anxiety around it. The SEC has taken a very expansive view of the types of messages that should be captured, and at the same time you have something new coming out and new record-keeping questions, so everyone was a little bit in a tough spot trying to deal with the new technology and also a new interpretation, and some overlap between the two, although I think texting with a human and working with AI are two distinct things. So what we focused on are a couple of things. What are your actual record-keeping obligations? What are your surveillance obligations? Because often the SEC looks at these things, including texting, not only from what you actually kept under the record-keeping rules, but also what you were keeping and surveilling under the compliance rule and certain rules around insider trading. Then we also think about: even if we feel we have satisfied all of our obligations under the technical rules, what are we doing with these transcripts? Do we want to have lots of transcripts around? Are they accurate?
Is it going to be problematic that suddenly conversations that were really informal are recorded? Even though nobody was doing anything bad, maybe we were really sarcastic, and I didn't think about how that was going to look in a transcript two years later. You also have litigation risk when you have a lot of records around. If you're not thinking about not only what you have to keep but also what's prudent to keep, it can often put you in a worse position for litigation, either because you have so many more records to review and you've driven up the cost, or, again, because of what's sometimes referred to as hygiene: things sound bad even though there isn't really something bad happening. And how do you, on parallel tracks, both encourage open debate and make sure you don't suddenly have a record that could be misinterpreted? So we've spent a lot of time on this with clients over the past few years, and continue to, as you saw in the webinar we did together, where the questions dominated maybe 40 minutes of the hour, which was great and really interactive. I think people do want to get this right. They're a little bit gun-shy, and we don't have guidance from the SEC, so it's not surprising that that's where a lot of the interest is coming from, and a lot of the anxiety. Also, I think your AI note taker just quit; I noticed it suddenly disappeared.
Layton: I think it's offended. I understand entirely; I would be too if I were the AI note taker. And it's interesting, thinking about AI note taking and what you were describing earlier with your own writing, the limits of large language models. Because they're general language models today, they don't necessarily speak the language that you speak on technology and the law quite as fluidly, so there are certain terms they could misinterpret. There are limitations when we don't have an expert language model. And that's one prediction I'll make for the future of AI in general: we're going to see more of these micro language models with a high degree of specialization, the way we have human beings with expertise and a high degree of specialization speaking a common language. So I'm looking forward to that, and when they get better, maybe they can learn a little bit from you. Hopefully no offense taken, AI note taker. I know we don't have a lot of time before we've got to let you go, but we've not really dived into the SEC and the full magnitude of the actions that have been taken pertaining to, let's say, off-channel communications, AI washing, and so many of the other activities related to technology. I think we could probably fill four hours with this topic alone, but are there any highlights that we've missed that you'd like to tee up?
Koscuiszka: I mean, as I mentioned with AI washing, I think it's pretty straightforward: don't lie, and don't speak about something if you're not able to back it up, especially when it's such a hot topic. For electronic communications and record keeping, we're still continuing to see the actions come out; expect more before the fiscal year ends for the SEC, which is September 30. I do think, as an industry, we've moved a whole lot in the past few years, and enforcement actions tend to lag a little bit behind the examinations, which will be with us for a long time. We are seeing a shift: managers who have gotten their texting issues to a better place (nobody's perfect) are seeing that go better for them on exams, with some of that effort being rewarded, although it's still pretty stressful. I'm actually cautiously optimistic that AI, even as it creates lots more records and more record-keeping issues, will help provide a solution in the future to some of these electronic communication issues. One of the hardest things is that we use these things for convenience, and with text and WhatsApp, it can be hard to separate personal from business. I think AI will bring us solutions on that front that are pretty good, so that we can continue to have the convenience, the work messages will end up where the work messages need to be, and we'll be able to protect people's privacy and personal issues. So I see AI as a solution to some of the problems, and hopefully I'm not wrong about that.
Layton: So, a positive outlook on how AI can actually help us be compliant, and not just be the subject of an examination. And also good advice: don't lie. Well taken. All right, in the final moments: first off, thank you so much for your time here; it's always a pleasure speaking with you. Are there any key takeaways that we've missed that you would like to leave folks with before they go on with the rest of their days?
Koscuiszka: I think you've done a really good job of hitting all the topics, so I don't have any words of wisdom that I haven't already shared, but I look forward to coming back, because I imagine we could be here a year from now and be having a completely different conversation. Whereas with other things I've always felt I could predict at least six months to a year out, with AI it's a constant guessing game.
Layton: We could go ask ChatGPT what it thinks we're in for next, but I'm not sure it knows either. Well, thank you very much for sharing your time with us today. I'm looking forward to working more with you in the future to share more updates as we learn more in this constantly evolving world of AI and regulations in the financial sector. Thank you.