Anthony Diana is joined by Therese Craparo and Marcin Krieger to start a new series on AI-enabled e-discovery. This series will look at practical and legal issues related to the use of AI in e-discovery, with a focus on actual use cases rather than theoretical discussions. Topics for this episode include how AI is currently being used in e-discovery, the rapid proliferation of tools on the market and the speed of their adoption, and guidance on how to get started – including key do’s and don’ts.
Transcript:
Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Anthony: Hello, this is Anthony Diana, and welcome to Tech Law Talks. Today, we are starting a podcast series on AI-enabled e-discovery. The series will focus on practical and legal issues to consider when using AI-enabled e-discovery, with a focus on actual use cases, so not just the theoretical. Joining me today are Therese Craparo and Marcin Krieger, colleagues of mine at Reed Smith. Welcome, guys, and let's get started. So let's start with just an idea, Therese: what are we seeing right now in terms of people actually using AI in e-discovery?
Therese: Yeah, I think what's interesting is we're really starting to see an uptick in the use of AI in e-discovery in a few different areas. I think the first, and where we're seeing a lot of really promising AI technology, is in data knowledge and data farming. So speed to knowledge, what we call early case assessment. It's taking that initial data set and using GenAI to quickly identify key concepts, key issues, anything that you want to do to get a handle on that data and the facts of the case, the potential facts of the case, sooner rather than later. We're really seeing a lot of that. And as a corollary to that, the use of GenAI for investigations, right? Because that's really where a lot of this early data assessment comes in handy, zeroing in on the key issues and the key events really quickly to be able to make a quick assessment of what legal actions need to be taken. We're also seeing accelerated privilege models, so the ability to more effectively and quickly identify privileged documents and privileged information to better facilitate privilege reviews, and along with that, of course, document review. We are absolutely seeing GenAI tools popping up that are helping to facilitate what we'll call next-gen document review, automated review, again, ways of more quickly, efficiently and effectively identifying relevant documents to move them through the review process. Of course, we're also seeing use of AI for what we'll talk about as data analysis and presentation, right? Now I've got the documents, I need to identify the data for my witness kits, for what I'm going to use at trial, to look for timelines of events and things like that, to really get a handle on the actual relevant data and what's happening in the case. And then I think some really fun applications of GenAI, like translations. So on-the-fly translations, instead of hiring armies of folks who can review in various languages, or even interpretation of the data without the translation, something that's in another language, to get a sense of the content without actually having to translate the data. So I think there are a lot of really interesting and promising areas for AI-enabled e-discovery right now. And I'm sure Marcin has thoughts on that as well.
Marcin: Yeah, for sure. You know, it's interesting. There's been an absolute explosion of AI tools coming out based on generative AI, because it's a very new way of analyzing documents, a really new way of getting at data. But the actual adoption appears to be a little slower than the interest. The interest is everywhere. Everybody wants it. There's a lot of hesitation in terms of how to adopt it. There are still many clients that are struggling with even the classic AI, the use of TAR. And I know that we're going to talk in future podcasts about how to use all the different types of AI in the right way, at the right time, in the right place. But when you actually look at what's available in the market and in development, many of these tools are still in their infancy. And a lot of these tools are products that are looking for a problem to solve. And so what that means is that clients are seeing these tools and they're saying, you know, how do we adopt them? How do we best use them? And they're going to their outside counsel, their discovery counsel, and saying, how do you use this? How will you use this? And so there's just a forest fire of interest and demand. And yet there aren't as many advancements, surprisingly, in terms of broad adoption. I think that everyone is tiptoeing around to see who's going to be the first one to make the big splash with the use of GenAI in discovery, who is going to be brave enough to be the first one to gain judicial acceptance with the use of GenAI. And it's a very exciting time to see what's going to be the right case, what's going to be the right judge, what's going to be the right court that makes that possible. But what I will say, though, is the tools are fascinating. The tools are interesting. Like you said, Therese, speed to knowledge. We have tools that allow you to ask open-ended questions, your classic chatbot model, and you can take almost as much data as you want and glean key insights very, very quickly. Not only in ECA, but also in those midnight data dumps the night before a deposition, when opposing counsel drops 5,000 documents on you, hoping to bury the relevant information. Yet, if you have the right tool, you can ingest that and in moments sift through it to get to the most important information for tomorrow's witness. We're also seeing advancements in per-document review, a totally different use of GenAI, which isn't just "chatbot my whole corpus," but how do I code each document for relevance? This new version of TAR, we can't really call it TAR 3.0 because it's a completely different way of looking at the world of document review, but it is essentially the next evolution. So going to these conferences and watching these products evolve and get better every week, every month is just absolutely fascinating and super exciting.
Therese: Yeah, and just to quickly pick up on that, when Marcin's talking about the interest around it and the lag in adoption, which of course, as he said, we saw with TAR, I think what's really exciting about the GenAI tools is that there's such a variety of potential uses that, in some ways, a lot of them carry lower risk. A lot of the hesitation with TAR is, I don't want to have legal fights over it, I don't want to have discovery about discovery, which I think held a lot of people back. But the variety of these tools that are being developed gives the opportunity to start dipping your toe in the water without necessarily having to go to those areas where people are a little bit more nervous, which I am hopeful will really drive, you know, adoption and, you know, development going forward.
Anthony: Yeah, and I think it's telling, right? We've talked about this a lot: TAR, the use of machine learning, artificial intelligence, has been around for more than 15 years, right? And it's been accepted by the courts and the like. Even with that, there's relatively low adoption, right? I'm struck by the fact that, at legal tech conferences and the like, I was talking to a lot of vendors and just asking the question, how often is this being used, right? How often do you actually use TAR? And generally, they said maybe 20% of the cases. Again, these are large cases, not the small ones, where maybe it doesn't make sense. But even in large cases, because of some of the things you talked about, even though there's acceptance, people are like, well, I have to negotiate this or whatever, I'm going to put it off. But I think the legal industry as a whole has probably failed if TAR is only being used in 20% of large cases. And I don't think we have that option for GenAI. I think it's moving so quickly. I think we have to get past it. And importantly, as you mentioned, in some ways GenAI adoption may be faster, not in all use cases, but maybe faster, because for a lot of the use cases we're going to go into in this podcast series, you don't need the other side to agree, or you don't need the court to agree. And we're going to get into a little bit of that, of how you make sure of that with protective orders and the like. But generally, I think we're going to see much faster adoption. I think we're seeing it. I mean, you know this: almost every one of the matters I'm working on, people are using GenAI. We use Harvey here. But everyone is now used to it. They're trained on it. They're using it in their everyday work. I think that'll actually help on the e-discovery side, because people are using it and they're like, I do this all the time. So next is, okay, we talked about it. Generally, there's a lot of excitement. How do we get started? So, Marcin, an in-house attorney is sitting there. They want to start using it, but there are lots of issues. How do they just get started using AI-enabled e-discovery?
Marcin: Absolutely. So one thing that I might slightly disagree with you on is your statement that everyone's using AI and we're comfortable using it. I think there's a very interesting nuance to that. Everyone is using AI, but they don't actually know they're using it. We use AI every day, every time you talk to your cell phone or now any time you go to Google. And for attorneys that are hesitating about using AI in their practice, they just need to realize you've been using AI. And if you want to take yourself to the next step of using AI as part of your practice, one of the first ways to do it is to just sign up for a free account with one of the existing models, not for client work. For example, I personally got an account with OpenAI on ChatGPT, and my daughter and I use it all the time because, you know, with our hobbies (for me, it's vinyl records), I got very comfortable with learning the right way to write prompts, the right way to understand responses. That's step one. Get comfortable with the technology. And, you know, there's a lot of guidance out there. We've done CLEs on things like competence with the use of AI. But the first step is just get comfortable with how GenAI works generally. The next thing that you do is survey the market. Find the right tool. Don't find a tool and then go looking for a problem to solve. Have a problem and find the tool that solves it. You mentioned we use Harvey. Harvey is a fantastic tool, as are many others. It's a jack of all trades that lets you do a lot of things, but that may not necessarily be the right tool for a particular attorney's problem. Maybe you're a transactional attorney. Maybe you're an M&A lawyer. Maybe you're a discovery attorney. And if you're listening to this podcast and you want to learn how to take that first step, well, there are a lot of attorneys out there that specialize in AI, and we can advise you on it. But the most important thing is, don't just grab a tool and then try to force it into your toolkit. Have a problem, go out there, survey the market. That would be the first big step. We call it try before you buy, right? Then there are a number of other steps you have to be aware of; we can't really get into all of them here. But if you look at the ethical guidance that's out there, if you really want a great primer, the ABA guidance that was published a year ago actually has the framework for the things that you need to do to get started once you've found the right tool. And in a future conversation, we'll go much deeper into these things. But there are steps that you have to take in terms of knowing how to protect your data, knowing how to protect your client's confidentiality, understanding the landscape, be it the regulatory landscape or NDAs or maybe even restrictions from the court. But that's the next step. Find the tool, then make sure the tool is used in a way that complies with your obligations, including data privacy. And then the last step, of course, is learning how to validate what the AI is doing. And that is a very product- and use-case-specific conversation, which we will have down the road.
Anthony: Yeah, and I think one of the things that I hear from in-house folks is they're going to have to make the business case. It's not enough to just say, I'm going to use it, because they may have policies and the like that say you have to be super careful when you're using AI. I think that helps. And that's why I think this podcast series is going to focus on use cases, because that's the key: have a use case that actually makes sense and then say, okay, here's the tool that fits that use case, whether it's privilege or something like that. And then explain to all the stakeholders why the present tools don't work, right? And why this is a better solution. I do think it should not be vendor-driven, which I think right now it is a little bit, because every vendor is coming out and saying, here's another tool, here's another tool, here's another tool. It really should be client-driven, right? The client should be saying, this is what I need. Get me a tool that does this, because this is a pain point for me. And I think we're not quite there yet, but I think we'll get there.
Therese: And I think, for me, that's really the most important thing. There is this myth, I think, and Marcin touched on this, that AI can do anything. Any AI tool can do anything because it's AI, right? And I think that's a dangerous thing for people to think, because in reality, the most effective AI tools are the ones that are designed to do a specific thing. General-use AI tools can be fun and interesting, but they're not usually very good at solving the particular problem that you want to solve. And so I think for most people, it's really important to focus on tools that are developed to do the specific thing, whether that's review, whether it's data analysis, whatever it is that you're looking to do. Look for the tools that do that and definitely test them out. Start with testing them, right? Because, you know, people are looking to sell their tools, right? AI is hot. Everybody wants people to buy. And it's really about taking your time to evaluate the tools to find the ones that are custom-designed to do what you want to do. They can be incredibly effective. It will also be much more efficient and cost-effective for you. And it avoids the "oh, that didn't work, now I never want to use AI again" problem. And I think when you're looking at it that way, you're much more likely to both get comfortable and to truly find the tools that are going to help with the work that you're doing.
Anthony: Yeah, and I think the reality is, particularly for AI-enabled e-discovery, that an effective program (and you probably should have a program, not, you know, just a focus on one product) is probably going to need lots of tools. And I think that's going to be a challenge, right, because you have to evaluate each of them. But it's not going to be, I can go to this one vendor for all my AI and e-discovery needs. I know the vendors aren't going to like that, but not all of them are going to be good at everything, right? It's impossible, so they're going to have to focus and prioritize based on what they're hearing from their clients. So I think that's one of the things: as an in-house person, or even a law firm, you have to figure out what tools make sense. And it's going to mean you're going to have lots of vendors, which is hard to manage. But okay, so now let's talk a little bit, Therese, about how you avoid getting in trouble when you're getting started, right? Everybody wants to use it. There's a rush to use it. What are the high points, and we'll get into them more in the podcast series, but what are some key things to think about so you don't get in trouble when you're getting started?
Therese: Look, I think the number one thing to always keep in mind to make sure that you don't get in trouble is simply to be mindful of your obligations as a lawyer, right? I mean, we have obligations as lawyers, and if we are following those the way that we should, we're going to be, you know, catching some of the potential gotchas that can get you in trouble. So number one, right? Confidentiality. We have an obligation to keep our clients' data confidential, whether you are in-house counsel or outside counsel. When you're looking at these tools, you need to make sure that they will properly protect that data, that they're not using that data for something else, that they don't disclose that data to someone else. And I think the most obvious one, of course, is: don't use public tools with confidential, sensitive data. And I think that's number one, right? We are always mindful as lawyers that we need to protect our clients' information. You want to ask the same questions about any AI tool that you're using: make sure all the protections and safeguards are in place to ensure that data is kept confidential and used properly. Number one. Number two, while we're talking about how you should use GenAI: don't trust GenAI. That doesn't mean, you know, it's not valuable. It means we have an obligation to check everything and to look at the responses we're getting, whether that's from a human being or a computer, to make sure that that information is accurate. We all know AI has the potential to hallucinate, to provide information that's inaccurate. It isn't actually thinking and giving you an analytical answer. And so always be aware of the fact that you can't trust it on its own. You have an obligation to check and to validate that data and to make sure that you have mechanisms in place whenever you're using it so that the data is checked and validated. Next, being mindful of your obligations for what you can and cannot do in using GenAI. Does your client have requirements about whether or how you can use AI? Do they require you to disclose it? Does your firm have requirements for how you can use GenAI? Make sure you're aware of those obligations and those rules and that you are following them very clearly. Again, you always need to be mindful if you're using a particular AI tool. We see a lot of companies coming out, some saying, no, you can't use it without my permission, some saying you absolutely better be using AI in my matters. So it's being mindful of what the parameters are for your organization and your client to make sure that whatever use you are making is consistent with that. Disclosures, right? Being mindful of when and if you need to disclose the use. And again, whether that use is in a case or with other people involved, if the AI is being used on their voices or their information, be aware of what the requirements are and whether or not you have an obligation to disclose, and if so, what those disclosures may be. And I think another one, when you're going down the road, is protecting the data. Is someone else going to use AI on your client's data? If so, think of the other side or the other party. Are there parameters you want to set? Do you want provisions in protective orders and the like that are going to protect the use of that data? And then I think finally, and this is what, you know, tripped everyone up quite a bit with TAR, is validation.
If you are using AI, what are your validation mechanisms for defensibility purposes? If you are going to use it in a case and you are going to have to defend it, you have to be able to say not just, I was using AI and it's fine and I have a really good vendor and it's a really good tool. You have to be able to say, and here's the mechanism I used to validate the results, so I know that it's working and that my response, my production, whatever it may be, met my legal obligations.
Marcin: So I'm going to go backwards then, because I'm going to start where Therese ended. You don't want to become a headline. Check your cites, check your cites, check your cites. It is the easiest thing that you can do when you're using AI, and yet it's the number one thing that will land you in the news. From there, things get a little more subtle, because we only hear in the news about hallucinations when it comes to things like citations. But AI has the tendency to come up with incorrect conclusions; the summaries may be slightly off. You have to know how your tool works in order to be able to get an inherent sense of when it's not doing what it needs to be doing. That comes with practice and familiarity. But ultimately, attorneys are skeptical. And when you use AI at any level, the level of validation that you need, of course, depends on the task and the kind of AI you're using. But remain skeptical. Automation bias, our inherent desire to trust computers, is the number one way that you fail to validate. So we can talk a lot about validation in future podcasts, but that's how I'm going to end my little rant here. The easy thing is check your cites; everything else from there becomes a lot more nuanced, but remain skeptical. Learn the way your tool works so that you have a sense of when things aren't going the way they're supposed to.
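To make the validation point above concrete, here is a minimal sketch of one common approach the speakers allude to: have humans code a random sample of documents and estimate the recall of the AI tool's relevance calls. This is a hypothetical illustration only; the function names, sample sizes, and simulated rates below are assumptions, not figures from the episode or from any real matter or tool.

```python
# Hypothetical sketch of sampling-based validation for an AI-assisted review:
# estimate recall (the share of truly relevant documents the tool found)
# from a human-coded random sample. All numbers are illustrative assumptions.
import math
import random


def estimate_recall(sample, z=1.96):
    """Estimate recall from (human_relevant, ai_flagged) pairs drawn at random
    from the review population. Returns the point estimate and a rough 95%
    margin of error using the normal approximation."""
    relevant = [ai for human, ai in sample if human]
    if not relevant:
        return None, None  # no relevant docs in the sample; draw a larger one
    recall = sum(relevant) / len(relevant)
    margin = z * math.sqrt(recall * (1 - recall) / len(relevant))
    return recall, margin


if __name__ == "__main__":
    random.seed(7)
    # Simulated sample of 400 human-reviewed documents: ~20% truly relevant,
    # with a hypothetical AI tool that catches ~90% of them.
    sample = [(human, human and random.random() < 0.90)
              for human in (random.random() < 0.20 for _ in range(400))]
    recall, margin = estimate_recall(sample)
    print(f"Estimated recall: {recall:.1%} +/- {margin:.1%}")
```

The point of a sketch like this is simply that "validation" means a documented, repeatable measurement you can defend, not a vendor's assurance that the tool works.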
Anthony: Yeah. And I'm going to go back to, and I think this is one of the lessons learned from TAR, where a lot of people were scarred: it's people, process, technology, right? You have to have people who understand it. That's number one. But process is the one thing that people forget all the time. Validation is part of it. But we saw with TAR, where it was, oh, we're going to use TAR, it's going to save you all this money. Again, whether AI-enabled e-discovery saves money remains to be seen, but it should be more effective, right? And it's the effectiveness that matters. Maybe there are cost savings associated with it, but in the end, you want a process that makes sense. One of the things we saw with TAR was that there weren't developed processes around it, not just validation, but things like: we load it in, we do all this training, and then there's another set of custodians to load in, and suddenly it's like, oh, now we have to do the training again. And people are like, well, that's not what you sold me; it's not saving me any money. So I think, as you're figuring out how you're going to use AI, part of it is developing what the process is going to be, to make sure you're dealing with all these anomalies so that everyone is aware of what's going to happen when something happens, right? And I think that's key too. So as part of this podcast series, we're going to talk about TAR, because I think there are a lot of good lessons learned about what we did with TAR and how we can apply them here. So thank you, Therese and Marcin. I think this was a good episode that started us off and laid the groundwork for our series. I hope everyone joins us as we go through this journey and talk about AI-enabled e-discovery over the next few months and years. Thank you all.
Marcin: Thank you.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's Emerging Technologies Practice, please email techlawtalks@reedsmith.com. You can find our podcast on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.

