Reed Smith partner Anthony Diana sits down with Dera Nevin of FTI Consulting to explore how AI-enabled e-discovery is transforming litigation, and why the best time to adopt these tools is now.
From accelerating early case assessment to revolutionizing privilege review workflows, Anthony and Dera break down where generative AI is already delivering real results, not just theoretical possibilities. They discuss how forward-thinking legal teams are training large language models to surface key documents, generate case timelines, and dramatically reduce time-to-knowledge on complex matters.
Transcript:
Anthony: Hello, this is Anthony Diana from Reed Smith, and welcome to Tech Law Talks. Today we are continuing a podcast series on AI-enabled e-discovery. This podcast series will focus on the practical and legal issues to consider when using AI-enabled e-discovery, with a focus on actual use cases, not just the theoretical. Joining me today is Dera Nevin from FTI, a good friend of mine and a good friend of Reed Smith for many, many years. So welcome, Dera.
Dera: Thank you so much, Anthony. Thank you for having me on this podcast.
Anthony: Excellent. So, Dera, let's talk about AI-enabled discovery. We know this is the new thing, particularly the use of GenAI and AI agents and all of that. And it's obviously a lot of change, right? A lot of change in the e-discovery field. It has been pretty static, I would say, for the past 10 years. And now there's a lot of change. So I think a lot of my clients are really interested in what's happening out there—what's available, what's good, what's bad, what challenges we have. So today I really wanted to focus high level on what you're seeing and your opinions. Let’s start with GenAI and AI agents and the like. What do you see in terms of the marketplace—where we are today versus where we’re going to be in the next one or two years?
Dera: Okay, well that question's not broad at all, right? The great irony, of course, is that eDiscovery has been pioneering the use of AI for well over a decade. What’s happening now is a completely different category of technology coming into the mix, in various flavors. I’m not going to talk about the AI we’ve been using for years, even though I think some people are suddenly realizing they have been using AI for years in a CAL or TAR model. We’ll focus on generative AI use cases and how that is starting to be used in e-discovery. I think we’re still at the beginning of it, even though in the past year it has become much more pervasive and the quality of the technology baked into commonly available platforms—like Relativity or Reveal or some cloud-based platforms—has improved. The interesting thing is this is the worst it’s ever going to be; it’s only going to get better. A lot of the use cases we’re seeing are early-stage experimentation, but we’re already starting to identify places where it can have meaningful impact—not only on workflows, but on outcomes, whether cost or speed.
Anthony: Yeah, and I like hearing that because when we talk about e-discovery—EDRM—we’re getting questions about preservation, collection, review, production, privilege. There are so many areas where it’s in use, but obviously there’s risk because it’s new. I think at least for the major players, issues like security—not using public ChatGPT—have largely been resolved, though you still need due diligence. Where in the EDRM model are you seeing the most opportunity today? If someone wanted to start experimenting now, what process or tools should they focus on?
Dera: A place where I’m seeing good results—not perfect, but good—is at the earlier stages of the process. Interposing generative AI early in discovery, including as an ECA mechanism, has been effective. A lot of people want to jump straight to the end and say, “Review my documents for me.” But we’ve seen strong results when we receive a large inbound production and don’t fully understand its contents. Using LLMs to summarize, inventory, or list what’s inside gives us a directional understanding—enough to decide whether it belongs in the discovery workflow. It doesn’t need to be perfect; it needs to be directionally accurate. It helps us identify questions to ask and keywords to use during discovery planning. There are real opportunities to reduce volume, speed up understanding, and grasp case nuance earlier, which is a tremendous advantage.
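The inventory pass Dera describes can be sketched in code. This is a minimal illustration only, not any vendor's pipeline; `summarize_with_llm` is a hypothetical stand-in for a real generative-AI call, stubbed here so the example runs self-contained.

```python
# Sketch of an ECA-style inventory pass over a large inbound production:
# one short, directional summary per document to guide discovery planning.
# summarize_with_llm is a placeholder for a real LLM API call; here it
# simply truncates the text so the example is self-contained.

def summarize_with_llm(text: str, max_words: int = 12) -> str:
    """Placeholder for a generative-AI summarization call."""
    words = text.split()
    return " ".join(words[:max_words]) + ("..." if len(words) > max_words else "")

def inventory_production(documents: dict[str, str]) -> list[dict]:
    """Build a directional inventory of an inbound production."""
    inventory = []
    for doc_id, text in sorted(documents.items()):
        inventory.append({
            "doc_id": doc_id,
            "summary": summarize_with_llm(text),
            "word_count": len(text.split()),
        })
    return inventory

# Hypothetical production documents, for illustration only.
production = {
    "PROD-0001": "Email from CFO regarding quarterly revenue recognition timing and auditor questions.",
    "PROD-0002": "Vendor master services agreement, signed 2019, with indemnification rider.",
}
for entry in inventory_production(production):
    print(entry["doc_id"], "-", entry["summary"])
```

The point of the sketch is the shape of the workflow: summaries need only be directionally accurate to decide whether a document set belongs in the discovery workflow.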
Anthony: I agree. I think this is going to be a sea change. Typically, once there’s a complaint or investigation, we jump straight into review. Even though we’ve had ECA tools, I don’t know how fully they’ve been utilized. The speed to knowledge early on is critical. I’ve used AI tools to generate timelines and identify key people right after gathering documents. It surfaces who to talk to and what to ask. It’s not about replacing reviewers—it’s about enabling partner-level understanding so clients can quickly grasp the issues. That’s a massive opportunity.
Dera: Absolutely. One significant example involved a very large inbound production from an opposing party. A senior associate worked with our data scientist to train an LLM in the issues, law, and domain area of the case. We had the LLM review a subset of documents to identify responsiveness, fed that into a traditional TAR model, and ran an iterative process. The lawyer was impressed with how accurately documents were summarized and how well the risk and issues were reflected. After reviewing a small fraction of a very large corpus, they were prepared for depositions and next steps. They then applied the same approach to their own outbound production. Having a specialized LLM layered on top of the review process created a deeper understanding and enhanced case development.
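The seed-and-propagate approach Dera outlines, where LLM responsiveness calls on a subset feed a traditional classifier, can be sketched as follows. This is an assumption-laden toy: `llm_label` is a hypothetical stand-in for a case-trained LLM, and a nearest-centroid bag-of-words model stands in for a production TAR engine.

```python
# Sketch: LLM-generated labels on a small seed set train a simple text
# classifier that stands in for a traditional TAR model.

from collections import Counter
import math

def llm_label(text: str) -> int:
    """Placeholder LLM responsiveness call: flags documents mentioning
    a hypothetical disputed product line ('widget') as responsive."""
    return 1 if "widget" in text.lower() else 0

def tokenize(text: str) -> list[str]:
    return text.lower().split()

def centroid(docs: list[str]) -> dict[str, float]:
    """Average bag-of-words vector over a list of documents."""
    total = Counter()
    for d in docs:
        total.update(tokenize(d))
    n = max(len(docs), 1)
    return {w: c / n for w, c in total.items()}

def cosine(a: dict, b: dict) -> float:
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def seed_and_classify(seed_docs: list[str], remaining_docs: list[str]) -> list[int]:
    """LLM labels the seed set; the nearest-centroid model (the TAR
    stand-in) then scores the rest of the corpus."""
    labels = [llm_label(d) for d in seed_docs]
    pos = centroid([d for d, y in zip(seed_docs, labels) if y == 1])
    neg = centroid([d for d, y in zip(seed_docs, labels) if y == 0])
    def bow(d: str) -> dict:
        return dict(Counter(tokenize(d)))
    return [1 if cosine(bow(d), pos) >= cosine(bow(d), neg) else 0
            for d in remaining_docs]
```

In a real matter the iterative loop Dera mentions would feed classifier disagreements back to the LLM (or to human reviewers) for fresh labels; this sketch shows only a single pass.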
Anthony: I can see this being especially useful in investigations where you’re rushing to produce documents and don’t always fully grasp what’s in the production. The reality is regulators like the SEC and plaintiffs’ lawyers will use these tools on your production sets. No one is going to review every document manually anymore. They’ll use LLMs to understand the case quickly. So you might as well see what they’re going to see. It accelerates early case assessment and levels the playing field.
Dera: That leveling may actually be beneficial. Both sides will need to form a clearer point of view earlier, potentially changing litigation strategy. Large teams often struggle to create a uniform view of the evidence. An LLM can provide that unified perspective, allowing teams to test biases and scenarios, improving strategic decision-making.
Anthony: TAR has been effective, but training models can be time-consuming. Using GenAI to assist with initial training sets could reduce costs and improve speed. You can still rely on TAR for production since it’s court-accepted, while using GenAI earlier in the workflow without getting into disclosure issues about prompts. That way, you gain benefits while avoiding litigation over methodology. Are you seeing developments on the privilege side?
Dera: Privilege remains accuracy-focused, so we’re cautious about using LLMs for individualized assessments. However, using LLMs as a countermeasure in triage is promising. For example, if both traditional methods and the LLM agree a document is privileged, it goes into a human review workflow. If both agree it’s not, it likely isn’t. Conflicts go to further screening. Early indications show this streamlines workflow and improves speed. We’re also using LLMs to assist in generating privilege log summaries, improving preliminary outputs and drafting efficiency.
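The two-signal triage Dera describes reduces to a small routing function. This is an illustrative sketch only; in practice each boolean input would come from its own pipeline (a traditional terms-and-actors screen, and an LLM assessment).

```python
# Sketch of two-signal privilege triage: route each document based on
# agreement between a traditional privilege screen and an LLM call.

def triage_privilege(traditional_hit: bool, llm_hit: bool) -> str:
    """Return the workflow bucket for a document."""
    if traditional_hit and llm_hit:
        return "human_privilege_review"   # both agree: likely privileged
    if not traditional_hit and not llm_hit:
        return "likely_not_privileged"    # both agree: likely not
    return "further_screening"            # conflict: escalate

# Example: a document the terms screen flags but the LLM does not.
print(triage_privilege(True, False))  # further_screening
```

The value of the agreement structure is that human reviewers concentrate on the confirmed and conflicting buckets rather than the full corpus.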
Anthony: From a cost perspective, that triage is huge. If contract attorneys and the tool agree something is privileged, you may not need a law firm associate to review it. You can sample for quality control, but it’s a major cost saver while managing risk.
Dera: Exactly. Training LLMs to understand the case context improves determinations. We’re also using LLMs to supplement automation in privilege log generation, speeding up final entries.
Anthony: Have you seen strong use cases yet for AI agents in discovery?
Dera: There’s experimentation, but nothing I’d call fully ready. We’re seeing LLMs trained to perform tasks that agents may eventually take over. Right now, I’m focused on defining use cases that produce accurate, benchmarked results and allow documentation and human oversight. Good use cases involve implementing instructions or providing outputs for further review rather than replacing foundational decision-making.
Anthony: That makes sense. We learned lessons when technology-assisted review first emerged. Adoption slowed after some early bad experiences. Now people understand TAR better. I think we’ll see wider adoption again as understanding improves.
Dera: Interestingly, people adopt TAR without calling it AI because they now associate AI with generative AI. One of my favorite use cases involves historical or handwritten records. Traditional OCR struggles with cursive. We’re using AI-enabled OCR combined with LLM post-processing to predict likely words and improve accuracy. The results are remarkable. Even when I can’t read the handwriting, the AI-enhanced version clarifies it, and the content becomes searchable. It’s transformative for cases with large volumes of handwritten material.
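The OCR post-processing Dera describes can be sketched as a confidence-gated cleanup pass. This is a toy under stated assumptions: `difflib` fuzzy matching against a small vocabulary stands in for the LLM's context-aware word prediction, and the confidence scores are assumed to come from the OCR engine.

```python
# Sketch of LLM-assisted OCR cleanup for handwritten records: keep
# high-confidence OCR tokens and send low-confidence ones to a
# corrector. difflib stands in for the LLM so the example runs
# self-contained.

import difflib

# Hypothetical case vocabulary the corrector draws on.
VOCAB = ["agreement", "payment", "delivered", "invoice", "received"]

def correct_token(token: str) -> str:
    """Stand-in for an LLM predicting the most likely intended word."""
    match = difflib.get_close_matches(token, VOCAB, n=1, cutoff=0.6)
    return match[0] if match else token

def clean_ocr(tokens: list[tuple[str, float]], threshold: float = 0.85) -> str:
    """tokens: (ocr_text, confidence) pairs from the OCR engine."""
    out = []
    for text, conf in tokens:
        out.append(text if conf >= threshold else correct_token(text))
    return " ".join(out)

# A page with two cursive misreads flagged by low OCR confidence.
page = [("invoice", 0.97), ("recieved", 0.41), ("and", 0.93), ("payrnent", 0.35)]
print(clean_ocr(page))
```

The cleaned text is then what gets indexed for search, which is what makes large handwritten collections usable in discovery.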
Anthony: I’ve heard similar things about PDFs and image-based documents. AI image technology can significantly improve data usability. We’re still in early days, but there are many practical, low-risk use cases available now. Most don’t require agreement from opposing counsel, especially when used for early case assessment. I think this year we’ll see GenAI used in nearly every case.
Dera: I agree. In cases involving significant video or audio, AI will also make a difference. Being able to search for spoken phrases within video content will dramatically improve efficiency in discovery.
Anthony: Well, Dera, thank you so much. This was a great overview of where we are and where we’re heading. The theme seems clear: if you’re not using GenAI yet, start using it. Thanks again, and thanks to our listeners. This podcast is part of a series, so we hope you’ll join us next time.
Outro: Tech Law Talks is a Reed Smith production. Our producer is Shannon Ryan. For more information about Reed Smith's Emerging Technologies Practice, please email [email protected]. You can find our podcast on all streaming platforms, on reedsmith.com, and on our social media accounts at Reed Smith LLP.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.