Craig Chaney here, reporting from judicial orbit. Today’s mission briefing: examining one of the most closely watched developments in legal tech — a structured look, conducted by Northwestern University in coordination with The Sedona Conference and the New York City Bar Association, at how federal judges are engaging with AI.
Researchers surveyed a stratified random sample of 502 federal judges; 112 responded (a 22% response rate). The full report, Artificial Intelligence in Federal Courts: A Random-Sample Survey of Judges, will appear in The Sedona Conference Journal later this year.
Joining me in the command module are Sharon Ann Doherty, who will guide us through the data, and Marcin Krieger, who will interpret what it means for practitioners.
Crew Module Check: What’s happening inside judicial chambers?
Marcin: According to the survey, only about one in five judges use these tools with any regularity; the bench is still conducting early testing, and the use of AI has not yet blasted off. That said, given the pace at which this technology is evolving, I expect these numbers to shift dramatically in the coming years. Law firms that are not evaluating and adopting this technology are already falling behind in this “technology space race.”
Onboard Systems: Which AI tools are judges using?
Sharon: Judges prefer legal-specific AI tools over general-purpose ones. Platforms like Westlaw's AI-Assisted Research (38.4%) are more popular than general-purpose tools like ChatGPT (28.6%). Vendor familiarity and perceived reliability strongly influence which tools judges are willing to use.
Mission Functions: What are judges using AI for?
Sharon: Legal research is the dominant use case. Judges most commonly use AI for conducting legal research (30%), followed by document review (15.5%). Notably, very few use AI to draft filings or make decisions (1.8%). AI may assist with reconnaissance, but judges aren't letting it pilot the ship — they are not delegating judgment.
Marcin: This may be the most important finding in the entire study. The use of these products is just now beginning to lift off, and judges are drawing a bright line between using AI to find and synthesize information and using it to exercise judgment. What this tells me is that judges understand the difference between AI as a research assistant and AI as a decision-maker. All attorneys need to remain cognizant of their ethical obligations under the Model Rules, and of how those obligations guide where and how they use AI in their practice as this technology takes flight.
Flight Protocols: How consistent are chambers’ AI policies?
Sharon: Chambers' AI policies vary widely. About one in three judges permit or encourage AI use; roughly 20% prohibit it outright. One in four have no formal policy at all. Many judges noted they evaluate AI use on a case-by-case basis rather than applying blanket rules.
Marcin: The short answer is that they're not consistent at all—there is no universal flight manual. That's the point practitioners need to internalize. Do not assume that what was acceptable in one courtroom will be acceptable in the next. Check local rules, check standing orders, and if neither addresses AI, consider raising the question directly.
Reentry Guidance: What’s the takeaway for practitioners?
Sharon: The wide variation in court policies, from encouragement with guardrails to outright prohibition, means that practitioners cannot assume a uniform approach. The judge on your case may hold very different views on AI than the last judge you appeared before.
Marcin: While individual judges' tolerance for AI use varies dramatically, tolerance for its misuse remains universally low. Beyond local rules on AI and the preferences of the judge on your case, every lawyer, and those working under their supervision, must be competent in how they use AI tools in their practice. The upside of AI-assisted work product is quality and efficiency; the downside is sanctions, malpractice, and reputational damage. The survey makes clear that judges who permit AI use still require independent verification of every output. “Trust but verify” isn't just good practice; it's the price of admission, and there is no cutting the line for a ticket on this journey to the stars.
Behind the Scenes: Expedition Limitations
Like any mission, a few caveats are worth noting. The sample of 112 judges carries a margin of error of roughly ±9%. Results for appellate judges (only 6 respondents) are essentially anecdotal. Self-selection bias is also possible, as judges with strong views on AI may have been more inclined to participate. Additionally, the study focuses solely on federal judges and does not capture perspectives from state courts, leaving that as uncharted territory for future analysis.
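For readers who want to sanity-check the reported precision, here is a minimal sketch of the standard margin-of-error calculation for a survey proportion at 95% confidence. It assumes simple random sampling with the conservative p = 0.5, and ignores the finite-population correction for the 502-judge frame (which would shrink the figure slightly); the specific variable names are illustrative, not from the study.

```python
import math

# Margin of error for a sample proportion at 95% confidence.
# Assumptions: simple random sampling, conservative p = 0.5,
# z ≈ 1.96, no finite-population correction.
n_respondents = 112
z = 1.96
p = 0.5

margin_of_error = z * math.sqrt(p * (1 - p) / n_respondents)
print(f"±{margin_of_error:.1%}")  # roughly ±9%, matching the report
```

With the finite-population correction for the 502-judge sampling frame, the figure would drop closer to ±8%, so "roughly ±9%" is the conservative reading.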
As we return from orbit, one thing is clear: AI is becoming part of the legal toolkit. It is not replacing the decision-maker, but it is reshaping how work gets done. The question is no longer whether to engage, but how to do so responsibly. The law, like exploration, does not stand still; it expands to meet new frontiers.