What happens when artificial intelligence meets legal briefs, and why AI is no substitute for attorney diligence
The recent United States v. Farris decision out of the Sixth Circuit is a stark reminder of the ethical responsibilities lawyers must uphold when using AI in their practice, particularly when it comes to ensuring the accuracy of legal filings.
In Farris, an attorney was appointed to represent the defendant in a criminal matter and directed a staff member to use Westlaw's CoCounsel, a generative AI tool, to upload court documents and generate first-draft appellate briefs. Having only recently adopted CoCounsel, the attorney did not sufficiently review or verify the output. His principal brief contained inaccurate quotations and misrepresented holdings from United States v. Washington, 715 F.3d 975 (6th Cir. 2013), and United States v. Anthony, 280 F.3d 694 (6th Cir. 2002). The errors were not caught through standard legal review; instead, the court itself flagged them and issued a show-cause order.
The Consequences of Misusing AI
The fallout was significant. The errors consumed substantial judicial resources, requiring the court to investigate whether AI had been used, coordinate a response, and take additional steps in the appellate proceedings. Defendant's counsel was disqualified from further representing Farris and was denied compensation for his time on the appeal. The court also initiated disciplinary proceedings. Most consequentially, the ordeal delayed resolution of the underlying criminal case, leaving Farris to wait longer for an outcome.
Key Takeaways for Legal Professionals
This case is yet another example of how AI should, and should not, be used in legal practice. It is not the first instance of improper reliance on generative AI (see McCarthy v. DEA, No. 24-02704 (3d Cir. Sept. 12, 2024)), and it will certainly not be the last. As these tools become more prevalent, the court's ruling stands as a cautionary tale about the dangers of relying on AI without proper verification.
- Know your tools. Understand the capabilities and limitations of any AI tool you use. Deploy it to enhance your work, not to cut corners. If you're unsure how a tool works, investigate before relying on it.
- Verify and validate. Even when AI produces a seemingly accurate draft, always double-check citations, legal arguments, and factual assertions. Do not delegate this responsibility to non-attorney staff; an attorney must review and own the work product.
- Set clear policies. Firms and legal departments should establish clear guidelines on the use of AI in legal work. Staff must understand both the importance of verifying AI-generated content and the consequences of failing to do so.
- Ethical obligations remain. Technology, no matter how advanced, does not absolve an attorney of the duty to act competently and ethically. The duty of verification extends to all forms of legal work, including work assisted by AI.
United States v. Farris is a clear reminder that while AI has tremendous potential to assist with legal work, it is no substitute for an attorney's core responsibilities. As the court noted, the duty of competence requires keeping up with “changes in the law and its practice,” including “relevant technology.” As the legal industry continues to embrace these tools, rigorous oversight and verification will be essential to maintaining ethical standards and protecting the integrity of the legal process.