AI is now a common part of the review process. Officers, directors, audit committee members, and in-house teams are using AI platforms to summarize drafts, generate issues lists, and test disclosures, often producing in minutes what would otherwise take days of manual review. That efficiency is real, and companies that use AI well will move faster without sacrificing quality.

But AI’s speed creates its own risks when a reviewed document contains material nonpublic information (“MNPI”), privileged legal advice, or other confidential information. A board member who uploads a draft preliminary proxy statement to a consumer large language model (“LLM”) may be transmitting MNPI to a platform that owes no duty of confidentiality to the company. A CFO who pastes deal terms from a draft registration statement into an unsecured tool may be creating a selective disclosure problem under Regulation FD. And any upload of a privileged document to a third-party platform that lacks adequate confidentiality protections carries a risk of waiver. These are not hypothetical concerns; Reed Smith is seeing them in practice.

The basic rule is straightforward: do not put a sensitive document into a public AI tool and assume settings will protect you. Use an approved platform. Strip out identifying information where possible. Assume every AI output needs human review. The six rules below translate that principle into a practical framework.

  1. Use only approved tools for sensitive documents. If the document contains draft SEC disclosure, board materials, financing terms, deal terms, or legal advice, the default answer is no: do not upload the document to an AI platform unless the company is using an approved enterprise platform or a controlled, counsel-managed workflow. Consumer-grade AI tools (free-tier LLMs and browser-based assistants without enterprise agreements) generally reserve the right to use inputs for model training, retain data indefinitely, and share aggregated data with third parties. Those terms are fundamentally incompatible with the confidentiality obligations that attach to public company documents.

    The risks of such consumer-grade AI uploads include potential Regulation FD exposure, waiver of attorney-client privilege, and gaps in the company’s disclosure controls. A single incident can trigger all three. 
  1. Anonymize where you can, but do not treat anonymization as a cure-all. Before using AI for review, remove company names, counterparty names, transaction structure, pricing, dates, amounts, and any other details that would let a third party reconstruct the terms. Use placeholders throughout. But remember that context, industry markers, and structural details can still identify the company or the deal even after names and numbers are stripped. Anonymization reduces risk. It does not eliminate it.

    Where possible, bolster anonymization by limiting inputs and providing only the portions of a document that are relevant to the task at hand, rather than uploading entire agreements or datasets by default. This minimizes data exposure and helps the AI tool focus on the most pertinent sections.
  1. Lock in enterprise licensing with your AI vendor. Consumer-grade AI tools are the wrong answer for sensitive documents. Public companies should negotiate enterprise terms that address, at minimum: (a) a prohibition on using customer inputs to train the provider’s models; (b) contractual confidentiality obligations; (c) data-deletion commitments upon request or termination; (d) SOC 2 Type II or equivalent security certification; (e) clear data-residency and data-handling representations; and (f) administrative controls, logging, and export capability sufficient for the company’s records retention needs.

    The point is to know what happens to the data, who can access it, and whether the company can preserve and retrieve what matters. The cost of an enterprise license is trivial relative to the risk of a confidentiality breach involving pre-announcement transaction documents or SEC filings.
  1. Keep legal, compliance, IT, and governance in the loop. AI review should be governed by a written company policy, which should identify approved tools and use cases, prohibited inputs, anonymization requirements, verification steps, and escalation paths. The policy should be circulated routinely to appropriate employees, executive teams, and board members, as well as at the outset of a transaction or filing cycle. All AI use cases should be pre-tested with AI impact assessments to identify and document possible risks to the company (such as waiving privilege or inadvertent disclosure of MNPI) as well as associated risk remediation steps.

    If AI is used in the disclosure process, the company should consider whether its disclosure controls and procedures under the Sarbanes-Oxley Act need to account for that use. The audit committee has a particular interest here, both as the body responsible for oversight of the disclosure process and as a natural checkpoint for new risk vectors in the company’s information-handling practices. Existing governance documents (committee charters, confidentiality policies, insider trading policies, information security policies, etc.) should be evaluated for gaps and updated where AI use creates risks that current language does not adequately address.
  1. Treat AI output as a first draft, not an answer. AI can help management and directors engage more deeply with a document. AI cannot replace judgment or expertise. LLMs can misread defined terms, misapply legal standards, miss context that an experienced reviewer would catch, and fabricate citations to authorities that do not exist. Every comment, summary, and issues list should be checked by a qualified human reviewer before anyone relies on it. This “human-in-the-loop” review is especially important for documents that will be filed with the SEC, distributed to shareholders, or relied upon in connection with a transaction closing. Remember: complex judgments, such as materiality, enforceability, and strategic implications, require human expertise.
  1. Preserve what matters. If AI output is used in the disclosure process, informs a board or committee decision, or becomes relevant to a transaction or dispute, the company should think about preservation at the outset. That means considering litigation holds, records retention obligations, and the practical ability to capture the prompt, the output, and any material iterations. Most consumer-grade AI tools do not automatically save each version of their output. A system that cannot preserve work product may be the wrong system for the job.
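To make the anonymization step in rule 2 concrete, placeholder substitution can be scripted before any text reaches an AI tool. The Python sketch below is a minimal illustration, not a complete solution; the company names, amounts, and dates in the term list are hypothetical, and as noted above, even a thorough substitution pass does not eliminate re-identification risk.

```python
import re

# Hypothetical mapping of sensitive terms to neutral placeholders.
# A real term list would be assembled and reviewed with counsel.
PLACEHOLDERS = {
    "Acme Corp": "[COMPANY]",
    "Beta Holdings": "[COUNTERPARTY]",
    "$450 million": "[AMOUNT]",
    "March 15, 2026": "[DATE]",
}

def anonymize(text: str) -> str:
    """Replace each known sensitive term with its placeholder."""
    for term, placeholder in PLACEHOLDERS.items():
        text = re.sub(re.escape(term), placeholder, text, flags=re.IGNORECASE)
    return text

draft = "Acme Corp will acquire Beta Holdings for $450 million on March 15, 2026."
print(anonymize(draft))
# -> [COMPANY] will acquire [COUNTERPARTY] for [AMOUNT] on [DATE].
```

A dictionary-driven pass like this only catches terms someone thought to list, which is one reason anonymization reduces risk rather than eliminating it.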
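The capture step in rule 6 can be as simple as appending each prompt/output pair to an append-only log that the company controls. The Python sketch below is a hypothetical illustration under the assumption that interactions flow through a company-managed wrapper; the file path and field names are assumptions, not any vendor’s feature.

```python
import json
import pathlib
from datetime import datetime, timezone

# Hypothetical sketch: preserve each AI interaction (prompt, output, reviewer)
# in an append-only JSONL file so it can be retrieved later for records
# retention or a litigation hold. Path and field names are illustrative.
LOG_PATH = pathlib.Path("ai_review_log.jsonl")

def record_interaction(prompt: str, output: str, reviewer: str) -> dict:
    """Append a timestamped record of one AI interaction and return it."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "reviewer": reviewer,
        "prompt": prompt,
        "output": output,
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

record_interaction("Summarize the risk factors section.", "Draft summary...", "reviewer01")
```

Because the log captures each iteration in order, it also documents the human review trail that rules 5 and 6 contemplate.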

Used correctly, AI can help public companies review complex documents faster and more efficiently. Used casually, it can create confidentiality, privilege, and control problems where none existed before. Companies should view AI hygiene as an ongoing discipline that calls for thoughtful tool selection, clear guardrails, practical training, and a willingness to adjust as the technology and its risks continue to develop.

Client Alert 2026-084