
AI on the hot seat: Innovation vs. guardrails

Reed Smith hosted a live edition of its Tech Law Talks podcast during SF Tech Week, convening a fast-paced discussion on whether AI should be regulated now or whether premature guardrails risk chilling innovation. The session, produced as both a video recording and an audio podcast, brought together Reed Smith’s Jason Garcia, Gerard Donovan, and Tyler Thompson, along with Databricks’ Suchismita Pahi and Christina Farhat, for a spirited conversation on where regulation is heading and what it means in practice for companies building or deploying AI.

Framing the debate

  • The central question: “AI is moving faster than law – should we regulate now?”
  • Weigh the benefits of early rulemaking against the risk of chilling beneficial use cases; the right balance depends on sector and context of use.
  • Different risk profiles for consumer apps, safety‑critical systems, and highly scaled models point toward a “risk‑tiered” regulatory approach.

Hot-button issues: deepfakes, bias, and copyright

  • Synthetic media and deepfakes: heightened liability under false advertising and right‑of‑publicity theories, platform obligations, and rapid adoption of content provenance standards.
  • Bias and fairness: momentum toward auditability, documentation, and explainability; prioritize training data governance and post‑deployment monitoring.
  • Copyright: ongoing uncertainty around training data, authorship requirements, and output ownership; trend toward greater transparency on data sources and clearer allocation of risk in commercial contracts. 

Ethics and power: what should AI never do – and who decides?

  • Legal and ethical norms are converging as organizations operationalize responsible AI.
  • “Soft law” via investor and enterprise procurement is already shaping practice: expect model cards, risk registers, incident reporting, and red‑teaming evidence.
  • Proactive, documented, and scalable governance is becoming a commercial necessity as well as a legal expectation.

Where regulation is likely to land

  • Near‑term rules are more likely to emphasize transparency, risk management, and accountability than to impose prescriptive technology bans.
  • Expect movement toward disclosures about AI use in consumer and workplace contexts.
  • Anticipate targeted controls and obligations for high‑risk applications.
  • Expect strengthened supplier due diligence, with contractual flow‑down obligations becoming standard.
  • Expect enforceable representations on data rights, safety testing, and misuse prevention to become a routine part of commercial deals.

Practical implications and action points

  • Adopt “compliance by design” across AI initiatives.
  • Prioritize: mapping AI use cases and risks; tightening data licensing and provenance records; integrating bias testing and safety evaluations into development pipelines; implementing incident and model‑change management; and updating vendor contracts for training data rights, output ownership, indemnities, and audit rights.
  • For brands and platforms: invest in content authenticity tooling, streamlined takedown workflows, and coordinated crisis response for synthetic media events.
  • For product and go‑to‑market teams: align disclosures and user messaging with the expected direction of transparency obligations.

As one panelist put it, “Innovation and guardrails are not opposites – done well, guardrails are the scaffolding that lets responsible innovation scale.” Companies that operationalize risk-tiered governance now will be best positioned to move quickly as formal rules solidify and market expectations harden.

For a deeper dive into the debate, the live session is available on our podcast and on YouTube.


Reed Smith Client Alert 2025-255
