Reed Smith hosted a live edition of its Tech Law Talks podcast during SF Tech Week, convening a fast-paced discussion on whether AI should be regulated now or whether premature guardrails risk chilling innovation. The session, produced as both a video recording and an audio podcast, brought together Reed Smith’s Jason Garcia, Gerard Donovan, and Tyler Thompson, along with Databricks’ Suchismita Pahi and Christina Farhat, for a spirited conversation on where regulation is heading and what it means in practice for companies building or deploying AI.
Framing the debate
- The central question: “AI is moving faster than law – should we regulate now?”
- The trade-off: weigh the benefits of early rulemaking against the risk of chilling beneficial use cases; sector and use context matter.
- Different risk profiles for consumer apps, safety‑critical systems, and highly scaled models point toward a “risk‑tiered” regulatory approach.
Hot-button issues: deepfakes, bias, and copyright
- Synthetic media and deepfakes: heightened liability under false advertising and right‑of‑publicity theories, platform obligations, and rapid adoption of content provenance standards.
- Bias and fairness: momentum toward auditability, documentation, and explainability, with training data governance and post‑deployment monitoring as priorities.
- Copyright: ongoing uncertainty around training data, authorship requirements, and output ownership; a trend toward greater transparency on data sources and clearer allocation of risk in commercial contracts.