/ 3 min read

Designing for trust: SXSW insights on responsible AI governance

At SXSW, I attended a session titled "AI Safety & Trust: Building Responsible Media Tech," featuring my Reed Smith colleagues Nikki Bhargava and Andy Splittgerber alongside Tripadvisor's Vanessa McKay and Alexander Ahmadi. The conversation offered a clear message: responsible AI isn’t about slowing down innovation. It’s about building technology people actually trust.

Innovation and regulation are not opposites. As Nikki put it, a good product is a trusted product. Safety, transparency, and responsible design shouldn’t be treated as legal hurdles, but as market advantages. When product and legal teams treat AI governance as a shared goal, not competing agendas, the result is tech that performs better and earns lasting trust.

Embrace iteration over perfection. The panel also emphasized a truth most of us know but (at least I) often ignore: nothing gets put out perfectly the first time. Companies need frameworks built for continuous learning. Rigid, one-time approval processes may not keep up with technological changes. Opt for governance frameworks that anticipate change - processes that allow for rapid iteration while maintaining appropriate oversight. This means building feedback loops into compliance architecture from day one, rather than reactively bolting them on after problems emerge. 

The irreplaceable role of human judgment. Despite advances in automation, the panel raised the importance of keeping humans in the loop. Not for box-checking, but for real oversight. Human involvement serves two related functions: (1) ensuring the process is actually working as intended, and (2) confirming the product remains useful or viable. AI systems can drift, degrade, or produce unexpected outputs over time. Automated monitoring can catch some of these issues, but human judgment remains irreplaceable for the kinds of contextual assessments that determine whether a product continues to serve its intended purpose and maintain user trust. The takeaway here is that human-in-the-loop requirements should be designed thoughtfully. Generic policies that mandate human review without specifying what reviewers should be looking for, or how their findings should be escalated, provide limited value. Effective oversight requires clear protocols and genuine decision-making authority.

Pressure testing for hidden bias. Andy highlighted a powerful idea: use AI to test AI. Just as IT security teams conduct penetration testing to find vulnerabilities before attackers do, AI teams should pressure test their models, pushing them to their limits to uncover bias and identify edge cases where they fail. A proactive approach is far more preferable to discovering bias through user complaints, media coverage, or regulatory enforcement, while also demonstrating the kind of good-faith effort that can matter significantly in enforcement contexts.

Companies that embrace these principles will be better positioned to navigate an evolving regulatory landscape while earning consumer trust and standing out in an increasingly crowded market.

As AI becomes central to media and entertainment, safety and trust are non-negotiable.

Read more