Authors: Jason E. Garcia, Gerard M. Donovan, Tyler J. Thompson, Suchismita Pahi, Christina Farhat
Reed Smith’s Jason Garcia, Gerard Donovan, and Tyler Thompson are joined by Databricks’ Suchismita Pahi and Christina Farhat for a spirited discussion exploring one of the most urgent debates of our era: Should AI be regulated now, or are we moving too fast? Settle in and listen to a dynamic conversation that delves into the complex relationship between innovation and regulation in the world of artificial intelligence.
Transcript:
Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Jason: All right. Well, welcome, everybody. Thank you for coming today to Reed Smith and for this special live edition of our Tech Law Talks podcast. Now, not to freak anybody out, we're not going to give you a law school lecture where everybody falls asleep and gets called upon. We're here just to have a big-picture conversation. And since this is Tech Week, we're keeping things at a very high level, practical, and interactive. So with that, let me start with an introduction of our panelists. I'm moderating. My name is Jason Garcia. I'm a partner in the Reed Smith Palo Alto office, where I'm the office managing partner. My practice focuses on intellectual property.
Tyler: Hi, I'm Tyler Thompson. I'm a partner in the Denver office of Reed Smith in the Emerging Technologies Group. My practice is primarily compliance, privacy, and now AI compliance as well. I was in-house out here in Silicon Valley and at another place in Denver before going to private practice. And yeah, happy to be here.
Gerard: I'm Gerard Donovan, out of our DC office. I'm an IP attorney. I mostly litigate patent, trade secret, and copyright cases and advise clients on issues relating to those areas. I'm a computer engineer by background, so I've been doing work relating to machine learning and AI for a while. And obviously over the last few years, it's blown up. So I look forward to talking with you guys about this.
Christina: From the Databricks side, hi, I'm Christina. I manage the product legal program at Databricks, where I have the extreme pleasure of working with Suchi, who you'll meet shortly, on the AI policy and compliance program.
Suchi: All right, hey folks. My name is Suchi, and I'm lead counsel for AI at Databricks. I lead our AI compliance program as well and work primarily with all of our AI products, although now that's really enterprise-wide, so we have a full group of product counsel and we work on some really interesting things. Really excited to talk to everyone today and hear some interesting stuff.
Jason: Great. Let's dive in. As everyone knows, AI is moving incredibly fast. Let's start with a first question: do you think we should regulate it now, or does regulation risk killing innovation before we even know what AI is capable of doing?
Tyler: I can start. You know, I think it's too soon. You see countries or states think about AI and think that they need to be a leader, right? But regulation is not innovation. And I think it's too soon because we just don't know a lot of the harms yet. I mean, we have a lot of speculation on what the harms of AI could be. But for the most part, typically, although it might be unfortunate in some circumstances, you have to see the harm first, and then you have the legislation and regulation to fix that harm. And I think we risk doing it too early and putting the brakes on some really interesting ideas in parts of the space.
Christina: Yes, agree with that. And in addition, I think the question is phrased often in a very binary manner, like, should we regulate now or not? And I think oftentimes what you can see is there's so much more nuance, right? Like the right answer is probably iterative. And so approaching it from that perspective and learning more about these harms and how they're playing out as we go, in my opinion, is the right approach.
Suchi: Yeah, I think one of the things the question might do is exclude the existing regulations that are already going to impact AI. And those would be things like your data protection regulations, which I'm sure, Tyler, you're very familiar with and practice day in and day out. And then we have tort law, which, you know, will have an impact in, like, the manufacturing industry and things like that. But I agree with y'all as well that there's some ambiguity that should be left there, and also maybe a little bit of a slowdown, too. What do you think, Gerard?
Gerard: I'm going to generally agree with everyone else. I think in this space, it's better if the regulations are more evolutionary than revolutionary. I mean, it was only a year or two ago that there was a lot more traction for the AI doomers saying that we just need to slow all of this down because we're about to have all this trouble from AI. And if there had been comprehensive regulation with the goal of slowing it down, of treating AI as a problem, we wouldn't have what we have today. We wouldn't be as competitive. So as things develop and as we have a better understanding, there probably should be some additional regulation. But jumping to it and doing it before you understand, when we see the winds changing all the time already, I think we need to take a slow approach to make sure we don't make big mistakes along the way.
Jason: Good point. And with that in mind, we've regulated airplanes, cars, and even the internet, but at differing speeds in terms of how those markets and products have evolved. What can we learn from past tech regulation about when the right moment is to step in? Gerard?
Gerard: I think I'll talk at least from a car context. I've had a number of cars over the years, and you can just sort of see it: my first car didn't have seatbelts in the back, my second car didn't have emissions controls, and airbags sort of evolved from one for the driver to the other seats. And I think over time, as people saw the real risks and the ways to address them efficiently, without stifling innovation or overregulating, that helped produce regulation that is, I think, largely sensible. The rules were able to be implemented, and regulators haven't had to backtrack too many things. We haven't made, I think, huge mistakes in the same way that you could in other technology spaces if you just slam down heavy regulations from the start before you understand everything.
Tyler: Yeah, I agree with that. I mean, to keep with the automotive analogy, an analogy I've seen in this space a couple of times: when the first cars came out, people talk about how you'd have to have somebody walking in front of your car with a red flag, right, to let people know, oh, here's a car, here's a moving vehicle. People say, well, okay, it's because the steering and the brakes weren't good enough yet. And I think it's valuable to look at that analogy but also realize, hey, just because somebody was there with the red flag, you know, steering and brakes would have developed anyway, right? Like, that's something that was going to happen regardless. And to go back to some of Gerard's examples of things that are required now, and to keep it automotive, a lot of the things that are required on today's cars didn't exist, weren't even conceived of, when cars were first built. I think you can make a really clear analogy to some of the stuff in the AI space the same exact way, right? If you try to regulate today, you're going to be using today's tools. And five years from now, or really in this space it might be five months from now, those tools might be completely obsolete. It's just not going to be a fit.
Christina: Maybe I'll just chime in quickly with the obvious parallel, which is the internet. In the 90s, we didn't really know what this new emerging technology was, in the same way people are apprehensive about AI or find it complex today. And we sort of waited on that. We didn't really see privacy regulation or cyber regulation come out until quite recently, and the internet evolved really organically. So that is an approach: we've seen all the benefits of the internet from waiting it out, seeing which fears materialized, and regulating accordingly, versus jumping in and regulating right now.
Gerard: One thing to add on that, for example, the DMCA came out of international treaties in the late 90s, I believe, going towards some of the internet control issues and the piracy and things that were growing. And by waiting to see and getting consensus views on ways to approach this, we had a lot of countries come together and address it in the same way, which then over the following years resulted in the DMCA here in the US and European regulations. And so a slower approach can also help everyone handle it together and get on the same page, get more views.
Jason: If we agree that some guardrails are needed, who should set them? Governments, industry groups, or companies building AI?
Gerard: So I think part of it is that companies building AI almost have to implement some guardrails. They're the ones designing the systems, and part of system design is just deciding what guardrails are there based on your own decisions. And they have some motivations to do so, at least. They don't want to have reputational or brand harm from doing things that people think are bad, and they also don't want civil liability. Suchi was talking about different regulations that already exist and do impact what they're doing, so there are some guardrails in place. At the same time, I think those companies have interests that aren't necessarily the same as everyone else's in the public. And so regulation should also be done, or guardrails should also be put in place, in conjunction with governments and industry groups; we'll ultimately need some other views on this. We just need to, I think, not move too fast, and use the information that the companies and the industry groups can provide to help figure out what the right issues are to regulate and how to do it in a responsible way.
Tyler: Yeah, I can add on to that. I mean, I really like the industry group approach. To analogize to privacy law, the PCI DSS is a scheme governing payment card data, right? And so it's very sensitive, this is your credit card information, but that scheme is generally seen as quite effective, right? No scheme is perfect, but it's seen as very effective. And it's all industry groups, right? It's largely self-regulatory. So I think, you know, thinking about that as an example shows that there could be some real legs, especially in an industry like this where there are some large major players, to some industry-group kind of self-regulation and self-control. And, you know, candidly, we see it with PCI DSS, and I think we're already seeing it on the AI side. You know, the true experts in this, to be frank, do not work for the government right now. There are, of course, exceptions, but I don't think it's controversial to say that, you know, a lot of the people who are true experts in this are not turning down these massive paychecks to go work for the government. So, you know, we should rely on some of the expertise that these folks have. Yes, I get it, they have incentives that might not be fully in line with some of the self-regulatory principles. But I think, you know, the expertise there could maybe help offset some of that; they actually understand how this works. And I think that's a big problem, making sure the people who are creating regulatory schemes actually know what they're talking about.
Suchi: Tyler, I just barely waited to interrupt you on that. I do think actually some of the top minds are, like, at NIST, which is definitely a government group, to be very contrary to what you were just saying. I do think that, you know, yes, we should have companies set up guardrails and participate with industry groups and things like that. I just think it would be a very good idea to be careful and take away some of the learnings that we can from the data privacy and data protection space, where there's already a little bit of a genie's-out-of-the-bottle sense, especially when it comes to things like cybersecurity incidents: you get your 50th Friday-night email before the holiday and they're like, hey, by the way, we breached all your data. Good luck. Here's five bucks. Go sign up for Experian. If we can avoid that in something like AI, technologies that are going to be long-lasting and really underpin a lot more of our setup of society, then I think we can have, I'll call it, a better second chance at guardrails: cooperative building of guardrails, or passing legislation, and things like that.
Jason: That's a good point. On the one hand, if we have regulation like that, it's going to help build public trust and prevent harms. With that, though, do you think it could give a benefit to the larger, incumbent companies, for example, versus boxing out some of our newer startups and companies that want to get into the industry? Suchi?
Suchi: Yeah, I think we can once again look at some of the existing examples that we have right now. So we have the EU AI Act, which passed, and there's a lot of swirl around it. There are a lot of blog posts and LinkedIn influencers and all sorts of conversations going on. But there's a sense that the EU AI Act was a little bit too early, especially when it comes to these general purpose AI models. The EU was working towards regulating one technology, which suddenly became another technology. So they just seem to have hit the mark a little sooner than would have been helpful for this industry. And we still aren't, in my opinion, really in a position to have regulations that regulate a particular type of AI technology, because of the speed with which AI technology is advancing and companies are innovating. The flip side of this, though, is we have the EU leading the charge like they did in data protection, so companies are building their compliance programs to match the EU AI Act baseline and going from there. So I think the EU was trying to do something that would build public trust, and to an extent it has done that. But I'm not sure the impact was the way they wanted it to be, because a lot of people still don't understand what AI is. And if you don't have an understanding of what AI is and what's being regulated, you really don't have a regulation that's made the impact it was supposed to have. That's where I'd leave that. And I don't know that that preferences incumbents over startups necessarily. I think the playing field now is very different from the playing field back in 2023.
Gerard: Yeah, I'll add to that. I don't see the current regulations in the U.S., and we largely don't have AI-specific regulations here, as necessarily favoring the incumbents. It seemed like a couple of years ago there was a big concern that the incumbents were going to be heavily involved in writing regulations and that, because of that, the regulations would inherently favor them. I don't think we've seen that play out, in part because there hasn't been as much comprehensive legislation or regulation here in the U.S., but also, I think, because a lot of people realize we need a better understanding of this. We need to kind of figure out what the consequences of these regulations are. So as regulations do come into effect, beyond the other regulations that already impact AI, I think some will probably more heavily impact incumbents or large companies generally, and others might more heavily impact startups. I think we'll continue to see agility in small companies to adjust, and, you know, in the incumbents in the space to a great extent. So I think it's yet to be seen, but so far, I don't think it's really had that disproportionate impact.
Tyler: Yeah, I don't think it favors the incumbents that much, if nothing else just because they are going to be key regulatory targets, or key targets of litigation as well, right? I mean, everybody goes for the deep pocket; everybody goes for the name that you know. I do worry about it boxing out startups. I remember back when GDPR, you know, Europe's privacy regulation, came out, having a client that was going to launch an app in France. And after we talked through the GDPR, they said, well, forget it, we're not going to do it. And now in my home state of Colorado, which is the only state that has comprehensive AI legislation with the Colorado AI Act, I've seen that same thing, right? People thinking about, well, where should I do my AI startup? Well, obviously any place but Colorado. So I have some worry about it. I think there is real cost to regulation; you just can't avoid that. But I worry about it more for the startups than I do about an unfair advantage to incumbents.
Jason: Are there any things that the panelists here think AI should never be allowed to do? For example, hiring decisions, medical diagnosis without oversight, or running weapons systems. Just before our panel started, we had some interesting conversations with various people in the audience, and some of those items were not directly on that list, but tangential. So I'm interested to hear what the panelists say.
Christina: Yeah, I can go ahead and kick that off. I definitely want a human in the loop with the nuclear weapons codes. Hopefully we can all agree on that. But I think, you know, as AI really permeates society and our daily lives, that line is going to shift for folks. And I think it's hard to have this conversation about a red line without really thinking and talking about bias. Humans are biased, the data that we've gathered from humans is biased, and so model outputs are often biased. So how do we define bias? How do we measure it? What are we doing with the outcomes when we measure these things, and how are we, like, fine-tuning models to improve outcomes? Those are really important questions, along with continuous evaluation and measurement as that line gets closer to daily tasks and hiring decisions and things like that. But I hope we can all agree on the very extreme cases today and explore how that moves as the technology evolves.
Gerard: I think to add to that, one of the questions is also, are you talking about AI not being involved at all, or about not having a fully autonomous AI system where AI is making all the decisions? Because to me, for things like military applications, AI might be incredibly useful for targeting systems and things like that. However, I absolutely want a person to be there making decisions on whether to pursue the target and whether to accept what the AI is suggesting. There are other spaces, like, you know, biochemical weapon creation, where I'm not sure we want AI to be a tool that's used to supplement, you know, human ability, because it just might create new, very dangerous weapons and things that shouldn't be created at all. So I think it kind of depends on how you're analyzing the role of AI in any solution.
Tyler: Yeah, I mean, I agree with that pretty wholeheartedly. I will say, you know, if it's a bright red line, it's just not going to hold up, right? I mean, it'd be like asking, you know, 15 years ago, is there something that the internet shouldn't touch? Or maybe that's more like 25 years ago now. But you know what I'm saying? It's going to touch everything. And so one of the, perhaps in my opinion, very few things that some of these AI regulations are getting right is that they tend to rank things by sensitivity, right? And the more sensitive your use case is, the more restrictions or things you need to do. I think that is a very reasonable approach of just saying, hey, there are some things where we really need to ratchet up the controls, put that human in the loop, to use the phrase, right? But if we're going to say there are some things that AI can just never, ever touch, the biological weapons example, I think, is a great one, right? But, you know, if you're not going to do that, your enemy is going to do that. And so to just stay on par, I think on some level, you're going to have to get it involved. I think we should be pushing for the sensitivity, the risk rating, to keep the right controls in place.
Suchi: I'm really curious, Jason, what examples you were getting from the audience, maybe their personal red lines. I'd love to hear a little bit about that if you could share.
Jason: Well, it wasn't red lines, but there was definitely some in the medical field, some in, like, workers' comp, some legal, you know. And so it had me thinking this question matched very well. No Terminator or, you know, Cyberdyne Systems or whatever the name was back in the day.
Suchi: That's really interesting. I think it's really interesting when you think about, like, the reaction folks have to the automated call systems, which aren't typically AI. It's just an automated call system where you press a button and get to another automated part of an answering machine. And I think the level of frustration you see from folks with just that experience, I'm really curious about what that would look like in an entirely AI-driven world and how folks would adapt to that.
Gerard: I think, to that example, I would think that there would be a lot of initial pushback on that. People don't want AI taking over things that are human jobs or human interactions. But at the same time, a lot of times, once you use at least a pretty good AI solution, you realize, wow, this can be a lot better. I was calling a court clerk's office this afternoon and had to wait until option six to know I needed option five, and, you know, it was just me and opposing counsel sitting here on the line. If you could just get to an AI system and say, hey, I need to talk to them about XYZ, and it could just patch you through, I think a lot of people would start to say, hey, this is pretty nice.
Jason: So in terms of different approaches to AI, taking the U.S., Europe, and China, do you think this is going to create a global race? And could too much U.S. regulation push innovation outside the U.S.?
Suchi: I think we're in a global race in innovation already. And I do think that China has done quite well so far, because they started quite early with their facial recognition systems back in the day, which were something that we used to discuss a lot in our data protection analyses and legal circles and things like that. But I don't know that U.S. regulation of AI will be the singular thing that pushes innovation offshore. I think there could be many, many other levers that will be pulled on this one. And it's important to think about those as well as we look at this opportunity that's in front of the U.S. right now.
Gerard: I'll add, I mean, I do think we're going to be in a race, and continue to be in a race for a long time to come, and it's going to be international no matter what. I think there is a possibility that in the U.S. or in other locations, over-regulation could impact that, but I don't think it's going to end the race anywhere or be the singular reason that drives things away from the U.S. I think also, even if there is no regulation, there are probably going to be a lot of incentives for companies to pursue opportunities outside of the U.S. So, you know, you're going to see worldwide races in any event.
Tyler: Yeah, I think an interesting thing about the race is the extent to which the countries are somewhat copying each other. I mean, I think it was the Wall Street Journal that called the White House's AI action plan state-directed capitalism, which is, you know, somewhat of an oxymoron, I suppose. So there are some similarities in how the sides are running the race. And, you know, I agree with what folks have said: in terms of regulation hurting innovation, it's a lever, but it's not the only lever. When you think about nuclear energy, I think that's a great example of something where the United States was clearly a leader in the 1940s and 1950s. Certainly regulation wasn't the only thing, but I think it was a very major part of why we are not a leader in that space anymore. So it's not the only lever, but I think it is a lever, and it's something that we have to take into account.
Gerard: I think one related issue is not directly regulating AI, but things like the diffusion rule for GPUs, where, under the last administration, there were the three tiers and there was a big question of which chips would be going to China and so on. And I think it both created some barriers for some countries and created motivations. You had kind of re-energized foreign entities looking to develop their own technology, develop their own competitive chips, because they're recognizing the question of the availability of U.S.-designed chips. So I think you're going to have this race that's going to continue, and how you regulate will have an impact, but it might be a more complex response than what you're anticipating. It might not be, hey, we slow them down, because impeding them for a year or two, given how quickly things can ramp up in other places, could mean they're getting ahead.
Jason: In terms of that regulation, is there a suggestion of a regulation that would help startups in the AI space, Suchi?
Suchi: I don't know if I have a regulation that would help startups necessarily, but I do think it would be helpful to have a tie-in with local community groups or state groups that are promoting AI literacy. And I say that because if you have a population that is AI literate and understands what AI is doing, then you're more likely to have better uptake of products and services and better adoption within your various institutions and things like that. Where AI is right now, depending on what generation of folks you're talking to, you get things like reliance on AI as an emotional companion without an understanding that this is not a thinking human being, and I think that causes problems. So I don't know that I'd recommend a regulation, but I'd recommend a ground-up approach that would help move everyone forward into this world where AI underpins our social setup.
Tyler: Yeah, I agree with that. I think AI literacy is a great way to put it. The thing I would say would be transparency. I don't know if it needs to be a regulation; perhaps that's more in the LLM, GenAI space. But I just think increased transparency is something that's a pretty low burden on companies, and it's pretty uncontroversial. The more folks can see what's happening, with companies being transparent about how an AI system is working, how it's processing my data, or how it's making decisions on my behalf, the better. It just seems like pretty low-hanging fruit to me, a nice win.
Gerard: I don't disagree with the value of transparency, although I think there are probably some audiences who are going to benefit more from, and be more interested in, the transparency than others. In the first couple of years, 2023 and 2024, I think AI was being used much more by people who really were interested in it, forward tech people who wanted to give it a try. Now it's more that your parents are using it, everyone's using it, it's backing your search results. So just in this past six months you have much broader adoption and use, and I think that's a great thing, and educating people and letting them know how this works is going to help. But I don't think those people are going to be interested in some of the transparency that, sort of, AI followers are interested in. So I think the transparency is hugely important, and you do need it to make sure the companies are on a level playing field and things are done right. But for a huge part of the population that will be AI users, if they aren't already, it probably won't move the needle a whole lot.
Jason: All right. So here's the last question for the panel. If we had to predict five years from now, what's one area of AI that will most certainly be regulated first, and why?
Suchi: I think it's going to be any area that impacts children, children's education and things like that. This is a cultural value that people can coalesce around pretty quickly. So I think that will be the first in five years.
Tyler: I think you're already seeing the target area. To me, it's, you know, deepfakes, high-quality image and video creation. A plurality of states, I think perhaps even very close to a majority, already have a law that is at least somewhat related to this. And the implications for IP, for things like celebrity and politician impersonation, even things like child pornography, are so broad that that wave of regulation has been very uncontroversial on both sides. And we're just going to continue to see it increase.
Gerard: I think Tyler's probably right on some of the first things that will be regulated. So just to throw something else out there, I think one of the areas that will be regulated, at least at a federal level, probably pretty quickly, is military applications and related export applications, just because I think we're a country that's very driven by our military capabilities and AI is going to fundamentally supplement them. And I think there's going to be concerns about that, you know, that being available to adversaries.
Christina: Yes, definitely. National security. I think piggybacking a little bit off of what Suchi said, but from a healthcare perspective, we've seen a lot of folks relying on AI for therapy, for medical diagnoses. And so it'll be interesting to see how regulation develops in the healthcare space.
Jason: That's the end of our podcast. But as you can tell, you know, regulating AI is complicated. There are no clear answers. The stakes are high, the technology is powerful, and the future is unwritten. The question is whether we should regulate, and if so, when. It's something to monitor, especially if you're in the AI space. Hopefully this conversation has been helpful. I want to thank my colleagues for joining, and Christina and Suchi from Databricks for joining our panel. You both are amazing, my two new favorite people, and I'm looking forward to embracing and doing AI in the future together. Thank you.
Suchi: All right. Thanks, everyone.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's Emerging Technologies Practice, please email techlawtalks@reedsmith.com. You can find our podcast on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.

