Authors: Tyler J. Thompson, Abigail Walker

Tyler Thompson sits down with Abigail Walker to break down the Colorado AI Act, which was passed at the end of the 2024 legislative session to prevent algorithmic discrimination. The Colorado AI Act is the first comprehensive law in the United States that directly and exclusively targets AI and GenAI systems.
Transcript:
Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Tyler: Hi, everyone. Welcome back to the Tech Law Talks podcast. This is continuing Reed Smith's AI series, and we're really excited to have you with us today. The topic today: obviously, AI and the use of AI is surging ahead. I think we're all kind of waiting for that regulatory shoe to drop, right? We're waiting for when it's going to come out and give us some guardrails or some rules around AI. And I think everyone knows that this is going to happen whether businesses want it to or not. It's inevitable that we're going to get some more rules and regulations here. Today, we're going to talk about what I see as truly the first, or one of the first, of those: the Colorado AI Act. It's really the first comprehensive AI law in the United States. There have been some one-off things, and things targeted more at privacy that might have implications for AI, but the Colorado AI Act is really the first comprehensive law in the United States that directly targets AI and generative AI and is specific to those uses. The other reason I think this is really important is that, as Abigail and I were discussing, we see this as really similar to what happened with privacy, for the folks that are familiar with that. With privacy a few years back, it was well known that it needed to be addressed with regulation in the United States. In the absence of any kind of federal rulemaking, California came out with the CCPA and did a state-specific rule, which has now led to an explosion of state-specific privacy laws. I personally think that's what we could see with AI laws as well: Colorado is the first mover here, but a lot of other states will adopt specific AI laws on this model. There are some similarities, but some key differences, compared to things like the EU AI Act and some of the AI frameworks. So if you're familiar with those, we're going to talk about some of the similarities and differences as we go through it. And the biggest takeaway, which you will be hearing throughout the podcast and which I wanted to leave you with right up at the start, is that you should be thinking about compliance for this right now. As you hear about the dates, you might think we've got some runway, that it's a little bit away. But really, it's incredibly complex, and you need to think about it right now, so please start thinking about it. As for introductions, I'll start with myself. My name is Tyler Thompson. I'm a partner at the law firm of Reed Smith in the Emerging Technologies Practice. This is what my practice is about: AI, privacy, tech, data, basically any nerd type of law, that's me. And I'll pass it over to Abigail to introduce herself.
Abigail: Thanks, Tyler. My name is Abigail Walker. I'm an associate at Reed Smith, and my practice focuses on all things related to data privacy compliance. But one of my key interests in data privacy is where it intersects with other areas of the law. So naturally, watching the Colorado AI Act go through the legislative process last year was a big pet project of mine. And now it's becoming a significant part of my practice and probably will be in the future.
Tyler: So the Colorado AI Act was passed at the very end of the 2024 legislative session. And it's largely intended to prevent algorithmic discrimination. And if you're asking yourself, well, what does that mean? What is algorithmic discrimination? In some sense, that is the million-dollar question, but we're going to be talking about that in a little bit of detail as we go through this podcast. So stay tuned and we'll go into that in more detail.
Abigail: So Tyler, this is a very comprehensive law and I doubt we'll be able to cover everything today, but I think maybe we should start with the basics. When is this law effective, who's enforcing it, and how is it being enforced?
Tyler: So the date that you need to remember is February 1st of 2026. So there is some runway here, but like I said at the start, even though we have a little bit of runway, there's a lot of complexity, and I think it's something that you should start on now. As far as enforcement, it's the Colorado AG. The Colorado Attorney General is going to be tasked with enforcement here. A bit of good news is that there's no private right of action, so the Colorado AG has to bring the enforcement action themselves. You are not at risk of being sued under the Colorado AI Act by an individual plaintiff. Maybe the bad news here is that violating the Colorado AI Act will be considered an unfair and deceptive trade practice under Colorado law. Trade practice regulation is something that exists in Colorado law like it does in a variety of state laws, and a violation of the Colorado AI Act can be a violation of that as well. That really brings the AI Act into these overarching rules and regulations around deceptive trade practices, and it increases the potential liability, your potential for damages. I think also, just from a perception point, it puts a Colorado AI Act violation in with these consumer harm violations, which tend to have a very bad perception, obviously, to your average state consumer. The law also gives the Attorney General a lot of power in terms of being able to ask covered entities for certain documentation. We're going to talk about that as we get into the podcast here. But the AG also has the option to issue regulations that further specify some of the requirements of this law. That's something we're really looking forward to: additional regulations here. As we go through the podcast today, you're going to realize there seems to be a lot of gray area. And you'd be right, there is a lot of gray area. We're hoping some of the regulations will come out and try to reduce that uncertainty as we move forward. Abigail, can you tell us who the law applies to and who needs to have their ducks in a row for the AG by the time we hit next February?
Abigail: Yeah. So unlike Colorado's privacy law, which has a pretty large processing threshold that entities have to reach to be covered, this law applies to anyone doing business in Colorado that develops or deploys a high-risk AI system.
Tyler: Well, that high-risk AI system sentence, it feels like you used a lot of words there that have a real legal significance.
Abigail: Oh, yes. This law has a ton of definitions, and they do a lot of work. I'll start with a developer. A developer, you can think of just as the word implies. They are entities that are either building these systems or substantially modifying them. And then deployers are the other key players in this law. Deployers are entities that deploy these systems. So what does deploy actually mean? The law defines deploy as to use. So basically, it's pretty broad.
Tyler: Yeah, that's quite broad. Not the most helpful definition I've heard. So if you're using a high-risk AI system and you do business in Colorado, basically you're a deployer.
Abigail: Yes. And I will emphasize that most of the requirements of the law only apply to high-risk AI systems. And I can get into what that means. High-risk, for the purposes of this law, refers to any AI system that makes, or is a substantial factor in making, a consequential decision.
Tyler: What is a consequential decision?
Abigail: They are decisions that produce legal or substantially similar effects.
Tyler: Substantially similar.
Abigail: Yeah. Basically, as I'm sure you're wondering, what does substantially similar mean? We're going to have to see how that plays out when enforcement starts. But I can get into what the law considers to be legal effects, and I think this might highlight or shed some light on what substantially similar means. The law kind of outlines scenarios that are considered consequential. These include education enrollment, educational opportunities, employment or employment opportunities, financial or lending service, essential government services, health care services, housing, insurance, and legal services.
Tyler: So we've already gone through a lot. I think this might be a good time to just pause and put this into perspective, maybe give an example. So let's say your recruiting department or your HR department uses, aka deploys, an AI tool to scan job applications or cover letters for certain keywords. And those applicants that don't use those keywords get put in the no pile: hey, this cover letter isn't talking about what we want it to talk about, so we're going to reject them. They go on the no pile of resumes. What do you think about that, Abigail?
Abigail: I see that as falling into that employment opportunity category that the law identifies. And I feel like it also falls into the substantially similar bucket, substantially similar to legal effects. I think that use would be covered in this situation.
Tyler: Yeah, a lot of uncertainty here, but I think we're all guessing until enforcement really starts or until we get more help from the regulations. Maybe now's the time, Abigail, do you want to give them some relief? Talk about some of the exceptions here.
Abigail: Yeah, I mean, we can, but the exceptions are narrow. Basically, as far as developers are concerned, I don't think they're getting out of the act. If your business develops a high-risk AI system and you do business in the state, you're going to comply with it.
Tyler: Oh.
Abigail: Yeah, or face enforcement. The law does try to prevent deployers from accidentally becoming developers, and that's a nuanced thing.
Tyler: So I guess that's interesting. What do you mean by that, that it tries to prevent them from becoming developers, and how does it do that?
Abigail: So if you recall when I was talking about what a developer is, you can fall into the developer category if you modify a high-risk AI system. You don't have to be the one that actually creates it from the start, but if you substantially modify a system, you're a developer at that point. What the law tries to do is make it so that if you're a deployer and your business deploys one of these systems, and the system continues to learn based off of your deployment, and that learning changes the system, you don't become a developer as a result. But, and this is a big but, that chain of events, the system modifying itself based off of training on your data or your use of the system, has to be an anticipated change that you found out about through an impact assessment. And it also has to be technically documented. I'll give a crude hypothetical for this, just a simple one to help you wrap your mind around what I'm talking about here. Let's say I have a donut business and I start using Bakery Corporation's AI system. And then that system starts to become an expert in donuts as a result of my using it. It can't be a happy accident. I have to anticipate that, or else my business becomes a developer.
Tyler: Yeah. Donuts are high risk in this scenario, right?
Abigail: Well, donuts are always a consequential decision, Tyler.
Tyler: That's fair.
Abigail: But there's more. Deployers have a small business exception, and I think this is going to end up really helping a lot of companies out. Basically, you will meet this exception if you employ fewer than 50 full-time equivalent employees, you don't use your own data to train the high-risk AI system, you use the system as intended by the developer, and you make available to your consumers information similar to what would go into an impact assessment. If you do, you get out of some of the law's more draconian requirements, such as the public notice requirement, impact assessments, and the big compliance program that the law requires, which we'll get into later.
Tyler: Okay, wait. So if my donut business is already providing consumers with some of the same information that would have been in the impact assessment, I don't actually have to conduct a full impact assessment?
Abigail: Yes.
Tyler: But wouldn't I have to basically do the impact assessment anyway to know what the similar information is? Like, how can I provide similar information without knowing what would have been in the impact assessment?
Abigail: Yes and no. You don't formally have to do one, but you end up doing much of the same work. And I think this is another spot where the definitions are doing a lot of work. What the law is trying to do with this exception is not force small businesses to build these robust, expensive compliance programs that you and I know are a heavy lift, while still making them carefully consider the consequences of using a high-risk AI system. That's the balance being struck here: we understand that compliance programs, especially the one this law dictates, are very expensive and cumbersome and can sometimes require whole compliance departments, but we also don't want to let small businesses employ high-risk AI systems in a way that's not carefully considered and could potentially result in algorithmic discrimination.
Tyler: Okay, that makes sense. So maybe a small business would use the requirements of an impact assessment, not actually conducting one, but using the requirements as a guide for how they should go about using the AI system. They don't have to do the assessment, but looking at the requirements provides a helpful guide.
Abigail: Yeah, I think that's the case. And we'll get into this more later when we talk about some of the enforcement mechanisms, but they also wouldn't have to provide the attorney general with an impact assessment. That's part of the enforcement aspect.
Tyler: So wait a second. I think we've been positive for probably almost a minute or two. So I think it's time for maybe the other shoe to drop, right? So you said that this only exempts small businesses from a number of requirements. I think they still have to tell customers if a high-risk system was used to make a decision about them though. Is that right?
Abigail: Yes, that's right.
Tyler: Okay, interesting. So is there any other not very relieving relief that you want to share with us?
Abigail: Yes. So I also want to circle back on the high risk thing. Like for example, the law does explicitly say that AI systems that consumers talk to for informational purposes, you know, like if I go to one of these language models and I say, write an email to my boss asking for a last minute vacation, these are not high risk as long as they have an accepted use agreement, like a terms of use.
Tyler: Okay, so that's interesting. I think I see what the act is getting at there. So if I ask an AI model with a terms of use to write that vacation email, and it results in my resignation, probably because I didn't read it before sending it, that's out of scope.
Abigail: Yes. And one last thing, if I may.
Tyler: Of course you may.
Abigail: Yes. I want to make sure that this is clear. All consumer-facing AI systems, high risk or not, have to disclose to the consumer that they are using or talking to AI. There is a funny little exception here. The law includes an obviousness exception, but I would not counsel anyone to rely on that. I'm sure, Tyler, you've seen people on social media fall for those AI-generated videos where people have like 12 fingers and there are phrases that aren't using real letters. I think obvious is too subjective to rely on that exception.
Tyler: Yeah, I agree. And of course, I would never fall for one of those and certainly have not numerous times. So good to know. So let's switch gears. Let's talk a little bit about internal compliance requirements. We've spent a lot of time talking about the who and the what that the Colorado AI Act applies to. I think now we shift gears and we talk about what does the Colorado AI Act actually require. And I guess, Abigail, do you want to start by telling us what developers have to do?
Abigail: Yeah. So first and foremost, I will say both developers and deployers have an affirmative responsibility to use reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination. And I'm sure, Tyler, that you're probably prickling at the reasonably foreseeable aspect of that.
Tyler: Yeah, I think in general, right, the regulator always has a different idea of what's reasonably foreseeable than, you know, a business in this space that's actually operating in this area, right? You know, I would say that the business could be the true expert on AI and what is algorithmic discrimination, but reasonable minds can disagree about what's reasonable and what's reasonably foreseeable. And so I do think that's tricky there. And while it might seem like a gray area that's helpful, I think it's just a gray area that adds risk.
Abigail: Yeah. And now getting more into what developers have to do, I'm about to bomb you with a laundry list of requirements. So if this is overwhelming, don't worry, you're not alone. One of the main aspects of the requirements for developers is that they have to provide the deployers, so remember, the people that are using the developer's high-risk AI system, with tons of information. They have to give them a statement describing reasonably foreseeable uses and known harmful or inappropriate uses. They have to document the data used to train the system, the reasonably foreseeable limitations of the system, the purpose, intended benefits, and uses, and all other information a deployer would need to meet their obligations. They also have to document how the system was evaluated for performance and mitigation of algorithmic discrimination, the data governance measures for how they figured out which data sets were the right ones to train the model, the intended outputs of the system, and how the system should and should not be used, including how it should be monitored by an individual. It's really tracing this model from inception to deployment.
Tyler: Wow. Woof. Well, I want to get into some of this intended uses versus reasonably foreseeable uses thing. Talk about that for a minute. I think a key point here will be trying to address some of these things in the contract, right? You know, Abigail, you and I have talked a lot about artificial intelligence addendums, artificial intelligence agreements that you can attach to kind of a master agreement. I think something like that, that gives us some certainty and something reasonable in a contract might be key here, but I'm interested to hear your thoughts.
Abigail: Yeah, I agree with you, Tyler. On this intended uses thing, it's interesting that the law also requires developers to identify what they think are not intended uses, but possible uses. And here I'm thinking that a developer, in their AI addendum, is probably going to want to put stuff in there, especially tied to indemnification, saying, hey, Deployer, if you use this in a way that we did not intend, you need to hold us harmless for any downstream effects of that. I think that's going to be key for developers here.
Tyler: Yeah, I'm with you. I think the contract is just so crucial and just have to have that in my mind moving forward to do this the right way. You talk about the deployers. Dare I ask about the deployers and what the act requires there?
Abigail: Yep. And here I think our listeners are going to really see why the small business exception is a big deal. So, deployers are required to implement a risk management policy and program, and the law does not leave anything here to chance. To quote it, the program must be an iterative process planned, implemented, and regularly and systematically reviewed and updated over the lifecycle of the high-risk AI system. Basically, it's not enough just to paper up as a deployer. You have to be papering up and then following the plan that you set up for yourself.
Tyler: Yeah, interesting. And I wonder if the regulators kind of saw what happened with even privacy, right? Where there was a lot of put a policy in place, let's paper this. But on a monthly or daily or yearly basis, whatever your life cycle is, you're not actually doing a lot with it. So interesting that they have made that so robust with those requirements. And it seems like that this is where that small business exception must be pretty important, right?
Abigail: Absolutely, yes. Because as you and I know, this can get pretty expensive and it can take up a lot of man hours as well. The program also has to be based off of a nationally or internationally recognized standard, kind of like we see when NIST publishes guidance. It has to consider things like the characteristics of the deployer and also the nature, scope, and intended uses of the system. And it also has to consider the sensitivity and volume of the data processed. Like I said, nothing's left up to chance here. And that's not all. This is another big compliance requirement. Tyler, do you want to give an overview of what the impact assessments have to look like? We've seen these in data privacy before.
Tyler: Yeah, for sure. Happy to. And I know it just seems like a lot because it is a lot, but hopefully the impact assessment is something that you're at least a little bit familiar with because, as Abigail said, we've seen that in privacy. There are other compliance areas where an impact assessment or a risk assessment is important. In my mind, it does follow some of what we saw in privacy, where at the very high level, the 30,000-foot view, we're talking about what the AI system is doing, we're going to point out some risks there, and then we're going to point out some controls for those risks. But let's get into what's actually required and some of the specifics here. The first is a statement describing the purpose, intended uses, and benefits of the AI system. You also need an analysis of whether it poses any known or reasonably foreseeable risks of algorithmic discrimination and the steps that mitigate that risk. You need a description of the categories of data used as inputs and the outputs the system produces, any metrics used to evaluate the system's performance, and a description of transparency measures and post-deployment monitoring and guardrails. And finally, if the deployer is going to customize that AI system, an overview of the categories of data that were used in that customization.
Abigail: Yeah. So I'm starting to see a lot of privacy themes here, especially with descriptions of data categories. But when do deployers have to do these impact assessments?
Tyler: The short answer is it's ongoing, right? It's an ongoing obligation. They have to do it within 90 days of the deployment or modification of a high-risk AI system, and then after that, at least annually. So you have that short 90-day turnaround up front, and keep in mind that's deployment or modification of the system, and then you have the annual requirement. So this really is an ongoing thing that you're going to need to stay on top of and have your team stay on top of. Also worth noting that modifications to the system trigger an additional requirement in which deployers have to describe whether or not the system was used within the developer's intended use. But that's kind of a tricky thing there, right? You might have to step into the developer's shoes a bit and think about whether this was their intended use, especially if the developer didn't provide really good documentation and that's not something you got during the process of signing up with them for the AI platform.
Abigail: Yeah, and I think this highlights, again, how the intended use thing is going to play a big role in contracting. I think there's also a record retention requirement with these impact assessments, right?
Tyler: Yeah, there is. I mean, 2025, I think, is going to be the year of record retention. Deployers have to retain all impact assessments, so all your past impact assessments, including each one you conduct for your annual review, for at least three years following the final deployment. So that's important to think about, too. Something that we saw with privacy is that once an impact assessment was updated, some of those old impact assessments would just be gone, be removed. Maybe they were a draft assessment that was never actually finalized. Now it's clear that every time you have an impact assessment that satisfies one of these requirements, that hits the timeframe, say within that 90 days, that's an impact assessment you have to save for a minimum of three years following that deployment. Also, if you recall, deploy has a really, really broad definition of just to use. So really, it's three years from the last time the system gets used. That can be incredibly tricky, and certainly it's a couple years down the road, but it can be an incredibly tricky thing, right? If you have an AI system that is kind of dormant, or maybe it's used once a year for a compliance function, something like that, every time it's touched, that's going to re-trigger that use definition, and then you will have deployed it again, and now you have another three-year period running from that last deployment or use.
Abigail: Wow, yeah. It sounds like you're going to need some serious admin controls on when you put a high-risk AI system to bed. I think, too, there are also some data privacy-esque requirements involved with these. Do you want to go over that really quick?
Tyler: Sure, yeah. I mean, these are some of the transparency things that, again, like you said, Abigail, folks might be used to doing from the privacy side. The Colorado AI Act has these requirements, too. So first, notification, opt-out, and appeal. Remember, we're talking about AI systems that are helping to make, or actually making, consequential decisions. In that case, the law requires the deployer to notify the consumer of the nature and consequences of the decision before it's made. So before the AI system can actually make or help make a decision, the consumer has to be notified. You have to tell the consumer how to contact the deployer. This might seem easy, but as we've seen with privacy, you might have a whole different set of contact information for something AI-related than your general customer service line, for example. If applicable, you have to tell the consumer how to opt out of their personal data being used to profile that consumer in a way that produces that legal effect. That's similar to what we've seen in Colorado privacy law and other state comprehensive privacy laws in the United States. And then finally, if a decision is adverse to the consumer, you have to provide specific information on how that decision was reached, along with opportunities to correct any wrong data that produced the adverse decision and a way to appeal the decision. So that's a big deal there. I mean, providing that information on how that decision was reached, I think that requirement alone might be enough to cause some businesses to say, we don't want to go down this road. We don't want to have to provide this information on why an adverse decision was reached.
Abigail: Yeah, I would agree with that. And I want to reemphasize here that small businesses are not exempt from this part of the law, even if they're exempt from the other stuff.
Tyler: Yeah, sadly, that's correct. And most importantly, deployers have to make sure this information gets to the consumer, which can be tricky, of course, right? Even if you're not interacting with the consumer directly, you've got to figure out a way that the consumer can actually get this information. And it has to be in plain language and accessible. So I view this as another spot where a contract can come into play, because there are going to be some real requirements here that you may not be able to handle on your own. You might have to use that contract to make sure that you have the information that you need, and maybe to obligate other parties to provide some of this information to the consumer, depending on your relationship.
Abigail: Yeah, absolutely. Should we really quick talk about the notice provisions?
Tyler: Yeah, I think that'd be great.
Abigail: Okay, so real quick, both deployers and developers are going to have to have some sort of public-facing policy. I really think that this is going to become a commonplace thing, an AI policy, kind of like we have privacy policies now. Some themes for these policies are going to be describing your currently deployed systems, how you're managing known or reasonably foreseeable risks, and insights about the information collected and processed. The other thing is that there's an accuracy requirement here. If you change any of those things on the back end, you need to update your AI policy within 90 days.
Tyler: And I know we're kind of glossing over this a bit, but I feel like this is kind of a trap, right? We've seen this before where a business can get dinged because their privacy policy or something didn't accurately reflect their data practices. I think this is similar, right? Where maybe, arguably, they would have been better off by not saying anything.
Abigail: Yeah, of course. I think this aspect kind of opens companies up to some FTC risk. They're going to have to stay on top of this, not only to comply with Colorado law, but also to avoid federal regulatory scrutiny and kind of the same unfair and deceptive trade practices arena.
Tyler: Well, I know we're getting to the end here, but I think we've got to quickly talk about how much insight the act entitles the AG to. And then maybe, Abigail, just go on and talk about some of the attorney-client privilege, that weirdness that we have as well.
Abigail: Yeah. So I think this is where the law gets really scary for companies: it enables the AG to ask developers for all of that information they have to provide to deployers, which we went over quickly earlier in the podcast. And then the AG also gets to ask deployers for their risk management policy and program, their impact assessments, and all the records that support the impact assessments.
Tyler: Yeah. And then do you want to talk about some of the weird no waiver of attorney-client privilege piece? I think that's really strange.
Abigail: Yeah. So we've seen this with the privacy laws as well, because I think that if I'm remembering correctly, the AG gets to ask for those impact assessments as well. And it has this provision that says having to provide the impact assessment doesn't waive attorney-client privilege when you comply with it, which is, I think, interesting because then the AG has now seen your information, but they're not allowed to use it against you. I don't know how that's going to work in terms of enforcement.
Tyler: Yeah, that's pretty strange, right? And there is a 90-day deadline for responding to these requests, so it's kind of a short deadline.
Abigail: Yeah. And then finally, the last kind of, like I said, scary AG notification requirement that I really want to point out is that there's a mandatory reporting requirement: if a developer or deployer discovers that a high-risk AI system has caused algorithmic discrimination, they have to alert the AG. There is an affirmative defense if they rectify the issue, but you only get the affirmative defense if you have those NIST-like frameworks in place. And I also want to point out that the law does not require deployers to tell their developers that they found algorithmic discrimination. They just have to tell the AG. So I think this is another issue, if you're on the developer side, where in contracting you need to insist that your deployers also alert you to this kind of issue.
Tyler: Yeah, right. Otherwise, you know, your deployer is going to tell the AG that, hey, that developer's product is discriminating. You might never even know that it happened. You'll have an investigation maybe pending against you and you had no idea that it was even going to happen.
Abigail: Exactly. So let's wrap up here, Tyler. I want to reemphasize something, and I think you talked about this at the beginning: if this doesn't go into effect for a year, why are we talking about it today?
Tyler: Look, I think from this conversation it's obvious, right? There is a lot here. This is going to be a big project for any business it covers. I think there are also threshold projects of determining, hey, is this something that's going to apply to us? That's going to be big as well. You know, as I've seen in my years doing data privacy, there's probably going to be a little bit of an initial lag in enforcement. So I don't expect a bunch of enforcement actions the moment we hit February 2026. But I could be wrong. And those enforcement actions, when they do come, are going to come seemingly out of nowhere, and they're going to look backwards to the effective date of the law. So you really don't want to be caught off guard here. There's a lot to do.
Abigail: Yeah, I think that's especially true considering how much documentation is involved. I feel like this law really implicates a lot of business planning and decision making. So you need to have the business side of your team really thinking about whether these systems are worth it in the end.
Tyler: Yeah. When you think about the compliance costs and the amount of oversight, you just have to be honest with yourself, I think, if you're a business, as to whether implementing a high-risk AI system is really worth it for you, at least in Colorado. And I think we're going to see it in other states as well. I think this is especially true for the business that just barely misses that deployer exception. If you just barely miss that deployer exception, that can be tough, right? Because you have the bigger compliance obligations. So that's something you have to think about: if you're in that gray area, or maybe some of the other gray areas in this law, think about whether it's really worth it. Well, we've covered a lot here. I think the bottom line is the risk here is real. There are real action items, real things you need to do. Please reach out to us. Abigail and I, as you can probably tell, love nerding out about this subject. We'd be happy to talk to you at a high level and help you brainstorm whether it applies to you and your company or not. Again, thanks so much for joining. Really appreciate your time, and hope to see you on the next one.
Abigail: Yeah, thank you, everyone. And thank you, Tyler.
Tyler: Yeah, thanks, Abigail.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email techlawtalks@reedsmith.com. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.