States are continuing to accelerate the regulation of AI in consumer health care, with Texas and Utah leading the way. This client alert provides an overview of the latest developments in Texas and Utah, with a focus on the unique requirements for health care entities that develop and deploy AI.
The Texas Responsible Artificial Intelligence Governance Act
Texas is the latest state to pass AI legislation with the Texas Responsible Artificial Intelligence Governance Act (TRAIGA) (H.B. No. 149), set to take effect on January 1, 2026. TRAIGA applies broadly to businesses that operate in Texas, provide products or services to Texas residents, or develop or deploy AI systems in the state, including AI systems used in health care. Its requirements are limited, however, and do not break significant new ground.
Provisions of TRAIGA relevant to consumer health care include:
- Disclosure obligation. The statutory language is clunky, but Texas authorities will likely interpret TRAIGA to require organizations that use AI systems in relation to health care services or treatment to clearly and conspicuously disclose to consumers – before or at the time of the interaction – that they are interacting with an AI system. Health care services include services related to human health or to the diagnosis, prevention, or treatment of a human disease or impairment provided by a licensed individual. Under TRAIGA, this disclosure must be in plain language, avoid dark patterns, and be provided even if the AI interaction is obvious. TRAIGA allows the use of hyperlinks to direct consumers to a separate web page for this disclosure. (An illustrative sketch of such a disclosure follows this list.)
- Prohibition on discrimination. TRAIGA prohibits the development or deployment of AI systems with the intent to unlawfully discriminate against protected classes in violation of the law. To establish a violation, Texas authorities would need to show that the developer or deployer intended the AI system to discriminate unlawfully. Managed care organizations otherwise subject to TRAIGA are exempt from this prohibition as long as they are separately regulated in a manner that prohibits unfair discrimination. Other regulated organizations are not exempt.
- Chatbots that could cause physical or mental harm. TRAIGA prohibits the development and deployment of AI systems that intentionally incite self-harm, harm to others, or criminal activity. There have been instances where chatbots have made recommendations that could result in self-harm or harm to others. To establish a violation, Texas authorities would need to show that the developer or deployer intended the chatbot to incite such harm. Regulating chatbots in health care is tricky, however, when considering “harm”: if a chatbot recommends a health care intervention with known negative side effects, would that violate this prohibition?
- Recordkeeping considerations. In the event that Texas authorities investigate developers or deployers of an AI system under TRAIGA, they can request:
- A description of the purpose of the AI system, how it is used, where it is used, and what benefits it provides
- The types of data used to build or train the AI system
- The types of results or outputs the AI system produces
- Any measurements or standards used to monitor the quality of the AI system’s outputs
- Any known weaknesses or problems with the AI system
- How the AI system is monitored after it is put into use, what protections are in place for users, and how the developer/deployer oversees and addresses any issues
- Enforcement and oversight. Regulated organizations have a 60-day cure period after receiving a notice of violation from a Texas regulator. Violations can result in significant penalties: courts may issue injunctive relief, and civil penalties range from $10,000 to $12,000 for curable violations, $80,000 to $200,000 for uncurable violations, and $2,000 to $40,000 per day for ongoing violations.
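To illustrate what a TRAIGA-style disclosure might look like in practice, the following is a minimal, hypothetical sketch of a health care chatbot that surfaces a plain-language AI disclosure before the first exchange and links to a fuller notice page. The names (DISCLOSURE_TEXT, NOTICE_URL, start_session) and the example URL are ours for illustration only; they do not come from the statute or from any particular product.

```python
# Hypothetical sketch of a TRAIGA-style AI disclosure for a health care chatbot.
# All names and the URL below are illustrative assumptions, not statutory terms.

DISCLOSURE_TEXT = (
    "You are chatting with an automated AI assistant, not a human clinician. "
)
NOTICE_URL = "https://example.com/ai-notice"  # hypothetical page hosting the full disclosure


def start_session(send_message) -> None:
    """Begin a consumer interaction by showing the AI disclosure first.

    The disclosure is clear, in plain language, and delivered before any
    consumer exchange, even if the AI nature of the interaction is obvious.
    """
    send_message(DISCLOSURE_TEXT + f"Learn more: {NOTICE_URL}")


if __name__ == "__main__":
    # Usage example: print the disclosure before any consumer exchange begins.
    start_session(print)
```

In the usage example, the disclosure appears as the very first message of the session, consistent with the before-or-at-the-time-of-interaction requirement described in the disclosure obligation above.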
Utah’s H.B. 452 and S.B. 226
Utah recently narrowed its Artificial Intelligence Policy Act (Utah’s AI Policy Act), which we previously discussed, through S.B. 226. The AI Policy Act primarily focuses on consumer disclosures for generative AI (GenAI) and took effect in May 2024. The updates include:
- Narrowing the applicable GenAI. The definition of GenAI now includes only AI technology designed to simulate a human conversation with consumers (as opposed to any interactions that generate content).
- Required disclosures for regulated occupations are more limited. The pre-disclosures required by the Utah law for regulated occupations are now required only when sensitive personal information (e.g., health, financial, and biometric data) and high-risk interactions (such as those that could be relied on to make significant personal decisions, including medical and mental health decisions) are involved, rather than for all GenAI interactions involving regulated occupations. “Regulated occupations” are those that require a state license or certification.
- Consumer requests for AI disclosure must be “clear and unambiguous.” A deployer that uses any GenAI (not only for high-risk interactions) to interact with an individual in connection with a consumer transaction (e.g., offering, selling, or providing goods or services) must disclose that the individual is interacting with GenAI if the individual clearly and unambiguously asks or prompts the deployer or the GenAI about whether AI is being used.
- New safe harbor. A safe harbor is available if an organization’s GenAI clearly and conspicuously discloses that it is GenAI and not a human (1) at the outset of any interaction in connection with a consumer transaction or the provision of regulated services and (2) throughout the interaction.
In addition to its general GenAI law, Utah passed a law (H.B. 452) focused on mental health chatbots used by Utah residents. A mental health chatbot is GenAI that (1) engages in interactive conversations with users similar to the confidential communications that an individual would have with a licensed mental health therapist and (2) is offered as a tool that provides mental health therapy or that will help treat mental health conditions. Certain provisions include:
- Prohibitions on selling and sharing personal information, with certain exceptions. Pursuant to the Utah law, regulated organizations are prohibited from selling or sharing personal information received through a mental health chatbot, except for sharing with health care providers or health plans with the user’s consent, or sharing with service providers that agree to comply with the U.S. Health Insurance Portability and Accountability Act (HIPAA) (even if they are not technically regulated by HIPAA).
- Prohibitions on advertising using conversations with mental health chatbots. Under the Utah law, mental health chatbots are prohibited in Utah from targeting or customizing advertising based on the content of conversations with users. Further, a mental health chatbot cannot include any advertisements for specific products or services unless the information is clearly and conspicuously marked as an advertisement and the chatbot deployer communicates the relationship between the deployer and the advertiser. A mental health chatbot can recommend that a user seek counseling, therapy, or other assistance from licensed professionals without complying with these requirements.
- Disclosures to users. Similar to Utah’s AI Policy Act, this law requires certain disclosures to users. Specifically, the mental health chatbot must “clearly and conspicuously” disclose to Utah users that the chatbot is an AI technology and not a human (see the timing sketch after this list). The disclosure must be made:
- Before the user may access the features of the chatbot
- At the beginning of any interaction with the user if the user has not accessed the chatbot within the previous seven days
- Anytime a user asks or otherwise prompts the chatbot about whether AI is being used
- Enforcement. The Utah attorney general can enforce the law by imposing an administrative fine of up to $2,500 for each violation, by seeking injunctions, and by publicly posting a list of mental health chatbots that are not in compliance with the law. A deployer has a defense if it files detailed documentation with the Utah Division of Consumer Protection and complies with that documentation.
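Because the three disclosure triggers above are essentially timing rules, they can be expressed as simple logic. The following is a minimal sketch using hypothetical names (disclosure_required, first_access, last_access, user_asked_about_ai); it illustrates the statutory triggers as described above and is not a definitive compliance implementation.

```python
# Hypothetical sketch of the H.B. 452 disclosure-timing rules for a mental
# health chatbot. Function and parameter names are illustrative assumptions.

from datetime import datetime, timedelta
from typing import Optional

SEVEN_DAYS = timedelta(days=7)


def disclosure_required(
    first_access: bool,
    last_access: Optional[datetime],
    now: datetime,
    user_asked_about_ai: bool,
) -> bool:
    """Return True if the 'I am an AI, not a human' disclosure must be shown.

    Triggers sketched here: (1) before the user may access the chatbot's
    features, (2) at the start of an interaction if the user has not accessed
    the chatbot within the previous seven days, and (3) anytime the user asks
    or prompts about whether AI is being used.
    """
    if first_access:
        return True
    if last_access is None or now - last_access >= SEVEN_DAYS:
        return True
    if user_asked_about_ai:
        return True
    return False


if __name__ == "__main__":
    now = datetime.now()
    # Usage example: a returning user whose last session was eight days ago
    # triggers the disclosure again.
    print(disclosure_required(False, now - timedelta(days=8), now, False))  # True
```

In the usage example, the returning user has not accessed the chatbot within the previous seven days, so the disclosure must be shown again at the start of the interaction.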
Regulated organizations are required to comply with the Utah laws that are already in effect, and they have until January 1, 2026, to prepare for TRAIGA. Regulated organizations should promptly consider whether these laws apply to them and their business practices. Compliance will involve evaluating whether AI systems meet the new requirements, updating disclosures and practices, implementing processes for evaluating the performance of automated decision-making tools, and potentially preparing significant written documentation. If regulated organizations do not comply, enforcement authorities in Texas and Utah could stop the use of AI systems and impose significant financial penalties.
Client Alert 2025-173