Companies in the health care industry are commonly at the forefront of collecting and using personal health data and cutting-edge technology to improve and personalize patient care. That objective often means that health care companies must interpret and apply outdated laws and regulations to products and services unimagined at the time of enactment. The absence of a modern legal framework for personal health data and emerging technologies in the United States, and the legal and business risks that absence creates, can have the unintended consequence of reducing investment by the health care industry in new products and services.
For better or worse, Congress has not acted with alacrity to modernize the laws that regulate privacy, cybersecurity, data breach response, or emerging technologies of growing importance, such as artificial intelligence (AI). The Health Insurance Portability and Accountability Act (HIPAA) regulates the collection, use, disclosure, and security of certain personal health data and practices in the health care industry, but HIPAA was enacted nearly 30 years ago and is limited in scope.
That is not to say that Congress has ignored these issues, but proposed bills in these areas have made little progress. For this reason, some state legislatures have enacted laws to help fill the void. Over the last year, the White House has been more active than Congress within its own sphere of control. While that activity could influence laws and regulations at the federal level, its impact may be limited because the White House’s role is to enforce laws, not to create them.
This alert looks at some of the most recent activity by Congress and the White House with respect to the development and use of AI so that companies in the health care industry can be aware of what the future may bring.
Congressional activity
In 2023, Congress established working groups, issued requests for information from private companies, and began holding hearings and listening sessions with executives from technology companies developing AI services. In September 2023, Congress announced a bipartisan framework on AI legislation intended to identify specific principles that could underpin future AI legislation. For example, the framework proposed a licensing regime for AI developers overseen by an independent body, a requirement that AI developers be subject to a private right of action for certain harms, the promotion of transparency regarding the use of AI and how AI is trained and operates, and special protections for privacy and children. Some noteworthy aspects of the framework are described below.
- Establish a licensing regime administered by an independent oversight body. Under the framework, Congress would require developers of general purpose AI, or of AI used in high-risk situations, to register with an independent oversight body. Developers would also be required to implement risk management, pre-deployment testing, data governance, and adverse incident reporting programs. In addition to administering the licensing regime, the oversight body would be tasked with auditing AI systems for adherence to ethical and safety standards and would have the authority to enforce compliance alongside state attorneys general.
- Ensure legal accountability for harms. The framework prioritizes legal accountability by indicating that future legislation should hold AI developers responsible for harms caused by their AI, particularly harms related to privacy and discrimination. Importantly, the framework proposes that the legal safe harbor currently protecting online platforms from certain liability (Section 230 of the Communications Decency Act) would not extend to AI solutions. The intent is to encourage the development of safer AI systems by creating a remedy against AI developers that fail to build them.
- Promote transparency. The framework also calls for increased transparency in AI systems to build trust among users and stakeholders. Transparency requirements could include requiring companies to provide affirmative notice to users when they are interacting with AI the company deploys, as well as requiring developers to disclose to the users deploying their AI systems information about the AI’s training data, limitations, accuracy, and safety. An oversight body could also create an “adverse event” database to track (and possibly publish) incidents of AI causing harm.
- Protect consumers’ privacy and children. The framework proposes safeguards to ensure that AI products and services are designed with consumers’ best interests in mind, particularly with respect to vulnerable groups such as children, by requiring companies deploying AI to implement safety brakes to prevent harm.
Also in September 2023, the Senate Committee on Health, Education, Labor & Pensions (the Committee) published a white paper proposing a different framework for placing guardrails around AI. The white paper stated that Congress should not adopt a “sweeping, one-size-fits-all approach for regulating AI . . .” It also stated that the Food and Drug Administration’s (FDA’s) framework for preclinical and clinical investigation of new drugs is likely well suited to be adapted to the use of AI in drug-related research and development. The Committee noted, however, that the FDA’s current framework for regulating medical devices that incorporate AI may not be adequate and that Congress may need to pass targeted legislation to help the FDA regulate such devices. The Committee further called for a framework that promotes transparency, develops methods to measure AI effectiveness, protects privacy, and clarifies potential liability related to the use of AI in the health care industry.
White House activity
In October 2022, the White House published a “Blueprint for an AI Bill of Rights,” a white paper focused on five primary topics: safety and effectiveness, discrimination, privacy, transparency, and human involvement in decision-making.
About a year later, on October 30, 2023, the White House issued its Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Two days later, on November 1, 2023, the Office of Management and Budget (OMB) released a proposed memorandum for the heads of executive departments and agencies to use in implementing the executive order’s actions. Below, we highlight eight noteworthy features of the executive order in areas where significant action will likely occur over the next year.
1. The executive order stresses prioritizing federal investment in AI projects, training future AI researchers, and establishing national AI research institutes.
2. The Department of Health and Human Services (HHS) is required to establish an AI Task Force that will create a strategic plan (including guidance, resources, and frameworks) dedicated to responsible AI deployment and use related to:
- Predictive and generative AI-enabled technology
- Long-term safety and real-world performance monitoring
- Equity principles
- Safety, privacy, and security
- Documentation
- State and local settings
- Promotion of workplace efficiency and satisfaction
3. HHS is required to establish an AI safety program that will (a) identify and track clinical errors produced by AI services in a centralized tracking repository and (b) propose recommendations for avoiding such harms.
4. The executive order directs all federal agencies, including HHS, to push for increased compliance with federal nondiscrimination laws and regulations, including through additional education and greater enforcement efforts.
5. HHS is tasked with evaluating the current quality of AI services and with developing the policies and infrastructure needed to support its long-term oversight of AI quality, including in medical devices.
6. The executive order requires HHS to develop a strategy for regulating the use of AI in the drug-development process, including the evaluation of resource needs, areas where HHS needs more authority, topics for future rulemaking, and objectives for regulation.
7. The White House calls on Congress to pass a federal privacy law. The HHS AI Task Force’s strategic plan is also required to incorporate safety, privacy, and security standards targeting the software development life cycle.
8. The National Institute of Standards and Technology (NIST) is required to develop guidelines, standards, and best practices for AI safety and security and to promote industry standards. This work would include developing a companion resource to NIST’s Secure Software Development Framework to incorporate secure development practices for generative AI and foundation models. NIST has already been influential in the AI space through its “AI Risk Management Framework.”
Takeaways
To this point, Congress has operated in a learning and information-gathering stage, and its proposed frameworks are not particularly sophisticated or detailed. That may be a good result if it means Congress will not aspire to regulate AI as heavily as the European Union’s AI Act does. Congress has signaled that it may first focus on pushing AI developers and companies that deploy AI to take a human-centric approach by, for example, creating mechanisms for individuals to hold them accountable for certain types of harms caused by AI. In addition, Congress appears focused on transparency for the purposes of monitoring the use of AI, preventing the perpetuation of discriminatory practices through AI, and allowing consumers to better protect themselves. Companies that use AI in the health care industry should therefore consider how they can operationalize transparency and fairness in their use of AI, because the risk of failing to do so could increase materially as Congress takes action.
In the medium term, the White House’s actions will likely have a much greater impact on AI use in the health care industry than anything Congress has done so far. We expect that, a year from now, HHS will have developed substantial expertise in the use cases, the actual harms (e.g., physical safety, privacy, and discrimination) experienced by consumers, and the best practices related to the development and use of AI in the health care industry. Given the broad scope of the HHS AI Task Force’s mandate (described above) and the potential regulation of all of those areas, companies in the health care industry will likely need to implement and maintain a robust, multidisciplinary AI governance program to materially reduce their legal risks. An AI governance program can help companies prepare now to meet future requirements that may stem from what HHS learns in 2024.
We have previously written about the core elements of an AI governance program. Compliance-by-design efforts will be significantly more effective than trying to bolt compliance measures onto an AI system post-deployment. Thus, companies that focus now on developing an AI governance team and program can help prevent expensive mistakes and costly remediation of an AI system later.
In-depth 2023-261