Authors: Wendell J. Bartnick, Vicki J. Tankle
Reed Smith partners share insights about U.S. Department of Health and Human Services initiatives to stave off misuse of AI in the health care space. Wendell Bartnick and Vicki Tankle discuss a recent executive order that directs HHS to regulate AI’s impact on health care data privacy and security and to investigate whether AI is contributing to medical errors. They explain how HHS collaborates with non-federal authorities to expand AI-related protections, and how the agency is working to ensure that AI outputs are not discriminatory. Stay tuned as we explore the implications of these regulations and discuss the potential benefits and risks of AI in healthcare.
Transcript:
Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Wendell: Welcome to our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, we will focus on AI in healthcare. My name is Wendell Bartnick. I'm a partner in Reed Smith's Houston office. I have a degree in computer science and focused on AI during my studies. Now, I'm a tech and data lawyer representing clients in healthcare, including providers, payers, life sciences, digital health, and tech clients. My practice is a natural fit given all the innovation in this industry. I'm joined by my partner, Vicki Tankle.
Vicki: Hi, everyone. I'm Vicki Tankle, and I'm a digital health and health privacy lawyer based in Reed Smith's Philadelphia office. I've spent the last decade or so supporting health industry clients, including healthcare providers, pharmaceutical and medical device manufacturers, health plans, and technology companies, as they navigate the synergies between healthcare and technology, and advising on the unique regulatory risks that are created when technology and innovation far outpace our legal and regulatory frameworks. We're oftentimes left managing risks in the gray, which as of today, July 30th, 2024, is where we are with AI and healthcare.

So when we think about the use of AI in healthcare today, there's a wide variety of AI tools that support the health industry. And among those tools, there's a broad spectrum of uses of health information, including protected health information, or PHI, regulated by HIPAA, both to improve existing AI tools and to develop new ones. If we think about the spectrum as measuring the value or importance of the PHI, the individual identifiers themselves, it may be easier to understand the far ends of the spectrum, and easier to understand the risks at each end. Regulators and the industry have generally categorized uses of PHI in AI into two buckets, low risk and high risk. But the middle is more difficult, and it's where there can be greater risk, because it's where we find the use or value of PHI in the AI model to be potentially debatable.

So at one end of the spectrum, the lower-risk end, there are AI tools such as natural language processors, where individually identifiable health information is not central to the AI model. Instead, for this example, it's the handwritten notes of the healthcare professional that the AI model learns from. And with more data and more notes, the better the tool gets at recognizing the letters themselves, not the words the letters form, such as a patient's name, diagnosis, or lab results, and the better the tool operates. At the other end of the spectrum, the higher-risk end, there are AI tools such as patient-facing next-best-action tools that are based on an individual patient's medical history, their reported symptoms, their providers, their prescribed medications, potentially their physiological measurements, or similar information, and they offer real-time customized treatment plans with provider oversight. Provider-facing clinical decision support tools similarly support the diagnosis and treatment of individual patients based on the individual's information.

And then in the middle of the spectrum, we have tools like hospital logistics planners. So think of tools that look at when the patient was scheduled for an x-ray, when they were transported to the x-ray department, how long they waited before they got the x-ray, and how long after they received the x-ray they were provided with the results. These tools support population-based activities that relate to improving health or reducing costs, as well as case management and care coordination, which raises the question: do we really need to know that patient's identity for the tool to be useful? Maybe yes, if we also want to know the patient's sex, their date of birth, their diagnosis, their date of admission. Otherwise, we may want to consider whether the tool can operate effectively without that individually identifiable information. What's more, there's no federal law that specifically addresses the use of regulated health data in AI.
HIPAA was first enacted in 1996 to encourage healthcare providers and insurers to move away from paper medical and billing records and to get online. And although HIPAA has been updated over the years, the law remains outdated in that it does not contemplate the use of data to develop or improve AI. So we're faced with applying an old statute to new technology and data uses, again operating in a gray area, which is not uncommon in digital health or for our clients. To that end, there are several strategies that our HIPAA-regulated clients are considering as permissible ways to use PHI in the context of AI: treatment, payment, and healthcare operations activities for covered entities; proper management and administration for business associates; certain research activities; individual authorizations; or de-identified information.
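[Editor's note: To make the de-identification strategy Vicki mentions more concrete, here is a minimal Python sketch in the style of HIPAA's Safe Harbor method. The record structure, field names, and the partial restricted-ZIP list are illustrative assumptions, not a compliance tool; actual Safe Harbor de-identification requires removing all 18 identifier categories, and Expert Determination is the alternative path under the rule.]

```python
# Illustrative sketch of Safe Harbor-style de-identification.
# Field names and record structure are hypothetical; a real implementation
# must address all 18 identifier categories in 45 CFR 164.514(b)(2).

from copy import deepcopy

# Hypothetical direct identifiers to drop outright (a subset of the 18 categories).
DIRECT_IDENTIFIERS = {
    "name", "street_address", "phone", "fax", "email",
    "ssn", "medical_record_number", "health_plan_id", "account_number",
}

# Illustrative subset: three-digit ZIP prefixes with small populations must
# be replaced with "000" under Safe Harbor (see HHS guidance for the full list).
RESTRICTED_ZIP3 = {"036", "059", "102", "203", "556", "692", "821", "878"}

def deidentify(record: dict) -> dict:
    """Return a copy of `record` with identifiers removed or generalized."""
    out = deepcopy(record)
    for field in DIRECT_IDENTIFIERS:
        out.pop(field, None)
    # Generalize dates to year only (ages over 89 would also need a "90+" bucket).
    if "date_of_birth" in out:
        out["birth_year"] = out.pop("date_of_birth")[:4]  # assumes ISO "YYYY-MM-DD"
    # Truncate ZIP codes to three digits; zero out low-population prefixes.
    if "zip" in out:
        zip3 = out.pop("zip")[:3]
        out["zip3"] = "000" if zip3 in RESTRICTED_ZIP3 else zip3
    return out

if __name__ == "__main__":
    patient = {
        "name": "Jane Doe",
        "date_of_birth": "1984-03-07",
        "zip": "19103",
        "diagnosis": "type 2 diabetes",
        "medical_record_number": "MRN-0012345",
    }
    print(deidentify(patient))
    # {'diagnosis': 'type 2 diabetes', 'birth_year': '1984', 'zip3': '191'}
```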
Wendell: So even though HIPAA hasn't been updated to apply directly to AI, that doesn't mean that HHS has ignored it. AI, as we all know, has been used in healthcare for many years, and in fact, HHS has issued some guidance previously. The White House's Executive Order 14110, issued back in the fall of 2023 and titled "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence," jump-started additional HHS efforts. So I'm going to talk about seven items in that executive order that apply directly to the health industry, and then we'll talk about what HHS has done since.

First, the executive order requires the promotion of additional investment in AI to help prioritize AI projects, including those addressing safety, privacy, and security. Second, the executive order requires that HHS create an AI task force that is supposed to meet and create a strategic plan covering several AI topics, including AI-enabled technology, long-term safety and real-world performance monitoring, equity principles, safety, privacy, and security, documentation, state and local rules, and the promotion of workplace efficiency and satisfaction. Third, HHS is required to establish an AI safety program that is supposed to identify and track clinical errors produced by AI and store that information in a centralized database. Then, based on what that database contains, the program is supposed to propose recommendations for preventing errors and avoiding harms from AI. Fourth, the executive order requires that all federal agencies, including HHS, focus on increasing compliance with existing federal law on non-discrimination, which includes education and greater enforcement efforts. Fifth, HHS is required to evaluate the current quality of AI services, which means developing policies, procedures, and infrastructure for overseeing AI quality, including with respect to medical devices. Sixth, HHS is required to develop a strategy for regulating the use of AI in the drug development process. Of course, FDA has already been regulating this space for a while. And then seventh, the executive order actually calls on Congress to pass a federal privacy law. But even without that, HHS's AI task force is including privacy and security as part of its strategic plan.

So given those seven requirements for HHS to cover, what has the agency done since the fall of 2023? Well, as of the end of July 2024, HHS has created a funding opportunity for applicants to receive money if they develop innovative ways to evaluate and improve the quality of healthcare data used by AI. HHS has also created the AI task force. Many of our clients are asking us about AI governance and what they can do to mitigate risk from AI, and the task force has issued a plan for state, local, tribal, and territorial governments related to privacy, safety, security, bias, and fraud. Even though that plan applies to the public sector, our private-sector clients should take a look at it so that they know what HHS is thinking in terms of AI governance. Along with this publication, NIST also produces several excellent resources that companies can use to help them with their AI governance journey. Also important is that HHS has recently restructured internally to consolidate its ability to regulate technology, and areas connected to technology, under ONC.
And ONC, interestingly enough, has posted job postings for a chief AI officer, a chief technology officer, and a chief data officer. So we would expect that once those roles are filled, they will be highly influential in how HHS looks at AI, both internally and externally, and in the strategic thinking and positioning of HHS going forward with respect to AI. Our provider and tech clients have also been interested in how AI, and what HHS is saying about it, affects certified health IT. Earlier this year, ONC published the HTI-1 rule, which, among other things, establishes transparency requirements for AI that's offered in connection with certified health IT. The compliance deadline for that rule is December 31st of this year. HHS has also been focusing on non-discrimination, just as the executive order requires. And so our clients are asking whether they can use AI for certain processes and procedures. In fact, it appears that HHS strongly endorses the use of AI and technology to improve patient outcomes; it has certainly not published anything that says AI should not be used. Indeed, CMS issued a final rule this year, along with FAQs, clarifying that AI can be used to process claims under Medicare Advantage plans, as long as there's human oversight and all other laws are complied with. So there is no indication at all from HHS that using AI is somehow prohibited or that companies should be worried about using it, as long as they comply with existing law. After the White House executive order in the fall of 2023, HHS had a lot of work to do. It has done some, but there's still a lot to do related to AI, and we should expect more guidance and activity in the second half of 2024.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's Emerging Technologies Practice, please email techlawtalks@reedsmith.com. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
Transcript is auto-generated.