Our latest Tech Law Talks podcast covers the legal and practical implications of AI-enhanced cyberattacks; the EU AI Act and other relevant regulations; and best practices for designing, managing and responding to AI-related cyber risks. Frankfurt partner Christian Leuthner and London partner Cynthia O'Donoghue, together with counsel Asélle Ibraimova, share their insights and experience from advising clients across various sectors and jurisdictions.
Transcript:
Intro: Hello, and welcome to Tech Law Talks, a podcast brought to you by Reed Smith's Emerging Technologies Group. In each episode of this podcast, we will discuss cutting-edge issues on technology, data, and the law. We will provide practical observations on a wide variety of technology and data topics to give you quick and actionable tips to address the issues you are dealing with every day.
Christian: Welcome to Tech Law Talks and our new series on AI. Over the coming months, we'll explore the key challenges and opportunities within the rapidly evolving AI landscape. Today, we will focus on AI and cybersecurity threats. My name is Christian Leuthner. I'm a partner at Reed Smith's Frankfurt office, and I'm joined by my colleagues Cynthia O'Donoghue and Asélle Ibraimova from the London office.
Cynthia: Morning, Christian. Thanks.
Asélle: Hi, Christian. Hi, Cynthia. Happy to be on this podcast with you.
Christian: Great. In late April 2024, the German Federal Office for Information Security identified that AI, and in particular generative AI and large language models (LLMs), is significantly lowering the barriers to entry for cyber attacks. The technology enhances the scope, speed and impact of malicious activities because it simplifies social engineering and makes the creation of malicious code faster, simpler and accessible to almost everybody. The EU legislator had such attacks in mind when creating the AI Act. Cynthia, can you tell us a bit about what the EU regulator particularly saw as a threat?
Cynthia: Sure, Christian. I'm going to start by saying there's a certain irony in the EU AI Act: it says very little about the threat of AI, even though discussion of security and keeping AI systems safe, particularly high-risk systems, is sprinkled throughout. But the EU AI Act contains a particular article focused on the design of high-risk systems and cybersecurity, and the main concern there is really around the potential for data poisoning and model poisoning. Part of the principle behind the EU AI Act is security by design. The idea is that the EU AI Act regulates high-risk AI systems such that they need to be designed and developed in a way that ensures an appropriate level of accuracy, robustness and cybersecurity, and that prevents such things as data poisoning and model poisoning. It also interacts with the horizontal laws across the EU. Because the EU AI Act treats AI as a product, it brings into play other EU legislation, like the Directive on the Resilience of Critical Entities and the newest cybersecurity regulation in relation to digital products. When we think about AI, most of our clients are concerned about the use of AI systems and ensuring that they're secure. But based on that German study you mentioned at the beginning of the podcast, I think less attention is paid to the use of AI as a threat vector for cybersecurity attacks. So, Christian, what do you think is the relationship between the AI Act and the Cyber Resilience Act, for instance?
Christian: Yeah, I think, and you mentioned it already, the legislator saw a link there: high-risk AI models need to implement a lot of security measures, and the new Cyber Resilience Act requires certain stakeholders in software and hardware products to implement security measures as well, imposing a number of different obligations on them. To avoid over-engineering these requirements, the AI Act already takes into account that if a high-risk AI model is in scope of the Cyber Resilience Act, the providers of those AI models can rely on the cybersecurity requirements they have implemented under the Cyber Resilience Act. So they don't need to double their efforts; they can rely on what they have already implemented. But it would be great if we were not only applying the law, but also had some guidance from public bodies or authorities on this. Asélle, do you have something in mind that might help us with implementing those requirements?
Asélle: Yeah, ENISA has been working on AI and cybersecurity in general, and last year it produced a paper called Multilayer Framework for Good Cybersecurity Practices for AI. It will need to be updated over time, but it provides a very good summary of various AI initiatives throughout the world. It generally notes that when thinking of AI, organizations need to take into consideration general system vulnerabilities and vulnerabilities in the underlying ICT infrastructure. And when it comes to the use of AI models or systems, the various threats you already talked about, such as data poisoning, model poisoning and other kinds of adversarial attacks on those systems, should also be taken into account. In terms of specific guidelines or standards, ENISA mentions ISO/IEC 42001, the AI management system standard. Another noteworthy set of guidelines it mentions is the NIST AI Risk Management Framework, the US framework. Both of these are to be used on a voluntary basis, but their aim is to ensure developers create trustworthy AI that is valid, reliable, safe, secure and resilient.
Christian: Okay, that's very helpful. I think it's fair to say that AI will increase the already high likelihood of being subject to a cyber attack at some point, and that this is a real threat to our clients. We all know from practice that you cannot defend against everything. You can be cautious, but there may be occasions when you are subject to an attack, when there has been a successful attack or there is a cyber incident. If it is caused by AI, what do we need to do as a first responder, so to say?
Cynthia: Well, there are numerous notification obligations in relation to attacks, depending on the type of data or the entity involved. For instance, if a breach resulting from an AI attack involves personal data, then there are notification requirements under the GDPR. If you're in a certain sector that's using AI, one of the newest pieces of legislation to go into effect in the EU, the Network and Information Security Directive, tiers organizations into essential entities and important entities. Depending on whether the sector the particular victim is in is subject to the essential entity requirements or the important entity requirements, there's a notification obligation under NIS 2, for short, in relation to vulnerabilities and attacks. And ENISA, which Asélle was just talking about, has most recently issued a report for network and other providers that are essential entities under NIS 2, on what is considered a significant vulnerability or a material event that would need to be notified to the regulatory entity in the relevant member state for that particular sector. And I'm sure there are other notification requirements. For instance, financial services are subject to a different regulation, aren't they, Asélle? So why don't you tell us a bit more about the notification requirements for financial services organizations?
Asélle: The EU Digital Operational Resilience Act applies similar requirements to the supply chain of financial entities, specifically to ICT third-party providers, a category that AI providers may fall into. Article 30 of DORA requires, for example, specific contractual clauses on cybersecurity around data, including provisions on availability, authenticity, integrity and confidentiality. There are additional requirements for those ICT providers whose product, say an AI product provided as an ICT product, supports a critical or important function in the provision of the financial services. In that case, there will be additional requirements, including on ICT security measures. In practical terms, it means that organizations regulated in this way are likely to ask AI providers to have additional tools, policies and measures in place, and to provide evidence that such measures have been taken. It's also worth mentioning the developments on AI regulation in the UK. The previous UK government wanted to adopt a flexible, non-binding approach to regulating AI. The Labour government, however, appears to want to adopt a binding instrument, although it is likely to be of limited scope, focusing only on the most powerful AI models. There is no clarity yet on whether the use of AI in cyber threats will be regulated in any specific way. Christian, I wanted to direct a question to you: how about the use of AI in supply chains?
Christian: Yeah, I think it's very important to look at the company's entire supply chain and its contractual relationships, because most of our clients, and most companies out there, do not develop or create their own AI. They will use AI from vendors, or their suppliers or vendors will use AI products to be more efficient. And all the requirements, for example the notification requirements that Cynthia just mentioned, do not stop applying if you use a third party. Even if you engage a supplier or a vendor, you are still responsible for defending against cyber attacks and for reporting cyber incidents or attacks if they concern your company, or at least where there is a high likelihood that they do. So it's crucial to have those scenarios in mind when you start a procurement process and begin negotiating contracts: address those topics in the contract with the vendor or supplier, include notification obligations in case there is a cyber attack at that vendor, and secure audit or inspection rights, depending on your negotiating position. At the very least, make sure that you are aware if something happens, so that a risk that does not materialize directly at your company cannot sneak in through the back door via a vendor. It's really important to always keep an eye on your supply chain and your third-party vendors and providers.
Cynthia: That's such a good point, Christian. And ultimately, I think it's best for organizations to think about it early. It really needs to be embedded as part of any kind of supply chain due diligence: perhaps a new question needs to be added to the supplier due diligence questionnaire about whether they use AI, and about the cybersecurity around the AI that they use or contribute to. We've all read and heard in the papers, and been exposed to through client counseling, cybersecurity breaches that came through the supply chain and were not direct attacks on the client itself. And yes, the contractual provisions are really important, like you said: making sure that the supplier notifies the customer very early on, and that there are cooperation and audit mechanisms. Asélle, anything else to add?
Asélle: Yeah, I totally agree with what was said. I think beyond the legal requirements, it is ultimately a question of defending your business and your data. Whether or not it's required by your customers or by specific legislation to which your organization may be subject, it's ultimately about whether your business can withstand more sophisticated cyber attacks. I therefore agree with both of you that organizations should take supply chain resilience, cybersecurity and the generally higher risk of cyber attacks more seriously and put measures in place; it is better to invest now than later, after an attack. I also think it is important for in-house teams to work together as cybersecurity threats are enhanced by AI, and that means the legal, IT security, risk management and compliance teams. Sometimes, for example, legal teams might think that the IT security or incident response policies are owned by IT, so not much contribution is needed from them. Or the IT security teams might think the legal requirements are in the legal team's domain, so they'll wait to hear from legal on how to reflect them. Working in silos is not beneficial. IT policies, incident response policies and training material on cybersecurity should be regularly updated by IT teams and reviewed by legal to reflect the legal requirements. The teams should collaborate on running tabletop incident response and crisis response exercises, because in a real scenario they will need to work hand in hand to respond efficiently.
Cynthia: Yeah, I think you're right, Asélle. Obviously, any kind of breach is going to be multidisciplinary, in the sense that you're going to have people who understand AI and the attack vector that used the AI, while other people in the organization will have a better understanding of the notification requirements, whether that's notification under the cybersecurity directives and regulations or under the GDPR. And if it's an attack that's come through the supply chain, there needs to be coordination with the supplier management team as well. So it's definitely multidisciplinary, and it requires cooperation and information sharing, done in accordance with the regulatory requirements that we've talked about. In sum, you have to think about AI and cybersecurity from a design perspective as well as a supply chain perspective, and about how AI might be used for attacks, whether that's exploiting vulnerabilities in a network, data poisoning or model poisoning. Think about the horizontal requirements across the EU for keeping systems safe, and, if you're the unfortunate victim of a cybersecurity attack where AI has been used, think about the notification requirements and ultimately the multidisciplinary team that needs to be put in place. So thank you, Asélle, and thank you, Christian. We really appreciate the time to talk together this morning. And thank you to our listeners. Please tune in for our next Tech Law Talks on AI.
Asélle: Thank you.
Christian: Thank you.
Outro: Tech Law Talks is a Reed Smith production. Our producers are Ali McCardell and Shannon Ryan. For more information about Reed Smith's emerging technologies practice, please email techlawtalks@reedsmith.com. You can find our podcasts on Spotify, Apple Podcasts, Google Podcasts, reedsmith.com, and our social media accounts.
Disclaimer: This podcast is provided for educational purposes. It does not constitute legal advice and is not intended to establish an attorney-client relationship, nor is it intended to suggest or establish standards of care applicable to particular lawyers in any given situation. Prior results do not guarantee a similar outcome. Any views, opinions, or comments made by any external guest speaker are not to be attributed to Reed Smith LLP or its individual lawyers.
All rights reserved.
This transcript is auto-generated.