Entertainment and Media Guide to AI

Legal issues in AI part 2

Read time: 7 minutes

Introduction

As is evident from other sections of this guide, legal issues arising from AI are wide-ranging and complex, touching numerous areas of law. Beyond issues relating to specific IP rights, the proliferation of AI systems and their potential uses – ranging from medical diagnosis to crime detection – only exacerbates the complexities surrounding AI regulation.

Presently, neither legislation nor judicial decisions have definitively resolved these issues, leaving a host of unanswered questions surrounding litigation over AI systems and AI-generated content. Given that the approach to AI law differs across countries, we can also expect jurisdictional issues to arise in AI disputes. We address some of the issues that are likely to emerge below.

Determining liability

Once a rights holder has identified a potential infringement of their rights, or other unlawful conduct for which compensation may be payable, one of the primary challenges will be to identify the party who is legally responsible for the actions of the system. This can be a complex issue: because AI is often the product of collaboration between multiple parties, liability may lie with any of those involved in the system’s development, deployment or operation, such as its developers, operators, users, owners or manufacturers. Determining who is responsible for any potentially unlawful acts requires a thorough understanding of the roles and responsibilities of each party involved.

This is a particularly pertinent issue given the ongoing attempts by some to move closer towards legal personality for AI systems, such as AI pioneer Dr. Stephen Thaler’s current multi-jurisdictional litigation seeking to convince courts that an AI system can be named as the inventor of a patent. This, in turn, has prompted campaigns such as the Human Artistry Campaign, which aims to ensure that copyright protects only human intellectual creativity.

For some causes of action, the state of mind and intention of the defendant are relevant to establishing liability, or to the calculation of the damages to which the claimant is entitled. Examples include malicious falsehood (sometimes referred to as trade libel) and the calculation of aggravated damages in defamation cases. These concepts may have to be interpreted in a manner that keeps them applicable to reputational disputes involving AI. Such cases have already begun to emerge: for example, the mayor of an Australian town threatened a libel claim against OpenAI, the company behind ChatGPT, after the AI system falsely claimed that he had served time in prison in relation to a bribery scandal, when he was, in fact, the whistleblower who exposed the scheme.

Establishing jurisdiction

Another potential challenge in enforcing rights infringed by an AI system will be determining the jurisdiction in which legal action can be commenced. Absent a clear, universal legal framework, this may have to be determined on a case-by-case basis, which is likely to lead to forum shopping by claimants. Where an AI system is developed and/or deployed across different jurisdictions, it may be difficult to determine which jurisdiction’s laws and regulations apply.

Whilst certain actions can be commenced in the territory where the damage is suffered (under the so-called “accessibility criterion” in the EU, for instance), others require determining where the infringing acts were committed or where the defendant is situated. This can prove challenging for disputes involving AI systems, as it is not always easy to determine, for example, in which territory certain acts are deemed to have been carried out.

In the EU, the AI Liability Directive proposes common rules for member states for a non-contractual civil liability regime relating to damage caused by AI, particularly high-risk AI systems. It clarifies that the regime is intended to compensate for damage caused intentionally or negligently, and it creates a rebuttable presumption of causality where a duty of care has been breached (such as duties relating to training data, transparency, human oversight, accuracy, robustness and cybersecurity) and the output produced by an AI system (or its failure to produce an output) causes damage. For non-high-risk AI systems, the presumption will apply only where it is excessively difficult for a claimant to show a causal link and the claimant does not have sufficient expertise or evidence to prove it; for high-risk AI systems, it will apply more widely. The AI Liability Directive also includes provisions for the disclosure of evidence relating to high-risk AI systems. The regime applies to providers and/or users of AI systems that are available on, or operating within, the Union market.

The UK’s white paper on AI regulation proposes a regulatory regime that it describes as pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative. As in many territories, the legal framework governing AI is still in development, so it is critical for users and developers alike to stay up to date with any changes.

Key takeaways
  • Litigation over AI-generated material has yet to be conclusively addressed by statute or case law
  • Issues such as liability, jurisdiction and the burden of proof must be given careful consideration on a case-by-case basis until the law develops
  • The increased prevalence of AI systems also gives rise to practical problems, such as the admissibility and credibility of evidence gathered or generated using AI