Entertainment and Media Guide to AI

Film and TV

Read time: 6 minutes

Movies have been predicting the rise of AI for a long time, and usually not in a positive way. So it is with a mixture of excitement and trepidation that the film & TV industry has begun to embrace the advent of AI technology.


In fact, technology that relies on relatively basic AI models has been popping up in the screen sector for several years already. The most obvious use can be seen in recent productions such as Martin Scorsese’s The Irishman and the upcoming Indiana Jones and the Dial of Destiny, where it has been used, drawing on so-called likeness rights (which include the look, voice and other discernible aspects of an actor), to turn back the hands of time.

Harnessing the same technology that has been used to produce deep fakes, an AI model is trained on historical footage of an actor’s facial structure and movements, and then software applies these learnings to live-action footage of the current-day actor, to recreate their youthful self. And so, an 80-year-old Harrison Ford can appear on-screen in his forties again, allowing a franchise to continue without the need to physically replace the actor who is known and loved for a particular role.

The industry is betting that these films are not one-off gimmicks: tech company Metaphysic’s AI-driven capabilities have impressed Hollywood to such a degree that the talent agency CAA has signed a deal under which Metaphysic will develop generative AI tools and services for CAA’s top-tier talent roster. Robert Zemeckis is a notable fan and is working closely with Metaphysic on a new film called Here, in which an ensemble of characters will be shown at various stages of their lives, but each character will be played by a single actor.

The advent of accessible AI technology is also helping propel the industry to new frontiers behind the camera, in both the pre- and post-production stages. AI tools can now analyze a script against thousands of examples to ascertain potential profitability against certain benchmarks, as well as scan for representation and ethical biases. Some services even produce detailed breakdowns for clients of how a chosen script compares to others, with models being trained to assess the emotional response of scenes, based on facets of cognitive psychology.

Faced with the rapid onset of AI-powered chatbots, the Writers Guild of America (WGA) has proposed that its writers can use AI, but that AI-generated material will not be considered “literary material” or “source material.” These terms are important when deciding writing credits (and the corresponding residual compensation) for WGA members, and the implication is that the WGA considers AI to be a tool, rather than a writer itself. This position echoes the stance that the US Copyright Office has taken to date, which is that elements created by AI will not be protected, but the totality of a work (which may include AI-generated elements) can be.

Specifically, the WGA has said: “like all research material, [AI-generated material] has no role in guild-covered work, nor in the chain of title in the intellectual property. It is important to note that AI software does not create anything. It generates a regurgitation of what it’s fed. If it’s been fed both copyright-protected and public domain content, it cannot distinguish between the two. Its output is not eligible for copyright protection, nor can an AI software program sign a certificate of authorship. To the contrary, plagiarism is a feature of the AI process.”

It is thought that this distinction has been made intentionally by the WGA, to allow writers to benefit from AI software without having to worry about being pulled into credit arbitrations with that software’s manufacturers, who might otherwise claim that they are a legal author of a script that has been drafted by the software. However, the WGA has yet to opine publicly on a situation where an AI program writes a script entirely on its own, without prompts from a human.

AI models can also help streamline the editing process, reducing the time and costs associated with relatively mundane framing, cutting and sound design tasks. Leading AI-driven tools already available can analyze video streams in real time and detect key moments using various audio, visual and textual cues. For sports, these include points, goals, penalties and other major moments, an application that has proven hugely successful.

Elsewhere, broadcasters are exploring potential uses for AI to facilitate the clearance process for third-party intellectual property, as well as to help comply with content regulations in different countries. AI could relatively easily be harnessed to help create multiple versions of the same content, each specifically tailored for a range of languages, cultures and ages.

Large national broadcasters and studios also often benefit from content archives spanning decades, which could be mined for training purposes, including to inform (or even generate) the production of new content.

In this sense, AI is already being used across the film and TV industry to augment human creativity, rather than replace it entirely. However, these uses (and the many implications of them) also create unease.

Hollywood’s labor unions are watching closely, with SAG-AFTRA recently issuing a statement supporting the Human Artistry Campaign’s core principles for AI, reaffirming its position on digital voice, likeness and performance simulations.

SAG-AFTRA views the digital simulation of a performer as the creation of a new performance that is the subject of bargaining with the union. In deeming a digital simulation a ‘new performance,’ SAG-AFTRA attempts to bring the use and reuse of such digital simulations under its collectively bargained contract provisions – namely, consent and negotiation of compensation, which are mandatory subjects of bargaining under the National Labor Relations Act.

In taking this position, SAG-AFTRA has planted a flag in the AI space, making clear that while its current collective bargaining agreements may not specifically reference AI, it views the use of AI as a potential circumvention of a performer’s rights.

The same legal issues explored in detail in this guide also apply across the areas mentioned above. The ability of an AI model to break down a script is only as commercially viable as the data it has been trained on, and the legality of the use of that training set. Similarly, copyright law is very much playing catch-up in most countries when it comes to AI-driven tools creating new output based on existing content.

At this point, the use of AI in the film and TV industry cannot be stopped. Instead, the question for businesses, regulators and lawyers is: how effectively can it be controlled?
