Use of artificial intelligence in advertising
Brands and agencies are increasingly using ChatGPT and similar AI tools for a variety of purposes. Some have even incorporated their services directly into the chatbot or formed alliances with enterprise consultancies. For instance, brands can integrate AI directly into their own digital experiences (e.g., web or mobile) through ChatGPT's API platform. Brands can also offer recommendation engines or shopping assistants for conversational experiences, or combine their CRM data with AI to better understand customer behavior and preferences.
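As a rough illustration of the kind of integration described above, the sketch below builds a request payload for a branded shopping assistant. The endpoint and model name reflect OpenAI's public Chat Completions API, but the brand name, prompt wording and helper function are hypothetical assumptions, not any particular brand's implementation.

```python
import json

# OpenAI's public Chat Completions endpoint; the brand's API key would be
# sent in an Authorization header when the request is actually made.
OPENAI_CHAT_URL = "https://api.openai.com/v1/chat/completions"

def build_assistant_request(user_message: str, brand: str) -> dict:
    """Build the JSON payload for a hypothetical branded shopping assistant."""
    return {
        "model": "gpt-4o-mini",
        "messages": [
            {
                "role": "system",
                "content": f"You are a helpful shopping assistant for {brand}. "
                           "Recommend products and answer sizing questions.",
            },
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.7,
    }

payload = build_assistant_request("What jacket suits a rainy climate?", "Acme Outdoor")
print(json.dumps(payload, indent=2))
# In production, this payload would be POSTed to OPENAI_CHAT_URL,
# e.g. with requests.post(OPENAI_CHAT_URL, json=payload, headers=...).
```

The system message is where a brand injects its own context (catalog, tone, policies), which is what turns a general-purpose chatbot into a branded conversational experience.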
For marketers, ChatGPT is useful for content creation, lead generation, email marketing, social media management and market research, among a host of other needs and capabilities. In terms of content creation or ad copy creation, while it is still important to have an exceptional team with an accurate pulse on the target audience and demographics, ChatGPT can help streamline the process. It can draft product descriptions, headlines, blog posts, social media posts, video scripts, calls to action and other written content.
For example, Lalaland.ai uses advanced artificial intelligence to enable fashion brands and retailers to create hyper-realistic models of every body type, age, size and skin tone. With these body-inclusive avatars, the company aims to create a more inclusive, personal and sustainable shopping experience for fashion brands, retailers and customers.
Coca-Cola entered into a deal earlier this year with management consulting firm Bain & Company and OpenAI to enhance the creativity of its marketing department, and within weeks, Coca-Cola launched an AI-centric campaign using the latest versions of DALL-E and ChatGPT. The campaign, titled “Create Real Magic,” gave consumers access to a library of hundreds of the company’s visual assets – including the Coca-Cola logo and various bottle and can designs – and invited them to use these assets to create AI-generated artwork. In under two weeks, people created 120,000 different images through the campaign website, with consumers spending an average of seven minutes on the platform.
On the spirits side, Absolut recently partnered with ad agency Ogilvy and used AI for an ad campaign in Canada called “Mix Your Neighborhood.” The campaign created special cocktails to embody the spirit of Canada’s diverse and distinctive neighborhoods. In celebration of National Margarita Day in February 2022, Patrón launched the “Patrón dream margarita generator,” an AI system that generates images of margaritas based on responses to three prompts: location, flavor and garnish.
Use of generative AI to create advertising content and materials
Generative AI tools are useful for creating advertising content and materials. Generative AI refers to machine learning models that learn patterns from a set of training data and then create new content. Once trained, a model can be used to generate images, text or music, among other content.
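To make the "trained on input data, then generates new content" idea concrete, here is a deliberately tiny sketch: a bigram model that learns which word follows which in a training corpus, then samples new word sequences. Real generative AI systems use large neural networks rather than a lookup table, and the slogan-like corpus below is invented for illustration.

```python
import random
from collections import defaultdict

def train_bigram_model(text: str) -> dict:
    """Learn from training data: map each word to the words seen to follow it."""
    words = text.split()
    model = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model: dict, start: str, length: int = 8, seed: int = 0) -> str:
    """Create new content: sample a word sequence from the learned transitions."""
    rng = random.Random(seed)  # fixed seed keeps the sample reproducible
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break  # no known continuation for this word
        out.append(rng.choice(followers))
    return " ".join(out)

# Invented, slogan-like training corpus for illustration only.
corpus = "taste the feeling taste the magic share the magic share a coke"
model = train_bigram_model(corpus)
print(generate(model, "taste"))  # prints a new phrase sampled from the corpus
```

The same two-step shape, fit a model to existing data, then sample novel output from it, underlies the image, text and music generators discussed in this section, just at vastly larger scale.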
Many companies have already developed or plan to develop their own generative AI tools. For example, in April 2023, Meta announced plans to commercialize its proprietary generative AI by December 2023. Meta’s AI technology can instantly create sentences and graphics, improving an ad’s effectiveness in part by guiding advertisers toward the right tools, such as AI-powered ad tools that create image backgrounds, variations of written text and alternative images within an advertisement to suit different audiences. Generative AI technology will also be used in the development of the metaverse, allowing users to create 3D worlds with greater ease and accessibility.
Google has plans to introduce generative AI into its advertising business to create novel ads based on materials produced by human marketers. Advertisers can supply “creative” content, such as imagery, video and text relating to a particular campaign, which the AI will then “remix” to generate ads based on the audience it aims to reach, as well as other goals such as sale targets.
On the entertainment side, Roblox is bringing AI and the metaverse together by building generative AI tools to allow for easier creation in its virtual ecosystem. These offerings will include voice and text-based bots specially customized for developing game-ready assets. TikTok has similarly come out with a new generative AI tool that creates avatars.
Generative AI has opened doors to new advertising opportunities for content creators. For instance, a trend emerged in which AI creators reimagined how movies or celebrities might look in director Wes Anderson’s signature palette, including Harry Potter characters, the Pope and star athletes like LeBron James and Aaron Rodgers. The NFL even used generative AI for the NFL Draft, featuring artworks generated by Midjourney, a popular text-to-image AI tool, that depicted each NFL team’s home city in varied and complex detail paying tribute to it. Drafted players were then photographed against the backdrop of the image corresponding to whichever team selected them.
Legal and ethical issues and risks associated with using ChatGPT and generative AI for advertising
Like any new technology, using ChatGPT and generative AI tools for advertising and marketing purposes raises several legal considerations and is not without risk. From a contracting perspective, many AI platforms do not provide representations and warranties that AI-generated materials will not infringe the rights of others, and users of AI platforms may not be able to obtain exclusive ownership of AI-generated content or transfer such ownership rights to their clients. Some AI platforms also may not permit the commercial use of content generated by the platform.
The use of these AI tools can also raise copyright concerns, as discussed further in the IP section. For instance, the U.S. Copyright Office recently issued registration guidance for works containing material generated by AI, and depending on the level of human involvement, such works may not be protected under U.S. copyright laws. Relatedly, AI-generated content may infringe the copyright in a pre-existing work (e.g., if the content is substantially similar to protectable expression in a copyright-protected work used to train the AI platform).
From a consumer protection perspective, these AI tools can also spread misinformation and falsehoods, which may ultimately lead to claims of false advertising, unsubstantiated advertising claims or unfair and deceptive practices brought by regulators such as the FTC. This includes placing ads within a generative AI feature, which is similar to placing ads in search results; creating deepfake videos and voice clones to facilitate imposter scams, extortion and financial fraud; using chatbots to generate spear-phishing emails, fake websites, fake posts, fake profiles and fake consumer reviews; and helping to create malware, ransomware and prompt injection attacks. For example, generative AI may be used to create political attack ads and commercials to influence the outcome of elections. Recently, a canned beans commercial was AI-generated by a freelancer using Midjourney and D-ID without the involvement of the canned beans brand. Another creator posted an AI-generated ad, including voiceover, video and images, for a made-up pizzeria called Pepperoni Hug Spot to the “Midjourney” subreddit, a community of AI tinkerers.
Accordingly, the FTC has warned companies that misleading consumers via doppelgängers, such as fake dating profiles, phony followers, deepfakes or chatbots, could result – and, in fact, has resulted – in FTC enforcement actions, such as one against Devumi, a social media marketing resource company, and a second against Sunday Riley Modern Skincare.
The FTC alleged that Devumi helped actors, athletes, motivational speakers and other influencers increase their credibility on social media by purchasing fake followers. The FTC also alleged that Devumi sold bogus subscribers to the operators of YouTube channels and fake views for people who posted individual videos – for example, musicians who wanted to inflate the popularity of their songs. The complaint cites over 4,000 sales of fake YouTube subscribers, over 32,000 sales of fake YouTube views and over 800 orders of fake LinkedIn followers sold to marketing and PR firms, financial services and investment companies and others in the business world. The FTC alleges that by selling fake indicators of social media influence, the defendants provided customers with the means and instrumentalities to commit deceptive acts or practices – which itself violates the FTC Act. The proposed settlement bans the Devumi defendants from selling or assisting others in selling social media influence. The order also prohibits them from misrepresenting (or assisting others to misrepresent) the social media influence of any person or entity or in any review or endorsement. The order also imposes a partially suspended $2.5 million judgment against CEO German Calas, Jr. – the amount he was allegedly paid by Devumi or its parent company.
Houston-based Sunday Riley Modern Skincare sells skin creams and treatments at Sephora and through Sephora’s website, which allows consumers to post product reviews. According to the FTC, for a period of almost two years, managers and employees at Sunday Riley Skincare wrote reviews of their company’s branded products using fake accounts they created to hide their identities. The alleged practices went straight to the top: company president Sunday Riley instructed employees to create fake accounts using a VPN to hide their identities and to dislike negative reviews to get them removed.
The complaint charges that Sunday Riley and her company violated the FTC Act by falsely representing that their fake reviews reflected the opinions of ordinary users of the products. The FTC says they also deceptively failed to disclose that the reviews were written by Ms. Riley or her employees. In addition to prohibiting them from misrepresenting the status of any endorser or reviewer, the proposed order requires them to clearly disclose any unexpected material connection between the company and anyone reviewing a product.
The FTC has also warned businesses to avoid using AI tools that have biased or discriminatory impacts and, when touting AI tools in advertising, to avoid exaggerating or overpromising what the AI can deliver.
With respect to influencer marketing, influencers are using AI to help create their content, which can result in deceptive or misleading advertising. For example, influencers may use AI to enhance their features when promoting beauty products in a way that does not reflect the actual results of the product. The possibilities presented for influencers when using AI are seemingly endless. The technology can allow them to produce content without even using the product, wearing the item or attending the event. Marketers will need to add parameters regarding AI and similar technologies in their influencer agreements.
There are also privacy concerns with using these AI tools, as discussed further in the Data protection and privacy section. The algorithms behind these AI tools need data, whether from the brand, advertiser, agency or consumer, and the tools may use personal images and videos to create branded content. The transparency, storage and sharing practices of AI platforms are not fully developed, so companies should consider the risks before incorporating them into their content creation strategies.
Relatedly, traditional invasion of privacy and intellectual property issues, such as the right of publicity, may become major issues with AI. For example, a viral photo of the Pope wearing a puffer coat circulated across social media channels. Although many news outlets and bloggers caught on that the photo was AI-generated and many found the photo to be peculiar but chic, similar situations may give rise to claims that a person’s name or other indicia of personal identity have been misappropriated for a commercial benefit.
On the talent side, the use of these AI tools raises union concerns. Signatories may not avoid their contractual obligations to performers who are members of the SAG-AFTRA union by using AI to create a digital double of a performer’s image, voice and other characteristics for use in a commercial without payment to the performer. The union acknowledges that AI is here to stay but insists that producers operating under each of its collective bargaining agreements (e.g., motion pictures and television) ensure that such technology is not used to avoid paying performers.