Legal issues of AI in the entertainment and media sector part 2
Introduction
Businesses across all industries increasingly use AI technologies for profiling or automated decision-making. AI-powered age verification tools are also emerging, offering a new approach to meeting age verification requirements. This article examines these two developments in relation to data protection and data privacy in the age of AI.
Decision time: The rise of AI use in automated decisions
Profiling and automated decision-making are distinct concepts whose definitions vary by jurisdiction, but they are not mutually exclusive.
Profiling generally refers to any form of automated processing of personal information to evaluate, analyze or predict certain aspects of an individual’s economic situation, health, personal preferences, interests, reliability, behavior, location or movements. For instance, a streaming company may use data about the content a subscriber views to recommend other content the subscriber may want to watch or listen to in the future.
Automated decision-making generally refers to the process of making a decision by automated means without any human involvement, which may have a legal or otherwise similarly significant effect. These decisions can be based on factual data as well as on digitally created profiles or inferred data. Automated decision-making often involves profiling, but it does not have to. For instance, a company may use automated decision-making tools to run credit checks on its customers or to identify a user’s age, gender or other identifiable characteristics for other legally permissible business purposes (e.g., to moderate objectionable content or to advertise products based on a consumer’s demographics).
Profiling and automated decision-making can occur in a variety of situations, ranging from financial loans and social and health care to employment. In the entertainment industry, for example, AI systems are regularly used in a variety of contexts, such as profiling for advertising, as discussed in the AI and advertising section, and using facial recognition technology to control access to events. For instance, major sporting events around the world, such as the FIFA World Cup and the Olympics, use facial recognition technology to monitor fans for safety purposes. During the 2022 FIFA World Cup in Qatar, 15,000 CCTV cameras were connected to facial recognition systems to monitor threats ranging from reckless football fans to terrorism. Other companies are also exploring and implementing facial recognition technologies that allow attendees to make payments and check in to events.
Given that AI depends on huge data sets that include personal information, and sometimes even sensitive personal information, lawmakers and regulators around the world have addressed the use of AI for profiling and automated decision-making under their jurisdictions’ privacy laws. These laws generally seek to ensure that businesses use AI in a responsible manner, especially when it comes to an individual’s personal information, and that they honor an individual’s right not to be subject to profiling or automated decision-making. Such laws include the EU General Data Protection Regulation (GDPR), California Privacy Rights Act (CPRA), Colorado Privacy Act (CPA), Virginia Consumer Data Protection Act (VCDPA) and Connecticut Data Privacy Act (CTDPA).
Other jurisdictions have taken a slightly different approach. For instance, the Office of the Privacy Commissioner for Personal Data (the Privacy Commissioner) in Hong Kong does not regulate automated decision-making, but in August 2021 it issued guidance on the ethical development and use of AI, intended to facilitate the healthy development and use of AI in Hong Kong and to assist organizations in complying with the provisions of the Personal Data (Privacy) Ordinance when developing and using AI, including for automated decision-making.
Regardless of the regulatory or legal approach, (a) maintaining transparency, (b) honoring individual rights, (c) determining the level of human intervention and (d) conducting risk/impact assessments are important considerations when using AI for purposes of automated decision-making and profiling. Companies will also need to consider the challenges around the explainability of the decision-making process for AI systems, especially as it relates to the expected impact and potential biases.
A risk to minors or the future of age verification?
The GDPR and state and federal laws in the United States recognize that children warrant specific protection when it comes to the processing of their personal data. Existing and emerging laws in Europe, the United States and Asia are driving the adoption of age verification tools to protect children’s privacy online. AI technologies offer a novel approach to meeting this expectation.
Although age verification is not (yet) an express legal requirement under global laws, understanding a user’s age is implicitly necessary to meet the growing number of legal requirements protecting minors (imposed by the UK and California Age-Appropriate Design Codes and the Irish Fundamentals, to name just a few) and to provide age-appropriate experiences to children. These Design Codes impose requirements on websites that are “likely to be accessed by children” under the age of 18. This stands in stark contrast to the existing standard under the Children’s Online Privacy Protection Act, which applies to websites that are “directed at children” under the age of 13 or that have actual knowledge that a child is using the site or service. In the 2023 legislative session, several U.S. states considered and enacted laws that require social media companies to verify the age of users and that prohibit or restrict account creation by users who cannot verify that they are over the age of 18.
This emerging legislative trend has implications for online gaming, social media, streaming services and other businesses that produce content popular among teenage demographics. AI developers themselves must also consider the extent to which children use their services. In its recent statement suspending ChatGPT, the Italian Data Protection Authority noted that the service had no age verification system in place and that children could be exposed to responses that are inappropriate for their age and awareness. As development of consumer-facing AI products continues, this is yet another regulatory hurdle AI developers will need to clear.
Although many online services, especially those that produce content for teenage audiences, have historically fallen outside the scope of children’s privacy laws, given the legislative and regulatory activity in this space, reliance on a simple self-declaration of age may not, by itself, be sufficient to verify age. Services are being tasked with finding technology-driven, innovative and improved ways to determine a user’s age online, and AI is beginning to emerge as a leading tool in this space. Take, for example, Yoti’s age estimation tool, which uses facial analysis technology to estimate a user’s age, or AI models that analyze how a user interacts with a service and predict the user’s age from those interactions.
Although these measures are designed to enhance privacy protections for children’s data, it is crucial not to overlook the profound challenges that AI poses to children’s privacy, as discussed here and elsewhere in this chapter. AI requires an immense amount of data to train models, learn, improve and stay current, incentivizing mass collection of data in tension with the data minimization principle. Age verification tools are not immune to the privacy risks common to other AI technologies; when deployed to determine whether a user is a child, however, those risks are amplified because of the special protections afforded to children under privacy laws around the globe.
AI tools may produce inaccurate outputs: if a model misidentifies an individual, that individual’s user experience may degrade because they may be unable to access certain parts of a service or may improperly receive child-specific safeguards. In addition to the potential for inaccurate outputs, transparency is another significant consideration when it comes to AI and children. The challenge lies in explaining the complexity of AI systems in child-friendly language and in a transparent manner, without inadvertently encouraging users to misrepresent their age.
Although the use of AI-powered age verification tools may not fully comply with data protection laws and applicable privacy rights, it remains uncertain whether regulators will eventually mandate the use of AI for age verification purposes.