Entertainment and Media Guide to AI

Labor and employment

AI technology has officially “arrived” in the workplace.

Those in the entertainment and media industry, and U.S. employers generally, will not only want but also need to stay abreast of the latest developments in this vibrant, evolving space. Below, we outline a few key areas where AI is already having a major workplace-related impact, along with some thoughts on how best to navigate these developments.

AI policies, practices and procedures

AI in the workplace will likely require most, if not all, U.S. employers, regardless of industry, to create new policies, practices and procedures and adapt existing ones. Employers should start with broad language and training that informs employees, as a threshold matter, precisely what AI is and how it might be introduced and used in the workplace. For example, some employers might allow Human Resources professionals to use AI for decisions related to hiring, reducing costs, eliminating bias and enhancing productivity, while discouraging its use by employees whose work product depends on independent thinking and analysis.

It is imperative, therefore, that employers clearly define acceptable parameters for the use of AI within the workplace and adopt written measures that reflect them. Equally important will be employers' ability to pivot and amend those policies as AI develops and changes over the coming months and years; flexibility on this front will be paramount.

State and local regulations

Recently, there has been a substantial groundswell of legislation concerning the use of AI in the workplace. As with most employment legislation, one can look to a small group of jurisdictions that typically lead the way, including California, New York City and State, Massachusetts, Illinois and a few others. On the specific issue of limiting the use of AI in hiring, New York City has, to date, led the charge.

Indeed, in late 2021, Big Apple lawmakers passed a bill that will require employers to provide prior notice to job applicants regarding the use of AI during the hiring process. The law will also require businesses to perform annual third-party audits of their AI tools to ensure that their use is not resulting in bias. Employers who fail to conduct such audits will be barred from using AI to screen job candidates. The law is scheduled to take effect on July 5, 2023.

California lawmakers recently introduced Assembly Bill 331, a bill that would require AI tool developers and users to submit annual impact assessments to the California Civil Rights Department by 2025. The assessments would outline the types of AI being used, how they are used, what data is collected, the safeguards against illegal discrimination (which would need to be put into place if not already present), potential adverse impacts and how the tool was assessed. Individuals affected by decisions made solely on the basis of AI would have the right to opt out where technically feasible.

Undoubtedly, this is just the start of the legislative trend toward regulating the use of AI in the workplace. Employers should expect other states and cities to follow suit.

EEOC AI guidance

State and city legislatures are not the only bodies paying attention to AI. The U.S. Equal Employment Opportunity Commission (EEOC), for instance, recently announced its intention to increase oversight and scrutiny of AI tools used to screen and hire workers.

In January 2023, the EEOC published in the Federal Register a draft Strategic Enforcement Plan (SEP) outlining its priorities in the coming years. One priority is regulating the use of AI and machine learning systems in employment decisions. The EEOC points out that employers are increasingly using automated systems to target job advertisements, recruit applicants and make or assist in hiring decisions. Such screening tools, the EEOC warns, could intentionally or inadvertently exclude or adversely impact protected groups.

Following up on this priority, on May 18, 2023, the EEOC released a new technical assistance document titled “Assessing Adverse Impact in Software, Algorithms, and Artificial Intelligence Used in Employment Selection Procedures Under Title VII of the Civil Rights Act of 1964.” It explains the meaning of relevant terms, offers general background on Title VII and provides answers to questions employers may have.

Based on the above, the EEOC is clearly signaling that employers are responsible for ensuring their AI tools do not discriminate based on protected characteristics. Employers should therefore understand how their algorithms function to ensure they are not violating discrimination laws or regulations. This is particularly important because there is little precedent for enforcement actions in this space, making it difficult to know what practices are acceptable.

How should U.S. businesses respond?

So, how can U.S. employers best address the assimilation of AI into the workplace and, in particular, the hiring process?

While there is no one-size-fits-all approach, the most prudent plan of attack with respect to hiring is typically a holistic one. This means that, while AI can serve as an important tool for businesses, it should not necessarily be the only tool when making hiring decisions.

That said, AI tools have been shown to be effective in identifying key talent and, when coupled with the potential diversity, equity and inclusion (DEI) benefits, can be an extremely important part of growing a 21st-century business. Indeed, the goal of using AI in the employment screening and hiring process is to ensure that businesses select the best person for the job, regardless of protected class status or conscious or unconscious bias. Used and audited properly, automated employment selection technology can meaningfully support organizational DEI efforts.

Employers can also supplement existing privacy and personnel policies, practices and procedures as long as they realize that AI poses novel questions and concerns that require highly adaptive and forward-thinking decisions from the organization. In addition, U.S. businesses will want to monitor legislative and regulatory developments on this front, as we are increasingly seeing lawmakers and government administrators scrutinize the use of AI.

We expect AI to affect areas of the workplace where its impact might not have been anticipated – for instance, the collective bargaining process. While the National Labor Relations Act requires employers to negotiate with unions over certain terms and conditions of employment, such as wages, scheduling and grievance procedures, there is scant precedent on when the implementation of AI falls under that mandatory umbrella of bargaining.

The National Labor Relations Board (NLRB) and federal courts evaluating NLRB determinations have generally held that employers possess the managerial prerogative to implement technological changes in their businesses without negotiating with unions, although such analyses are necessarily fact-intensive. U.S. employers should expect the NLRB and the courts to address the perhaps-analogous issue of AI in the near term.

In short, AI presents employers with a delicate dance: they risk falling behind competitors who use AI to their advantage while also stepping on the potential liability landmines inherent in intangible and novel technology. Employers should continue to monitor the AI space and exercise caution when it comes to their own use of such tools in the workplace.