The United States currently lacks comprehensive federal legislation on AI. However, various administrative agencies have begun issuing guidance on legal issues surrounding AI.
Such guidance includes the Federal Trade Commission’s Business Blog posts “Keep your AI claims in check” (Feb. 27, 2023)[1] and “Chatbots, deepfakes, and voice clones: AI deception for sale” (Mar. 20, 2023),[2] the National Institute of Standards and Technology’s “Artificial Intelligence Risk Management Framework (AI RMF) 1.0” (Jan. 2023),[3] and the U.S. Copyright Office’s statement of policy, “Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence” (Mar. 16, 2023).[4] Certain regulations have also been enacted at the state and local levels to address the impacts of AI in various fields. In employment law, for example, New York City’s Local Law 144[5] requires employers to conduct bias audits of AI-enabled tools used for employment decisions.
In October 2023, President Biden issued a landmark Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. The Executive Order (E.O.) establishes new standards for AI safety and security, advances equity and civil rights, and protects consumers and workers, while also aiming to promote innovation and competition and to advance American leadership around the world.
The U.S. government thus appears to be taking the risks posed by AI systems into account, establishing clear rules and control measures to keep AI under control while opening new avenues for development.
The E.O. directs over 50 federal entities to engage in more than 100 specific actions to implement the guidance set forth across eight overarching policy areas, including:
- Requiring that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government, in accordance with the Defense Production Act.
- Developing standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy.
- Protecting against the risks of using AI to engineer dangerous biological materials by developing strong new standards for biological synthesis screening.
- Protecting Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content.
- Evaluating how agencies collect and use commercially available information, including information they procure from data brokers, and strengthening privacy guidance for federal agencies to account for AI risks.
- Addressing algorithmic discrimination through training, technical assistance, and coordination between the Department of Justice and federal civil rights offices on best practices for investigating and prosecuting civil rights violations related to AI.
While the Senate Judiciary Subcommittee on Privacy, Technology, and the Law hearing on “Oversight of A.I.: Rules for Artificial Intelligence” also reflects an effort by the U.S. government to understand and regulate the sector, there is no other federal legislation on AI as of the date of this article.
[1] ftc.gov
[2] ftc.gov
[3] nvlpubs.nist.gov
[4] federalregister.gov
[5] N.Y. City Council Report, “Automated employment decision tools” (Local Law 144, enacted Dec. 11, 2021)
- Executive Order (E.O.) 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
- U.S. agencies are beginning to develop guidance on legal issues around AI
- In October 2023, the Biden administration issued an E.O. that set new standards for AI safety and security and directed federal entities to take action to ensure responsible AI development
- While this shows an effort by the U.S. government to understand and regulate the sector, there is currently no other federal legislation on AI