The legal significance of President Joe Biden's "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" is immense. It marks a pivotal moment in the governance of AI technologies in the United States, establishing an initial framework for regulating the development and deployment of AI tools. The order encompasses a wide range of areas, from safety and security to privacy, equity, and civil rights.
While most federal agencies will have up to one year to return their recommendations and implement new guidelines, the Department of Commerce will begin imposing reporting requirements on January 28, 2024, on companies developing or intending to develop a "dual-use" foundation model, as well as on companies that acquire, develop, or possess a large-scale computing cluster.
As set forth below, the executive order has the potential to set a new bar in terms of compliance expectations in industries such as software development, housing, health care, and education.
Scope of Impact
The executive order applies to AI systems that use machine or deep learning, generative or natural language processing, or any other form of specialized neural network, as well as those that can directly impact an individual or environment by rendering decisions or indirectly influence an outcome by providing predictions or recommendations.
This broad scope means companies should undertake a careful assessment of how AI is used in their business and prepare for additional requirements from various regulatory agencies. These reviews should scrutinize algorithms for bias, data sets for quality, and the overall system for vulnerabilities. As part of this process, companies will need to attempt to exploit their own AI systems to identify weaknesses before public release. Companies that build or use AI tools should monitor the applicable reporting standards for the development and use of those tools. Biotech and energy companies, for example, should be aware of the Department of Energy's upcoming tests of AI risk in chemical, biological, and nuclear research and commerce.
New Standards for AI Safety and Security
One of the key legal implications of the executive order is the establishment of standards for AI safety and security. The Department of Commerce's reporting requirements, set to take effect by January 28, 2024, will likely be enforced in the areas of federal procurement of privately developed AI tools and of products that utilize AI in their development.
The order imposes a legal obligation on these companies to be transparent about the potential risks associated with their technologies, such as harmful outputs or a tool being misused for malicious acts following a breach. Under these disclosure requirements, developers of powerful AI systems must share safety test results and other critical information with the government.
Furthermore, the National Institute of Standards and Technology (NIST) and the Department of Commerce are tasked with developing, by July 16, 2024, standards, tools, and tests to ensure the safety, security, and trustworthiness of AI systems. By setting rigorous standards and conducting "red-team" testing to find flaws and vulnerabilities, these agencies will set forth frameworks for assessing AI systems' national and individual security risks before public release. Though designed to protect the public, those standards may also set a de facto legal standard for AI developer accountability.
Impact on Collection and Storage of Consumer Data
The executive order recognizes the impacts AI could have on consumer privacy. The order highlights the need to establish a comprehensive legal framework for protecting individuals' privacy in an era where AI can extract, identify, and exploit personal data more easily and efficiently.
Additionally, the order directs federal support for the development and use of privacy-enhancing technologies (PETs), which can involve advanced AI technologies. Development of PETs has been of particular focus for large tech companies, as well as government agencies like NIST. Further advancements in PETs could reduce the compliance burden associated with an omnibus privacy law. The principle of data minimization is prominent throughout the order, which could indicate a need for companies to change how data is collected, stored, and processed.
Impact on Housing, Health Care, and Education
The executive order acknowledges the potential for AI to engage in discrimination and bias in various sectors, such as health care and housing. The order calls for clear guidelines that enforce compliance with all federal housing discrimination laws for landlords, underwriters, appraisers, and any other participant in a real estate-related transaction that involves AI, as well as guidelines on how to address the use of AI in tenant screening systems. The federal civil rights offices are tasked with addressing algorithmic discrimination through training and technical assistance, underscoring that existing laws and regulations will be used as the basis for protecting individuals' rights and safety in the face of AI-related discrimination. They will also release suggestions for regulations surrounding the use of AI in hiring and the use of AI-derived data to assess workers.
While the executive order sets out to include an assessment of fairness and harmful bias in AI, it does not provide guidance for assessing how an AI system may be deemed harmfully biased or how to promote fairness in the development and use of AI. The order also does not provide guidance on how an agency will incorporate those two elements into any rulemaking. NIST, for example, has previously noted that standards of fairness and the mitigation of harmful bias are hard to define and even harder to enforce.
The order emphasizes the responsible use of AI in health care, directing the Department of Health and Human Services to establish a safety program to receive reports of, and remedy, harms involving AI in the industry, along with regulations surrounding research, drug development, health care financing, and public health projects. The department has also been directed to prioritize funding in minority and underserved communities, with a mandate to actively mitigate and rectify bias in the health care system.
In the education sector, the order recognizes the potential of AI to transform education and calls for resources to support educators deploying AI-enabled educational tools, as well as investment in AI training and funding at all education levels.
Impact on Antitrust Activity, Immigration, and Patents
At the crux of the order is a call to promote a fair, open, and competitive AI ecosystem, emphasizing the importance of maintaining competition and preventing anti-competitive practices in the industry. The order encourages the Federal Trade Commission to exercise its legal authority to ensure a level playing field for small developers and entrepreneurs by countering anti-competitive behavior and rectifying consumer harm. Separately, the Department of Transportation will direct the Advanced Research Projects Agency - Infrastructure (ARPA-I) to prioritize funding opportunities for AI transportation projects.
The Secretaries of State and Homeland Security have been directed to modernize the J-1 and H-1B visa programs to attract foreign AI talent to the U.S. and to provide incentives for companies to recruit foreign AI experts. For the domestic workforce, the secretaries will create new and enhanced training programs for scientists, companies, and universities that offer programs in AI.
The U.S. Patent and Trademark Office (USPTO) will be among the agencies required to appoint a chief AI officer and create an internal AI governance board. One of the first tasks for these officials at the USPTO will be to establish guidelines for patent examiners by February 27, 2024, creating new assessments and requirements around the issues of inventorship, the use of AI in patent filings, and, potentially, the patent eligibility of AI itself.
In conclusion, President Biden's "Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" carries significant legal implications across various domains and is likely to produce additional regulatory requirements for both users and creators of AI systems. It establishes legal frameworks and standards with the stated goals of promoting responsible development and deployment of AI, safeguarding the interests and rights of individuals, enforcing fairness and equity, and fostering innovation and competition in the AI space. It also underscores the pressing need for legal guardrails that holistically consider the potential risks and challenges of AI without hindering innovation. It is a reminder that in the AI era, the legal terrain remains complex and in flux.
Given that reality, companies should proactively review how any use of AI may impact their compliance needs based on the executive order, as well as additional laws such as the Federal Trade Commission Act, the Health Insurance Portability and Accountability Act (HIPAA), and the Children's Online Privacy Protection Act (COPPA), as a few examples. It would benefit companies to favor transparency in their policies, providing users with clear, understandable information about how AI tools make decisions that affect them.
For more information, please contact us or your regular Parker Poe contact. You can also subscribe to our latest alerts and insights here.
Law clerk Hunter Snowden also contributed to this alert.