What Is at Stake in the Regulation of Artificial Intelligence?

    Client Alerts
  • January 17, 2024

Artificial intelligence (AI) has exploded onto the scene — both the technology and the scramble for laws and regulations to control it. We are often asked what's happening on the AI regulatory front. Here is what we know so far.

As with the General Data Protection Regulation (GDPR) for data privacy, the European Union has jumped out in front on AI with its EU AI Act. In the U.S., states are passing their own data privacy laws that apply to AI systems. Find more detail below on the current landscape, with apologies for the acronyms:

  • First, as to laws regulating personal information, we have GDPR, which took effect in 2018; the California Consumer Privacy Act (CCPA), which took effect in 2020 (and was expanded by the California Privacy Rights Act, or CPRA, in 2023); and 14 more state data privacy laws rolling out over the next three years. (That count includes what the data privacy community considers "comprehensive" laws as well as laws focused on specific types of data, such as consumer health or children's data.)
  • California and Colorado have promulgated the only state regulations based on their laws so far, but both sets are very prescriptive and enforcement has already begun in earnest in California.
  • Six of the new state laws apply to nonprofits and 10 require data protection assessments for high-risk data processing such as the use of data for targeted advertising and profiling.
  • California has introduced new regulations that require annual cyber audits and broad risk assessments that may have to be submitted annually to its dedicated data privacy regulator (the California Privacy Protection Agency). This will create a road map for audits.
  • Speaking of road maps, several of the new state laws provide an appeal process for any denial of a consumer’s data deletion request, and if that appeal is denied, the company must provide an electronic complaint form linked directly to the state attorney general (talk about audit triggers).
  • We are likely to get another 10 state data privacy laws in 2024. New Jersey has already passed one and New Hampshire is right behind. The good news is they are generally tracking each other with differences primarily around opt-in (i.e., prior consent) vs. opt-out for certain data practices, new rules for universal opt-out mechanisms, and children's social media protections.
  • Most of these new state laws authorize the states to regulate automated data processing and the use of AI algorithms. Colorado’s regulations cover these practices lightly, but California’s risk assessment rule will cover AI as broadly and prescriptively as the EU AI Act. 
  • So the states are moving forward on AI regulation without waiting for Congress to act. (For what it's worth, Congress has been unable to agree on a federal data privacy act for over five years.)
  • From the business community's vantage point, while a wave of state AI regulation is not desirable, at least the states are acting under state data privacy laws that, for the most part, do not provide a private cause of action for violations. Regulatory fines and penalties pose enough risk on their own, and it appears that negligence claims will still exist for harm suffered from exposure to an AI system despite these private cause of action exemptions.
  • These damages lawsuits are a key concern, as every proposed federal legislative framework, along with the president’s comments on his own executive order, speaks of ensuring the availability of damage claims for harms caused by AI. Yet if the federal government acts too broadly on AI (by either law or regulation), it could threaten innovation with frivolous lawsuits that override the litigation protections achieved in hard-fought battles over these laws in state legislatures.
  • Let’s turn now to the EU AI Act, soon to be the first AI law in the world. While it will likely have a two-year transition period, that time will be needed given the scope of this new law. (While the European Parliament, European Commission, and the Council of the European Union have agreed on the text of the law, it still needs to be formally adopted by the parliament and council.)
  • First, it would apply to any AI system released into the European market, whether by a manufacturer or by a licensee. As companies use open source AI tools to create their own AI tools, they would need to (1) pass down proof of the manufacturer’s compliance with the EU AI Act, (2) verify their own compliance with that law, and (3) provide that proof to their own AI system clients before deployment of the system.
  • Proof of compliance will be a slow, complicated, and expensive process. The pending EU AI Act provides for both self-certification of compliance and independent third-party validation of compliance. If the requirement for third-party validation prevails, companies will be faced with hiring a new generation of consultants (which even the EU admits does not currently exist) to apply the broad mathematical and statistical compliance requirements for AI.
  • These requirements for “high risk” systems include proof that (1) the model was trained on data the developer had the right to use; (2) the data was high quality and statistically and consistently produces accurate, reliable outputs; (3) the logic of the algorithms is fully explained; (4) the model is not subject to data drift when outputs are fed back into it, leaving the operation and validity of the model unaltered; (5) there are avenues for human intervention in those outputs; and (6) the developer monitors compliance through the entire life cycle of the AI system. Some models must be deposited in an EU-wide database for regulatory tracking.
  • While the current noise is about generative AI systems, the remedy in the EU AI Act and in other proposals is merely disclosure that a machine is involved and that the output therefore cannot be fully trusted.
  • It is easy to see how these complex regulatory regimes could frustrate AI innovation and deployment in ways that some business leaders say will be hard to overcome. And the regimes may serve as the basis for damages actions even where the regulations cannot be sued on directly.
  • So some worry that the broad potential economic and societal benefits of AI are at risk in the stampede to regulate it. In listening to California Privacy Protection Agency meetings on the new risk assessment regulations, we noted a stark absence of discussion about protecting innovation and avoiding the opening of floodgates to litigation. These are key issues to watch as they unfold in 2024.
  • For more information, please contact me or your regular Parker Poe contact. You can also subscribe to our latest alerts and insights here.