What the European Union’s AI Act achieved in speed, it lacked in understanding: technological advancements left lawmakers scrambling to make last-minute changes to the framework. As calls to regulate AI grow worldwide, the EU’s difficulties should illustrate the problem with preemptively legislating a developing technology.

The EU AI Act is a regulatory framework for AI. Risk categories are central to the act: unacceptable, high, and limited risk. The final deal also added a specific transparency requirement that people be made aware when the content they view is created by AI. The original version of the act did very little to regulate general-purpose AI platforms like ChatGPT, but recent advances in generative AI led lawmakers to amend it to include stricter transparency requirements for foundation models.

What changed was an unprecedented leap forward in machine learning that allowed existing technologies to process massive amounts of data, producing human-sounding language and images from a text prompt. Generative AI existed beforehand, but 2022 saw significant advancements. Those advancements prompted changes to the act and disagreements within the EU over how to regulate AI safely while still capturing the economic benefits it could create.

The changes to the EU AI Act and the challenges since its proposal get to the heart of how governments should approach AI regulation. Significant changes had to be made to the law after it was introduced, and because it won’t take effect for over a year, those changes may themselves be obsolete by then.

As the EU’s experience shows, technology can advance in ways lawmakers have not accounted for. The framework proposed by the EU is an example of ex-ante regulation: regulation intended to prevent a possible future issue. The problem with this approach is that it rests not on solid data but on politicians’ and regulators’ assumptions about what problems might arise in the future.

In contrast, ex-post regulation occurs after the fact, drawing on evidence to prevent the recurrence of harms that have actually happened rather than ones that merely might. Whenever possible, ex-post regulation is preferable for technology. Rules should be narrow and tailored to specific harms; otherwise, they risk dampening innovation and imposing requirements that do not apply in real-world circumstances.

Ex-ante regulation is especially difficult for AI because there is no agreed-upon definition of it. AI does not refer to one type of program but to a broad range of programs that work differently. That adds another layer of difficulty when trying to predict and avoid potential outcomes: regulators are trying to predict not one technology but many.

ChatGPT can create human-like text and responses from natural language prompts, which has raised entirely new areas of concern, from AI being used to mislead people into thinking its output was created by a person to fears that AI will begin replacing humans in creative fields. This revolution in generative AI differed substantially from the models lawmakers originally envisioned, leading them to amend the law to include transparency requirements for AI-generated content so users know what is created by AI. The legislators drafting the law did not predict this shift in concerns, showing how ex-ante regulations can fail to account for unexpected changes.

In its race to create the first comprehensive AI regulatory framework, the European Union demonstrated the flaws inherent in preemptive regulation. Because AI is advancing so quickly, waiting to regulate demonstrated harms is a much better approach.

Trey Price is a technology policy analyst for the American Consumer Institute, a nonprofit education and research organization. For more information, visit https://www.theamericanconsumer.org/ or follow us on X @ConsumerPal.
