On April 21, 2021, the European Union released its proposals to regulate artificial intelligence (AI) among its 27 member states. In announcing the proposals, the European Commission claimed that while AI can produce significant benefits for consumers, the technology “can also bring about new risks or negative consequences for individuals or the society.” While the European Union’s efforts to better regulate AI are admirable given the technology’s potential risks, particularly in the realm of data security, the EU’s proposals are heavy-handed measures that could stifle innovation and deny consumers access to groundbreaking products.

Given the lack of legislation or regulation governing AI in the U.S., there is a real risk that policymakers could use the EU’s proposals as a regulatory framework for the future. EU-copycat AI regulations in the U.S. would not only hurt American consumers but also inflict unnecessary harm on the big tech companies that are increasingly important to the modern American economy.

Under the EU’s proposals, AI technology will be prohibited if it deploys “subliminal techniques beyond a person’s consciousness” likely to cause “physical or psychological harm,” exploits vulnerabilities, or creates a “social score.” Also banned is the use of facial recognition technology, which is becoming increasingly mainstream in cell phone security. Companies that release AI technology that causes “physical or psychological harm” can be fined up to 30 million euros or 6% of worldwide revenue.

While these regulations would seem to provide substantial protections for consumers, their expansive nature and vague definitions could inflict unintended harm. Recommendation algorithms used by tech platforms such as YouTube or TikTok could be banned on the grounds that they subliminally direct users to specific content. Additionally, the prohibition on facial recognition technology could leave consumers’ cell phones, and the data contained on them, vulnerable.

Europe’s regulations would also create a category of high-risk AI technology whose systems and users would be heavily regulated. According to the EU, high-risk AI would include logic learning systems, systems used for “critical infrastructure such as transport, justice and democratic processes, law enforcement, border control, and essential private and public services,” and “CV screening software and exam-scoring algorithms.”

Once a system is categorized as high-risk, its manufacturer and users will be held to additional transparency and human oversight standards to prevent misuse. These standards could be particularly problematic, as they will undoubtedly raise the cost of doing business on the continent and prevent tech companies from bringing new and innovative products to market simply because those products might be classified as high-risk. Additionally, the transparency and human oversight requirements do not guarantee AI will be used ethically.

In the absence of significant federal regulations governing AI use in the United States, one major concern is that lawmakers and policymakers will follow Europe’s lead and pursue an overly heavy-handed approach to AI. Currently, the White House has instructed federal agencies to “avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth,” as Europe’s proposals would do. There is a real risk that lawmakers could abandon this light-touch regulatory approach, which has boosted business productivity, allowed autonomous cars onto American roads, and provided doctors with advanced diagnostic tools.

While not yet evident at the federal level, the impetus for further regulation of AI is clear at the state level. In 2021 alone, 16 states introduced legislation aimed at regulating AI. While no state has passed such legislation in 2021, the willingness of state legislatures to consider it suggests they will be at the forefront of regulating the use of AI in the coming year. State-level legislation could pose significant problems, as a patchwork of rules would impose unnecessary compliance costs on businesses seeking to operate in multiple states and disincentivize innovation.

With AI sure to play an increasingly important role in people’s lives in the coming years, the need for government regulation will undoubtedly grow. What legislators and policymakers must ensure, however, is that whatever form those regulations take, they do not stifle innovation or make it harder for AI systems to improve people’s lives, as Europe’s well-intentioned but heavy-handed approach does.

Put simply, the United States needs to chart its own path governing AI, not follow Europe’s lead.