The European Union (EU) has set out to regulate artificial intelligence (AI) more heavily across its 27 member states, including criteria that would outright ban certain forms of the technology. The criteria, listed under Title II of the EU’s Artificial Intelligence Act, prohibit AI that violates what EU policymakers consider to be fundamental rights, as defined by their own arbitrary criteria. The prohibited practices range from deploying “subliminal techniques” to operating “biometric identification systems.”
AI is a burgeoning field that has already generated substantial economic growth worldwide. Imposing legislation that restricts its use risks stifling research and innovation that would otherwise deliver significant consumer benefits.
The problem with the criteria laid out under Title II is that they lack precision, leaving them open to broad interpretation. In its current form, the proposed legislation is vague enough to invite abuse by lawmakers or bureaucrats seeking to punish tech giants for political gain. The U.S. currently has no federal regulation of AI, and it should avoid making the situation worse by adopting regulation similar to the EU’s.
The first criterion of Title II prohibits AI practices that “deploy subliminal techniques beyond a person’s consciousness to materially distort a person’s behavior […] [causing] the person or another person physical or psychological harm.” Besides being unclear about what constitutes a subliminal technique, the criterion does not explain how to determine whether ‘harm’ has been caused.
Take Instagram’s or Facebook’s recommendation algorithms, for example. These platforms use machine learning to study users’ habits and place customized content in users’ feeds. The wording of this criterion is so vague that a handful of users who have a bad experience with an automated recommendation could bring a case before the European Court of Justice. That could lead to a ban on the AI in question, depriving users who do enjoy their auto-generated feeds of a more tailored, and thus more enjoyable, content experience and removing value from a service they already receive for free.
The second criterion takes aim at AI’s perceived potential to be discriminatory. The EU’s proposal prohibits AI technology that could “exploit vulnerabilities of a specific group due to their age, physical or mental disability […] [causing] them or another person physical or psychological harm.”
This criterion is also vaguely worded, and it does not state how regulators will account for the demographics of a service’s users. If women use a particular AI service more than men do, how can it be determined whether any negative outcomes should be labeled “exploitative”?
Take the example of AI telemedicine services. A service that automates medical recommendations or treatments for people with physical or mental disabilities could be subject to a ban under this proposal. If a recommended treatment turns out to be a poor fit for a patient or produces negative health outcomes, the patient may be able to claim that the AI discriminated against them and that the incorrect recommendation harmed them by delaying access to more suitable treatments.
Technology companies may ultimately have to exclude certain services from EU users to comply with the new regulation. This would hurt European consumers and is certainly an outcome that U.S. lawmakers should avoid.
Under the new regulation, companies will also put less of their revenue into AI research, knowing that the resulting technology could be banned under such arbitrary rules. This deprives consumers of the benefits those innovations could deliver.
In the absence of U.S. regulation, it would be alarming if the EU’s proposals served as inspiration for the Biden Administration to implement similar policies in the U.S. While AI regulation is not yet part of publicly stated presidential policy, several state legislatures, including those of California, Illinois, Massachusetts, New York, and Washington, have begun the process of regulating AI. The growing movement to regulate at the state level may ultimately force Washington’s hand.
Because many tech innovators are U.S.-based, implementing similar criteria to ban AI here could cause lasting damage and stifle the future of AI. Tech companies will already be hurt unreasonably by the EU’s policies, which will add costs and discourage research and development. Having to comply with onerous regulations in two major markets would be debilitating, possibly halting major developments in AI technology entirely.