On April 21, 2021, the European Commission released its much-anticipated proposal for a regulation laying down harmonized rules on artificial intelligence. If implemented, Brussels’ proposal would govern artificial intelligence (AI) across 27 member states and for over 445 million people. In outlining the need for the Commission’s proposed AI regulations, Margrethe Vestager, Executive Vice-President for a Europe fit for the Digital Age, claimed that “by setting the standards, we can pave the way to ethical technology worldwide.”

While the proposals are driven by a desire to protect consumers from abuse, the onerous regulations could have far-reaching consequences for the medical field and how patients receive healthcare in the twenty-first century. Unfortunately, the regulatory environment created by the Commission’s proposals would deny patients access to groundbreaking technology that could simplify care, improve health outcomes, and predict diseases before they appear.

Given the profoundly harmful consequences of the Commission’s proposals for patients, lawmakers in Washington would be wise to chart their own course on AI regulation. Rather than following Europe’s lead, the Biden administration should create an open regulatory environment that allows patients and healthcare providers to reap the full benefits of AI.

The proposals were released at a time when investments in medical AI were skyrocketing. Recent estimates suggest that the global marketplace for medical AI is valued at $11 billion. In the U.S. alone, the value of AI in healthcare is estimated to stand at $5.9 billion and to grow to over $61 billion by 2027. These figures alone provide clear evidence that AI is becoming an increasingly important component of the modern economy and is driving significant innovation that will benefit patients.

Some recent examples of AI being successfully deployed in the medical field include an app that employs algorithms that can diagnose illnesses, technology that can test bloodwork for cancer, and programs that can help researchers quickly find drug candidates. These are just some of the groundbreaking medical uses for AI that are allowing healthcare providers to treat patients better and improve health outcomes.

Unfortunately, the EU’s proposals would likely prohibit many of these technologies and deny patients access to these new treatment methods. Under the EU’s proposals, AI technology that is likely to cause “physical or psychological harm” through “subliminal techniques beyond a person’s consciousness” would be prohibited. Additionally, AI that exploits people’s vulnerabilities would also be banned. Given the inherent risks associated with medical procedures, and given that AI predicts and diagnoses diseases in ways that could be construed as exploiting patients’ vulnerabilities, many medical uses for AI would be banned, denying patients access to advanced and innovative treatments.

Also banned under the EU’s proposals would be facial recognition technology, which is becoming increasingly mainstream in the medical field. Healthcare providers have been using facial recognition technology to quickly identify patients and detect signs of depression before they manifest.

The Commission’s regulations would also create a category of high-risk AI that would force users and manufacturers to comply with onerous regulations. AI systems that would be categorized as high risk are those that would “create a high risk to the health and safety or fundamental rights of natural persons.” Once categorized as high risk, AI manufacturers would have to comply with additional regulations covering “the quality of data sets used; technical documentation and record keeping; transparency and the provision of information to users; human oversight; and robustness, accuracy and cybersecurity.”

One of the major flaws in regulating AI, particularly medical AI, is that regulations cannot keep pace with technological advancements. As a result, regulators in Brussels have created a regulatory environment without fully understanding the technology’s potential benefits or uses. This approach risks creating an overly punitive environment that will prevent consumers and companies from experiencing the full benefits of AI.

These requirements would undoubtedly impose additional compliance costs on those who use and produce AI in the medical field. It is widely expected that most medical AI systems not prohibited outright by the EU would be classified as high-risk.

Under the Commission’s proposals, companies that release AI technology that causes “physical or psychological harm” could be fined up to 6% of worldwide revenue.

The significant potential fines for violating the EU’s rules, combined with the limits imposed on usage, will undoubtedly disincentivize companies from producing AI technology and prevent them from making the necessary investments in research and development that could transform how patients are treated. Stifled innovation in the medical field is particularly problematic because it creates avoidable negative health outcomes for current and future patients.

Given the onerous regulations established by the European Commission, it is readily apparent that it has failed to balance patient protections with the need to create a regulatory environment that allows new and innovative technologies to enter the marketplace and enhance patient welfare. Recognizing the consequences of Europe’s heavy-handed approach to regulating AI, it is essential that Congress, the White House, federal agencies, and states acknowledge the need for the United States to chart its own regulatory course, one that neither stifles innovation nor creates an overly punitive environment. In doing so, policymakers will balance the substantial benefits AI can provide to patients and healthcare providers with the need to ensure appropriate levels of protection for patients. Failure to do so would only result in patient suffering and lost innovation.