On January 23, President Trump signed an Executive Order aimed at ensuring the US remains a leader in artificial intelligence (AI) technologies. To further this objective, the Trump administration should prioritize creating regulatory clarity for AI, especially in healthcare. Establishing a clear and flexible regulatory framework would foster investment and innovation, driving the development of technologies that can improve patient outcomes and save lives. Amid the current competition between ChatGPT and DeepSeek, it is easy to lose sight of AI’s potential to revolutionize the healthcare industry. Stifling “old world” regulations from Europe, however, threaten that progress.
Critics allege that the United States is falling behind the European Union (EU) when it comes to regulating AI and that the US should ramp up its efforts. However, this approach is misguided. US regulators should not emulate Europe’s rigid regulations but instead treat AI, especially in healthcare, as the dynamic, evolving tool it is—with the potential to transform many aspects of the healthcare system.
AI can reduce costs and increase the efficiency of pharmaceutical research, drug discovery, design, and testing, as well as disease diagnosis. For example, AlphaFold3 has demonstrated great potential in predicting how proteins fold and interact, a task central to designing new medicines, and it does so far faster and more accurately than current methods.
One obstacle to AI in healthcare today is regulatory uncertainty. When companies don’t know how their products will be regulated, investment in these innovations becomes less attractive. This lack of investment reduces innovation, limiting AI’s potential to save lives and reduce medical costs.
Creating regulatory certainty while allowing for flexibility is challenging, but achievable. The US Food and Drug Administration (FDA) recently published non-binding guidance on Predetermined Change Control Plans. While this approach is not perfect, it offers a framework for developers to outline future software updates in advance, allowing products to evolve in predictable ways while on the market. At the same time, it protects patient safety by guarding against unintended changes to the AI software.
The FDA’s AI guidelines are more product-specific than the EU’s approach. Instead of grouping products by presumed risk, the US evaluates each product individually, assessing its unique risks and benefits. The US also distinguishes between new products and those that are “substantially equivalent” to existing products, which fosters more competition in the healthcare AI market by expediting the entry of competing products.
US regulators should take a lesson from this flexible, adaptive approach rather than adopting the EU’s obsession with rigid control. Other countries have recognized the importance of flexibility as well: health regulators in Canada and the UK have collaborated with the FDA on “good machine learning practices.”
It’s true that the US and EU have diverged on the regulation of AI in healthcare, as with many other industries. Historically, the EU has often regulated innovation out of its economy, and it appears poised to repeat that trend. The EU is attempting to micromanage the AI industry by creating a complex set of categories and assigning predetermined risk levels to AI applications. This approach attempts to define and regulate each role AI plays in healthcare.
While this might seem reasonable at first glance, its rigidity is ill-suited to a constantly evolving industry. By preemptively categorizing AI applications in healthcare, EU regulators risk missing emerging uses that don’t fit into predefined categories, thereby hindering the development of new technologies. Additionally, the EU treats each new tool as a threat, even when it is substantially similar or functionally equivalent to existing AI technologies. This approach unnecessarily delays the introduction of safe and tested tools, hampering Europe’s medical AI research industry.
Many states, from Colorado to Texas, are already looking to the EU as an example. EU regulators are also pushing California to adopt similar rules, hoping to move the US as a whole toward EU-style AI regulation. This will be costly, in both dollars and lives, if applied to healthcare AI. Instead, states should learn from the FDA’s flexible approach.
The US should avoid adopting the EU’s counterproductive regulations. AI is more dynamic than traditional healthcare technologies and therefore requires a more flexible regulatory approach. Software advances through iterative updates, becoming better and more efficient over time, and rigid regulations fail to account for that continuous evolution. Instead, federal and state governments should continue to embrace the technological advancements that have made the US a global leader in healthcare innovation.
Justin Leventhal is a senior policy analyst for the American Consumer Institute, a nonprofit education and research organization. For more information about the Institute, visit www.TheAmericanConsumer.Org or follow us on Twitter @ConsumerPal.