The European Union is developing a model for regulating artificial intelligence (AI) in which AI systems would be categorized by the risk they pose to the public. While adopting this model wholesale may not be the best approach, it has one clear benefit: it recognizes that AI is not a uniform product and should not be regulated as one.

The EU model creates different risk tiers, each with its own regulatory requirements. The highest tier, "unacceptable risk," covers applications that would jeopardize public safety or civil liberties, such as social scoring systems run by governments.

For applications that can be used safely but still carry risks to the general public, the EU approach applies the label "high risk." This category covers a broad range of uses, including police applications, credit rating systems, and programs managing critical infrastructure.

Essentially, anything that could pose a safety risk or influence the trajectory of someone's life would fall into this category. One example is the use of facial recognition software by police. Current research shows a high rate of false positives, meaning that wide adoption could result in innocent people being falsely accused. Systems in this tier would require careful evaluation for transparency and accuracy, as well as mandatory human oversight.

The last two categories, "limited risk" and "minimal risk," require the least regulation. Limited risk includes uses such as chatbots, which would simply require companies to disclose to users that they are chatting with an AI program rather than a human. The proposed framework would reduce regulatory burdens by essentially exempting minimal-risk AI from regulation.

Rather than preemptively establishing regulations as the EU has, the US should wait for demonstrated harms before setting rules. Even so, distinctions among different uses should still be drawn. Some states have already passed regulations on AI, and a bill was recently introduced in the House of Representatives to create a bipartisan commission to review AI regulation and make recommendations. These efforts, however, are likely to be too broad: a single commission cannot adequately review uses in fields ranging from customer service to medical practice.

AI regulation is complex and has yet to be fully fleshed out, as the technology is changing rapidly. Not all applications are equally dangerous, and different applications require different scrutiny. ChatGPT should not be governed by the same regulation meant to guide facial recognition technology in law enforcement.

As the United States moves toward reconciling its disparate approaches to AI, it is important to recognize that different applications carry different risks. While the EU model may not be perfect, its recognition that different uses of AI may require different rules, based on demonstrated harm to human safety and liberty, is a good starting point.

Trey Price is a technology policy analyst for the American Consumer Institute, a nonprofit education and research organization. For more information, visit or follow us on Twitter @ConsumerPal.