Virginia could become the next state to drastically regulate the use of artificial intelligence. Its High Risk Artificial Intelligence Developer and Deployer Act was recently passed out of the legislature and currently awaits Governor Youngkin’s decision to either sign it into law or slap it with a veto. Policymakers in Virginia—and in other states—would be wise to shun such a heavy-handed approach and embrace legislation that proactively puts consumers in command of their AI-powered futures.
Many other states have floated similar bills, all approaching the AI question from the same angle: AI is a threat from which consumers must be sheltered. That attitude lends itself to heavy-handed governance that prioritizes paperwork and compliance, threatens competition, and slows development. In short, these are bills about micromanagement, not consumer empowerment.
The bill would limit AI use in making “consequential decisions” about education, healthcare, and more. It requires developers to predict and prevent every harmful possibility, a task as impossible as it is impractical. It also mandates reams of reports documenting how developers, deployers, and anyone else who modifies or uses an AI system mitigates any known or foreseeable risks. In doing so, it takes a “guilty until proven innocent” approach to AI governance.
Smaller firms would suffer most under these bureaucratic hurdles because they have fewer resources for compliance. While AI tech giants can hire dedicated compliance personnel, startups operate at a much smaller scale. One estimate, drawing a comparison between Virginia’s bill and the EU AI Act, finds compliance could cost Virginia innovators $290 million, a burden that larger firms with more resources are far better positioned to absorb. This is a recipe for crushed competition.
Heavy-handed compliance laws slow AI development because they hold developers to strict requirements that are already two steps behind. For example, the National Institute of Standards and Technology (NIST) had released the first versions of its AI Risk Management Framework before ChatGPT arrived in November 2022 and redefined the AI landscape. The framework was immediately outdated and was last updated in July 2024. Although AI has continued to evolve since then, most proposed or enacted state algorithmic fairness laws, including Virginia’s bill, treat the framework either as a baseline for further requirements or as a plausible defense for compliance. Though the framework was intended to be an evolving and voluntary set of guidelines, states have attempted to codify it into de facto law.
Colorado is so far the only state to have passed an AI fairness law, and it has yet to take effect. Meanwhile, artificial intelligence is already improving consumers’ lives in many ways. In education, surveys find that 91 percent of teachers and 87 percent of students believe it can help recover pandemic-era learning loss. In the workforce, the World Economic Forum finds that AI-driven job growth will exceed job losses by 12 million. In medicine, AI can “reduce costs and increase the efficiency of pharmaceutical research, drug discovery, design, and testing, as well as disease diagnosis.” Outcomes like these are unequivocally good for consumers and should not be blocked by misplaced precautions.
Instead, policymakers should change their tune. Before imposing new laws to preemptively curtail the unknowable, lawmakers should empower existing agencies to enforce the laws already on the books. Many proposed AI laws simply map existing concerns, from copyright to fairness, onto the new technology, but states, including Virginia, already have laws to police businesses in these areas. That existing body of law provides a robust foundation and can be enforced without additional, AI-specific legislation.
As for unforeseeable risks, letting NIST refine its framework around principles and best practices would give lawmakers and businesses flexibility, allowing adaptive, dynamic responses as challenges emerge. Rushing to regulate a rapidly evolving technology will only render new laws obsolete at best or hobble AI innovation at worst. Once harms are real rather than merely speculative, policymakers can move ahead with confidence and craft informed laws wherever necessary. This approach balances innovation against actual harm, ensuring consumers can enjoy the full benefits of AI while remaining protected once concrete dangers become evident.
Regulating too heavily, too quickly, and too soon threatens AI’s transformational potential to improve and save lives. Allowing the new technology to grow and operate can bring gains in education, employment, medicine, and more. Curtailing advancement in the name of risk mitigation would impose anti-competitive compliance costs and cost the US its innovative lead. Virginia should resist the temptation to heap even more state-level regulation onto the AI industry. A better approach is to debate and pass laws that govern specific harms as they arise, on an as-needed basis.
Nate Karren is a policy analyst with the American Consumer Institute, a nonprofit education and research organization. For more information about the Institute, follow us on X @ConsumerPal.