The future of technology is notoriously difficult to predict. Just ask anyone who hoped to be driving a flying car on Mars by now.
Just a few years ago, many experts predicted the end of the internet as we know it unless net neutrality was restored. That didn’t happen. Now, as generative AI has become the big topic of conversation, doomsday prophecies abound, ranging from age-old fears about worker displacement to human extinction. However, the long record of failed predictions should serve as a warning to lawmakers to approach these questions with humility and not legislate against problems that do not yet exist.
In 2015, net neutrality became the technology issue of the day when the Federal Communications Commission (FCC) adopted the Obama-era Open Internet Order. The fear was that without a robust regulatory framework to protect the internet, internet service providers (ISPs) would engage in anticompetitive behaviors such as blocking access to content or throttling speeds for particular websites and services. New businesses would be left in the dust, and free speech would be at the mercy of corporations. Net neutrality, the argument went, was needed so that ISPs would treat all internet traffic the same and consumers would be protected.
But here’s the rub: none of these dreaded outcomes ever materialized. When the Trump-era FCC rescinded the regulations in 2017, consumers were just fine, and the internet thrived. Unfortunately, that did not stop the government from trying to reinstate net neutrality rules, despite the negative impact they were likely to have on investment and on newer capabilities like network slicing; those rules were ultimately blocked in court.
Today, the debate surrounding AI, while different in some important ways, is following a pattern similar to that of net neutrality. A small but vocal minority is demanding that the government impose burdensome new regulations on AI to get ahead of doomsday predictions, some of them literal. Initial fears about the dangers of AI ranged from mass job displacement to malicious actors undermining democracy through deepfakes and misinformation to human extinction.
Some high-profile pieces of AI legislation were written with these potential pitfalls in mind. For instance, the EU AI Act classifies use cases according to risk, only some of which are grounded in reality. In California, Governor Gavin Newsom vetoed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which, had it become law, would have imposed excessive liability on developers of powerful AI models. Fortunately, Newsom recognized some of the bill’s flaws and is now working on a new AI safety proposal.
Read the full article here.
Trey Price is a policy analyst with the American Consumer Institute, a nonprofit education and research organization. For more information about the Institute, visit us at www.TheAmericanConsumer.Org or follow us on X @ConsumerPal.