What Should a Federal Law for AI Look Like?

Artificial intelligence has had a growing impact on the world, and its rise has driven sharp gains in productivity across the global economy. As policymakers attempt to understand AI’s powerful effects on the economy and society, they will inevitably want to regulate its use. The European Union (EU) has moved in this direction in recent months, and similar regulations may soon come to the United States, as President Biden has already formed a ‘task force’ to tackle the issue.

Currently, there is no federal law regulating AI in the United States, and rapid AI development continues as more startups promise ever more capable and intelligent software. Policymakers should not impose sweeping restrictions on artificial intelligence that would stifle valuable innovation. AI has accelerated economic output and improved consumer welfare, and these gains should continue unobstructed.

Perhaps the starkest example of AI’s positive effects is its deployment on assembly lines in Tesla factories, where it has lowered the cost of electric vehicles for consumers by making production cheaper. Not only is AI making advanced cars more affordable, enhancing consumer welfare; it is also allowing environmentally friendly modes of transport to become more widespread.

Those concerned about worker displacement from automating factory jobs have called for taxes or even bans on AI technology. This is the wrong approach: it would make products more expensive for consumers and leave the US uncompetitive globally, since AI has helped lower unit costs for many American businesses. In turn, companies can sell products to customers at lower prices. These misguided demands should not be part of any potential federal regulations.

Much of today’s AI innovation is happening in healthcare. Futurists correctly predicted AI systems that can interact directly with a user’s brain, allowing them to type words and control a computer without needing to engage with it physically. This technology has allowed people with disabilities to communicate, significantly improving their quality of life. Regulations on AI should avoid placing limits on experimental medical uses, as these applications promise to better the lives of disadvantaged individuals.

IBM’s Watson Health is a cloud service that uses AI to gather medical data from hospitals, insurance providers, and even telemedicine apps, and store it all in one easily accessible location. With this technology, a patient’s health can be monitored on an Apple Watch and the information made immediately accessible to their doctor, allowing for faster diagnosis and treatment. Quicker treatment leads to better health outcomes, as illnesses and irregularities can be identified and addressed sooner.

The EU’s “high-risk” classification of medical applications of AI means that IBM would need to navigate intense bureaucracy to make its Watson Health product compliant, costing patients and doctors valuable time. Without access to this AI, we can expect unnecessary negative health outcomes for patients as they are not served as quickly as they otherwise could be.

Rather than following the EU’s blanket approach, which would see technology like Watson Health prohibited, U.S. regulators should target specific cases of data misuse instead of laying down sweeping rules for an entire sector.

This is not to say there are no ways AI technology can be abused. There are valid concerns about facial recognition: AI could be used to covertly collect and distribute the personal details of users logged into a website, or to deliver subliminal messaging. These were the main concerns the EU had when drafting its regulations. Federal law should ensure that companies using these forms of AI comply with clear data privacy guidelines.

Preventing whole categories of AI from developing would hurt the consumers who stand to see exciting new applications in the future. Consider that the same machine learning technology that tracks faces is also used to identify victims at crime scenes. Rather than an outright prohibition, Congress should regulate the specific applications of facial recognition that might violate the privacy of Americans; treating all applications of the technology the same is the wrong approach. As the Biden Administration considers a federal legal framework governing AI’s use in the U.S., it must keep in mind that bureaucracy should never hinder progress.

While there are valid concerns about AI, regulations should target specific practices, not the private sector’s ability to develop and implement innovative software. What AI might lead to years down the line remains to be seen, but the extraordinary progress happening today offers a positive roadmap for consumers.
