As artificial intelligence (AI) technology continues to advance at a rapid pace, questions have been raised about its safety. Concerns range from AI's potential to amplify inequality to the dangers of autonomous weapons systems. While there is no doubt that AI is a potentially transformative technology, concerns about harm should be weighed against its benefits, not used to stifle the technology at the outset. One way to avoid this pitfall is to incorporate the technology into the existing liability system.

Due to the prominence of AI technology over the last year, leaders in the field have been debating how to move forward. A letter from the Future of Life Institute, whose signatories include Elon Musk, called for a pause in the development of generative AI more powerful than GPT-4. In the letter, the authors express concern that companies are racing to develop ever more powerful AI before we have sufficient guardrails. If these companies create AI more intelligent than humans without proper controls, the authors believe, it would be a threat to humanity's future.

Not everyone thinks such concerns are justified; some go as far as to say the proposed pause was a cynical move to give leading tech companies time to develop their AI without worrying about competition. While there are risks to AI development, we have already seen benefits in fields where AI has been implemented, such as medicine. For example, one study found that an AI program designed to screen for breast cancer and other abnormalities outperformed all but the best radiologists. Getting the same results from a program that does not get tired or burned out would benefit patients who depend on early diagnosis to catch these illnesses. It would be a loss to pause AI development right as it is taking off.

Regulators and industry experts need to find ways to mitigate the risks without unnecessarily stunting AI's potential for innovation. A guiding principle is to ensure that AI remains under human control. A notable but extreme example of risk is a recent comment by a US Air Force colonel saying that in a simulation an AI-operated drone opted to kill its operator. While the military has since characterized this as a thought experiment rather than an actual simulation, it still serves as a reminder of the potential risks. Luckily, the existing legal system for handling liability can address many concerns over the use of AI.

One area that demonstrates how AI regulation can be incorporated into current liability law is discrimination. Human bias can be baked into the data sets used to train AI, which in turn influences the system's decisions. In 2014, Amazon began testing a program to automate its hiring process and found that, because the résumés used to train it came overwhelmingly from men, a reflection of its majority-male technical staff, the program began discriminating against women.
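
To make the mechanism concrete, here is a minimal, hypothetical sketch in Python. It is not Amazon's actual system; the data, feature names, and numbers are invented for illustration. It shows how a model trained on biased historical hiring decisions learns to penalize a protected attribute even when applicants are equally qualified:

```python
# Hypothetical sketch: how historical bias in training labels
# leaks into a hiring model. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Synthetic applicants: one qualification score and one protected attribute.
skill = rng.normal(size=n)              # the signal we *want* the model to use
is_female = rng.integers(0, 2, size=n)  # protected attribute (0 = male, 1 = female)

# Historical "hired" labels reflect biased past decisions:
# equally skilled women were hired less often.
logit = 2.0 * skill - 1.5 * is_female
hired = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train on the biased history, protected attribute included as a feature.
X = np.column_stack([skill, is_female])
model = LogisticRegression().fit(X, hired)

print("learned weights [skill, is_female]:", model.coef_[0])
# The model reproduces the bias: a clearly negative weight on is_female,
# meaning it penalizes women independently of skill.
```

Note that simply deleting the protected column is not a complete fix, since models can pick up proxies for it; Amazon's tool reportedly penalized résumés containing the word "women's" even without an explicit gender field. This is part of why holding the humans who deploy these systems accountable for outcomes matters more than policing any single input.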

As discrimination in hiring is already illegal, clarifying that AI systems don't get a legal carveout is a first step in establishing human liability for the use of this technology. Applying anti-discrimination laws to those using AI hiring software the same way they apply to other forms of unintentional discrimination would encourage companies to self-police without requiring a new legal framework. Holding humans liable for their programs' actions would be a major step toward clarifying how current law applies to AI, and it would encourage companies that develop and use AI to do so responsibly to avoid liability.

AI has incredible potential, but it comes with risks. By making intelligent policy now, we can safely reap the benefits of AI without destroying its potential.

Trey Price is a technology policy analyst for the American Consumer Institute, a nonprofit education and research organization. For more information, visit https://www.theamericanconsumer.org/ or follow us on Twitter @ConsumerPal.
