Some consumers may be alarmed at the prospect of malicious Artificial Intelligence (AI), while others are unconcerned. In a verbal standoff, Elon Musk claims that humanity needs to fear AI, while Steven Pinker (a highly respected academic) regards that perspective as unproductive fear-mongering. Bill Gates and the late Stephen Hawking side with Musk, but Steve Wozniak (co-founder of Apple) sides with Pinker.

There are good reasons to respect the views of each of this brilliant fivesome. The issue matters: if Musk is right, we face real danger. Moreover, AI is expected to add $15 trillion to the world economy by 2030, so eradicating AI flaws is also an economic imperative.

Pinker and Wozniak hold that there are strong impediments to AI taking control of society; for example, robots are unable to conduct and coordinate all the mining, manufacturing, and creative design needed to produce the goods that consumers (and robots) need. Pinker cites the long history of unwarranted fears that accompanied society’s adaptation to new technologies such as the bow and arrow, gunpowder, the industrial revolution, and the harnessing of nuclear fission. We survived those disruptions, and we can expect to survive AI as well.

Those who consider AI risky point out that, within the next few years, narrow AI can be tasked with attacking individuals, competitors, or governments at much lower cost and with greater efficacy than cybercriminals can manage without AI. The attacks could be physical (e.g., drones carrying explosives, coordinated by AI) or cyber (AI-guided fake news or highly effective spear phishing). In the longer run, they expect deep-learning AI to show a much higher degree of independence, both in its choice of targets and in how it devises its attacks.

As a dramatic example, an AI system could soon be informed by the collected knowledge of genes and the processes they mediate. With that background, AI could do a superior job of developing “gene drive” mutations, which are used to alter or even eliminate specific species. In misbehavior suited to a horror movie, some “conservationists” are already working in that direction to eradicate certain rodents, albeit without AI help, at least not yet.

There are plenty of brutal regimes around the world practicing ethnic cleansing and other crimes against humanity that would find an AI/CRISPR gene drive to be a useful component of their arsenal, one potentially even more effective than chemical warfare.

AI itself is not malicious. If we want it to behave in benign ways, we need to build in protective guidance or “ethics.” Behavioral restraints on an otherwise freewheeling AI system can provide it with guardrails that protect us. But AI will not sprout ethics independently; developers need to actively embed those limits into their AI systems. Ethics principles for both developers and AI systems have not yet achieved widespread agreement, although the potential for malicious behavior, and the ethics needed to control it, have been discussed convincingly in The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.
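To make the idea of embedded guardrails a little more concrete, here is a minimal illustrative sketch in Python of one way a developer might check an AI system’s proposed actions against an explicit policy before allowing them to run. It is an assumption-laden toy, not a method from the report above: the action names, deny-list, and risk threshold are all hypothetical.

```python
# Illustrative sketch only: a developer-embedded "guardrail" that vets every
# proposed action before the AI system is allowed to execute it.
# Action names, the deny-list, and the risk threshold are hypothetical.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    name: str          # e.g. "send_email" (hypothetical action label)
    target: str        # who or what the action affects
    risk_score: float  # 0.0 (benign) to 1.0 (clearly harmful), from some upstream classifier

BLOCKED_ACTIONS = {"synthesize_sequence", "deploy_payload"}  # hypothetical deny-list
RISK_THRESHOLD = 0.7  # hypothetical cutoff chosen by the developer

def guardrail(action: ProposedAction) -> bool:
    """Return True only if the proposed action passes the embedded policy."""
    if action.name in BLOCKED_ACTIONS:
        return False
    if action.risk_score >= RISK_THRESHOLD:
        return False
    return True

def execute(action: ProposedAction) -> None:
    """Run the action only after the guardrail approves it."""
    if not guardrail(action):
        print(f"Refused: {action.name} on {action.target}")
        return
    print(f"Executing: {action.name} on {action.target}")

if __name__ == "__main__":
    execute(ProposedAction("send_email", "customer-42", risk_score=0.1))      # allowed
    execute(ProposedAction("synthesize_sequence", "lab-3", risk_score=0.2))   # refused
```

The specific rules are beside the point; what matters is that the limits live inside the system itself, put there deliberately by its developer rather than sprouting on their own.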

Even if there were general agreement on the principles, there is no policing mechanism to coerce AI developers into including suitable safeguards in their AI systems. A Wired article, “AI Research Is in Desperate Need of an Ethical Watchdog,” reports on a multi-institution study group called “Pervade” that is working on a clearer ethical process for big-data research that both universities and companies could use.

Commercial buyers of AI systems will be eager to limit their exposure to legal liability that their AI systems could incur, and the work product from Pervade may be useful in that regard. But nation-states, cybercriminals, and terrorists will not tolerate the imposition of civilized society’s limits on their AI systems. Name-calling and vowing that we will not sink to their level is just vacuous self-indulgence; it will not limit the risks we face.

Research on cooperative behavior between humans and AI systems suggests that AI can be tasked with behaviors that are dynamically adjustable and remain under human control. But although it is possible to guide an AI system long after it has left its “factory,” the quality of that ethical guidance is again subject to the smarts and morality of its human co-worker or, more likely, to the ethics of its developer. It may be soothing to agree with Pinker and Wozniak, but logic and familiarity with deep-learning AI’s capabilities argue that Musk, Gates, and Hawking are the better predictors. We have not heard the last of this.
