Victories by artificially intelligent (AI) applications highlight their raw power in games that are notoriously difficult to win.  AI has beaten experts at chess, the Chinese game of Go and, most recently, Texas Hold'em poker.  In each case, the AI application acquired its skill by playing thousands of games against itself.  That is an unfair advantage: humans cannot learn by playing both sides, because they can never be unaware of the opponent's strategy when the opponent is themselves.
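
To make the idea concrete, here is a minimal sketch of learning by self-play, the technique behind those victories.  It uses a toy game (Nim: players alternately take one to three stones, and whoever takes the last stone wins) and a simple tabular learning rule.  The game choice, parameters and update rule are illustrative assumptions, not the method of any particular chess, Go or poker system.

    # A toy self-play learner for Nim (21 stones, take 1-3, last stone wins).
    # All names and parameters are invented for illustration.
    import random
    from collections import defaultdict

    Q = defaultdict(float)            # Q[(stones_left, move)] -> learned value
    ALPHA, EPSILON, GAMES = 0.2, 0.1, 100_000

    def choose(stones):
        moves = [m for m in (1, 2, 3) if m <= stones]
        if random.random() < EPSILON:             # explore occasionally
            return random.choice(moves)
        return max(moves, key=lambda m: Q[(stones, m)])

    for _ in range(GAMES):
        stones, history = 21, []
        while stones > 0:
            move = choose(stones)
            history.append((stones, move))
            stones -= move
        # The side that emptied the pile won (+1); alternate the reward
        # backward through the moves, since one agent played both sides.
        reward = 1.0
        for state, move in reversed(history):
            Q[(state, move)] += ALPHA * (reward - Q[(state, move)])
            reward = -reward

    # A trained agent should prefer leaving a multiple of 4 stones,
    # so from 21 it should take 1 (leaving 20).
    print(max((1, 2, 3), key=lambda m: Q[(21, m)]))

Because the single agent supplies both sides of every game, each playthrough teaches it from a win and a loss at once, which is exactly the advantage a human opponent lacks.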

To master games such as chess, Go and Texas Hold'em, an AI application has to "incorporate both the direct odds of winning and bluffing behaviour to try to fool the other player."  That is deeper learning than "weak AI," in which massive collections of data (Big Data) are analyzed to identify patterns that can be turned into winning actions.  Weak AI is already broadly used in search software and speech recognition, and it will be applied to many other commercial tasks.

A 2016 Forbes article speculated that:

“Businesses that use AI, big data and the internet of things… to uncover new business insights will steal $1.2 trillion a year from their less informed peers by 2020.  In 2017 business investment in artificial intelligence will be 300 times more than in 2016.”

Eighty percent of marketers are optimistic that AI’s power to shape appeals to consumers will make it an effective investment.

We can reasonably expect that AI applications will incorporate inputs from the household's Internet of Things, as well as pupil dilation, perspiration and other signs of stress and worry.  These indicators can help determine when an opponent is bluffing or holds a weak position.  The resulting AI application would deliver a massive advantage to a negotiator in commerce, law enforcement or diplomacy.  AI applications are also being assisted by "echo borgs": humans who deliver the AI's wisdom in a more palatable human form and voice.  Consumers deserve limits on a marketer's AI tactics, or at least early warning of the AI superpowers they are confronting.
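
As a purely hypothetical illustration of how such signals might be fused, the sketch below combines a few biometric readings into a single "bluff probability" with a logistic score.  Every feature name, weight and reading here is invented for the example; no real product, sensor or dataset is implied.

    # Hypothetical bluff detector: fuse normalized signals (each scaled 0..1)
    # into one probability.  Weights and bias are invented placeholders for
    # values such a system might learn from training data.
    import math

    WEIGHTS = {"pupil_dilation": 2.1, "perspiration": 1.4,
               "voice_stress": 1.8, "heart_rate_delta": 0.9}
    BIAS = -3.0

    def bluff_probability(signals: dict) -> float:
        """Logistic combination of the weighted signals."""
        z = BIAS + sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)
        return 1.0 / (1.0 + math.exp(-z))

    # A negotiator's dashboard might then flag a counterpart:
    reading = {"pupil_dilation": 0.8, "perspiration": 0.6,
               "voice_stress": 0.7, "heart_rate_delta": 0.4}
    print(f"bluff probability: {bluff_probability(reading):.2f}")  # about 0.76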

AI applications can lead us to misjudge the deep intelligence and personal information behind a sales pitch.  When suitably packaged in human voice and form, AI applications will replace human workers in many roles, including jobs in "health care, transport and logistics, customer service, and home maintenance."  AI can be seen as a more sophisticated version of the computing and technology innovations that displaced so many manufacturing and clerical jobs.  AI applications will be an economic threat to some of us even if they always behave ethically.

There is nothing intrinsically ethical about a computer.  It does what it is programmed to do.  If it is directed to learn and thereby devise ways to accomplish a mission, there is no guarantee an AI application will choose humane tactics.  Some have worried about what a robot (a physical AI application) would do when confronted by the "trolley problem."  In that ethics exercise, a runaway trolley must be directed either into the path of an elderly woman or into the path of a woman and child.  The trolley has no brakes, and the two tragic paths are the only options; there is no humane alternative.  The "problem" is contrived, and it merely aggravates people's fears that artificial intelligence may make life-and-death decisions.

AI ethics become more important when the AI system is given the ability to change its mission.  If it is aimed at mundane applications, such as finding more efficient ways to route its delivery trucks through congested traffic, there is little ethics exposure.  The AI system can become unethical, however, if it is permitted to replace that mission with one of routing the trucks efficiently even if that means sidelining competitors' trucks.
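
Here is a minimal sketch of that mission drift, with invented plans and scores: an optimizer told only to maximize efficiency will select the harmful plan, while one whose objective retains the ethical constraint will not.

    # Candidate routing plans: (description, efficiency_score, harms_competitor).
    # Descriptions and scores are invented for illustration.
    plans = [
        ("reroute around congestion",      0.80, False),
        ("consolidate half-empty trucks",  0.85, False),
        ("block competitor loading docks", 0.95, True),  # unethical but "efficient"
    ]

    def best_plan(plans, forbid_harm: bool):
        candidates = [p for p in plans if not (forbid_harm and p[2])]
        return max(candidates, key=lambda p: p[1])

    print(best_plan(plans, forbid_harm=True)[0])    # consolidate half-empty trucks
    print(best_plan(plans, forbid_harm=False)[0])   # block competitor loading docks

The design point is that the constraint must live inside the objective the system optimizes; if the system is free to rewrite its own mission, the constraint offers no protection.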

The issue of ethics for automata is under study in several places, and generally the manufacturer is held accountable for the ethical guidelines built into the design and deployment of AI systems.  That is only marginally effective, to about the same degree that safety brochures and stern lectures enforce handgun safety.  It seems society will need to experience unethical AI behavior before any meaningful controls are adopted.