The ethics associated with Artificial Intelligence (AI) have become a serious consideration within academia and the high-technology industry. Ethics are “the rules of conduct recognized in respect to a particular class of human actions or a particular group, culture.”

AI developments are in their formative stage and lack generally accepted rules of conduct, although most of us would welcome a moral spine to guide the actions of AI systems. Unfortunately, diving deeper into classical “ethics” is unlikely to reveal an innate moral skeleton within AI. That skeleton must come from outside, and many groups are working on AI “ethics.”

Subsets of ethics offer some interesting perspectives. For example, metaethics addresses the origin and definition of ethical principles and the role of reason. In metaethics we find people’s ethical principles containing notions of Universal Truths, God’s Will, and rules to live by.

Normative ethics defines the moral standards (specified by a person or group) that regulate right and wrong choices. It involves good habits, ethical duties, and the consequences of inappropriate ethical choices.

Finally, applied ethics is used routinely to make everyday decisions on controversial issues such as stem cell research, animal rights, nuclear war, and capital punishment.

Ethics may offer a convenient taxonomy for principles, but the hard work is in developing those principles and obtaining agreement from society and AI experts.

One useful collection of AI principles was assembled at AI-Ethics.com. Its sources include Isaac Asimov; the Future of Life Institute’s 2017 Asilomar conference; the Conference Toward AI Network Society; the ACM statement on Algorithmic Transparency and Accountability; and the Allen Institute for Artificial Intelligence’s “A ‘principled’ artificial intelligence could improve justice.”

Both the EU and the UK House of Lords have recommended principles for AI ethics. The House of Lords offered five robust suggestions:

  • AI should be developed for the common good and benefit of humanity. Prejudices of the past must not be unwittingly built into automated systems. In the recent past, some machine learning efforts used biased data to train systems that look for “criminals” (see the sketch after this list).
  • AI actions must be intelligible and explicable. AI systems can make accurate decisions very quickly, but some are deficient in explaining to humans how they arrived at an answer.
  • AI should not be used to diminish the data rights or privacy of individuals, families, or communities. Companies must have fair and reasonable access to data, while citizens and consumers retain the ability to protect their privacy; large companies that control vast quantities of data must be prevented from becoming overly powerful within this landscape.
  • Government must invest in skills and training to mitigate the disruption caused by AI in the jobs market. Automation is expected to displace 494 million jobs by 2030: 236 million in China, 120 million in India, 73 million in the U.S., 30 million in Japan, and 35 million elsewhere.
  • The autonomous power to hurt, destroy, or deceive human beings should never be vested in artificial intelligence. Oops: autonomous weapons are already in use.
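To make the bias concern concrete, here is a minimal sketch using made-up synthetic data. The feature names, weights, and the scikit-learn classifier are illustrative assumptions, not any particular deployed system; the point is only that a model trained on prejudiced historical labels will faithfully reproduce that prejudice.

```python
# Minimal sketch (hypothetical synthetic data): biased historical labels
# leak into a trained model. Assumes numpy and scikit-learn are available.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

signal = rng.normal(size=n)             # a genuinely predictive feature
protected = rng.integers(0, 2, size=n)  # an attribute that should be irrelevant

# Historical labels were skewed against protected == 1, so the bias is
# baked into the training targets themselves.
logits = 1.5 * signal + 2.0 * protected - 1.0
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X = np.column_stack([signal, protected])
model = LogisticRegression().fit(X, y)

# The trained model reproduces the historical prejudice: the protected
# attribute carries a large learned weight.
print(dict(zip(["signal", "protected"], model.coef_[0].round(2))))
```

Note that simply dropping the protected column is rarely enough in practice, because correlated proxy features can carry the same signal. Inspecting the learned weights, as in the last line, is also one small step toward the intelligibility the second suggestion calls for.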

Proposals from the Future of Life Institute’s 2017 Asilomar conference were the most detailed, and they overlap with many of the other submissions.

The AI ethics submissions are well thought out, and if we could guarantee that AI would be governed by these lofty principles, our job would be nearly complete. Of course, there is no way to enforce compliance, any more than there is a way to force compliance with the criminal code. Dreams of well-behaved AI systems and robots will remain fiction.
