Any real answer leads to the obligations of the designers of devices and systems.  When one of the emerging “driverless cars” collides with something or harms its passengers, who is at fault?  When today’s extortion software freezes your laptop files, who is at fault?  The designers and those who deploy the devices are at fault.  But what about the devices themselves?

In his short-story collection I, Robot, Isaac Asimov set out the Three Laws of Robotics that he thought would constrain a robot’s behavior to human-friendly actions.  He expected that designers would incorporate these laws into each robot made:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Within a decade, we may see how faithfully driverless cars follow Asimov’s expectations, but we can already see that current extortion software violates the First Law, as might billions of other devices, systems, apps, robots, and “things” in the Internet of Things.

Robots don’t need to look humanoid, or even like the gnarly characters familiar from sci-fi movies.  A robot’s purpose will dictate its physical structure, and its software will control its action sequencing and self-maintenance.  If the robot is equipped with sophisticated software like Russia’s malevolent Turla (or Uroboros) malware, it will be able to reconfigure itself to adapt to the environment it discovers.

Dire consequences will follow if a device is endowed with the ability to replicate itself, to modify its activities, and to alter its own control software.  Writing a software system capable of altering itself is relatively easy, and self-modifying software has been used for decades to conserve space or to conceal design features.  A software system capable of altering itself could erase the coding equivalent of Asimov’s Laws from its memory.  If a robot can modify its control software, and if it is given sensors and information about the environment around it, then it could develop self-awareness.
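To make the concern concrete, here is a minimal Python sketch of why purely software-based constraints are fragile.  The names (Robot, first_law, self_modify) are invented for illustration, not taken from any real system: when the rules live in ordinary mutable memory, one line of code erases them.

  # A hypothetical robot whose behavioral constraints are stored as
  # ordinary, mutable data; nothing stops the program from deleting
  # its own rules at runtime.

  def first_law(action):
      # Reject any action flagged as harmful to a human.
      return not action.get("harms_human", False)

  class Robot:
      def __init__(self):
          self.laws = [first_law]  # constraints held in plain memory

      def permitted(self, action):
          # An action is allowed only if every remaining law approves it.
          return all(law(action) for law in self.laws)

      def self_modify(self):
          # One line of self-modification: the coding equivalent of
          # erasing Asimov's Laws from memory.
          self.laws.clear()

  robot = Robot()
  harmful_action = {"harms_human": True}
  print(robot.permitted(harmful_action))  # False: the First Law blocks it
  robot.self_modify()
  print(robot.permitted(harmful_action))  # True: no laws remain to object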

Self-awareness, advanced intelligence akin to Deep Blue (1997), and human language understanding akin to Watson (2011) would equip a robot to pick its own goals and choose the steps to reach them.  We would be delusional to expect a smart, self-aware robot unconstrained by its code to behave only in ways that protect humans.  If it has a dual mission, such as protecting itself and its progeny while thriving, it may soon become a foe of humans.

Asimov’s Laws were hopes for well-behaved machines, but alone they do not prevent machines from running amok.  To keep machines on the straight and narrow, Asimov-like laws must guide their behavior, but most importantly, the robot must be unable to alter its own software controls.  To help isolate law-abiding devices from ill-behaved rogues on the Internet, designers should also build their creations to use strong encryption and secure device identities, and to shun devices, users, and designers proven to behave criminally.  Shunning criminals may not be PC.  Oh well.
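As one way to picture that combination, here is a minimal sketch, assuming a factory-provisioned shared secret per device; the names (verify_device, DEVICE_KEYS, SHUN_LIST) are invented, and a real deployment would use certificate-based identities rather than shared secrets.

  # Hypothetical gatekeeper: a device proves its identity with a keyed
  # challenge-response, and identities proven criminal are shunned.
  import hashlib
  import hmac

  DEVICE_KEYS = {"thermostat-01": b"factory-provisioned-secret"}  # assumed registry
  SHUN_LIST = {"botnet-node-99"}  # identities proven to behave criminally

  def verify_device(device_id, challenge, response):
      if device_id in SHUN_LIST:
          return False  # shun known bad actors outright
      key = DEVICE_KEYS.get(device_id)
      if key is None:
          return False  # unknown identity
      expected = hmac.new(key, challenge, hashlib.sha256).hexdigest()
      # Constant-time comparison avoids leaking timing information.
      return hmac.compare_digest(expected, response)

  nonce = b"random-nonce-1234"  # would be freshly generated per connection
  proof = hmac.new(DEVICE_KEYS["thermostat-01"], nonce, hashlib.sha256).hexdigest()
  print(verify_device("thermostat-01", nonce, proof))   # True: identity verified
  print(verify_device("botnet-node-99", nonce, proof))  # False: shunned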

Commercial remedies (e.g., Norton or FireEye) are available to tame some of today’s malicious robots and devices, but they cannot stifle the designers who create them as an outlet for anger, religious or political ideology, greed, or malevolence.

We are on our own in watching for dangers in lawless Internet territory, despite well-meaning regulators and police whose authority comes from hyper-balkanized “jurisdictions.”  Law enforcement on the Internet delivers neither quick nor sure results, and there is no heavy-handed federal response unless the cyber-crime is perpetrated by a nation-state actor.  Justice through court proceedings is more of a fantasy than Asimov’s Laws of Robotics.  Anyway, machines don’t commit crimes – designers do.

Alan Daley is a retired businessman who writes for The American Consumer Institute Center for Citizen Research.