Artificial Intelligence (AI) is commonly discussed in three forms: narrow AI, machine learning, and deep learning.  The demand for experienced AI developers is brisk, with Microsoft, Facebook, Google, IBM, and Intel recruiting aggressively, and non-IT firms are already seeking AI developers of their own.  In recent years, much AI development has focused on practical products: tools for the legal industry, human speech processors (e.g., Alexa and Siri), weather prediction, and medical applications such as identifying cancer cells.  AI is no panacea – there are tasks where it may not be effective and tasks where it may be unwelcome.

Quickly evolving AI fields include both ethics and human-machine cooperation.  We cannot rely on Isaac Asimov’s fictional Laws of Robotics.  “The First Law of Robotics:  A robot may not injure a human being or, through inaction, allow a human being to come to harm. The Second Law of Robotics: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.”

Asimov’s laws of robotics made good reading, but they are insufficient for AI systems.  Enforcement remains an unsettled issue, as unbridled cybercrime demonstrates.  Specifying ethics and devising an enforcement mechanism for AI are essential if we are to avoid hideously expensive finger-pointing litigation between AI users and AI developers over the liabilities that will arise when machines and humans conflict.

Narrow AI is pattern recognition, such as supplying the name that goes with the image of a particular face.  To achieve reliable recognition of patterns, the AI software is shown hundreds or thousands of images that are or are not the target object.  After each attempt to identify the object, the software is told yes or no.  In this way, narrow AI learns from repetition, much as human infants do.
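The guess-then-correct loop described above can be sketched with a toy perceptron.  The feature vectors and labels below are hypothetical stand-ins for "images that are or are not the target object"; a real image classifier would be far larger, but the yes/no feedback cycle is the same.

```python
def train_perceptron(examples, labels, epochs=20, lr=0.1):
    """Repeatedly guess, then learn from the yes/no answer."""
    weights = [0.0] * len(examples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(examples, labels):
            # Attempt an identification ...
            guess = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            # ... then adjust the weights using the yes/no feedback.
            error = target - guess
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

# Toy "images": two features per example (e.g., brightness, edge count).
examples = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
labels = [1, 1, 0, 0]  # 1 = target object, 0 = not the target
w, b = train_perceptron(examples, labels)
print(predict(w, b, [0.85, 0.9]))  # prints 1: resembles the target class
```

After a few passes over the labeled examples, the weights settle and the model generalizes to nearby inputs it has never seen.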

Machine learning is the practice of using algorithms to parse data, learn from it, and then make a determination or prediction about something in the world.
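The parse-learn-predict cycle in that definition can be illustrated with a deliberately simple 1-nearest-neighbour rule.  The CSV-like rows and the spam/ham labels are made-up illustrations, not a real dataset.

```python
import math

def parse(rows):
    """Parse raw CSV-like strings into (features, label) pairs."""
    data = []
    for row in rows:
        *feats, label = row.split(",")
        data.append(([float(f) for f in feats], label))
    return data

def predict(data, query):
    """Predict by copying the label of the closest known example."""
    features, label = min(data, key=lambda pair: math.dist(pair[0], query))
    return label

raw = ["1.0,1.1,spam", "0.9,1.0,spam", "0.1,0.2,ham", "0.0,0.1,ham"]
model = parse(raw)
print(predict(model, [0.95, 1.05]))  # prints spam
```

Real machine-learning systems use far more sophisticated algorithms, but the shape is the same: ingest data, build a model from it, and use the model to make a determination about new inputs.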

Deep learning AI software and computers are structured as artificial neural networks (loosely modeled on neural networks in human brains) whose layers are stacked on top of each other. “When deep learning software is looking at pictures containing cats, some neural layers may focus on colors, while others are determining shapes, and another layer will gather the results and try to determine if what the computer is seeing is indeed a cat.”  Layer after layer, the assessment of each image continues until connections to the final layer are established.  The final layer of connections provides a template that can be used to identify the sought-after objects in other images.
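The stacking can be sketched as a forward pass through two small fully connected layers.  The weights below are hypothetical, not trained; in a real system they would be learned from thousands of labeled images.

```python
def layer(inputs, weights, biases):
    """One fully connected layer with a simple ReLU non-linearity."""
    return [max(0.0, sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def network(pixels):
    # Layer 1 might respond to colors, layer 2 to shapes;
    # the final step gathers the evidence into one "is it a cat?" score.
    h1 = layer(pixels, [[0.5, -0.2, 0.1], [0.3, 0.8, -0.4]], [0.0, 0.1])
    h2 = layer(h1, [[0.6, 0.4], [-0.3, 0.9]], [0.05, 0.0])
    return sum(h2)

print(network([0.2, 0.7, 0.1]))  # a single score for a toy 3-pixel "image"
```

Each layer consumes the output of the layer beneath it, which is what "stacked on top of each other" means in practice.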

Depending on the nature of what is being learned, each layer of the neural network decides whether something in the image is “significant.”  For example, when learning to identify cancer cells in an image containing both cancerous and benign cells, “significant” can mean something different at each layer of connections.  “Significant” could be a pixel’s color, or, at the next level, it could mean that a connection has at least three adjacent significant connections.
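A toy two-stage filter makes the layered notion of "significant" concrete: stage 1 marks pixels whose intensity crosses a threshold, and stage 2 keeps only marked pixels with at least three marked neighbours.  The 3x3 grid and the 0.5 threshold are illustrative assumptions, not a real model.

```python
def stage1(grid, threshold=0.5):
    """Layer 1: significance = pixel intensity above a threshold."""
    return [[1 if px > threshold else 0 for px in row] for row in grid]

def stage2(mask):
    """Layer 2: significance = at least 3 significant neighbours."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            neighbours = sum(
                mask[i + di][j + dj]
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0)
                and 0 <= i + di < h and 0 <= j + dj < w)
            out[i][j] = 1 if mask[i][j] and neighbours >= 3 else 0
    return out

grid = [[0.9, 0.8, 0.1],
        [0.7, 0.9, 0.2],
        [0.1, 0.3, 0.6]]
print(stage2(stage1(grid)))  # only the bright, well-connected cluster survives
```

Note how the isolated bright pixel in the bottom-right corner passes stage 1 but fails stage 2: each layer applies its own test for significance to the output of the layer before it.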

In practice, the software to accomplish deep learning is based on highly technical mathematics and large numbers of layers and images.  An intelligible outline of the deep learning process is presented in a Facebook tutorial.

Machine learning might be useful for targeting advertising to Internet users based on the right combination of their personally identifiable information (PII) and their browsing records (as collected by trackers).  That might be feasible, although it is unwelcome to many.  Machine learning may also be useful in developing and targeting the right sales pitch to owners of connected cars (i.e., cars with a mobile wireless connection).  On the other hand, owners may not want marketers to use the car’s 4G connection to collect data from the connected car.  They may like it even less when PII and car-travel data are combined by AI to serve up adverts about gasoline or insurance.  Those adverts are probably less valuable than preserving the privacy of car owners’ PII.

AI seems capable of digesting contracts for lawyers’ use.  These systems are good at simple categorization, but they have not yet lived up to all of the promotional hype aimed at the legal industry.  An even better approach than using a single AI machine is to use a meta AI system that sets multiple AI machines in motion and then collects the best of what they produce.  AI has not yet shown it can process the enormous volume of files often needed to complete a satisfactory “discovery.”  Discovery remains a costly and labor-intensive procedure.

We need to hear more about progress in AI ethics.  Careful conclusions on AI ethics will reveal which new laws Congress will need to pass.  Work on AI ethics is less well developed than work on simplifying AI learning and reducing its cost.  Nevertheless, we will need AI liability law to be settled before autonomous vehicles routinely tangle in car crashes and cause bodily injury.