With specialized training, Artificial Intelligence (AI) machines can scour a marketplace, examining prices and devising ways for all sellers to obtain higher prices, especially if the competitors acted collusively. Of course, an AI is unaware of the ethical violation that arises from collusion until it is trained to avoid such behavior. The scope of the training and the degree to which appropriate ethics are embedded are the responsibility of the AI developer. The more sophisticated the AI machine, the less likely it is that its developer can foresee everything the AI machine might do.

In one example, a Mr. Topkins programmed a customized algorithm that kept prices for classic movie posters artificially high among competing vendors in the Amazon marketplace. Once his rivals agreed to the plan, the algorithm automatically maintained what prosecutors called “collusive, non-competitive prices” on printed wall art.

Had Mr. Topkins instead used an AI machine trained to analyze the market and maximize prices, he might not have needed to talk to his competitors at all, avoiding the email and voicemail evidence that prosecutors later found. Topkins’ and his competitors’ AI machines could do the colluding for them, either by running the same algorithm or by learning how to increase prices from their interactions with one another.
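To make the mechanism concrete, here is a minimal, purely illustrative sketch of the kind of repricing rule the prosecutors described: instead of undercutting rivals, each seller's algorithm refuses to price below an agreed floor. The function name, the floor value, and the prices are all hypothetical; nothing here is taken from the actual Topkins case.

```python
# Illustrative sketch of a price-maintenance rule. All names and
# numbers are hypothetical, not from the Topkins prosecution.

AGREED_FLOOR = 50.00  # the "collusive, non-competitive" floor price


def reprice(competitor_prices, floor=AGREED_FLOOR):
    """Return this seller's listing price given rivals' current prices.

    A normal competitive rule would undercut the lowest rival. This rule
    instead matches the lowest rival but never drops below the agreed
    floor, so prices across all participating sellers stay artificially
    high.
    """
    lowest_rival = min(competitor_prices, default=floor)
    return max(lowest_rival, floor)


# When every seller runs the same rule, no one ever breaks the floor.
print(reprice([55.00, 60.00]))  # 55.0 - matches the lowest rival above the floor
print(reprice([45.00, 48.00]))  # 50.0 - refuses to go below the agreed floor
```

The point of the sketch is how little "intelligence" the scheme requires: one shared constant and a `max()` is enough to hold prices up, which is why the human agreement, not the code, was the evidentiary core of the case.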

When a specific market has few competitors, an AI machine can be trained to watch price movements and then unilaterally set prices to give the AI owner either a maximum share of the market or a higher total margin for the AI owner.
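The two unilateral objectives mentioned above, maximum market share versus higher total margin, could be sketched as alternative strategies in one repricing function. This is an assumed, simplified model (the one-cent undercut, the strategy names, and the cost floor are my illustrative choices, not anything described in the article):

```python
# Hypothetical sketch of a unilateral pricing rule: one seller watches
# rival prices and sets its own, with no agreement among competitors.


def set_price(rival_prices, unit_cost, strategy="margin"):
    """Choose a price from observed rival prices, never below cost.

    strategy="share":  undercut the lowest rival by one cent to win volume.
    strategy="margin": price one cent under the highest rival to widen margin.
    """
    if strategy == "share":
        candidate = min(rival_prices) - 0.01
    else:
        candidate = max(rival_prices) - 0.01
    return round(max(candidate, unit_cost), 2)


print(set_price([10.00, 12.00], unit_cost=6.00, strategy="share"))   # 9.99
print(set_price([10.00, 12.00], unit_cost=6.00, strategy="margin"))  # 11.99
```

Note that both strategies react only to observed prices, with no communication between sellers, which is exactly why, as the next paragraphs discuss, this conduct is hard to fit under laws written around agreements between competitors.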

When an AI operates alone and therefore avoids collusion, unilateral price increases may be unethical, but they might not be a violation of the antitrust laws. The Federal government and most states have antitrust laws which “prohibit business practices that unreasonably deprive consumers of the benefits of competition, resulting in higher prices for inferior products and services.” The laws generally bar competitors from fixing prices, rigging bids, or allocating customers because those unlawful practices can cause consumers to lose the benefits of competition.

The Sherman Antitrust Act, the Clayton Act, and the Federal Trade Commission Act approach antitrust behavior from somewhat different directions. The Sherman Act might come closest to banning AI analysis of a marketplace that culminates in unilateral price increases. However, the Sherman Antitrust Act is “not violated simply when one firm’s vigorous competition and lower prices take sales from its less efficient competitors—that is competition working properly.” An AI acting alone may be very effective at taking market share, but is it competition working properly?

Mr. Topkins’ use of an algorithm has many parallels in business. Such algorithms are used in quantitative finance, in the airline and hotel industries, and by online retailers such as Amazon. The use of algorithms is spreading into transportation, healthcare, and consumer goods.

The Federal Trade Commission Act prohibits unfair methods of competition in interstate commerce, and that prohibition may come closer to limiting aggressive pricing via an AI machine. But if a vendor makes the investment to thoroughly understand a marketplace so that the vendor can price to his best advantage, is it really an unfair method? At what degree of understanding, or at what stage of price setting, does a vendor cross the line into unfair competition, particularly if any other vendor could do the same thing? If two or more competitors use non-collusive AI machines, is that unfair?

For non-collusive AI use to be considered unfair, it would have to be unfair from the consumer’s perspective. Although we insist that consumers deserve protection from aggressive vendors, we cannot expect vendors to limit themselves to behavior guided by mere human intelligence. Perhaps a sufficient warning to consumers could come from vendors revealing how they set prices. We will likely hear more on this topic from the Federal Trade Commission.

If the AI machine acted in ways that its developer could not reasonably foresee, some will argue that the AI machine itself would need to be held accountable for its actions. This is not as strange as it might sound. Along its path to staking out what constitutes an antitrust violation for AI machines, the FTC may conclude that, like corporations, AI machines deserve acknowledgement as “persons” and are subject to similar protections, due process, and punishments. If so, then AI machines, like humans, could be subject to isolation, loss of assets, and the death penalty.