Last Wednesday, Ofcom, the United Kingdom’s broadband, telecom and postal regulator, proposed an investigation into Amazon and Microsoft cloud computing services. In a market study Ofcom released with the proposal, the agency claims that Amazon and Microsoft together constitute a duopoly in this field and that regulatory intervention may be necessary to increase competition. This development coincides with a similar action in the United States, where the Federal Trade Commission (FTC) has requested comments on the state of the domestic cloud computing industry. Both countries’ regulators overlook that operating at scale is a core component of cloud computing, enabling the industry to achieve the economies of scale needed to lower costs for consumers.

Ninety-four percent of companies now use cloud computing, and 60 percent have cloud-based infrastructure. Most of this adoption is recent: the old system of on-premises computing is still fresh in the memory of the global business sector. To understand why this change occurred, and why it would not have been possible without big companies, we need to understand what “the cloud” does to make computing more efficient.

The core difference between cloud computing and on-premises computing is location. In cloud computing, servers sit in large off-site facilities; in on-premises computing, they are usually housed in the office itself. The key advance of cloud computing was turning computation into a third-party service rather than something that had to happen in-house. With computing moved out of the office, cloud services could aggregate millions of clients into large facilities located anywhere in the world.

This enables four primary cost-saving advancements, all of which pertain to economies of scale. First, cloud service providers can buy equipment in bulk directly from manufacturers, securing discounted prices. Second, instead of having to set up servers in high-cost city offices, cloud providers can build facilities near power generators; proximity to power is estimated to cut server electricity expenses by three-fourths. Third, a single operator working at a cloud facility can maintain thousands of servers, while operators for traditional systems can service only around 140. Fourth, demand aggregation allows for higher hardware utilization: unlike on-premises servers, which sit idle when the office is closed, cloud hardware constantly runs computations from a global customer base. Higher utilization means greater efficiency per unit, allowing fewer units to handle more work.
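
To see how these mechanisms compound, here is a rough back-of-envelope sketch in Python. The ratios (75 percent power savings, roughly 140 servers per operator on-premises versus thousands in the cloud) come from the figures above; the dollar amounts, the 2,000-servers-per-operator count, and both utilization rates are purely hypothetical placeholders chosen for illustration, not industry data.

```python
# Back-of-envelope comparison of the annual cost of one fully utilized
# server, on-premises vs. cloud. The ratios come from the figures cited
# above; every dollar amount and utilization rate is a hypothetical
# placeholder, not industry data.

ADMIN_SALARY = 100_000  # hypothetical annual cost of one server operator

def cost_per_utilized_server(power_cost, servers_per_admin, utilization):
    """Annual cost per server, scaled up by idle time: lower utilization
    means more machines are needed to do the same amount of work."""
    labor = ADMIN_SALARY / servers_per_admin
    return (power_cost + labor) / utilization

# On-premises: full electricity price, ~140 servers per operator,
# hardware idle on nights and weekends.
on_prem = cost_per_utilized_server(
    power_cost=3_000, servers_per_admin=140, utilization=0.30)

# Cloud: ~75% cheaper power near generation, thousands of servers per
# operator, global demand aggregation keeps hardware busy around the clock.
cloud = cost_per_utilized_server(
    power_cost=3_000 * 0.25, servers_per_admin=2_000, utilization=0.85)

print(f"on-premises: ${on_prem:,.0f} per utilized server-year")
print(f"cloud:       ${cloud:,.0f} per utilized server-year")
print(f"cloud runs at roughly {cloud / on_prem:.0%} of the on-prem cost")
```

The point of the sketch is that the savings multiply rather than merely add: cheaper power and labor shrink the per-server cost, while higher utilization shrinks the number of servers needed for the same work.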

Additionally, cloud computing allows computation providers to spend more on data security and to concentrate those protections on a handful of facilities. Instead of each office having to invest in its own private data security, cloud services can aggregate those security needs and hire the best security providers in the field. Fewer facilities also mean fewer access points for potential hackers, allowing these aggregated protections to defend a smaller number of nodes.

Each of these advantages is possible only with large systems handling millions of clients. The larger the system, the greater the economies of scale and the lower the costs for the consumer. Concerns about over-concentration in the cloud computing market ignore how the scale of cloud computing firms benefits consumers. These concerns rest on the same “big is bad” philosophy that has animated multiple agencies in the US and Europe.

In cloud computing, this philosophy runs into a problem. Bigness is not just beneficial for the industry; it is the industry. All the benefits cloud computing creates are a direct result of the size of the sector’s firms. The innovation itself is simply the aggregation of individual firms’ computing needs into massive collective systems. Without “bigness,” there is no cloud computing.
