“We are going to go through our federal budget, as I promised during the campaign, page by page, line by line, eliminating those programs we don’t need and insisting that those that we do need operate in a sensible, cost-effective way.”

— President-Elect Obama, November 25, 2008

    

Introduction: Mainframe Waste

As far back as 1983, presidential commissions have recommended to Congress the modernization and improvement of government computer systems and technology infrastructure. In light of President Obama’s stated goals of weeding out government inefficiencies and improving our nation’s infrastructure, as well as his selection of key personnel charged with using information technology to reduce government waste,[1] this ConsumerGram examines a promising area for achieving those goals. The problem, as this ConsumerGram explains, is that the federal government remains mired in legacy mainframe computing supplied by a sole-source provider, even though lower-cost, more efficient and competitive options are available. For the administration, the government’s reliance on outdated, inflexible technology is a clear opportunity to improve operational efficiency.

   

Four Big Problems with Legacy Networks

In light of growing budget deficits and public spending, one would expect governments to take every opportunity to increase productivity and efficiency. Yet when it comes to modernizing its computing capabilities, the government has been hopelessly slow to act. Take, for example, the old mainframe computer, the precursor of the personal computer: it was physically large, slow (in today’s terms) and excessively expensive. In fact, personal computers today are faster than million-dollar mainframes were just a decade ago. The sluggishness of state and federal governments to embrace change persists despite a quarter-century-old finding by the 1983 Grace Commission Report, which pointed to government waste and recommended the adoption of new computing technologies. While mainframe computers have managed to keep pace in processing speed, a review of the facts shows that they: 1) are more expensive to buy and maintain; 2) lack competitive bidding; 3) require customized programming; and 4) are difficult to transition away from – all ingredients for higher costs, lower innovation and reduced functionality in government programs and services.

 

1. Costs More; No More Output

Mainframes are simply not as cost-efficient as server technology. According to Citizens Against Government Waste, for the same functionality and scope, a mainframe can cost 5 times more for storage, 10 times more for hardware and almost 30 times more for key software. We separately compared mainframes and servers on cost per million instructions per second (MIPS), a standard industry measure of central processing speed. The chart below compares mainframe cost ($9,457 per MIPS) with server cost ($38 per MIPS) and finds that mainframes can be as much as 250 times more expensive.[2] Even if servers were only one-tenth as cost-efficient as shown here, the potential savings for state and federal governments would be substantial.

 

[Chart: Cost per MIPS – mainframe ($9,457) vs. server ($38)]
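The per-MIPS comparison above reduces to simple arithmetic. The short Python sketch below reproduces the roughly 250-to-1 ratio and the conservative one-tenth scenario; the dollar figures are the ones cited in footnote 2, not new data:

```python
# Cost per MIPS, as cited in footnote 2 (Gartner, July 2007; EDS).
MAINFRAME_COST_PER_MIPS = 9_457  # dollars
SERVER_COST_PER_MIPS = 38        # dollars

ratio = MAINFRAME_COST_PER_MIPS / SERVER_COST_PER_MIPS
print(f"Mainframe vs. server cost per MIPS: {ratio:.0f}x")  # ~249x

# Conservative case: assume servers are only one-tenth as
# cost-efficient as the published figures suggest.
conservative = MAINFRAME_COST_PER_MIPS / (SERVER_COST_PER_MIPS * 10)
print(f"Conservative estimate: {conservative:.0f}x")  # ~25x
```

Even the conservative figure implies an order-of-magnitude cost gap per unit of processing power.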

 

2. Sole Source Providers – Buyer Beware

The Grace Commission’s recommendations were not the last to go unfulfilled. In 1993, President Bill Clinton established the National Partnership for Reinventing Government, which, among other findings, recommended improving how the government acquires technology. Yet government mainframe computing contracts are essentially sole-sourced to IBM. This means that government buyers of mainframe technology have no market alternatives and are often locked into uncompetitive agreements for maintenance, parts, software upgrades and equipment upgrades. Sole-source purchasing is never a best-cost decision. As with any legacy equipment, once purchased it can become very expensive to maintain, and buyers are stuck paying price increases that can run several times faster than the rate of inflation.

 

Several examples demonstrate the added risks and costs of relying on a sole-source provider. In recent years, a legal issue barred IBM from federal government contracting for a week. Had the ban been permanent, federal customers would have had nowhere to turn for support, and their mainframes would have become stranded investments. In another example, the state government of California could not make payroll changes for lack of COBOL programmers – COBOL being a language designed in the late 1950s for mainframe computing. Similarly, a few months ago the Texas Governor suspended IBM from its state technology contract after 8 months of data from the Attorney General’s Office were lost, jeopardizing 81 Medicaid fraud cases. In that instance, had mainframe technology not been used, the lost data could easily have been backed up with off-the-shelf technology costing less than one hundred dollars.

 

The reality is that dependence on legacy-system expertise and a sole-source provider means higher costs and shortages of programmers. It also means that some crucial agencies, such as the Secret Service, are maintaining 1980s-era technology at inefficient capacity. In 1988, the NIH spent hundreds of millions of dollars on mainframes that researchers “refused to use.”

 

These are the risks of relying on mainframe computers instead of lower-cost PCs, servers and other IT equipment – categories served by multiple vendors, which makes competitive pricing possible.

 

3. Requires Customized Software

Much of the software that runs on mainframes is customized, which is good news for mainframe programmers but bad news for taxpayers. While our PCs can communicate and exchange information seamlessly, mainframes are often so specialized that they suffer from a lack of interoperability and require customized solutions to enable even relatively simple communications and basic functionality.

 

4. Difficult to Transition

It may be in the sole mainframe provider’s interest to make transitioning to smaller, more efficient computing as difficult as possible, leaving government agencies less willing to move to more cost-effective alternatives. As a result, the government tends to keep its mainframe computers twice as long as the private sector does, driving up the cost of maintaining an outdated technological infrastructure. The truth, however, is that there have been many successful transitions off mainframes that lowered costs and increased efficiency using solutions from HP, Fujitsu, Novell, Bull, Red Hat and other providers. Moreover, transitioning would provide choice and reduce costs, as well as improve government services over the longer run.

 

The problem may not be isolated to government. The financial sector, mired in the recent meltdown, is also heavily invested in mainframe technology. It is no wonder that Congress had to wait 6 months for transaction information on alleged oil speculation: the systems were too customized to gather complex transaction data. The industry’s inability to migrate its systems may also have contributed to a lack of transparency. Going forward, government rescue efforts should consider modernizing the industry’s technology infrastructure in ways that improve solvency monitoring, financial reporting and transaction analysis.

 

Summary

While there is still a place for mainframe computing, and IBM has taken modest steps to make mainframes faster and less costly, government buyers need to reevaluate their reliance on sole-source providers and move toward competitive technologies where possible. They also need to evaluate buying decisions on both short- and long-term costs, as well as the mix of capital and expense budgeting. The savings would benefit taxpayers, improve government efficiency and open new opportunities for innovative, competitive businesses in these challenging economic times. Most importantly, modernizing state and federal technology infrastructure would help agencies and public programs better serve the public’s needs. Improving the nation’s infrastructure and wringing out government inefficiencies are both worthy goals for this administration to uphold.

 

There is a lot of waste in the Federal Government today. Mainframe computing is just the beginning.

 


[1] For example: Jeffrey Zients, chief performance officer and deputy director at OMB, is tasked with streamlining government costs; Vivek Kundra, the president’s chief information officer, is responsible (among other things) for using technology to lower government costs; and Aneesh Chopra, the nation’s first chief technology officer, is responsible for promoting IT innovation, in part to achieve lower costs in healthcare and elsewhere. See Hans Nichols, “Obama Names Performance Officer, Vows to Trim Federal Spending,” Bloomberg, April 18, 2009; and “Obama Names Chopra National CTO,” Tech Daily Dose, CongressDaily, National Journal, April 18, 2009.

[2] Mainframe and server cost is from Gartner (July 2007); server MIPS is from EDS.
