Analysis 2009: IT Departments Disappear into the Cloud

By Kurt Cagle
January 6, 2009

While other IT sectors may be struggling, one area that will likely be quite hot is the cloud computing/hosted services market. This market has been the subject of a great deal of hype over the last year, but the overwhelming factor in cloud adoption this year will likely be less promotional marketing than simple cost.

IT plants are expensive: they take up significant real estate, require a significant amount of air conditioning and electrical hardening to handle power loads and properly dissipate heat, need system administrators to monitor and repair them, and must be replaced and reconfigured periodically as systems go down.

The principal benefit of cloud computing (or at least of hosted services) is that much of this cost is effectively offloaded to the hosting service. Combine this with the development and deployment of hosted applications (which are increasing in both capability and reliability), whether VPN- or web-based, and what emerges is a very compelling story for many companies that are struggling to contain IT costs while tightening their belts.

Most cloud setups typically consist of "supercomputers" built from hundreds or even thousands of commodity server units that share memory and processing power and are in turn tied into large-scale storage arrays. Within this sea of memory and processing power, it's possible to launch various "instances" - virtual machines that use the resources of the host system but exist, for the most part, as their own independent "computers".
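
As a concrete illustration of what launching such an instance looks like programmatically, here is a minimal Python sketch. It assumes Amazon's EC2 service and the boto3 SDK purely for the sake of example; the image id and instance type are placeholders, not anything specified in this article.

    # Minimal sketch: launch a single virtual machine "instance" in a cloud.
    # Assumes Amazon EC2 via the boto3 SDK; the AMI id and instance type are
    # placeholder values for illustration only.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-00000000",      # placeholder machine image
        InstanceType="t2.micro",     # placeholder virtual hardware size
        MinCount=1,
        MaxCount=1,
    )

    instance_id = response["Instances"][0]["InstanceId"]
    print(f"Launched instance {instance_id}")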

What's more, because these machines are effectively software-only computers, they can be saved as if they were documents and reloaded later, picking up at precisely the point where they were saved. Consequently, it becomes possible to create templates that can be stored and then automatically loaded whenever a given application (such as an operating system or a configured database) needs to be restarted. These templates are known as appliances, and appliances make possible all kinds of interesting computing within the cloud.
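
In the same illustrative spirit (and with the same assumed EC2/boto3 setup and placeholder ids as above), capturing a configured machine as a reusable template and launching a fresh copy from it looks roughly like the sketch below. Strictly speaking this captures a disk-level image rather than a live memory snapshot, but it shows the template-and-relaunch pattern that makes appliances useful.

    # Sketch: save a configured instance as a reusable image ("appliance")
    # and later launch a fresh copy from it. All ids are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Capture the configured machine as a template.
    image = ec2.create_image(
        InstanceId="i-00000000",               # placeholder: machine to capture
        Name="configured-database-appliance",  # hypothetical appliance name
    )

    # Later: spin up a new instance from that saved template on demand.
    ec2.run_instances(
        ImageId=image["ImageId"],
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
    )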

Many of the larger traditional hosting providers have started testing the virtualization waters, and in many cases can offer virtual computer systems to developers at a fraction of the cost of a dedicated server. Virtual machines tend to perform somewhat slower than a typical dedicated system, since the virtualization layer between the virtual computer and the real one consumes CPU cycles, but as understanding of how virtualization works within clouds has improved, so too has the performance.
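
Quantifying that overhead doesn't require anything fancy: run the same CPU-bound workload on a dedicated box and on a virtual instance and compare the wall-clock times. A crude Python sketch (the iteration count is an arbitrary assumption):

    # Crude benchmark: time an identical CPU-bound loop on a dedicated server
    # and on a virtual instance, then compare the two elapsed times to get a
    # rough sense of the virtualization overhead.
    import time

    def cpu_bound_work(iterations: int = 10_000_000) -> float:
        """Return the wall-clock seconds spent on a pure-arithmetic loop."""
        total = 0
        start = time.perf_counter()
        for i in range(iterations):
            total += i * i
        return time.perf_counter() - start

    if __name__ == "__main__":
        print(f"Completed workload in {cpu_bound_work():.2f} s")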

My projection is that the companies in this space right now (Amazon, Google, Microsoft, as well as several of the larger hosting companies) will do quite well this year, in great part because moving IT departments into the cloud can provide significant cost savings. In essence, companies would be outsourcing their IT departments, letting the host take care of the electrical bill, the infrastructure maintenance, and the work of bringing new systems online or retiring systems that are no longer functional.

As financial pressures continue to mount (most specifically the collapse of the corporate real estate market as phase two of the great mortgage meltdown), a lot of companies are going to reach a point where their income is low enough (or their lines of credit have been diminished badly enough) that they can't continue meeting their monthly lease payments, and they will either downsize their physical plant or, in some cases, ditch the real estate altogether and become a nearly virtual company.

A second consideration that will start to drive cloud virtualization is electrical grid support. The NOAA forecast indicates that by the summer of 2009, temperatures across the US and Canada should be well above average, especially in the Plains states, the Southeast, and the Southwest. While the economic slowdown may cut electrical usage somewhat, the grid is likely to be stressed considerably by air conditioning, and rolling brownouts are likely to become more common (the current fiscal problems in California may very well add to that).

An advantage of developing distributed but concentrated computing centers is that these facilities can be "hardened" against adverse conditions. Such facilities usually install power generators that keep systems running and let operators gracefully bring down non-essential systems, rather than having the power kick out all at once. They can dedicate more resources to cooling computer systems and better handle the problems of heat generation (a number of pilot projects are in fact using the heat produced by server farms to drive heat pumps within buildings designed from the ground up with server farms in mind). They can also negotiate power rates and draws with electrical utility companies more effectively than thousands of individual companies can.

As the worst of the immediate crisis abates (near the end of 2009 or into 2010), it's likely that such cloud centers will become more attractive from an energy-efficiency standpoint as well. Either way, it is increasingly likely that both public clouds (such as Amazon's, which let anyone subscribe to the cloud centers) and private clouds (typically cloud centers with a few large, dedicated clients) will become far more prevalent, especially for small and medium-sized businesses. Microsoft's efforts with Azure should also start bearing fruit toward the end of 2009 and into 2010 (expect strong synchronization between Azure, Hyper-V, and the next Microsoft Windows operating system).

