The Weakness of Commodity Server to Cloud Server Cost Comparisons

By George Reese
March 19, 2009 | Comments: 5

It's well established that cloud computing provides tremendous economic value to applications with variable demand requirements[1]. On the flip side, conventional wisdom suggests that applications with steady demand requirements pay a premium for being in the cloud. While that's true if you compare three years of usage of a cloud virtual server to a comparably configured commodity server, it ignores the bigger-picture cost of running real applications in the cloud.

Let's start with my opening statement about the economic value of the cloud with respect to variable demand systems. You save money in the cloud because you pay only for the virtual instances you actually use whereas in your internal data center or in the data center of a managed services provider you are always paying to support peak demand. Consequently, the shorter the duration of the peak demand and the more extreme your peaks versus standard usage, the more attractive cloud computing becomes. In my article on The Economics of Cloud Computing, I provided a rather extreme example of this value.
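To make the variable-demand math concrete, here's a small sketch. All of the capacity and price figures below are illustrative assumptions for the sake of the example, not anyone's actual price sheet:

```python
# Illustrative comparison: provisioning for peak vs. paying per instance-hour.
# All figures here are assumptions chosen for the example.

HOURS_PER_MONTH = 730

def datacenter_monthly_cost(peak_servers, cost_per_server_month=150.0):
    """In-house: you pay for peak capacity around the clock."""
    return peak_servers * cost_per_server_month

def cloud_monthly_cost(hourly_demand, rate_per_instance_hour=0.10):
    """Cloud: you pay only for the instance-hours you actually run."""
    return sum(hourly_demand) * rate_per_instance_hour

# A spiky workload: 2 servers baseline, 20 servers during a 10-hour monthly peak.
demand = [2] * (HOURS_PER_MONTH - 10) + [20] * 10

in_house = datacenter_monthly_cost(peak_servers=max(demand))
cloud = cloud_monthly_cost(demand)
print(f"In-house (peak-provisioned): ${in_house:,.2f}/month")
print(f"Cloud (pay-per-use):         ${cloud:,.2f}/month")
```

The shorter and sharper the peak, the wider that gap gets, which is exactly the point: the internal data center bills you for the peak all month long.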

The issue I want to deal with here is what happens in systems that have steady usage. Let's use a simple web application that can run, with a little room to spare, on a single Amazon $0.10/CPU-hour instance. The equivalent Dell based on retail pricing is around $600. For now, we'll ignore the fact that you also need to purchase the $1,119 3-year 24x7x365 hardware support to end up with a truly apples-to-apples comparison.

The total hardware cost is $600 in 2009 dollars for the purchase option. The Amazon cost, on the other hand, is $1,131.76[2]. Winner: Dell?

Not so fast. We have a number of problems with this seemingly straightforward calculation:

  • You need to add in 24x7x365 Dell hardware support to get some level of reasonably comparable offering. This support answers the question: what will you do if the hardware fails? Dell will take care of you when you buy this support option. Amazon takes care of you with the base price (you just start your application up on a new server).
  • If you buy your own Dell server, you also need to buy hosting space, power, firewalls, routers, and other supporting infrastructure. Together, those costs can easily make up the difference between the two cost models.
  • I doubt you would feel comfortable investing capital in "just enough" capacity to support your application. Even if you predict steady usage, you probably want enough excess capacity so that the system can handle unexpected peaks in demand. As a result, you would never buy the Dell server in question. But there is no issue running the system on the low-end Amazon environment. Need excess capacity? Move the application temporarily to a larger virtual server or plug in an additional app server and load balancer.
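Pulling the bullet points above into numbers, using the article's $600 server, the $1,119 support contract, and the $1,131.76 present-value Amazon cost (the hosting line is a placeholder you'd fill in with your own data center costs):

```python
# Rough three-year TCO sketch using the figures from the article.
# The hosting/power/network line is a placeholder assumption: plug in
# your own rack space, power, firewall, and router costs.

dell_server = 600.00    # retail price of the commodity server
dell_support = 1119.00  # 3-year 24x7x365 hardware support contract
hosting_misc = 0.00     # rack space, power, firewall, router (set your own)

amazon_pv = 1131.76     # present value of a 3-year reserved instance

dell_total = dell_server + dell_support + hosting_misc
print(f"Dell 3-year total: ${dell_total:,.2f}")
print(f"Amazon 3-year PV:  ${amazon_pv:,.2f}")
print("Winner:", "Amazon" if amazon_pv < dell_total else "Dell")
```

Even with the hosting line left at zero, the support contract alone flips the comparison.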

The one-commodity-server vs. one-Amazon-instance comparison tends to be the area in which conventional wisdom hammers cloud computing economics. As I have described above, there's more to the calculation than simply one Dell server versus one Amazon EC2 instance. Very few infrastructures, however, consist of one steady-state application running on a single server. Most organizations split their application functions (for example, application server vs. database server) across multiple servers, with some servers supporting a given function for multiple applications. Consequently, assigning the cost of a firewall or a router to one application is nonsensical. On the other hand, with this complexity come labor and other operational concerns.

A key feature of cloud computing often overlooked in economic discussions is the ability to automate the labor and operational concerns of a traditional IT infrastructure.

What happens when a server fails in your infrastructure? A person likely has to move all of the functionality that server was supporting to another server. If you are particularly advanced, you might be using virtualization tools that automatically recover the virtual instances on your failed server over to another functioning server. With that scenario, however, you still must have an individual to remove that server from your rack and work with Dell to return the server, retire it from service, or repair it.

In the cloud, you don't do a single thing. Tools like enStratus will detect the instance failure and recover your server automatically to another virtual instance. Amazon owns the underlying hardware and deals with the labor concerns. Other than an email from enStratus, you would never know anything ever happened.
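The recovery automation described above can be pictured as a simple monitor loop. This is an illustrative sketch of the idea, not enStratus's actual implementation; `is_healthy` and `launch_replacement` are hypothetical stand-ins for a provider's real monitoring and provisioning APIs:

```python
# Illustrative sketch of automated instance recovery in a cloud.
# is_healthy() and launch_replacement() are hypothetical stand-ins for
# real monitoring and provisioning APIs (roughly what a tool like
# enStratus does on your behalf).

def is_healthy(instance_id, statuses):
    """Check the provider-reported status of an instance."""
    return statuses.get(instance_id) == "running"

def launch_replacement(instance_id):
    # In a real system this would call the provider's API to boot a
    # fresh virtual instance from the same machine image.
    return f"{instance_id}-replacement"

def recover_failed(instances, statuses):
    """Replace every failed instance; no human in the loop."""
    fleet = []
    for inst in instances:
        if is_healthy(inst, statuses):
            fleet.append(inst)
        else:
            fleet.append(launch_replacement(inst))
    return fleet

instances = ["i-111", "i-222", "i-333"]
statuses = {"i-111": "running", "i-222": "failed", "i-333": "running"}
print(recover_failed(instances, statuses))
```

The key point is who runs this loop: in your own data center, the "loop" is a person with a pager; in the cloud, it's software, and the failed hardware is Amazon's problem.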

I don't mean to argue that the cloud wins economically in all cases. But an internal data center wins in only a few corner cases. The best case for an internal data center is where you are virtualizing existing server capacity and expect to see very dense virtual instance usage. To make that work, you have to have a very efficient IT staff with a top-notch, energy-efficient data center capable of leveraging virtualization technologies to their fullest. For most cases, however, the economics of the cloud are incredibly compelling.

Discuss this article further with me on Twitter. I'm @GeorgeReese.

[1] Don't buy this basic premise? See my article on The Economics of Cloud Computing from October 2008.

[2] Because this application will be using a server for three years, I assume you pay the reserved instance price. The calculations obviously get more complicated if you only intend to use the server for a short period of time, but they rapidly favor the cloud.


It isn't clear where you get the cost of $1,131.76 for running a small Linux/Unix instance. How do I re-create that value using the AWS calculator?

A more reasonable comparison for EC2 would be with a company that provides dedicated servers; examples would be Layered Tech, The Planet, Superb Hosting, Hostway, and Server Beach (that's just a sampling, there are plenty more out there). Going this route doesn't completely address the automatic failover problems, but it does cover the extra support costs you mentioned for the Dell, and the service provider is responsible for dealing with hardware failures. And in the end you haven't put out any capital to buy hardware, making it more comparable to EC2.

One factor that can make a huge difference in price is bandwidth. Compared to the dedicated server market, the prices for AWS bandwidth are quite high. It seems that AWS is trying to make up the costs in other areas by charging a premium for bandwidth. I did some simplified sampling of the cost of one terabyte of bandwidth and found AWS services to be among the most expensive options.
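A tiered transfer bill of the kind the commenter is sampling can be computed like this. The tier boundaries and per-GB rates below are illustrative assumptions, not any provider's published 2009 price sheet:

```python
# Tiered bandwidth pricing sketch. The tiers and rates are illustrative
# assumptions, not an actual provider price sheet.

def transfer_cost(gb, tiers):
    """tiers: list of (tier_size_gb, rate_per_gb); last tier catches the rest."""
    cost, remaining = 0.0, gb
    for size, rate in tiers:
        used = min(remaining, size)
        cost += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return cost

# Hypothetical tiers: first 10 TB at $0.17/GB, everything beyond at $0.13/GB.
TIERS = [(10_240, 0.17), (float("inf"), 0.13)]

print(f"1 TB out: ${transfer_cost(1024, TIERS):,.2f}")
```

Plugging each provider's real tiers into a calculator like this is the quickest way to reproduce the kind of sampling described above.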

This isn't to say that AWS services like EC2 don't have their place. They just aren't always the best or most cost-effective solution, depending on what your needs are.

The Amazon cost is the PV of three years of an Amazon reserved instance with a 15% cost of capital.
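One way to reproduce a figure very close to $1,131.76 is to assume 2009-era reserved pricing of a $500 one-time fee plus $0.03 per usage hour, 730 hours a month, and monthly compounding of the 15% rate. These inputs are my reconstruction, not a published worksheet:

```python
# Present value of a 3-year reserved instance, discounted monthly.
# The inputs are assumptions reconstructing the article's $1,131.76:
# a $500 one-time reservation fee plus $0.03/hour of usage, with the
# 15% annual cost of capital applied as a monthly discount rate.

upfront = 500.00
hourly_rate = 0.03
hours_per_month = 730
months = 36
annual_discount = 0.15
monthly_discount = annual_discount / 12

monthly_usage = hourly_rate * hours_per_month  # $21.90/month
pv = upfront + sum(
    monthly_usage / (1 + monthly_discount) ** m for m in range(1, months + 1)
)
print(f"PV of 3-year reserved instance: ${pv:,.2f}")
```

Small differences in the discounting convention (monthly vs. annual, payments at the start vs. end of each period) will move the result by a few dollars either way.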

Bandwidth costs are very tricky. Amazon ends up being very cheap at the very low and very high ends of the equation and absurdly expensive in the middle. For most web systems, however, the bandwidth costs are rounding errors in comparison to the server costs.

All such calcs depend on an assumption I've never seen stated: that peak demands by cloud users are uncorrelated, or to put it another way, that demand for cloud resources is itself steady state. Wrong. Cloud providers will have to institute congestion-based pricing, sooner or later.
