It's well established that cloud computing provides tremendous economic value to applications with variable demand requirements[1]. On the flip side, conventional wisdom suggests that applications with steady demand requirements pay a premium for being in the cloud. While that's true if you compare three years of usage of a cloud virtual server to a comparably configured commodity server, it ignores the bigger-picture cost of running real applications in the cloud.
Let's start with my opening statement about the economic value of the cloud for variable-demand systems. You save money in the cloud because you pay only for the virtual instances you actually use, whereas in your internal data center, or in the data center of a managed services provider, you are always paying to support peak demand. Consequently, the shorter the duration of your peaks and the more extreme they are relative to standard usage, the more attractive cloud computing becomes. In my article on The Economics of Cloud Computing, I provided a rather extreme example of this value.
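To make that trade-off concrete, here's a minimal sketch of the two cost models. All of the numbers below (the $75/server-month internal cost, the workload shape) are hypothetical illustrations, not actual 2009 pricing:

```python
HOURS_PER_MONTH = 730  # approximate hours in a month

def peak_provisioned_cost(peak_servers, cost_per_server_month):
    """Internal data center: you pay to support peak capacity every month,
    whether or not the peak occurs."""
    return peak_servers * cost_per_server_month

def pay_per_use_cost(server_hours_used, hourly_rate):
    """Cloud: you pay only for the instance hours you actually consume."""
    return server_hours_used * hourly_rate

# Hypothetical workload: 2 servers normally, spiking to 10 servers
# for one 48-hour peak during the month.
baseline_hours = 2 * HOURS_PER_MONTH
peak_hours = 8 * 48  # 8 extra servers, 48 hours each

cloud = pay_per_use_cost(baseline_hours + peak_hours, hourly_rate=0.10)
internal = peak_provisioned_cost(peak_servers=10, cost_per_server_month=75.00)

print(f"cloud: ${cloud:.2f}/month, internal: ${internal:.2f}/month")
```

The shorter and sharper the peak, the wider that gap gets, which is exactly the dynamic described above.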
The issue I want to deal with here is what happens in systems with steady usage. Let's use a simple web application that can run, with a little room to spare, on a single Amazon $0.10/CPU-hour instance. The equivalent Dell server, based on retail pricing, is around $600. For now, we'll ignore the fact that you would also need to purchase the $1,119 3-year 24x7x365 hardware support to end up with a truly apples-to-apples comparison.
The total hardware cost is $600 in 2009 dollars for the purchase option. The Amazon cost, on the other hand, is $1,131.76[2]. Winner: Dell?
Not so fast. We have a number of problems with this seemingly straightforward calculation:
- You need to add in 24x7x365 Dell hardware support to get some level of reasonably comparable offering. This support answers the question: what will you do if the hardware fails? Dell will take care of you when you buy this support option. Amazon takes care of you with the base price (you just start your application up on a new server).
- If you buy your own Dell server, you also need to buy server hosting space, power, firewalls, routers, and other supporting infrastructure. Those costs alone can add up to the difference between the two cost models.
- I doubt you would feel comfortable investing capital in "just enough" capacity to support your application. Even if you predict steady usage, you probably want enough excess capacity so that the system can handle unexpected peaks in demand. As a result, you would never buy the Dell server in question. But there is no issue running the system on the low-end Amazon environment. Need excess capacity? Move the application temporarily to a larger virtual server or plug in an additional app server and load balancer.
The one-commodity-server vs. one-Amazon-instance comparison tends to be the place where conventional wisdom hammers cloud computing economics. As I have described above, there's more to the calculation than simply one Dell server versus one Amazon EC2 instance. Very few infrastructures, however, consist of one steady-state application running on a single server. Most organizations split their application functions (for example, application server vs. database server) across multiple servers, with some servers supporting a given function for multiple applications. Consequently, assigning the cost of a firewall or a router to a single application is nonsensical. On the other hand, with this complexity come labor and other operational concerns.
A key feature of cloud computing often overlooked in economic discussions is the ability to automate the labor and operational concerns of a traditional IT infrastructure.
What happens when a server fails in your infrastructure? A person likely has to move all of the functionality that server was supporting to another server. If you are particularly advanced, you might be using virtualization tools that automatically recover the virtual instances on your failed server over to another functioning server. Even in that scenario, however, someone still has to pull the failed server from your rack and work with Dell to return it, repair it, or retire it from service.
In the cloud, you don't have to do a thing. Tools like enStratus will detect the instance failure and recover your server automatically onto another virtual instance. Amazon owns the underlying hardware and deals with the labor concerns. Other than an email from enStratus, you would never know anything had happened.
I don't mean to argue that the cloud wins economically in all cases. But an internal data center almost never wins except in a few corner cases. The best case for an internal data center is where you are virtualizing existing server capacity and expect to see very dense virtual instance usage. To make that work, you have to have a very efficient IT staff with a top-notch, energy-efficient data center capable of leveraging virtualization technologies to their fullest. For most cases, however, the economics of the cloud are incredibly compelling.
Discuss this article further with me on Twitter. I'm @GeorgeReese.
[1] Don't buy this basic premise? See my article on The Economics of Cloud Computing from October 2008.
[2] Because this application will be using a server for three years, I assume you pay the reserved instance price. The calculations obviously get more complicated if you intend to use the server for only a short period of time, but they rapidly favor the cloud.