Key Security Issues for the Amazon Cloud

By George Reese
November 30, 2008

Last week I posted a blog entry on Twenty Rules for Amazon Cloud Security. In that article, I provided 20 things you can do to feel confident in the integrity of your Amazon EC2 instances. As I noted there, I sidestepped discussing the security issues that you face in the cloud and jumped right into the solutions. Real security concerns sit behind those 20 rules; I'll now explain what they are and why those 20 rules are important.

In particular, the cloud introduces six key security issues:


  1. Amazon is in control of your data.

  2. The Amazon S3 cloud storage infrastructure is weakly secured.

  3. Perimeter security in the cloud is very different.

  4. Virtualization potentially provides a new class of attack vectors.

  5. Attacks on your pocketbook are a real concern.

  6. The cloud may or may not support your regulatory/standards requirements.

Amazon is in control of your data.

The fact that Amazon controls your data is the most frightening aspect of the cloud for many people. You don't know where it is stored or how it is stored, and you have no control over the physical access mechanisms to that data. It's a very serious concern if you have even slightly sensitive data in your web application.

Let's assume that Amazon is doing everything right. It's still a problem. What if Amazon has a major, prolonged outage? What if they go out of business? What if someone sues them and obtains a blanket subpoena covering all data to which Amazon has access?

Three overarching principles protect you here: encryption, credential management, and backup policies and procedures.

First, if everything is strongly encrypted, it takes greater motivation (and potentially significant computing resources) to get at that data. In particular, it protects very well against inadvertent access—like a third-party subpoena.

Encryption may seem like a "no-brainer" concept until you realize what needs to be encrypted. You don't simply set up SSL on your web server and consider yourself encrypted. You must encrypt EVERYTHING at every level:

  • Network traffic
  • S3 storage
  • File systems

You even need to further encrypt truly sensitive data in your database. It's not terribly hard to do: the tools are out there to provide all of these levels of encryption. It's just that most people don't encrypt at all of these levels on a regular basis.

The underlying idea is that you should build your security system on the assumption that someone will gain unintended access to your data. It's not that the cloud makes this more or less likely; it's simply that a) the cloud presents more attack vectors over which you have less control and b) it's a good idea anyway.
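
To make this concrete, here is a minimal sketch of application-level encryption using the Fernet recipe from the Python cryptography library (the library choice is an assumption on my part; any strong symmetric cipher works):

    from cryptography.fernet import Fernet

    # Generate the key once and store it outside the cloud; never bake
    # it into an AMI or an EBS snapshot (more on that below).
    key = Fernet.generate_key()
    cipher = Fernet(key)

    # Encrypt sensitive data before it is written to disk or shipped to S3.
    plaintext = b"card number: 0000-0000-0000-0000"
    ciphertext = cipher.encrypt(plaintext)

    # Decrypt only at the moment the data is actually needed.
    assert cipher.decrypt(ciphertext) == plaintext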

So, if you assume that all data in the cloud will be compromised, you are likely wondering how you can handle your decryption keys.

You cannot include them in the root file system since Amazon has a decryption key for your AMI (it requires a decryption key so it can launch your instances). You cannot include them in your snapshots since they should be snapshots of encrypted file systems. As outlined in the 20 rules, the trick is to pass in your decryption keys at runtime and keep them in memory only for the short period of time you are executing a decryption operation.
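
Here is a minimal sketch of that pattern, assuming the key is piped in over SSH at launch time (the function and the transport are illustrative, not prescriptive):

    import sys

    from cryptography.fernet import Fernet

    def decrypt_secrets(ciphertext: bytes) -> bytes:
        # Read the key from stdin so it never touches the root file
        # system or an EBS snapshot.
        key = sys.stdin.readline().strip().encode()
        try:
            return Fernet(key).decrypt(ciphertext)
        finally:
            # Drop the reference the moment the operation completes;
            # Python cannot guarantee the memory is wiped, but the
            # window of exposure stays short.
            del key

Something like cat key.txt | ssh app-server ./decrypt.py gets the key onto the instance without it ever being written to disk there.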

Finally, you need to design a backup structure that enables you to pull your entire application state out of the Amazon cloud into another cloud, a managed hosting infrastructure, or your own office. In particular, you should be able to start up your entire application from your offsite backups at another location with minimal trouble.
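
A starting point might be an evacuation script along these lines (the boto3 client, bucket name, and destination are assumptions for illustration; any S3 client will do):

    import os

    import boto3

    def evacuate_bucket(bucket: str, dest_dir: str) -> None:
        # Pull every (already encrypted) object down to storage you
        # control so the application can be rebuilt outside Amazon.
        s3 = boto3.client("s3")
        for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
            for obj in page.get("Contents", []):
                target = os.path.join(dest_dir, obj["Key"])
                os.makedirs(os.path.dirname(target), exist_ok=True)
                s3.download_file(bucket, obj["Key"], target)

    evacuate_bucket("my-app-backups", "/mnt/offsite")  # hypothetical names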

The Amazon S3 cloud storage infrastructure is weakly secured.

I don't mean to disparage Amazon S3. It is exactly what it needs to be to perform the service it provides. The nature of its ACLs and the fact that data in S3 is not encrypted mean, however, that it is possible for you to accidentally enable public access to a bucket. And if someone compromises S3 itself, the lack of encryption leaves your data exposed to the intruder.

Again, encryption is absolutely critical here. Everything that goes into S3 should be encrypted before you put it there.
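
Combining the two sketches above (boto3 again assumed as the client), every write to S3 would look something like this:

    import boto3
    from cryptography.fernet import Fernet

    def put_encrypted(cipher: Fernet, bucket: str, key: str, data: bytes) -> None:
        # S3 only ever sees ciphertext; a misconfigured ACL or a
        # compromise of S3 itself exposes nothing readable.
        boto3.client("s3").put_object(Bucket=bucket, Key=key,
                                      Body=cipher.encrypt(data))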

Perimeter security in the cloud is different.

Not bad different. Just different.

You don't have multiple network segments managed by a firewall. Instead, you have security groups, and a single server can belong to multiple groups. Your servers do not have IP addresses in the same subnet, and you can block off access between two servers in the same security group. In short, there is no true perimeter.
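
For example, here is a sketch of a group-to-group rule (using the boto3 Python client; the client choice and the group IDs are illustrative assumptions). The database group accepts MySQL traffic only from members of the app-server group, with no subnets involved:

    import boto3

    ec2 = boto3.client("ec2")
    ec2.authorize_security_group_ingress(
        GroupId="sg-database",  # hypothetical database group
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            # Admit traffic by group membership, not by address or subnet.
            "UserIdGroupPairs": [{"GroupId": "sg-appserver"}],
        }],
    )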

We also know little about the network topology of the underlying environment except what we know about Xen and what Amazon's Security Whitepaper tells us. If you need access to firewall logs, you are a bit out of luck (though you can implement a software-based firewall in front of your servers to log anything that makes it past Amazon's filters). If you want to implement network intrusion detection systems, you are handicapped by Amazon's Terms of Use (don't scan your network for vulnerabilities) and the inability to sniff LAN traffic (though that is also a security bonus).

As noted in my 20 rules, you can take advantage of the different approach to perimeter security in the cloud to do things that you simply cannot do in a traditional firewall infrastructure.

Virtualization potentially provides a new class of attack vectors.

The true nightmare scenario for the cloud is an Amazon customer using their guest OS to exploit a Dom0 vulnerability to gain access to your EC2 instances. The techniques described in the 20 rules should reduce the data security issues associated with this kind of theoretical attack, but they do not address the service alteration/interruption concerns.

Unfortunately, there is little you can do except hope that Amazon is paying attention to potential virtualization exploits and rapidly patching them. It is important, however, to keep this potential vulnerability in perspective. An intruder has no meaningful way of targeting you with such an exploit: they have no idea which physical servers your instances are running on, nor can they target a specific EC2 server when launching their own malicious instances.

Attacks on your pocketbook are a real concern.

The ability to automatically scale your infrastructure based on actual demand is an enticing feature of the cloud. I am not a huge fan of auto-scaling for three reasons:

  • Auto-scaling can make people lazy in their capacity planning.
  • Amazon EC2 instances do not launch fast enough to support demand variances less than 20 minutes in duration.
  • Auto-scaling without governance can enable an attacker to run up your hosting costs.

For the purposes of this article, let's focus on that last point. Let's say I am a disgruntled customer or former employee and your Amazon EC2 deployments auto-scale to meet demand. I can launch a distributed denial of service attack that never actually makes your service unavailable, but that forces you to launch an obscene number of instances and eat gobs of bandwidth, incurring Amazon charges for which you receive no revenue.

I did not address this one in my 20 rules, so let's add rule 21: if you are going to engage in auto-scaling, either tie the auto-scaling to actual economic transactions or limit the ability to auto-scale and monitor any auto-scaling that is happening.
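
As a minimal sketch of that governance (the boto3 client and the ceiling are assumptions; the point is the cap and the alarm, not the tooling):

    import boto3

    MAX_INSTANCES = 20  # hypothetical ceiling tied to what revenue supports

    def check_fleet() -> None:
        # Count running instances and page a human when the fleet grows
        # past what legitimate demand could justify.
        ec2 = boto3.client("ec2")
        pages = ec2.get_paginator("describe_instances").paginate(
            Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
        )
        running = sum(len(r["Instances"])
                      for page in pages
                      for r in page["Reservations"])
        if running > MAX_INSTANCES:
            raise RuntimeError(
                f"{running} instances running; cap is {MAX_INSTANCES}"
            )

Run from cron every few minutes, a check like this turns a silent cost attack into a noisy one.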

The cloud may or may not support your regulatory/standards requirements.

Most regulations and standards were written without virtualization in mind. As a result, they include line items that a virtualized environment might technically violate even when it easily meets their spirit. Consider, for example, a standard that requires you to keep firewall logs for the prior three years. Does the fact that you don't have access to Amazon's firewall logs render you non-compliant? Does sticking a software firewall in front of your servers and retaining those logs solve the problem?

You need to look at your application, the standards you are faced with, and their applicability to the Amazon environment on a case-by-case basis. Where you just cannot meet the technical letter of the specification, it is often possible to get the best of both worlds by implementing the components that are incompatible with virtualization outside the cloud.

