Open Cloud Manifesto: about openness, standards, and the vitality of SMTP

By Andy Oram
March 28, 2009 | Comments: 1

Thanks to author George Reese--who wrote a thoughtful blog post about open clouds and whose book Cloud Computing Architectures will be released in print next month (it's available now as an online Rough Cut)--I learned about the brouhaha over an Open Cloud Manifesto. Let's put the debate in the context of some basic and perennial issues about openness and standards.

Update: George's book has been released.

Openness

You can take your data out of a cloud any time you want, and connect a server in one cloud to a server in another cloud just like any two servers on the Internet. So what's all this talk about openness? How would an "open cloud" look any different?

Obviously, people have different answers, but I think of an open cloud as a supercloud. Right now, virtualization and cloud computing mean you don't know (or care) where your server is geographically. In a supercloud, you wouldn't know or care which company was hosting the server. You could start servers on EC2 and tell them to move automatically to Azure if traffic slows on EC2.
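
To make that concrete, here's a minimal sketch of what such a supercloud might look like to a programmer. Everything in it is hypothetical: the Provider and Supercloud classes, their methods, and the latency-based placement rule are my own invention, with EC2 and Azure appearing only as labels.

    # Hypothetical sketch of a provider-agnostic "supercloud" API.
    # None of this corresponds to a real library; EC2 and Azure are just names.

    class Provider:
        """A stand-in for one cloud provider's provisioning API."""
        def __init__(self, name):
            self.name = name

        def start_server(self, image):
            print("Starting %s on %s" % (image, self.name))
            return {"image": image, "provider": self.name}

    class Supercloud:
        """Places a workload on whichever provider currently performs best."""
        def __init__(self, providers):
            self.providers = providers

        def place(self, image, latency_ms, threshold_ms=200):
            # Use the first provider whose measured latency is acceptable.
            for provider in self.providers:
                if latency_ms.get(provider.name, float("inf")) <= threshold_ms:
                    return provider.start_server(image)
            # Nobody meets the threshold; fall back to the first provider.
            return self.providers[0].start_server(image)

    if __name__ == "__main__":
        cloud = Supercloud([Provider("EC2"), Provider("Azure")])
        # Pretend EC2 has slowed down; the workload lands on Azure instead.
        cloud.place("web-frontend", {"EC2": 450, "Azure": 80})

The point isn't the code; it's that nothing in the calling program names a provider except as data, which is exactly the freedom a supercloud would have to guarantee.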

But this simple description doesn't answer the question of whether such interoperability is desirable, or even achievable. I think it's possible. Sites could already choose to adhere to the Open Virtualization Format (OVF) proposed by VMware (whose cloud support I reported on last October), and storage facilities could become interoperable using a distributed system like Cleversafe, whose advanced solution I reported on as far back as 2006.
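
Cleversafe's system, for instance, spreads data across independent sites so that a subset of the pieces can reconstruct the whole. Here's a deliberately tiny illustration of that dispersal idea: a 2-of-3 scheme with two data fragments plus an XOR parity fragment. It's a toy of my own, not Cleversafe's actual algorithm, which is far more sophisticated.

    # Toy 2-of-3 dispersal: two data fragments plus an XOR parity fragment.
    # Any two of the three fragments are enough to rebuild the original.
    # This only illustrates the general idea behind dispersed storage.

    def split(data):
        half = (len(data) + 1) // 2
        a = data[:half]
        b = data[half:].ljust(half, b"\0")          # pad to equal length
        parity = bytes(x ^ y for x, y in zip(a, b))
        return {"a": a, "b": b, "parity": parity, "length": len(data)}

    def rebuild(fragments):
        a, b, parity = (fragments.get(k) for k in ("a", "b", "parity"))
        if a is None:
            a = bytes(x ^ y for x, y in zip(b, parity))
        if b is None:
            b = bytes(x ^ y for x, y in zip(a, parity))
        return (a + b)[:fragments["length"]]

    if __name__ == "__main__":
        pieces = split(b"interoperable storage")
        # Lose the "b" fragment; the other two are enough to recover the data.
        partial = {k: pieces[k] for k in ("a", "parity", "length")}
        assert rebuild(partial) == b"interoperable storage"

Real dispersal schemes use proper erasure codes so you can lose more than one fragment, but the interoperability point is the same: any storage provider that speaks the fragment format can hold a piece.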


Interestingly, I had a talk with George and some other people over the past couple of days about a related issue: open text messaging. My boss asked me why open source advocates don't consider Twitter open, considering it has an API that lets you interact with it any way you want. I basically answered that an open system doesn't go down periodically. An open system in this case would be a distributed system, which is much harder to design.
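
The API really is wide open in the sense my boss meant: in 2009 you could fetch and post statuses with nothing more than an HTTP library. Here's a sketch using the REST endpoint Twitter documented at the time (it has long since been retired, and modern access requires OAuth, so treat this purely as illustration):

    # Fetch Twitter's public timeline through the 2009-era REST API.
    # The endpoint below is long retired; this is only an illustration.
    import json
    import urllib.request

    URL = "http://twitter.com/statuses/public_timeline.json"

    def public_timeline():
        with urllib.request.urlopen(URL) as response:
            return json.load(response)

    if __name__ == "__main__":
        for status in public_timeline():
            print(status["user"]["screen_name"], "-", status["text"])

Open access to a single company's servers, though, is not the same as an open system.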

Standards

The problem with sailing the seas of openness is that standards throw up a Scylla and Charybdis before you. Standards are absolutely necessary but quite risky.

You can impose a strong standard that everybody follows faithfully, and put a stop to all innovation until the standards committee meets again ten years later and updates the standard.

Or you can forget standards and compete in a free-for-all that offers new features every month but locks users in to whatever company they've chosen.

Or you can have the worst of both worlds: a never-maturing standard that companies half-implement, a standards committee constantly trying to catch up to implementations while trying to be all things to all companies, and ultimately a system that collapses under its own weight. This has happened with Fortran, CORBA, and too many other technologies.

There are other variations as well. Open source software solves some of the problems by allowing innovative extensions to be shared immediately, but one still has to deal with incompatible features and backward compatibility.

We have kinda-sorta standards like JavaScript, and de facto standards like the use of Control-X to cut text and Control-V to paste it.

Protocols tend to be easier to standardize than interfaces. If two implementations of a protocol differ, neither side can make use of some feature, so the two manufacturers tend to fix the problem. In contrast, when interfaces differ, as they do in JavaScript, only the user suffers.

So when can a standard co-exist with innovation?

How Can a Standard Promote Interoperability as Well as Innovation?

I think a standard works well when the designers really, really understand what they're standardizing. It takes time. The designers must possess a kind of wisdom that lets them know exactly what is accomplished by the system and how it achieves its goals. That lets them know the absolute minimum that must be standardized, permitting interoperability without cutting off innovation.

SMTP is an excellent example of such a standard. Its success has been imitated over the years by many other standards, including HTTP and SIP. The limitations and security weaknesses of email are infuriating, and I aired my wish for it to be replaced nine years ago, but there are good reasons it's still the way most people use the Internet most of the time.
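
Consider how little you need to know to send a message. Python's standard smtplib will carry out the whole exchange; the server name and addresses below are placeholders, and turning on the debug output lets you watch the handful of commands (EHLO, MAIL FROM, RCPT TO, DATA) that the standard actually pins down.

    # Send a message over SMTP with Python's standard library.
    # The mail server and addresses are placeholders; point them at a real
    # (or local test) server to run this.
    import smtplib

    MESSAGE = ("From: alice@example.org\r\n"
               "To: bob@example.net\r\n"
               "Subject: SMTP is still here\r\n"
               "\r\n"
               "The protocol standardizes little more than this envelope and body.\r\n")

    server = smtplib.SMTP("mail.example.org")   # connect and read the greeting
    server.set_debuglevel(1)                    # echo the SMTP dialogue that follows
    server.sendmail("alice@example.org",        # EHLO, MAIL FROM
                    ["bob@example.net"],        # RCPT TO
                    MESSAGE)                    # DATA
    server.quit()

Everything else (authentication, encryption, spam filtering, rich content) has been layered on top over the decades without breaking that minimal core.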

So let's take things slowly. Everybody acknowledges the value of interoperability, even Steven Martin speaking for Microsoft. His praise for "interoperability principles" isn't exactly a full commitment to interoperability, but I find it more significant than the manifesto's sweet-sounding phrases such as "an open cloud makes it easy for them to work with other groups." When we see a good standard, we'll probably know it because we'll wonder how it could be so simple.


1 Comment

Famous examples of standards that were not defined tightly enough to actually deliver connectivity were RS-232 and SCSI: even if you could plug the connectors in, you could never be sure they would work.

Standardization tends to work well when standardizing the layers immediately underneath the competitive layers: in office document formats for example, consider the total lack of fuss about the XML and ZIP base, the ready accommodation of common interchange formats like MathML and JPEG, and the complete marketing freakout at the competitive layer for the features/formats themselves.
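
You can see that base layer with a couple of lines of Python: any .docx or .odt is just a ZIP archive full of XML parts (substitute whatever document you have lying around for the filename here).

    # An office document is a ZIP archive of (mostly) XML parts.
    # Substitute any .docx or .odt file you have on hand.
    import zipfile

    with zipfile.ZipFile("report.docx") as doc:
        for name in doc.namelist():
            print(name)
    # Typical entries: word/document.xml for OOXML, content.xml for ODF.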

Standards need to be technical and avoid becoming part of the marketing narrative of one company or another. So the phrase in the leaked Manifesto, "Cloud providers must not use their market position to lock customers into their particular platforms and limiting their choice of providers," seems like a red rag to a bull in that regard. :-)

But is the whole idea of standardization, as people generally approach it, wrong-headed? When we look at standards that have succeeded, we often see they are actually standards stacks: well-layered groupings, each building on or deferring to the others, and allowing new technologies to be swapped in and out. Indeed, I think the infrastructure to allow this kind of plurality and organic development is just as important as the individual layers.

Ken Krechmer takes this one step further with his "adaptability standards": if we can openly download (handlers for) the particular layers, do we really, really need standards for them? Think of codecs for media drivers: the need to know exactly how each codec works is reduced if there is ubiquitous and standardized access to the codec. (Of course, this includes on-ramp/off-ramp mechanisms and the ability to keep safe copies, etc.)

Some standards do need to ask "How do we agree on consolidation of the various technologies out there?" However, some also (or alternatively) need to ask "How do we support plurality safely?"
