SOA is Dead? It's About Time!

By Kurt Cagle
January 13, 2009 | Comments: 21

Anne Thomas Manes, a Research Director with the Burton Group, raised quite a few hackles in the IT press yesterday when she asserted that SOA is Dead. Anne has the chops to talk on the subject - beyond her respectable career as an SOA Analyst for the Burton Group, she is a former CTO of Systinet, an SOA governance company that was eventually bought up by Hewlett Packard, and was one of the early architects of the WS-* architecture ... so when she says "It's dead, Jim", people listen.

I had the privilege of working with Ms. Manes last year, and overall, I don't think we're that far apart philosophically when it comes to SOA. I've had a number of problems with the technology, from the fact that it seemed to be less a technology and more a marketing term for a number of fairly distinct things, to the fact that distributed technologies are, by their very nature, distributed. The SOA model as I'd seen it painted all too often seemed to be trying hard to build centralized systems that were nonetheless distributed. Distributed programming is very different from centralized programming, and trying to apply one model to the other will get you into trouble quickly.

Perhaps my biggest reservation about SOA had to be the fact that, at the end of the day, it was still an RPC model, one that concentrated primarily on calling APIs that differed from one provider to the next. The result of this thinking is the sea of APIs: there are now tens of thousands of them, each doing things a little (or in some cases, a lot) differently from one another, with very little cohesion, and with little thought to the semantic complexity that comes when you have that many microlanguages all competing for programmer attention.

Purists may argue that over time the SOA model (especially the SOAP/WSDL model) had been moving towards a more messaging-oriented architecture, but I'd counter that all a message queue does is decouple the receipt of the message from the response - if the message processor invokes a service, it is still an RPC, especially when transactions are involved.

This is one of the reasons that I think that resource oriented services - RESTful services - are beginning to gain real traction even as the big-box SOA projects are falling to the accountant's axe. The publish/subscribe model in which what you're publishing are not blogs but data documents (think XBRL or HL7) performs the same type of decoupling that message-oriented SOA did, but completely abstracts the intent from the process of communication.

For instance, I can get a listing of all XBRL documents (or subdocuments) that satisfy a given GET query (possibly via XQuery, but I'll leave the implementation details out of the discussion for the nonce), in one of potentially dozens of different formats. These documents essentially exist as parts of collections, which is another way of saying a queue. When I POST to that queue (or PUT a document into the queue to replace an existing one), no intent exists beyond the simple operation of adding or replacing content.
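To make the shape of this concrete, here is a minimal in-memory sketch of such an intent-free collection. The `DocumentCollection` class and the sample report documents are purely illustrative - this is not any real XBRL or XQuery API, just the GET-query / POST / PUT semantics described above:

```python
# A minimal in-memory sketch of an intent-free document collection.
# The only operations are retrieving by query, adding, and replacing -
# no operation carries any intent beyond that.

class DocumentCollection:
    """A 'queue' of documents addressed by id."""

    def __init__(self):
        self._docs = {}      # id -> document (a plain dict here)
        self._next_id = 1

    def get(self, predicate=lambda d: True):
        """GET: list every document satisfying a query predicate."""
        return [d for d in self._docs.values() if predicate(d)]

    def post(self, doc):
        """POST: add a new document; the collection assigns the id."""
        doc_id = self._next_id
        self._next_id += 1
        self._docs[doc_id] = dict(doc, id=doc_id)
        return doc_id

    def put(self, doc_id, doc):
        """PUT: replace the document at a known id."""
        self._docs[doc_id] = dict(doc, id=doc_id)

reports = DocumentCollection()
reports.post({"period": "2008-Q3", "revenue": 1200})
reports.post({"period": "2008-Q4", "revenue": 1500})

q4 = reports.get(lambda d: d["period"] == "2008-Q4")
```

The predicate stands in for the GET query; what matters is that the caller never states *why* a document is being added or replaced - only that it is.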

Now, the beauty of queues is that they simply exist. Another asynchronous process on the other side of the publishing system can retrieve all of the documents that satisfy a given criterion (those with a published-but-not-yet-processed flag, for instance) and perform some action upon them, but this should be immaterial to the services architecture.

This subtle change in thinking has huge ramifications. By removing intent - the remote procedure call - you also simplify the interfaces down to a service location (a URL), perhaps five verbs that can be applied consistently (GET, POST, PUT, DELETE and HEAD), a data transport protocol (Atom, for instance) and the actual deployed content payloads, which are just data. You can even get by without the transport protocol, but it makes accounting a little more complicated - it helps to have a framework on which to hang publishing metadata.
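As a server-side sketch of that simplified interface, the five verbs can be dispatched uniformly against a single collection. The handler below, its naive id assignment, and the sample documents are all hypothetical - the point is how little surface area the uniform interface needs:

```python
# One service location, five verbs applied uniformly to a collection.
# Document ids are strings; id assignment is deliberately naive here.

collection = {"1": "<doc>first</doc>"}

def handle(verb, doc_id=None, body=None):
    """Dispatch the five uniform verbs against the collection."""
    if verb == "GET":
        # A bare GET lists the collection; GET with an id fetches a document.
        return collection[doc_id] if doc_id else list(collection)
    if verb == "HEAD":
        return doc_id in collection          # existence check, no body
    if verb == "POST":
        new_id = str(len(collection) + 1)    # sketch-level id assignment
        collection[new_id] = body
        return new_id
    if verb == "PUT":
        collection[doc_id] = body            # replace (or create) at a known id
        return doc_id
    if verb == "DELETE":
        return collection.pop(doc_id, None)
    raise ValueError("no such verb: " + verb)
```

Every client interaction, whatever its purpose, funnels through these five entry points; the "API" never grows past them.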

I see this model exploding in use as XML databases become more widespread, and as XQuery deployment makes it easier to abstract the collections layer ... part of what's becoming known as the XRX (XQuery/REST/XML clients) model. Such systems are generally far more stateless (the few state variables needed for publishing/syndication, such as paging indices, can easily be passed as part of a URL with no loss of security), which additionally means that the load on individual servers tends to go down.
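The statelessness point can be illustrated with nothing but the standard library: the only "session state" a paged collection feed needs rides in the URL itself. The URL and parameter names here are hypothetical:

```python
# Paging indices travel as query parameters, so the server holds no
# session state between requests.

from urllib.parse import urlencode, urlparse, parse_qs

def page_url(base, start, count):
    """Build a collection URL whose only 'state' is the paging window."""
    return base + "?" + urlencode({"start": start, "count": count})

def page_window(url):
    """Recover the paging window from the URL alone - no session needed."""
    qs = parse_qs(urlparse(url).query)
    return int(qs["start"][0]), int(qs["count"][0])

url = page_url("http://example.com/feeds/reports", start=40, count=20)
```

Any server (or cache) receiving that URL can satisfy it without remembering anything about the client, which is why load on individual servers tends to drop.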

Finally, this model addresses one of the critical failings of SOA. In most SOA systems, perimeter transactions, such as submitting content from a web page to a server or getting a JSON stream back from a server to use in a mash-up, seemed to be something of an afterthought. I was often astounded at the fact that the original term for SOA - web services - actually worked so poorly on the web.

The justification for this was that the goal of SOA systems was to orchestrate processes through complex business systems (with lovely names such as Enterprise Service Buses) and that the web wasn't really that important (indeed, one of the goals of SOAP was to move XML across protocols such as SMTP). Ironically, ten years on, it's the application middleware layer itself that seems to be drying up; the web as a transport vehicle is more important now than at any time in the past, and the mashup (i.e., AJAX components) seems to be the way that we interact with that data.

No doubt SOA vendors will continue to try to prop up their particular dead parrot for a while, even as businesses axe SOA project after SOA project as complex, unworkable and too fragile. No doubt, too, there will be those (few) business projects held up as successes because they happened to hit the (small) sweet spot where SOA as a model actually works reasonably well. But I for one am just as happy to see this rather ripe-smelling bird now pushing up the daisies.

Kurt Cagle is an author, information architect and editor for O'Reilly Media. You can subscribe to his (RESTful) news feed or follow him on Twitter.

If you look at SOA as just RPC, then you are really viewing SOA through a pinhole.
Web services have come a long way since the days when sceptics equated them with the dot-coms after the bust.

From the perspective of an integration consultant, I'd say that SOA has set its stakes well: customers really are transforming their architectures to reap the maximum benefit.

Although I agree that there are fewer good SOA choices out there, which can make them seem complex and unworkable, there sure are smart SOA products from companies like TIBCO that are really making waves.

All said with due respect to Anne, and to the author for faithfully reproducing her preachings.

By the way, you have "securty" instead of "security".

Thanks for catching that. Corrected.

I have never understood why one would emphasize SOA so much. It is almost as old as commerce itself. If you think deeper, it is actually as old as humankind itself.

For example, a kid goes to his/her Mom for a cookie. Does s/he care how Mom gets him/her the cookie? Isn't this the core paradigm of SOA? I mean, call something without having to care what/how exactly the called entity achieves the fulfillment of the request.

A more technical example: if I wrote a READ or WRITE statement in COBOL to be run on a mainframe, I hardly ever had to modify it whether the file in question resided on disk or tape. Of course, disks and tapes themselves can have multiple variations. But I get the READ/WRITE to do its job quite fine. Isn't this SOA too?

If you ask me, SOA is not dead because it was never born !

Last week I had an interesting conversation with an old colleague who was working on a large SOA project in a government department here.

It seems that SOA is being adopted as a matter of policy, even when each data source only has a single data user. Point-to-point tight coupling dressed up as a service.


Heh! Yup, no doubt some marketing consultants managed to convince the powers that be that SOA was the next big thing. Beware of marketers armed with TLAs.


I've interviewed a number of people over the years about SOA, have attended SOA conferences and have generally tried to keep abreast of SOA developments, and I would still contend that the biggest problem SOA has is that it is an almost completely meaningless term - a marketing shibboleth that has spawned an entire sea of similarly meaningless terms that SOA consultants love to trot out when selling their own services-related offerings to clients.

The WS-* model, which is usually what is actually meant by SOA, was conceived originally as a binding mechanism for remote code invocation (usually wrapped up in embedded class modules that largely hid the RPC mechanism), and unless I missed something over the last decade it never really moved much from that model.

I'm inclined to believe that RESTful services, which are lower level, data resource oriented approaches to distributed architecture, may actually end up replacing the WS-* model, because there is actually an implicit understanding that such pieces are generally much more localized in nature.

This to me is the crux of why the SOA/WS-* approach failed. It tried to build an entire top-down transactional system onto what was essentially an asynchronous, unreliable protocol, and this centralized notion of architecture was simply not flexible enough to handle it.

It also failed to take into account the fact that at some point this information needed to be translated into human viewable form, necessitating as a consequence an entire presentation layer that also ran counter to the way that the web worked.

My suspicion is that over the course of the next decade, transactional processes will end up moving over XMPP (which evolved a considerably saner model for handling transactions) while messaging and syndication will continue to run over HTTP and Atom. In a few cases (primarily financial transactions where millions have already been sunk) I expect that SOA will continue to survive, but I think it will eventually just fade away as a technology elsewhere.


The whole Web Services field started out as a Microsoft project back in 1999 (I was actually present at some of the earliest phone meetings there, though I eventually gave up in disgust as I realized where they were going with it).

It came about in great part because of the failure of DCOM, which basically extended the COM model to distributed systems. DCOM didn't work because sys admins were uncomfortable (rightly so) with the idea of opening up a specific data port in order to let external systems interact directly at a code level with their own internal systems, and because in general the only port that most did expose was port 80.

SOAP, then, was a vehicle for creating RPC proxies between a distributed client and server. Microsoft put its marketing muscle behind it, managed to co-opt IBM and BEA into getting involved with it as well, and soon web services were taking the world by storm.

The problem was that this model introduced far too much complexity into the system, complexity that was not needed for about 95% of the use cases. After a while many IT departments were specifically instructing their programmers not to use this approach because of the fragility of the systems it created, and by 2003-4, web services were beginning to get something of a bad name.

Thus, around that time frame, web services mysteriously morphed into "service-oriented architecture", or SOA. A whole slew of new acronyms was thrown into the mix in order to make it look like a fairly radical new solution, most of which emerged not as technical TLAs but as marketing ones (always a dangerous sign for a technology).

Of course, about the same time, many of the initial web services architects began realizing that there were some fundamental problems with the approach that they had taken, and began to expand the definitions to look at what are increasingly being seen as RESTful systems - publish/subscribe systems, collections of resources rather than libraries of service methods, asynchronous message queues rather than synchronous RPC invocations - but I suspect that there was a fundamental disconnect arising between the marketers and the technologists.

RESTful services are, for the most part, still quite doable in today's credit-constrained enterprises - by removing intent from the equation (one of the primary flaws of the SOAP/RPC model) you also remove the necessity of trying to create specialized services for particular clients, you cut down on the number of specialized interfaces that have to be documented and maintained, and you can move more into a loose semantic model rather than a tight functional model. I'll be exploring this in some upcoming articles shortly.

It is awfully tough to trust a component that calculates the taxes withdrawn from your paycheck, or one that calculates what a label owes you based on the points for the copyrights, unless you can be sure the algorithms use the right percentages.

Publish/subscribe isn't always enough, Candide.



I'm not sure how the surety of such calculations would make a difference to the architecture involved. The SOA model - give me the tax I pay for this paycheck. The publish model - here is my invoice payment for this month's pay period, process it and let me look up the resulting statement. In both cases, you are still dependent upon an external process to actually perform the calculations involved.

The difference is that at any given point in a P/S model, I can look at a document showing my submitted invoices and another document showing my calculated statements, with linkages showing the ties between the two, whereas in the SOA model I have no effective record of the transactions.

However, neither service architecture guarantees that the processing black box is in fact generating valid answers unless you have access to the processor in question.


One thing I don't understand in your posting: what does "intent" mean? Does this mean more than the semantics of rendezvous synchrony? You mention adding or replacing as part of intent, but I wasn't certain that was entailed by the communication patterns of SOA. However, I only know SOA from light reading (say, of Erl's book), which seems to pile on a construction of abstractions with little obvious gain. Like Lampson said, any problem can be solved with another level of indirection.



Ah, I was afraid someone would ask that question ;-)

When I talk about RESTful services, in essence, what I'm talking about is treating the server as a database of documents. Consider how you use databases. In a normal database, you have tables (call them first order document collections) and you have queries (second order document collections).

Typically, most of the operations involved with such databases involve either retrieving one or more such documents from the collection, modifying a document (in a RESTful mode this would involve replacing the old document with a new one), creating a document or deleting that document. These are standard CRUD operations.

You can divide functionality on the database into two distinct types of operations - privileged and unprivileged. In an unprivileged mode, the operations you can perform are strictly CRUD-based. In a privileged mode, it's possible that other operations could be done, but these almost invariably require high-level access. RESTful or CRUD computing is unprivileged, and accounts for the overwhelming majority of operations on the database.
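That privileged/unprivileged split can be sketched as a simple verb gate. The verb names beyond the standard CRUD four are made up purely for illustration:

```python
# Unprivileged callers are limited to the CRUD verbs; anything beyond
# that (re-indexing, schema changes, and so on) requires elevated access.

CRUD_VERBS = {"GET", "POST", "PUT", "DELETE"}

def authorize(verb, privileged=False):
    """CRUD is open to everyone; other operations need privileged mode."""
    return verb in CRUD_VERBS or privileged
```

Because the overwhelming majority of traffic is CRUD, the gate almost always takes the first branch - which is exactly the point.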

One of the key points in a RESTful system is that, because you are dealing with documents, then if you modify the document on the client and validate it you can send it back to the server under a CRUD operation without any other requirements needed - if the document itself permits this. For instance, I can indicate the supported formats for a document or document set - an XML format that reflects the underlying structure, an HTML format, a syndication feed, and so forth, but only the XML format is round-trippable (possibly surrounded by an envelope such as Atom or XMPP). The other formats are read-only.

Where I'm going with this is that because of this approach, I as an unprivileged user do not need to have an intent beyond publishing or replacing the existing resource. Note, in a type two "query-based" document, the document being replaced may undergo additional processing, but that's a distinction that's immaterial to the user.

Intent plays more of a factor in privileged communication, sometimes, though even here you can build a RESTful alternative. For instance, when I send an invoice to a SOA service, I have to specify my intent - process this invoice and return to me the corresponding pay statement. In a RESTful service, on the other hand, I simply publish the invoice to a collection.

A separate process will, asynchronously, read through this collection retrieving all documents that have an unprocessed flag, and create a syndication feed listing all of these unprocessed items (along with links to those resources) for the accounting department.

Accounting then opens up each item in the syndication queue one at a time, processes the invoice - which creates a new invoice with a processed flag set and detailed process transaction information added. When the publish mechanism for the invoice gets the new invoice replacing the existing one, it will also automatically generate a new statement showing the relevant account balances for the person submitting the invoice.

Note that even here the publish mechanism is not intentional - the side-effect of producing the statement is effectively created by a trigger of replacing an unprocessed document with a processed one. What's more, the specific logic for handling this can, in a well designed RESTful system, be added in as a hook to the PUT invocation rather than be hardwired.
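The trigger-like behavior just described can be sketched as a hook list run on every PUT: replacing an unprocessed invoice with a processed one fires a statement-generating hook as a side effect. All class, field, and hook names here are hypothetical:

```python
# Hooks attached to PUT, rather than hardwired intent: the statement is
# a side effect of the unprocessed -> processed state transition.

class InvoiceStore:
    def __init__(self):
        self.docs = {}        # invoice id -> invoice document
        self.statements = []  # side-effect output of the processing hook
        self.put_hooks = []   # callables run on every PUT

    def put(self, doc_id, doc):
        old = self.docs.get(doc_id)
        self.docs[doc_id] = doc
        for hook in self.put_hooks:
            hook(self, old, doc)

def statement_hook(store, old, new):
    """Trigger: an unprocessed -> processed replacement yields a statement."""
    if old and not old.get("processed") and new.get("processed"):
        store.statements.append({"invoice": new["id"], "amount": new["amount"]})

store = InvoiceStore()
store.put_hooks.append(statement_hook)
store.put(1, {"id": 1, "amount": 500, "processed": False})  # publish invoice
store.put(1, {"id": 1, "amount": 500, "processed": True})   # accounting replaces it
```

Neither PUT asks the store to do anything but replace a document; the statement appears only because the hook observes the state change.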

The idea here is that you're replacing intentional processing - making specific requests of the system beyond publishing primitives - with ones in which you're only concerned about those publishing primitives.

Processing beyond that occurs as hooks on the publishing operations for those collections, the hooks themselves being essentially rule-based processing directives held in a queue, which operate based on priority and can additionally either terminate processing or pass it on to each subsequent directive.

It's a subtle point, and somewhat counterintuitive, because most people don't tend to think about larger systemic effect when building distributed systems, but the idea is well established in database theory. Database tables have no intent. You can create views which represent alternative virtual tables, but these still don't have intent. The only intent that does exist comes as a result of triggers that occur when CRUD changes happen, and then usually with the result of triggering external processes.

I'm going to be talking about RESTful services in a Webinar at the end of the month, and will try to explain the concept in more detail then.

I understand the principles of message/document based processing as an 'intention-free' system.

The problem remains intention, and, related to the question Costello asked on the xml-dev list, the user has to negotiate with you and then verify that you are living up to your end of the bargain. REST is OK until you consider cases where a resource is being used by many users (consider what happens if we really did do away with copyright, ASCAP and BMI) in various service settings. Then the bookkeeping gets more difficult.

I don't think pure publish/subscribe systems can do a good job at this because of the governing contracts. I can keep querying but eventually I have to feed that to an engine. So if I am pulling and aggregating from different resources, it is better to have common/standard engines for checking these where if one actually does need to see the algorithms and the values passed in, one can.

Of course you could exchange self-checking documents, where one uses something such as Schematron plus local memory of the last transaction and the expected transaction. All I'm saying is that REST isn't enough - in effect, there is no such thing as blind exchange.

This feels like conversations we've had before and will have again. At least Battlestar Galactica is back.


There are a few places where REST by itself is insufficient. One of them, obviously, is authentication. REST requires a security context in which to work, and any authorization system requires the exchange of information that by its very nature tends to be very services-oriented - "Give me a transaction key and store it locally?" "Okay."

I'm sure you could come up with others. You'll notice above though that I indicated that the vast majority of operations that are done on the web could be REST enabled, which implies that there will be a few that can't be. I can't fault the logic in that, though I would question the next "logical" corollary - therefore we should build the entire system to concentrate on that final 4-5% of all transactions that can't be REST enabled.

I think this is the crux of my opposition to intentional SOA-based systems - not that they are completely unnecessary, but that they work upon the underlying assumption that all distributed computing should be done over them. There are times when RPCs are unavoidable, but I would contend that for every situation like that, there are a hundred where RPCs were used, whether directly or via message queues, where RESTful architectures could have been employed more effectively and with looser coupling.

My personal feeling is that intentional services should be seen as being a lot like assembler code - something that you basically resort to in order to do very specialized things that can't easily be accomplished in higher-level languages, but not something that you'd necessarily want to use to program your website.

No disagreement then.

What one should understand is that the services model per se - as a model of what a system needs to achieve at point of sale and beyond - is a good model. Customers can understand it, and it is an easier model to negotiate. Explaining the architecture in those terms is much clearer than explaining REST. Publish/subscribe as a metaphor for linking, fetching and updating files based on static states contained in a message works for the network wonks, but a business person or any customer wants to know what services are available to them.

The copyright brouhahas are a good example of how the shift to one side of a market Nash-equilibrium blowout is being met by new services for tracking and monitoring file sharing. One may not like that kind of service, but as usual, cui bono? And that isn't the point. The artists are better served by the new social network services that enable them to manage fans and set their own rules for how their works are distributed and used. It is easy to talk about all this as services, but tell a label that your architecture is all about ease of sharing files (REST in a nutshell), and you will lose that business. Telling them that preventing file copying is all but impossible is telling the near truth.

Telling them that they have to get by with 1000 True Fans will just get you kicked out the front door, and you'll deserve it. (Sorry, but Kevin Kelly doesn't know the first thing about the music business, and being a web tech pundit doesn't make him any smarter about that. No real professional group or artist producing first-rank work can get by on 100k a year.)

I digress but I want to put a cold dash of reality into the discussion. Services are part of a pitch and that pitch makes sense to the customer. They don't know or care how it gets done under the hood as long as it does what they need it to do.

I think I'm going to faint - nothing to disagree with? Wow!

I would definitely agree with your assessment about explaining the various models, though, from painful personal experience, this can be a minefield to tread as a systems architect. One of the things that most business people want is a conceptually simple model (which tells you more than you need to know about the typical business manager).

Unfortunately, the next step that proceeds from that is the business manager asserting that programming should immediately commence around these conceptually simple models, regardless of whether these models are even remotely close to what's most efficient in terms of solutions. This, frankly, is what led to SOA in the first place.

What some see as the "Death of SOA" is really a natural part of the way new technology is typically adopted. Instead of looking too close, I think we should zoom out and take the longer view: The time has come for the 'Architecture' part of SOA.

- Carsten Molgaard, The Rasmussen Report


I definitely agree with you on this. Services aren't going away ... if anything they are the glue that holds the 21st century computing infrastructure together. However, I think that what's happening is the gradual realization that the way that we design (or architect) those solutions wasn't optimal for most use cases.

The original SOAP approach (largely built around applying the COM/OOP/RPC model to distributed systems) is seductive but significantly flawed, while what would seem to be a more primitive approach (RESTful services) actually is far more optimized to the way that the web itself works. That plus ten years of experimentation is beginning to shift the balance of thinking on this.

I just blogged that I think “Big SOA is Dead; Little SOA is Thriving” at: . Ok, maybe Big SOA isn’t “dead”, but certainly struggling to convince companies to invest in BPM, BAM, ESB (Big SOA) in today’s economic climate is a tough, academic sell when they can go Little SOA with positive ROI. Organizations want rapid results– they want SOA Today and not 6-9 months down the line!
