Anne Thomas Manes, a Research Director with the Burton Group, raised quite a few hackles in the IT press yesterday when she asserted that SOA is Dead. Anne has the chops to talk on the subject - beyond her respectable career as an SOA analyst at Burton, she is a former CTO of Systinet, an SOA governance company that was eventually acquired by Hewlett-Packard, and was one of the early architects of the WS-* stack ... so when she says "It's dead, Jim", people listen.
I had the privilege of working with Ms. Manes last year, and overall I don't think we're that far apart philosophically when it comes to SOA. I've had a number of problems with it, from the fact that it seemed to be less a technology and more a marketing term for a number of fairly distinct things, to the fact that distributed technologies are, by their very nature, distributed. The SOA model as I'd seen it painted all too often seemed to be trying hard to build centralized systems that were nonetheless distributed. Distributed programming is very different from centralized programming, and trying to apply one model to the other will get you into trouble quickly.
Perhaps my biggest reservation about SOA had to be the fact that, at the end of the day, it was still an RPC model, one that concentrated primarily on calling APIs that differed from one provider to the next. The result of this thinking is the sea of APIs: there are now tens of thousands of them, each doing things a little (or in some cases, a lot) differently from the others, with very little cohesion, and with little thought to the semantic complexity that comes when that many microlanguages all compete for programmer attention.
Purists may argue that over time the SOA model (especially the SOAP/WSDL model) has been moving towards a more messaging-oriented architecture, but I'd counter that all a message queue does is decouple the receipt of the message from the response - if the message processor invokes a service, it is still an RPC, especially when transactions are involved.
This is one of the reasons I think that resource-oriented services - RESTful services - are beginning to gain real traction even as the big-box SOA projects fall to the accountant's axe. The publish/subscribe model in which what you publish is not blog posts but data documents (think XBRL or HL7) performs the same kind of decoupling that message-oriented SOA did, but completely abstracts the intent from the process of communication.
For instance, I can get a listing of all XBRL documents (or subdocuments) that satisfy a given GET query (possibly via XQuery, but I'll leave the implementation details out of the discussion for the nonce), in one of potentially dozens of different formats. These documents essentially exist as parts of collections, which is another way of saying a queue. When I POST to that queue (or PUT a document into it to replace an existing one), there is no intent beyond the simple operation of adding or replacing content.
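To make the intent-free interaction concrete, here's a minimal sketch of the client-facing semantics. The collection URL, query parameter names, and document fields are all invented for illustration; an in-memory dict stands in for the XML database behind the collection:

```python
from urllib.parse import urlencode

# Hypothetical collection of XBRL filings, keyed by document id.
# In a real deployment this would live behind an XML database; a dict
# stands in here so the semantics are easy to see.
collection = {
    "filing-2008-q3": {"entity": "acme", "period": "2008-Q3"},
    "filing-2008-q4": {"entity": "acme", "period": "2008-Q4"},
}

def get(query):
    """GET /filings?entity=acme - list the documents satisfying the query."""
    return [doc_id for doc_id, doc in collection.items()
            if all(doc.get(k) == v for k, v in query.items())]

def put(doc_id, doc):
    """PUT /filings/{id} - add or replace a document. No intent beyond that."""
    collection[doc_id] = doc

# The query itself is just part of the URL - no operation name, no envelope.
url = "/filings?" + urlencode({"entity": "acme"})

put("filing-2009-q1", {"entity": "acme", "period": "2009-Q1"})
matches = get({"entity": "acme"})
```

Notice that neither operation says anything about *why* the document is being added or fetched - that is exactly the abstraction of intent away from communication.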
Now, the beauty of queues is that they simply exist. Another asynchronous process on the other side of the publishing system can retrieve all of the documents that satisfy given criteria (documents with a published-but-not-yet-processed flag, for instance) and perform some action upon them, but this should be immaterial to the services architecture.
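A sketch of that consumer side, with invented field names: the "queue" is nothing more than a query over the collection, and the publisher never knows the consumer exists.

```python
# Hypothetical documents in the collection; the published/processed flags
# are invented for illustration.
collection = [
    {"id": "doc-1", "published": True,  "processed": False},
    {"id": "doc-2", "published": True,  "processed": True},
    {"id": "doc-3", "published": False, "processed": False},
]

def pending(docs):
    """The 'queue' is just a query: published but not yet processed."""
    return [d for d in docs if d["published"] and not d["processed"]]

def drain(docs):
    """An independent process acts on each pending document and marks it done."""
    handled = []
    for d in pending(docs):
        handled.append(d["id"])   # stand-in for whatever real work gets done
        d["processed"] = True     # equivalent to a PUT replacing the document
    return handled
```

The publishing side and this process share nothing but the collection itself - the decoupling falls out of the data, not out of middleware.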
This subtle change in thinking has huge ramifications. By removing intent - the remote procedure call - you also simplify the interfaces down to a service location (a URL), perhaps five verbs that can be applied consistently (GET, POST, PUT, DELETE and HEAD), a data transport protocol (Atom, for instance) and the actual deployed content payloads, which are just data. You can even get by without the transport protocol, but it makes accounting a little more complicated - it helps to have a framework on which to hang publishing metadata.
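Just how small that interface becomes can be sketched in a few lines. The store and the URL-assignment scheme here are hypothetical; the point is that five uniformly applied verbs cover everything:

```python
# A sketch of the whole interface: one resource location, five verbs.
# The in-memory store and the POST naming scheme are invented for illustration.
store = {}

def handle(verb, url, body=None):
    if verb == "GET":
        return store.get(url)               # retrieve the representation
    if verb == "HEAD":
        return url in store                 # metadata only: does it exist?
    if verb == "PUT":
        store[url] = body                   # create or replace at a known URL
        return body
    if verb == "POST":
        new_url = f"{url}/{len(store)}"     # server chooses the new location
        store[new_url] = body
        return new_url
    if verb == "DELETE":
        return store.pop(url, None)         # remove the resource
    raise ValueError(f"unsupported verb: {verb}")
```

Every resource, whatever its payload, gets exactly this surface - there is no per-provider API to learn.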
I see this model exploding in use as XML databases become more widespread, and as XQuery deployment makes it easier to abstract the collections layer ... part of what's becoming known as the XRX (XForms/REST/XQuery) model. Such systems are generally far more stateless (the few state variables needed for publishing/syndication, such as paging indices, can easily be passed as part of a URL with no loss of security), which additionally means that the load on individual servers tends to go down.
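Here's what passing that paging state in the URL looks like in practice - the parameter names `start` and `count` are invented, but the mechanism is the point: the server holds no session at all.

```python
from urllib.parse import urlparse, parse_qs

# Sketch: all syndication state (the paging indices) rides in the URL,
# so any server in a pool can answer any request statelessly.
def page_from_url(url, items):
    qs = parse_qs(urlparse(url).query)
    start = int(qs.get("start", ["0"])[0])   # hypothetical parameter names
    count = int(qs.get("count", ["10"])[0])
    return items[start:start + count]

entries = [f"entry-{i}" for i in range(25)]
page = page_from_url("/feed?start=10&count=5", entries)
```

Since the next-page link is just another URL, clients can bookmark, cache, or share it, and no server-side session ever needs to exist.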
Finally, this model addresses one of the critical failings of SOA. In most SOA systems, perimeter transactions - submitting content from a web page to a server, say, or getting a JSON stream back from a server for use in a mashup - seemed to be something of an afterthought. I was often astounded that the original term for SOA - web services - actually worked so poorly on the web.
The justification for this was that the goal of SOA systems was to orchestrate processes through complex business systems (with lovely names such as Enterprise Service Buses) and that the web wasn't really that important (indeed, one of the goals of SOAP was to move XML across protocols such as SMTP). Ironically, ten years on, it's the application middleware layer itself that seems to be drying up, the web as a transport vehicle matters more now than at any time in the past, and the mashup (i.e., AJAX components) seems to be the way we interact with that data.
No doubt SOA vendors will continue to try to prop up their particular dead parrot for a while, even as businesses axe SOA project after SOA project as complex, unworkable and too fragile. Just as surely, there will be those (few) business projects held up as successes because they happened to hit the (small) sweet spot where SOA as a model actually works reasonably well. But I, for one, am just as happy to see this rather ripe-smelling bird now pushing up the daisies.