Content Routing in XRX

By Dan McCreary
November 2, 2008 | Comments: 3

In the past I thought that internet routers and database management systems were separate systems that didn't share many design patterns. Routers route information to different locations based on the content of documents, and database management systems store data in different collections. But when building web applications with the new XRX web application architecture, you may start to see overlapping design patterns.

Recently I have been trying to auto-generate a large number of XForms applications directly from XML Schema files. The process is not too difficult, since XML Schemas (when combined with a nice set of metadata registry services) have almost all the information you need to create an XForms application with XRX. But generating all of the paths in the submission element for creating new records and updating records was a little complex. I have been experimenting with a new design that moves much of the logic in the XForms submission element to the server. This makes the task of XForms auto-generation just a little bit easier.

At the core of this design is the Content Routing pattern. My new forms all use a single "save" web service. The save-service looks at the content of every submitted XML document and just "does the right thing". It does this by looking at the content of the submitted document and then saving the submitted data to the appropriate collection in my eXist database.

The save-service is smart enough to determine the document type by evaluating an XPath expression on the submitted document. It then looks up, in an XML file, where the document should be stored. If the incoming document does not have an appropriate document ID, it consults a table of IDs and assigns the document the next ID in a sequence. All the information the save-service needs is stored in a single XML rules configuration file in the eXist database.
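A minimal sketch of what such a save-service might look like in eXist-style XQuery follows. The rules-file layout, collection paths, and the root-element test here are illustrative assumptions, not the actual production configuration:

```xquery
xquery version "1.0";

(: save.xq: a content-routing save service (sketch).
   All paths and element names below are illustrative. :)

(: the XML document submitted by the XForms client :)
let $doc := request:get-data()

(: a single rules file drives every routing decision :)
let $rules := doc('/db/config/save-rules.xml')/rules

(: classify the document; here we simply use its root element name :)
let $doc-type := local-name($doc/*)

(: find the rule that says where this document type belongs :)
let $rule := $rules/rule[@doc-type = $doc-type]

(: use the document's ID if present, else the next ID in sequence
   (persisting the incremented counter is omitted in this sketch) :)
let $id :=
    if ($doc/*/id) then string($doc/*/id[1])
    else string($rules/next-id[@doc-type = $doc-type])

return
    if (empty($rule))
    then <error>No routing rule for document type: {$doc-type}</error>
    else xmldb:store(string($rule/@collection), concat($id, '.xml'), $doc)
```

A matching save-rules.xml might contain entries such as `<rule doc-type="invoice" collection="/db/apps/data/invoices"/>`, so redirecting a document type to an archive or version-control collection becomes a one-line change to the configuration file.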

In the past each of my forms had separate XQueries for storing new files and doing updates. I found that making copies of these files took time away from solving the business problem I was working on. Changing the update submissions to archive old versions or to put the old documents into a version-control database was also very time consuming. Now I just need to change a single rule in a configuration file.

My next step will be to create a user-friendly XForms application to manage the configuration file that drives the saves and updates.

What this has taught me is that the content routing pattern is both powerful and easy to implement with XQuery. It saves me a lot of code and increases my flexibility for adding new services. It also makes the process of auto-generating XForms applications easier, and makes those forms much easier to maintain.

What about you? Does your database management system implement the content routing pattern? How would your applications be different if content routing were "baked in" to each database server? Would a rules-based approach to object-persistence make your systems more flexible?

Dear Dan,

Thank you for this and your future thoughts and insights on XRX. Your interpretation and use of the EIP in the context of XQuery proves how useful those patterns are: they apply more universally than the traditional "integration" space.

Having said that, since REST is the central piece in XRX (figuratively speaking), I wanted to express the tension I feel between this style of centralizing the save function on the server, giving it the intelligence to dispatch precise "save" calls to more specialized artifacts, and what my experience with the REST style suggests is a 'sweet' style (I am shying away from the good vs. bad debate).

Let me explain: if I carry your argument further, assuming the client (XForms) submits (via POST) a new piece of data (XML) to the (XQuery) server, the URLs become simpler and simpler, more functional and less resource-ful. That is, they lose contextual information, and actions start sneaking into the URL. So, for example, the POST URL might become: . This URL has lost its expressive power; it has become too simple. To make sense of it, you have to dig too deep into the content; it has become opaque. The expressiveness of the URI is one of the most powerful foundations of the RESTful architectural style. I fear that one might lose it in the hands of amateurs if one were to follow the style you propose to an extreme.

In HTTP 1.1, the application protocol of REST (in Web Application Architectures (WOA)), the verbs POST, PUT, GET, DELETE, HEAD, etc. are part of the HTTP message metadata, while the URI is exploited as much as it can be to express the contextual information of the HTTP action. Thus the onus of providing the context information is on the client and not on the server. The more completely the client describes the WHAT, the simpler and more precise the server's HOW can be.

So, here I would design a POST , to and so on.

I have tended to introduce an explicit REST pipeline in the middle: a cloud that eventually routes the payload to a final resting place, most recently an XQuery endpoint. In case this middle cloud is not desired or is unavailable, my tendency would be to design the name and intent of an XQuery to be resource-oriented rather than functional. Thus instead of having a save.xq, delete.xq, or update.xq that map to a function, I would design a facade of XQueries that reflect the resource, e.g. address.xq, person.xq, etc. The address.xq then detects the HTTP verb, parses the URL query strings, the URL itself, and the HTTP header information, and then uses all this information, applying an algorithm to determine where the request should ultimately be dispatched: the reusable functions.
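A resource-oriented facade of this kind might be sketched as follows in eXist-style XQuery. The collection layout is invented for illustration, and I am assuming the request and response extension modules expose the HTTP method and status code as shown:

```xquery
xquery version "1.0";

(: address.xq: one XQuery per resource, dispatching on the HTTP verb.
   Collection paths are illustrative only. :)

let $method := request:get-method()
let $id     := request:get-parameter('id', ())
let $res    := concat($id, '.xml')

return
    if ($method = 'GET') then
        doc(concat('/db/addresses/', $res))
    else if ($method = ('POST', 'PUT')) then
        xmldb:store('/db/addresses', $res, request:get-data())
    else if ($method = 'DELETE') then
        xmldb:remove('/db/addresses', $res)
    else
        response:set-status-code(405)  (: method not allowed :)
```

Each verb-specific branch then funnels into reusable library functions, while the URI itself remains the stable API of the resource.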

This other style that I mention, "application shards", results in an infinitely expandable "application space" that accommodates business growth (assuming URI space is unlimited for a given domain). Each unique URI points to an addressable resource or collection, with the full gamut of HTTP 1.1 verbs (or more, since you can then use current extensions such as WebDAV and Atom, future verbs, or your own custom verbs that mean something in your particular problem context) available to that URI "server", which can act on local information. The server can then allow or deny any verbs of its choosing. The URI thereby becomes the API of your application, decoupling the implementation, be it XQuery now, an RDF server later, J2EE, RoR, or whatever adheres to the emerging interface. Your application then starts exhibiting the "hackability", "discoverability", and "bookmarkability" that are catalysts for application adoption.

As I mention, it is a tension and a continuum rather than disjoint ideas. I just wanted to mention the other side so that architects have a spectral view rather than an isolated view.

Cheers, Great food for thought!


You bring up some excellent points. I agree totally that there is information loss in the URL when all form submissions use the same save URL. I will have to rethink this architecture a bit.

What we really want is the best of both worlds! First, I want the save URLs to have precise semantics, so that business rules and web server log analysis tools can use the additional information in the URLs to create precise reports. We want our web logs to feed analytical tools that create average time-to-save reports for different document types. But second, we also want to centralize the logic for doing all the saves, ID incrementing, updates, and versioning, and we want to empower non-programmers to change these settings from a single XForms application.

Perhaps the ideal solution is to do a little more work generating a full URL for the save submission, but then use a URL rewriting rule to route all the saves to a single, configuration-file-driven save script on the server.

Thanks for this excellent feedback. You are a deep-REST thinker.

- Dan

I tend towards Arun's style of thinking, for the reasons he so eloquently states; however, Dan's idea of using URL rewriting is one that holds a lot of merit, based on my real-world experience of using such an approach.

I've used the URLRewriteFilter ( ) very successfully in servlet containers like Tomcat, in conjunction with eXist-based XQueries, to provide clean, user-friendly REST URIs that then map onto eXist's internal REST URIs. This has the added benefit of providing a convenient security layer where you can test for various authorization/authentication credentials (reminiscent of AOP-style advice injection), and it also hides the technical details of the implementation (e.g. XQuery name, collection hierarchy, etc.) from users, which can be very desirable.
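For example, a URLRewriteFilter rule that maps a clean URI onto eXist's internal REST interface might look like this; the paths and XQuery name here are invented for illustration:

```xml
<!-- urlrewrite.xml (fragment): illustrative paths only -->
<urlrewrite>
  <rule>
    <!-- clean, user-friendly URI such as /person/123 -->
    <from>^/person/([0-9]+)$</from>
    <!-- forwarded to a stored XQuery behind eXist's REST interface -->
    <to>/exist/rest/db/apps/contacts/person.xq?id=$1</to>
  </rule>
</urlrewrite>
```

The rewrite layer is also where the security and implementation-hiding benefits mentioned above can be applied, since every request passes through it before reaching an XQuery.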
