Fans of nerdy men with beards will enjoy the InfoQ website. I dig InfoQ's basic design, which presents recorded seminars one per web page with two synchronized embedded Flash objects: a two-postage-stamp-sized playback of the live video of the presenter, and a larger canvas for the (more static) slides. A very successful approach. (It is not much good if you are in the 50% of the world with bandwidth/quota/costing problems that make video unworkable, of course, but if that number is high because more people from less developed areas are getting more internet access, who can complain?)
I was recently watching Freeman and Feathers' TDD: Ten Years Later, and a few things stuck out.
Testing for failure
The first was the insight, attributed to Ward Cunningham, that it is not enough to test the cases you want to pass; you also need to test the things you want to fail: do they in fact fail, and do they fail in the way you expect them to fail?
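Cunningham's point can be sketched in any xUnit-style framework. Here is a minimal stdlib-only Python illustration; the parse_quantity function is my own invented example, not anything from the talk:

```python
# A minimal sketch of "testing for failure": we check not only that good
# input succeeds, but that bad input fails, and fails in the expected way.
# parse_quantity is a hypothetical example function, not from the talk.

def parse_quantity(text):
    """Parse a non-negative integer quantity, rejecting anything else."""
    value = int(text)          # raises ValueError on non-numeric input
    if value < 0:
        raise ValueError("quantity must be non-negative: %r" % text)
    return value

# Positive test: the case we want to pass.
assert parse_quantity("42") == 42

# Negative tests: cases we want to fail, and to fail in the right way
# (a ValueError, not a silent wrong answer or some other exception).
for bad in ("-1", "abc", ""):
    try:
        parse_quantity(bad)
    except ValueError:
        pass                   # failed exactly as expected
    else:
        raise AssertionError("%r should have been rejected" % bad)
```

The `else` clause on the `try` is the crucial part: a negative test that never checks the failure actually happened is no test at all.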
This is a very important approach for standards-makers to understand. Standards are typically made with the assumption that you specify the positive case, and the negative case is everything else. I think HTML at W3C has led the way in showing that specifying the recovery from what might be regarded as error states can actually be critical for some technologies: many markup languages are made with batch-processing assumptions (pass or die!) rather than online/transactional/real-time assumptions (we need to recover!)
When you judge schema languages, the distinction is very clear. Does the schema language provide meaningful mechanisms by which the results of validation can be fed back into a recovery process? I would suggest that DTDs, RELAX NG and XSD barely do. XSD does provide an enumerated set of outcomes that certainly could be used as hooks, and XSD and RELAX NG do allow annotations of various kinds, but it is only good luck if the outcomes that validation allows match the way the humans or computers think about the document and how errors are perceived and handled in a system. Schematron, on the other hand, is jam-packed full of features for this (SVRL, @role, @flag, diagnostics, and the mooted properties element), but they are still enabling features that require smart people to build the systems around them.
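As a sketch of what those hooks look like (the rule, attribute values and message text here are invented for illustration), a Schematron assertion can carry @role and @flag for downstream dispatch, and point at a diagnostic carrying the recovery advice:

```xml
<sch:schema xmlns:sch="http://purl.oclc.org/dsdl/schematron">
  <sch:pattern>
    <sch:rule context="order">
      <!-- role and flag are hooks a downstream system can dispatch on -->
      <sch:assert test="@date" role="error" flag="blocks-dispatch"
                  diagnostics="missing-date">
        An order must carry a date.
      </sch:assert>
    </sch:rule>
  </sch:pattern>
  <sch:diagnostics>
    <sch:diagnostic id="missing-date">
      Recovery hint: the date can usually be recovered from the
      covering letter and filled in by the clerk.
    </sch:diagnostic>
  </sch:diagnostics>
</sch:schema>
```

The point is that the validation outcome arrives already classified (role, flag) and already explained (diagnostic), so a recovery process has something to act on beyond "invalid".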
One place where this issue comes up every few years is with XML and encoding handling. The whole computing infrastructure is messed up with encoding: lazy programmers over the years have favoured APIs with default encodings that don't scale to multi-national uses such as the WWW. At some time in the future everything will be UTF-8 or UTF-16, we all suppose, and then the APIs can be simpler. But for the last two decades, our API infrastructure has been geared towards encouraging developers to write programs with broken character-encoding support.
Some DBMS vendors are among the primary culprits, but they are hardly alone: they favour efficiency over encoding safety, and they are very keen on type safety as long as it is static type safety; character encodings frequently don't fit in that picture. (Of course, things are much better now that the current generation of programming languages actually have a notion of character rather than just storage unit.)
In the case of XML, there is a rule that a wrong encoding is a well-formedness error. But many encodings have feasible byte sequences in common, so many errors cannot be detected at the byte level. So, applying TDD, what tests can we have that we can expect to fail? Static failure detection is always a matter of redundant code-point detection (redundancy in the engineering sense: an unused code point which, if it appears, is a sign of an error).
In XML, the only sources of redundant code points are inside data and inside markup. Inside data, XML allows pretty much any character except control characters, but the bytes for control characters are shared by most encodings (of the same family, ASCII or EBCDIC), so they are not useful. So the only source of redundant code points is, in effect, the set of characters allowed in XML names. (Not all redundant code points are equally important, however. A topic for a different day.)
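The idea can be sketched crudely: inside a tag name, many code points are redundant (disallowed by the XML name rules), so their appearance signals a likely encoding mix-up. This toy checker is my own illustration, restricted to the ASCII range, and is far cruder than a real parser (the real XML NameChar production also admits many non-ASCII ranges, so a real checker would flag far less):

```python
import re
import string

# Characters permitted in XML names, restricted to the ASCII range for
# this toy sketch. (The real XML NameChar production also admits many
# non-ASCII ranges, so this over-reports on legitimate non-ASCII names.)
ASCII_NAME_CHARS = set(string.ascii_letters + string.digits + ".-_:")

def suspect_names(xml_bytes):
    """Return tag names containing redundant (disallowed) code points.

    Decodes as Latin-1 so that every byte maps to some code point; any
    name character outside the allowed set is then a likely sign that
    the document is not in the encoding we assumed.
    """
    text = xml_bytes.decode("latin-1")
    names = re.findall(r"</?\s*([^\s>/]+)", text)
    return [n for n in names
            if any(ch not in ASCII_NAME_CHARS for ch in n)]

# A mislabelled or mis-decoded document shows up exactly here: the data
# content gives us nothing to test, but the names do.
good = b"<order><date>2009-01-01</date></order>"
bad = b"<ord\xffer><date>2009-01-01</date></ord\xffer>"
assert suspect_names(good) == []
assert suspect_names(bad) != []
```

Note that the character data (`2009-01-01`) is useless for detection here; only the name characters carry the redundancy that makes a failing test possible.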
In Schematron, by the way, there is first-class support for tests that are expected to fail: the assert element is for tests you want to pass, and the report element is for tests you expect to fail, if you like.
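A minimal illustration of the pair (the context and tests here are invented):

```xml
<sch:pattern xmlns:sch="http://purl.oclc.org/dsdl/schematron">
  <sch:rule context="invoice">
    <!-- assert: we expect this to be true; the message fires when it is not -->
    <sch:assert test="total &gt;= 0">The total must not be negative.</sch:assert>
    <!-- report: we expect this to be false; the message fires when it is true -->
    <sch:report test="count(item) = 0">The invoice has no line items.</sch:report>
  </sch:rule>
</sch:pattern>
```

The two elements are logical duals, but keeping both lets the schema say which polarity the author had in mind, which is exactly Cunningham's distinction.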
Back to the video. At about 25 minutes there is some interesting material on Chris Stevenson's TestDox style: turn tests into little sentences, focusing on intent rather than implementation, the what rather than the how. In Schematron, this is what assertions do: the text is primary, and the XPath is just the implementation.
Better presentation
At about 30 minutes there is material on presenting tests with red/green colored screens. This is natural (I did it for the test results on the XSD to Schematron converter recently, for example) and it strikes me that we don't have something similar for Schematron. We produce validation-style listings that tell you the problem, and we say how good it is to have multiple reports rather than fail-at-first-error, but we don't yet have a simple report with all the assertions and a green/red indicator. It would be a good project.
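As a sketch of what such a project might start from, this reads an SVRL report (Schematron's validation report vocabulary) and prints each failed assertion in red, or a single green line when nothing failed. The ANSI colouring and the summary format are my own invention:

```python
import xml.etree.ElementTree as ET

SVRL = "{http://purl.oclc.org/dsdl/svrl}"
RED, GREEN, RESET = "\x1b[31m", "\x1b[32m", "\x1b[0m"

def red_green_summary(svrl_text):
    """Turn an SVRL report into red/green lines, xUnit style."""
    root = ET.fromstring(svrl_text)
    # Both failed asserts and successful reports are failures in the
    # red/green sense: something we did not want to see happened.
    failures = (root.findall(".//%sfailed-assert" % SVRL) +
                root.findall(".//%ssuccessful-report" % SVRL))
    if not failures:
        return [GREEN + "ALL ASSERTIONS OK" + RESET]
    lines = []
    for f in failures:
        text = f.findtext("%stext" % SVRL, default="").strip()
        lines.append(RED + "FAIL: " + text + RESET)
    return lines

example = """
<svrl:schematron-output xmlns:svrl="http://purl.oclc.org/dsdl/svrl">
  <svrl:failed-assert test="total &gt;= 0" location="/invoice[1]">
    <svrl:text>The total must not be negative.</svrl:text>
  </svrl:failed-assert>
</svrl:schematron-output>
"""
for line in red_green_summary(example):
    print(line)
```

Because the assertion text in SVRL is the human-readable intent, the red lines read as little sentences, which is the TestDox effect for free.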
Freeman and Feathers touch on BDD in the closing minutes: Schematron shares many of the concerns in the document world that Behaviour-Driven Development has in the process world, more so than plain Test-Driven Development.
Certainly I think that the Business Value (a BDD concern) of a schema language is the usefulness and applicability of its reports: that XSD and RELAX NG allow annotation is nice, but effectively provides nothing for the user. "Element B must come after element A": big deal. The user may need to know why (and, indeed, the reason why should be traceable to a business requirement; otherwise, why is it there?)
And in BDD's emphasis on tests driven by the user interface, I see a certain similarity with Schematron's pattern idea that elements and attributes participate in more interesting semantic/analytical units, and it is those we need to test.
Indeed, it is interesting that W3C XSD is completely designed around the notion of components and yet completely fails to provide any mechanism for making these first-class objects of the schema language, or for supporting languages which also need some kind of component idea. Once you twig to this, you can see that the length and complexity of the XSD recommendation is a testament to its own lack of power, and its own lack of adequate analytical apparatus. In XSD, whether an element is local or global is not indicative of the semantic coupling with its parent: in part this results from the terrible abdication of interest in these issues that came out of the agreement with the RDF people about scopes for things. But RDF is not interested in components (or patterns) either: it is about enabling suitably fuzz-tolerant descriptions of web resources.