The W3C's RDF effort has, in the main, been an enormous flop.
In the early days, I think this was because RDF tried to bandwagon on top of XML, but its developers failed to take markup seriously: my impression is that many reduced XML to a particular set of delimiters for serializing trees (S-expressions with angle brackets) and missed the essential software engineering angle of XML: that markup is annotation.
Taking this angle, what do people do with XML? (Or want to do, or need to be able to do?) They need to be able to augment existing data, piecemeal and in sync with the perceived payoffs from that augmentation. Schemas fit in, because they can augment the document out-of-band; XPath processing (such as some uses of XSLT) can fit in for the same reason. Indeed, the whole mechanism of linking is a kind of external augmentation of the existing document.
And this is not so much on an individual document basis, but a system basis: people need to be able to take an existing non-RDF XML system or workflow and augment it to become an RDF-able system without having to re-engineer any of the parts that work. XML succeeded so wildly because it allowed this kind of non-disruptive and incremental adoption.
The W3C has, sensibly, broadened the RDF effort into a Semantic Web Activity, and come up with various non-XML syntaxes that seem to me to be more satisfactory.
In the meantime, I think most XML and web people have basically ignored RDF, regarding it as something worth intermittent tracking but essentially irrelevant.
The most recent work on making XML and XHTML more RDF-able (by which I really mean vice versa) is 2008's RDFa, and from only a brief look it seems to take markup-as-annotation commendably seriously.
If I imagine a workflow where I have an existing XML document validated by a Schematron schema, it seems that I could annotate the document with RDFa information without compromising my pre-RDFa structures, and that I could have an RDF Schema converted to Schematron to validate that the RDFa markup and values were correct (for example, an RDF property would be tested by a Schematron assertion, and the RDF class structure could be implemented as Schematron abstract rules, which allow mixins). It seems to fit.
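To make that concrete, here is a minimal sketch. The subject URI, the `dc:` prefix, and the particular properties are illustrative assumptions, not anything from a real schema; the point is that the RDFa attributes ride on top of whatever markup was already there:

```xml
<!-- An existing paragraph, annotated after the fact with RDFa attributes.
     The subject URI and dc: properties are hypothetical. -->
<p xmlns:dc="http://purl.org/dc/terms/"
   about="http://example.org/doc/1">
  Written by <span property="dc:creator">A. Author</span>
  in <span property="dc:date">2008</span>.
</p>
```

And a Schematron pattern, hand-derived from the (assumed) RDF schema, of the kind that a converter might generate:

```xml
<!-- One assertion per RDF property: every annotated subject
     should carry a dc:creator somewhere beneath it. -->
<sch:pattern xmlns:sch="http://purl.oclc.org/dsdl/schematron">
  <sch:rule context="*[@about]">
    <sch:assert test=".//*[@property = 'dc:creator']"
      >An annotated resource should state its dc:creator.</sch:assert>
  </sch:rule>
</sch:pattern>
```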
What is missing? Well, a way to retrofit existing XML documents with RDF, or to imply RDF from them. And 2007's GRDDL fits in there. Like RDFa, GRDDL's bias is towards XHTML. It provides glue to select an XSLT script that will generate RDF. So it is an annotation-by-transformation model (like Schematron) that seems fairly pragmatic to me.
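The glue itself is tiny: for XHTML, a profile on the head plus a link to the transform; for general XML, a single namespaced attribute on the root element. A minimal sketch, where the stylesheet name `extract-rdf.xsl` is a hypothetical:

```xml
<!-- XHTML flavour: declare the GRDDL profile, then link the XSLT
     that will generate the RDF. The stylesheet name is hypothetical. -->
<html xmlns="http://www.w3.org/1999/xhtml">
  <head profile="http://www.w3.org/2003/g/data-view">
    <title>My existing page</title>
    <link rel="transformation" href="extract-rdf.xsl"/>
  </head>
  <body>...</body>
</html>
```

```xml
<!-- General-XML flavour: one attribute on the root element. -->
<report xmlns:grddl="http://www.w3.org/2003/g/data-view#"
        grddl:transformation="extract-rdf.xsl">
  ...
</report>
```

Note how neither form disturbs the existing content: the RDF-ness is bolted on at a single attachment point, which is exactly the incremental-adoption property argued for above.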
Now it seems that RDF/Semantic Web is in a much more solid position than before, and if it is positioned now as a technology that fits in with and augments existing systems, it has a chance of working, which it did not when it was a religion that required forswearing other gods ...err... elements rather than attributes.
I sometimes criticize the W3C's processes as not being adequate for specifications that become internationally mission-critical. However, RDF seems to me to be the kind of project that suits the W3C's processes really well: it is a technology that may take 30 years to get workable, and it benefits from the readily discardable/supersedable recommendations that the W3C process produces.