I have for a few years been trying to come up with a good definition of publishing workflows as an architectural pattern. The two key distinguishing features, I think, are that publishing workflows are one-way flows rather than two-way flows (e.g., database/middleware CRUD and triggers), and that there is some kind of snapshotting going on: an edition is published, and many individual items get a common status, version, or milestone at about the same time.
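To make the snapshotting idea concrete, here is a minimal sketch (all names hypothetical, not drawn from any real system): publishing is a one-way operation that stamps every item with the same edition label, producing an immutable snapshot, rather than updating items in place CRUD-style.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Item:
    name: str
    content: str
    edition: Optional[str] = None  # stamped only at publish time

@dataclass
class Publication:
    items: List[Item] = field(default_factory=list)

    def publish(self, edition: str) -> List[Item]:
        """One-way flow: every item receives the same edition label
        at (about) the same time, yielding a snapshot. The working
        copies in self.items are left untouched."""
        return [Item(i.name, i.content, edition) for i in self.items]

pub = Publication([Item("ch1", "..."), Item("ch2", "...")])
snap = pub.publish("2007-spring")
```

The point of the sketch is the direction of flow: nothing is ever written back from the snapshot to the working set.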
When you have a publishing workflow, you can use publishing technology, such as XML with a pipeline/functional/event bent. When you don't have a publishing workflow, you may be better off using databases or objects: quasi-XML systems such as XQuery or, more likely, no XML at all. This is the kind of issue an introductory course on Document Engineering might cover, of course.
Wikipedia's page on ETL is highly relevant to XML/XSLT developers and explainers, even those involved with more publishy flows than the typical datawarehousey/enterprisey Extract/Transform/Load scenario. (One sentence in the Best Practices section caught my eye: "Use file-based ETL processing where possible." Nice to see the recognition that files still have their uses!)
The Wikipedia section Real Life ETL Cycle is a pretty good prototype for the steps a large XSLT system might need, even if the source was not a DBMS and the destination was not a warehouse, but just plain old XML.
So it seems to me that an ETL system, periodically loading data into a data warehouse, is just a publishing system, albeit one with lots of specific details and requirements.
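The parallel can be sketched in a few lines (a hypothetical illustration, not any particular ETL product's API): extract pulls raw records, transform normalizes them (the analogue of an XSLT step), and load stamps the whole batch with a common edition, which is exactly the snapshotting step of a publishing workflow.

```python
import json

def extract(source_rows):
    # Extract: pull raw records from the source
    # (a list stands in for a DBMS here).
    return list(source_rows)

def transform(rows):
    # Transform: normalize field names and values,
    # the analogue of an XSLT transformation step.
    return [{"title": r["TITLE"].strip(), "body": r["BODY"]} for r in rows]

def load(rows, edition):
    # Load: write the batch out under one common edition label --
    # the snapshotting step that makes this a publishing flow.
    return json.dumps({"edition": edition, "items": rows})

warehouse = load(
    transform(extract([{"TITLE": " Intro ", "BODY": "..."}])),
    edition="2007-10",
)
```

Reversing any of these arrows (writing from the warehouse back to the source) would turn it into a two-way flow, and the publishing analogy would break down.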
[Update: I liked this introduction from IBM: EII, EAI and ETL: What, Why and How!. Notable: no mention of web technologies.]