Are we losing the Declarative Web?

By Philip Fennell
March 31, 2009

I saw something the other day that intrigued and bothered me in equal measure: 'Mozilla and the Khronos Group Announce Initiative to Bring Accelerated 3D to the Web'.

Mozilla will work with the Khronos Group to extend the exploration process of what an initial take of 3D on the Web should look like to a wider audience.

Apparently, the working group will look at exposing OpenGL capabilities within ECMAScript. The intriguing part is that, as a fan of 3D computer graphics and animation, I can only see this as a good sign; the bothersome part is how people will end up using the capability once it is exposed in this way. The crux of the problem, for me, is the question: JavaScript - what's it good for?...

Absolutely, that is the question. What is JavaScript's purpose on the web; what is it good for, and what is it not? I don't think its purpose is to create content, but that is what I fear will happen with a JavaScript binding to an OpenGL API. Not that this reflects badly on OpenGL, far from it. However, the resulting rash of procedural content will not be so easy to integrate within the host page. It is unlikely to be as Accessible (with a capital A) as a declarative graphics language like SVG, which has the capacity to support ARIA roles and states. You won't get the advantages of inline graphical content that is accessible to other parts of the host page via a common expression language (XPath). This is not to say that procedural components are forbidden; they are necessary for certain classes of shaders and ideal for some repetitive structures, but they should be regarded as the exception rather than the rule.
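To make the contrast concrete, here is a minimal sketch in JavaScript. The 'gl' calls in the comment are hypothetical (no such binding exists yet); the rest uses real DOM APIs. The point is that the procedural version leaves nothing behind for the page to discover, while the declarative version puts the same shape into the document where ARIA attributes and XPath still apply.

    // Purely procedural rendering (hypothetical 'gl' binding): the triangle
    // exists only as a side effect of the calls, so nothing else in the
    // page can find it, describe it, or re-purpose it.
    //
    //   gl.begin(gl.TRIANGLES);
    //   gl.vertex(0, 0); gl.vertex(100, 0); gl.vertex(50, 80);
    //   gl.end();

    // The declarative alternative: the same shape as inline SVG content.
    var SVG_NS = 'http://www.w3.org/2000/svg';
    var svg = document.createElementNS(SVG_NS, 'svg');
    var triangle = document.createElementNS(SVG_NS, 'path');
    triangle.setAttribute('d', 'M 0 0 L 100 0 L 50 80 Z');
    triangle.setAttribute('role', 'img');              // ARIA role
    triangle.setAttribute('aria-label', 'A triangle'); // accessible name
    svg.appendChild(triangle);
    document.body.appendChild(svg);

    // Because the graphic is part of the document, any other script on the
    // host page can reach it again via a common expression language (XPath).
    var result = document.evaluate('//svg:path', document,
        function (prefix) { return prefix === 'svg' ? SVG_NS : null; },
        XPathResult.FIRST_ORDERED_NODE_TYPE, null);
    alert(result.singleNodeValue.getAttribute('aria-label'));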

So, how should it be done? In a previous post I suggested that, to drive adoption of declarative languages like XForms, JavaScript implementations are the way forward because they side-step the whole issue of plug-ins, which has so far dogged XForms, SVG, X3D and the like. With interest growing in the new high-performance JavaScript engines, there may come a time (soon, I hope) when an alliance of JavaScript and OpenGL could deliver SVG and X3D rendering within the browser. I don't know what the realities of doing such a thing would be with these technologies, but writing the libraries to do so is a worthier pursuit than stopping short at the purely procedural level. Oh, and for those people out there who think XML was a bad idea: X3D has more than one serialization, so don't dismiss it out of hand just because it has an 'X' in the acronym.
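As a sketch of what such a library might look like: the page author writes declarative X3D markup, and a JavaScript layer walks the scene graph and drives whatever low-level drawing API the browser eventually exposes. DOMParser is a real browser API; the 'renderer' object and its drawBox method are assumptions standing in for that future OpenGL binding, and a real library would of course cover the full X3D node set, transforms, routing and so on.

    var markup =
        '<X3D>' +
        '  <Scene>' +
        '    <Shape><Box size="2 2 2"/></Shape>' +
        '  </Scene>' +
        '</X3D>';

    // Parse the declarative scene description into a DOM tree.
    var doc = new DOMParser().parseFromString(markup, 'application/xml');

    function render(node, renderer) {
      // Dispatch on element names from the scene graph.
      switch (node.localName) {
        case 'Box':
          renderer.drawBox(node.getAttribute('size')); // hypothetical call
          break;
        default:
          break;
      }
      // Recurse into child elements only (nodeType 1).
      for (var i = 0; i < node.childNodes.length; i++) {
        if (node.childNodes[i].nodeType === 1) {
          render(node.childNodes[i], renderer);
        }
      }
    }

    // render(doc.documentElement, someRenderer);

The content stays declarative and inspectable; only the rendering is procedural, and it lives in a library rather than in every page.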

Despite the years I've spent developing client-side code, I'm an unashamed fan of the XML stack and, because of it, the declarative web too. Many people have put considerable effort into making content mark-up both rich in semantics (just look at DocBook and XHTML 2) and extensible through the adoption of XML Namespaces and open schemas. User input and client-side logic are well served by XForms, and presentation extends these formats on many levels via CSS. Other modes of delivery, like print, are catered for by XSL-FO. Richer and more interactive experiences can be delivered either in-line or out-of-line using SVG and SMIL. Where fully supported, many of these formats can be freely inter-woven because of, rather than in spite of, XML Namespaces.

Content can be aggregated with XInclude, stored and syndicated with Atom, and validated either by grammar using XML Schema or by rules with Schematron. All the aforementioned XML languages can themselves be created and transformed using XSLT, which, being XML itself, can also create and transform other XSLT; and all of this is built upon a common foundation of XML, XPath and the URI.
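And transformation isn't confined to the server: a minimal sketch, using the XSLTProcessor interface that Mozilla- and WebKit-based browsers already expose, shows one XML vocabulary being turned into another entirely on the client. The file names here ('article.xml', 'docbook-to-xhtml.xsl') are placeholders, not real resources.

    function transform(sourceUrl, stylesheetUrl) {
      function load(url) {
        var req = new XMLHttpRequest();
        req.open('GET', url, false); // synchronous, to keep the sketch short
        req.send(null);
        return req.responseXML;      // parsed XML document
      }
      var processor = new XSLTProcessor();
      processor.importStylesheet(load(stylesheetUrl));
      // Returns a document fragment ready to append to the host page.
      return processor.transformToFragment(load(sourceUrl), document);
    }

    document.body.appendChild(transform('article.xml', 'docbook-to-xhtml.xsl'));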

Where people interact with content there is a responsibility to provide access to as wide an audience as possible; ARIA roles and states help convey, via the user-agent, the nature and state of the content being experienced at that time. Bring into the mix annotations via RDFa, which add to the semantics, richness and machine-readability of the data that lies both within the content and beyond it in linked data and content, and you'll start to see that the vision of a web of content and data is being realised all around us.

It would be a shame to see all that effort and progress eroded because we didn't consider carefully what we do with new features and technologies. So, finally, to answer the question of what JavaScript is good for: JavaScript should be used to help implement the declarative languages that a web browser is designed to handle, wherever those languages are not natively supported by the browser. Any other use is, by and large, a distraction from moving the Web forward.



6 Comments

Why would OpenGL be used to create content on the web, when it isn't used to create content in other environments? The content is usually either a scene graph or an application's object graph, which is then mapped to 3D with OpenGL, or the output of a 3D modelling tool. Although you can build everything from primitive procedural calls, it's much more common to use OpenGL to transform, manipulate and display content from another source.

Peter,

It's not OpenGL that I see as the problem; it is how it is used. You could render a scene represented in X3D with OpenGL and JavaScript, or you could create the same content programmatically with just JavaScript. It is the latter that bothers me, because people are likely to do it as it is perceived to be that much easier; but the result is 'content' that is less useful and more difficult, if not downright impossible, to re-purpose.

Implementation of X3D in the Khronos initiative will happen. Engine writer Tony Parisi, whom you know as having fielded the first VRML viewer at Intervista, says it's doable, and I trust Tony's judgement on that. We have content that has stayed viable for over a decade. It is declarative, has both XML and 'curly-brace' (Classic VRML) encodings, and is supported by a full suite of examples, implementations, open source, scripts and worlds.

Two questions that come up are the suitability of the DOM, with its HTML baggage, for real-time networked scene-graph operations that share a messaging system for state updates, and the DOM's performance in MMOs. The web browsing experience is simply NOT the same as the world browsing experience. It would be dumb to try to push it into that mold.

We have good standard plug-in 3D engines and, regardless of assessments of their 'penetration', as engines they produce a good-quality 3D experience. We do not want to swap expression for browser ubiquity. IOW, the 3D artists and world makers do not need to take a giant step backward in power of expression to give the programmers one more level of possibly unusable integration.

Now, make it work and be performant... Rah!

And to quote an old Microsoftie, "it has to be easy."

3D content is hard to build. It can be expensive to build. It is expensive to integrate. Therefore, unless it is kept in a declarative format, based in language and an object-referenced standard, it is lost. Everyone from IBM to Forterra is trying to solve the problem by first picking the plug-in, then talking standards about the messages.

If IBM means what it says about its cloud initiative, it will move rapidly to work with, and further the adoption of, the ISO standard X3D. Then the messaging architecture can be worked out so that even if other client languages are standard, they can all exchange these messages.

Content interoperability is not a simple matter of exchanging assets.

Imagine you could log in to Facebook and there was a widget there to let you diagnose your cold, flu or allergy and pretty quickly give you good advice. Our social networks are becoming our self-selected clusters of feedback, our mass-notification systems, our telephone and TV. Weep or gnash, this is the TV, telephone and teacher envisioned by the communications pioneers 100 years ago. Sci-fi becomes my-Fi.

Some random disjointed thoughts about the web as a 3D world:

You read a linear story. You navigate a non-linear story.

Some say an avatar is a cursor; others, a set of eyes. Whatever it is, it is the instrument of character. I am interested in telling stories, and in simulation only for that purpose. Fidelity to the story is more powerful than fidelity to the subject, but the balance of the two determines the raw emotive expression.

Attractors:

o Understanding a URL as a name is Web 1.0.

o Understanding a URL as a control is 2.0.

o Understanding a control in a scene is Web 3.0.

o Understanding controls in situations/scenarios is Web 4.0.

o Understanding scenarios as shapers of human behavior is Web 5.0.

Situation-space: real-time 3D with proximity/location-based relationships over materials and audio. The situation determines the class of application.

Longer reply:

This demonstrates the power of standards over the pronouncements of pundits (Clay Shirky's infamous "What is VRML good for? Good riddance!"). The material used to make this movie is 12 years old, and it still runs in modern X3D browsers, which is what enabled me to make the movie (in four parts; this is part 1).

In the rush to do the 'next big thing', we are losing track of the game. We are failing to preserve digital assets and, worse, to enable their reuse. We are swapping immediate stimulation for long-term, sustainable works of depth.

The first volume/chapter (of four) of the IrishSpace movie is on YouTube (approx. 9:39).

http://www.youtube.com/watch?v=-1b5wajK5Bk

This project uses both the Vivaty and the BitManagement Contact engines for rendering, Jing for screen capture, and Sony Vegas for video production and editing. 2D images have been composited via Vegas.

Some points:

1. The VRML97 code was written in 1996/97 on a schedule of 3.5 months. None of the authors met until the final assembly in Ireland, which took a day. Details of the project are online.

2. The only modifications made to the code to run it for this capture were to remove the LoadURL statements that were part of the original kiosk GUI.

3. After 12 years, the VRML97 code still works brilliantly in current X3D browsers. While the graphics may seem primitive by comparison with current work, this was done fast, online, with multiple authors, an immovable deadline and very famous people in attendance (Neil Armstrong, the Deputy Prime Minister and other Irish VIPs). Only one of the authors was a full-time professional (Paul Hoffman).

4. This work combines real-time 3D, images, a full narration (performed in Ireland by citizens of Tralee) and a musical score.

X3D/VRML97 has proven its suitability for long-life-cycle projects, for archiving real-time 3D graphics, for assembly in modern editing systems, and for applications such as product demonstrations, entertainment and machinima.

This is the important point:

Without a standard and a process supported by a consortium with open IP policies and sustainable business models, it would not be possible for this decade-old work to be repurposed and improved. This is critical to the *business interests* of the customers of any company creating 3D products today. Teams are required to work at this level of complexity and length. Those teams have to be supported with open tools, may not be in the same locale, may work for long periods under intense pressure, and must produce products capable of being maintained, modified, archived and repurposed with minimum expense, effort and need to maintain skills.

We have to stop throwing away what we've built and moving on to the 'next new thing' when that thing forces us back ten years. Reach is no longer a challenge. Creating something worth preserving is.

Len,

I couldn't agree with you more. You make a very valid observation regarding the need for standards and how they promote longevity of content, collaboration, and the possibility of re-purposing content as standards evolve and new ones emerge.

Thank you.
