Hopes are high that we're entering an exciting new world of ground-breaking, human-friendly devices. The graphical monitor and the mouse were wonderful advances over the keyboard in bringing computers to children and casual adult users, but those interfaces have been played out. I really don't need to spend another five hundred dollars on a processor just to get a window that rotates in 3D instead of minimizing. But will humans be able to master all the new human-friendly interfaces?
I worried about this a couple of months ago after I had the pleasure of checking out FILE 2008, the Electronic Language International Festival in São Paulo, Brazil. Although the festival offered a wide range of games and new media, I focused on the exhibit near the front of the hall, which featured all sorts of creative interfaces.
They reminded me of all the press reports I've read about what the ACM calls organic user interfaces: materials that we can rub, stroke, squeeze, and even tear to express our desires to embedded sensors and attached network components. We're already surrounded by surveillance cameras, and we could apply similar technologies in a more benign way if we follow through on experiments such as Project Oxygen. More immediately, the Wii and the iPhone broke records in their industries by championing interfaces that were natural, simple, and flexible in ways that vendors hadn't thought of before.
Believe me, I'm ready for new things. When I started computing on text-based systems, I didn't mind using a keyboard because it's a great input device for text. I also didn't mind memorizing arbitrary key sequences such as Control-S or Escape-F to convey commands to the system. But I still depend on shortcuts like those today, because they're the most efficient way to interact with programs. On the 40th anniversary of the invention of the mouse, they shouldn't be.
But now I have to ask: how easily can we master new interfaces? They're supposed to be intuitive. But that recalls the cynical joke that no human can find any interface intuitive after the nipple. And I have to cast my vote with the cynics after playing around with some experimental devices at FILE 2008.
The most promising demo was a view of an artificial cityscape along with a double-trackball device that let you view the scene in 3D at any distance and angle. The experience was like flying in a dream: zoom in, zoom out, whip around the city like a lightning bolt, expand and collapse buildings--it was exhilarating. Except that where I tried to go was always different from where I ended up. Several minutes of experimentation failed to yield the secrets of the trackballs' algorithm.
Every other device with non-trivial behavior had the same problem. A few simple interfaces--like one that made sounds when you jumped on a mat--worked fine, but anything that took thinking ended up being frustrating. The devices didn't behave intuitively.
In addition, a high percentage of the exhibits were clearly broken. The screens were blank, or the pictures were frozen, or clicking on the buttons redisplayed the same pictures.
Was the software buggy, or were the interfaces just too complex to be learned in a few minutes of experimentation? My guess is yes and yes. Such interfaces are hard to design properly, as well as hard to use.
You can probably attest to this experience yourself if you've ever moved to a new interface, even one advertised as user-friendly, such as Mac OS or the iPhone. Interfaces have a hidden logic, revealed only by training or sustained use.
Casual technology users are almost impervious to new types of manipulation. After all, why are the arbitrary copy/cut/paste shortcuts established by the Windows desktop decades ago still almost ubiquitous?
So how many new trackballs will we have to learn? They'd better be rolled out slowly. It'll take any device a long time to call itself my new squeeze. And anything with more intelligence to offer than a wired-up tumbling mat will make me look before I leap.