Wednesday 27 July 2011

Designing for Non-touch Screens in a Touch World

With the dawn of the third generation of “NUI” user interfaces (i.e. touch and, to a lesser extent, gesture and voice) at hand, there has been a great deal of debate, discussion, and theorizing about what these new interfaces will look like. A great many people have focused on the technologies themselves, and many more have sought to compare the new interfaces to the old.


Some have been talking openly about the problem of transitioning users who are accustomed to older interfaces, and there has been a lot of work in this area in the private sector. Apple's (of course unannounced) tactic has been to transition users from smaller touch-screens to progressively larger ones (expect touch desktops before long). Microsoft has an exciting and intriguing strategy of re-framing the old interface style (more on this in another post). And Google's new UI seems to be merging the mobile-friendly wide spacing into the desktop experience, as if touch-screen desktops were already here (which, by the way, they are).


Surprisingly few, however, have talked about the opposite problem: how do we design 2nd generation interfaces for users who are used to, and rapidly coming to expect, 3rd generation interfaces?


Even though touch screens are rapidly getting cheaper, many interfaces will stubbornly remain second generation for a long time. Upgrading ATMs, alarm clocks, vending machines, kiosks, telephone booths, industrial machinery, and the like is costly, and often (heresy, I know!) touch-screens or voice controls are simply not the right solution.


The problem is this: users are coming to expect screens to be touchable. How often have you tried to touch the screen of an ATM or yelled “speak to a human!” at an annoying, automated telephone system, only to have nothing happen? Did you feel kind of silly?


There is a saying in design: there is no human error, only designer error.


The problem is that these systems were designed at a time when touch-screens were not ubiquitous and when speech recognition was a dream. Users never thought to touch or to yell, because they knew that wouldn't work. So the question becomes, how do we update the design of non-touchscreen interfaces to tell users that they aren't touchable?


For things like ATMs this is actually surprisingly easy: make the labels for the side buttons text only, and do not put a border around them. This, along with proper alignment of label to button, clearly identifies the text as a label rather than as a button proper.


Removing tactile affordances from older interfaces may seem like a step backward because much of the 2nd generation style was built on physical metaphors. As natural user interfaces attempt to more directly embrace these metaphors, we must be careful that the old ones (buttons, layers, shadows, edges, etc) do not take on new, unintended meanings. This is one very important reason to keep touch in mind while designing non-touchscreen artifacts, but there is of course another: the limitations of the touch-screen have taught us important things about interfaces that we didn't know before.


As the Google redesign is demonstrating, there is much to be learned from NUIs. Making UI elements less dense, a necessity for pudgy-fingered touch users, also improves the readability and focus of “traditional” interfaces. Using motion both to react to users as they move through the space of the interface and to guide them toward discovering actions could similarly be brought to “traditional” UIs (as indeed it is in Windows 8).


While “user intent” is still a poorly defined concept, great strides have been made in the domain of the touch keyboard. By narrowing the range of user input to what users plausibly “intend” (e.g. typing real words), touch keyboards have become remarkably resilient to typos and spelling errors. Older UIs can learn from this example. To continue with our ATM example: ATMs accept only a narrow range of valid withdrawal amounts, so auto-correcting and verifying mistyped amounts could speed up transactions and make for a less frustrating experience. There are many other areas in which 2nd generation interfaces can learn from advances in touch.
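

As a rough illustration, here is a minimal sketch of what “auto-correct and verify” for withdrawal amounts might look like, assuming a hypothetical machine that dispenses only $20 notes up to a $500 per-transaction limit (the function name, denominations, and limits are all invented for this example):

from typing import Optional

def suggest_withdrawal(requested: int, note_size: int = 20,
                       max_amount: int = 500) -> Optional[int]:
    # Snap a typed amount to the nearest amount the machine can dispense.
    # Return None when the request is too far from any valid amount to
    # guess the user's intent.
    if requested <= 0 or requested > max_amount * 2:
        return None
    corrected = round(requested / note_size) * note_size
    return max(note_size, min(corrected, max_amount))

for typed in (60, 65, 230, 9999):
    suggestion = suggest_withdrawal(typed)
    if suggestion is None:
        print(f"${typed}: cannot dispense, please re-enter")
    elif suggestion == typed:
        print(f"${typed}: dispense")
    else:
        # Like a keyboard auto-correct, confirm rather than silently change.
        print(f"${typed}: did you mean ${suggestion}?")

The important part, as with touch keyboards, is the “verify” step: the correction is offered back to the user for confirmation rather than applied silently.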


In pondering the switch from 2nd to 3rd generation interfaces, I am reminded that we've already made this switch before: from terminal (text) interfaces to the GUI (2nd generation). After the GUI arrived, problems of discoverability, modality, context switching, and so on started to become visible (literally), and only after being understood at the GUI level did these issues get names and solutions. Terminal applications didn't disappear or die, but they had to adapt, and to abandon any hope of being pseudo-graphical once truly graphical applications became common.


Gone are the old pseudo-graphical MS-DOS apps. Gone (hopefully soon) are the old pseudo-physical screens.


Long live the touch-screen.