Gestural User Interface and Experience Design Considerations
The International Consumer Electronics Show (CES) 2013 wrapped up last week, and apart from 4K TVs (3D was so last year), many gesture-based products and services were showcased – including Leap Motion; a gesture add-on for iOS devices; and Intel’s perceptual computing.
The gesture-based products caught my attention more than the fancy TVs, since I had just bought a Kinect over the holidays. Not the Xbox gaming system, but only the Kinect sensor. I bought it along with a couple of books on hacking the Kinect, to get hands-on experience with non-gaming gestural user interfaces.
As I write this, I am using a classic WIMP (Windows, Icons, Menus, Pointers) Graphical User Interface, while some of you reading this may be using touch-based interfaces on a smartphone or a tablet. Most of us have some exposure to gestural interfaces thanks to gaming systems like the Wii or Xbox Kinect. The progression is headed towards interfaces that seem more natural to the user. Using a mouse is second nature to most of us, but if you observe someone using one for the first time, you will realize how unnatural it actually is. When iDevices made touch-based interfaces common, parents started talking about how easy it was for their toddlers to use those devices, because the interaction became more natural – if you want to move something to the left, you drag your finger to the left. Gestural interfaces take it a step further by taking three-dimensional input from your body as well.
After installing the drivers and software and connecting the Kinect, I started my attempt to control a Windows 7 computer using gestures. At the risk of oversimplifying it: my gestures were captured by the Kinect and translated by the KinEmote software into movements on the screen. This allowed me to open programs like a browser, browse websites, and close the browser when done, albeit with some difficulty.
http://www.youtube.com/watch?v=yY_eSbZS4t4
Using the Kinect as a non-gaming Gestural User Interface Device (look closely at the bottom right panel to see my controlling gestures)
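To make the capture-and-translate pipeline concrete, here is a toy sketch in Python. It assumes the open-source libfreenect Python bindings (the freenect module), NumPy, and a Windows host for the cursor call; the nearest-point heuristic and the screen dimensions are my own simplifications for illustration, not how KinEmote actually works.

```python
# Toy approximation of a Kinect-to-cursor pipeline: treat the nearest
# point in the depth image as the user's hand and move the mouse there.
# Assumes the libfreenect Python bindings and a Windows host; KinEmote's
# real tracking is far more sophisticated than this.
import ctypes
import numpy as np
import freenect

SCREEN_W, SCREEN_H = 1920, 1080  # assumed display size; adjust to yours

def track_hand():
    while True:
        depth, _ = freenect.sync_get_depth()  # 480x640 array of 11-bit depths
        # Guard against zero readings so argmin finds a real nearest point.
        depth = np.where(depth == 0, 2047, depth)
        y, x = np.unravel_index(np.argmin(depth), depth.shape)
        # Scale sensor coordinates to screen coordinates (no smoothing,
        # no mirroring - a real implementation would need both).
        sx = int(x * SCREEN_W / depth.shape[1])
        sy = int(y * SCREEN_H / depth.shape[0])
        ctypes.windll.user32.SetCursorPos(sx, sy)

if __name__ == "__main__":
    track_hand()
```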
Here are a few user experience implications that I became conscious of after experimenting with this non-gaming gestural user interface setup:
- Users performing tasks with gestures are in a very different mindset from gamers. The gestural user interface should meet a need and be useful, rather than being there for the “wow factor”.
- Just as touch targets had to be bigger than traditional desktop targets, gesture-based interfaces need to be even more accommodating. When users are gesturing from a few feet away, make it easier for them by making targets bigger and well spaced.
- While gestural interactions have many similarities to touch interactions, one added complication is the need to distinguish intentional gestures from incidental movements that are not meant for the system, like covering a sneeze. Factor this into the design by defining a specific gesture to initiate interaction, and by confirming destructive actions like closing a window (see the sketch after this list).
- Follow the conventions that are emerging for gestural user interfaces. For instance, waving is a common way to engage with a device or system.
- Give users ongoing feedback so they know that the device is seeing and recognizing their gestures.
- Gestures should be logical and match users’ mental models – for instance, moving a hand to the left to scroll left, with corresponding gestures for scrolling right, up, and down.
- Don’t make users switch between gestures and touch or mouse/keyboard – it makes the task inefficient.
- As with any user-centered design process, test and iterate. Test early and often in the expected context of use, including the environment, distance between the screen and users, and screen size.
- Offer a short introductory tutorial to show gestures and what they do. As always, give users the option to skip it and get back to it later.
- Gesturing over a prolonged period of time can be physically uncomfortable and tiring. Add frustration to the mix if gestures are not correctly recognized, and it is easy to see why users may give up after the novelty wears off.
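To tie a few of these guidelines together – explicit engagement, ongoing feedback, and confirmation of destructive actions – here is a minimal sketch of a gesture state machine. The thresholds, the wave/swipe heuristics, and the print-based feedback are all hypothetical illustrations; real hand positions would come from a tracker such as the Kinect skeleton stream.

```python
# Minimal sketch of a gesture state machine: engage on a wave, map
# horizontal swipes to scroll commands, and require confirmation before
# a destructive action. All thresholds are hypothetical; hand positions
# would come from a tracker such as the Kinect skeleton stream.
import time
from collections import deque

ENGAGE_REVERSALS = 4   # direction changes that count as a wave
SWIPE_DISTANCE = 0.30  # horizontal travel (metres) that counts as a swipe
WINDOW_SECONDS = 1.5   # how far back to look for wave/swipe motion

class GestureStateMachine:
    def __init__(self):
        self.engaged = False
        self.pending_close = False
        self.history = deque()  # (timestamp, x) hand samples

    def feed(self, x):
        """Feed one horizontal hand-position sample (sensor space)."""
        now = time.time()
        self.history.append((now, x))
        while self.history and now - self.history[0][0] > WINDOW_SECONDS:
            self.history.popleft()
        if not self.engaged:
            if self._reversals() >= ENGAGE_REVERSALS:
                self.engaged = True
                self.history.clear()
                print("engaged - waving detected")  # ongoing feedback
        else:
            swipe = self._swipe()
            if swipe:
                self.history.clear()
                print("feedback:", swipe)           # keep the user informed

    def request_close(self):
        """Destructive actions need a second, explicit confirmation."""
        if self.pending_close:
            print("window closed")
            self.pending_close = False
        else:
            self.pending_close = True
            print("repeat the close gesture to confirm")

    def _reversals(self):
        """Count left/right direction changes in the recent samples."""
        xs = [x for _, x in self.history]
        deltas = [b - a for a, b in zip(xs, xs[1:]) if b != a]
        return sum(1 for a, b in zip(deltas, deltas[1:]) if a * b < 0)

    def _swipe(self):
        """Report a swipe if the hand travelled far enough in the window."""
        xs = [x for _, x in self.history]
        if len(xs) < 2:
            return None
        travel = xs[-1] - xs[0]
        if travel <= -SWIPE_DISTANCE:
            return "scroll-left"
        if travel >= SWIPE_DISTANCE:
            return "scroll-right"
        return None
```

Even this toy version makes the design choices visible: nothing happens until the user explicitly waves, every recognized gesture produces immediate feedback, and a window can never be closed by a single ambiguous movement.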
There are many more uses for devices like the Kinect beyond controlling computers, ranging from the fun Coke Dance Vending Machine (video), to the calming yet engaging application at a children’s cancer center (video), to people using it to control their media centers (video).
From a User Experience design perspective, touch and gestural interfaces are relatively new and there is a lot to be learnt. If you’re interested in the user experience aspects of gestural devices like the Kinect, a good starting point is the Kinect for Windows Human Interface Guidelines (PDF, 7 MB). Another useful resource is the more in-depth book Brave NUI World: Designing Natural User Interfaces for Touch and Gesture by Daniel Wigdor and Dennis Wixon.