Back in the early 1970s, it was a bit impractical to discuss a number-crunching and data visualization exercise on a PC. A WIMP (Windows, Icons, Menus, Pointers) interface on a PC for what we do today would have been even more difficult to conceive.
That vision, however, is now reality. And here we are, at the next frontier. We're moving into the world of the mobile device and its gestural interfaces. So, it might pay to be open-minded about new developments in user interfaces for data visualization.
Last week, Nick Diakopoulos blogged for Visual.ly on data visualization for tablets and touch screens. Nick describes himself (humbly, I think) as a consultant specializing in computational media applications. He always has super-interesting ideas that he turns into wonderful discussions. Whenever he puts something out there, he gets my full attention.
Nick’s blog entry focused on work done at Carnegie Mellon by Jeff Rzeskotarski.
Jeff and his team have developed a tablet and mobile user interface for manipulating data visualizations. They’ve produced a video that demonstrates the gesture-based interfaces they’ve created.
Nick puts this idea into context by pointing out that tablet interfaces can feel a bit ‘pecky.’ The user interface proposed and demonstrated by Jeff and his Carnegie Mellon team could be a good alternative: it relies on more physical, tactile maneuvers such as sweeping, scraping, and shaking to reorganize the data.
Emotionally, I love this idea. It’s fresh, different, and fun. The shaking, sweeping, and pinching gestures evoke the idea that data points are tangible, manageable objects that don't require a menu to manipulate.
Nick continues his blog entry by introducing Microsoft’s FLUID approach as a second example of gestural alternatives and as evidence of a growing trend.
Then, Nick asks the important, critical question: does this kind of UI pass the ‘recognition over recall’ test, and does it enhance memorability?
Since Nick is writing for Visual.ly, he wisely leaves that question to the reader. But as an opinionated blogger, I’ll take a shot at answering it by talking about content and context.
Content. I've been struggling with a content-based use case for mobile devices in data visualization. Analytical content is often sensitive and tricky, and we must treat it with thoughtful care. Consider an analogy: fast food versus slow food. Data visualization development can be like cooking. We don’t cook tender ingredients for hours just to have someone eat the meal standing up and on the go. Yet here, the mobile device is for someone standing up, or at least on the move.
In Jeff's video, the narrator shows us how to use TouchViz through an analysis of the Titanic casualties. That’s a poignant, grim set of information.
However, let’s step back and reset, rather than piling on what may simply be an unfortunate choice of example content. Jeff and his team aren’t necessarily advocating blithe mobile analysis; they are trying to create an easier UI for analysis on a mobile device. So, if we can suspend our judgment for a minute about the kinds of content that are appropriate for mobile visualization, let's look at context.
Context. With the possible extinction, or at least marginalization, of laptop and desktop computing, we need to consider the possibility that most personal computing will be done with smaller screens on mobile devices. Are we there yet? No.
In the meantime, however, mobile devices offer us more efficient interactions with our surrounding environment. These interactions include navigating and wayfinding to a destination, or communicating with other people when environment, protocol, or sometimes earsplitting noise prevents face-to-face interaction. They also offer us opportunities for analysis on the go, which could include tactical contexts like:
- Emergency management
- Field engineering
- Battlefield analysis
In these examples, we need a device that lets us take our situational awareness and make assisted decisions in the midst of a dynamic environment. Would we want force-directed graphical displays, coupled with regression-line development, or simply the ability to sort and filter data into discrete categories? The answer, with some further refinement and exploration, is probably yes.
That’s the use case, but what about the heart of the UI? Nick’s ‘pecky’ comment is a nice one, especially suited to the presumably bigger, more rugged digits of boots-on-the-ground personnel, versus their slender, tender-fingered counterparts sitting in the command posts. In the field, the bolder gestures of pinching, sweeping, and shaking would be more accommodating.
However, is this a sensible, intuitive interface for analysis? One of the most illustrative perspectives on UI came to me from Golden Krishna’s ongoing mantra (adapted, apparently, from Donald Norman): “the best interface is no interface.”
His most poignant, illustrative example of the interface problem is about the smartphone that opens your car door. After walking us through a sophisticated, cool, but obviously tedious thirteen-step process, he compares this experience to a desired standard:
- A driver approaches her car.
- The car doors unlock.
- She opens her car door.
Krishna concludes: “Anything beyond these three steps should be frowned upon.”
I have become very comfortable with menu-driven interfaces for selecting, filtering, and rendering visualizations of data. I’m not the only one who feels this way. Bret Victor’s (aka WorryDream) latest masterpiece, Drawing Dynamic Visualizations, has at its heart the WIMP interface, cleverly nested in a wonderful new interface for developing visualizations.
Still, Bret’s example isn’t for a mobile context. My alternative solution to the ‘pecky’ mobile device problem? It’s pretty simple, borrowing from Golden Krishna:
- Use a gesture to make the menu bigger.
- Use the menu.
- When done, use a gesture to make the menu smaller again.
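To make the interaction concrete, here is a minimal sketch of those three steps as state logic. This is purely illustrative: the names (`MenuState`, `applyGesture`, the gesture labels) are my own assumptions, not part of any real toolkit or of the Carnegie Mellon work.

```typescript
// Hypothetical sketch of the three-step menu interaction described above.
// All names here are illustrative assumptions, not a real API.

type MenuState = { expanded: boolean };

type Gesture = "spread" | "pinch" | "tap";

// Interpret a gesture against the current menu state:
// a spread enlarges the menu, a pinch shrinks it back,
// and taps only matter while the menu is expanded.
function applyGesture(state: MenuState, gesture: Gesture): MenuState {
  switch (gesture) {
    case "spread":
      return { expanded: true };  // step 1: gesture to make the menu bigger
    case "pinch":
      return { expanded: false }; // step 3: gesture to make it smaller again
    case "tap":
      return state;               // step 2: use the menu (selection handled elsewhere)
  }
}

// Walking through the three steps:
let menu: MenuState = { expanded: false };
menu = applyGesture(menu, "spread");
console.log(menu.expanded); // true
menu = applyGesture(menu, "tap");
menu = applyGesture(menu, "pinch");
console.log(menu.expanded); // false
```

The point of the sketch is how little state the pattern needs: the menu is either in the way or it isn’t, and two gestures toggle between the two.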
This brings up an important point, and a big plus of the interface the Carnegie Mellon team proposes. In my example, making something bigger, especially on a mobile device, temporarily occludes the visualization. It’s temporary, but it happens at the very moment when I most need to be watching the visualization for feedback on my actions. And that's why the TouchViz gestural interface is very compelling.
However, there are workarounds. Screens can easily be split; within the split, views can be scaled and zoomed. We have choices here that don’t require a major re-imagining of the interface.
To be clear, I’m no fan of skeuomorphs or other stubborn refusals to move on. And I think the Carnegie team is on to something cool. But when I think of simply cool, I’m expressing the Italian sprezzatura version of cool. The sense of simple effortlessness or contextual ‘fit’ isn’t here yet in this particular interface.
The Carnegie team’s doing great work, and they are heading in the right direction. It’s still a work in progress. I hope they keep working on it.