Devices today are growing more and more complex. This means that interfaces, trackpads, mice, hand gestures, keyboards and software controls are all acquiring multiple meanings. While this may be a minor annoyance now, it is possible in the future that a “combinatorial explosion” of interface complexity will make the web less sustainable. Consider the current trends, which are happening right as the majority of the human race is becoming dependent on computers:
- Software has more and more “options” and ways to be configured
- Interfaces controlling those options keep getting more elaborate
- More and more gestures (mouse, touch, keyboard) have more than one meaning
- Any one operation has a significant chance of triggering the wrong operation instead, leading to backtracking and wasted time
A session with any of the new ultrabooks gives you a feeling for this. The newer trackpads are larger, and several gestures are possible across the trackpad. Until you learn the “quadrants” of the trackpad, you spend lots of time executing the wrong commands and backtracking. In spite of touch screens, I find I fall back more and more on key commands, which are less ambiguous.
In fact, during the typing above, I managed to trigger a drag, plus a screen zoom by accident!
It would be interesting to see how, for a set of tasks that people have used PCs for since the 1980s, the time spent now compares with the past. We have faster computers and more access to information, but we have to do more due to interface complexity. Older systems were slow, but the range of “features” was smaller. I suspect we might see an effect similar to one already documented for software speed. In a study done in 2007, a 1986 Mac Plus running era-appropriate versions of Word and Excel was able to trounce an AMD dual-core system built 20 years later. Despite the newer computer being roughly 100 times faster than the old one, the net time for tasks like word search/replace had gotten slower.
Before you put this at MSFT’s door, consider how clunky all software seems these days. Win8 is just a symptom of the times.
Thinking in a sustainable way, we can wonder what the future would be like. If computer interfaces keep getting more complex, there will come a point where it is unproductive to use them. We would have a software version of Joseph Tainter’s “complexity curve,” a centerpiece of his theory of the collapse of civilizations.
Of course, the counter-argument is that the wave of “touch” devices, and devices lacking keyboards, are easier to use. This is true – but another feature of “apps” versus “productivity software” is that default choices are made by the software. In other words, a typical Android app can be operated with a few swipes – but “configurability” suffers. This typically is not seen as bad in the app design community. In fact, it may be seen as a virtue, if you adopt a theory of “nudge” for interface design.
What is “nudge?” This is a theory, first articulated in detail by Cass Sunstein and Richard Thaler in their 2008 book. The idea is that you can steer a middle ground between providing “choice” and forcing an action on your users. Lots of old software enshrines “choice” – it is a Swiss Army knife of options. The users must make complex decisions regarding these choices, which the software presents as relatively equal. This was also characteristic of Web 1.0, when offering a big menu of links was seen as morally superior to providing a preferred path, even if many users were confused.
Well, Web 2.0 introduced choice “nudging.” While one still might have the same options, the one deemed best for the majority of users was highlighted, and less common choices were made less visible. A Web 2.0 page had a few big buttons, and a little link in the corner for options only a few people want. The result is that the majority of people “tend” to make the correct choice, guided by the design hierarchy along a preferred path.
In the case of web apps, the limits of touch interfaces have pushed “nudge” to the extreme. Apps make choices that used to require a trip to the “config” page of a program or site. In many cases, the preferred path is defined by looking at a “cloud” database showing the most common choices of other users. This makes things simple, but has the side effect of “norming” – people are “nudged” to the consensus, and explore the “edge” less.
So, the long-term path to software interface sustainability appears to be through asking the “cloud” what to do. Is this a good thing?