Sustainable Virtual Design at the Social Level Part I: Resolving Design Strategy Debates


While many posts here have concerned the efficiency aspect of sustainable virtual design, it’s important to remember that all sustainability frameworks – including this one – have a social aspect as well. In the case of the web, this includes the audience as well as design strategies.

Sustainability is a meta-language, meaning that we can ask a sustainability question across several disciplines. In particular, we often jump from local optimization to consider the overall impact on the entire system – the internet versus a site.

Recall what a general definition of sustainability might mean for the web:

Sustainable Virtual Design provides the maximum service to the largest, most diverse audience on the web today, while not compromising our ability to serve that audience in the future.

With this in mind, I’d like to apply sustainability to an issue that has been important to the web since its founding, and which recently heated up with discussions of Responsive Web Design – how to support your audience on different devices, and the “one web” versus “many webs” debate. I’m proposing that sustainability IS the framework that can help everyone decide for the best, by looking outside local systems (designers, developers, hardware makers, carriers) to the big picture.

As Nathan Shedroff says, we can move from “paper or plastic” to “bag or no bag”.

Here are some of the issues related to mobile design strategy:

  • Supporting desktop, mobile, and tablet users
  • Adapting content for high-end versus low-end browsers
  • Supporting users in the West (who often have desktop browsers) versus those on mobiles (often in Asia and Africa)

Here are some of the strategies involved in the debate:

  • Create different websites for mobile versus desktop users
  • Create one site for all with Progressive Enhancement of features
  • Sniff browsers on the server side (e.g., with browser databases) and use server-side Content Adaptation
  • Sniff features on the client side (e.g., Responsive Design) and adapt content

There are lots of good discussions of mobile versus desktop, and how to serve everyone – see http://yiibu.com for some of the best thinking. The following history is intended to illustrate the relative roles of designers, carriers, browser manufacturers, and developers in this process. In a later article we will show how each fits into an overall sustainability framework.

History of the Debate

Over the last two decades, as mobile devices have become more common, one can define several distinct eras of mobile support. Each era was defined by how desktop and mobile users were supported. The eras were partly a result of (improving) technology, but also reflect a new “mobile war” between back-end, hardware-oriented programmers on one side, and front-end, interface-oriented designers and developers on the other.

The Browser Wars – during this era, mobile access was nonexistent, and design strategies were formulated to handle incompatible browser specifications in the infamous “browser wars”. With the advent of the HTML 4 spec, along with CSS, these issues were partly solved. Internet Explorer 6 also “solved” things – by trouncing Netscape, it created a dominant standard for desktops that lasted until the mid-2000s. Thus, the desktop web became mostly platform-independent, since the vast majority of web users were running Internet Explorer on Windows. The desktop web remained in this static state during First and Second Generation mobile development (see below). This era was dominated by web designers and front-end developers.

First Generation Mobile – around 2000, the first cellphones with some ability to surf the web were released. Initially, these phones used an HTML-like language called WML (delivered via the WAP protocol rather than standard HTTP) due to tiny screens and slow connections. Sites were typically tailored to specific features of the hardware. Early mobiles commonly ran Java (J2ME), which meant that Java developers had a big hand in designing the web experience. The typical choice was to “sniff” a specific device and tailor content for it. In fact, in many cases the carrier controlled what sites could be surfed at all, or even locked the phone down to the carrier’s own portal. The programmer could assume that only certain kinds of cellphone hardware were connected to their network, and design accordingly.

Since the number of web-enabled cellphones was relatively small, this scheme worked (for a time). This era of mobile was dominated by hardcore programmers, often with little knowledge or understanding of (or even an active dislike for) the web.

It’s also worth noting that tablet computers were tried several times during this era, with some available as early as 2000. All these attempts failed. The biggest problem was that tablets typically used the same operating system as desktops, and often had no network access.

Second Generation or “Featurephone” Mobile – in the mid-2000s, phones with enhanced features began to appear. They were able to accept XHTML-MP as their default language, and sometimes had rudimentary CSS or JavaScript support. The rise of featurephones, along with the decline of WAP and WML, helped merge mobile and desktop design. However, the vast majority of designers kept creating ever-larger page layouts, essentially ignoring the mobile web.

Device Group Strategy – the greater power of featurephones was exploited by large websites and carriers through an extension of the “custom websites” strategy. A series of “Device Groups” were defined for different screens and cellphone capabilities. Then a template was created for each device group. Finally, content was delivered into each template on the server side. Typically this required re-routing users to a separate mobile site, instead of adapting the original desktop site via “graceful degradation”. This meant that content strategy split into two streams – desktop and mobile.

Browser Sniffing Strategy – to determine the specific device, server-side browser sniffing was typically applied by back-end programmers. Front-end designers were often unaware of these strategies, and unable to implement them in standard HTML/CSS/JavaScript. Both HTTP headers and User-Agents (strings sent by the browser when a page is requested) were scanned. Unfortunately, there was no standard format for these HTTP headers or User-Agent strings, and cellphone vendors and browser-makers each invented their own, non-standard forms. As a result, it became increasingly difficult to “sniff” a browser correctly, even with a massive database.
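To make the mechanics concrete, here is a minimal sketch in JavaScript of User-Agent sniffing routed through device groups. Everything here is hypothetical – the groups, patterns, and template names are invented for illustration, while real systems of the era matched against databases with thousands of device profiles:

    // A minimal, hypothetical sketch of server-side browser sniffing.
    // Real systems matched the User-Agent against databases like WURFL;
    // these three patterns are illustrative only.
    var deviceGroups = [
      { group: 'smartphone',   pattern: /iPhone|Android/i },
      { group: 'featurephone', pattern: /MIDP|Symbian|BlackBerry/i },
      { group: 'desktop',      pattern: /./ }  // fallback: everything else
    ];

    // Map a User-Agent string to the template for its device group.
    function chooseTemplate(userAgent) {
      for (var i = 0; i < deviceGroups.length; i++) {
        if (deviceGroups[i].pattern.test(userAgent)) {
          return deviceGroups[i].group + '-template';
        }
      }
    }

    // A featurephone User-Agent routes to the featurephone template:
    console.log(chooseTemplate('Nokia6230i/2.0 (03.40) Profile/MIDP-2.0'));
    // -> 'featurephone-template'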

In time, large websites began using commercial ($$$) browser-sniffers like DeviceAtlas or WURFL. The data is comprehensive, as seen on the WURFL search site (Tera-WURFL database, http://www.tera-wurfl.com). Smaller sites either ignored mobile, or used less accurate PHP scripts, often created by front-end developers with little knowledge of the mobile space. As a result, small sites were good at picking out Internet Explorer, but failed on many, if not most, of the new featurephones.

In this era, the design community remained largely (willfully?) ignorant of mobile, and tended to design for a future where screens got bigger and bigger. Mobile was ignored, since it required coding most designers couldn’t do, plus expensive resources unavailable to all but the largest sites.

Progressive Enhancement as a design strategy – around this time, Todd Parker’s Progressive Enhancement introduced an alternative strategy for supporting mobile and desktop users. Rather than making sites tailored to a small screen, or even to a specific hardware device, Parker et al. advocated creating a single website, detecting features of the end user’s device, and enhancing content as appropriate. Importantly, these ideas were developed and propagated by interface designers, rather than by back-end or Java coders. So, instead of multiple templates adapted to specific hardware, a base design “sensed” its environment and added features only as necessary or possible.

A key feature of Progressive Enhancement is that, if client-side languages like JavaScript are unavailable, the baseline content must still be delivered from the server. This is particularly important for mobiles, since so many either had JavaScript turned off or had a version incapable of feature detection.
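As a minimal sketch of the idea (the element ids below are hypothetical), a progressively enhanced page starts as plain server-delivered HTML, and a script like this – if it runs at all – tests for capabilities before layering anything on:

    // Baseline: the page works as plain HTML links and forms.
    // This script only adds behavior if the browser proves capable.
    if (document.querySelector && window.addEventListener) {
      var link = document.querySelector('#menu-link');  // hypothetical id
      if (link) {
        link.addEventListener('click', function (e) {
          // Enhanced browsers open the menu in-page...
          e.preventDefault();
          document.querySelector('#menu').className = 'menu-open';
        }, false);
      }
    }
    // ...while browsers without JavaScript, or without these APIs,
    // never run this code and simply follow the link to a
    // server-rendered menu page.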

Content Rewriting or “Content Adaptation” Networks – cellphone carriers, ignoring the trends in design, and seeking to trap users on their networks, began customizing content for specific devices. So, if a third-party website with standard HTML markup entered the cellphone network, servers would rewrite its code for delivery to a specific device. This happened without any standards, and was done in a non-transparent, proprietary way. More recently, the adaptation moved from automatic to optional, using commercial (read: $$) services like the DeviceAtlas API for resizing images.
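A rough sketch of what such a proxy did (hypothetical JavaScript – real adaptation engines were proprietary, and resize.example.net is invented here for illustration):

    // Rewrite every <img> in a page in transit so the device downloads
    // an image resized to its own screen width. A crude illustration of
    // carrier-side content adaptation, not any real product's code.
    function adaptForDevice(html, maxWidth) {
      return html.replace(/<img\s+src="([^"]+)"/gi, function (tag, src) {
        return '<img src="http://resize.example.net/?url=' +
               encodeURIComponent(src) + '&w=' + maxWidth + '"';
      });
    }

    // A 176px-wide featurephone gets a rewritten, resized image URL:
    console.log(adaptForDevice('<img src="http://example.com/hero.jpg">', 176));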

In another related move, some browsers, most notably Opera Mini, began to use pre-digested content. In this case, parts of the page deemed too complex for cellphones were pre-rendered on the server, and shipped down to the client. The newest Amazon Kindle Fire uses a similar strategy in its Silk browser. These strategies were proprietary to the browser and/or network providing the service.

Opera even abandoned HTML for OBML (Opera Binary Markup Language) to make this happen. This imposes major limitations on client-side JavaScript, as detailed on Wikipedia:

Opera Mini has limited support for JavaScript. Before the page is sent to the mobile device, its onLoad events are fired and all scripts are allowed a maximum of two seconds to execute. The setInterval and setTimeout functions are disabled, so scripts designed to wait a certain amount of time before executing will not execute at all.[41]

After the scripts have finished or the timeout (of 2 seconds) is reached, all scripts are stopped and the page is compressed and sent to the mobile device. Once on the device, only a handful of events are allowed to trigger scripts:[41]

  • onUnload: Fires when the user navigates away from a page[42]
  • onSubmit: Fires when a form is submitted[42]
  • onChange: Fires when the value of an input control is changed[42]
  • onClick: Fires when an element is clicked[42]
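As an illustration of these constraints (a sketch, not Opera’s code; the element ids are hypothetical), a timer-based script silently fails on Opera Mini, while a click handler survives:

    // On a normal browser, this banner appears after one second. On
    // Opera Mini, setTimeout is disabled, so it never appears at all -
    // anything important must be visible by default.
    window.setTimeout(function () {
      document.getElementById('banner').style.display = 'block';
    }, 1000);

    // onClick is one of the few events Opera Mini supports: the click
    // triggers a round trip to Opera's servers, which re-render the
    // page with this handler's effect applied.
    document.getElementById('more-link').onclick = function () {
      document.getElementById('details').style.display = 'block';
      return false;
    };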

Opera Mini requires browser-specific authoring, in a way that Microsoft never did.

http://dev.opera.com/articles/view/opera-mini-web-content-authoring-guidelines/

Opera Mini is even touted as “green”, since the pre-rendering happens on green servers, burning “greener” bits than the client:

http://my.opera.com/chooseopera/blog/2011/10/18/how-green-and-clean-is-opera-mini

Third Generation “Smartphones” – with the introduction of the Apple iPhone in 2007, mobiles moved from a side-branch of web development to the place where the most advanced technologies (HTML5 and CSS3) were available. This “great leap forward” advanced past desktops, which were just waking up from their long sleep as the market share of Firefox and Google Chrome began to bite into IE’s dominance. iOS, and later high-end Android, assumed high bandwidth, larger screens, and more powerful mobile browsers.

The rapid adoption of HTML5 seemed to promise a universal language optimized for supporting all web hardware, though in practice the future was envisioned as something looking like an iPhone or iPad everywhere. Tech pundits on techno blogs (all armed with the same high-end cellphones) blabbered about “Apple versus Google” as if it were relevant to the vast majority of mobile users, who had older devices. Most of these blogs and tech writers hardly considered Opera Mini, even though it rapidly became the standard browser for mobiles. Many other factors contributed to over-counting advanced devices and under-counting simpler ones, reinforcing ignorance of the featurephone market. For example, as more sites used Google Analytics JavaScript to report traffic, they completely missed visits where the provided JavaScript wouldn’t run (read: most featurephones).

Responsive Web Design – reacting to the rise of iOS around 2010, designers like Ethan Marcotte (a designer/front-end developer hybrid) began advocating a theory of Responsive Web Design. This method used client-side CSS to sniff the screen dimensions, and select a specific set of CSS rules accordingly. There was a tacit assumption that the receiving mobile supported these technologies, despite the fact that smartphones had not replaced older cellphones and featurephones. This was a case where designers, drawn to the cutting edge enabled by iOS, developed a strategy that assumed highly capable mobile browsers of the kind that – well – the kind the designers had in their own pockets.

Responsive Web Design also elaborated on older concepts of “jelly” and “fluid” layouts to make one design “flex” between several standard screen sizes. These screen sizes (once again) were typically chosen based on what was available at the high end of the cellphone market. The techniques were also practical for designers with some coding ability (read: hybrids) to implement.
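The core mechanism is a CSS media query that tests the screen width. As a sketch (with a hypothetical class name), the same test can also be run from JavaScript via window.matchMedia, an API added to browsers after the period described here:

    // The canonical Responsive Web Design form is a CSS rule, e.g.:
    //
    //   @media screen and (max-width: 480px) {
    //     .sidebar { display: none; }
    //   }
    //
    // The equivalent check from JavaScript, for illustration:
    if (window.matchMedia && window.matchMedia('(max-width: 480px)').matches) {
      // Small screen: switch on the narrow layout (hypothetical class).
      document.body.className += ' narrow-layout';
    }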

Responsive Web Design was exclusively client-side, and was coupled with a shift to JavaScript-based feature detection over server-side browser sniffing. Back-end developers were not involved. The sub-group of developers working in the old cellphone context (meaning they were dominated by the vendors and by back-end, non-web developers) had little input into this widely adopted strategy.

Critique of Responsive Web Design – in an article posted in 2010 entitled “CSS Media Query for Mobile is Fool’s Gold“, Jason Grigsby considered the problems with the Responsive Web Design paradigm for mobiles – or rather, the real mobile world versus the cutting-edge designer world. His points were, in order:

  • Mobiles have to download big images and code libraries for many responsive design strategies to work
  • Mobiles are too slow to process complex adaptive pages
  • Mobiles are used differently from desktops
  • If you feature-detect with JavaScript, you ignore everyone who doesn’t have JavaScript (low-end mobiles outside the US and Europe)

His conclusion:

  • Do more work on the server-side
  • Separate desktop and mobile sites, as was done in the good old days

Other articles on yiibu.com also critiqued Responsive Design, making the following points:

  • Most big sites sniff for browsers on the server side, and deliver custom pages
  • People in Asia and Africa have low-end mobiles, and aren’t served by Responsive Design strategies. Good discussions of the “mobile digital divide” have been developed by Bryan Rieger on Slideshare.

Finally, the whole idea of the web as it exists, along with the idea of “mobile” versus “desktop” contexts, has been discussed by Stephanie Rieger in “The Trouble with Context”.

Mobile First – this strategy, promoted by Luke Wroblewski (definitely on the design/UX side), borrowed from Progressive Enhancement to argue that the best starting point was always a low-end mobile. In this strategy, the mobile experience was seen as more user-friendly, since there was only enough screen real estate to give users what they actually needed.

In Part 2 of this discussion, I’ll consider how these various strategies and debates can be analyzed by applying a Sustainable Virtual Design framework, and provide some suggestions for the future. Sustainability helps us back up from “paper versus plastic” to “bag or no bag”. User-sniffing, anyone?
