Sustainable Virtual Design at the Social Level Part II: Resolving Design Strategy Debates


So let’s attempt a verbal (not LCA or other formal) analysis of browser sniffing versus feature detection.

Our goal is to apply sustainability thinking as we work out how to handle access from a diverse array of devices and browsers. Let's assume we're planning to:

  • Adapt our pages to our audience.
  • Try to support access at the HTML5/CSS3 level.
  • Apply Progressive Enhancement to ensure basic content is available to everyone.
  • Apply (Pragmatic) Responsive Design to adapt our layout for different screen dimensions and sizes.

The goal, according to sustainability theory, is to:

  1. Provide for the maximally diverse audience for our content.
  2. Use standards to maximize durability and re-usability of our code and data.
  3. Minimize our energy and resource consumption.
  4. Source the “greenest” bits and CPU cycles we can for doing the analysis.
  5. Make sure our solution will continue to work in the future, ideally without constant active maintenance.

Now, we figure out the pros and cons of different approaches…

Provide for a maximally-diverse audience

Right off, we realize that we aren't trying to sniff browsers. Browsers are not our audience. Instead, we're trying to identify users, or rather, users grouped into classes of service. We want to adapt content to broad, rather than narrow, classes of service. We also want those classes to remain meaningful in the future.

So, we can't adopt a 'my browser or nothing' strategy – meaning users must run the same computer and software the developer used in order to see the site properly. Unfortunately, this strategy is all too common.

We also cannot use a 'binary' solution – specifying support levels for a few browsers (e.g. Chrome vs. IE) and ignoring the rest.

In fact, we should avoid stupid, browser-centric discussions that mislead us about our audience. Collectively, Microsoft Internet Explorer in April 2012 had about 25% market share. It can’t be ignored, especially by dividing up service between IE6, 7, 8, 9+ and saying individual segments are too small to care about. This is browser fanboy-ism at the expense of users. Equivalent bad arguments can be seen for Chrome vs. Firefox, or Android vs. iOS. Who cares? We want to support our audience, not prove that we were right about a corporation making the best software.

Once enough time has passed and everyone is in our top tier of service (HTML5 and CSS3), we can develop a new strategy. Until then, we monitor tiers of service according to our existing paradigm.

We want to identify users so that we can develop a good Progressive Enhancement/Responsive Design strategy that is practical to implement in our workflow and with our content.

The big picture with our audience is that:

  • An increasing number are on cellphone-type devices (excluding tablets) – growing from roughly 4% to 8% between 2011 and 2012
  • An increasing number are on tablets
  • A slowly-decreasing number are on desktops, but access is likely to be significant even 10 years from now
  • Some are on “ancient” browsers which don’t support JavaScript or CSS media queries well enough for a pure client-side feature detection strategy.
  • All browser makers and device manufacturers are moving toward HTML5 and CSS3 support. However, many current mobile browsers don't support them yet; over time, more and more will
  • Vendors are largely irrelevant, except when a vendor is associated with no support for HTML5 and CSS3
  • Platform/OS is also largely irrelevant
  • “Bots” scanning the Internet can only interpret text, and don’t benefit from dynamic data, animations, or even images

The world of “living fossil” browsers has the following structure:

  • Old desktop browsers support basic HTML and CSS
  • Internet Explorer before version 9 can support most pages with the help of some polyfills and shims, e.g. via Modernizr
  • Old cellphone browsers run WAP/WML – nearly impossible to support for small organizations. Large organizations can run sophisticated content-adaptation services, but this is impractical for the majority of web designers/developers
  • Feature phones typically can support basic HTML, but have limited CSS and JavaScript

From a sustainability perspective, it matters little that new browsers have some "special" features that make them cutting-edge. So, even if Google is using SPDY to speed things up, we don't care. What we care about is the proportion of users actually benefiting from SPDY. Yiibu seems to have adopted a "living fossil" approach: a look at their browser/device spreadsheet shows the age of each browser/device.

We should also keep in mind that screen size and resolution will always be the first thing to worry about in a Responsive Design strategy. So, the better we are at detecting screen dimensions and support for CSS media queries, the better for the overall system. Support for JavaScript and HTML5/CSS3 comes afterwards. Old cellphones, feature phones and IE before 9 are our significant problems here.

Our content adaptation problem is not about the future – we are adapting to the past. We don’t care about what version Firefox is in during 2016. The “triumph” of HTML5 has in all probability created a long-term plateau of features for the web. During the next decade, changes in browsers will be much more standards-based than before. Our scripts should handle the “browser-challenged” user with a “living fossil” browser.

We should look at methods that detect "most browsers most of the time" without getting hung up on the occasional rare browser we leave out. This is essentially the Pragmatic Responsive Design strategy described by Stephanie Rieger of Yiibu, also referred to as "Mobile First Responsive Design." There are some great notes on the concept on Luke Wroblewski's website. This means we don't have to be purists – if we get something by sniffing proprietary HTTP headers like those created by Opera Mini, that's OK. The key is to restrict sniffing to old, rather than new, browsers. Sometimes we'll sniff devices rather than browsers, and that's OK as well. We just want to get at user capabilities and support.
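As a concrete example of this kind of pragmatic sniffing, here is a minimal PHP sketch of an Opera Mini check on the server side. The X-OperaMini-Phone-UA header is one of the proprietary headers Opera's proxy adds to requests; the surrounding logic is only an illustration, not a complete detector:

<?php
// Pragmatic sniff for Opera Mini, a proxy/pre-rendering browser.
// Opera Mini forwards requests through Opera's servers and adds
// proprietary headers; either the User-Agent string or the
// X-OperaMini-Phone-UA header identifies it. Sketch only.
function is_opera_mini() {
    $ua = isset($_SERVER['HTTP_USER_AGENT']) ? $_SERVER['HTTP_USER_AGENT'] : '';
    return (stripos($ua, 'Opera Mini') !== false)
        || isset($_SERVER['HTTP_X_OPERAMINI_PHONE_UA']);
}

if (is_opera_mini()) {
    // Treat as a pre-rendering client: keep JavaScript minimal and
    // assume it may only run once, on Opera's servers.
}
?>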

If we think in terms of sustainability, Pragmatic Responsive Design prevents us from becoming "too perfect." For many sites, using a commercial device database would be overkill. Also, as detection approaches perfection we are tempted to micro-configure our adaptation, which makes it more expensive at both the development and the implementation (page speed and energy use) level.

A loose strategy doesn't require a complete database of every browser, like DeviceAtlas, for support. However, it does mean being aware of long-term browser and device trends (not absolute market share) via sites like GetClicky, W3Counter and StatCounter. We should avoid statistics that focus on the browser itself and instead use stats about features, e.g. screen sizes and resolutions.

We should also wish for "one web," but recognize that it hasn't happened yet. The newest stuff represents a chance for "one web," but sustainability frameworks all emphasize maximizing access to products and services over chasing the cutting edge.

We should also remember that different stats websites provide different results, so they shouldn't be used to plan down to the decimal place. StatCounter and Google Analytics often vary widely in their computations of visitor statistics. We should also try to rely on statistics collected on the server side, if available. If we use JavaScript to do web page analytics, we'll miss the huge number of old mobiles outside the USA that can't run the analytics JavaScript. In other words, script-based analytics over-estimates the number of smartphones and under-estimates the number of older devices that can't handle JavaScript. Google Analytics has a server-side PHP implementation at http://code.google.com/p/php-ga/.

If this is a problem for a client or business stakeholder, we should educate them about the unsustainable nature of "perfect" browser sniffing.

We should remember that the US is not representative of current and future Internet access. Worldwide, most people use mobiles rather than desktops to surf the Internet. In addition, most cellphones at present are not smartphones, though the percentage will rise to a majority within several years. This means that designing for iOS is not designing for the Internet audience.

We should also take note of the recent trend toward "pre-rendering" web pages in the cloud. For example, Opera Mini and the Kindle Fire in Silk mode both do some of their rendering on the server side. In the case of Opera Mini, your JavaScript has about 5 seconds to execute before the page is downloaded, often to devices with little or no JavaScript support.

We must include developers as a segment of our audience. Since there are many small developers who aren't hardcore WPO (web performance optimization) engineers, we must also find a solution that everyone can apply, without the infrastructure or fees required by some high-end solutions. Our sustainable solution should target the low end. Larger sites can use services like DeviceAtlas and WURFL. Medium sites might use the lower-cost Handsetdetection.com. Smaller sites can't, or in practice they will not.

This is where the "nudge" feature of sustainability thinking comes in. We want the "choice architecture" for creating websites to include a sustainable form of user identification, and make sure it can be used by the majority of developers. The best way these days is to create a simple solution and put it into a "boilerplate" system of the kind increasingly used by the majority of web designers and developers. By making the boilerplate "mobile first" we nudge developers and users in the direction of energy-efficient devices.

Bots as a sustainability issue – removing bot access to improve a site's sustainability score

Web bots are an issue for sustainability, since they sap server power with their requests. Many are also malicious, trying to sniff out admin logins or restricted information. In other words, bots render the web less sustainable. Since bot makers constantly change their user-agent strings to thwart detection, blocking them by user-agent would require a reliably updated database. But there is a better way – as shown by Jeff Starr at Perishable Press, the best way to nab bots is to look at how they request information.

Here is a bot detector whose construction is described in a series of posts – partly as Apache rewrites in your .htaccess file, partly as server-side code. I'm really impressed at how this author thought through sustainability issues in creating this blacklist.

http://perishablepress.com/5g-blacklist-2012/
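To give a flavor of the request-based approach (without reproducing the actual 5G rules), here is a rough PHP sketch. The patterns below are illustrative placeholders only – for a real site, drop in the maintained 5G Blacklist rewrites instead:

<?php
// Sketch of request-based bot filtering in the spirit of the 5G Blacklist.
// Instead of matching user-agent strings (which bots spoof freely), we
// inspect HOW the request is made. Patterns are examples, not the 5G list.
function looks_like_malicious_request() {
    $uri   = isset($_SERVER['REQUEST_URI'])  ? $_SERVER['REQUEST_URI']  : '';
    $query = isset($_SERVER['QUERY_STRING']) ? $_SERVER['QUERY_STRING'] : '';

    $bad_uri   = '~(\.\./|/etc/passwd|/proc/self|base64_encode\()~i';
    $bad_query = '~(<script|%3Cscript|GLOBALS\[|mosConfig)~i';

    return preg_match($bad_uri, $uri) || preg_match($bad_query, $query);
}

if (looks_like_malicious_request()) {
    header('HTTP/1.1 403 Forbidden');
    exit;
}
?>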

Use standards to maximize durability and usability

It would be nice if user-agent strings or other browser-supplied information consistently broadcast features, but they don't. The current structure of most UA strings reflects one-upmanship between browser vendors trying to steal pages from each other so as to rise higher in traffic reports. Even though major browser makers are trying to standardize their UAs, that isn't going to change this messy picture. In addition, there are lots of nonstandard strings, especially among non-human agents like spiders, page scrapers and compilers.

So, any standards we implement will be on the processing side, since we can't expect the community to standardize feature reporting in the near term. Our code and database should try to follow existing standards. However, here, as with user-agent strings themselves, there isn't a real standard. The nearest thing is the browscap.ini file, maintained by Gary Keith, which is used by the PHP function get_browser(). As a standard, it has the problem of being a relatively large .ini file. In addition, many of the features matched to specific user-agents are obsolete and/or irrelevant to the desktop-versus-mobile issue.
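For reference, here is roughly what using get_browser() looks like. It only works when the host's php.ini points the browscap directive at a current browscap.ini file (many shared hosts don't set this), and the exact property names depend on the browscap version in use:

<?php
// get_browser() matches the User-Agent against browscap.ini and returns
// an object (or, with the second argument, an associative array) of
// capabilities - or false if the browscap directive isn't configured.
$caps = @get_browser(null, true); // null = use the current request's User-Agent

if ($caps !== false) {
    $is_mobile   = !empty($caps['ismobiledevice']);
    $has_js      = !empty($caps['javascript']);
    $css_version = isset($caps['cssversion']) ? (int) $caps['cssversion'] : 0;
    // ...map these onto our own tiers of service rather than using them raw
}
?>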

So, it is OK to develop a custom solution to storing our information and coding our classifier.

There is also little standardization in creating tiers of service, though several grading schemes have been, or are being, used.

So, it is OK to develop our own tiers of service, while leaning heavily on “best practices” from these other attempts to define user groups.

Nice summary here: http://www.sprymedia.co.uk/article/Graded+Technology+Support

If there are a few absolutely reliable ways to recognize user classes, we should use them. These should "trump" comprehensive systems that apply one massive theory (or database, or set of regular expressions) to the problem by identifying specific devices and browsers.

We can take some guidelines for support from JQuery Mobile’s Graded Browser Support:

  • A Grade – full ajax support
  • B Grade – JavaScript, but no Ajax navigation
  • C Grade – HTML only, with all the work done server-side

The matrix is at the link below:

http://jquerymobile.com/original-graded-browser-matrix/

However, this may be too limiting for a sustainable strategy – we shouldn't just lump everything except recent smartphones together as "old". In particular, we don't want to download stuff to old mobiles that they can't process, if we can avoid it. Here we depart somewhat from the three categories in Pragmatic Responsive Design (Base Mobile, Mobile with CSS media queries, Desktop).

Yahoo! originally defined an additional "X Grade" covering unknown browsers, as well as new versions of old browsers. X-Grade assumes that the browser is modern.

Pre-rendering browsers like Opera Mini and the Kindle Fire's Silk browser seem to require a separate class, since they support advanced JavaScript for a few seconds on the server, then only simple JS once the pre-rendered page is downloaded.

Known text readers (e.g. Lynx, text-based cellphones) seem like another class, a "T" grade. Here, we are interested in not downloading big bitmaps and media files, improving overall sustainability by reducing energy and resource consumption.

Bots and search spiders are essentially text-reading users. By default, we don't need to adapt content for them, unless a project has an explicit SEO strategy, in which case we could define a "Bot" grade. But since bots we do want visiting our site don't need images or rich media anyway, it usually makes more sense to fold them into our text-reader class.

Minimize energy and resource consumption, while sourcing "green" bits

If we worry about absolute energy and resource consumption, we could make a case for extensive client-side adaptation. If our access is mostly mobile, those devices use very little energy relative to desktops. But more critical thinking reveals that a tiny slice of server time might be better. In general, the power used at the server level is "greener" than that on the client. Servers are increasingly energy-efficient, and housed in energy-efficient data centers.

So, it is OK to do work on the server, even for clients that have good JavaScript feature detection available.

Make sure our solution continues to work in the future

This is an area that requires re-thinking. The main reason we aren't supposed to sniff browsers is that they are constantly changing, and we can't keep up with it all. This is quite true. The user-agent and other information available on the server is, and will remain, chaotic. However, we aren't trying to keep up to date with browsers. Instead we are trying to support users at a particular tier of service. Remember, current trends point to the following features becoming "one web":

  • Full support for HTML5 and CSS3
  • Full support for “modern” (DOM2) JavaScript
  • Full support for CSS media queries

In other words, to be sustainable for the next decade, we only have to sniff the old. We don't have to look at the new. Therefore, our classifier should only worry about old hardware and software that sits very low in the tiers. We don't have to look at the cutting edge at all (or, rather, that is a separate issue in designing and building a website). IE6 isn't going to be updated – it is a "living fossil" from an earlier era of the Internet. We don't have to worry about changes to it. In contrast, IE10 will probably be replaced with IE11, IE12, etc., all of which will get our top level of support. So, we don't care about them.

The strategy of only looking for old browsers (rather than trying to identify all mobiles, all Nokia cellphones, or all Microsoft products) is the one that is most sustainable.

What this adds up to

After considering these points, we can now define a strategy for adapting our content to our user audience that maximizes sustainability.

  • Pure JavaScript feature detection won't cut it, since we focus on the old, rather than the new, browsers/platforms. This runs against many recommendations currently seen in the Responsive Design literature.
  • The solution has to be simple and free, so it can be included in a boilerplate, and easy to install even for someone who is more "web designer" than "web programmer."
  • The solution should interface with existing client-side feature-detectors, like JQuery Mobile and Modernizr.
  • Sourcing server-side "green bits" on the web host is OK, as long as we don't have a super high-traffic site with a scalability issue.
  • A Graded Browser Support strategy is in order, but we are going to define it in terms of users, not browsers.

Remember, we ONLY want to find the browsers that can't do their own detection on the client side, and to leave in or remove some page elements as necessary.

Primary classes

  1. A grade – FULL USER – HTML5/CSS3 or HTML4/CSS2 with media queries present, JavaScript DOM2 or better, gets full experience. Our default CSS defines HTML5 elements as ‘block’ or ‘inline’ so basic markup is possible for browsers that don’t explicitly recognize these tags yet.
  2. B grade – FULL USER, but with quirks requiring polyfills. For example, there are lots of polyfills for IE, and document.createElement() can make HTML5 tags known to older IE versions, so it is "B" grade.
  3. X grade - FULL-USER, unknown, but assumed new and implicitly “A” level, gets full experience
  4. C grade - BROWSER-CHALLENGED, minimal JavaScript and/or no CSS media queries, gets basic HTML
  5. W grade – BROWSER-CHALLENGED, WML instead of HTML. Use a redirect rather than a simplified page in this case – it's just too hard without a content-adaptation system.

Modifiers that might be used to configure HTML prior to sending it to the client (a classification sketch follows the list):

  • X – Ajax present/absent for creating dynamic interfaces
  • P – cloud pre-rendering, so JavaScript may have to run in staggered configuration
  • T – text-only, so don’t bother sending images
  • V – video and audio support
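To make the grading concrete, here is a minimal PHP sketch of how a server-side bootstrap might map detected features onto these grades. The feature names, tests, and their ordering are hypothetical placeholders for illustration, not a finished classifier:

<?php
// Minimal sketch of assigning a grade from server-side detection results.
// $features is assumed to be filled in earlier by whatever user-agent and
// header heuristics the bootstrap uses; field names are illustrative.
function classify_user(array $features) {
    if (!empty($features['wml_only'])) {
        return 'W';  // WML-only device: redirect rather than adapt
    }
    if (empty($features['recognized'])) {
        return 'X';  // unknown browser: assume new, treat as a full "A"-level user
    }
    if (empty($features['javascript']) || empty($features['media_queries'])) {
        return 'C';  // browser-challenged: basic HTML, all work done server-side
    }
    if (!empty($features['known_quirks'])) {
        return 'B';  // full user, but needs polyfills (e.g. IE before 9)
    }
    return 'A';      // HTML5/CSS3 (or HTML4/CSS2 + media queries), DOM2 JavaScript
}

// Example: a recognized "fossil" feature phone with basic HTML only => 'C'
$grade = classify_user(array(
    'recognized'    => true,
    'javascript'    => false,
    'media_queries' => false,
));
?>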

Data sources should be public and free for boilerplate distribution. “Hooks” can be provided for using services like DeviceAtlas and/or WURFL via Scientia Mobile. Since we have a “past” rather than a “future” orientation, we assume that all future browsers will have X (A grade) support. We only have to worry about “fossil” browsers and devices, so we won’t need to constantly update the dataset.

Public sources:

We should create the following module for “Green Boilerplate,” implementing our Graded Browser Support strategy:

  • Specify our Graded Browser Support.
  • Develop a support spreadsheet implementing our Graded Browser Support, hosted in Google Docs for accessibility.
  • Make a server-side “bootstrap” providing some feature detection, and handling the “browser-challenged” Grade C cases.
  • Forward data to a client-side JavaScript for improved feature detection, if necessary. Yiibu has a strategy where they write a cookie and forward it to the server; if the server doesn't detect the cookie, it drops back to a basic HTML strategy. Their blog notes that this solution is quirky on some browsers.
  • Code in PHP and ASP.Net/C#, the most widely used platforms. PHP has some pretty awful problems (see this amazingly good rant) but is easy to install, which means it will support more designers and developers. Using C# or Python limits things so that you need a specialist developer with skills not found at many design shops. Of the two, C# is vastly easier to download and install (just grab a free copy of Visual Studio Express), whereas Python will once again require a specialist developer.
  • Make it one-file if possible, or auto-build other files (e.g. a server-side cache) so installation is extremely simple. Don’t go out to external databases or APIs.
  • The server-side script should be inserted into a standard HTML “boilerplate,” which is basic enough for developing universal access via Progressive Enhancement.
  • Do some feature detection on the server side. We may sniff browsers, but we don't output that information; we abstract it to our tiers of service.
  • The script always tries to determine screen size first, and adjusts accordingly.
  • The server-side code writes JavaScript for additional client-side detection as necessary. It should also be used to load, or not load, other "heavy" resources like large images or rich media.
  • Our JavaScript accesses the "salt" from the server side and tries to extend it. It runs quickly, spawning another object if necessary, so it can finish even if it is being pre-rendered in the cloud.
  • Very advanced features are detected in a new JS object created from the simpler one, which replaces it. This could be something like JQuery Mobile or Modernizr.
  • Report user level of service according to our guidelines, rather than browser/platform/etc.

Since we can't detect full features on the server side, we should use the server detection as a "salt" for client-side detection. In our case, our PHP/C# script will write a JavaScript object down to the client, with as many of its properties filled in as possible. The JS object should run quickly so that it can help rendering on cloud pre-rendering systems like Opera Mini. It should leave the advanced stuff for users that can support it.
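Here is a rough PHP sketch of what that "salt" hand-off might look like. The GBP namespace and property names are placeholders for illustration, not an actual Green Boilerplate API:

<?php
// Sketch of the server-to-client "salt": embed the server's findings as a
// small JavaScript object that client-side detection can then extend.
function write_salt_script(array $detected) {
    $json = json_encode($detected);
    return "<script>\n"
         . "  // Seed object written by the server-side bootstrap\n"
         . "  var GBP = window.GBP || {};\n"
         . "  GBP.server = {$json};\n"
         . "  // Client-side feature tests fill in what the server could not\n"
         . "</script>\n";
}

echo write_salt_script(array(
    'grade'        => 'A',
    'mediaQueries' => true,
    'prerendered'  => false,  // e.g. true for Opera Mini (the "P" modifier)
));
?>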

So, here are the steps and their implementation:

Step 0: Use Apache rewrites like those in the 5G Blacklist, aimed at requests rather than user-agents, to thwart web bots. This reduces problems with bot user-agents contaminating our detection strategy. It also helps naive web designers/developers avoid some XSS attacks, since all they have to do is drop in the 5G Blacklist rewrites.

Step 1: A PHP or ASP.NET/C# script does preliminary feature detection, employing ‘any means necessary’ to detect them on the server-side. It then adjusts the content to our lowest level of support (no JavaScript). If it hits a “W” grade browser, it redirects. If it hits a “C” grade browser it follows Progressive Enhancement and serves basic content via HTML. Finally, it writes a JavaScript object with the properties it discovered to the output.
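A hypothetical top-level flow for Step 1 might look like the PHP below. It reuses classify_user() and write_salt_script() from the sketches above, plus a detect_features_server_side() helper that is purely a placeholder; the template paths are also invented for illustration:

<?php
// Hypothetical Step 1 flow, tying the earlier sketches together.
$features = detect_features_server_side();  // UA + header heuristics (placeholder, not shown)
$grade    = classify_user($features);       // see the grading sketch above

if ($grade === 'W') {
    header('Location: /wml/index.wml');     // redirect WML-only devices
    exit;
}
if ($grade === 'C') {
    include 'templates/basic-html.php';     // Progressive Enhancement baseline, no JS
    exit;
}
echo write_salt_script(array('grade' => $grade) + $features);  // seed the client-side JS
include 'templates/full.php';
?>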

Step 2: The intermediate JavaScript relies on the most basic features, so it will be supported on bad old cellphones. It does additional feature detection when possible, and further configures output, and bootstraps more sophisticated libraries if warranted.

Step 3: The JavaScript loops back with advanced feature detection libraries. Use existing libraries like JQuery Mobile or Modernizr, or lighter "green-ingredient" equivalents from MicroJS (http://www.microjs.com).

This strategy is exactly what I’m implementing in my Green Boilerplate (http://www.greenboilerplate.com) project. Check the site to see the progress, and feel free to contribute questions and comments here.

In a future post, I’ll go through a similar sustainability strategy for doing page analytics. Hint: we’ll probably look once again at server-side solutions.

