So let’s attempt a verbal (rather than a formal life-cycle assessment, or LCA) analysis of browser sniffing versus feature detection.
Our goal is to use sustainability thinking as we go through the process of thinking through how to handle access from a diverse array of devices and browsers. Let’s assume we’re planning to:
- Adapt our pages to our audience.
- Try to support access at the HTML5/CSS3 level.
- Apply Progressive Enhancement to ensure basic content is available to everyone.
- Apply (Pragmatic) Responsive Design to adapt our layout for different screen dimensions and sizes.
The goal, according to the sustainability theory, is to:
- Provide for the maximally diverse audience for our content.
- Use standards to maximize durability and re-usability of our code and data.
- Minimize our energy and resource consumption.
- Source the “greenest” bits and CPU cycles we can for doing the analysis.
- Make sure our solution will continue to work in the future, ideally without constant active maintenance.
Now, we figure out the pros and cons of different approaches…
Provide for a maximally-diverse audience
Right off, we realize that we aren’t trying to sniff browsers. Browsers are not our audience. Instead, we’re trying to identify users, or rather, users with classes of services. We want to content-adapt with broad, rather than narrow, classes of services. We also want these classes of services to apply in the future.
So, we can’t adopt a ‘my browser or nothing’ strategy – one that requires users to run the same computer and software as the development machine to see the site properly. Unfortunately, this is a typical strategy.
We also cannot use a ‘binary’ solution – specifying support levels for some browsers (e.g. Chrome vs. IE) while ignoring the others.
In fact, we should avoid stupid, browser-centric discussions that mislead us about our audience. Collectively, Microsoft Internet Explorer in April 2012 had about 25% market share. It can’t be ignored, especially by dividing up service between IE6, 7, 8, 9+ and saying individual segments are too small to care about. This is browser fanboy-ism at the expense of users. Equivalent bad arguments can be seen for Chrome vs. Firefox, or Android vs. iOS. Who cares? We want to support our audience, not prove that we were right about a corporation making the best software.
After enough time has passed, and everyone is in our top-tier (HTML5 and CSS3) of service, we can develop a new strategy. Until then, we just monitor tiers of service according to our existing paradigm.
We want to identify users so that we can develop a good Progressive Enhancement/Responsive Design strategy that is practical to implement in our workflow and with our content.
The big picture with our audience is that:
- An increasing number are on cellphone-type devices (excluding tablets) – 4% to 8% between 2011 and 2012
- An increasing number are on tablets
- A slowly-decreasing number are on desktops, but access is likely to be significant even 10 years from now
- All browser makers and device manufacturers are moving to support HTML5 and CSS3. However, many on the mobile side do not yet; in the future, more and more will
- Vendors are largely irrelevant, except for when a vendor is associated with no support for HTML5 and CSS3
- Platform/OS is also largely irrelevant
- “Bots” scanning the Internet can only interpret text, and don’t benefit from dynamic data, animations, or even images
The world of “living fossil” browsers has the following structure:
- Old desktop browsers support basic HTML and CSS
- Internet Explorer before version 9 could fully support pages with the help of polyfills and feature-detection libraries, e.g. Modernizr
- Old cellphone browsers run WAP/WML – nearly impossible to support for small organizations. Large organizations can run sophisticated content-adaptation services, but this is impractical for the majority of web designers/developers
From a sustainability perspective, it doesn’t matter that new browsers have some “special” features that make them cutting-edge. So, even if Google is using SPDY to speed things up, we don’t care. What we care about is the proportion of users using SPDY. Yiibu seems to have adopted a “living fossil” approach – a look at their browser/device spreadsheet shows the age of each browser/device.
Our content adaptation problem is not about the future – we are adapting to the past. We don’t care about what version Firefox is in during 2016. The “triumph” of HTML5 has in all probability created a long-term plateau of features for the web. During the next decade, changes in browsers will be much more standards-based than before. Our scripts should handle the “browser-challenged” user with a “living fossil” browser.
We should look at methods that detect “most browsers most of the time” without getting hung up on leaving out the occasional rare browser. This is essentially the Pragmatic Responsive Design strategy described by Stephanie Rieger of Yiibu, also referred to as “Mobile First Responsive Design.” There are some great notes on the concept on Luke Wroblewski’s website. This means we don’t have to be purists – if we get something by sniffing proprietary HTTP headers like those created by Opera Mini, that’s OK. The key is to restrict sniffing to old, rather than new, browsers. Sometimes we’ll sniff devices rather than browsers, and that’s OK as well. We just want to get to user capabilities and support.
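As an example of this kind of pragmatic sniffing, Opera Mini’s proxy servers send proprietary headers such as X-OperaMini-Phone-UA and X-OperaMini-Features with each request. Here is a minimal server-side sketch in JavaScript of reading them; the header names are real, but the function name and classification labels are my own invention:

```javascript
// Pragmatic sniff: Opera Mini's proxy sends proprietary headers that
// identify the real handset behind the proxy. The labels "proxy-mobile"
// and "unknown" are illustrative, not part of any standard.
function sniffOperaMini(headers) {
  // HTTP header names are case-insensitive, so normalize to lowercase
  var h = {};
  Object.keys(headers).forEach(function (k) {
    h[k.toLowerCase()] = headers[k];
  });
  if (h['x-operamini-phone-ua'] || h['x-operamini-features']) {
    return { class: 'proxy-mobile', deviceUA: h['x-operamini-phone-ua'] || null };
  }
  return { class: 'unknown', deviceUA: null };
}
```

Note that we never report “Opera Mini” to the rest of our code – we abstract the sniff result to a class of service.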
If we think in terms of sustainability, Pragmatic Responsive Design prevents us from becoming “too perfect.” For many sites, using a commercial database would be overkill. Also, as detection becomes perfect we are tempted to micro-configure our adaptation, which makes it more expensive both in development and in implementation (page speed and energy use).
A loose strategy doesn’t require a complete database of every browser like DeviceAtlas for support. However, it does mean being aware of long-term browser and device trends (not absolute market share) via sites like GetClicky, W3Counter and StatCounter. We should avoid statistics that focus on the browser and only use stats about features, e.g. screen sizes and resolutions.
We also should wish for “one web,” but recognize that it hasn’t happened. The newest stuff represents a chance for “one web”, but sustainability frameworks all emphasize maximizing access to products and services over the cutting-edge.
If this is a problem for a client or business stakeholder, we should educate them about the unsustainable nature of perfect browser-sniffing.
We should remember that the US is not representative of current and future Internet access. Worldwide, most people surf the Internet on mobiles rather than desktops. In addition, most cellphones at present are not smartphones, though the percentage will rise to a majority within several years. This means that designing for iOS is not designing for the Internet audience.
We must include developers as a segment of our audience. Since there are many small developers who aren’t hardcore website WPO engineers, we must find a solution that everyone can apply, without the infrastructure or fees required by some high-end solutions. Our sustainable solution should target the low end. Larger sites can use services like DeviceAtlas and WURFL. Medium sites might use the lower-cost Handsetdetection.com. Smaller sites can’t, or in practice they will not.
This is where the “nudge” feature of sustainability thinking comes in. We want to make the “choice architecture” for creating websites include a sustainable version of user identification, and make sure it can be used by the majority of developers. The best way these days is to create a simple solution, and put it into a “boilerplate” system increasingly used by the majority of web designers and developers. By making the boilerplate “mobile first” we nudge developers and users in the direction of energy-efficient devices.
Bots as a sustainability issue – blocking bot access to improve a site’s sustainability score
Web bots are an issue for sustainability, since they sap server power with their requests. Many are also malicious, trying to sniff admin logins or restricted information. In other words, bots render the web less sustainable. Since bot-makers constantly change their user-agent strings to thwart detection, catching them by UA would require a reliably updated database. But there is a better way – as Jeff Starr shows at Perishable Press, the best way to nab ’bots is to look at how they request information.
Here is a bot-detector whose construction is described in a series of posts – one part is Apache rewrites in your .htaccess file, the others are server-side scripts. I’m really impressed at how this author thought through sustainability issues in creating this blacklist.
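To give a flavor of the technique, here is an illustrative .htaccess fragment in the same spirit – blocking by what a request asks for, rather than by the user-agent it claims. These patterns are simplified examples of my own, not the actual 5G blacklist rules; use Jeff Starr’s published blacklist for real protection:

```apache
# Illustrative only – NOT the actual 5G blacklist. Denies requests whose
# query strings contain common traversal/injection patterns, regardless
# of what user-agent string the bot sends.
<IfModule mod_rewrite.c>
  RewriteEngine On
  RewriteCond %{QUERY_STRING} (\.\./|etc/passwd|boot\.ini) [NC,OR]
  RewriteCond %{QUERY_STRING} (<|%3C).*script.*(>|%3E) [NC]
  RewriteRule .* - [F,L]
</IfModule>
```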
Use standards to maximize durability and usability
It would be nice if user-agent strings or other measures of browsers consistently broadcast their features, but they don’t. The current structure of most UA strings reflects one-upmanship between browser vendors, each imitating the others’ strings to steal pages from each other and rise higher in traffic reports. Even though major browser creators are trying to standardize their UAs, that isn’t going to change this messy picture. In addition, there are lots of nonstandard strings, especially among non-human agents like spiders, page scrapers and crawlers.
So, any standards we implement will be on the processing side, since we can’t expect the community to standardize feature reporting in the near term. Our code and database should try to follow existing standards. However, here, as with the user-agent strings themselves, there isn’t a standard. The nearest thing is the browscap.ini file, maintained by Gary Keith, which is used by the PHP function get_browser(). As a standard, it has the problem of being a relatively large .ini file. In addition, many of the features matched to specific user-agents are obsolete and/or irrelevant to the desktop-versus-mobile issue.
So, it is OK to develop a custom solution to storing our information and coding our classifier.
There is also little standardization in creating tiers of service. Several schemes have been, or are being, used:
- Acid2 and Acid3, which use rendering of a scene with a quantitative score to rank browsers. This is wrong, since it is about browsers, rather than users
- W3.org ‘Acid’ tests are not undergoing development at this time
- Yahoo’s YUI tiers of services
- JQuery Mobile’s tiers of service
- Yiibu’s Pragmatic Responsive Design tiers of service
So, it is OK to develop our own tiers of service, while leaning heavily on “best practices” from these other attempts to define user groups.
Nice summary here: http://www.sprymedia.co.uk/article/Graded+Technology+Support
If there are a few, absolutely sure ways to recognize user classes, we should use them. This should “trump” comprehensive systems that apply one massive theory (or database or regular expressions) to solve the problem by identifying specific devices and browsers.
We can take some guidelines for support from JQuery Mobile’s Graded Browser Support:
- A Grade – full ajax support
- C Grade – HTML only, with all the work done server-side
The matrix is at the jQuery Mobile link listed in the data sources below.
However, this may be too limiting for a sustainable strategy – we shouldn’t just lump everything but recent smartphones together as “old.” In particular, we don’t want to download stuff to old mobiles that they can’t process, if possible. Here we depart somewhat from the three categories in Pragmatic Responsive Design (Base Mobile, Mobile with CSS media queries, Desktop).
Yahoo! originally defined an additional “X Grade” covering unknown browsers, as well as new versions of known browsers. X-Grade assumes that the browser is modern.
Known text-readers (e.g. Lynx, text-based cellphones) seem like another class, a “T” grade. Here, we are interested in not downloading big bitmaps and media files, improving overall sustainability via reducing energy and resource consumption.
Bots and search spiders are essentially text-reading users. By default, we don’t need to adapt content for them unless a project has an explicit SEO strategy, in which case we could define a separate “Bot” grade. Since we would want to leave out images and rich media even for bots we want visiting our site, we can simply fold bots into our text-reader class.
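A “T” grade check along these lines can be a short whitelist of known text-reader tokens. A sketch in JavaScript – the UA substrings are real, but the pattern list and function name are illustrative:

```javascript
// "T" (text-reader) check: a few real UA tokens for text-mode browsers
// and search spiders. A production list would be longer, but per the
// "living fossil" argument it would rarely need updating.
var TEXT_READER_PATTERNS = [
  /Lynx/i,       // classic text-mode browser
  /w3m/i,        // another text-mode browser
  /Googlebot/i,  // search spiders read text, not images
  /bingbot/i
];

function isTextReader(userAgent) {
  return TEXT_READER_PATTERNS.some(function (re) {
    return re.test(userAgent || '');
  });
}
```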
Minimize energy and resource consumption, while sourcing “green” bits
If we worry about absolute energy and resource consumption, we could make a case for extensive client-side adaptation. If our access is mostly mobile, these devices use very little energy relative to desktops. But more critical thinking reveals that a tiny slice of server time might be better. In general, the power used at the server level is “greener” than that on the client. Servers are increasingly energy-efficient, and housed in energy-efficient data centers.
Make sure our solution continues to work in the future
This is an area that requires re-thinking. The main reason we aren’t supposed to sniff browsers is that they are constantly changing, and we can’t keep up with it all. This is quite true. The creation and generation of user-agent and other information available on the server is, and will remain, chaotic. However, we aren’t trying to keep up to date with browsers. Instead we are trying to support users at a particular tier of service. Remember, current trends point to the following features becoming “one web”:
- Full support for HTML5 and CSS3
- Full support for CSS media queries
In other words, to be sustainable for the next decade, we only have to sniff the old. We don’t have to look at the new. Therefore, our classifier should only worry about old hardware and software that is very low in the tier. We don’t have to look at cutting-edge at all (or, rather, this is a separate issue for designing and building a website). IE6 isn’t going to be updated – it is a “living fossil” from an earlier era of the Internet. We don’t have to worry about changes to it. In contrast, IE10 will probably be replaced with IE11, IE12, etc, all of which will have our top level of support. So, we don’t care about them.
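To make the “sniff only the old” idea concrete, here is a sketch of a classifier that matches a few known “fossil” user-agent tokens and defaults everything unrecognized to the top tier. The UA tokens (e.g. “MSIE 6.”) are real; the grade letters follow the tiers discussed in this article, and the pattern list is a tiny illustrative subset:

```javascript
// "Living fossil" classifier: match known-old browsers only, and assume
// everything else is modern. Patterns are checked in priority order, so
// old IE (which also says "Mozilla/4.0") is caught before the
// Netscape-era fallback pattern.
var FOSSILS = [
  { pattern: /MSIE [5-8]\./, grade: 'B' },  // old IE: full page via polyfills
  { pattern: /MSIE [2-4]\./, grade: 'C' },  // ancient IE: basic HTML only
  { pattern: /Netscape|Mozilla\/4\.[0-7]/, grade: 'C' }
];

function gradeUA(userAgent) {
  for (var i = 0; i < FOSSILS.length; i++) {
    if (FOSSILS[i].pattern.test(userAgent || '')) return FOSSILS[i].grade;
  }
  return 'X'; // unknown = assumed modern, gets the full experience
}
```

Because new browsers all fall through to ‘X’, the FOSSILS table never needs updating as new browsers ship – which is the whole point.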
The strategy of only looking for old browsers (rather than trying to identify all mobiles, all Nokia cellphones, or all Microsoft products) is the one that is most sustainable.
What this adds up to
After considering these points, we can now define a strategy for adapting our content to our user audience that maximizes sustainability.
- The solution has to be simple and free, so it can be included in a boilerplate, and easy to install even for someone who is more “web designer” than “web programmer.”
- The solution should interface with existing client-side feature-detectors, like JQuery Mobile and Modernizr.
- Sourcing server-side, “green bits” on the webhost is OK, as long as we don’t have a super high-traffic site with a scalability issue.
Remember, we ONLY want to find the browsers that can’t do their detection on the client side, and to leave in or remove some page elements as necessary.
- B grade – FULL USER, but with quirks requiring polyfills. For example, there are lots of polyfills for IE, and document.createElement() can make HTML5 tags known to older IE versions, so it is “B” grade.
- X grade – FULL-USER, unknown, but assumed new and implicitly “A” level, gets full experience
- W grade – BROWSER-CHALLENGED, WML instead of HTML. Use a redirect rather than simplified page in this case – it’s just too hard without a content-adaptation system.
Modifiers that might be used to configure HTML prior to sending it to the client:
- X – Ajax present/absent for creating dynamic interfaces
- T – text-only, so don’t bother sending images
- V – video and audio support
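A grade plus these modifiers might be packaged into a single profile object for the page-assembly code. This is a hypothetical sketch; all field names are my invention:

```javascript
// Hypothetical profile builder: combines a service grade with the
// modifier flags above. The page-assembly code would consult this
// object rather than ever seeing a browser name.
function buildProfile(grade, modifiers) {
  return {
    grade: grade,                           // A, B, C, X, W ...
    ajax: modifiers.indexOf('X') >= 0,      // dynamic interfaces OK
    textOnly: modifiers.indexOf('T') >= 0,  // skip images and media
    media: modifiers.indexOf('V') >= 0      // video/audio support
  };
}
```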
Data sources should be public and free for boilerplate distribution. “Hooks” can be provided for using services like DeviceAtlas and/or WURFL via Scientia Mobile. Since we have a “past” rather than a “future” orientation, we assume that all future browsers will have X (A grade) support. We only have to worry about “fossil” browsers and devices, so we won’t need to constantly update the dataset.
- Hardware platforms – http://www.openmobilealliance.org/application/ProductListing/uaprof/
- Mobile data – http://www.zytrax.com/tech/web/mobile_ids.html
- Desktop – http://www.zytrax.com/tech/web/browser_ids.htm
- Another source of user-agent strings – http://www.useragentstring.com
- Older UA strings – http://www.pgts.com.au/download/data/browser_list.txt
- Browscap.ini file – http://browsers.garykeith.com/downloads.asp
- JQuery Mobile support spreadsheet – http://jquerymobile.com/original-graded-browser-matrix/
- Yiibu support spreadsheet – https://docs.google.com/spreadsheet/ccc?key=0AglzInh14B_-dFZsaHA5SmllX0dlVmUtaEpVLWNhbVE&hl=en#gid=0
We should create the following module for “Green Boilerplate,” implementing our Graded Browser Support strategy:
- Specify our Graded Browser Support.
- Develop a support spreadsheet implementing our Graded Browser Support, using Google Docs for accessibility.
- Make a server-side “bootstrap” providing some feature detection, and handling the “browser-challenged” Grade C cases.
- Code in PHP and ASP.Net/C#, the most widely used platforms. PHP has some pretty awful problems (see this amazingly good rant) but is easy to install, which means it will support more designers and developers. Requiring C# or Python limits things, since you need a specialist developer with skills not found at many design shops. Of the two, C# is vastly easier to download and install (just grab a copy of Visual Studio Express for free), whereas Python will once again require a specialist developer.
- Make it one-file if possible, or auto-build other files (e.g. a server-side cache) so installation is extremely simple. Don’t go out to external databases or APIs.
- The server-side script should be inserted into a standard HTML “boilerplate,” which is basic enough for developing universal access via Progressive Enhancement.
- Do some feature detection on the server side. We may sniff browsers, but we don’t output this information; we abstract it to our tiers of service.
- The script always tries to determine screen size first, and adjust accordingly.
- Very advanced features are detected in a new JS object created from the simpler one, which replaces it. This could be something like JQuery Mobile or Modernizr.
- Report user level of service according to our guidelines, rather than browser/platform/etc.
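Pulling these pieces together, a server-side bootstrap might abstract the raw request into a tier of service plus a few output decisions. This JavaScript sketch is purely illustrative – the real module would be written in PHP or C# as specified above, and the patterns shown are a tiny subset of my own choosing:

```javascript
// End-to-end bootstrap sketch: raw request in, abstract service
// profile out. Assumes header names are already lowercased (as
// Node.js provides them). All field names are hypothetical.
function bootstrap(request) {
  var ua = request.headers['user-agent'] || '';
  var grade = 'X';                                // default: unknown = modern
  if (/MSIE [5-8]\./.test(ua)) grade = 'B';       // polyfill-era IE
  else if (/Lynx|Googlebot/i.test(ua)) grade = 'T'; // text readers and bots
  else if (/WML|WAP/i.test(ua)) grade = 'W';      // redirect, don't adapt

  return {
    grade: grade,
    sendImages: grade !== 'T',     // text readers get no bitmaps
    sendPolyfills: grade === 'B',  // old IE gets shims
    redirectToBasic: grade === 'W' // WAP devices get a redirect
  };
}
```

Note that the output never names a browser or vendor – downstream code sees only the tier of service, per our guidelines.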
So, here are the steps and their implementation:
Step 0: Use Apache rewrites like those in the 5G blacklist, aimed at requests rather than user-agents, to thwart web bots. This reduces problems with bot user-agents contaminating our detection strategy. It also helps naive web designers/developers avoid XSS attacks, since all they have to do is drop in the 5G blacklist rewrites.
This strategy is exactly what I’m implementing in my Green Boilerplate (http://www.greenboilerplate.com) project. Check the site to see the progress, and feel free to contribute questions and comments here.
In a future post, I’ll go through a similar sustainability strategy for doing page analytics. Hint: we’ll probably look once again at server-side solutions.