Green Boilerplate – Part II: Data for configuring the boilerplate


In this section you’ll find additional “initial thoughts” about the design and principles behind a green boilerplate, which I’m developing at http://www.greenboilerplate.com.

One feature of a heavily “bootstrapped” template relying on Progressive Enhancement as its first step is the need for data. If we try to make the template “sniff” the current state of the Internet at runtime, we’ll be disappointed – it requires too much computation. Some of the decisions will be static – they’ll be embodied in markup rather than added dynamically later. This means that, in order to set up the right libraries, modules and polyfills, we’ll have to use existing data from the Internet on the “compile side” of the boilerplate.

What kind of data do we need?

  • Browser versions – the full history, so we can calculate a “lifetime” for each browser, and project into the future whether specific polyfills should be included.
  • Computer/OS versions – so we have a rough measure of the efficiency of the client requesting a page. We might want to “reward” users with energy-efficient mobiles with more design fun versus those with big, old, energy-sucking desktops.
  • Computer/OS longevity – so we know which device will be around the longest. Devices expected to last a long time in their current form (e.g. iPads) might be the core of the design – it would be “design for the most sustainable device first”.
  • User agents – a big database of HTTP_USER_AGENT, especially versions for “living fossil” browsers which will not change, would be useful in the initial server-side bootstrapping. We wouldn’t test for all these agents – instead we would look for a few common features that reliably pick out all the living fossils, and put them into “age groups” instead of “device groups”. For modern browsers, we would then apply regular Object detection.
  • Green-ness of specific web hosts and infrastructure – This would be particularly useful for items loaded cross-domain, for example scripts from script archives like googlecode.com. Depending on the current “green-ness” of a host, we might elect to use a local copy instead. It is an open question whether downloading “virtual globalized” code through the Internet is better or worse than serving a local script from our own site – is there an analogy to “buy local” here?
  • Target audience – this gets into the layout and resulting UX we give to visitors. In particular, one of our “user goals” for sustainable web design will be to encourage visitors to upgrade to more energy-efficient browsers, even computers. Depending on the audience, we may feel more comfortable with a very gentle Cass Sunstein-style “nudge” versus a more direct requirement to upgrade to use our site. We can even tease out a “social justice” element in virtual sustainability – we won’t force the “browser-challenged” to upgrade if our location stats imply they will have difficulty doing so. In that case, we might send all the polyfills to preserve an equal experience for our users.
  • Access log trends – access logs for a site could inform the build. If, for example, it was discovered that nobody who visited a website ever needed to run a specific module, it could be removed. If the use of a component on the site was declining, we might phase it out while watching the logs to see the reaction. If users responded to a phase-out in a sustainable way (meaning that their experience didn’t decline over time) we would be justified in removing the component. Right now, these decisions are taken in an ad hoc fashion – the goal of the Green Boilerplate is to provide a formal (damn the word) rubric for doing so at the development level.
  • APIs in use – If there are multiple sources of data, we might categorize APIs by the green-ness of their hosts, or even the use of data (is JSON more “green” than a typically longer XML file?)
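To make this concrete, the categories above might be gathered into a single compile-side data file consumed by the build. The following JavaScript sketch is purely illustrative – every field name, browser entry, and score in it is a hypothetical placeholder, not real data:

```javascript
// Hypothetical shape for the compile-side data feeding a Green Boilerplate build.
// All values are invented placeholders illustrating the categories listed above.
const buildData = {
  browsers: [
    // A "lifetime" lets the build project whether a polyfill will still matter.
    { name: "IE8", released: 2009, estimatedEndOfLife: 2016, livingFossil: true },
    { name: "Chrome", released: 2008, estimatedEndOfLife: null, livingFossil: false }
  ],
  hosts: [
    // Green-ness of cross-domain script hosts, on a hypothetical 0..1 scale.
    { domain: "googlecode.com", greenScore: 0.7 }
  ],
  audience: { nudgeStyle: "gentle", preserveLegacyExperience: true },
  cutoffs: { minMarketShare: 0.01 } // e.g. "don't support browsers below 1% share"
};

// A build step might use the browser data like this: include a polyfill only
// while its target "living fossil" browser is expected to survive.
function polyfillNeeded(browser, buildYear) {
  return browser.livingFossil &&
         browser.estimatedEndOfLife !== null &&
         browser.estimatedEndOfLife >= buildYear;
}

console.log(polyfillNeeded(buildData.browsers[0], 2013)); // true: IE8 still "alive"
```

Because the data is a plain static structure, the open-source community proposed below could update it without touching the build logic itself.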

Collecting this information pretty much demands an open-source project. Contributors would maintain the database and provide updates, and those updates would be echoed into “builds” of the boilerplate templates.

After identifying sources of data, we’ll have to boil down the results into a configuration file, database, or settings for the boilerplate. Running a full database at runtime is wasteful, so most of the insights would have to be distilled into static files – a sort of sustainability “snapshot” of the current web. During the “burn” from the boilerplate to the final template, much of this “insight” would become hard-coded HTML and CSS. Presumably, many features would be retained until some data pushed them below a “cutoff” point. Again, these cutoff points are often ad hoc, for example “don’t support browsers with market share of < 1%”. Here, we would formalize the cutoff, so our reasons could be explained later by pointing to the set of rules used to create the boilerplate.
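A formalized cutoff could be as simple as a rule table applied during the burn, with each rule recording its own justification so the decision can be explained afterwards. A minimal sketch, with invented rules and market-share numbers:

```javascript
// Hypothetical cutoff rules applied during the "burn" from boilerplate to
// template. Each rule carries a human-readable reason for the audit trail.
const rules = [
  {
    id: "min-market-share",
    test: (browser) => browser.marketShare >= 0.01,
    reason: "don't support browsers with market share below 1%"
  }
];

// Returns the browsers that survive the cutoff, plus an explanation of why
// each dropped browser was removed.
function applyCutoffs(browsers, rules) {
  const kept = [], dropped = [];
  for (const b of browsers) {
    const failed = rules.filter((r) => !r.test(b));
    if (failed.length === 0) kept.push(b);
    else dropped.push({ browser: b.name, reasons: failed.map((r) => r.reason) });
  }
  return { kept, dropped };
}

const result = applyCutoffs(
  [{ name: "IE6", marketShare: 0.005 }, { name: "Firefox", marketShare: 0.2 }],
  rules
);
console.log(result.kept.map((b) => b.name)); // ["Firefox"]
console.log(result.dropped[0].reasons[0]);   // the recorded justification for IE6
```

The point of the `reason` field is exactly the one made above: the cutoff stops being ad hoc because the rule that triggered it is preserved alongside the build.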

What about a site that is already deployed? The design of Green Boilerplate needs to have a way to add or remove code from an operational website, or at least list the things that need to be changed. This is a big challenge, but not as big as it might seem. Much of the complexity of current websites stems from the “one size fits all” concept of Responsive Design. A page supporting iOS is more complex, and carries many more directives, than an HTML5 page aimed at generic desktops. By providing updates for specific device groups in the template, the problem could be split into several steps taken by a developer.

We would also have to make sure that the “burn”, or the update of an existing burn, was very fault-tolerant to editing. Browsers already tolerate lots of little mistakes in code, so it is at least possible to imagine “careful” compile systems for creating the static markup and adding in dynamic code as necessary.

One other feature of this data is that it would help us construct a single number for our build, as proposed by my friend Russell Burt. This “sustainability” number would be the EXACT equivalent of an “Energy Star” rating on a specific appliance. Such a “Virtual Energy Star” rating would use the data on the build, plus information about the hosting platform and the goals of the site to provide a single number. By tying it to the build, the number would be quantitative, and different rating numbers could be compared. One might object that a single number hides the complexity of the system – but this is also true of a refrigerator or dishwasher – the complex process of building them and using them has also been boiled down to one number. Such a number, like the concept of sustainability itself, would make it easier for a design and development team to explain their choices to non-techie stakeholders.
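As a sketch only – the inputs, weights, and 0–100 scale below are invented for illustration, since a real “Virtual Energy Star” would need agreed-upon metrics – such a rating might combine build, hosting, and goal data into one comparable number:

```javascript
// Hypothetical "Virtual Energy Star" calculation. The three inputs are
// assumed to be 0..1 scores; the weights are arbitrary assumptions.
function virtualEnergyStar({ buildEfficiency, hostGreenScore, goalFit }) {
  const weighted = 0.5 * buildEfficiency + 0.3 * hostGreenScore + 0.2 * goalFit;
  // Round to a single number on a 0..100 scale, like an appliance label.
  return Math.round(weighted * 100);
}

console.log(virtualEnergyStar({ buildEfficiency: 0.8, hostGreenScore: 0.6, goalFit: 1.0 }));
// 0.5*0.8 + 0.3*0.6 + 0.2*1.0 = 0.78 → 78
```

Because every input traces back to the build data, two ratings computed the same way can be compared – which is the whole appeal of the single number.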

1 Comment

  1. Fascinating blog, Sir / Madam (who are you!)
    Three comments

    1) It seems to me that the kind of awareness, during design, of sustainability factors you want to encourage isn’t best served by the production of a boilerplate. It would probably be better served by a plugin to standard IDEs.

    I’ve also been reviewing HTML boilerplates recently, for a different purpose, and a boilerplate is by definition a one-size-fits-all solution that then requires customisation. Trouble is, while being a good way of minimising coding time, a boilerplate is an ineffective way of minimising code. Where it does achieve a minimisation of code, this is due to the use of e.g. minified libraries.

    Focusing on the development environment, rather than a boilerplate would also help encourage sustainable design practices, as well as sustainable designs. For example, the cost-benefit ratio of including conditional css for a particular browser could be calculated at the moment when that code is being written, and the current embodied energy of a web app could be tracked during development.

    Which leads me to the second point, which is

    2) Are Agile development practices more sustainable than big upfront design projects run in waterfall mode? There’s a sense in which we all suspect that they are, because they maximise code reuse and minimise the creation of unwanted features (especially when practiced as Agile-UX), but on the other hand, they are always-on, and so the integrated total effort over time could conceivably be greater.

    3) When you look at the business case for developing a website, it is the maintenance of the website – not just technical support, upkeep, but often content production – which far outweighs the cost of initial development. Factoring this into the embodied energy of a design would require a close look at the UX of CMS’s from the content managers’ perspective, as well as minimising the number of content templates required, for example. Again, not something that could be captured or rectified via a boilerplate, but potentially a significant energy factor, and almost always a factor in determining the life-expectancy of a web application (as workarounds and custom templates proliferate).

    Anyways, fascinating topic, I look forward to reading more…

    Justin
