One of the best features of sustainable thinking in web design is that it helps you “rollback” design decisions before they become a headache. In the regular iterative design process, we often describe design and development in terms of a series of “prototypes”, each more capable than the last (this is in contrast to “waterfall” design, where a single design is completed step by step without iteration).
One possible definition of the various prototypes appearing in standard iterative design might be:
Prototype 0, or “Paper Prototype”
This prototype is hand-drawn. The goal is NOT to clean things up, but to experiment with a lot of possible designs. Paper prototypes can illustrate layout, and also interactive design features that aren’t found in graphic design. Despite their primitive nature, paper prototypes are used extensively, even to the point of conducting usability and accessibility studies with ordinary (meaning non-designer) people. This stage involves designers and/or User Experience (UX) specialists, and is normally developed from a design document or a shorter brief.
The reason we don’t use a computer at this stage is that it generally hinders, rather than helps, the creative process. Computers aren’t creative, but they are very good at “polishing a turd”, so they can make a bad idea look somewhat cool onscreen. This is especially bad if non-designer “stakeholders” are involved in design review at these early stages. By using paper, we strip away the flair and make sure the site design concept will actually work.
Prototype 1, or “Wireframe”
This is just a paper prototype, cleaned up. The number of design options should be reduced to one or two, since the point of the paper prototype was to narrow things down to a couple of good designs. Prototype level 1 sites may be created in tools like Illustrator, or in a custom wireframe tool like Balsamiq. Since the focus is still on structure, information, and interactivity, color, specific text, and images are all omitted.
Prototype 2, or “Page Comp”
This level is still not in code (unless you follow a strategy of doing all prototyping in code), but contains color, specific type, images, and layout details. This is the level many standard “web design docs” aspire to. At design shops where development is split away from the process, the second prototype is often created in Photoshop or even (argh!) InDesign by people untrained in the web. This is especially true at print shops that have a small web division for their clients.
In theory, doing the design away from code allows “creativity”. In practice, the supposed “savings” from this split are dubious. Tools like Photoshop are fundamentally “non-web” in their operation. Web pages do not have fixed sizes, but the first thing you do in Photoshop, Illustrator, or InDesign is define the width and height of your document. These tools push a “static” view of the web as a sort of electric billboard, and the design must be re-mapped to the realities of the web by the front-end developer.
Prototype 3, or HTML/CSS
In this stage, a Prototype level 1 or 2 is converted to HTML and CSS. In many design shops, Levels 1, 2, and 3 are combined via a “Prototype in Code” strategy. Larger shops, with “siloed” designers who don’t know about the web (a very bad idea for sustainability, by the way), make this a separate step, and the designer and developer may hardly know each other. In other cases, a “creative” agency may dump their design on the doorstep of a development agency.
In any case, when this prototype is complete the HTML pages are still static, and don’t have interactivity beyond basic hyperlinks. If an interactive sequence would change the layout (common in Ajax applications), the changes are “faked” with static pages.
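As a minimal sketch of this “faking” technique (the file names and markup here are hypothetical, purely for illustration): a panel that would open via an Ajax call becomes two separate static pages, with a plain hyperlink standing in for the script.

```javascript
// Faking an Ajax state change with two static pages: the "open" and
// "closed" states of a detail panel become separate pages, and the
// button that would trigger the Ajax call becomes an ordinary link.
// (Page names like "cart-open.html" are made up for this sketch.)
function panelPage(open) {
  const body = open
    ? '<div id="panel">Item details go here.</div><a href="cart-closed.html">Hide details</a>'
    : '<a href="cart-open.html">Show details</a>';
  return `<!DOCTYPE html><html><body>${body}</body></html>`;
}

// Generate both static "states" of the prototype.
console.log(panelPage(false)); // would be saved as cart-closed.html
console.log(panelPage(true));  // would be saved as cart-open.html
```

Clicking between the two saved pages simulates the interaction well enough for usability testing, without writing any application code.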
Prototypes 4 and 5, or “Dynamic Site”
In these stages, client-side interactivity is added, and the client and server code are linked. Connections to databases, APIs, and web services are all “real”. The site is roughly at an “alpha” state relative to traditional software.
Prototype 6, or “Optimized Site” (Prototype 5 optimized for security, usability, accessibility, speed (WPO), and search engines (SEO))
In the final stage (which is what we are about to discuss in greater detail), the basic, working site is refactored, or rewritten, to make it work better. Code is streamlined, checks are made with target audiences for accessibility, and the site is sped up as part of Web Performance Optimization (WPO). Search engine optimization (SEO) is applied. In many shops, this step does not involve the designers; instead, it is something the programmers work on alone (big mistake!).
This development process, or something similar to it, is followed by many design shops. It is a big jump over a waterfall approach, but it has its own issues when we think about sustainable web development. In particular, efficiency, especially energy efficiency, is a big part of web sustainability, and the standard prototyping strategy does not address it except at the very end of the process. WPO is almost an afterthought. This makes it very likely that optimization will “streamline a Hummer” instead of creating a more sustainable website.
Let’s guess at how much energy each step takes in terms of power for the workstations and other resources. Here is a rough estimate.
- Paper Prototype – VERY LOW ENERGY
- Wireframe – LOW ENERGY
- Page Comp – HIGH TO VERY HIGH ENERGY
- Static HTML – MEDIUM ENERGY
- Dynamic site, client–server linking – MEDIUM ENERGY
- Testing, security, accessibility, streamlining, refactoring, etc. – LOW TO MEDIUM ENERGY
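To make the comparison concrete, here is a back-of-the-envelope calculation. Every number in it is an assumption (a guessed workstation power draw, guessed hours per stage), not a measurement — substitute your own figures:

```javascript
// Rough, illustrative energy estimate per prototype stage.
// All wattages and hours below are ASSUMPTIONS for the sake of
// example, not measurements of any real project.

// Watt-hours consumed by a stage: average draw (watts) * time (hours).
function stageEnergyWh(watts, hours) {
  return watts * hours;
}

const stages = [
  { name: "Paper Prototype", watts: 0,   hours: 20 }, // no workstation running
  { name: "Wireframe",       watts: 80,  hours: 10 }, // laptop, light tool use
  { name: "Page Comp",       watts: 250, hours: 40 }, // Creative Suite on a big workstation
  { name: "Static HTML",     watts: 120, hours: 20 }, // text editor + browser
  { name: "Testing/WPO",     watts: 120, hours: 10 },
];

for (const s of stages) {
  console.log(`${s.name}: ~${stageEnergyWh(s.watts, s.hours)} Wh`);
}
```

Even with generous error bars on these made-up numbers, the comp stage dominates: a heavy design workstation running for many hours swamps everything else in the chain.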
OK, why is creating the “page comp” the most energy-intensive part of the process? I rated it this way because most comps today will be created using something like Adobe Creative Suite, in particular Illustrator, Photoshop, or Fireworks.
Unlike the web, desktop applications have not worried about CPU power, file size, or other “bloat” in at least a decade. As a result, running an advanced drawing tool like Photoshop sucks more electricity than, say, testing a page in a web browser. Check the “factoids” section of this site for examples, or look at this link from Tom’s Hardware to see the huge CPU usage required to draw complex images in design tools.
The watt levels are quite high relative to normal computer operation, in particular testing in a standard web browser. In other words, the detailed design process in a non-web tool consumes lots of energy – and fine-tuning visual layout takes many hours. And at the end, one has to jump to a second design tool, and re-do the design in HTML and CSS. This is a great example of wasteful consumption, from a sustainability point of view.
In contrast, servers tend to use “greener” bits (meaning the source of the electricity is more sustainable) than client computers in an office, so the latter stages of prototyping get a lower score. Some of this testing, like the initial paper prototype, happens in the real world, and the real world is more efficient than the virtual one. Out of these later steps, the stage of linking client-side to server-side is likely to be most intensive, mostly because many trials will be necessary to iron out the bugs in the code gluing client and server together.
Conclusion: PAGE COMPS ARE EVIL.
If we really needed a design stage in a non-web authoring tool, comps would just be a necessary evil of design. But consider that the same shop that does its art direction and identity work this way will often have to re-do it downstream. The site will have to be redesigned at later stages, when the WPO person discovers that it is impossible to download the page quickly, or usability tests confirm that the “art” is actually mystery-meat navigation more suited to artists impressing each other on Behance. So, it goes back to the designer, who cranks out another pretty (in the 32-inch workstation way) design that doesn’t work in the real world.
Another bad reason for lavishing attention on non-web page comps is that the local “executive decision makers” claim they can’t handle thinking in wireframes or paper prototypes. Decisions on design will likely be made on relatively trivial features of the page comp, rather than core features of interactivity or user experience. This is one of the reasons that User Experience (UX) positions have become important – to prevent managers who aren’t designers or developers from arguing about how pretty the colors are in a particular comp and derailing the design. In the worst case, the execs will once again require the artists to re-do the comps, in effect shifting work that should be done at the Paper Prototype level onto a power-hungry design workstation, and burning more bits.
Ideally, a “team of hybrids”, where we don’t have siloed designers and developers, would include “decision makers” with some knowledge of both professions. IMHO, if they don’t know, send them to a design class! Then they would be competent to judge at the Prototype 0 and Prototype 1 levels, rather than requiring pretty pictures.
But…if you’re on a team that spends too much time doing elaborate Creative Suite mockups of sites, you might get a leg up by incorporating sustainability thinking. Sustainability is one of those things everyone understands, or thinks they do.
According to sustainability rules, a key goal should be to “push” as much of the work as possible into the prototype stages with a small energy footprint. The iterative design process should reduce time spent in steps requiring lots of computing power, or lots of hours on said computers. Therefore, as much of the work as possible should happen very early, at the paper prototype level.
Paper, despite the browbeating it has taken from techno-utopians, is more energy-efficient in many cases than the web. In the case of prototypes, a series of hand drawings, even worked up for testing, will almost certainly consume a fraction of the energy and resources of a workstation humming away with dynamic pages being drawn many times a second. A series of printed wireframes used in usability testing will consume fewer overall resources than running the same tests on workstations.
So, a sustainable iterative design strategy should maximize the low-energy stages of the prototype chain, and minimize the stages that require energy-hungry design tools.
A sustainable strategy also requires that the final phase – making the site more efficient, faster, etc. – can’t wait until the end. Contrary to popular wisdom, some of the code efficiency work should occur as the code is written, rather than at the very end. One doesn’t have to micro-optimize or minify code during early development, but key decisions, such as which library is most “green” for the site, should be made early, not late.
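As an example of such an “early” green decision, the weight of a candidate library can be roughed out with simple arithmetic before a line of production code exists. The sizes and traffic figures below are assumptions for illustration only, and the sketch deliberately ignores browser caching:

```javascript
// Sketch: estimate the monthly transfer burden of a JavaScript
// library choice. The byte sizes and traffic numbers are rough
// ASSUMPTIONS for illustration, not measurements of any real
// library release, and caching is ignored.
function monthlyTransferMB(libraryKB, pageviewsPerMonth) {
  return (libraryKB * pageviewsPerMonth) / 1024; // KB -> MB
}

const pageviews = 100000; // assumed monthly traffic
const bigLibraryKB = 90;  // e.g. a full-featured library, minified (guess)
const microLibKB = 4;     // e.g. a focused micro-library (guess)

console.log(monthlyTransferMB(bigLibraryKB, pageviews) + " MB/month");
console.log(monthlyTransferMB(microLibKB, pageviews) + " MB/month");
```

Even a crude estimate like this, run at the “dynamic decisions” stage, makes the cost of a mega-library visible while it is still cheap to change course.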
Based on this idea, we might write out a “sustainable” iterative design process as follows:
Level 0, Paper Prototype
Iterate this stage LOTS of times before continuing. Spend more time in the room without computers, sketching. Turn off the workstations until they’re actually needed. This will always save more energy than any “high tech” attempt to reduce computer power use. Just Draw!
Level 1, Wireframe
This stage should only be necessary to adjust the paper prototype. This is where a UX person, or a manager experienced in UX, should do the evaluation, rather than on an elaborate picture authored in a tool like Photoshop.
Page Comp – REMOVE.
By doing design in this stage, we’re just forcing two design stages, not even counting iterations that will be required to make the web page look more like a Photoshop image. By rights, it should be the other way around – for a website, Photoshop should conform to features of the web.
If we keep this stage, we (1) encourage siloed designers, disconnected from development, to ignore the web in favor of their “creativity”, (2) direct design work onto energy-hungry software for many hours, and (3) deliver a design that will require multiple re-workings. GET RID OF IT!
Hey, I don’t hate Photoshop – any more than I hate “gamer” computers with 1500 watt power supplies or Hummers, if used responsibly.
Level 2, Static HTML “comp”
This stage should be short. The design should be “prototyped in code”, rather than an energy-wasting detour into traditional design tools requiring a second translation into HTML. This means that designers will have to learn how to dump their designs immediately into static web pages.
Fortunately, Adobe has been developing some very interesting tools, like the new Edge (as opposed to the old), which, if used properly (meaning one doesn’t try to stay completely in the design layer), should accelerate the process. I haven’t seen a CPU test for Edge, but let’s hope it isn’t the energy and CPU hog characteristic of other Creative Suite tools.
Level 3, Dynamic decisions
The goal would be not to immediately jump to a big library like jQuery, or a mega-framework like Sencha Touch. Often, it may be possible to swap out these mega-zooids for smaller libraries, like the ones found on microjs.com. This prototyping stage won’t work unless everyone is at the party. Developers with some design training can interact with designers to get the best, most sustainable website, early on. Siloed teams won’t do this as well, and will create less sustainable sites.
While a “dynamic decision” stage seems like more work, it should pay off in the long run. Considering a “greener” library, or more efficient code/interface models will force evaluation of the original UX embodied in the static design and design document. In many cases, the team might do a “rollback” to see if slight adjustments in the site design would result in smaller, simpler code libraries being used downstream.
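For instance (a sketch, not a complete audit), several common jQuery utility calls have zero-dependency native equivalents, which a “rollback” review at this stage might substitute before the full library is baked into the site:

```javascript
// A few jQuery utility idioms (shown in comments) and their
// zero-dependency native equivalents -- often all a simple
// prototype actually needs.
const prices = [19, 5, 42, 8];

// $.grep(prices, p => p > 10)   ->  Array.prototype.filter
const expensive = prices.filter(p => p > 10);   // [19, 42]

// $.map(prices, p => p * 2)     ->  Array.prototype.map
const doubled = prices.map(p => p * 2);         // [38, 10, 84, 16]

// $.inArray(42, prices) !== -1  ->  Array.prototype.includes
const hasAnswer = prices.includes(42);          // true

console.log(expensive, doubled, hasAnswer);
```

If the only jQuery features a design actually needs look like these, the “greener” decision is no library at all; if not, a micro-library may still beat the mega-framework.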
Level 4, Dynamic site
Similar to the regular model.
Level 5, Dynamic site with server-side components
Similar to the regular model.
Level 6, Efficiency, SEO, WPO
This step will now be SHORTER, since lots of the big-picture refactoring will already have been done. Small-scale optimization is appropriate, but swapping of “green ingredient” code should be complete before this step.
So, there’s a plan for a “sustainable iterative design” workflow. It moves contrary to two common steps in regular iterative design – the “Photoshop page design” phase and the final optimization phase. It will be interesting to see what happens in the real world. One area not addressed is whether SEO might come in earlier than the final stage. Most likely, this would also result in more sustainable websites. Comments welcome.