Code sustainability and software “lock-in”


In the past, I’ve talked about sustainability in web performance optimization (WPO) and search engine optimization (SEO). Recently, I read both of Jaron Lanier’s books, “You Are Not a Gadget” (2010) and the more recent “Who Owns the Future?” (2013). The books consider how digital media affects society from a variety of directions, including long-term sustainability.

Two of the ideas I encountered have become a basic part of my sustainability thinking. The following discusses the first idea, that of software “lock-in” as a threat to web sustainability.

A little background on Lanier. He was one of the pioneers of virtual reality back in the 1980s, and was connected with the group that created MIDI, the protocol used to connect and sequence electronic musical instruments. He has connections to many of the groups creating the “new” Internet of social networking, and is himself a musician with scientific leanings. He name-drops and describes events that clearly make him part of Silicon Valley culture.

So he clearly is an insider, and differs greatly from many other Internet critics, who typically hail from non-technical backgrounds.

So, what does Lanier tell us about sustainability?

First, I was astonished, reading “You Are Not a Gadget,” to find references to the “carbon footprints” of websites and web apps. While 2010 isn’t so long ago, it comes before the first articles on the “sustainable web” were published. So, forward-thinking.

Second, a big part of Lanier’s first book is the notion of code “lock-in.” It goes like this: unlike the real world, code is arbitrary. When we start a coding project, we often make a small, beautiful test case with arbitrary rules. Since we are simulating, our code always misses part of the real thing, despite its beauty.

As we grow our project, the code becomes messy and inelegant despite anything we do. This is because we are extending beyond concept to practical use.

However…the early code embodied ideas, a “belief system” if you will, and those ideas inform the code as it grows. The messy parts of large (read: useful) code projects constantly work around the initial, beautiful, but limited ideas. It is much easier to adapt new code to the original, “beautiful” thought than to rewrite from scratch. So new code carries the limitations of the old.

As the system grows, the original ideas become more and more entrenched. If we want an API or a framework for our code, it has to conform to the original idea. And the original idea, not being perfect, passes the same imperfections on to the new code.

This happens even if we now see a much better way to code our project. Rather than start over, we bend our code. This creates “lock-in” at the first stage.

In time, “lock-in” informs our data. For example, imagine that our original, beautiful code defined a specific file format. As we build, we demand that new files conform to that format, even if the original format was missing something critical. In most cases we don’t create a new format. Instead, we demand that the data conform to the old one, even if it means throwing out data. This is second-degree “lock-in.” We are now altering the message according to the (software) medium’s limits.
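To make that concrete, here is a tiny, purely hypothetical sketch in JavaScript (the field names and the format itself are invented for illustration): a record format defined early on has no slot for something we later need, so new data is quietly trimmed to fit.

```js
// Hypothetical "original" record format: these are the only fields it knows about.
const LEGACY_FIELDS = ['id', 'title', 'body'];

// New data arrives with an extra field the old format never anticipated...
const post = { id: 42, title: 'Hello', body: '...', language: 'fr' };

// ...but to stay compatible with everything built on the old format,
// we quietly throw the extra information away.
function toLegacyFormat(record) {
  const out = {};
  for (const field of LEGACY_FIELDS) {
    out[field] = record[field];
  }
  return out;
}

console.log(toLegacyFormat(post)); // { id: 42, title: 'Hello', body: '...' } – 'language' is gone
```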

Lanier illustrates his idea by looking at the MIDI protocol, which he had some connection to during its origin. MIDI was designed to allow several synths to be connected, with signals from one synth “driving” another. It sends discrete codes, which trigger a musical note. This in turn implies a “belief system” that musical notes are discrete, on-off things. There’s no room for blended notes, glissandi (a sound sliding slowly up or down in pitch), or subtle variations in the “attack” of a note.
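As a rough sketch of what those discrete codes look like (the byte values come from the published MIDI spec; the surrounding JavaScript is only for illustration):

```js
// A MIDI Note On message is three bytes: a status byte (0x90 = Note On, channel 1),
// a note number (60 = middle C), and a velocity (how hard the key was struck).
const noteOn  = [0x90, 60, 100];
// The note then simply sounds until a matching Note Off arrives.
const noteOff = [0x80, 60, 0];
// The note message itself carries only "key down" / "key up" information;
// a pitch that slides or a tone that slowly swells has no direct representation here.
```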

Over time, MIDI evolved into a true network protocol, including automated production of music. It’s now used for everything. And it changed music, as a medium, to conform to its quirks.

MIDI has been around for decades, and great piles of software have been built on top of it. This means that, in order to conform to MIDI, any music has to somehow be represented as 1970s-era on-off synth tones. Much of the music made with MIDI today has a notion of a musical “note” that does not match the original MIDI definition. And the locked-in limitations of MIDI now affect what kinds of music are composed and released, because it is just too hard to represent some kinds of music via MIDI. Now, constraints are often good in art, but with no alternative to the MIDI standard, this is not a valuable constraint.

So, what are the equivalents of “lock-in” on the web? Consider CSS. The early methods for positioning things on web pages all relied on a “zen” approach based on the notion of a “float” in CSS. You didn’t move one thing directly; you moved another to make the first thing move. For example, to move a page element to the left, you were often best off “floating” another page element to the right in your CSS code.

Because of this approach, it was very difficult to create multi-column displays on web pages. So web design didn’t use columns, even when they might have made sense.
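As a rough sketch of the old approach (the class names are invented for illustration), a two-column page was typically faked by floating boxes against each other and then “clearing” whatever followed:

```css
/* The old float-based approach: nothing here actually says "column". */
.sidebar { float: right; width: 30%; }
.main    { float: left;  width: 65%; }
.footer  { clear: both; } /* stop the footer wrapping around the floats */
```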

The approach was also so different from that of graphic layout tools (think Illustrator) that it created a gap between designers working in Creative Suite and those working on the web.

You might answer that “float” CSS was primitive, and we are now advancing. Today, newer versions of CSS have better layout models, e.g., “flexbox,” for defining more complex layouts. But due to “lock-in,” the old floats are with us forever. And supporting old browsers requires either staying with the old standard or manufacturing complex “polyfill” code to graft the new behavior onto old browsers.
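For comparison, a minimal flexbox sketch of the same layout (again with invented class names) states the intent directly, but pages that must also work in older browsers still carry the float rules or a polyfill alongside it:

```css
/* The flexbox model: the container lays its children out in a row,
   and each child declares how much space it should flex into. */
.page    { display: flex; }
.main    { flex: 2; }
.sidebar { flex: 1; }
```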

Another good example is JavaScript. If we can believe the histories, the original language was coded with great speed (reportedly in about ten days in 1995) prior to its appearance in Netscape 2. It has, as Douglas Crockford has shown, many “good parts.” But it also has massive problems.

A good example is variable assignment. If you forget to put a “var” on a variable, it doesn’t cause a failure; instead you get a variable in global space, even if you created the variable inside a function. Now, this would be an obvious thing to fix (I can’t really think of a good reason for automatic globals), but the default behavior has never been fixed. The global variable bug is securely “locked-in” for the foreseeable future.
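A minimal illustration (the function and variable names are invented):

```js
function tally(items) {
  // Note the missing "var": this was meant to be local to the function...
  total = 0;
  for (var i = 0; i < items.length; i++) {
    total += items[i];
  }
  return total;
}

tally([1, 2, 3]);
// ...but instead "total" silently became a global variable, visible everywhere.
console.log(total); // 6

// ES5's opt-in "use strict" makes the same assignment throw a ReferenceError,
// but the default behavior can never change without breaking decades of old code.
```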

There are alternative JavaScript-like languages (e.g., CoffeeScript or Dart) that fix these problems, but the sheer mountain of JS code ensures that the global variable bug will be with us in 2040.

So, how does this impact Sustainable Web Design? Well, support for a large number of users – a principle of most sustainability frameworks – requires that we provide lots of code for old browsers, and additional code to work around quirks in JavaScript dating from hasty decisions in 1995. Libraries like jQuery are, in part, “normalization” libraries that use lots of code to fix other code.
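As a sketch of what “normalization” means in practice (the helper name is invented; jQuery’s internals are more elaborate): old versions of Internet Explorer used attachEvent() while other browsers used the standard addEventListener(), so portable code had to wrap both.

```js
// A tiny, hypothetical cross-browser event helper of the kind normalization
// libraries are full of: extra code whose only job is to paper over old code.
function addEvent(element, type, handler) {
  if (element.addEventListener) {
    element.addEventListener(type, handler, false); // the standard way
  } else if (element.attachEvent) {
    element.attachEvent('on' + type, handler);      // IE 8 and earlier
  }
}
```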

As time goes on, and more and more software is balanced on the original JavaScript, it is quite possible that the support train will get longer, instead of shorter, making code harder to maintain. Will we need a “jQuery of jQueries”? Some frameworks lately seem to be drifting in that direction.

Lanier believes it is possible that code development will slow, then grind to a halt in the near future. He notes the astounding fact that computer hardware has increased in speed by orders of magnitude since the end of the 1970s, yet software today is only a little better than it was 30 years ago. So, the dreams of Singularitarians – those who expect our brains to be uploaded into the Overmind in a sort of technological Rapture – will be dashed by software development slowing down, mostly due to “lock-in.” Instead, we’ll have a web that can’t change anymore, one that may even make a highly dependent society “freeze” on its 20th-century precepts.

This is a definite sustainability problem. It could be that the web will become less sustainable because the sheer weight of code will make progress grind to a halt. Imagine if all browsers someday adopt the same code base (witness Opera’s recent move from Presto to WebKit). Imagine a world even more restrictive than the IE-centric web of the early 2000s. Imagine code getting more and more brittle, as the “lock-in” theory predicts. Ultimately the web will “freeze” and be incapable of change, even if it is in widespread use.

Another place we see “lock-in” (IMHO) is in user interfaces. Programs that have been around a long time, e.g., Photoshop, have (1) acquired lots of new functions, and (2) now rely on amazingly clumsy workarounds due to “lock-in” of the original user interface designed decades ago. Surely a user experience team could look at how people use Photoshop and similar tools and do radical redesigns that would be easier for everyone to use. But UI “lock-in” now supports an entrenched group of experts who have mastered the quirks – we have people “lock-in” as well as software “lock-in.” Is there any chance that the Photoshop of 2040 will NOT look like the program we currently use? It can’t be because the current design is the “best” possible design for a graphics tool.

A common reply to the arguments Lanier makes is that competing standards and codebases will ultimately appear through the magic of “open source.” Lanier treats this as wishful thinking.

In the old days, when we weren’t so hyper-connected, software could develop in isolation before having to compete. There were islands. Ditto for the early online world – it was a mix of standards, including UUCP, and proprietary online services like AOL and Prodigy. This was the software equivalent of “diversity” in agriculture, or “grow local” at your local Whole Foods.

The browser ecosystem was similarly complex. There were dozens of major browsers, and many more “forks” or derivative versions. This was a headache, but there were a few benefits – ideas could be pursued independently in the different browser worlds.

But today, any new browser, operating system, or network standard has to instantly compete with existing, mostly “locked-in” solutions. This results in a “winner-take-all” system that prevents new standards from replacing the old. According to Lanier, the Internet “hollows out” the space between a quirky software experiment and a new standard – we only have the software equivalent of “blockbusters” coupled with the “amateur night” found on YouTube. There’s no middle ground, and the “long tail” has its heart cut out.

This is the reason we haven’t had new OS ideas on desktops in many years, and why we don’t have a new crop of browsers (remember Flock?) challenging the current players. With the exception of Chrome, the browsers in use today have histories going deep into the 1990s. Mobile operating systems, once numerous, seem destined to be reduced to two or even one (Android?).

Does this mean that the web today is the web of the future?

One feature of sustainable systems is that they grow and adapt. Lanier’s analysis raises some doubt about whether computer software and networks really act this way. In the long run, they may slow down and freeze. This would make them unsustainable, and put a finite lifespan on the entire online enterprise.

How to counter this? We have to continue the distributed development of Internet software, e.g., all the GitHub code – but there has to be something more. A strategy has to be found so that new ideas can “bridge” the gap in the long tail created by “lock-in.” I’d be interested to hear any ideas…

 
