Sustainability in programming languages


Sustainability affects the long-term operation of systems. Computer programming, and specific programming languages in particular, constitutes a system whose sustainability can be analyzed. What makes a programming language sustainable?

  1. Code written once functions for a long time
  2. The design of the language makes it easy for non-fanatics to program in it
  3. The code produced executes efficiently
  4. The presentation of the language (how it looks on a page when pretty-printed) lends itself to understanding

When I’m thinking of programming, I am referring to web-based programming languages, both client-side (HTML, CSS, JavaScript, ActionScript) and server-side (Ruby, PHP, ASP.NET/C#, Python, and others). Right now, the web is in a period of transition, as some languages like ActionScript 3 are falling from favor, while others like JavaScript are moving beyond the web (e.g. as a scripting language in the Unity 3D gaming engine). Is this rise and fall relevant to sustainability? And how does the movement in programming languages interact with attempts to build a “code free” (but less sustainable) design layer for the web?

A few general observations:

Markup languages like HTML and CSS are quickly understood by a wide variety of people, ranging from fine artists to engineers. The accessibility and usability of these languages are the main reason the web grew. If, instead, the web had been some collection of Java applications, like those on early mobiles, the Internet ecosystem would have been very different. In contrast to HTML, Java is difficult to learn, and the concepts required to program it are non-visual, abstract things not easily understood by designers.

A “Java” web would have been very poor in sites, and those sites would likely have been created by large corporations from the start – just like the pre-iPhone world of Java apps on the early mobile web. HTML and CSS earn massive points for sustainability by increasing the number of programmers, and by allowing them to develop ideas as individuals or small groups instead of large teams. We saw the alternative happen with mobile – in contrast to the open web of a million sites, we had walled gardens with a few centrally-planned programs available in “stores” to mobile subscribers.

But HTML won, and its durability demonstrated long-term sustainability as a programming environment. In fact, it was so easy that by the early 2000s the web had entered a sort of stasis – the 90% dominance of Internet Explorer (which I call the IcE Age) meant that there was little change in the languages. Interestingly, design on the web also stopped, and many web pages in 2006 looked like they had in 2001. The arrival of the iPhone, and the “forcing” of HTML5, broke the logjam. Today, contrary to beliefs that we are entering a “code free” world, HTML looks more robust and long-term than ever. I expect that there will still be a substantial number of HTML/CSS developers in 2020, and I wouldn’t be surprised if the same were true in 2050.

JavaScript, in comparison, is schizoid. It is possible to do some very simple things in JS, which allowed it to play a role, though of an “eye candy” variety, in the early web. By the early 2000s this wasn’t enough, and it looked like the development of web 2.0 would be a mix of Java applets and ActiveX controls. A report by the Gartner Group celebrated the coming web of non-HTML apps – web pages replaced by programs written in Java and ActionScript. Adobe believed that more advanced versions of Acrobat might actually replace HTML, due to better layout (and an authoring environment more like print). JavaScript seemed down and out around 2003.
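To make “very simple things” concrete, here is the sort of rollover script that passed for interactivity in the late 1990s – a sketch only, with invented image names and inline markup of the era:

    // Typical 1990s "eye candy": swap a navigation image when the mouse rolls over it.
    // The image file names here are invented for illustration.
    function rollOver(img) { img.src = 'nav_on.gif'; }
    function rollOut(img)  { img.src = 'nav_off.gif'; }

    // Wired up with inline attributes in the page markup, e.g.:
    //   <img src="nav_off.gif" onmouseover="rollOver(this)" onmouseout="rollOut(this)">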

But in the mid-2000s, JavaScript was reborn, though it happened slightly earlier and for different reasons than the HTML/CSS revival. A group of developers slowly discovered the potential behind JavaScript’s prototype model for objects, and for the first time built complex systems using JavaScript. The definition of the “good” features of the language by Douglas Crockford and others contributed to the change. It became possible for an experienced, formally-trained computer programmer to write JS without holding their nose – even while those working in “hard” languages like C++ and C# remained unimpressed. The tables may be turning now – as JavaScript sneaks onto set-top boxes that formerly hosted only Java, the dominance of the latter language has been threatened. The rise of server-side JS, like that found in Node, is speeding up the process.
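To make the prototype model concrete, here is a minimal sketch of constructor-plus-prototype object building in the style Crockford and others documented. The Widget and Button names are my own inventions, and Object.create is the ES5 helper for chaining prototypes:

    // Per-instance state lives on the object created by the constructor.
    function Widget(id) {
      this.id = id;
    }

    // Methods live on the prototype, so every Widget shares a single copy.
    Widget.prototype.render = function () {
      return '<div id="' + this.id + '"></div>';
    };

    // "Inheritance" by chaining prototypes rather than extending classes.
    function Button(id, label) {
      Widget.call(this, id);   // borrow the parent constructor
      this.label = label;
    }
    Button.prototype = Object.create(Widget.prototype);
    Button.prototype.constructor = Button;
    Button.prototype.render = function () {
      return '<button id="' + this.id + '">' + this.label + '</button>';
    };

    var ok = new Button('ok', 'OK');
    console.log(ok.render());          // <button id="ok">OK</button>
    console.log(ok instanceof Widget); // true – the prototype chain links them

None of this is a class in the C++ or Java sense, which is exactly what left formally-trained programmers cold at first.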

But, despite this revival, JavaScript fails some sustainability tests. Its initially easy syntax gives way to a freakishly complex mess of prototypes, clumsy inheritance, and misplaced binding of objects to their parents. This has erected a wall that stops many web designers from learning anything more about JS, even as its use on websites becomes more and more complex. The appearance of jQuery partly reversed the trend – it turns the mess into something that reminds designers of CSS – but the underlying complexity still blocks wide access to, say, game development.
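As a sketch of both points – the binding gotcha that trips up newcomers, and why jQuery feels familiar to anyone who knows CSS – consider something like the following (the element IDs and selectors are hypothetical):

    // The binding problem: inside the event handler, `this` is the button the
    // listener is attached to, not the object that owns the method.
    var counter = {
      count: 0,
      attach: function (button) {
        button.addEventListener('click', function () {
          this.count++;          // BUG: `this` is the button, not `counter`
        });
      }
    };

    // The classic fix before Function.prototype.bind was common: capture `this`.
    var counterFixed = {
      count: 0,
      attach: function (button) {
        var self = this;         // keep a reference to the owning object
        button.addEventListener('click', function () {
          self.count++;          // now updates counterFixed.count as intended
        });
      }
    };

    // Usage in a browser (assumes a <button id="save"> exists on the page):
    //   counterFixed.attach(document.getElementById('save'));

    // jQuery's appeal to designers: selection reads like a CSS rule.
    //   $('.menu li a').addClass('active');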

A final language to look at here is PHP, the “duct tape” of the Internet. Conceived as a replacement for server-side includes, it has grown into a monster with dozens of APIs to databases and other Internet services. Like JavaScript, basic PHP can be learned pretty quickly. Going further is hard – not because of strange syntax, as with JavaScript, but because of the “Swiss Army Knife” nature of the language. Like a Content Management System (CMS, e.g. WordPress), PHP is a mess of literally thousands of “toolkit” functions, slapped together for utility rather than logical, hierarchical organization. The formal features of the language (e.g. classes) are problematic, and for many don’t seem to add much.

Based on this, PHP is not bad (despite what you hear from Python people), but it is unsustainable. It is hard to see how the language can evolve from its current messy state into a regular, standardized form. The enormous number of overlapping functions may in time create a “combinatorial explosion” that makes the language too hard to program in without tripping up some server environments. The language’s open-source origins ensure that more stuff will be loaded into it, but they make long-run stabilization less likely.

In a future post I’ll consider which formal features of programming languages confer sustainability (hint: can I start coding in an hour or two?), but for now let’s move to two posts from Strangeloop.

The first adds up wasted time waiting for slow websites, and asks, “Does the Average Web User Wait 2 Days a Year for Downloads?” (a back-of-envelope sketch of how such a figure accumulates appears below). The most interesting result of this study is that people feel the web is slower than it actually is – and slow sites are seen as even slower than they actually are. This calls into question the use of raw download-time statistics – User Experience (UX) measures, which focus on perceived download times, are more important. And, from a user’s perspective, they are. Each second spent waiting for a download makes the person less efficient – and the wasted time implicitly impacts all web users in their daily lives. That is to say, the indirect result of slow pages is a downstream decrease in efficiency during daily (physical) life. This is a great example of why sustainability is more than numbers, and why making a sustainable site is more than optimizing a software engine. Changes in the way the site works (e.g. an offline mode) could increase user efficiency without requiring the Internet to be upgraded. The design of the overall system, rather than an individual speed metric, is what matters most.
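Here is that back-of-envelope check – the page count and average wait are my own assumptions, not Strangeloop’s measurements, but they show how quickly the hours pile up:

    // Rough arithmetic with assumed values, not measurements.
    var pagesPerDay = 60;      // assumed page views per user per day
    var avgWaitSeconds = 5;    // assumed average wait per page load
    var daysPerYear = 365;

    var secondsPerYear = pagesPerDay * avgWaitSeconds * daysPerYear; // 109,500 s
    var hoursPerYear = secondsPerYear / 3600;                        // ~30.4 hours
    console.log(hoursPerYear.toFixed(1) + ' hours, or about ' +
                (hoursPerYear / 24).toFixed(1) + ' days of waiting per year');
    // About 1.3 days with these numbers; nudge either assumption up and two days is easy to reach.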

The second post is also interesting. In the tradition of other physical features of the world mapped to the virtual one, Strangeloop discusses the “performance poverty line”. In the words of the author:

The performance poverty line is the plateau at which load time ceases to matter because you’ve hit close to rock bottom in terms of business metrics. If your pages are below the performance poverty line, making them a couple of seconds faster doesn’t help your business.

Plots of user behavior versus download time (or time to view a full web page) show a hyperbolic curve. This means that users react more as a page slows – but only up to a point. After that cutoff, slower speeds don’t affect behavior – the user has shifted from someone accomplishing something to someone mired in “the world wide wait”. This is the point where users no longer feel in control, and instead see themselves as dominated by the slow-moving system. It is a shift from active to passive behavior. This point of no return is around 8 seconds, comparable to the oft-quoted “7 second rule” for home page loads. What’s interesting from a sustainability standpoint is what people do when waits become long. My guess is that they become less productive overall, and that this shift happens at about 8 seconds. It’s like the difference between walking in sand versus heavy mud – in one case you keep walking, even if more slowly, while in the other you have to shift to a new, ultra-slow motion, pulling your feet from the mire.

Along with the 8-second rule, a second rule, at under 1 second, has been identified. Possibly due to fast-acting non-web programs (“twitch” games reacting to the user in milliseconds come to mind), users react negatively to sites that take even half a second to load. An article in the New York Times quotes both Google and Microsoft engineers as saying that the magic number is close to 1/4 of a second – anything longer, and users do less on the web. Since the web needs the same amount of power to run whether or not users are waiting, this is a significant issue. According to some studies, impatience may be rising as an increasingly-ADD Millennial generation becomes a major user of the business web. From the article:

In 2009, a study by Forrester Research found that online shoppers expected pages to load in two seconds or fewer — and at three seconds, a large share abandon the site. Only three years earlier a similar Forrester study found the average expectations for page load times were four seconds or fewer.

It’s too bad that the discussion is purely about speed. Speed doesn’t solve everything, and the real issue is the ability of web users to accomplish tasks efficiently. For many online tasks, waits of a few seconds don’t matter – but for others, speed is all-important. We can improve the sustainability of the web by using design, market research, and user experience studies to determine where we actually need speed.
