Code complexity and the collapse of civilization


A nice post over at JSClasses.org offers various discussions of complexity and its effect on code speed and maintainability.

http://www.jsclasses.org/blog/post/43-Speed-up-JavaScript-using-HTML5-LocalStorage–Lately-in-JavaScript-podcast-episode-27.html#transcript

One of the interesting things in this podcast is a discussion of “complexity evaluation” tools for JavaScript and similar languages. These programs parse source code and compute a level of complexity; if a piece of code is too complex, you split it into separate modules. This has long been standard practice for compiled languages like C++, but is new to JavaScript.

Here are some programs that check complexity, in particular cyclomatic complexity, a term first used by Thomas McCabe in the mid-1970s:

JSComplexity – (A web-based tool) http://jscomplexity.org/

Yardstick – https://github.com/calmh/yardstick

CCM – http://www.blunck.info/ccm.html

Complexity Report – https://github.com/philbooth/complexityReport.js

JSHint (also a styling tool for source code) – http://www.jshint.com/docs/

There are additional measures of complexity out there, some of which may be useful for considering which “big” JavaScript framework to use.
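
To get a feel for what these tools measure, here is a small sketch (the function and data names are invented for illustration). Cyclomatic complexity is roughly the number of independent paths through a function: the base path, plus one for each if, else-if, loop, and && or ||.

// Hypothetical example: this function scores about 5 (base path, if, ||, else-if, for loop).
function describeCart(cart) {
  if (!cart || cart.items.length === 0) {
    return "empty";
  } else if (cart.items.length > 20) {
    return "large";
  }
  var total = 0;
  for (var i = 0; i < cart.items.length; i++) {
    total += cart.items[i].price;
  }
  return "normal, total " + total;
}

// Splitting the work into smaller functions keeps each score low, which is
// essentially what these tools reward when they tell you to modularize.
function cartTotal(cart) {
  var total = 0;
  for (var i = 0; i < cart.items.length; i++) {
    total += cart.items[i].price;
  }
  return total;
}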

The JSClasses article also discusses the value of the “use strict” directive (pragma) in JavaScript. In ECMAScript 5, turning on strict mode causes programs to throw errors for many problems that non-strict mode silently ignores.

How is this useful for sustainability? Well, as code becomes complex, errors that don’t crash a small program may become more significant. The “strict” directive doesn’t speed up code – but, by identifying stuff that is tolerated but not exactly right, it makes the code more likely to run as complexity increases.
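
As a quick sketch of the kind of problem strict mode surfaces (the function name is invented for illustration): assigning to a variable that was never declared silently creates a global in non-strict mode, but throws a ReferenceError under “use strict”, so the bug is caught immediately instead of lurking in a larger program.

"use strict"; // applies to the whole script; it can also go at the top of a single function

function tallyVisits() {
  // Typo: "vistCount" was never declared with var.
  // Non-strict mode silently creates a global variable here;
  // strict mode throws "ReferenceError: vistCount is not defined".
  vistCount = 1;
  return vistCount;
}

tallyVisits();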

This is an area where there are “first principles and speculation” but little real empirical research. One suggested way to test whether complexity causes losses in sustainability is to examine security holes: more complex code might be expected to have more security problems. However, in a recent study by Yonghee Shin and Laurie Williams of Mozilla JavaScript Extensions (JSEs), security bugs didn’t seem closely related to cyclomatic complexity or to several other metrics that compute a complexity score. On the other hand, the same study found that functions with security holes were more complex than functions that merely had coding errors.

 

Complexity and Sustainability

How does this apply to sustainability? One idea is that overly-complex systems become less sustainable. If a product is made of lots of detailed, one-of-a-kind parts, maintaining it becomes more difficult than if it is built with a few standardized parts. Also, if there are lots of parts, there are lots of ways those parts can fail. Finally, if you have complex code, the potential number of wrong paths through the code increases exponentially – this is what the complexity tools above look for.

The greater complexity may also provide flexibility – but at the expense of steep learning curves. Frameworks like jQuery hide a lot of gunk related to cross-browser support for page layout, the DOM, and animation. However, once you learn jQuery, you quickly realize that its code parallels equivalent code written without the library.

// Plain JavaScript: run code once the page has loaded
window.onload = function () {...};

// jQuery equivalent: run code once the DOM is ready
$(document).ready(function () {...});

In fact, books on jQuery typically have the same set of tutorials used for “native” JavaScript. The increase in complexity did buy us cross-browser compatibility, but the “learning curve” for the code is about the same. jQuery code is more compact, but at the same time it is harder to explain concepts like “chaining” to new students. So, we still have to teach JavaScript, then jQuery. Diminishing returns?
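
As an example of chaining, here is a hypothetical snippet (it assumes a page with paragraphs of class “note”). Each jQuery method returns the jQuery object, so several steps collapse into one statement, while the plain JavaScript version spells the same steps out one at a time.

// jQuery: chained calls, selecting once and applying each change in turn.
$("p.note").addClass("highlight").css("color", "red").hide();

// Roughly equivalent plain JavaScript: the same work, written out explicitly.
var notes = document.querySelectorAll("p.note");
for (var i = 0; i < notes.length; i++) {
  notes[i].className += " highlight";
  notes[i].style.color = "red";
  notes[i].style.display = "none";
}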

The problem is even more acute with “time-saver” frameworks like PhoneGap or Sencha Touch. Anyone who has set one of these up in a project has done some struggling, and then had to learn a new language for accomplishing their programming goals. The library helps, mostly by standardizing and by hiding messy low-level stuff – but it doesn’t turn development into an “easy button.”

These frameworks may enhance sustainability – but only if you have a big enough organization for someone to become a guru of the framework. In addition, the framework is burning bits more rapidly than an equivalent, custom-coded application.

And what is the future? Ever more CSS styles. More JavaScript APIs. More devices, with more interfaces. Finally, a web so complex that “ordinary” people will be unable to put up basic web pages, as they have done since 1993. We will need a web specialist, probably with an advanced computer science degree. The web, long a domain of creative amateurs, will drift into the maw of code specialists.

It was a good idea (let computers do the work) but is being pushed further and further…

Complexity and Collapse

These issues of complexity vs. sustainability fit into the “big picture” of an evolving Internet if we look at a theory created to predict the fall of civilizations – Joseph Tainter’s The Collapse of Complex Societies. Here is a curve from the class book, showing his theory of how over-complexification leads to ultimate collapse.

[Figure: complexity curve, with complexity plotted against the quality of civilization, showing diminishing returns for greater extremes in complexity]

In Tainter’s view, civilizations over the last several thousand years started with a “big idea” like agriculture or irrigation canals. At first, application of the new technology gave positive benefits. For example, during the early history of the Roman Empire, good roads helped knit the empire together.

However, human nature is such that when I take one aspirin, it’s good. I then reason that ten aspirin must be really good. Obviously, in the aspirin case, too much of a good thing is a problem. But we often reason the wrong way about technology in general.

For civilization, too much of a useful technology causes problems. In Tainter’s model, civilizations repeatedly reach “complexity overshoot” – too much of a good thing. Even as they get diminishing returns from their efforts, civilizations keep adding more layers of complexity to their original tech. They do this right up to their collapse, even when adding more complexity actually decreases (rather than increases) the quality of life.

So, when digging a few irrigation ditches improved crop yield, it was assumed that ten times as many would give ten times the results. This mistake was made in the Middle East long ago, and too much irrigation ultimately led to salt destroying the soil, with a collapse of productivity in the ancient world.

Across the globe, the statues on Easter Island got bigger and bigger, and by the big collapse of that society in the 1700s, people were chipping out monsters that dwarfed the standing statues.

In the late Roman Empire, the very straight, neat roads which had fueled expansion contributed to decay – invading armies could move quickly into the center of the Empire.

In hardware, the rise of digital cameras made photography more accessible to the masses – but at the cost of ever more complex devices with lots of cryptic buttons and dials. These cameras often take better pictures – but one must re-learn the controls every time the camera comes off the shelf. The other response to complex camera architecture has been to make “easy button” systems that use complex computers to automate picture-taking, at the cost of flexibility. I really doubt that the typical point-and-shoot digital camera is doing much better than a good film-based camera did. Film gave us fewer pictures, but more pictures just “complicate” the personal profile we present to each other on Facebook.

In the case of cyberspace, some have argued that over-reliance on ever more complex computers will lead to a similar “complexity overshoot”. In our world, computers run everything from power plants to parking meters. Many of these things ran well (if less efficiently) without computers. But our current belief seems to be, “fix it by throwing a computer/Internet at it.”

Computers are getting ever more complex as their circuits get smaller and more densely packed. And software, enabled by faster hardware, is acquiring layer upon layer of “wrapper” code, objects, chaperones, and various gook that (in theory) is not required for software to work.

It is just possible that our current age is in a complexity overshoot. To see an example of this, look at the screens below from the 1984 Macintosh.

[Image: MacPaint running on the 1984 Macintosh]

You have to admit, this 1984 computer is doing something VERY similar to Photoshop today – all we see is a matter of degree. This piece of software (MacPaint) ran in at most 100,000 bytes of memory – today’s Photoshop uses hundreds of megabytes. IMHO, Photoshop provides possibly a 20-fold increase in capabilities for a 5,000-fold increase in memory size. In other words, diminishing returns.

The next image shows that the Mac in 1984 didn’t look too different from present-day interfaces. And this interface in turn had been cribbed from the Xerox Star, which was even older.

[Image: the 1984 Macintosh desktop]

This image of an early MacDraw (ancestor of Illustrator-type vector graphics) doesn’t look too shabby, considering the microprocessor is running at 8 MEGAHertz!

[Image: an early version of MacDraw]

But the most amazing image is the animated GIF of 1985 Microsoft Excel – really, there’s been very little change, despite the incredible increase in the complexity of hardware and software. In terms of task completion, this is diminishing returns.

[Animated GIF: Microsoft Excel running on the Macintosh in 1985]

Oh wait, there’s an even better example! The article linked below compared a Mac Plus against an AMD 64-bit Athlon X2 processor from 2007:

[Image: Mac Plus vs. AMD Athlon X2 comparison]

http://hubpages.com/hub/_86_Mac_Plus_Vs_07_AMD_DualCore_You_Wont_Believe_Who_Wins

Who do you think won that one, when doing basic “Microsoft Office” operations?

For the functions that people use most often in Microsoft Office, the 1986 vintage Mac Plus beats the 2007 AMD Athlon 64 X2 4800+: 9 tests to 8! Out of the 17 tests, the antique Mac won 53% of the time! Including a jaw-dropping 52 second whipping of the AMD from the time the Power button is pushed to the time the Desktop is up and usable.

Really, these “John Henry” scores make me wonder why I bought that big PC.

[Image: benchmark results from the Mac Plus vs. AMD comparison]

So, our belief that our ever more complex world is improving at the same rate that it is becoming more (overly?) complex may be called into question.

Sustainable Design and Complexity

How does this consideration of complexity impact Sustainable Virtual Design? Well, for starters, we can point to progress in the other direction – HTML5 was deliberately designed to be simpler than its predecessors. Simplified page markup, CSS replacing JavaScript libraries (hiding the complexity forever, we hope), and simple tags for audio and video replacing the dreaded <object> tag are all a plus.

In contrast, the current trend toward “bloatware” – more and more JavaScript libraries, monster stylesheets, frameworks, and command-line setups using new servers like Node.js – is a minus. It gives us more capability, but it is also turning the web into a place where great things are accomplished by armies of code specialists. Just putting up a web page is getting tough.

Design can come to the rescue (partly). The mission of design on the web is to make sites understandable, usable, and accessible. These factors all weigh toward simplicity. It is true that greater complexity does not automatically lead to a design I can’t understand – but often it does. Good design should try to keep a lid on complexity, throwing it away when possible and hiding it from view when it cannot be removed.

This emphasis on design with the big picture in mind means that we need to consider whether our “big idea” requires grafting a lot of complex stuff onto our system. The slew of whizbang features in HTML5 and related technologies offers lots of potential to make overly complex sites. It might be worth a look at The Wayback Machine at Archive.org to see if we’re really doing better than we were in 2000. One way to ensure this is a Progressive Enhancement strategy – start simple, then complexify. If we start complex, our attempts at “simplifying” are likely to graft even more cruft onto the system.
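
Here is a minimal sketch of Progressive Enhancement in code, using the HTML5 LocalStorage idea from the podcast above (the function names are invented for illustration). The page works with an ordinary request in any browser; caching is layered on only when the browser actually supports it.

// Progressive enhancement: the page works without localStorage;
// browsers that support it simply get the faster, cached version.
function cacheFragment(key, html) {
  try {
    if (window.localStorage) {
      localStorage.setItem(key, html);
    }
  } catch (e) {
    // Storage disabled or full (e.g. private browsing): fail quietly,
    // the basic page still works without the enhancement.
  }
}

function readCachedFragment(key) {
  try {
    return window.localStorage ? localStorage.getItem(key) : null;
  } catch (e) {
    return null;
  }
}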

Finally, thinking in a sustainable way may help us avoid a long-term complexity trap. If we obsess over “carbon footprints” on the web, we indirectly move away from complex devices, since a complex, dynamic machine almost certainly uses more energy than an equivalent simple one.

2 Comments

  1. Not a single comment on a fine article! I think it’s basically correct. But what can be done? As long as one continues to code, things tend to get more complex, even if you think about things like this every day (as I do). The only thing that really has the power to stop this is less food. It’s always worked in the past…

    1. Thanks for the compliment. Code does tend to get more complex. Also, there are two factors (other than making more complex code) at work. The first is a desire to create “frameworks” and abstract the nasty stuff so it is less close to the machine. The trouble is, a framework is ultimately just as complex as the original code. You even see this in GUI interfaces. Learning to use a “graphic” design interface like Illustrator takes about as many hours as it would take to type in commands to build objects directly. Another example is the jQuery library – a beautiful re-thinking of JavaScript, but it takes about the same amount of training to learn to use it fully as if you learned the code directly.

      The second is that hiding complexity typically only helps at the entry level. A very simple digital camera might have one button (with automated focus and lighting), but a full-featured digital camera ends up even more complex than the analog device it replaces. Ditto for blogs – a single-task web page can be simplified, but if we want the full range of possible web designs, our “web design tool” ends up as complex as HTML itself.

      Usually, the automation help is minor – possibly 10% easier for a doubling of complexity. It reminds me of re-tying a knot – no matter how you tie it, it is a knot.

      One argument for sustainability thinking in software is that, if people focus on a full set of goals, including the context of their code in the larger “Internet ecosystem”, they might design differently. If you’re thinking about code carbon footprints, you might be led to seek solutions relevant to your project, instead of immediately adding in a massive “framework” library. It would be interesting to see if high energy prices put the bite on Internet bloatware.

      Another aspect of complexity is the idea that if we make our code complex enough, we’ll somehow get artificial intelligence. To date, the best we can do is associative search engines like Watson – a very, very far cry from the sort of AI in the 1960s movie 2001.
