Code sustainability and social “lock-in”

This post, like the previous one, is concerned with the bigger picture of Sustainable Virtual Design. At the small scale, we worry about Web Performance Optimization (WPO), Search Engine Optimization (SEO), and design workflow. Applying best practices in each area makes the web more efficient in terms of energy used and user time spent, enhancing sustainability in a local sense. But beyond this, there are bigger things going on.

In my earlier discussion, I considered the notion of software “Lock-In,” described by Jaron Lanier in his two recent books, You Are Not a Gadget and Who Owns the Future? In these books the author describes the way in which software encodes ideas: not just programming concepts, but ideas about how things should be done. “Lock-in” is relevant to sustainability because of its effect on code. For example, the “box model” in CSS makes us think that layout means positioning boxes. There are other ways to think about layout, but the effect of CSS has been to “lock in” a boxy model, even as tools for free-form design become available to developers. “Lock-in” also means that bad ideas from the beginning of web technology (e.g. the many faults of JavaScript described by Doug Crockford) still inform how we build interactive systems on the web. Concepts from systems programming have “locked in” a way of thinking about web apps that may go against the nature of a networked system. One can almost feel the Java-like world of objects endlessly calling each other settling over JavaScript like an obscuring cloud.

In Lanier’s second book, he discusses another kind of “lock-in.” This one is not so much about software per se as about the interaction of software and society. It is a more abstract concept, but equally important as we go forward with ever more interactive, social systems online.

The best way to understand the social sort of “lock-in” is to consider the mindset of Silicon Valley. In the past, there was a widespread expectation that by now (2013) we would have true artificial intelligence. One need only compare Apple’s Knowledge Navigator video from 1987 with what Siri can actually do today to see the failure of AI. Both the imagined 1980s system and the actual iOS app can find things and generalize in a useful way. But the Knowledge Navigator seemed actually able to “think” for the person, whereas Siri is mostly fun to trick. As Lanier explains, we have the network and the databases, but the software still only does what we tell it to do – there is no sign of an emergent AI like those featured in SF novels and movies.

In other words, our software is only as good as we are. If we leave our software alone, it doesn’t grow or expand in ability. It doesn’t evolve. Now consider that most people interact more and more via software. The code doesn’t “grow” with this interaction – instead, it contains a fixed belief system, embodied by its creators, about how people should interact. Even advanced systems like Watson have this fixed quality. But we have a strong desire to believe that we have really created “mind children” who are “emergent” in their behavior, and we are more than willing to pretend that the software is doing more than it really is.

According to Lanier, the danger in this situation lies in a feature of the Internet. Contrary to optimistic theories like the Long Tail, the Internet appears to consolidate power into a few suppliers, or even one. These vendor services, which Lanier calls “Siren Servers,” work by grabbing all the data they can and feeding it back to people in social networks. A site like Facebook (which already constrains the definitions of “friend” and “interact with a friend” to narrow, code-based ones) further alters people’s behavior by “helping” them. Analytics software takes interactions in social networks, applies statistics, and then feeds back recommendations, nudges, suggestions, and so on.

Due to the nature of social networks, these recommendations are often the path of least resistance. This makes the predictions of analytics a self-fulfilling prophecy. People get the software’s consensus back and adjust to it. The entire system then converges on a single notion of how things should be done. The diversity of human response conforms to the fixed ideas created in the software. People ultimately make themselves more like an A.I. than the other way around – they become more “computerlike” through their interaction with the social network.

This problem would not exist if the analytics software were itself innovative and creative. But it just minimizes differences and pushes those minimized differences back onto its audience. It tends to convert diverse human interaction into software-specified consensus. Not really brain-like at all.
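To see why pure averaging cannot be creative, consider a toy sketch (the numbers and function names here are hypothetical, not any real analytics system): each user holds a numeric “preference,” the recommendation engine feeds back the group mean, and every user drifts partway toward it. The spread of opinion collapses round after round, while the average itself never changes – the system can only reinforce what was already there.

```javascript
// Toy model of analytics feedback: the "recommendation" is just the
// average of everyone's preference, and each round every user drifts
// part of the way toward it.
function recommend(prefs) {
  return prefs.reduce((sum, p) => sum + p, 0) / prefs.length;
}

function step(prefs, pull = 0.3) {
  const rec = recommend(prefs);
  return prefs.map(p => p + pull * (rec - p));
}

function spread(prefs) {
  return Math.max(...prefs) - Math.min(...prefs);
}

// Start with widely varied "tastes" and iterate the feedback loop.
let prefs = [1, 4, 7, 10];
for (let round = 0; round < 10; round++) {
  prefs = step(prefs);
}
// Each round shrinks every deviation by the same factor, so the spread
// collapses toward zero while the mean (5.5) stays exactly where it began.
```

Whatever the pull factor between 0 and 1, the only fixed point of this loop is unanimity at the original mean – nothing new can ever emerge from it.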

Despite this limitation, a new notion of “artificial intelligence” has appeared in Silicon Valley. In this new model, the Internet, rather than a big computer or robot, is the A.I. The idea that the Internet as a whole forms one giant brain is increasingly promoted. In this model, people on the Internet replace the (failed) code of artificial intelligence. Communication in social networks becomes the firing of individual neurons. The consensus recommendations of sites like Amazon become the “thoughts” of said brain. It’s a way to get artificial intelligence back into the picture when the code clearly doesn’t think on its own.

According to Lanier, in pursuit of this idea the contributions of people on the network are deliberately anonymized and suppressed. For example, authorship on Wikipedia is hidden, implying that the articles, like holy writ, come from a higher source. It’s no accident that this happens in “the cloud,” which is where Heaven was once placed.

Now, the real danger to a sustainable world is that we keep kidding ourselves in this way. The software is just a rigid, inflexible system created as a “snapshot” of what people thought social communication was in 2004. As we go forward, this decidedly limited notion of, for example, a “friend” gets “locked in” as the total meaning of “friend.”

More importantly, feedback recommendations in such an environment will, over time, tend to reduce the diversity of human thought to whatever was embodied in the code. The code is needed to carry social interaction in the network, and its limitations become the boundaries of thought. The code only finds statistical patterns in the behavior of social network members, ultimately making us all think in an average way.

In my opinion, this is a big problem for sustainability. Most definitions of sustainability involve not just preserving the state of the world, but encouraging its growth and evolution. In contrast, bogus “emergent” A.I. grabbing social network transactions as a way to fake intelligence has the potential to stifle growth. Instead, we converge on a nice, even utopian world – but one that is “the end of history” nonetheless. As designers and developers, particularly in the interactive, “social web app” realm, we might damage society’s long-term sustainability in a very big way.

At the code level, there is a big move to standardize web development in ever-larger, more elaborate “frameworks” which put everything into a neat pattern. Coders need to remember that these patterns have another name – a “belief system” about how things will be done. Standardizing will “lock in” a specific belief system. This code, in turn, can conform people’s online behavior to a specific idea of how to communicate.

Perhaps it is too soon for us to put all our web code into one global “big idea.” Globals are usually seen as bad in code, and I submit that “social globals” created by software may be bad for society.
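A minimal JavaScript sketch of why globals have a bad name in code (the `config` object here is purely illustrative): two scripts sharing one page each assume they own the same global, and the later one silently destroys the earlier one’s state. A society-wide “social global” has the same flaw – one belief system leaves no room for another.

```javascript
// Script A sets up a shared global for its own use.
var config = { theme: "light" };

// Script B, loaded later on the same page, reuses the same name
// and silently replaces A's object wholesale.
var config = { locale: "en" };

// Script A's state is simply gone - no error, no warning.
console.log(config.theme);  // undefined
console.log(config.locale); // "en"
```

This is exactly why module systems and namespacing caught on in JavaScript: a single shared name forces everyone into one structure, and the last writer wins.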

When I think about this problem, I remember some history classes I took in college about “ancient” Egypt. Most of what we find cool about the ancients – the pyramids, the mummies inside them, hieroglyphics, art, government – was developed in the first few hundred years of the Old Kingdom. For thousands of years afterwards, aspects of Egyptian society were relatively static, until at last they were zapped by invasion. It is SF speculation, but I sometimes imagine the world of 10,000 AD as being more or less the same, “locked in” by straitjacketed modes of communication enforced by our computer-based networks.

What can we do about this? First off, until proven otherwise, we should not consider any hunk of code “intelligent.” When we do so, we are probably fooling ourselves. It makes us see our creations (even amazing social networks like LinkedIn) as more than they are, and as a guide for more than a narrow mode of communication.

Second, we should avoid too much “convergent” thinking. In User Experience work, testing with the audience is often used to establish the best interface for a web app. But this method suffers from convergence – you are giving people what they want, then reflecting it back to them. This leads those same people to assume the feedback is a norm established in some objective way, rather than audience statistics. While UX is important, it is also important to realize that the logical endpoint of UX is “one design to rule them all.”

So, thirdly, we need to be experimental with design. Other design areas, particularly artsy ones like print design, have long encouraged designer-centric experimentation. Many of the designs in a typical graphic design magazine are of interest only to other designers. It is true that they are irrelevant to most people, but we shouldn’t shout them down for violating stats-driven analytics.

Enough of the “big picture” – the next post gets back to some specifics about feature detection!
