This morning’s post concerns a couple of recent projects that promise improvements in web efficiency.
Both of these projects address web complexity – the steady rise of ever more elaborate web pages and associated Software as a Service (SaaS) APIs. As discussed elsewhere, web pages just keep getting bigger and bulkier.
The average mobile page is 3x larger today than it was a few years ago:
A detailed history of the whole rise in bloatware sites may be found at:
Fixes That Might Not Be Fixes?
Some tech fixes have been announced in recent months which promise things will get better. However, after reading this wonderfully scorching rant on webpage bloat, it’s not clear if these fixes will actually work.
I begin to wonder how beneficial these changes and tech fixes will actually be…
So, in the following I’d like to discuss these new projects, along with critique from a sustainability perspective.
- Polaris Dependency Graph Project
- Google’s AMP
- Facebook’s Instant Articles
All Your Webpage is Fat…
And, the most recent HTTP Archive results.
According to the SitePoint article:
- 25% of sites do not use GZIP compression
- 101 HTTP file requests are made — up from 95 a year ago
- Pages contain 896 DOM elements — up from 862
- Resources are loaded from 18 domains
- 49% of assets are cacheable
- 52% of pages use Google libraries such as Analytics
- 24% of pages now use HTTPS
- 36% of pages have assets with 4xx or 5xx HTTP errors
- 79% of pages use redirects
The problem is particularly acute for CMS systems. Many (like the infamous WordPress this site runs on) send the same, overly complex web pages down to mobiles as to desktops, just layering on extra dollops of CMS markup. And that doesn’t even consider the bloat that creeps into CMS themes regardless of their target.
Website Complexity and Dependency Trends in 2016
Web pages download a bunch of media, scripts, and other files they don’t always need in the particular use context. While all these files are used in some contexts, the typical download only requires a fraction of them to be present.
And the rising inter-dependency of the page elements – another aspect of rising web page complexity – makes it difficult to streamline the pages.
A more detailed study of web complexity noted that the “cloud” is making sites more complex – multiple servers typically interact to produce a single web page these days, and 1/3 of the bytes on a typical web page came from non-origin servers.
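The “1/3 of the bytes from non-origin servers” measurement can be illustrated with a toy calculation. The resource list below is hypothetical, invented purely for illustration – not data from the study:

```javascript
// Toy illustration of the "non-origin bytes" measurement: given a
// page's resource list (hypothetical data), compute what fraction
// of bytes came from servers other than the page's own origin.
const pageOrigin = "www.example.com";
const resources = [
  { url: "https://www.example.com/index.html", bytes: 40000 },
  { url: "https://www.example.com/style.css",  bytes: 20000 },
  { url: "https://cdn.adnetwork.example/ad.js", bytes: 60000 },
  { url: "https://fonts.example.org/font.woff", bytes: 80000 },
];

function nonOriginFraction(origin, list) {
  const total = list.reduce((sum, r) => sum + r.bytes, 0);
  const nonOrigin = list
    .filter(r => new URL(r.url).hostname !== origin)
    .reduce((sum, r) => sum + r.bytes, 0);
  return nonOrigin / total;
}

console.log(nonOriginFraction(pageOrigin, resources)); // → 0.7
```

On this made-up page, 70% of bytes come from third-party hosts – the kind of breakdown the study performed at scale.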
1. Polaris to the Rescue?
In this light, I was interested to read about Polaris, a predictive library for stock browsers that promises to load pages up to 34% faster.
Here’s the research paper:
Polaris is a collaboration of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Harvard University. It is a “Dependency Tracker” system with two components, Scout and Polaris, designed to fix cryptic dependencies that lead to stalled pages and additional trips to the web server.
- Scout differs from other WPO tools in tracking dependencies at a much finer level. For example, it looks down to the level of individual JS variables to see which ones are shared by multiple libraries and functions.
- The second component, Polaris, is a JS library that is downloaded early. Using the dependency graph from the site analysis by Scout, it schedules downloads in a more efficient order.
- The combined approach pushes optimization to the client side, rather than relying on server pre-optimization, as Opera Mini does.
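To make the idea concrete, here’s a toy sketch in plain JavaScript – with a made-up dependency graph, not actual Scout output or the real Polaris API – showing how a fine-grained dependency graph yields a fetch order that never downloads a resource before its prerequisites:

```javascript
// Hypothetical dependency graph of page resources, of the kind a
// Scout-style offline analysis might produce. Each resource lists
// the resources it depends on.
const deps = {
  "index.html":   [],
  "app.js":       ["index.html"],
  "analytics.js": ["index.html"],
  "widget.js":    ["app.js"],      // shares JS variables with app.js
  "style.css":    ["index.html"],
};

// Kahn-style topological ordering: repeatedly fetch any resource
// whose prerequisites have all been fetched already.
function fetchOrder(graph) {
  const order = [];
  const done = new Set();
  const pending = new Set(Object.keys(graph));
  while (pending.size > 0) {
    const ready = [...pending].filter(r => graph[r].every(d => done.has(d)));
    if (ready.length === 0) throw new Error("cyclic dependency");
    ready.forEach(r => { done.add(r); pending.delete(r); order.push(r); });
  }
  return order;
}

console.log(fetchOrder(deps));
// → ["index.html", "app.js", "analytics.js", "style.css", "widget.js"]
```

The real system schedules network fetches against this kind of graph at runtime, but the ordering constraint it enforces is the same.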
The best thing about the system is that it was designed with built-in testing, and the authors conducted tests with lots of large sites. So their claim of increased efficiency is not just conceptual (as are many claims about web frameworks) but backed by hard, scientific data.
In practice, it’s possible to see how the system could deliver efficiency. Larger websites, with many designers and developers all fighting over what goes on a web page, are likely to create “kitchen sink” solutions that download lots of stuff that is either loaded out of order (leading to latency problems) or just not needed on all but a few web pages. Analysis with Scout, plus Polaris optimization, could help with this.
In fact, it could form the basis for a useful “Green Boilerplate” that I’ve been working on for several years. I experimented with some of the ideas in the paper, but did not try to create a dynamic dependency graph based on web traffic.
So This One’s a Win…
However, Polaris seems to share some features with Google’s AMP (see below). Until the libraries are released, it is difficult to see how this academic project compares to commercial solutions.
2. Google’s AMP
Bloatware is a big concern to Google, which, along with a very few other vendors, controls much of the “cloud”. In particular, Google has shown a continuing effort to reward sites that make their pages mobile-friendly, and that includes reducing page size. Enter AMP.
The goals of the project are to increase performance on mobile devices by re-defining web pages for fast loading.
A list of goals from the main site:
- Allow only asynchronous scripts
- Size all resources statically
- Don’t let extension mechanisms block rendering
- All CSS must be inline and size-bound
- Font triggering must be efficient
- Minimize style recalculations
- Only run GPU-accelerated animations
- Prioritize resource loading
- Load pages in an instant
- Help make AMP faster
Google’s solution is a throwback to the ancient, pre-iPhone days of non-HTML mobile markup such as WML. Their new spec defines additional non-HTML5 tags required for an AMP page to work.
Besides these new tags, you’re creating HTML5, preferably with Schema.org markup.
Some additional page requirements from https://github.com/ampproject/amphtml/blob/master/spec/amp-html-format.md
- A <link rel="canonical" href="$SOME_URL" /> tag inside the head that points to the regular HTML version of the AMP HTML document, or to itself if no such HTML version exists.
- A <meta charset="utf-8"> tag as the first child of the head tag.
- A <meta name="viewport" content="width=device-width,minimum-scale=1"> tag inside the head tag. It’s also recommended to include initial-scale=1.
- A <script async src="https://cdn.ampproject.org/v0.js"></script> tag inside the head tag.
- The AMP boilerplate code in the head tag.
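Putting these requirements together, a minimal AMP page might look something like the sketch below. The canonical URL is a placeholder, and the required boilerplate style block – defined in the AMP spec – is elided:

```html
<!doctype html>
<html amp>
<head>
  <meta charset="utf-8">
  <link rel="canonical" href="https://example.com/article.html">
  <meta name="viewport" content="width=device-width,minimum-scale=1,initial-scale=1">
  <!-- AMP boilerplate <style> block from the spec goes here -->
  <script async src="https://cdn.ampproject.org/v0.js"></script>
</head>
<body>
  <h1>Hello AMP</h1>
</body>
</html>
```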
Does it work?
The first question is technical. Since so many websites on mobile are actually CMS systems shoehorning mobile styles onto their desktop layouts, it is not clear that they can easily adapt to AMP. CMS-based sites make up a large fraction of the entire Web, with WordPress alone powering nearly one third of all sites. We won’t know until we get AMP stats from HTTP Archive.
Second… the sites used to announce these efficiency increases are themselves bloatware! From Idle Words:
…the page describing AMP is technically infinite in size. If you open it in Chrome, it will keep downloading the same 3.4 megabyte carousel video forever.
If you open it in Safari, where the carousel is broken, the page still manages to fill 4 megabytes.
Geez. Design is the problem. Getting people, including coders, to design in more sustainable ways apparently requires web bloatware to make the case…
3. Facebook’s Instant Articles
Apparently, Facebook has a similar plan in mind for mobile iOS and Android. Their Instant Articles site details the coming framework, due to be opened to everyone and discussed in detail at the Facebook Developers conference in 2016. However, many large sites are already using the system.
Here’s their fluffy, high carbon-footprint site, apparently catering to “executive decision makers” and Creative Directors who need bloatware to decide stuff.
And the Developer’s site:
And the blog:
One feature of note is that your company has to be approved to support Instant Articles. After setting up a secure RSS feed (see below), you have to submit the RSS feed for approval by Facebook:
You also need to map your Facebook Page URL:
Here’s the page you need to get it working (look at the menu across the top):
Some features of Instant Articles Styling:
- CSS is NOT SUPPORTED
- Semantic HTML5 tags are required
- A canonical link is required
- OGP (Open Graph Protocol) is used just as on regular pages
From the FB page….
```html
<head>
  <meta charset="utf-8">
  <meta property="op:markup_version" content="v1.0">
  <!-- The URL of the web version of your article -->
  <link rel="canonical" href="http://example.com/article.html">
  <!-- The style to be used for this article -->
  <meta property="fb:article_style" content="myarticlestyle">
</head>
```
Articles need to be placed in a secure RSS feed:
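As a sketch of what such a feed might look like – with placeholder URLs and publisher names; the exact required elements are in Facebook’s docs – it is standard RSS 2.0 with the Instant Article markup embedded in a content:encoded element:

```xml
<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Example Publisher</title>
    <link>https://example.com/</link>
    <item>
      <title>Example Article</title>
      <link>https://example.com/article.html</link>
      <guid>https://example.com/article.html</guid>
      <content:encoded>
        <![CDATA[
        <!doctype html>
        <html lang="en" prefix="op: http://media.facebook.com/op#">
          <head> <!-- head markup as shown on the FB page --> </head>
          <body><article> <!-- article body --> </article></body>
        </html>
        ]]>
      </content:encoded>
    </item>
  </channel>
</rss>
```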
Once the RSS feed is submitted and approved, you can go to a manual editor on Facebook to publish individual articles on your Facebook Page (under Publishing Tools). You can also create articles manually. There’s also a debugger that validates your article.
This system, like AMP, is screaming out for validation that it is actually faster. The purpose of the system is to allow Facebook to host content directly from suppliers, rather than referencing other websites. However, it should improve WPO, and hence sustainability. But actual numbers will tell the tale.
But Does it Work?
Remember that sustainability is more than web optimization; it includes design, UX, and all of development. The following article details the resistance Facebook encountered from “creatives” who don’t care about page load.
Further down the page, you’ll find a 41 megabyte video, the only way to find out more about the project. In the video, this editor rhapsodizes about exciting misfeatures of the new instant format like tilt-to-pan images, which means if you don’t hold your phone steady, the photos will drift around like a Ken Burns documentary.
And supporting evidence from The Atlantic about the resistance of Creative Directors to efficiency in design.
Geez, Design is STILL the problem… Getting people to accept Instant Articles requires bloatware.
And The Real Problem is Design (Again)
All these examples promise sustainability gains from a technical perspective. However, there seems to be a real concern that people won’t build slimmed-down websites on their own, and that bloatware is needed to make the case that they should. To my mind, this implies that people who are unmoved by sustainability arguments need bytestorms of convincing before the larger web can become sustainable. In other words, the people in charge of making the web more sustainable don’t feel those rules apply to them, at least when introducing new products and services.
…More on how “Creative Directors Damage the Earth” in a future post.