Considering How We Use HTTP/2

A note from the editors: This article is part two of a two-part series exploring the new HTTP/2 protocol and using it responsibly. Be sure to read part one, Using HTTP/2 Responsibly: Adapting for Users.

It’s important to remember that HTTP/2-specific optimizations may become performance liabilities for HTTP/1 users. In this final part of the series, we’ll look at the performance implications of such a strategy and how build tools can help you manage HTTP/1- and HTTP/2-specific assets.


Our generalized example from the previous article shows how we can adapt delivery of site assets to a user’s connection. Now let’s see how this affects performance in the real world.

Observing performance outcomes

Developing a testing methodology

Low-speed mobile connections are quite common in the developing world. Out of curiosity, I wanted to simulate HTTP/1 and HTTP/2 scenarios for my own site on an actual low-speed mobile connection.

In a strangely fortuitous turn of events, I ran out of high speed mobile data the month I was planning to test. In lieu of extra charges, my provider simply throttles the connection speed to 2G. Perfect timing, so I tethered to my iPhone and got started.

To gauge performance in any given scenario, you need a good testing methodology. I wanted to test three distinct scenarios:

  1. Site optimized for HTTP/2: This is the site’s default optimization strategy when served using HTTP/2. If the detected protocol is HTTP/2, a higher number of small, granular assets is served.
  2. Site not optimized for HTTP/1: This scenario occurs when HTTP/2 isn’t supported by the browser and when delivery of content isn’t adapted in accordance with those limitations. In other words, content and assets are optimized for HTTP/2 delivery, making them suboptimal for HTTP/1 users.
  3. Site optimized for HTTP/1: HTTP/2-incompatible browsers are provided with HTTP/1 optimizations after the content delivery strategy adapts to meet the browser’s limitations.

The tool I selected for testing was sitespeed.io, a nifty command-line tool (installable via Node’s package manager, npm) that’s packed with options for automating performance testing. It collects various page performance metrics each time it finishes a session.
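
If you already have Node, installation is a single command; this puts sitespeed.io on your PATH globally:

npm install -g sitespeed.io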

To collect performance data in each scenario, I used the following command in the terminal window:

sitespeed.io -b chrome -n 200 --browsertime.viewPort 320x480 -c native -d 1 -m 1 https://jeremywagner.me

There are plenty of arguments here, but the gist is that I’m testing my site’s URL using Chrome. The test will be run 200 times for each of the three scenarios, and use a viewport size of 320×480. For a full list of sitespeed.io’s runtime options, check out their documentation.

The test results

We’re tracking three aspects of page performance: the total time it takes for the page to load, the amount of time it takes for the DOMContentLoaded event to fire on the client, and the amount of time it takes for the browser to begin painting the page.
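
sitespeed.io gathers these figures from the browser for you, but if you want to see roughly where they come from, they correspond to values the browser itself exposes. This is a sketch of those mappings (run after the load event fires), not sitespeed.io’s internal code:

var t = performance.timing;

// Total page load: from the start of navigation to the load event
var pageLoadTime = t.loadEventEnd - t.navigationStart;

// DOMContentLoaded: from the start of navigation to the end of the
// DOMContentLoaded handlers
var domContentLoadedTime = t.domContentLoadedEventEnd - t.navigationStart;

// First paint: modern browsers expose this via the Paint Timing API
var firstPaint = performance.getEntriesByType("paint")
    .filter(function(entry){ return entry.name === "first-paint"; })[0];
var firstPaintTime = firstPaint ? firstPaint.startTime : null;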

First, let’s look at total page load times for each scenario (Fig. 1).

[Stacked bar chart of Total Page Load Times for each of the three scenarios]
Fig. 1: Total Page Load Times test results indicate a significant difference in performance for the HTTP/1 Unoptimized scenario.

This graph illustrates a trend that you’ll see later on. The scenarios optimized for HTTP/1 and HTTP/2 demonstrate similar levels of performance when running on their respective versions of the protocol. The slowest scenario runs on HTTP/1, yet has been optimized for HTTP/2.

In these graphs, we’re plotting two figures: the average and the 95th percentile (meaning load times fall below this value 95% of the time). What this data tells me is that if I moved my site to HTTP/2 but didn’t optimize for HTTP/2-incompatible browsers, page loads for that segment of users would be roughly 10% slower 95% of the time. And in the slowest 5% of cases, page loading might be 15% slower.
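
To make those two figures concrete, here’s a quick sketch of how an average and a nearest-rank 95th percentile are derived from a set of run times. The sample values are made up, and sitespeed.io reports these numbers for you; this is only to illustrate what they mean:

// Ten hypothetical page load times, in milliseconds
var loadTimes = [1100, 1180, 1250, 1310, 1400, 1475, 1620, 1800, 2150, 2600];

// The average summarizes every run
var average = loadTimes.reduce(function(sum, t){ return sum + t; }, 0) / loadTimes.length;

// Nearest-rank 95th percentile: 95% of runs finished at or below this value
var sorted = loadTimes.slice().sort(function(a, b){ return a - b; });
var p95 = sorted[Math.ceil(0.95 * sorted.length) - 1];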

For a small and uncomplicated site such as my blog, this may seem insignificant, but it really isn’t. What if my site were experiencing heavy traffic? Changing how I deliver content to be more inclusive of users with limited capabilities could be the difference between a user who sticks around and one who decides to leave after waiting too long.

Let’s take a look at how long it takes for the DOM to be ready in each scenario (Fig. 2).

[Stacked bar chart of DOMContentLoaded Time for each of the three scenarios]
Fig. 2: DOMContentLoaded Time test results indicate a significant difference in performance for the HTTP/1 Unoptimized scenario.

Again, we see similar levels of performance when a site is optimized for its particular protocol. For the scenario in which the site is optimized for HTTP/2 but runs on HTTP/1, the DOMContentLoaded event fires 10% more slowly than in either of the “optimal” scenarios 95% of the time. The other 5% of the time, it can be as much as 26% slower.

What about time to first paint? This is arguably the most important performance metric because it’s the first point at which the user actually sees your website. What happens to this metric when we optimize our content delivery strategy for each protocol version? (Fig. 3)

[Stacked bar chart of First Paint Time for each of the three scenarios]
Fig. 3: First Paint Time test results indicate a significant difference in performance for the HTTP/1 Unoptimized scenario.

The trend persists yet again. In the HTTP/1 Unoptimized scenario, paint time is 10% longer than either of the optimized scenarios 95% of the time—and nearly twice that long during the other 5%.

A 10–20% delay in page paint time is a serious concern. If you had the ability to speed up rendering for a significant segment of your audience, wouldn’t you?

Another way to improve this metric for HTTP/1 users is to inline critical CSS. That’s an option for me, since my site’s entire CSS is only 2.2 KB after Brotli compression. On HTTP/2 sites, you can achieve a performance benefit similar to inlining by using the protocol’s Server Push feature.
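
For instance, on a Node server, a push might look something like the following sketch, which uses Node’s built-in http2 module (available since Node 8.4). The file paths and certificate names here are placeholders, not anything from my site:

var http2 = require("http2");
var fs = require("fs");

var server = http2.createSecureServer({
    key: fs.readFileSync("server.key"),   // placeholder key file
    cert: fs.readFileSync("server.crt")   // placeholder certificate
});

server.on("stream", function(stream, headers){
    if (headers[":path"] === "/") {
        // Push the stylesheet alongside the HTML that references it,
        // so it arrives before the browser even parses the <link> tag
        stream.pushStream({ ":path": "/css/styles.css" }, function(err, pushStream){
            if (!err) {
                pushStream.respondWithFile("dist/css/styles.css", {
                    "content-type": "text/css"
                });
            }
        });

        stream.respondWithFile("dist/index.html", {
            "content-type": "text/html"
        });
    }
});

server.listen(443);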

Now that we’ve examined the performance implications of tailoring our content delivery to the user’s HTTP protocol version, let’s learn how to automatically generate optimized assets for both segments of your users.

Build tools can help#section5

You’re busy enough as it is. I get it. Maintaining two sets of assets optimized for two different types of users sounds like a huge pain. But this is where a build tool like gulp comes into the picture.

If you’re using gulp (or other automation tools like Grunt or webpack), chances are you’re already automating tasks like script minification (or uglification, depending on how aggressive your optimizations are). Below is a generalized example of how you could use the gulp-uglify and gulp-concat plugins to uglify files, and then concatenate those separate uglified assets into a single file.

var gulp = require("gulp"),
    uglify = require("gulp-uglify"),
    concat = require("gulp-concat");

// Uglify each script in src/js and write the results to dist/js
gulp.task("uglify", function(){
    var src = "src/js/*.js",
        dest = "dist/js";

    return gulp.src(src)
        .pipe(uglify())
        .pipe(gulp.dest(dest));
});

// Concatenate the uglified scripts into a single bundle. The bundle
// itself is excluded from the glob so repeated runs don't fold the
// old bundle into the new one.
gulp.task("concat", ["uglify"], function(){
    var src = ["dist/js/*.js", "!dist/js/script-bundle.js"],
        dest = "dist/js";

    return gulp.src(src)
        .pipe(concat("script-bundle.js"))
        .pipe(gulp.dest(dest));
});

In this example, all scripts in the src/js directory are uglified by the uglify task, and each processed script is output separately to dist/js. Because the concat task lists uglify as a dependency, it then kicks in and bundles all of those scripts into a single file named script-bundle.js. You can then use the protocol detection technique shown in part one of this series to change which scripts you serve based on the visitor’s protocol version.
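
As a refresher, the serving side of that switch might look something like this sketch of an Express route handler, assuming your server handles HTTP/2 traffic via something like the spdy package. The script file names are placeholders; req.httpVersion comes from Node’s underlying HTTP message object and reads "2.0" on HTTP/2 connections:

var express = require("express");
var app = express();

app.get("/", function(req, res){
    // Node reports the protocol version on each request;
    // "2.0" indicates an HTTP/2 connection
    var isHttp2 = parseFloat(req.httpVersion) >= 2;

    res.render("index", {
        // Granular scripts for HTTP/2 users; the single bundle for HTTP/1
        scripts: isHttp2
            ? ["/js/menu.js", "/js/carousel.js", "/js/forms.js"]
            : ["/js/script-bundle.js"]
    });
});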

Of course, that’s not the only thing you can do with a build system. You could take the same approach to bundling with your CSS files, or even generate image sprites from separate images with the gulp.spritesmith plugin.
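
The sprite task follows the same pattern as the tasks above. Here’s a sketch based on gulp.spritesmith’s documented usage; the image paths and output names are placeholders:

var spritesmith = require("gulp.spritesmith");

gulp.task("sprite", function(){
    // Combine individual PNGs into one sprite sheet, plus the CSS
    // that maps each source image to its coordinates in the sheet
    var spriteData = gulp.src("src/img/icons/*.png")
        .pipe(spritesmith({
            imgName: "sprite.png",
            cssName: "sprite.css"
        }));

    return spriteData.pipe(gulp.dest("dist/img"));
});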

The takeaway here is that a build system makes it easy to maintain two sets of optimized assets (among many other things, of course). It can be done easily and automatically, freeing you up to focus on development and improving performance for your site’s visitors.

Reflection

We’ve seen how an HTTP/2-optimized site can perform poorly for users with HTTP/2-incompatible browsers.

But why are the capabilities of these users limited? It really depends.

Socioeconomic conditions play a big role. Users tend to buy the quality of device they can afford, so the capabilities of the “average” device vary significantly, especially between developing and developed nations.

Lack of financial resources may also drive users to restricted data plans and browsers like Opera Mini that minimize data usage. Until those browsers support HTTP/2, a significant percentage of users out there may never come on board.

Updating phone applications can also be problematic for someone on a restricted data plan. Immediate adoption can’t be expected, and some may forgo browser updates to preserve the remaining data allotment on their plans. And in developing nations, internet infrastructure quality lags significantly behind that of the developed world.

We can’t change the behavior of every user to suit our development preferences. What we can do, though, is identify the audience segment that can’t support HTTP/2, then make an informed decision about whether it’s worth the effort to adapt how we deliver content to them. If a sizeable portion of the audience uses HTTP/2-incompatible browsers, we can adapt our delivery to give them an optimized experience and a leg up, while still providing performance advantages for users on HTTP/2-capable browsers.

There are many people out there who face significant challenges while browsing the web. Before we fully embrace new technologies, let’s figure out how to do so without leaving a significant segment of our audience in the lurch. The reward in what we do comes from providing solutions that work for everyone. Let’s adopt new technologies responsibly. It behooves us all to act with care.

Further reading

Learn more about boosting site performance with Jeremy’s book Web Performance in Action. Get 39% off with code ALAWPA.

About the Author


Jeremy Wagner

Jeremy Wagner is more of a writer than a web developer, but he does both anyway. On top of writing Responsible JavaScript and making websites for longer than he thought probable, he has written for A List Apart, CSS-Tricks, and Smashing Magazine. Jeremy will someday relocate to the remote wilderness where sand has not yet been taught to think. Until then, he continues to reside in Minnesota’s Twin Cities with his wife and stepdaughters, bemoaning the existence of strip malls.
