Key Books: High Performance Browser Networking: Chapter 10 Summary

One of the important books to study for front-end performance is Ilya Grigorik’s High Performance Browser Networking, from O’Reilly.
I’m going to summarize some of the key chapters. If you’re interested, pick it up!

Primer on web performance

We start with a quick review of where the web has been: hypertext documents, then web pages.
We’ve lived in the web page world for many years now, and even web apps are more the norm than the novelty they were ten years ago.

Because of these new paradigms, developers need to carefully consider how they measure performance. It’s no longer good enough to look at page load time: with a front-end framework, for example, onload can fire before elements on the page are present and usable.

Thus, Grigorik points out, you need to ask different questions about your page. When is the call to action visible? How often is it clicked on? When are users first interacting with the page? Figure out what’s most important to track so that it can be optimized. As he puts it:

The success of your performance and optimization strategy is directly correlated to your ability to define and iterate on application-specific benchmarks and criteria. Nothing beats application-specific knowledge and measurements, especially when linked to bottom-line goals and metrics of your business.

In other words, if you measure it, you can optimize it (or delegate others to optimize it). Vague goals of “make it better” or “make it faster” are difficult to achieve, if they can be achieved at all.
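
For example, if the metric that matters to your business is when the call to action becomes visible, the User Timing API can record it. Here’s a minimal sketch (my own illustration, not the book’s; the mark name is made up):

```typescript
// Sketch: record an application-specific benchmark with the User Timing API.
// "cta-visible" is a placeholder name for this example.

// In your rendering code, mark the moment the call to action is on screen:
performance.mark("cta-visible");

// Later (or in your analytics code), read the mark back. A mark's startTime is
// measured from the navigation's time origin, so it is already "time to CTA":
const [ctaMark] = performance.getEntriesByName("cta-visible", "mark");
console.log(`Call to action visible after ${ctaMark.startTime.toFixed(0)} ms`);
```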

Grigorik takes a brief detour to remind the reader of the browser’s critical rendering path: how raw HTML and CSS are parsed into the DOM and CSSOM, combined into the render tree, and then laid out and painted (don’t forget about the JavaScript!).

He notes that modern web pages and applications have been getting steadily bigger.

User perceived performance

As we judge how well our applications perform, we need to keep in mind how users perceive and react to delays in different interactions.

Delay          User reaction
0-100 ms       Interaction feels instantaneous
100-300 ms     User perceives a slight delay
300-1,000 ms   “Things are working”
1,000+ ms      Mental context switch
10,000+ ms     Task abandoned

It’s important to make your application as fast as possible: on a mobile network, the DNS lookup and TCP handshake alone can take 500+ ms, which leaves very little budget before users hit the 1,000 ms mental context switch.
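
To see how much of your own budget those steps consume, the Navigation Timing API (more on it in the RUM section below) exposes the relevant timestamps. A rough sketch:

```typescript
// Sketch: how long DNS and the TCP (+TLS) handshake took for this page load,
// read from the Navigation Timing Level 2 entry.
const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
if (nav) {
  const dnsMs = nav.domainLookupEnd - nav.domainLookupStart;
  const connectMs = nav.connectEnd - nav.connectStart; // includes TLS when applicable
  console.log(`DNS: ${dnsMs.toFixed(0)} ms, connect: ${connectMs.toFixed(0)} ms`);
}
```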

Waterfalls

Analyzing resource waterfalls is an important tool when working on web performance. Grigorik fires up WebPageTest and walks through the waterfall for the Yahoo! homepage. He notes how much of the main document’s time is spent purely on network overhead before any content arrives. Once it does, the page fires off further requests and proceeds to load. He also points out that some assets continue to download after page load is complete, which is a good way to handle ads, trackers, and content below the fold. One interesting observation is that bandwidth demand isn’t constant over the course of the load. That’s because bandwidth usually isn’t the limiting factor in how quickly we can browse the web: latency is.
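
You don’t need WebPageTest to get a rough sense of your own waterfall: the Resource Timing API exposes per-resource start times and durations. A quick sketch (my own, not from the book) that dumps a crude text waterfall to the console:

```typescript
// Sketch: a crude text "waterfall" built from the Resource Timing API.
const resources = performance.getEntriesByType("resource") as PerformanceResourceTiming[];
for (const r of resources) {
  console.log(
    `${r.startTime.toFixed(0).padStart(6)} ms  ` +
    `+${r.duration.toFixed(0).padStart(5)} ms  ${r.name}`
  );
}
```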

Bandwidth doesn’t matter (much)

Surprisingly, despite all the ISP marketing campaigns that push bandwidth as the selling point, bandwidth isn’t as critical to web browsing speed as latency is. Page load time falls roughly linearly as latency falls. Increasing bandwidth, on the other hand, yields significant gains only at first (say, going from 1 Mbps to 2 Mbps); after that, the gains shrink to the point where there is little perceptible difference between 9 Mbps and 10 Mbps.

While some activities are bandwidth-limited (like streaming Netflix or downloading a game from Steam), most web browsing is latency-limited.
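
A quick back-of-envelope calculation (my numbers, purely illustrative, not the book’s) shows why: typical page assets are small enough that round trips, not transfer time, dominate.

```typescript
// Sketch: rough cost of fetching one small asset on a hypothetical mobile connection.
// All numbers here are illustrative assumptions, not measurements from the book.
const bandwidthMbps = 5;   // assumed downlink
const rttMs = 100;         // assumed round-trip time
const assetKB = 50;        // a typical script or image

const transferMs = ((assetKB * 8) / (bandwidthMbps * 1000)) * 1000; // ~80 ms of transfer
const setupMs = 2 * rttMs; // DNS lookup + TCP handshake on a cold connection, roughly

console.log(`transfer: ${transferMs.toFixed(0)} ms, connection setup: ${setupMs} ms`);
// Doubling bandwidth only halves the ~80 ms of transfer; halving latency saves ~100 ms.
```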

Synthetic vs. RUM

Another tool available to a performance optimizer is synthetic testing: testing performed in a controlled environment (say, opening your page in a headless browser and logging the time it takes to load). This provides a good baseline and helps you identify performance regressions and alert on them.
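
The book doesn’t prescribe a particular tool, but a synthetic check might look something like this sketch using headless Chrome via Puppeteer; the URL and the 3,000 ms budget are placeholders:

```typescript
// Sketch of a synthetic test: load a page in headless Chrome and log its load time.
import puppeteer from "puppeteer";

async function main() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto("https://example.com/", { waitUntil: "load" });

  // Read the Navigation Timing entry from inside the page.
  const loadMs = await page.evaluate(() => {
    const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
    return nav.loadEventEnd;
  });

  console.log(`load event completed after ${loadMs.toFixed(0)} ms`);
  if (loadMs > 3000) process.exitCode = 1; // flag a regression against an arbitrary budget

  await browser.close();
}

main();
```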

Synthetic testing has a drawback: it’s hard to make a test interact with a browser the same way a user would. Not only that, real users have cookies, trackers, and other junk that might make your page slower. Thus, it’s important that we see what’s actually going on for them.

Enter RUM: real user metrics. By using RUM, you can see what your actual users are experiencing on your site. The W3C has standardized a set of measurements that browsers report through the Navigation Timing API; you can capture what the API reports and send it back to your logging system. User Timing and Resource Timing are also available. Now you can measure what you need to, so that you can optimize it.
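
A minimal RUM sketch (the /rum endpoint and the payload shape are assumptions for this example) might read the navigation entry after the page loads and beacon it back:

```typescript
// Sketch: report basic RUM data to a logging endpoint.
window.addEventListener("load", () => {
  // Defer a tick so loadEventEnd is populated (it is still 0 inside the load handler).
  setTimeout(() => {
    const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
    if (!nav) return;
    const payload = {
      url: location.href,
      dns: nav.domainLookupEnd - nav.domainLookupStart,
      connect: nav.connectEnd - nav.connectStart,
      ttfb: nav.responseStart - nav.requestStart,
      load: nav.loadEventEnd,
    };
    navigator.sendBeacon("/rum", JSON.stringify(payload));
  }, 0);
});
```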

Better browsers

Web browsers are continually getting better and using different techniques to deliver content to users faster. Why do web developers need to know that? “. . . we can assist the browser and help it do an even better job at accelerating our applications.” Most browsers use four techniques:

  • Resource pre-fetching and prioritization
  • DNS pre-resolve
  • TCP pre-connect
  • Page pre-rendering

To assist the browser in rendering as quickly as possible, Grigorik recommends delivering CSS as early as possible and deferring non-essential JavaScript so that it doesn’t delay construction of the CSSOM and DOM. CSS and JavaScript resources should be discoverable as early as possible, and flushing the HTML document periodically (rather than waiting for the whole response to be ready) also helps. We can further assist the browser with hints such as dns-prefetch, subresource, prefetch, and prerender.
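
These hints are just <link> elements. As an illustration (the host and asset URLs are placeholders), you can declare them in the markup ahead of time or inject them from script, for example:

```typescript
// Sketch: adding resource hints from script; hosts and URLs are placeholders.
// Each call is equivalent to a <link rel="..."> tag in the document <head>.
function addHint(rel: string, href: string): void {
  const link = document.createElement("link");
  link.rel = rel;
  link.href = href;
  document.head.appendChild(link);
}

addHint("dns-prefetch", "//cdn.example.com");       // resolve the hostname early
addHint("prefetch", "/assets/next-page-data.json"); // fetch a likely-needed resource
addHint("prerender", "/next-page.html");            // speculatively render a next page
// (The book also mentions "subresource", which only ever shipped in Chrome.)
```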

Join me next time as we take a look at chapter 14, “Primer on Browser Networking.”
