Modern Image Formats: JPEG XR

One of the suggestions you might see when you run a Chrome DevTools Lighthouse audit is that your site should serve images in a modern format such as WebP, JPEG XR, or JPEG 2000. What are these formats? What’s the current state of browser support for them? Let’s dive in and take a look.

JPEG XR

JPEG XR (which stands for JPEG eXtended Range) was originally developed by Microsoft as PTC (Progressive Transform Codec), an alternative to JPEG 2000. Compared to JPEG, JPEG 2000 is computationally intensive: it appears that Microsoft wanted an alternative that would work well for textures on the Xbox, which had less processing power.

For web applications, it supports transparency and has a good compression algorithm (delivering similar quality to JPEG at smaller file sizes).

Should you be using JPEG XR? That’s a complex question. It’s only supported in Microsoft browsers: IE 9+ and Edge. It can pay off if you have the time and/or tooling to generate all the modern formats for the different browsers you need to support. If you’re tight on time, just use a properly compressed progressive JPEG (quality 85 is the commonly recommended setting).
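If you do generate multiple formats, the usual approach is to let the server choose one based on what the browser advertises in its Accept header. Here’s a minimal sketch of that negotiation in Python; the preference order and substring matching are simplifications (real Accept headers carry q-values, and not every browser advertises every format it can decode), and the function name is mine:

```python
def pick_image_format(accept_header):
    """Pick the best image format the client claims to support.

    Simplified sketch: real Accept headers include q-values, and
    browsers differ in which formats they advertise.
    """
    preferences = [
        ("jxr", "image/jxr"),    # Microsoft browsers
        ("webp", "image/webp"),  # Chrome and Opera (at the time)
        ("jp2", "image/jp2"),    # Safari's JPEG 2000
    ]
    for fmt, mime in preferences:
        if mime in accept_header:
            return fmt
    return "jpeg"  # the fallback every browser understands
```

A build step would then emit the .jxr, .webp, .jp2, and .jpg variants for the server (or a picture element with picturefill, as the useragentman link below describes) to choose between.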

For more info about JPEG XR, check out these sources:

https://en.wikipedia.org/wiki/JPEG_XR

http://www.useragentman.com/blog/2015/01/14/using-webp-jpeg2000-jpegxr-apng-now-with-picturefill-and-modernizr/

https://calendar.perfplanet.com/2013/browser-specific-image-formats/

https://www.microsoft.com/en-us/research/project/jpeg-xr/

High Performance Images by Colin Bendell, Tim Kadlec, Yoav Weiss, Guy Podjarny, Nick Doyle & Mike McCall, from O’Reilly

Path to Perf Summaries: #1 with Lara Hogan

Path to Performance is a podcast with Tim Kadlec and Katie Kovalcin. They interview a guest and discuss how to improve web performance. Because I’ve listened to all the episodes, I thought it would be helpful to organize my thoughts with a summary of each one. I think this will be a value-add to the show notes page. Please let me know in the comments whether you find this useful.

In this episode, Tim and Katie interview Lara Hogan, an engineering manager for performance at Etsy. They talk about how to build a performance culture and how to win influence for performance considerations.

Pre-interview

Tim talks about What Does My Site Cost?, a tool built on top of WebPageTest that shows developers how much their site costs a mobile user to load. The price can also be adjusted by other factors, like percentage of gross national income, to get a better sense of what it’s costing real users.

Perf Audit is a site that publishes bite-sized performance audits online. Paul Irish published an in-depth performance audit that uses Chrome developer tools extensively.

History of performance at Etsy

Before Lara took the position, another engineer, Seth Walker, started promoting performance by putting data in front of engineers: he let them know how well parts of the site were performing. This led to greater performance buy-in from the team. Lara mentioned that there’s also buy-in from upper management: the CEO had previously been the CTO and had a strong desire to see a performant site. This set Lara up for the work that she is doing now.

How performance is promoted at Etsy

Lara talked about how important it is to create a performance culture: it won’t stick if it’s just all top-down, “The CEO said to do it.”

One way to push performance forward is to use ego (and WebPageTest). Lara points out that for most audiences waterfalls are boring and filmstrips (images of the site loading over time) are okay, but videos of the site loading can create a visceral reaction to slowness. Show a competitor’s loading video against your own: this gets people invested in speeding up their part of the site.

Find stats that the business cares about and equip them to measure performance changes against those metrics. Check what the performance experience is for users in other countries.

Another way to cultivate this culture is to celebrate performance wins. Find a person who has done performance work, highlight them, and cheer on their success. At Etsy, this is done by putting out a picture of them with the performance group looking ecstatic.

Celebrating performance brings the benefits of being a cheerleader for performance as opposed to an enforcer or janitor. A culture change will be more effective with positivity.

Design and performance

Lara wrote Designing for Performance, seeking to put out a concise resource for people who don’t need to know the ins and outs of the browser’s render path, but still want to put together performant solutions. At Etsy, she’s continually educating the designers about performance-related topics at a level that’s relevant to them. She doesn’t need them to know why one image format is better in certain situations, just that it is a consideration and where the resources are to make that decision.

Keeping performance going

At Etsy, they monitor production real user metrics (RUM) on the site for performance regressions. When the system discovers a regression, it alerts the performance group. The performance group reaches out to the owning team and offers consulting help; they still expect the owning team to do the work and fix the problem. Etsy also has synthetic tests that run over time to track long-term trends.

One longer-term goal is to monitor and alert for improvements as well, so that wins can be celebrated.

Continuous improvement

One challenge is setting SLAs that don’t punish teams for success: Lara gives the example of a team that got a page’s back-end time down from 800 ms to 400 ms. Do you set a new SLA at the new normal or not? Things will get slower, so probably not. Etsy does have different SLAs for mobile and desktop.

They’re also actively looking to improve the performance of Etsy’s native apps. Lara has proposed some ways to objectively measure how performant a native app is.

Another way to improve things is to make your own team feel the pain of users on slow networks: throttling the speed of the office network to something lower, for example.

Wrapping up

Lara stated that she’s very thankful for the freedom she has at Etsy to experiment, and for all the tools and infrastructure. She and the rest of the team at Etsy give back through their blog and their open source projects. She encouraged people to check out her book, noting that the proceeds go to help girls learn to code.

I encourage you to check out the episode as well as the links and transcript available on Path to Perf’s site!

Key Books: High Performance Browser Networking: Chapter 14 Summary

One of the important books to study for front end performance is Ilya Grigorik’s High Performance Browser Networking from O’Reilly.
I’m going to summarize some of the key chapters. If you’re interested, pick it up!

The browser: more than a socket manager

Modern browsers do a bunch of things: “. . . process management, security sandboxes, layers of optimization caches, JavaScript VMs, graphics rendering and GPU pipelines, storage, sensors, audio and video, networking, and much more.” Fortunately, when designing web applications, we don’t need to see or know about all these functions. However, knowing how to optimize for the browser can lead to performance improvements.

The browser organizes sockets into pools and can reuse connections. It can then automate TCP reuse, which has performance benefits. The browser also keeps the user safe by sandboxing connections, enforcing connection limits, and warning users about self-signed certificates. It further protects users by only giving applications access to the APIs and resources they need.
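The socket pooling described above can be sketched as a toy model (this is not how any real browser is implemented; real browsers also enforce per-host connection limits and idle timeouts, which are omitted here):

```python
class ConnectionPool:
    """Toy model of the browser's per-host socket reuse.

    A checkout first tries to reuse an idle connection to the same
    host (skipping a fresh TCP handshake); only when none is idle
    does it "open" a new one.
    """

    def __init__(self):
        self.idle = {}    # host -> list of reusable connections
        self.opened = 0   # how many real handshakes we paid for

    def checkout(self, host):
        if self.idle.get(host):
            return self.idle[host].pop()  # reuse: no handshake cost
        self.opened += 1
        return f"conn-{self.opened}-{host}"

    def checkin(self, host, conn):
        # Request finished; keep the connection warm for reuse.
        self.idle.setdefault(host, []).append(conn)
```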

Resource caching

The best and fastest request is a request not made.

With that quote in mind, it’s important to make sure resources are cached so that requests aren’t made unnecessarily. Browsers check whether a resource is in the cache and whether the cached copy needs to be updated. The browser also makes sure that a user’s cache doesn’t become too bloated. What steps can you take as the application developer? From chapter 13:

Whenever possible, you should specify an explicit cache lifetime for each resource, which allows the client to use a local copy, instead of re-requesting the same object all the time. Similarly, specify a validation mechanism to allow the client to check if the expired resource has been updated: if the resource has not changed, we can eliminate the data transfer.
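As a rough sketch of what that advice looks like server-side, here is a hypothetical, simplified handler (not a real framework API): an explicit Cache-Control lifetime plus an ETag validator lets the client skip the transfer entirely when nothing has changed.

```python
def respond(request_headers, resource_etag, body):
    """Simplified response handler: explicit lifetime + validation."""
    # Validation: if the client's cached copy still matches, skip the body.
    if request_headers.get("If-None-Match") == resource_etag:
        return 304, {}, b""  # "Not Modified": no data transfer needed
    # Otherwise send the resource with an explicit cache lifetime
    # (max-age) and a validator (ETag) for the next check.
    headers = {
        "Cache-Control": "max-age=86400",  # client may reuse for one day
        "ETag": resource_etag,
    }
    return 200, headers, body
```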

Browsers also keep track of cookies and sessions so you don’t have to re-authenticate users unnecessarily.

Browser APIs

Grigorik closes the chapter with a rundown of the different features and capabilities of XHR, SSE, and WebSocket. He points out that no one of them is better than the others: they all have their place in a complex application, and the key is to learn what each can do and use the right one at the right time.

Key Books: High Performance Browser Networking: Chapter 10 Summary

One of the important books to study for front end performance is Ilya Grigorik’s High Performance Browser Networking from O’Reilly.
I’m going to summarize some of the key chapters. If you’re interested, pick it up!

Primer on web performance

We start with a quick review of where the web has been: hypertext pages.
We’ve been in the web page world for many years now and even web apps are more the norm than the novelty that they were ten years ago.

Because of these new paradigms, developers need to carefully consider how they measure performance. It’s no longer good enough to look at page load time: with a front-end framework, for example, onload can fire before elements on the page are present and usable.

Thus, Grigorik points out, you need to ask different questions about your page. When is the call to action visible? How often is it clicked on? When are users first interacting with the page? Figure out what’s most important to track so that it can be optimized. As he puts it:

The success of your performance and optimization strategy is directly correlated to your ability to define and iterate on application-specific benchmarks and criteria. Nothing beats application-specific knowledge and measurements, especially when linked to bottom-line goals and metrics of your business.

In other words, if you measure it, you can optimize it (or delegate others to optimize it). Vague goals of “make it better” or “make it faster” are difficult to achieve, if they can be achieved at all.

Grigorik takes a brief detour to remind the reader of the critical rendering path of the browser: how raw HTML and CSS goes to a render tree to paint and all the steps in between (don’t forget about the JavaScript!).

He notes that modern web pages and applications have been getting steadily bigger.

User perceived performance

As we look to judge how good our performance is in our applications, we need to keep in mind how our users feel and view delays to different interactions.

Delay          User reaction
0-100 ms       Interaction feels instantaneous
100-300 ms     User perceives a slight delay
300-1,000 ms   “Things are working”
1,000+ ms      Mental context switch
10,000+ ms     Task abandoned

It’s important to make your application as fast as possible: on a mobile network, just the DNS lookup and the TCP handshake can take 500+ ms, leaving very little budget before the 1,000 ms mental context switch sets in.
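A quick back-of-envelope shows how fast that budget disappears (the 200 ms RTT is an illustrative assumption for a mobile network, not a measurement):

```python
# Back-of-envelope time-to-first-byte over a mobile network.
rtt_ms = 200  # assumed mobile round-trip time

dns_lookup    = 1 * rtt_ms  # resolve the hostname
tcp_handshake = 1 * rtt_ms  # SYN / SYN-ACK / ACK
http_request  = 1 * rtt_ms  # request out, first byte back

time_to_first_byte = dns_lookup + tcp_handshake + http_request
print(time_to_first_byte)  # 600: over half the 1,000 ms budget is gone
```

And that is before a single byte of HTML has been parsed or rendered.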

Waterfalls

Analyzing resource waterfalls is an important technique when working on web performance. Grigorik fires up WebPageTest and shows the waterfall for the Yahoo! homepage. He notes how the main file takes time just going through the network processes; it then fires requests and the page proceeds to load. He also notes that page load completes and then some more assets download: this is a good way to handle ads, trackers, and things below the fold. One interesting observation is that the bandwidth demand isn’t constant. That’s because bandwidth usually isn’t the limiting factor for web browsing: latency is.

Bandwidth doesn’t matter (much)

Surprisingly, despite all the ISP marketing campaigns that push bandwidth as the selling point, bandwidth isn’t as critical to web browsing speed as latency is. Latency has a direct, linear correlation with page load time. While there can be significant gains at first from increasing bandwidth (say, from 1 Mbps to 2 Mbps), after that the gains diminish to where there is little perceptible difference between 9 Mbps and 10 Mbps.

While some activities are bandwidth-limited (like streaming Netflix or downloading a game from Steam), most web browsing is latency-limited.
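A toy model makes the difference concrete. The page size, round-trip count, and RTT below are illustrative assumptions, not measurements:

```python
def load_time_ms(page_kb, round_trips, rtt_ms, bandwidth_mbps):
    """Toy model: total time = round-trip cost + raw transfer cost."""
    transfer_ms = page_kb * 8 / (bandwidth_mbps * 1000) * 1000
    return round_trips * rtt_ms + transfer_ms

# A 1,000 KB page needing 10 round trips to fully load:
base     = load_time_ms(1000, 10, rtt_ms=100, bandwidth_mbps=9)
more_bw  = load_time_ms(1000, 10, rtt_ms=100, bandwidth_mbps=10)
less_rtt = load_time_ms(1000, 10, rtt_ms=50, bandwidth_mbps=9)

print(round(base - more_bw))   # ~89 ms saved by the extra Mbps
print(round(base - less_rtt))  # 500 ms saved by halving latency
```

Under these assumptions, the jump from 9 to 10 Mbps saves under 100 ms, while halving latency saves 500 ms, which is the chapter's point in miniature.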

Synthetic vs. RUM

Another tool available to a performance optimizer is synthetic testing. This testing is performed in a controlled environment (say, opening your page in a headless browser and logging the time it takes). This provides a good baseline and helps identify and alert when performance regressions occur.

Synthetic testing has a drawback: it’s hard to make a test interact with a page the same way a user would. Not only that, real users have cookies, trackers, and other junk that might make your page slower. Thus, it’s important that we see what’s actually going on for them.

Enter RUM: real user metrics. With RUM, you can see what your actual users are experiencing on your site. The W3C has standardized the measurements that browsers report through the Navigation Timing API. You can capture what the API reports and send it back to your logging system. User Timing and Resource Timing are also available. Now you can measure what you need to so that you can optimize it.
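Once those timings are beaconed back, how you aggregate them matters: averages hide the slow tail, so percentiles are the usual choice. A small, dependency-free sketch (the sample values are made up):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: a tiny aggregation for RUM samples."""
    ordered = sorted(samples)
    rank = math.ceil(len(ordered) * p / 100)
    return ordered[max(rank - 1, 0)]

# Hypothetical page-load times (ms) beaconed back from real users:
rum_samples = [320, 410, 390, 2200, 450, 380, 5100, 400, 430, 360]

median = percentile(rum_samples, 50)  # what a typical user sees
p95 = percentile(rum_samples, 95)     # what your unluckiest users see
```

Here the median is a healthy 400 ms, but the 95th percentile reveals multi-second loads that an average would have blurred away.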

Better browsers

Web browsers are continually getting better and using different techniques to deliver content to users faster. Why do web developers need to know that? “. . . we can assist the browser and help it do an even better job at accelerating our applications.” Most browsers use four techniques:

  • Resource pre-fetching and prioritization
  • DNS pre-resolve
  • TCP pre-connect
  • Page pre-rendering

To assist the browser in rendering as quickly as possible, Grigorik recommends delivering CSS as early as possible and deferring non-essential JavaScript so that it doesn’t delay the construction of the CSSOM and DOM; to speed that construction, CSS and JavaScript should be discoverable as early as possible. The HTML document should also be flushed periodically for best performance. We can further aid the browser with hints like dns-prefetch, subresource, prefetch, and prerender.

Join me next time as we take a look at chapter 14, “Primer on Browser Networking.”

Key Books: High Performance Browser Networking: Chapter 1 Summary

One of the important books to study for front end performance is Ilya Grigorik’s High Performance Browser Networking from O’Reilly. I’m going to summarize some of the key chapters. If you’re interested, pick it up!

Speed is Key

Users expect speed from web applications. The faster your site, the more users you retain and the better your conversion rates. Users gravitate toward good sites that are fast. Treat performance as a feature and you will be rewarded.

There are two things that dictate network speed: bandwidth and latency. Bandwidth is the amount of data you can put through the pipe at one time; latency is the time it takes that data to get from one point to another. Grigorik then lists some common types of delays that occur when transmitting data over the internet, from propagation to transmission.

A Light Issue

Sadly, there is a hard cap on the rate at which we can get information from one place to another: the speed of light. We don’t transmit data over the internet at the speed of light, but (through marvelous engineering) we’ve gotten within a small constant factor of it. This is an issue because the round-trip time (RTT) to distant places can be quite long in fiber (Grigorik cites New York to Sydney at 160 ms). Add the processing time, the trip from the main line to the user’s computer, etc., and long trips can easily take more than 300 ms, the point at which the user perceives things as sluggish.
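That 160 ms figure is easy to sanity-check with back-of-envelope arithmetic (the distance and the refractive-index slowdown below are approximations):

```python
# Sanity-checking the New York -> Sydney round trip over fiber.
distance_km = 15_993                   # rough great-circle distance
light_in_vacuum_kms = 300_000          # speed of light, km/s
light_in_fiber_kms = light_in_vacuum_kms / 1.5  # fiber index ~1.5

one_way_ms = distance_km / light_in_fiber_kms * 1000
rtt_ms = 2 * one_way_ms
print(round(rtt_ms))  # 160, matching Grigorik's figure
```

And that is the best case: real packets rarely travel the great-circle route, so measured RTTs are typically worse.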

This is why CDNs (content delivery networks) are key, as they bring content closer to your users, reducing RTT and thus increasing overall speed.

Unfortunately, for many users, distance isn’t the only thing adding to RTT. Most users deal with a large amount of last-mile latency as their traffic makes many hops just to reach their ISP’s main router. You can see this yourself with traceroute foo.com. Grigorik points out that, “As an end user, if you are looking to improve your web browsing speeds, low latency is worth optimizing for when picking a local ISP.”

Edgy Topic

Grigorik ends the chapter by discussing the issues at the edge of the network, the “last mile”: customers have widely different setups, and there are many hops before traffic reaches the main routers. He encourages his readers to go through the rest of the book so that they can give their customers the best experience possible.