Examining the Benefits of Progressive Enhancement

Doug Avery, Former Senior Developer

What does progressive enhancement do, exactly? When is it a useful technique?

After some back-and-forth on our blog this week, we’ve been discussing Progressive Enhancement a lot here at Viget. PE is commonly touted as a responsibility, the “right” way to do things, and a safe development practice. It's the way I learned to build websites, and the way a lot of up-and-coming developers are learning, too.

Recently, I've been forced to ask myself the same question I'm about to ask you: what does progressive enhancement actually do? The more scrutiny it's under, the less it appeals to me as a design strategy. I’d like to poke at a few myths about PE and statements I've heard on the topic.

(Note that in this article, I’m using the term PE to refer to the particular idea of HTML-first design that JS features then “enhance.”)


It enables users who disable JS to access your site

This one's true, but by the admission of PE advocates, PE isn’t for these users anyway. We can drop this as a premise entirely, and while we’re at it, we can stop disabling JS as a way to point out how sites fail without it. (Looking at you, sighjavascript.tumblr.com) Why? See next paragraph.

You can test your site for failure by disabling JS

Wait, I thought we just said no one has JS disabled? Testing for JS failure is absolutely not done by disabling JS, just like stress testing your database isn’t done by turning it off. If you’re serious about seeing how your JS responds to failure, write integration tests, install monitoring like Sentry, and poke at it with tools like Gremlins.

JavaScript is a programming language that, like any programming language, can fail in thousands of ways, but one of the least likely ways is for it to just not be there. Testing sites with JS off isn't a good use of your time.
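To make “poking at it” concrete, here’s a minimal sketch using the classic marmelab/gremlins.js API (loaded via a script tag, so gremlins is a global; the error listener is just for visibility while you watch it run):

```js
// Unleash a horde of random clickers, scrollers, and typers against the
// page. Classic marmelab/gremlins.js API: create a horde, unleash it.
var horde = gremlins.createHorde();
horde.unleash();

// Surface anything the gremlins shake loose. A real setup would send
// these to a monitoring service like Sentry instead of the console.
window.addEventListener('error', function (event) {
  console.error('Found a bug:', event.message,
    'at', event.filename + ':' + event.lineno);
});
```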

It allows features to “fall back” when they fail

In a word, no. Take Chosen, a popular progressive enhancement that converts a <select> into a searchable list.

Chosen has over 500 lines of CoffeeScript, and there are a lot of ways it could fail if some of those lines were changed. Weird ways that break keyboard accessibility, strange ways that prevent values from being selected, unfortunate ways that prevent the “selected” text from updating. However, amidst all these avenues for failure, there’s only one way it would fall back to a <select>: if it fails to run in the first place. The fact that it’s built on an existing HTML element is developer convenience, not responsible design.

The truth is that the worst JS bugs won't cause a feature to fall back. Your AJAX endpoint could be wrong, your click handler could be bound incorrectly, you could experience browser differences with the DOM API or with JS itself, but almost every time, your feature won't revert to the original, un-enhanced version: it'll just break. PE doesn't address this, but testing does.
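To see why, consider a stripped-down version of the enhancement pattern (a sketch, not Chosen’s actual code; the selector and openFancyDropdown() are hypothetical, and let’s say the latter has a bug in it):

```js
// Hide the native <select> and build a custom widget in its place,
// which is roughly what libraries like Chosen do.
var select = document.querySelector('select.enhance-me'); // hypothetical hook
select.style.display = 'none'; // the native control is now invisible

var widget = document.createElement('div');
widget.className = 'fancy-select';
widget.textContent = select.options[select.selectedIndex].text;
select.parentNode.insertBefore(widget, select);

widget.addEventListener('click', function () {
  // If this call throws, or quietly does the wrong thing, the hidden
  // <select> never comes back. The user gets a dead widget, not a fallback.
  openFancyDropdown(widget, select); // hypothetical (and possibly broken)
});
```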

It allows your website to be accessed when JS doesn’t parse

True. This is the case Jason outlined in his post earlier this week. However, JS completely failing isn’t something that “just happens.” Parse errors can be caught by even the simplest test or QA flow, and backed up by monitoring services — they’re certainly not something you have to change your entire design strategy to accommodate.
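For scale: a last-resort reporter takes only a few lines, because a parse error in a same-origin script fires the window "error" event like any other uncaught exception (a sketch; /js-errors is a hypothetical endpoint, and real monitoring services do this far more robustly):

```js
// Catch uncaught exceptions, including parse errors in other
// same-origin scripts, and phone home with the details.
window.addEventListener('error', function (event) {
  var report = JSON.stringify({
    message: event.message,
    source: event.filename,
    line: event.lineno,
    page: location.href
  });
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/js-errors'); // hypothetical collection endpoint
  xhr.setRequestHeader('Content-Type', 'application/json');
  xhr.send(report);
});
```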

It prevents battery drain on mobile devices

If you’re building a rich, interactive feature either from HTML or from a JS framework, you’ll have the same feature at the end, right? That's the whole premise of PE. Battery drain on a mobile device is a serious concern when using a lot of JS, but it’s entirely unrelated to progressive enhancement, because PE doesn’t present a fallback for “low battery” environments.

It allows your website to be accessed when JS doesn’t load

Put this one in the false premise column — there’s simply not a realistic example of JS failing to load while a user experience continues to function beyond the absolute basics. In the case of a network interruption, images won’t load, forms won’t submit, and links won’t work. In the case of a CDN failure, reread my paragraph on catastrophic failures and how to avoid them. Web browsers need networks to function, and when they drop, websites don’t work — it's as simple as that.

It improves accessibility

This is actually on Wikipedia and it’s just not true. Users with low vision or motor impairments, and users of screen readers, don’t disable JS — they use the same features others use, just in slightly different ways. It doesn’t matter if you start with an <input> and enhance it into an AJAX form, or if you build that form from scratch in JS; users with disabilities will see the final result either way.

It improves SEO

Yes, especially compared with serving an entire page with JS. Building your search box, header dropdowns, and video player in an un-PE manner won't have a serious effect on SEO, but serving actual page content as HTML is the easiest way to get search engines to crawl it. There are a few ways to handle this issue:

  • If your pages are behind a login or are control-heavy — as opposed to content-heavy — you don’t really need to handle it. Search engines can't log in, and they're not going to effectively crawl, say, a design-your-own microphone interface.
  • If your JS-rendered pages are public-facing, you can use a service like Prerender to send markup to search engine crawlers.
  • Even better, you can go isomorphic with a system like React (or, soon, Ember), which will serve your rendered output as HTML. Note that this is absolutely not progressive enhancement, as the HTML you serve won’t be functional until JS loads (forms won’t submit, controls won’t be draggable, and only basic behaviors like links will work).
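For the isomorphic route, the server-side half is surprisingly small. Here’s a minimal sketch with Express and ReactDOMServer.renderToString, where App and /bundle.js stand in for your own component and client build:

```js
var express = require('express');
var React = require('react');
var ReactDOMServer = require('react-dom/server');
var App = require('./App'); // placeholder: your top-level component

var app = express();
app.get('*', function (req, res) {
  // Render the same component tree the client bundle will boot from.
  var markup = ReactDOMServer.renderToString(
    React.createElement(App, { url: req.url })
  );
  res.send(
    '<!DOCTYPE html><html><body>' +
    '<div id="root">' + markup + '</div>' +
    '<script src="/bundle.js"></script>' + // placeholder client build
    '</body></html>'
  );
});
app.listen(3000);
```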

It’s as easy as building without progressive enhancement

Take a simple example: infinite scroll like Pinterest's. When implemented properly, it’s a useful and beloved feature. How would I build this in a progressively enhanced manner?

  • Plan and design pagination controls.
  • Build and style the pagination with HTML and CSS.
  • Build routes on the backend for pagination.
  • Send the HTML and CSS to the user.
  • Immediately remove it with JavaScript (hopefully before the user sees!), and initiate the infinite scroll behavior (sketched below).
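That last step amounts to something like this (a sketch; .pagination and fetchNextPage() are hypothetical stand-ins for your own markup and AJAX loader):

```js
document.addEventListener('DOMContentLoaded', function () {
  // Throw away the pagination we planned, designed, built, and shipped.
  var pagination = document.querySelector('.pagination'); // hypothetical selector
  if (pagination) pagination.parentNode.removeChild(pagination);

  // Replace it with infinite scroll.
  window.addEventListener('scroll', function () {
    var nearBottom = window.innerHeight + window.pageYOffset >=
      document.body.offsetHeight - 200; // load when ~200px from the bottom
    if (nearBottom) fetchNextPage(); // hypothetical AJAX loader
  });
});
```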

In the above scenario I’ve involved a developer, a designer, and possibly a project manager and QA specialist in the additional work. I’ve produced a page’s worth of code that I'll have to continue testing and maintaining for the lifetime of the application. I’ve also increased your page weight and the size of your CSS file.

To me, the above scenario is the exact opposite of good business — it incurs a cost (both to the users and the developer) and provides practically no benefit. And this is a simple feature — imagine the effort required to plan and build no-JS fallbacks for something like Hipmunk’s search. 

It serves a usable experience while your users download JS

This is true for content in most cases — you can read this blog post before the JS loads, for example — but the web is no longer a place where average users just read. They buy, post their own content, share, and comment, often using rich interfaces managed by JS.

Before JS executes, progressively enhanced interfaces still "work", which is another touted selling point of the technique. However, this sort of falls down once you present an example — would you want to use a site that shows pagination, and then suddenly, mysteriously, hides it as you move the mouse to click? Would you want to view a giant, scrolling list of images that snaps into a single photo gallery once the gallery code runs? No, in most cases, you wouldn't. I'd go so far as to say that serving a basic interface that gets visibly "enhanced" after a few seconds is a bad experience; worse than just showing a loading spinner. At least with a loading spinner, I know when the final UI is ready to be used — with PE, I'm waiting for it to change suddenly and without warning.
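The spinner approach, for what it’s worth, is tiny (a sketch; .spinner and initApp() are hypothetical):

```js
// Ship a spinner in the server-rendered HTML, boot the real UI, then
// swap. The user never interacts with a half-enhanced in-between state.
document.addEventListener('DOMContentLoaded', function () {
  initApp(); // hypothetical: builds the full interface
  document.querySelector('.spinner').style.display = 'none'; // hypothetical element
});
```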

It's the responsible way to handle the next billion web users worldwide

Obviously, doing right by billions of people is a worthwhile goal, but a goal that progressive enhancement — especially the no-JS kind — doesn’t seem to do much to address. Do these people have JavaScript disabled? Looking back to the start of this article, we can say probably not. Are they in conditions where websites will load, but JavaScript won’t? No. What devices are they on? What are they doing on those devices? Will they visit my site? What language do they even read?

These are difficult questions, but we can't answer them with a catch-all strategy and some generalizations. If we want to prepare websites for the next billion to come online, we should think about translating them to other languages long before we work on fallbacks for extremely particular failures.


So what should we do?

My answer would be: Write JavaScript, ensure it will run for your target users, and test, test, test. Test locally, test in integration, and finally, monitor live sites for errors the same way you'd monitor for server failures. Assess your site for realistic problems — keyboard accessibility, performance, screen size variations, SEO — but don't spend your time worrying about undefined or improbable failures.
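As one example of “test in integration”: drive the enhanced UI the way a user would, and fail the build if the JS-driven behavior breaks. A sketch with selenium-webdriver, where the URL and selectors are hypothetical:

```js
var { Builder, By, until } = require('selenium-webdriver');

(async function () {
  var driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('http://localhost:3000/search'); // hypothetical URL
    await driver.findElement(By.css('.search-input')).sendKeys('microphones');
    await driver.findElement(By.css('.search-submit')).click();

    // Fails if the JS-driven results never appear, which is exactly the
    // kind of breakage that disabling JS would never have caught.
    await driver.wait(until.elementLocated(By.css('.search-results')), 5000);
    console.log('Search enhancement works.');
  } finally {
    await driver.quit();
  }
})();
```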
