Instagram-style filters in HTML5 Canvas

Nate Hunzaker, Former Development Director

Article Categories: #Code, #Front-end Engineering

Instagram made us fall in love with image filters. A well-chosen filter can greatly enhance a photo, accentuating its best parts and softening undesirable qualities.

On a recent project I was charged with achieving a similar effect for an HTML5 canvas-powered image editor. When the user saves their image, the app uploads the canvas bitmap data as a JPEG. This meant that the pixel data had to be altered directly.

So began my journey down the pixel manipulation rabbit hole.

Prior Art #

Late last year Una Kravets gave us Instagram filters in CSS. The project is an excellent example of how powerful CSS has become.

In brief, modern CSS allows one to apply most photo effects directly onto an HTML element.

Specifically, it introduces the filter and mix-blend-mode properties. These features expose the full gamut of options one would expect from a graphical image editor.
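
For illustration, a rule along these lines applies a Toaster-like treatment with nothing but CSS (the class name and exact values here are a sketch of the approach, not necessarily Una's implementation):

.toaster {
  position: relative;
  filter: contrast(1.5) brightness(0.9);
}

.toaster::after {
  content: "";
  position: absolute;
  top: 0; right: 0; bottom: 0; left: 0;
  background: radial-gradient(circle, #804e0f, #3b003b);
  mix-blend-mode: screen;
}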

Bringing Blend Modes to HTML5 Canvas #

"The future is already here — it's just not very evenly distributed."

William Gibson

The canvas 2D context already supports these blend modes! However, support isn't yet ubiquitous. To achieve photo effects in every browser, the canvas image data must be manipulated directly. Fortunately, the browser gives us really good tools for that.

Setting things up #

We need to set up a basic environment before we can begin. We'll load an image, then render the scene once it is ready.

<canvas id="canvas"></canvas>

<script>
  var photo  = new Image();
  var canvas = document.getElementById('canvas');
  var ctx    = canvas.getContext('2d');

  function render () {
    // Scale so that the image fills the container
    var width  = window.innerWidth;
    var scale  = width / photo.naturalWidth;
    var height = photo.naturalHeight * scale;

    canvas.width  = width;
    canvas.height = height;

    ctx.drawImage(photo, 0, 0, width, height);
  }

  photo.onload = render;
  photo.crossOrigin = "Anonymous";
  photo.src = "https://s3.amazonaws.com/share.viget.com/images/viget-works.jpg";
</script>

There's a small catch here that may be unfamiliar. Assuming the proper CORS headers are set, photo.crossOrigin = "Anonymous" enables cross-origin photo painting in HTML5 canvas (another blog post in and of itself).
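
For that to work, the image response itself has to opt in. A minimal sketch of the header the image host would need to send (the wildcard is illustrative; a specific origin also works):

Access-Control-Allow-Origin: *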

At any rate, we've laid the foundation:

In Una's implementation of Toaster, there is a radial gradient on top of the image that brings focus to the center. Easy enough:

// Normally, you'd paint this gradient directly onto the canvas. However
// we'll need a separate canvas later in order to blend everything. 
function toasterGradient (width, height) {
  var texture = document.createElement('canvas');
  var ctx = texture.getContext('2d');

  texture.width = width;
  texture.height = height;

  // Fill a Radial Gradient
  // https://developer.mozilla.org/en-US/docs/Web/API/CanvasRenderingContext2D/createRadialGradient
  var gradient = ctx.createRadialGradient(width / 2, height / 2, 0, width / 2, height / 2, width * 0.6);

  gradient.addColorStop(0, "#804e0f");
  gradient.addColorStop(1, "#3b003b");

  ctx.fillStyle = gradient;
  ctx.fillRect(0, 0, width, height);

  return ctx;
}

function render () {
  // ... prior code

  var gradient = toasterGradient(width, height);

  ctx.drawImage(gradient.canvas, 0, 0);
}

We're getting there. However this leaves us with an opaque purple-orange gradient covering our image.

Not ideal. We need to blend the gradient into the background. We could use context.globalCompositeOperation; however, that bumps into browser support issues for modes like screen, multiply, and color-burn. Instead, we'll need to iterate over all of the pixel data for both canvases using context.getImageData.
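
For comparison, in browsers where globalCompositeOperation does support these modes, the whole blend only takes a few lines (a sketch, not the route we'll take here):

ctx.globalCompositeOperation = 'screen';
ctx.drawImage(gradient.canvas, 0, 0);
ctx.globalCompositeOperation = 'source-over'; // restore the default mode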

context.getImageData #

context.getImageData grabs a box of pixels from the canvas and returns an ImageData object. ImageData provides width, height, and data properties. We only care about the data field:

function blend (background, foreground, width, height, transform) {
  // Side note: grabbing this data is the most expensive piece. For better
  // performance, you could consider caching this data
  var bottom = background.getImageData(0, 0, width, height);
  var top    = foreground.getImageData(0, 0, width, height);

  for (var i = 0, size = top.data.length; i < size; i += 4) {
    // red
    top.data[i+0] = transform(bottom.data[i+0], top.data[i+0]);
    // green
    top.data[i+1] = transform(bottom.data[i+1], top.data[i+1]);
    // blue
    top.data[i+2] = transform(bottom.data[i+2], top.data[i+2]);
    // the fourth slot is alpha. We don't need that (so skip by 4)
  }

  return top;
}

Cool. This iterates over every pixel of the gradient (the foreground) and replaces it with the result of a given transformation function.

So what do I mean by a transformation function? I mean a function that implements one of the many blending modes. This is not proprietary knowledge; Wikipedia contains a vast array of formulas for most blending techniques.
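
To make that concrete, here is one of those formulas written as a transformation function. Multiply isn't part of the Toaster recipe; it's just an illustration, using normalized 0–1 values:

// https://en.wikipedia.org/wiki/Blend_modes#Multiply
function multiply (bottomPixel, topPixel) {
  return bottomPixel * topPixel;
}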

Implementing the screen blend mode #

According to CSSGram, we need the screen blend mode for Toaster. This formula is:

// https://en.wikipedia.org/wiki/Blend_modes#Screen
function screen (bottomPixel, topPixel) {
  return 1 - (1 - bottomPixel) * (1 - topPixel);
}

Since getImageData returns color values between 0 and 255, we need to make a minor tweak:

// https://en.wikipedia.org/wiki/Blend_modes#Screen
function screen (bottomPixel, topPixel) {
  return 255 - (255 - topPixel) * (255 - bottomPixel) / 255;
}

Finally, let's invoke the blend function with this transformation:

function render() {
  // ...prior code
  var screen = blend(ctx, gradient, width, height, function (bottomPixel, topPixel) {
    return 255 - (255 - topPixel) * (255 - bottomPixel) / 255;
  })

  // replace `ctx.drawImage(gradient.canvas, 0, 0)` with this:
  ctx.putImageData(screen, 0, 0);
}

Nice! This performs the blending we want.

Why is it so washed out? #

We've neglected an important component: Toaster manipulates the brightness and contrast of the image.

These types of transformations are a little trickier, but the internet is a deep ocean of free information and the techniques are well established. Manipulating brightness, color, contrast, and saturation is done through a color matrix transformation. Fortunately for us, the HTML5 drawing library EaselJS has already figured this out.

Using Color Matrices #

For the purposes of this blog post, I've extracted the color matrix algorithm from EaselJS. Adjusting brightness and contrast is a matter of multiplying matrices and applying the result to pixel data. I wouldn't force that on anyone. However, feel free to check out the source code if you are curious.
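
To give a rough sense of what that library does under the hood, here is a sketch of applying a 4x5 color matrix to a single RGBA pixel. The helper name and matrix layout are illustrative, not the exact EaselJS implementation:

// Each output channel is a weighted sum of the input channels plus a
// constant offset, read row by row from the matrix m.
function applyColorMatrix (m, r, g, b, a) {
  return [
    m[0]  * r + m[1]  * g + m[2]  * b + m[3]  * a + m[4],   // red
    m[5]  * r + m[6]  * g + m[7]  * b + m[8]  * a + m[9],   // green
    m[10] * r + m[11] * g + m[12] * b + m[13] * a + m[14],  // blue
    m[15] * r + m[16] * g + m[17] * b + m[18] * a + m[19]   // alpha
  ];
}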

After we pull in a color matrix transformation library, manipulating brightness and contrast is a matter of sending the correct parameters:

<!-- https://github.com/vigetlabs/canvas-instagram-filters/blob/gh-pages/lib/color-matrix.js -->
<script src="lib/color-matrix.js"></script>

<script>
  //... prior code
  function render () {
    //...prior code
    var colorCorrected = colorMatrix(screen, { contrast: 30, brightness: -30 });

    // Replace `ctx.putImageData(screen, 0, 0)` with:
    ctx.putImageData(colorCorrected, 0, 0);
  }
</script>

The specific brightness and contrast parameters differ from the CSS version's, but the result is virtually the same. Beautiful:

We made it #

A quick diff of the CSS and JS versions in Photoshop shows us that we got pretty close:

In this photo, a darker pixel means a closer match. We achieved near parity.

Implementing blend modes such as screen, multiply, and color-burn is a realistic goal; it just takes a little more work. The result is beautiful photography within a canvas 2D context. View the end result for yourself or get the code.
