Understanding Futures In Rust -- Part 1

Futures make async programming in Rust easy and readable. Learn how to use futures by building them from scratch.

Updates

This post has been updated. It was originally written to match the futures-rs library 0.1 version, but futures have now reached stable in the standard library and have some significant differences. This post will cover much of the same material it did before, but also explore creating a naive executor using the std::task module.

Part 2 has been released!  Find it here.

Background

Futures in Rust are analogous to promises in JavaScript. They are a powerful abstraction over the concurrency primitives available in Rust. They are also a stepping stone to async/await, which allows users to write asynchronous code that looks like synchronous code.

Async/await isn't quite ready for prime time in Rust, but there is no reason that you shouldn't start using futures today in your Rust projects. The tokio crate is stable, easy to use, and lightning fast. Check out this documentation for a great primer on using futures*.

Futures are already in the standard library** but in this series of blog posts, I'm going to write a simplified version of that library to show how it works, how to use it, and avoid some common pitfalls.

* Tokio is using std::future on its master branch, but all of the documentation refers to 0.1 futures. The concepts are all applicable, though.
** While futures are in std now, many of the commonly used features are missing. They currently live in the futures-preview crate, and I will be referencing functions and traits defined there. Things are moving quickly; much of what is in that crate will end up in the standard library eventually.

Prerequisites

  • A small amount of Rust knowledge or willingness to learn as you go (go read the Rust book, it's great)
  • A modern web browser like Chrome, Firefox, Safari, or Edge (we'll be using the rust playground)
  • That's it!

The Goal

The goal of this post is for you to understand the code below, and to implement the types and functions required to make it compile. This is valid syntax for real futures from the standard library, and demonstrates how chaining works with futures.

// This does not compile, yet

fn main() {
    let future1 = future::ok::<u32, u32>(1)
        .map(|x| x + 3)
        .map_err(|e| println!("Error: {:?}", e))
        .and_then(|x| Ok(x - 3))
        .then(|res| {
          match res {
              Ok(val) => Ok(val + 3),
              err => err,
          }
        });
    let joined_future = future::join(future1, future::err::<u32, u32>(2));
    let val = block_on(joined_future);
    assert_eq!(val, (Ok(4), Err(2)));
}

What's a future anyway?

Specifically, a future is the value produced by a series of asynchronous computations. The documentation for the futures crate calls it "a concept for an object which is a proxy for another value that may not be ready yet."

Futures in Rust allow you to define a task, such as a network call or a computation, to be run asynchronously. You can chain functions onto the result, transform it, handle errors, merge it with other futures, and perform many other computations on it. Those computations will only be run when the future is passed to an executor, such as the tokio library's run function. In fact, if you never use a future before it falls out of scope, nothing will happen at all. For this reason, the futures crate marks futures as must_use, and the compiler will warn you if you let one fall out of scope without consuming it.

If you are familiar with JavaScript promises, some of this may seem a little strange. In JavaScript, promises are executed on the event loop, and there is no other way to run them; the executor function passed to the constructor runs immediately. In essence, though, a promise still simply defines a set of instructions to be run later. In Rust, the executor can use any number of async strategies to run futures.

Let's Build Our Future

At a high level, we need a few pieces to make futures work: a runner, the Future trait, and the Poll type.

But First, A Runner

Our future won't do much if we don't have a way to execute it. Since we are implementing our own futures, we'll need to implement our own runner as well. For this exercise, we will not actually be doing anything asynchronous, but we will be approximating asynchronous calls. Futures are pull based rather than push based. This allows them to be a zero cost abstraction, but it also means that a future gets polled once and is then responsible for notifying the executor when it is ready to be polled again. The details of how this works are not important to understanding how futures are created and chained together, so our executor is a very rough approximation of one. It can only run one future, and it can't do any meaningful async. The Tokio documentation has a lot more information about the runtime model of futures.

Here's what a very simple implementation looks like:

use std::cell::RefCell;

thread_local!(static NOTIFY: RefCell<bool> = RefCell::new(true));

struct Context<'a> {
    waker: &'a Waker,
}

impl<'a> Context<'a> {
    fn from_waker(waker: &'a Waker) -> Self {
        Context { waker }
    }

    fn waker(&self) -> &'a Waker {
self.waker
    }
}

struct Waker;

impl Waker {
    fn wake(&self) {
        NOTIFY.with(|f| *f.borrow_mut() = true)
    }
}

fn run<F>(mut f: F) -> F::Output
where
    F: Future,
{
    NOTIFY.with(|n| loop {
        if *n.borrow() {
            *n.borrow_mut() = false;
            let ctx = Context::from_waker(&Waker);
            if let Poll::Ready(val) = f.poll(&ctx) {
                return val;
            }
        }
    })
}

run is a generic function over a type F, where F is a future, and it returns a value of F's associated Output type, which is defined on the Future trait. We'll get back to this later.

The body of the function approximates what a real runner might do: it loops until it gets notified that the future is ready to be polled again, and it returns when the future is ready. The Context and Waker types are a simulation of the types of the same name defined in the std::task module. They need to be here for this to compile, but how they work in a real runtime is out of the scope of this post. Feel free to dig into exactly how it all works yourself.

Poll is a simple generic enum we can define as follows:

enum Poll<T> {
    Ready(T),
    Pending
}
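As a quick aside, a Poll value is consumed by matching on its two variants. Here's a small standalone sketch (the `describe` helper is hypothetical, just for illustration):

```rust
// Poll as defined above: either a finished value or "try again later".
enum Poll<T> {
    Ready(T),
    Pending,
}

// A hypothetical helper that turns a Poll into a message.
fn describe(p: Poll<i32>) -> String {
    match p {
        Poll::Ready(val) => format!("ready: {}", val),
        Poll::Pending => String::from("not ready yet"),
    }
}

fn main() {
    println!("{}", describe(Poll::Pending)); // not ready yet
    println!("{}", describe(Poll::Ready(42))); // ready: 42
}
```

This is the same shape of match that our poll implementations below will produce, and that the run function above consumes with if let.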

Our Trait

Traits are a way of defining shared behavior in rust. They allow us to specify types and functions that implementing types must define. They can also implement default behavior, which we will see when we go over combinators.

Our trait implementation looks like this (and it's identical to a real implementation for futures):

trait Future {
    type Output;

    fn poll(&mut self, ctx: &Context) -> Poll<Self::Output>;
}

This trait is simple for now and simply declares the required type, Output, and the signature of the only required method, poll which takes a reference to a context object. This object has a reference to a waker, which is used to notify the runtime that the future is ready to be polled again.

Our Implementation

#[derive(Default)]
struct MyFuture {
    count: u32,
}

impl Future for MyFuture {
    type Output = i32;

    fn poll(&mut self, ctx: &Context) -> Poll<Self::Output> {
        match self.count {
            3 => Poll::Ready(3),
            _ => {
                self.count += 1;
                ctx.waker().wake();
                Poll::Pending
            }
        }
    }
}

Let's go over this line by line:

  • #[derive(Default)] automatically creates a ::default() function for the type. Numbers are defaulted to 0.
  • struct MyFuture { count: u32 } defines a simple struct with a counter. This will allow us to simulate asynchronous behavior.
  • impl Future for MyFuture is our implementation of the trait.
  • We are setting Output to i32 so we can return our internal count.
  • In our implementation of poll we are deciding what to do based on the internal count field.
  • If it matches 3 (the 3 => arm), we return a Poll::Ready response with the value 3.
  • In all other cases, we increment the counter and return Poll::Pending.
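To make that Pending/Pending/Ready sequence concrete, we can poll MyFuture by hand, without the run loop. This sketch restates the post's simplified types so it runs on its own (the Waker here is a no-op, since nothing is waiting on the notification):

```rust
// Restatement of the post's simplified types so this snippet stands alone.
enum Poll<T> {
    Ready(T),
    Pending,
}

struct Waker;

impl Waker {
    fn wake(&self) {} // no-op here; the real runner would set NOTIFY
}

struct Context<'a> {
    waker: &'a Waker,
}

impl<'a> Context<'a> {
    fn from_waker(waker: &'a Waker) -> Self {
        Context { waker }
    }

    fn waker(&self) -> &'a Waker {
        self.waker
    }
}

trait Future {
    type Output;
    fn poll(&mut self, ctx: &Context) -> Poll<Self::Output>;
}

#[derive(Default)]
struct MyFuture {
    count: u32,
}

impl Future for MyFuture {
    type Output = i32;

    fn poll(&mut self, ctx: &Context) -> Poll<Self::Output> {
        match self.count {
            3 => Poll::Ready(3),
            _ => {
                self.count += 1;
                ctx.waker().wake();
                Poll::Pending
            }
        }
    }
}

fn main() {
    let mut f = MyFuture::default();
    let waker = Waker;
    let ctx = Context::from_waker(&waker);
    // The first three polls return Pending while the counter climbs to 3...
    for _ in 0..3 {
        assert!(matches!(f.poll(&ctx), Poll::Pending));
    }
    // ...and the fourth poll returns Ready(3).
    assert!(matches!(f.poll(&ctx), Poll::Ready(3)));
    println!("ready after 4 polls");
}
```

Each Pending result stands in for "the async work isn't done yet"; the wake call is what would tell a real executor to poll again.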

And with a really simple main function, we can run our future!

fn main() {
    let my_future = MyFuture::default();
    println!("Output: {}", run(my_future));
}

Run it yourself!

One Last Step

This works as is, but doesn't really show you any of the power of futures. So let's create a super-handy future to chain onto it: one that adds 1 to the output of any wrapped future whose output supports having 1 added to it, such as MyFuture.

struct AddOneFuture<T>(T);

impl<T> Future for AddOneFuture<T>
where
    T: Future,
    T::Output: std::ops::Add<i32, Output = i32>,
{
    type Output = i32;

    fn poll(&mut self, ctx: &Context) -> Poll<Self::Output> {
        match self.0.poll(ctx) {
            Poll::Ready(count) => Poll::Ready(count + 1),
            Poll::Pending => Poll::Pending,
        }
    }
}

This looks complicated but is pretty simple. I'll go over it a line at a time again:

  • struct AddOneFuture<T>(T); this is an example of a generic newtype pattern. It allows us to 'wrap' other structs and add our own behavior.
  • impl<T> Future for AddOneFuture<T> is a generic trait implementation.
  • T: Future ensures that anything that is wrapped by AddOneFuture implements Future
  • T::Output: std::ops::Add<i32, Output = i32> ensures that the value represented by Poll::Ready(value) responds to the + operation.

The rest should be pretty self-explanatory. It polls the inner future using self.0.poll, passing the context through, and based on the result of that either returns Poll::Pending or returns the count of the inner future with 1 added, Poll::Ready(count + 1).

We can just update the main function to use our new future.

fn main() {
    let my_future = MyFuture::default();
    println!("Output: {}", run(AddOneFuture(my_future)));
}

Run it yourself!

Now we are starting to see how we could use futures to chain asynchronous actions together. There are just a couple of easy steps to building the chaining functions (combinators) that give futures much of their power.
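As a preview of what those combinator-building steps look like, here is a sketch of a hypothetical Map combinator. This is not the real futures API, just the same newtype pattern as AddOneFuture generalized over a caller-supplied closure; the post's simplified types are restated so the snippet stands alone:

```rust
// Restatement of the post's simplified types so this sketch compiles on its own.
enum Poll<T> {
    Ready(T),
    Pending,
}

struct Waker;

#[allow(dead_code)]
struct Context<'a> {
    waker: &'a Waker,
}

trait Future {
    type Output;
    fn poll(&mut self, ctx: &Context) -> Poll<Self::Output>;
}

// An always-ready future, just to exercise the combinator below.
struct Ready(Option<i32>);

impl Future for Ready {
    type Output = i32;

    fn poll(&mut self, _ctx: &Context) -> Poll<Self::Output> {
        Poll::Ready(self.0.take().expect("polled Ready after completion"))
    }
}

// A hypothetical Map combinator: the newtype trick from AddOneFuture,
// but applying a caller-supplied closure to the inner future's output.
struct Map<Fut, F> {
    future: Fut,
    f: Option<F>, // Option lets us move the closure out exactly once
}

impl<Fut, F, T> Future for Map<Fut, F>
where
    Fut: Future,
    F: FnOnce(Fut::Output) -> T,
{
    type Output = T;

    fn poll(&mut self, ctx: &Context) -> Poll<Self::Output> {
        match self.future.poll(ctx) {
            Poll::Ready(val) => {
                let f = self.f.take().expect("polled Map after completion");
                Poll::Ready(f(val))
            }
            Poll::Pending => Poll::Pending,
        }
    }
}

fn main() {
    let waker = Waker;
    let ctx = Context { waker: &waker };
    let mut mapped = Map {
        future: Ready(Some(1)),
        f: Some(|x| x + 3),
    };
    match mapped.poll(&ctx) {
        Poll::Ready(val) => println!("mapped output: {}", val), // mapped output: 4
        Poll::Pending => unreachable!(),
    }
}
```

Compare the poll body with AddOneFuture's: the only difference is that the transformation is supplied by the caller instead of being hard-coded.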

Recap

  • Futures are a powerful way to leverage Rust's concept of zero cost abstractions to make readable, fast asynchronous code.
  • Futures behave a lot like promises in JavaScript and other languages.
  • We've learned a lot about constructing generic types and a little bit about chaining actions together.

Up Next

In part 2, we'll cover combinators. Combinators, in a non-technical sense, allow you to use functions (like callbacks) to build new types. These will be familiar to you if you have used JavaScript promises.

Joe Jackson

Joe is a developer who brings curiosity and a commitment to quality to every project. He works from our Durham, NC office.
