Understanding Futures In Rust -- Part 1

Futures make async programming in Rust easy and readable. Learn how to use futures by building them from scratch.


Futures in Rust are analogous to promises in JavaScript. They are a powerful abstraction over the concurrency primitives available in Rust. They are also a stepping stone to async/await, which allows users to write asynchronous code that looks like synchronous code.

Async/await isn't quite ready for primetime in Rust, but there is no reason that you shouldn't start using futures today in your Rust projects. The tokio and futures crates are stable, easy to use, and lightning fast. Check out this documentation for a great primer on using futures.

There is already a futures library, but in this series of blog posts, I'm going to write a simplified version of that library to show how it works, how to use it, and how to avoid some common pitfalls.

Prerequisites


  • A small amount of Rust knowledge or willingness to learn as you go (go read the Rust book, it's great)
  • A modern web browser like Chrome, Firefox, Safari, or Edge (we'll be using the rust playground)
  • That's it!

The Goal

The goal of this post is to understand this code, and to implement the types and functions required to make it compile. This is valid syntax for real futures from the futures crate, and demonstrates how chaining works with futures.

fn main() {
    let result = future::result::<u32, u32>(Ok(1))
        .map(|x| x + 3)
        .map_err(|e| format!("Error: {:?}", e))
        .and_then(|x| Ok(x - 3))
        .then(|res| {
            match res {
                Ok(val) => Ok(val + 3),
                Err(e) => Err(e),
            }
        })
        .wait();

    println!("My Future Result: {:?}", result);
}
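Before digging into futures themselves, it helps to see what that chain actually computes. Rust's standard Result type has the same map, map_err, and and_then combinators on plain values that futures provide on eventual values. Here is a sketch of the equivalent synchronous pipeline (compute is a made-up name, and the final .then step is folded into a map since it only touches the Ok arm):

```rust
// The future chain from above, rewritten against plain std Result so the
// arithmetic is easy to follow.
fn compute() -> Result<u32, String> {
    Ok::<u32, u32>(1)
        .map(|x| x + 3)                         // Ok(4)
        .map_err(|e| format!("Error: {:?}", e)) // error type becomes String
        .and_then(|x| Ok(x - 3))                // Ok(1)
        .map(|val| val + 3)                     // Ok(4)
}

fn main() {
    println!("My Result: {:?}", compute()); // My Result: Ok(4)
}
```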

What's a Future Anyway?

Specifically, a future represents the value produced by a series of asynchronous computations. The documentation for the futures crate calls it "a concept for an object which is a proxy for another value that may not be ready yet."

Futures in Rust allow you to define a task, like a network call or computation, to be run asynchronously. You can chain functions onto that result, transform it, handle errors, merge it with other futures, and perform many other computations on it. Those will only be run when the future is passed to an executor like the tokio library's `run` function. In fact, if you don't use a future before it falls out of scope, nothing will happen. For this reason the futures crate declares futures `must_use` and will give a compiler warning if you allow them to fall out of scope.
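You can see both the laziness and the warning with plain Rust. Here is a minimal sketch (Lazy and make are made-up names) using the same #[must_use] attribute that the futures crate applies to its future types:

```rust
// Sketch: a do-nothing struct marked must_use, the way the futures crate
// marks its future types.
#[must_use = "futures do nothing unless polled"]
struct Lazy;

fn make() -> Lazy {
    Lazy
}

fn main() {
    make(); // the compiler warns here: unused `Lazy` that must be used
    let _kept = make(); // binding the value (even to `_kept`) silences it
}
```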

If you are familiar with JavaScript promises, some of this may seem a little weird to you. In JavaScript, promises are executed on the event loop, and there is no other way to run them; the 'executor' function passed to a promise runs immediately. But, in essence, the promise still simply defines a set of instructions to be run later. In Rust, the executor could use any of a number of async strategies to run them.

Let's Build Our Future

At a high level, we need a few pieces to make futures work: a runner, the Future trait, and the Poll and Async types.

Our Runner

Our future won't do much if we don't have a way to execute it. Since we are implementing our own futures, we'll need to implement our own runner as well. For this exercise, we will not actually be doing anything asynchronous, but we will be approximating asynchronous calls. Futures are poll-based rather than push-based. This means that they can be a zero-cost abstraction, but it also means that they get polled once and are then responsible for notifying the executor when they are ready to be polled again. The details of how this works are not important to understanding how futures are created and chained together, so our executor is a very rough approximation of one. It can only run one future, and it can't do any meaningful async. The Tokio documentation has a lot more information about the runtime model of futures.

Here's what a very simple implementation looks like:

use std::cell::RefCell;

thread_local!(static NOTIFY: RefCell<bool> = RefCell::new(true));

mod task {
    use crate::NOTIFY;

    pub struct Task();

    impl Task {
        pub fn notify(&self) {
            NOTIFY.with(|f| {
                *f.borrow_mut() = true;
            });
        }
    }

    pub fn current() -> Task {
        Task()
    }
}

fn run<F>(mut f: F)
where
    F: Future<Item = (), Error = ()>,
{
    loop {
        // Only poll when a notification is pending; reset the flag first.
        if NOTIFY.with(|n| {
            if *n.borrow() {
                *n.borrow_mut() = false;
                match f.poll() {
                    Ok(Async::Ready(_)) | Err(_) => return true,
                    Ok(Async::NotReady) => (),
                }
            }
            false
        }) {
            break;
        }
    }
}

run is a generic function over a type F, where F is a future whose Item type is () and whose Error type is (). () (pronounced unit) is the only value of the type (), which carries no information; it is the type returned by default from functions. In this case we are explicitly saying that we won't use, and don't care about, the Item and Error types. The `where F:` clause is a trait bound. It allows us to limit the types that can be passed into our generic function to those that implement specific traits.
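Here is a standalone sketch of the same shape (show is a made-up name): a generic function whose trait bound restricts which types may be passed in, and which returns unit:

```rust
use std::fmt::Debug;

// A generic function with a trait bound, like `run`: only types that
// implement Debug may be passed in.
fn show<T>(value: T)
where
    T: Debug, // trait bound: T must implement Debug
{
    println!("{:?}", value);
} // no return expression, so the function returns ()

fn main() {
    show(42);
    show("hello");
    let unit: () = show(3.14); // the unit value carries no information
    println!("{:?}", unit);    // prints ()
}
```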

The body of the function is an approximation of what a real runner might do: it loops until it gets notified that the future is ready to be polled again. It returns from the function when the future is either ready or returns an error. The task module is a simulation of the task model in futures. It needs to be there for this to compile, but it is out of the scope of this post. Feel free to dig into the real implementation yourself.
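The NOTIFY flag relies on the thread_local! plus RefCell pattern, which is worth seeing in isolation. A sketch with made-up names (notify, take_notify):

```rust
use std::cell::RefCell;

// Each thread gets its own copy of FLAG; RefCell gives us interior
// mutability so we can write it through a shared reference.
thread_local!(static FLAG: RefCell<bool> = RefCell::new(false));

fn notify() {
    FLAG.with(|f| *f.borrow_mut() = true);
}

// Read the flag and reset it, like the runner does each trip around its loop.
fn take_notify() -> bool {
    FLAG.with(|f| {
        let was_set = *f.borrow();
        *f.borrow_mut() = false;
        was_set
    })
}

fn main() {
    notify();
    println!("first: {}", take_notify());  // true: a notification was pending
    println!("second: {}", take_notify()); // false: it was consumed
}
```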

Async is a simple generic enum we can define as follows:

enum Async<T> {
    // The computation has finished and produced a value
    Ready(T),
    // Still waiting; poll again after being notified
    NotReady,
}
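An executor inspects this enum every time it polls. Here is a small standalone sketch of matching on it (describe is a made-up helper):

```rust
// A copy of the enum above, plus a helper that shows how an executor
// branches on the two states.
enum Async<T> {
    Ready(T),
    NotReady,
}

fn describe(state: Async<u32>) -> String {
    match state {
        Async::Ready(v) => format!("ready with {}", v),
        Async::NotReady => String::from("not ready yet"),
    }
}

fn main() {
    println!("{}", describe(Async::Ready(7)));
    println!("{}", describe(Async::NotReady));
}
```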

Our Trait

Traits are a way of defining shared behavior in Rust. They allow us to specify types and functions that implementing types must define. They can also implement default behavior which we'll see later, when we go over combinators.
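Before writing the Future trait itself, here is a toy trait (Describe and Answer are made-up names) showing the three pieces we'll use: an associated type, a required method, and a method with default behavior:

```rust
// A toy trait with the same anatomy as Future: one associated type,
// one required method, and one method with a default body.
trait Describe {
    type Output;

    // Required: every implementor must provide this.
    fn value(&self) -> Self::Output;

    // Default behavior: implementors get this for free, like the
    // combinator methods we'll add to Future later.
    fn label(&self) -> &'static str {
        "described"
    }
}

struct Answer;

impl Describe for Answer {
    type Output = u32;

    fn value(&self) -> u32 {
        42
    }
}

fn main() {
    let a = Answer;
    println!("{}: {}", a.label(), a.value()); // described: 42
}
```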

Our trait implementation looks like this, and it's identical to a real implementation for futures:

type Poll<T, E> = Result<Async<T>, E>;

trait Future {
    type Item;
    type Error;
    fn poll(&mut self) -> Poll<Self::Item, Self::Error>;
}

type Poll<T, E> is simply a type alias. It makes it easier to reason about an enum nested in an enum, and makes for a little less typing for us when we go to implement the trait. This trait is simple for now and declares the two required types, Item and Error, and the signature of the only required method, poll.
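The alias is purely a name. Both spellings refer to the same type and can be used interchangeably, as this sketch shows (done is a made-up function):

```rust
// Type aliases introduce a new name, not a new type: Poll<T, E> and
// Result<Async<T>, E> are interchangeable everywhere.
enum Async<T> {
    Ready(T),
    NotReady,
}

type Poll<T, E> = Result<Async<T>, E>;

fn done() -> Poll<u32, ()> {
    Ok(Async::Ready(3))
}

fn main() {
    // Annotating with the expanded type still accepts the alias's value.
    let r: Result<Async<u32>, ()> = done();
    match r {
        Ok(Async::Ready(v)) => println!("ready: {}", v),
        Ok(Async::NotReady) => println!("not ready"),
        Err(()) => println!("error"),
    }
}
```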

Our Implementation

#[derive(Default)]
struct MyFuture {
    count: u32,
}

impl Future for MyFuture {
    type Item = ();
    type Error = ();

    fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
        println!("Count: {}", self.count);
        match self.count {
            3 => Ok(Async::Ready(())),
            _ => {
                self.count += 1;
                task::current().notify();
                Ok(Async::NotReady)
            }
        }
    }
}
Let's go over this line by line:

  • #[derive(Default)] automatically creates a ::default() function for the struct. Struct members that are numbers are defaulted to 0.
  • struct MyFuture { count: u32 } defines a simple struct with a counter. This will allow us to simulate asynchronous behavior by counting up every time poll is called and returning Ready when we reach a certain value.
  • impl Future for MyFuture is our implementation of the trait.
  • We are setting Item and Error to () because we currently don't care about the output.
  • In our implementation of poll we are printing out the count, then deciding what to return based on it.
  • If it matches 3 we are returning a successful result with Async::Ready with a value of ()
  • In all other cases we are incrementing the counter, notifying the executor that it is ready to be polled again, and returning a successful result with Async::NotReady

And with a really simple main function, we can run our future!

fn main() {
    let my_future = MyFuture::default();
    run(my_future);
}
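For reference, here are all of the snippets so far combined into one self-contained program (a sketch of this post's simplified types, not the real futures crate):

```rust
use std::cell::RefCell;

thread_local!(static NOTIFY: RefCell<bool> = RefCell::new(true));

mod task {
    use super::NOTIFY;

    pub struct Task();

    impl Task {
        pub fn notify(&self) {
            NOTIFY.with(|f| *f.borrow_mut() = true);
        }
    }

    pub fn current() -> Task {
        Task()
    }
}

enum Async<T> {
    Ready(T),
    NotReady,
}

type Poll<T, E> = Result<Async<T>, E>;

trait Future {
    type Item;
    type Error;
    fn poll(&mut self) -> Poll<Self::Item, Self::Error>;
}

fn run<F>(mut f: F)
where
    F: Future<Item = (), Error = ()>,
{
    loop {
        // Only poll when a notification is pending; reset the flag first.
        if NOTIFY.with(|n| {
            if *n.borrow() {
                *n.borrow_mut() = false;
                match f.poll() {
                    Ok(Async::Ready(_)) | Err(_) => return true,
                    Ok(Async::NotReady) => (),
                }
            }
            false
        }) {
            break;
        }
    }
}

#[derive(Default)]
struct MyFuture {
    count: u32,
}

impl Future for MyFuture {
    type Item = ();
    type Error = ();

    fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
        println!("Count: {}", self.count);
        match self.count {
            3 => Ok(Async::Ready(())),
            _ => {
                self.count += 1;
                task::current().notify();
                Ok(Async::NotReady)
            }
        }
    }
}

fn main() {
    run(MyFuture::default()); // prints Count: 0 through Count: 3
}
```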

Run it yourself!

One Last Step

This works as is, but doesn't really show you any of the power of futures. So let's create a super-handy future to chain it with: one that adds 1 to the result of any future whose value can have 1 added to it, like MyFuture.

struct AddOneFuture<T>(T);

impl<T> Future for AddOneFuture<T>
where
    T: Future,
    T::Item: std::ops::Add<u32, Output = u32>,
{
    type Item = ();
    type Error = ();

    fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
        match self.0.poll() {
            Ok(Async::Ready(count)) => {
                println!("Final Count: {}", count + 1);
                Ok(Async::Ready(()))
            }
            Ok(Async::NotReady) => Ok(Async::NotReady),
            Err(_) => Err(()),
        }
    }
}

This looks complicated but is pretty simple. I'll go over it a line at a time again:

  • struct AddOneFuture<T>(T); this is an example of a generic newtype pattern. It allows us to 'wrap' other structs and add our own behavior.
  • impl<T> Future for AddOneFuture<T> is a generic trait implementation.
  • T: Future ensures that anything that is wrapped by AddOneFuture implements Future
  • T::Item: std::ops::Add<u32, Output=u32> ensures that the value represented by Async::Ready(value) responds to the + operation.

The rest should be pretty self-explanatory. It polls the inner future using self.0.poll(), and based on the result either returns Async::NotReady or prints count + 1 and returns Async::Ready(()).
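The newtype plus trait bound combination works for ordinary values too. A sketch with made-up names (PlusOne), stripped of everything future-related:

```rust
use std::ops::Add;

// The generic newtype pattern on plain values: wrap any T, and use trait
// bounds to constrain what the wrapper can do with it.
struct PlusOne<T>(T);

impl<T> PlusOne<T>
where
    T: Add<u32, Output = u32> + Copy, // T + u32 must yield a u32
{
    fn value(&self) -> u32 {
        self.0 + 1
    }
}

fn main() {
    println!("{}", PlusOne(41u32).value()); // 42
}
```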

We need to make two more quick changes to make this work:

impl Future for MyFuture {
    type Item = u32;
    // ...Error stays (), and inside poll we now return the count:

    match self.count {
        3 => Ok(Async::Ready(self.count)),
        // ...the other arm is unchanged

fn main() {
    let my_future = MyFuture::default();
    run(AddOneFuture(my_future));
}

Run it yourself!

Now, we are starting to see how we could use futures to chain asynchronous actions together. There are just a couple of easy steps to building the chaining functions (combinators) that give futures a lot of their power.

Takeaways


  • Futures are a powerful way to leverage Rust's concept of zero cost abstractions to make readable, fast, asynchronous code.
  • Futures behave a lot like promises in JavaScript and other languages.
  • We've learned a lot about constructing generic types and a little bit about chaining futures together.

Up Next

In part 2, we'll cover combinators. Combinators, in a non-technical sense, allow you to use functions (like callbacks) to build new types. These will be familiar to you if you have used JavaScript promises.

Joe Jackson

Joe is a developer who brings curiosity and a commitment to quality to every project. He works from our Durham, NC office.
