Concurrent System Programming with Effect Handlers

Stephen Dolan¹, Spiros Eliopoulos³, Daniel Hillerström², Anil Madhavapeddy¹, KC Sivaramakrishnan¹, and Leo White³

¹University of Cambridge, ²The University of Edinburgh, ³Jane Street Group

Presented by Sagar Biswas

12/07/2025

What are Algebraic Effect Handlers?

At their core, effect handlers are a powerful language feature for separating a program's logic from its operational details. They allow us to define custom "effects" and provide "handlers" that interpret these effects, leading to more modular and readable code.

Direct-Style Code

You write code that looks sequential and is easy to read, without complex callbacks or nested promises. This makes debugging simpler, as you can rely on standard call stacks.

Composable & Modular

You can define your own effects and "handle" them differently depending on the context. This makes it possible to swap out a scheduler or testing framework without changing the application logic.

Think of it like a `try...catch` block on steroids: you can catch any custom operation you can imagine, not just exceptions, and you can choose to resume the computation, change its result, or perform other actions.
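As a tiny, hedged illustration in the paper's Multicore OCaml syntax (the `Ask` effect is invented for this slide, not taken from the paper): a computation performs `Ask`, and the handler answers it and resumes.

(* An illustrative effect: performing Ask suspends and yields an int. *)
effect Ask : int

let computation () = perform Ask + perform Ask

let result =
  match computation () with
  | v -> v                         (* normal return: here, 42 *)
  | effect Ask k -> continue k 21  (* answer Ask with 21 and resume;
                                      the handler stays installed, so the
                                      second Ask is handled the same way *)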

Problem 1: The Complexity of Concurrency

Traditional concurrency models often force developers into a choice between two bad options: heavyweight OS threads that don't scale, or complex, callback-based code that is hard to reason about.

  • Callback Hell: Inverting the control flow with callbacks makes logic difficult to follow and debug (see the sketch below the figure).
  • Monad Transformers: Monadic concurrency libraries avoid raw callbacks, but often at the cost of complex type signatures and a "two-color function" problem: monadic and ordinary code do not compose directly.
[Figure: The "Pyramid of Doom" caused by nested callbacks.]

Solution: Concurrency as an Effect

We first define a promise type and our effects, then provide a handler (the scheduler) to implement them.

(* 1. A promise is a mutable cell: fibers waiting on the result,
      a final value, or a recorded failure *)
type 'a _promise =
  | Waiting of ('a, unit) continuation list
  | Done of 'a
  | Error of exn
type 'a promise = 'a _promise ref

(* 2. Define effects and wrap them in helper functions *)
effect Async : ('a -> 'b) * 'a -> 'b promise
effect Await : 'a promise -> 'a
effect Yield : unit

let async f v = perform (Async (f, v))
let await p = perform (Await p)
let yield () = perform Yield

(* 3. The scheduler that handles the effects *)

let run main v =
  let run_q = Queue.create () in
  let enqueue f = Queue.push f run_q in
  let run_next () = if Queue.is_empty run_q then () else Queue.pop run_q () in
  let rec fork : 'a 'b. 'a promise -> ('b -> 'a) -> 'b -> unit =
    fun p f v ->
      match f v with
      | v -> (* CASE 1: Fiber completed successfully *)
          let Waiting l = !p in
          List.iter (fun k -> enqueue (fun () -> continue k v)) l;
          p := Done v;
          run_next ()
      | exception e -> (* CASE 2: Fiber failed *)
          let Waiting l = !p in
          List.iter (fun k -> enqueue (fun () -> discontinue k e)) l;
          p := Error e;
          run_next ()
      | effect (Async (f,v)) k -> (* CASE 3: Handle Async effect *)
          let p = ref (Waiting []) in
          enqueue (fun () -> continue k p);
          fork p f v
      | effect (Await p) k -> (* CASE 4: Handle Await effect *)
          (match !p with
          | Done v -> continue k v
          | Error e -> discontinue k e
          | Waiting l -> p := Waiting (k::l); run_next ())
      | effect Yield k -> (* CASE 5: Handle Yield effect *)
          enqueue (fun () -> continue k ());
          run_next ()
  in
  fork (ref (Waiting [])) main v
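One subtlety worth noting: the partial pattern `let Waiting l = !p` relies on a scheduler invariant, namely that a running fiber's own promise is always in the `Waiting` state, because only `fork` ever moves it to `Done` or `Error`, and it does so exactly once.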

Code Example: Spawning Fibers

Now we can write concurrent code that looks sequential. The scheduler handles the interleaving.

(* This will become Fiber 1 in the visualization *)
let task1 () =
  print_endline "Task 1 starting";
  yield (); (* give other tasks a chance to run *)
  print_endline "Task 1 finishing"

(* This will become Fiber 2 in the visualization *)
let task2 () =
  print_endline "Task 2 running"

let main () =
  let _promise1 = async task1 () in
  let _promise2 = async task2 () in
  () (* Fire and forget for this simple example *)

(* Run it with the scheduler *)
let () = run main ()
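Under the round-robin scheduler above, Task 1 suspends at its `yield`, Task 2 runs to completion, and Task 1 is then resumed, so the expected output is:

Task 1 starting
Task 2 running
Task 1 finishing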

How It Works: Concurrency Visualization

Fibers are lightweight threads. The scheduler runs them on an OS thread. `await` or `yield` pauses a fiber without blocking others.


Problem 2: Blocking I/O

A core challenge in concurrent programming is handling I/O operations (like reading a file or a network socket) without freezing the entire application. If a user-level thread makes a traditional, blocking system call, the entire OS thread it's running on will halt, preventing any other user-level threads from making progress. This completely undermines the goal of concurrency.

Solution: Non-Blocking I/O as an Effect

Instead of calling blocking OS functions directly, we `perform` an I/O effect. The scheduler's handler attempts the operation in a non-blocking way. If the OS says "not ready yet" (`EWOULDBLOCK`), the handler doesn't wait. It pauses the current fiber and schedules another one.

The Result: The application code remains clean and direct, while the handler manages the complexity of non-blocking event loops (`epoll`, `kqueue`) under the hood. The program remains responsive and scalable.

Code Example: Handling a "Blocking" Call

When `Accept` is performed, the handler tries the operation. If it would block, it pauses the fiber and runs something else.

(* Define the I/O effect *)
effect Accept : Unix.file_descr -> (Unix.file_descr * Unix.sockaddr)

(* The handler logic for this effect. This would be added to our scheduler.
   It assumes the listening socket was put in non-blocking mode with
   Unix.set_nonblock. *)
| effect (Accept fd) k ->
    (match Unix.accept fd with
    | (newfd, sockaddr) ->
        (* Success! The OS had a connection ready.
           Continue the fiber immediately with the result. *)
        continue k (newfd, sockaddr)
    | exception Unix.Unix_error ((Unix.EWOULDBLOCK | Unix.EAGAIN), _, _) ->
        (* It would block. Don't wait! *)
        (* 1. Record that fiber 'k' is waiting on file descriptor 'fd'.
              The event loop will watch 'fd' for readiness. *)
        record_waiting fd k;
        (* 2. Run the next available fiber from the queue. *)
        run_next ()
    )
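When the event loop later reports that `fd` is ready, the scheduler can resume the parked fiber. A hedged sketch of that callback (`lookup_waiting` is a hypothetical counterpart to `record_waiting` above):

(* Hypothetical event-loop callback, paired with record_waiting above.
   When epoll/kqueue reports 'fd' readable, retry the accept and
   resume the fiber that was parked on it. *)
let on_ready fd =
  match lookup_waiting fd with
  | Some k -> enqueue (fun () -> continue k (Unix.accept fd))
  | None -> ()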

How It Works: I/O Visualization

When a fiber performs a blocking I/O effect, the scheduler hands the work to a non-blocking event loop and runs other fibers.


Problem 3: Resource Safety and Asynchronous Signals

Ensuring resources like file handles are always closed is difficult, especially when faced with asynchronous interruptions (e.g., Ctrl-C).

A naive approach leaks resources on any exception:

let f = open_in "data.csv" in
do_stuff_with f;
close_in f

A `try-finally` block helps with normal exceptions:

let f = open_in "data.csv" in
match do_stuff_with f with
| () -> close_in f
| exception e -> close_in f; raise e

But even this is not safe! If an asynchronous signal arrives between `open_in` and the `match` statement, the cleanup code is never reached, and the resource is still leaked.

Solution: Signals as Asynchronous Effects

We can model signals as a special kind of "asynchronous effect". When the OS delivers a signal, the runtime performs a `Break` effect on the currently running fiber. This gives the handler a chance to intercept the interruption cleanly.

The Result: Cancellation becomes a scoped, structured operation. The handler can run cleanup code and then safely terminate the computation. This avoids the pitfalls of global state and makes resource handling robust and reliable, even in the face of random interruptions.

Code Example: Scoped Cancellation

A signal is treated as a `Break` effect, which is caught by a handler that can clean up resources before terminating.

(* This function makes a computation cancellable. *)
let async_cancellable f =
  (* 'mask' prevents async exceptions from firing here. *)
  mask (fun () ->
    (* 'unmask' allows them again, but only inside this block. *)
    match unmask f with
    | result -> Some result (* The function 'f' completed normally. *)
    
    (* If a signal arrives, it's performed as a 'Break' effect.
       Instead of crashing, our handler catches it. *)
    | effect Break _k ->
        (* We can run cleanup code here (e.g., close files). *)
        (* Then we simply return None, never resuming the continuation,
           which abandons the computation safely. *)
        None
  )
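Combining this handler with the earlier file example gives cleanup on every exit path. A hedged sketch, reusing the hypothetical `mask`/`unmask` and `do_stuff_with` from above:

let safe_process () =
  mask (fun () ->
    (* Acquire while masked: no signal can fire between open and match. *)
    let f = open_in "data.csv" in
    match unmask (fun () -> do_stuff_with f) with
    | () -> close_in f                    (* normal completion *)
    | exception e -> close_in f; raise e  (* synchronous failure *)
    | effect Break _k -> close_in f       (* asynchronous signal: clean up,
                                             then abandon the computation *))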

How It Works: Signal Visualization

A signal is treated as an effect, allowing a clean, scoped shutdown without leaking resources.


Performance Results

The paper evaluates a high-performance web server (`httpaf`) built with this effect-based I/O library (`aeio`) and compares it against the highly-optimized `Async` library and Go's standard HTTP server.

Key Finding: Performance is Competitive

The effect handler implementation performs on par with the heavily optimized, monadic `Async` library. This is a major success: it shows that we can have both elegant, direct-style code and excellent performance.

  • Under medium load, the `Effects` version was marginally better than `Async`.
  • Under high load, both OCaml versions showed higher tail latencies than Go, suggesting room for GC tuning, but `Effects` and `Async` were comparable.
  • There is no significant performance degradation from using the effects model.

Conclusion

Effect handlers are not just a theoretical curiosity. They are a practical, performant, and elegant foundation for concurrent systems programming.

They allow developers to write simple, direct-style code that is easy to read and reason about, while achieving performance comparable to complex, callback-based systems.

By providing a modular way to handle complex behaviors like I/O, concurrency, and asynchronous signals, they represent a powerful step forward for systems programming languages.

The key takeaway: We don't have to choose between clean code and fast code.
