Hash Server Tutorial
This tutorial builds a TCP server that reads data from clients, computes a
hash on a thread pool, and sends the result back. You'll learn how to combine
an `io_context` for network I/O with a `thread_pool` for CPU-bound work,
switching between them mid-coroutine with `capy::run()`.
Code snippets assume:

```cpp
#include <boost/corosio/io_context.hpp>
#include <boost/corosio/tcp_acceptor.hpp>
#include <boost/corosio/tcp_socket.hpp>
#include <boost/capy/buffers.hpp>
#include <boost/capy/ex/run_async.hpp>
#include <boost/capy/ex/run.hpp>
#include <boost/capy/ex/thread_pool.hpp>
#include <boost/capy/task.hpp>
#include <boost/capy/write.hpp>

namespace corosio = boost::corosio;
namespace capy = boost::capy;
```
Overview
Most servers spend their time waiting on the network. When the work between
reads and writes is cheap, a single-threaded `io_context` handles thousands
of connections without breaking a sweat. But some operations — cryptographic
hashes, compression, image processing — consume real CPU time. Running those
inline blocks the event loop and starves every other connection.

The solution is to keep I/O on the `io_context` and offload heavy computation
to a `thread_pool`. Capy's `run()` function makes this seamless: a single
`co_await` switches the coroutine to the pool, runs the work, and resumes
back on the original executor when it finishes.
This tutorial demonstrates:
- Accepting connections with `tcp_acceptor`
- Spawning independent session coroutines with `run_async`
- Switching executors with `capy::run()` for CPU-bound work
- The dispatch trampoline that returns the coroutine to its home executor
The Hash Function
We use FNV-1a as a stand-in for any CPU-intensive operation. In production you would substitute a cryptographic hash, a compression pass, or whatever work justifies leaving the event loop.
```cpp
capy::task<std::uint64_t>
compute_fnv1a( char const* data, std::size_t len )
{
    constexpr std::uint64_t basis = 14695981039346656037ULL;
    constexpr std::uint64_t prime = 1099511628211ULL;

    std::uint64_t h = basis;
    for( std::size_t i = 0; i < len; ++i )
    {
        h ^= static_cast<unsigned char>( data[i] );
        h *= prime;
    }
    co_return h;
}
```
This is a `capy::task` — a lazy coroutine that doesn't start until someone
awaits it. That matters because `run()` needs to control which executor the
task runs on.
Session Coroutine
Each client connection is handled by a single coroutine:
```cpp
capy::task<>
do_session(
    corosio::tcp_socket sock,
    capy::thread_pool& pool )
{
    char buf[4096];

    // 1. Read data from client (on io_context)
    auto [ec, n] = co_await sock.read_some(
        capy::mutable_buffer( buf, sizeof( buf ) ) );
    if( ec )
    {
        sock.close();
        co_return;
    }

    // 2. Switch to thread pool for CPU-bound hash computation,
    //    then automatically resume on io_context when done
    auto hash = co_await capy::run( pool.get_executor() )(
        compute_fnv1a( buf, n ) );

    // 3. Send hex result back to client (on io_context)
    auto result = to_hex( hash ) + "\n";
    auto [wec, wn] = co_await capy::write(
        sock,
        capy::const_buffer( result.data(), result.size() ) );
    (void)wec;
    (void)wn;
    sock.close();
}
```
Three things happen in sequence, but on two different executors:
1. **Read** — runs on the `io_context` thread. The socket awaitable suspends the coroutine until data arrives from the kernel.
2. **Hash** — `capy::run( pool.get_executor() )` posts `compute_fnv1a` to the thread pool. The coroutine suspends on the `io_context` and resumes on a pool thread. When the task completes, a dispatch trampoline posts the coroutine back to the `io_context`.
3. **Write** — back on the `io_context` thread, the hex result is sent to the client.
The executor switch is invisible at the call site — it reads like straight-line code.
How run() Switches Executors
When you write:
```cpp
auto hash = co_await capy::run( pool.get_executor() )(
    compute_fnv1a( buf, n ) );
```
Behind the scenes:
1. `run()` creates an awaitable that stores the pool executor.
2. On `co_await`, the awaitable's `await_suspend` dispatches the inner task through `pool_executor.dispatch(task_handle)`. For a thread pool, dispatch always posts — the task is queued for a worker thread.
3. The calling coroutine suspends (the `io_context` is free to process other connections).
4. A pool thread picks up the task and runs it to completion.
5. The task's `final_suspend` resumes a dispatch trampoline, which calls `io_context_executor.dispatch(caller_handle)` to post the caller back to the `io_context`.
6. The caller resumes on the `io_context` thread with the hash result.
The key insight: the caller’s executor is captured before the switch and restored automatically after. You never need to manually post back.
Accept Loop
The accept loop creates a socket per connection and spawns a session:
```cpp
capy::task<>
do_accept(
    corosio::io_context& ioc,
    corosio::tcp_acceptor& acc,
    capy::thread_pool& pool )
{
    for( ;; )
    {
        corosio::tcp_socket peer( ioc );
        auto [ec] = co_await acc.accept( peer );
        if( ec )
            break;
        capy::run_async( ioc.get_executor() )(
            do_session( std::move( peer ), pool ) );
    }
}
```
`run_async` is fire-and-forget — each session runs independently on the
`io_context`. The accept loop immediately continues waiting for the next
connection.
Main Function
```cpp
#include <cstdint>
#include <cstdlib>
#include <iostream>

int main( int argc, char* argv[] )
{
    if( argc != 2 )
    {
        std::cerr << "Usage: hash_server <port>\n";
        return 1;
    }
    auto port = static_cast<std::uint16_t>( std::atoi( argv[1] ) );

    corosio::io_context ioc;
    capy::thread_pool pool( 4 );
    corosio::tcp_acceptor acc( ioc, corosio::endpoint( port ) );

    std::cout << "Hash server listening on port " << port << "\n";

    capy::run_async( ioc.get_executor() )(
        do_accept( ioc, acc, pool ) );

    ioc.run();
    pool.join();
}
```
The `io_context` drives all network I/O on the main thread. The thread pool
runs four worker threads for hash computation. `pool.join()` waits for any
in-flight pool work after the event loop exits.
run_async vs run
These two functions serve different purposes:
| Function | Context | Purpose |
|---|---|---|
| `run_async` | Called from outside a coroutine (e.g., `main`) | Fire-and-forget: dispatches the task onto the executor |
| `run` | Called from inside a coroutine | Switches executors: runs the task on the target executor, then resumes the caller on its original executor |
In this example, `run_async` launches the accept loop from `main`, and
`run` switches individual hash computations to the thread pool from within
a session coroutine.
Testing
Start the server:
```
$ ./hash_server 8080
Hash server listening on port 8080
```
Send data with netcat:
```
$ echo "hello world" | nc -q1 localhost 8080
782e1488cd5a68b7
$ echo "test data 123" | nc -q1 localhost 8080
daf63590896c6e23
```
Each request reads one chunk, hashes it on the thread pool, and returns the 16-character hex digest.
Next Steps
- I/O Context Guide — Deep dive into event loop mechanics
- Acceptors Guide — Acceptor options and multi-port binding
- Sockets Guide — Socket operations in detail
- Composed Operations — Understanding `write()`