Mill-IO: Event-loop library for Rust!
Mill-IO is a lightweight event-loop library for Rust that provides efficient non-blocking I/O management without relying on heavyweight async runtimes. It's a reactor-based event loop built on top of mio. In this article, we'll discuss the problem it solves, why it exists, and how it works!

Introduction
Have you ever wondered how large-scale systems work internally? It’s a really important question. In engineering, there are many techniques for solving various problems. All of them have pros and cons; there is no perfect solution. Your job is to find the best solution for your case: one whose pros solve your problem and whose cons you can bear. One of those critical problems is building a huge multi-user system. Throughout this article, we will discuss this problem and how we can solve it with our library, mill-io, an event-loop library I worked on during my Summer of Bitcoin internship to be used inside Coinswap.
Our Problem
Imagine we have a simple system: a web server. It’s really simple; it’s single-threaded and handles requests by reading from and writing to a socket one at a time. There’s nothing special about it. With 1–10 users you probably wouldn’t face any problems and wouldn’t need a complex solution, but imagine those users become 1 million. Your website becomes painfully slow for your users, because every request has to wait in line behind all the others.
There’s a simple solution for this problem: the thread-per-connection approach. Basically, we process each connection inside an individual thread, concurrently, to enhance performance and decrease the latency of each request. But we still have performance issues. The OS manages thread execution with a scheduler. A thread has different states, and when it’s blocked, the OS stores its context, such as CPU registers (general-purpose registers, stack pointer, and program counter), its stack, and scheduling information, in memory. Before the thread runs again, the CPU loads its state from memory and stores the previous thread’s state, as before. This operation is called context switching. Unfortunately, it takes time, each thread consumes a lot of memory, and the more threads you have, the worse the performance gets. We have another struggle now!
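To make the trade-off concrete, here’s a minimal sketch of a thread-per-connection echo server in Rust, using only the standard library (the address and buffer size are arbitrary choices for illustration):

use std::io::{Read, Write};
use std::net::TcpListener;
use std::thread;

fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:8080")?;
    for stream in listener.incoming() {
        let mut stream = stream?;
        // One OS thread per connection: simple, but every thread costs
        // stack memory and adds context-switching overhead.
        thread::spawn(move || {
            let mut buf = [0u8; 1024];
            // Blocking read: this thread sleeps until data arrives.
            if let Ok(n) = stream.read(&mut buf) {
                let _ = stream.write_all(&buf[..n]); // echo the data back
            }
        });
    }
    Ok(())
}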
How to Solve This Problem?
To solve any problem, you’ll have an initial solution. This solution solves your problem but not in the best way. We find the best solution by optimizing this basic solution. So let’s optimize our solution!
The main problem we face is operations that block on I/O, like reading and writing files. These operations take a very long time compared to ordinary CPU work, and the thread is blocked until they finish.
You don’t need to worry about that. The OS provides some syscalls that you can use to build a better solution. What we need is to know when files (or sockets, in our web-server example) are ready for reading or writing. There are system calls that notify us when a specific event occurs (like data becoming available to read) so we can run a piece of code in response. After that, we need to limit the number of threads to reduce the context switching we saw in the thread-per-connection approach. Now we can say we have a good solution!
So, What’s the Event Loop?
Now that we understand the problem and know we need a better solution, let’s explore what an event loop actually is and how it solves our scalability issues.
The event loop is a programming construct that’s designed specifically to handle our I/O bottleneck problem. Instead of blocking threads while waiting for I/O operations, it uses a different approach: non-blocking I/O with event notifications.
Think of it like this: instead of having a waiter (thread) stand by each table (connection) waiting for customers to finish reading the menu, you have one smart coordinator who watches all the tables at once and only sends a waiter when a customer is actually ready to order. This is exactly how Node.js handles thousands of concurrent connections with just one thread. Libraries like libevent and libuv implement this pattern and power many high-performance systems.
Internals of Polling
You might wonder: “How does the event loop actually know when I/O is ready?” The answer lies in polling, a key technique that makes all of this possible.
Polling is the bridge between our application and the operating system’s I/O capabilities. Remember our problem with blocking I/O? Polling solves this by letting us ask the OS: “Tell me when any of these sockets are ready, but don’t make me wait if none are ready right now.”
It started with the select syscall introduced in UNIX. select is a system call that allows programs to monitor multiple file descriptors and check which ones are ready for I/O operations.
int select(int nfds, fd_set* readfds, fd_set* writefds, fd_set* errorfds, struct timeval* timeout);
Read more: select(2)
After that, better replacements appeared in each operating system, such as epoll for Linux, kqueue for BSD/macOS, and IOCP for Windows.
Check the documentation of each syscall for more information:
kqueue: https://man.openbsd.org/kqueue.2
IOCP: https://learn.microsoft.com/en-us/windows/win32/fileio/i-o-completion-ports
epoll: https://man7.org/linux/man-pages/man7/epoll.7.html
To get a feel for how code using these APIs looks, let’s take a look at this C snippet that uses Linux’s epoll:
#include <sys/epoll.h>

#define MAX_EVENTS 64

// create an epoll instance
int epfd = epoll_create1(0);

// register a socket for read events
// (socket_fd is an already-created, non-blocking socket)
struct epoll_event event;
event.events = EPOLLIN; // we want to know about reads
event.data.fd = socket_fd;
epoll_ctl(epfd, EPOLL_CTL_ADD, socket_fd, &event);

// wait for events (this is the magic!)
struct epoll_event events[MAX_EVENTS];
int num_events = epoll_wait(epfd, events, MAX_EVENTS, -1);

// handle ready events
for (int i = 0; i < num_events; i++) {
    if (events[i].events & EPOLLIN) {
        // socket is ready for reading - no blocking!
        handle_read(events[i].data.fd);
    }
}
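Libraries like mio, which mill-io builds on, wrap these platform-specific APIs (epoll, kqueue, IOCP) behind one portable interface. For comparison, here’s roughly the same pattern in Rust using mio; the address, token value, and capacity are arbitrary choices for this sketch:

use mio::net::TcpListener;
use mio::{Events, Interest, Poll, Token};

fn main() -> std::io::Result<()> {
    // Create a poll instance (epoll/kqueue/IOCP under the hood).
    let mut poll = Poll::new()?;
    let mut events = Events::with_capacity(128);

    // A non-blocking listener, identified by a token when events arrive.
    const SERVER: Token = Token(0);
    let mut listener = TcpListener::bind("127.0.0.1:9000".parse().unwrap())?;
    poll.registry()
        .register(&mut listener, SERVER, Interest::READABLE)?;

    loop {
        // Block until at least one registered source is ready.
        poll.poll(&mut events, None)?;
        for event in events.iter() {
            if event.token() == SERVER {
                // The listener is ready: accept without blocking.
                let (_conn, addr) = listener.accept()?;
                println!("accepted connection from {addr}");
            }
        }
    }
}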
Other Solutions
Event loops aren’t the only way to handle concurrency, especially for I/O-bound tasks. There are several other approaches to our problem; here is a brief overview of each, and you can read about them in more detail on your own:
- Multithreading: Using multiple threads, where each thread handles a separate task. This is the traditional approach to concurrency, which we discussed before. It has some problems but is still a good solution.
- Multiprocessing: Using multiple processes, each with its own memory space, to handle different tasks.
- Asynchronous I/O with Callbacks: This is the model the event loop uses, but without the central loop. Instead, each I/O request is made with a callback function that is invoked when the operation completes.
- Asynchronous I/O with Coroutines/Generators: This is a more modern approach that uses language features to make asynchronous code look like synchronous code, making it easier to read and write.
How Mill-IO Works and Why It Exists
This project was built during my Summer of Bitcoin internship (I will write an article about it soon) for the Citadel-tech organization to be used inside the Coinswap project to enhance its performance through better I/O operation management without relying on heavyweight async runtimes such as tokio-rs and async-std. The implementation leverages mio’s polling capabilities to create a reactor-based architecture with a configurable thread pool, ensuring Coinswap’s core logic remains runtime-agnostic while achieving optimal performance and resource utilization.
Mill-IO Architecture
Let’s break down how mill-io works behind the scenes. The event loop is made up of a few simple building blocks that work together to handle lots of connections efficiently. Here’s a quick overview:
- Application Layer: This is your code! You write event handlers and use the EventLoop API to register what you want to listen for.
- Mill-IO Core: The heart of the library. It includes the Reactor (the boss), PollHandle (the listener), ThreadPool (the workers), and a Handler Registry (keeps track of who does what).
- System Layer: This is where mill-io talks to the operating system using mio, which uses fast polling APIs like epoll, kqueue, or IOCP.
- Memory Management: Handles buffers and objects efficiently so things run smoothly.
Here’s a visual to help you see how everything connects:
1. PollHandle – Listening for Events
PollHandle is like a receptionist who keeps an eye on all your sockets and files. It uses the mio library to ask the OS, “Let me know when something interesting happens!” When an event is ready, PollHandle knows which handler to call.
Example:
struct NoopHandler;

impl EventHandler for NoopHandler {
    fn handle_event(&self, _event: &mio::event::Event) {
        println!("Handling the Noop event...");
    }
}

let poller = PollHandle::new().unwrap();
let mut src = TestSource::new();

poller
    .register(&mut src, mio::Token(1), mio::Interest::READABLE, NoopHandler)
    .expect("Failed to register src");
2. ThreadPool – The Workers
Once an event is ready, we need someone to do the work. That’s where the thread pool comes in. Instead of creating a new thread for every task (which is slow and uses lots of memory), we keep a small team of worker threads ready to go. When a job comes in, one of them grabs it and gets to work.
Example:
let pool = ThreadPool::new(4);
let counter = Arc::new(AtomicUsize::new(0));

for _ in 0..10 {
    let counter_clone = counter.clone();
    pool.exec(move || {
        counter_clone.fetch_add(1, Ordering::SeqCst);
    })
    .unwrap();
}
3. Reactor – The Boss
The Reactor is the coordinator. It listens for events using PollHandle and then tells the thread pool to run the right handler. It’s like a dispatcher making sure every event gets handled quickly and efficiently.
How it works:
- Polls for events
- Dispatches them to the thread pool
- Makes sure everything keeps running smoothly
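To see how these pieces fit together, here’s a simplified, self-contained sketch of the reactor pattern (not mill-io’s actual code): a mio poll loop that dispatches ready handlers to a small channel-based thread pool. Names like Pool and the handler map are illustrative assumptions:

use std::collections::HashMap;
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

use mio::net::TcpListener;
use mio::{Events, Interest, Poll, Token};

type Job = Box<dyn FnOnce() + Send + 'static>;

// A tiny channel-based thread pool: workers pull jobs off a shared queue.
struct Pool {
    tx: mpsc::Sender<Job>,
}

impl Pool {
    fn new(size: usize) -> Self {
        let (tx, rx) = mpsc::channel::<Job>();
        let rx = Arc::new(Mutex::new(rx));
        for _ in 0..size {
            let rx = Arc::clone(&rx);
            thread::spawn(move || loop {
                // Lock only long enough to take one job off the queue.
                let job = rx.lock().unwrap().recv();
                match job {
                    Ok(job) => job(),
                    Err(_) => break, // channel closed: shutting down
                }
            });
        }
        Pool { tx }
    }

    fn exec(&self, job: impl FnOnce() + Send + 'static) {
        self.tx.send(Box::new(job)).unwrap();
    }
}

fn main() -> std::io::Result<()> {
    let mut poll = Poll::new()?;
    let mut events = Events::with_capacity(128);
    let pool = Pool::new(4);

    const SERVER: Token = Token(0);
    let mut listener = TcpListener::bind("127.0.0.1:9000".parse().unwrap())?;
    poll.registry()
        .register(&mut listener, SERVER, Interest::READABLE)?;

    // The "handler registry": handlers keyed by token.
    let mut handlers: HashMap<Token, Arc<dyn Fn() + Send + Sync>> = HashMap::new();
    handlers.insert(SERVER, Arc::new(|| println!("server socket is readable")));

    // The reactor loop: poll, then hand each ready handler to the pool.
    // (A real handler would perform the I/O, e.g. accept the connection.)
    loop {
        poll.poll(&mut events, None)?;
        for event in events.iter() {
            if let Some(handler) = handlers.get(&event.token()) {
                let handler = Arc::clone(handler);
                pool.exec(move || handler());
            }
        }
    }
}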
4. EventLoop – The Simple Interface
You don’t need to worry about all the technical details. The EventLoop gives you an easy way to register events, start the loop, and shut things down when you’re done. It manages the Reactor and keeps everything running in the background.
How it feels to use:
- Register your events and handlers
- Start the event loop
- Your handlers get called automatically when events happen
- Stop the loop when you’re finished
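To make that concrete, here’s how using it might look. This is a hypothetical sketch: the names (EventLoop::new, register, run, shutdown) are inferred from the description above, not taken from mill-io’s verified public API:

// Hypothetical usage: method names are assumptions, check the real docs.
let event_loop = EventLoop::new(4)?; // e.g. 4 worker threads
event_loop.register(&mut socket, mio::Token(1), mio::Interest::READABLE, MyHandler)?;
event_loop.run(); // handlers now fire automatically as events arrive
// ...and when you're finished:
event_loop.shutdown();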
If you want to understand the full flow of the event loop, you can take a look at this diagram:
Conclusion
We started with a simple problem: handling millions of users on a web server without overwhelming the system. The traditional thread-per-connection approach creates too much overhead, while single-threaded processing can’t handle the load.
Event loops solve this by using non-blocking I/O and efficient polling. Instead of blocking threads or creating thousands of them, we use one thread to monitor many connections and only act when something is actually ready.
In the end, the most important lesson is that the best solutions grow out of simple ones.
Acknowledgments
This project was developed as part of the Summer of Bitcoin 2025 program. Thanks to:
- Citadel-tech and the Coinswap project for providing the use case and requirements.
- Summer of Bitcoin organizers and mentors for their guidance.
- The mio project for providing the foundational polling abstractions.
Special Thanks to my mentor, @Shourya742, for his kindness and for helping me a lot while working on this project.