libuio
This is a fully featured async framework designed to run on Linux and tailored for high-performance
networking solutions. The implementation is inherently multi-threaded by design and leverages
io_uring under the hood as its I/O driver, which allows for significant efficiency gains over
the likes of epoll, poll, and select. The package is split into a handful of modules,
each handling a specific subset of the functionality needed.
At a high level, a simple TCP echo server works as you would expect:
use futures::StreamExt;
use libuio::{executor::ThreadPoolBuilder, net::TcpListener};

// First we need to create a new thread pool to execute on.
let pool = ThreadPoolBuilder::new()
    .name_prefix("executor")
    .create()
    .expect("Failed to configure thread pool.");

// Now we spawn our main async task, which will drive any/all async operations needed by our
// application.
pool.spawn_ok(async {
    // Since we are demonstrating a TCP server, let's start by creating a new TcpListener that
    // is set to listen on [::]:9091 and have a connection backlog of 1024.
    let mut listener = TcpListener::with_outstanding("[::]", 9091, 1024)
        .expect("Failed to configure listener.");

    let mut buf = vec![0u8; 1024];

    // First we create an async stream of incoming connections; this uses
    // opcode::AcceptMulti, which is a highly efficient implementation of the standard accept
    // loop. It will loop endlessly until dropped or there is an unrecoverable error.
    //
    // Note that you want to call incoming OUTSIDE of a loop like the one below, otherwise you
    // will be implicitly dropping/recreating the incoming future, which results in performance
    // worse than a `listener.accept().await` loop would provide.
    let mut incoming = listener.incoming();
    while let Some(conn) = incoming.next().await {
        // We have a connection or a network error.
        let mut conn = match conn {
            Ok(conn) => conn,
            Err(e) => {
                println!("Oh no we had an error: {}", e);
                continue;
            }
        };

        // Read some data in from the client.
        let (read, _) = match conn.recv(buf.as_mut_slice()).await {
            Ok(ret) => ret,
            Err(e) => {
                println!("Failed to receive from client: {}", e);
                continue;
            }
        };

        // Print the data to the screen.
        let s = String::from_utf8_lossy(&buf[..read]);
        println!("Client request: {}", s);

        // And finally echo it back to the client.
        conn.send(&buf[..read])
            .await
            .expect("Failed to respond to client.");
    }
});

pool.wait();

As the above example demonstrates, this is almost a direct drop-in replacement for std::net::TcpListener and std::net::TcpStream.
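For simpler cases, the comments above mention a plain `listener.accept().await` loop as an alternative to incoming(). The following is a rough sketch of that style; it assumes accept() resolves to an io::Result wrapping the same connection type that incoming() yields, which is not shown in this description, so treat that signature as an assumption.

use libuio::{executor::ThreadPoolBuilder, net::TcpListener};

let pool = ThreadPoolBuilder::new()
    .name_prefix("executor")
    .create()
    .expect("Failed to configure thread pool.");

pool.spawn_ok(async {
    let mut listener = TcpListener::with_outstanding("[::]", 9091, 1024)
        .expect("Failed to configure listener.");
    let mut buf = vec![0u8; 1024];

    // Each iteration submits a fresh accept rather than reusing a multishot accept,
    // which is simpler but less efficient than the incoming() stream shown above.
    loop {
        let mut conn = match listener.accept().await {
            Ok(conn) => conn,
            Err(e) => {
                println!("Failed to accept connection: {}", e);
                continue;
            }
        };

        // Echo a single read back to the client, as in the example above.
        if let Ok((read, _)) = conn.recv(buf.as_mut_slice()).await {
            let _ = conn.send(&buf[..read]).await;
        }
    }
});

pool.wait();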
Modules
- The context module handles the logic for sharing local thread context between tasks and objects that need that context. Namely, it exposes a Handle object as an application-wide static; this handle stores thread_local::ThreadLocal objects that can be injected as needed into logic throughout the application. There are two main ways of accessing the thread context: either accessing the top-level Handle via the statics::handle method, or using the helper method statics::io, which returns a reference to the local crate::uring::Uring directly. (A small, hypothetical usage sketch follows after this list.)
- This module is almost a direct copy of the futures::executor::ThreadPool, futures::executor::ThreadPoolBuilder, and futures::executor::unpark_mutex implementations. The reason for the copy is that we needed to implement a customized event loop in order to integrate the io_uring implementation, so we took the base implementation of the ThreadPool and added in the calls and logic necessary to integrate the super::uring::Uring. Otherwise the logic is identical, aside from the various cfg options that were no longer necessary. Really, all credit for this module should go to the developers of the futures crate.
- The net module handles all logic relating to creating and managing network I/O objects. Namely, it exposes a set of socket implementations to support networking applications and aims to be a drop-in replacement for the std::net or tokio::net modules.
- The underlying event loop implementation, built on top of tokio's io_uring crate. This module primarily exposes the Uring struct, which is the main implementation of the event loop. It also exposes a key ingredient of the implementation, the AsyncResult, which can be used as a oneshot return value from the Uring to a given future that is executing async I/O. (A generic sketch of that oneshot pattern follows below.)
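For illustration, here is a minimal sketch of reaching the thread context from inside a spawned task. Only the names statics::handle and statics::io come from the context module description above; the module path, return types, and exact signatures are assumptions and may differ from the real API.

// Hypothetical sketch; paths and signatures are assumptions based on the names above.
use libuio::context::statics;

pool.spawn_ok(async {
    // Grab the application-wide Handle that stores the thread-local state.
    let handle = statics::handle();

    // Or skip straight to the thread-local Uring driver for this executor thread.
    let uring = statics::io();

    // ... use `handle` / `uring` to wire custom I/O logic into the event loop ...
    let _ = (handle, uring);
});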
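The AsyncResult described above is essentially a oneshot handoff between the event loop and the future awaiting an I/O completion. The following is a generic, self-contained sketch of that pattern in plain Rust; it is not libuio's actual AsyncResult type, just an illustration of the mechanism.

use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, Waker};

// Shared slot between the event loop (completer) and the awaiting future.
struct Shared<T> {
    value: Option<T>,
    waker: Option<Waker>,
}

// Held by the event loop; filled in when the io_uring completion arrives.
struct Completer<T>(Arc<Mutex<Shared<T>>>);

// Held by the future that submitted the I/O; resolves once the slot is filled.
struct ResultFuture<T>(Arc<Mutex<Shared<T>>>);

fn oneshot<T>() -> (Completer<T>, ResultFuture<T>) {
    let shared = Arc::new(Mutex::new(Shared { value: None, waker: None }));
    (Completer(shared.clone()), ResultFuture(shared))
}

impl<T> Completer<T> {
    // Called by the event loop when the operation finishes.
    fn complete(&self, value: T) {
        let mut shared = self.0.lock().unwrap();
        shared.value = Some(value);
        if let Some(waker) = shared.waker.take() {
            waker.wake();
        }
    }
}

impl<T> Future for ResultFuture<T> {
    type Output = T;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<T> {
        let mut shared = self.0.lock().unwrap();
        match shared.value.take() {
            // The event loop already delivered the result.
            Some(value) => Poll::Ready(value),
            // Not ready yet; park until the completer wakes us.
            None => {
                shared.waker = Some(cx.waker().clone());
                Poll::Pending
            }
        }
    }
}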