Crate libuio

§libuio

This is a fully featured async framework designed to run on Linux and tailored for high performance networking solutions. The implementation is inherently multi-threaded by design and leverages io_uring under the hood as an I/O driver, which allows for substantial efficiency gains over the likes of epoll, poll, and select. The package is split into a handful of modules, each handling a specific subset of the functionality needed.

For detailed examples, see the examples directory in the root of this repository.

At a high level, a simple TCP echo server works as you would expect:

use std::io;

use futures::StreamExt;

use libuio::net::TcpListener;

#[libuio::main]
async fn main() -> io::Result<()> {
    // Since we are demonstrating a TCP server, let's start by creating a new TcpListener that
    // is set to listen on [::]:9091 and have a connection backlog of 1024.
    let mut listener = TcpListener::with_outstanding("[::]", 9091, 1024)?;

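    // Reusable buffer for holding data received from clients.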
    let mut buf = vec![0u8; 1024];

    println!("Listening on: {}", listener.local_addr());

    // Grab an async stream of incoming connections. This uses opcode::AcceptMulti,
    // which is a highly efficient implementation of the standard accept loop and
    // will run endlessly until dropped or an unrecoverable error occurs.
    //
    // Note that you want to call incoming() OUTSIDE of a loop like the one below,
    // otherwise you will be implicitly dropping/recreating the incoming future,
    // which results in worse performance than a `listener.accept().await` loop
    // would provide.
    let mut incoming = listener.incoming();
    while let Some(conn) = incoming.next().await {
        let mut conn = match conn {
            Ok(conn) => conn,
            Err(e) => {
                println!("Oh no we had an error: {}", e);
                continue;
            }
        };

        println!("Got connection from: {}", conn.peer_addr());

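        // Read the client's request into the shared buffer; recv returns the number of bytes read.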
        let read = match conn.recv(buf.as_mut_slice()).await {
            Ok(ret) => ret,
            Err(e) => {
                println!("Failed to receive from client: {}", e);
                continue;
            }
        };

        let s = String::from_utf8_lossy(&buf[..read]);

        println!("Client request: {}", s);

        conn.send(&buf[..read])
            .await
            .expect("Failed to respond to client.");
    }
    Ok(())
}

Similarly, here is an example TCP client interacting with the above server:

use std::{io, net::SocketAddr};

use libuio::net::TcpStream;

#[libuio::main]
async fn main() -> io::Result<()> {
    println!("Connecting to remote server.");

    let remote_addr: SocketAddr = "[::1]:9091".parse().unwrap();
    let mut client = TcpStream::new(false)?;

    // Connect to the defined remote host.
    client.connect(&remote_addr).await?;

    println!(
        "Connected to remote peer {}, local address: {}",
        client.peer_addr(),
        client.local_addr(),
    );

    // Send some data to the remote host.
    client.send("Hello from client!".as_bytes()).await?;

    // Now read back anything the server sent and then exit.
    let mut buf = vec![0u8; 1024];
    let read = client.recv(buf.as_mut_slice()).await?;

    let s = String::from_utf8_lossy(&buf[..read]);
    println!("Server response: {}", s);
    Ok(())
}

As the above examples demonstrate, these types are almost a direct drop-in replacement for std::net::TcpListener and std::net::TcpStream.
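
For comparison, here is roughly the same echo loop written against the blocking standard library types. This snippet uses only std::net and std::io (it is not part of libuio) and is included purely to show how closely the APIs line up:

use std::io::{Read, Write};
use std::net::TcpListener;

fn main() -> std::io::Result<()> {
    // Blocking equivalent of the async echo server above.
    let listener = TcpListener::bind("[::]:9091")?;
    let mut buf = vec![0u8; 1024];

    for conn in listener.incoming() {
        let mut conn = conn?;
        // Echo a single request back to the client.
        let read = conn.read(buf.as_mut_slice())?;
        conn.write_all(&buf[..read])?;
    }
    Ok(())
}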

Re-exports§

pub use executor::spawn;
pub use executor::ThreadPool;
pub use executor::ThreadPoolBuilder;

Modules§

context
The context module handles the logic for sharing thread-local context between the tasks and objects that need it. Namely, it exposes a Handle object as an application-wide static; this handle stores thread_local::ThreadLocal objects that can be injected as needed into logic throughout the application. There are two main ways of accessing the thread context: either accessing the top level Handle via the statics::handle method, or using the helper method statics::io, which returns a reference to the local crate::uring::Uring directly.
executor
This module is almost a direct copy of the futures::executor::ThreadPool, futures::executor::ThreadPoolBuilder, and futures::executor::unpark_mutex implementations. The reason for the copy is that we needed to implement a customized event loop in order to integrate the io_uring implementation, so we took the base implementation of the ThreadPool and added the calls and logic necessary to integrate the super::uring::Uring. Otherwise the logic is identical, minus the various cfg options that were no longer necessary. Really, all credit for this module should go to the developers of the futures crate. See the usage sketch after this module list for how the re-exported pool types fit together.
io_uring
The crate::io_uring module represents a simplified interface on top of the io_uring::IoUring implementation. This module distills the implementation down to three components.
net
The net module handles all logic relating to creating and managing network I/O objects. Namely, this exposes a set of socket implementations to support networking applications and aims to be a drop-in replacement for the std::net or tokio::net modules.
sync
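
Because the executor module is described above as an almost direct copy of futures::executor, building a pool explicitly presumably mirrors the futures API. The following is a minimal sketch under that assumption; the builder methods (pool_size, create) and spawn_ok are the futures::executor names and have not been confirmed against libuio's actual signatures:

use libuio::ThreadPoolBuilder;

fn main() -> std::io::Result<()> {
    // Assumption: the builder mirrors futures::executor::ThreadPoolBuilder.
    let pool = ThreadPoolBuilder::new().pool_size(4).create()?;

    // Assumption: spawn_ok is carried over from futures::executor::ThreadPool.
    pool.spawn_ok(async {
        println!("running on the libuio thread pool");
    });

    // A real program would keep the pool alive while tasks are outstanding
    // instead of returning immediately.
    Ok(())
}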

Attribute Macros§

main