An asynchronous SSH server and client library, based on tokio/futures.
The normal way to use this library, both for clients and for
servers, is by creating *handlers*, i.e. types that implement
`client::Handler` for clients and `server::Handler` for
servers.
# Writing servers
In the specific case of servers, a server must implement
`server::Server`, a trait for creating new `server::Handler`s. The
main type to look at in the `server` module is `Session` (and
`Config`, of course).
Here is an example server, which forwards input from each client
to all other clients:
```
extern crate thrussh;
extern crate thrussh_keys;
extern crate futures;
extern crate tokio;
use std::sync::{Mutex, Arc};
use thrussh::*;
use thrussh::server::{Auth, Session};
use thrussh_keys::*;
use std::collections::HashMap;
use futures::Future;
#[tokio::main]
async fn main() {
let client_key = thrussh_keys::key::KeyPair::generate_ed25519().unwrap();
let client_pubkey = Arc::new(client_key.clone_public_key());
let mut config = thrussh::server::Config::default();
config.connection_timeout = Some(std::time::Duration::from_secs(3));
config.auth_rejection_time = std::time::Duration::from_secs(3);
config.keys.push(thrussh_keys::key::KeyPair::generate_ed25519().unwrap());
let config = Arc::new(config);
let sh = Server{
client_pubkey,
clients: Arc::new(Mutex::new(HashMap::new())),
id: 0
};
tokio::time::timeout(
std::time::Duration::from_secs(1),
thrussh::server::run(config, "0.0.0.0:2222", sh)
).await.unwrap_or(Ok(()));
}
#[derive(Clone)]
struct Server {
    client_pubkey: Arc<thrussh_keys::key::PublicKey>,
    clients: Arc<Mutex<HashMap<(usize, ChannelId), thrussh::server::Handle>>>,
id: usize,
}
impl server::Server for Server {
type Handler = Self;
    fn new(&mut self, _: Option<std::net::SocketAddr>) -> Self {
let s = self.clone();
self.id += 1;
s
}
}
impl server::Handler for Server {
    type FutureAuth = futures::future::Ready<Result<(Self, Auth), anyhow::Error>>;
    type FutureUnit = futures::future::Ready<Result<(Self, Session), anyhow::Error>>;
    type FutureBool = futures::future::Ready<Result<(Self, Session, bool), anyhow::Error>>;
fn finished_auth(mut self, auth: Auth) -> Self::FutureAuth {
futures::future::ready(Ok((self, auth)))
}
fn finished_bool(self, b: bool, s: Session) -> Self::FutureBool {
futures::future::ready(Ok((self, s, b)))
}
fn finished(self, s: Session) -> Self::FutureUnit {
futures::future::ready(Ok((self, s)))
}
fn channel_open_session(self, channel: ChannelId, session: Session) -> Self::FutureUnit {
{
let mut clients = self.clients.lock().unwrap();
clients.insert((self.id, channel), session.handle());
}
self.finished(session)
}
fn auth_publickey(self, _: &str, _: &key::PublicKey) -> Self::FutureAuth {
self.finished_auth(server::Auth::Accept)
}
fn data(self, channel: ChannelId, data: &[u8], mut session: Session) -> Self::FutureUnit {
{
let mut clients = self.clients.lock().unwrap();
for ((id, channel), ref mut s) in clients.iter_mut() {
if *id != self.id {
s.data(*channel, CryptoVec::from_slice(data));
}
}
}
session.data(channel, CryptoVec::from_slice(data));
self.finished(session)
}
}
```
Note the call to `session.handle()`, which makes it possible to keep
a handle to a client outside the event loop. This feature is
implemented internally with asynchronous mpsc channels between the
handle and the event loop.
Note that this is just a toy server. In particular:
- It doesn't handle the case where `s.data` returns an error,
i.e. when the client has disappeared (see the sketch below for one
way to deal with that).
- Each new connection increments the `id` field. Even though it
would take an enormous number of connections over a very long time
to overflow it, there are probably better ways to assign
identifiers and avoid collisions.
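As an illustration of both points, here is a hypothetical helper that
uses the stored handles from outside the event loop and drops the
entries whose client has disappeared. It is only a sketch: it reuses
the names from the example above (`clients`, `server::Handle`,
`CryptoVec::from_slice`) and assumes, as the first note suggests, that
`Handle::data` returns a `Result`; the exact signature may differ
between versions.
```
use std::collections::HashMap;
use std::sync::Mutex;
use thrussh::{server, ChannelId, CryptoVec};

/// Broadcast a server-generated message to every connected client, and
/// remove the handles whose client has disappeared (i.e. those for which
/// `data` returns an error).
fn announce(clients: &Mutex<HashMap<(usize, ChannelId), server::Handle>>, msg: &str) {
    let mut clients = clients.lock().unwrap();
    clients.retain(|&(_, channel), handle| {
        handle.data(channel, CryptoVec::from_slice(msg.as_bytes())).is_ok()
    });
}
```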
# Implementing clients
Maybe surprisingly, the data types used by Thrussh to implement
clients are somewhat more complicated than those for servers. This is
mostly because clients are generally used both synchronously (in the
case of SSH, think of sending a shell command and waiting for its
output) and asynchronously (because the server may send unsolicited
messages), and hence need to expose both interfaces.
The important types in the `client` module are `Session` and
`Connection`. A `Connection` is typically used to send commands to
the server and wait for responses, and contains a `Session`. The
`Session` is passed to the `Handler` when the client receives
data.
```
extern crate thrussh;
extern crate thrussh_keys;
extern crate futures;
extern crate tokio;
extern crate env_logger;
use std::sync::Arc;
use thrussh::*;
use thrussh::server::{Auth, Session};
use thrussh_keys::*;
use futures::Future;
use std::io::Read;
struct Client {
}
impl client::Handler for Client {
    type FutureUnit = futures::future::Ready<Result<(Self, client::Session), anyhow::Error>>;
    type FutureBool = futures::future::Ready<Result<(Self, bool), anyhow::Error>>;
fn finished_bool(self, b: bool) -> Self::FutureBool {
futures::future::ready(Ok((self, b)))
}
fn finished(self, session: client::Session) -> Self::FutureUnit {
futures::future::ready(Ok((self, session)))
}
fn check_server_key(self, server_public_key: &key::PublicKey) -> Self::FutureBool {
println!("check_server_key: {:?}", server_public_key);
self.finished_bool(true)
}
fn channel_open_confirmation(self, channel: ChannelId, max_packet_size: u32, window_size: u32, session: client::Session) -> Self::FutureUnit {
println!("channel_open_confirmation: {:?}", channel);
self.finished(session)
}
fn data(self, channel: ChannelId, data: &[u8], session: client::Session) -> Self::FutureUnit {
println!("data on channel {:?}: {:?}", channel, std::str::from_utf8(data));
self.finished(session)
}
}
#[tokio::main]
async fn main() {
let config = thrussh::client::Config::COMPRESSED;
let config = Arc::new(config);
let sh = Client{};
let key = thrussh_keys::key::KeyPair::generate_ed25519().unwrap();
let mut agent = thrussh_keys::agent::client::AgentClient::connect_env().await.unwrap();
agent.add_identity(&key, &[]).await.unwrap();
let mut session = thrussh::client::connect(config, "localhost:22", sh).await.unwrap();
if session.authenticate_future(std::env::var("USER").unwrap(), key.clone_public_key(), agent).await.unwrap().1 {
let mut channel = session.channel_open_session().await.unwrap();
channel.data(b"Hello, world!").await.unwrap();
if let Some(msg) = channel.wait().await {
println!("{:?}", msg)
}
}
}
```
# Using non-socket IO / writing tunnels
The easy way to implement SSH tunnels, like OpenSSH's `ProxyCommand`,
is to use the `thrussh-config` crate and its `Stream::tcp_connect` or
`Stream::proxy_command` constructors. That crate is a very
lightweight layer above Thrussh: it only implements, for external
commands, the traits that are otherwise implemented for sockets.
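For example, here is a hypothetical sketch of a `ProxyCommand`-style
connection, reusing the `Client` handler from the client example
above. The exact signatures of `thrussh_config::Stream::proxy_command`
and of the `connect_stream`-style entry point (which takes an
already-established transport instead of opening a TCP socket itself)
are assumptions and may differ between versions, and the proxy command
line is purely illustrative.
```
use std::sync::Arc;

async fn connect_through_proxy(
    config: Arc<thrussh::client::Config>,
    handler: Client,
) -> Result<(), anyhow::Error> {
    // Run `ssh -W target:22 jumphost` and use its stdin/stdout as the
    // transport, exactly like OpenSSH's ProxyCommand option.
    let stream =
        thrussh_config::Stream::proxy_command("ssh", &["-W", "target:22", "jumphost"]).await?;
    let _session = thrussh::client::connect_stream(config, stream, handler).await?;
    // From here on, authenticate and open channels exactly as in the
    // client example above.
    Ok(())
}
```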
# The SSH protocol
If we exclude the key exchange and authentication phases, handled
by Thrussh behind the scenes, the rest of the SSH protocol is
relatively simple: clients and servers open *channels*, which are
just integers used to handle multiple requests in parallel in a
single connection. Once a client has obtained a `ChannelId` by
calling one of the many `channel_open_…` methods of
`client::Connection`, the client may send exec requests and data
to the server.
A simple client just asking the server to run one command will
usually start by calling
`client::Connection::channel_open_session`, then
`client::Connection::exec`, then possibly
`client::Connection::data` a number of times to send data to the
command's standard input, and finally `Connection::channel_eof`
and `Connection::channel_close`.
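As a hedged sketch, the same sequence looks roughly as follows when
written against the channel handle returned by `channel_open_session`
in the client example above (the `Connection` methods listed here take
a `ChannelId` instead). The `exec` and `eof` calls and the
`ChannelMsg::Data` variant mirror the requests described in this
section, but their exact signatures are assumptions and may differ
between versions.
```
// Continues from the authenticated `session` of the client example above.
let mut channel = session.channel_open_session().await.unwrap();
channel.exec(true, "wc -c").await.unwrap();  // "exec" request: run one command
channel.data(b"some input").await.unwrap();  // data for the command's standard input
channel.eof().await.unwrap();                // no more input will follow
// `wait` returns `None` once the channel has been closed.
while let Some(msg) = channel.wait().await {
    if let thrussh::ChannelMsg::Data { ref data } = msg {
        println!("output: {:?}", std::str::from_utf8(data));
    }
}
```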
# Design principles
The main goals of this library are conciseness and a small, readable
codebase. Moreover, this library is split
between Thrussh, which implements the main logic of SSH clients
and servers, and Thrussh-keys, which implements calls to
cryptographic primitives.
One non-goal is to implement all possible cryptographic algorithms
published since the initial release of SSH. Technical debt is
easily acquired, and we would need a very strong reason to go
against this principle. If you are designing a system from
scratch, we urge you to consider recent cryptographic primitives
such as Ed25519 for public key cryptography, and Chacha20-Poly1305
for symmetric cryptography and MAC.
# Internal details of the event loop
It might seem a little odd that the read/write methods for server or
client sessions often return neither `Result` nor `Future`. This is
because data sent to the remote side is buffered: it needs to be
encrypted first, and encryption works on buffers, and for many
algorithms, not in place.
Hence, the event loop keeps waiting for incoming packets and reacts
to them by calling the provided `Handler`, which fills some
buffers. If the buffers are non-empty, the event loop then sends
them to the socket, flushes the socket, empties the buffers and
starts again. In the special case of the server, unsolicited
messages sent through a `server::Handle` are processed when there
is no incoming packet to read.