//! `Future`-powered I/O at the core of Tokio
//!
//! This crate uses the `futures` crate to provide an event loop ("reactor
//! core") which can be used to drive I/O like TCP and UDP, spawned future
//! tasks, and other events like channels/timeouts. All asynchronous I/O is
//! powered by the `mio` crate.
//!
//! The concrete types provided in this crate are relatively bare bones but are
//! intended to be the essential foundation for further projects needing an
//! event loop. In this crate you'll find:
//!
//! * TCP, both streams and listeners
//! * UDP sockets
//! * Message queues
//! * Timeouts
//!
//! More functionality is likely to be added over time, but otherwise the crate
//! is intended to be flexible, with the `PollEvented` type accepting any
//! type that implements `mio::Evented`. For example, the `tokio-uds` crate
//! uses `PollEvented` to provide support for Unix domain sockets.
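//!
//! As a sketch of how that extension point might look (the constructor and
//! readiness methods of `PollEvented` are assumed here and may not match this
//! exact version of the crate, and `MyEvented` is a hypothetical
//! `mio::Evented` implementation):
//!
//! ```ignore
//! use futures::Async;
//! use tokio_core::reactor::{Core, PollEvented};
//!
//! let core = Core::new().unwrap();
//! let handle = core.handle();
//!
//! // Register the raw `mio` source with the event loop; `PollEvented` then
//! // tracks its read/write readiness.
//! let io = PollEvented::new(MyEvented::new(), &handle).unwrap();
//!
//! // Inside a future's `poll`, readiness can be checked without blocking; a
//! // `NotReady` result schedules the current task to be woken up later.
//! if let Async::NotReady = io.poll_read() {
//!     // try again once the task is notified
//! }
//! ```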
//!
//! Some other important tasks covered by this crate are:
//!
//! * The ability to spawn futures into an event loop. The `Handle` and `Pinned`
//!   types have a `spawn` method which allows executing a future on an event
//!   loop. The `Pinned::spawn` method crucially does not require the future
//!   itself to be `Send` (see the sketch after this list).
//!
//! * The `Io` trait serves as an abstraction for future crates to build on top
//!   of. This packages up `Read` and `Write` functionality as well as the
//!   ability to poll for readiness on both ends.
//!
//! * All I/O is futures-aware. If any action in this crate returns "not ready"
//!   or "would block", then the current future task is scheduled to receive a
//!   notification when it would otherwise make progress.
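//!
//! For instance, spawning a background future onto the loop looks roughly
//! like this (a minimal sketch; the `Item = ()`/`Error = ()` bound on the
//! spawned future is assumed here):
//!
//! ```no_run
//! extern crate futures;
//! extern crate tokio_core;
//!
//! use tokio_core::reactor::Core;
//!
//! fn main() {
//!     let mut core = Core::new().unwrap();
//!     let handle = core.handle();
//!
//!     // Hand a future off to the event loop; it runs concurrently with any
//!     // other spawned futures once the loop is turned.
//!     handle.spawn(futures::lazy(|| {
//!         println!("running on the event loop");
//!         Ok(())
//!     }));
//!
//!     // Spawned work only makes progress while the loop is running, e.g.
//!     // while `run` is blocked driving some other "main" future.
//!     core.run(futures::empty::<(), ()>()).unwrap();
//! }
//! ```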
//!
//! # Examples
//!
//! A simple TCP echo server:
//!
//! ```no_run
//! extern crate futures;
//! extern crate tokio_core;
//!
//! use std::env;
//! use std::net::SocketAddr;
//!
//! use futures::Future;
//! use futures::stream::Stream;
//! use tokio_core::io::{copy, Io};
//! use tokio_core::net::TcpListener;
//! use tokio_core::reactor::Core;
//!
//! fn main() {
//!     let addr = env::args().nth(1).unwrap_or("127.0.0.1:8080".to_string());
//!     let addr = addr.parse::<SocketAddr>().unwrap();
//!
//!     // Create the event loop that will drive this server
//!     let mut l = Core::new().unwrap();
//!     let handle = l.handle();
//!
//!     // Create a TCP listener which will listen for incoming connections
//!     let socket = TcpListener::bind(&addr, &handle).unwrap();
//!
//!     // Once we've got the TCP listener, inform the user that we're listening
//!     println!("Listening on: {}", addr);
//!
//!     // Pull out the stream of incoming connections and then for each new
//!     // one spin up a new task to copy the data.
//!     //
//!     // We use the `io::copy` future to copy all data from the
//!     // reading half onto the writing half.
//!     let done = socket.incoming().for_each(|(socket, addr)| {
//!         let pair = futures::lazy(|| Ok(socket.split()));
//!         let amt = pair.and_then(|(reader, writer)| copy(reader, writer));
//!
//!         // Once all that is done we print out how much we wrote, and then
//!         // critically we *spawn* this future which allows it to run
//!         // concurrently with other connections.
//!         handle.spawn(amt.then(move |result| {
//!             println!("wrote {:?} bytes to {}", result, addr);
//!             Ok(())
//!         }));
//!
//!         Ok(())
//!     });
//!
//!     // Execute our server (modeled as a future) and wait for it to
//!     // complete.
//!     l.run(done).unwrap();
//! }
//! ```
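//!
//! A matching client can be written with the combinators in `tokio_core::io`.
//! This is a minimal sketch: it assumes `TcpStream::connect(&addr, &handle)`
//! along with the `write_all` and `read_exact` helpers, which may differ in
//! this version of the crate:
//!
//! ```ignore
//! extern crate futures;
//! extern crate tokio_core;
//!
//! use std::net::SocketAddr;
//!
//! use futures::Future;
//! use tokio_core::io::{read_exact, write_all};
//! use tokio_core::net::TcpStream;
//! use tokio_core::reactor::Core;
//!
//! fn main() {
//!     let addr: SocketAddr = "127.0.0.1:8080".parse().unwrap();
//!     let mut core = Core::new().unwrap();
//!     let handle = core.handle();
//!
//!     // Connect, send a message, then read back the same number of bytes
//!     // that the echo server above sends in response.
//!     let msg = &b"hello world"[..];
//!     let client = TcpStream::connect(&addr, &handle)
//!         .and_then(move |socket| write_all(socket, msg))
//!         .and_then(|(socket, buf)| read_exact(socket, vec![0; buf.len()]))
//!         .map(|(_socket, buf)| buf);
//!
//!     // Drive the whole exchange to completion on the event loop.
//!     let echoed = core.run(client).unwrap();
//!     println!("echoed back: {:?}", echoed);
//! }
//! ```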
extern crate futures;
extern crate mio;
extern crate slab;
#[macro_use]
extern crate scoped_tls;
#[macro_use]
extern crate log;