//!
//! # Desync
//!
//! This is a concurrency library for Rust that protects data by scheduling operations in order
//! instead of locking and blocking threads. It provides a simple API that works well with Rust's
//! notion of lifetimes, alongside a concurrency model with a dramatically reduced set of moving
//! parts.
//! 
//! This approach has several advantages over traditional locking and message-passing approaches:
//! 
//!  * It's simpler: almost the entire set of thread methods and synchronisation primitives can
//!    be replaced with the two fundamental scheduling functions, `sync()` and `desync()`.
//!  * There's less boilerplate: code spends less time starting threads and sending messages, and
//!    more directly expresses its intent.
//!  * It's easier to reason about: scheduled operations are always performed in the order they're 
//!    queued so race conditions and similar issues due to out-of-order execution are both much rarer 
//!    and easier to debug.
//!  * Borrowing and asynchronous code can mix much more seamlessly than in other concurrency models.
//!  * It makes highly concurrent code easier to write: moving between performing operations
//!    synchronously and asynchronously is trivial, with no need to add code to start threads
//!    or communicate between them.
//! 
//! In addition to the two fundamental methods, desync provides methods for generating futures and
//! processing streams.
//! 
//! # Quick start
//! 
//! There is a single new synchronisation object: `Desync`. You create one like this:
//! 
//! ```
//! use desync::Desync;
//! let number = Desync::new(0);
//! ```
//! 
//! It supports two main operations. `desync` schedules a new job for the object that will run
//! on a background thread. It's useful for deferring long-running operations and for moving
//! updates into the background so they can run in parallel.
//! 
//! ```
//! # use desync::Desync;
//! # use std::thread;
//! # use std::time::*;
//! let number = Desync::new(0);
//! number.desync(|val| {
//!     // Long update here
//!     thread::sleep(Duration::from_millis(100));
//!     *val = 42;
//! });
//! 
//! // We can carry on what we're doing with the update now running in the background
//! ```
//! 
//! The other operation is `sync`, which schedules a job to run synchronously on the data structure.
//! This is useful for retrieving values from a `Desync`.
//! 
//! ```
//! # use desync::Desync;
//! # use std::thread;
//! # use std::time::*;
//! # let number = Desync::new(0);
//! # number.desync(|val| {
//! #     // Long update here
//! #     thread::sleep(Duration::from_millis(100));
//! #     *val = 42;
//! # });
//! let new_number = number.sync(|val| *val);           // = 42
//! # assert!(new_number == 42);
//! ```
//! 
//! `Desync` objects always run operations in the order that is provided, so all operations are
//! serialized from the point of view of the data that they contain. When combined with the ability
//! to perform operations asynchronously, this provides a useful way to immediately parallelize
//! long-running operations.
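//!
//! For example, the ordering guarantee means the result below is deterministic, even though the
//! updates run in the background (a small sketch using only the `desync` and `sync` calls shown
//! above):
//!
//! ```
//! use desync::Desync;
//!
//! let list = Desync::new(vec![]);
//!
//! // Each desync() returns immediately, but the jobs still run one at a
//! // time, in submission order, on the protected Vec
//! list.desync(|v| v.push(1));
//! list.desync(|v| v.push(2));
//! list.desync(|v| v.push(3));
//!
//! // sync() waits for every job queued before it, so the result is
//! // deterministic even though the pushes ran in the background
//! let result = list.sync(|v| v.clone());
//! assert!(result == vec![1, 2, 3]);
//! ```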
//! 
//! The `future_sync()` method returns a boxed `Future` that can be used with other futures-aware
//! libraries. It's conceptually the same as `sync()`, except that it doesn't block while waiting
//! for the operation to complete:
//! 
//! ```
//! # extern crate futures;
//! # extern crate desync;
//! # fn main() {
//! # use desync::Desync;
//! # use std::thread;
//! # use std::time::*;
//! # use futures::{FutureExt};
//! # use futures::executor;
//! # use futures::future;
//! # let number = Desync::new(0);
//! # number.desync(|val| {
//! #     // Long update here
//! #     thread::sleep(Duration::from_millis(100));
//! #     *val = 42;
//! # });
//! let future_number = number.future_sync(|val| future::ready(*val).boxed());
//! assert!(executor::block_on(async { future_number.await.unwrap() }) == 42);
//! # }
//! ```
//! 
//! Note that this is equivalent to just `number.sync(|val| *val)`, so it's mainly useful for
//! interacting with other code that's already using futures. The `after()` function is also
//! provided for updating the contents of a `Desync` with the result of a future: it preserves the
//! strict order-of-operations semantics, so operations scheduled after an `after()` won't start
//! until that operation has completed.
//! 
//! # Pipes and streams
//! 
//! As well as support for futures, Desync provides support for streams. The `pipe_in()` and `pipe()`
//! functions provide a way to process stream data in a desync object as it arrives: `pipe_in()`
//! simply processes the data, while `pipe()` also produces an output stream of results.
//! 
//! `pipe()` is quite useful as a way to provide asynchronous access to synchronous code: it can be
//! used to create a channel that sends requests to an asynchronous target and retrieves the results
//! via its output stream. (Unlike a traditional channel-based approach, the scheduling and channel
//! maintenance do not need to be implemented explicitly.)
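//!
//! As a rough sketch of `pipe_in()`, assuming a callback that receives the data and each stream
//! item and returns a boxed future (the exact callback signature may differ between versions, so
//! check the `pipe` module documentation):
//!
//! ```
//! use desync::{Desync, pipe_in};
//! use futures::future;
//! use futures::{FutureExt};
//! use futures::stream;
//! use std::sync::Arc;
//!
//! // pipe_in() takes an Arc so the pipe can share ownership of the Desync
//! let total = Arc::new(Desync::new(0));
//!
//! // Each stream item is processed in arrival order on the Desync's queue; the
//! // callback returns a future that completes when the item has been handled
//! pipe_in(Arc::clone(&total), stream::iter(vec![1, 2, 3]), |total, item| {
//!     *total += item;
//!     future::ready(()).boxed()
//! });
//! ```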
//! 

#![warn(bare_trait_objects)]

#[macro_use]
extern crate lazy_static;
extern crate futures;

#[cfg(not(target_arch = "wasm32"))]
extern crate num_cpus;

pub mod scheduler;
pub mod desync;
pub mod pipe;

pub use self::scheduler::TrySyncError;
pub use self::desync::*;
pub use self::pipe::*;