//! API bindings for Faktory workers and job producers.
//!
//! This crate provides API bindings for the language-agnostic
//! [Faktory](https://github.com/contribsys/faktory) work server. For a more detailed system
//! overview of the work server, what jobs are, and how they are scheduled, see the Faktory docs.
//!
//! # System overview
//!
//! At a high level, Faktory has two primary concepts: jobs and workers. Jobs are pieces of work
//! that clients want to have executed, and workers are the things that eventually execute those
//! jobs. A client enqueues a job, Faktory sends the job to an available worker (and waits if
//! they're all busy), the worker executes the job, and eventually reports back to Faktory that the
//! job has completed.
//!
//! Jobs are self-contained, and consist of a job *type* (a string), arguments for the job, and
//! bits and pieces of metadata. When a job is scheduled for execution, the worker is given this
//! information, and uses the job type to figure out how to execute the job. You can think of job
//! execution as a remote function call (or RPC) where the job type is the name of the function,
//! and the job arguments are, perhaps unsurprisingly, the arguments to the function.
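//!
//! As a plain-Rust sketch of this analogy (no Faktory types involved; the handler table
//! below is only a stand-in for a worker's registry of job types):
//!
//! ```
//! use std::collections::HashMap;
//!
//! // Map a job type (the "function name") to a handler taking the job's arguments.
//! let mut handlers: HashMap<&str, fn(&[&str]) -> String> = HashMap::new();
//! handlers.insert("greet", |args| format!("hello, {}", args[0]));
//!
//! // Executing a job amounts to looking up its type and calling the matching
//! // handler with its arguments, much like dispatching an RPC by function name.
//! let result = handlers["greet"](&["world"]);
//! assert_eq!(result, "hello, world");
//! ```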
//!
//! In this crate, you will find bindings both for submitting jobs (clients that *produce* jobs)
//! and for executing jobs (workers that *consume* jobs). The former can be done by making a
//! `Client`, whereas the latter is done with a `Worker`. See the documentation for each for
//! more details on how to use them.
//!
//! # Encrypted connections (TLS)
//!
//! To connect to a Faktory server hosted over TLS, add the `tls` feature, and see the
//! documentation for `TlsStream`, which can be supplied to [`Client::connect_with`] and
//! [`WorkerBuilder::connect_with`].
//!
//! # Examples
//!
//! If you want to **submit** jobs to Faktory, use `Client`.
//!
//! ```no_run
//! # tokio_test::block_on(async {
//! use faktory::{Client, Job};
//! let mut client = Client::connect().await.unwrap();
//! client.enqueue(Job::new("foobar", vec!["z"])).await.unwrap();
//!
//! let (enqueued_count, errors) = client
//!     .enqueue_many([Job::new("foobar", vec!["z"]), Job::new("foobar", vec!["z"])])
//!     .await
//!     .unwrap();
//! assert_eq!(enqueued_count, 2);
//! assert_eq!(errors, None);
//! });
//! ```
//!
//! If you want to **accept** jobs from Faktory, use `Worker`.
//!
//! ```no_run
//! # tokio_test::block_on(async {
//! use async_trait::async_trait;
//! use faktory::{Job, JobRunner, Worker};
//! use std::io;
//!
//! struct DomainEntity(i32);
//!
//! impl DomainEntity {
//!     fn new(buzz: i32) -> Self {
//!         DomainEntity(buzz)
//!     }
//! }
//!
//! #[async_trait]
//! impl JobRunner for DomainEntity {
//!     type Error = io::Error;
//!
//!     async fn run(&self, job: Job) -> Result<(), Self::Error> {
//!         println!("{:?}, buzz={}", job, self.0);
//!         Ok(())
//!     }
//! }
//!
//! let mut w = Worker::builder()
//!     .register("fizz", DomainEntity::new(1))
//!     .register_fn("foobar", |job| async move {
//!         println!("{:?}", job);
//!         Ok::<(), io::Error>(())
//!     })
//!     .register_blocking_fn("fibo", |job| {
//!         std::thread::sleep(std::time::Duration::from_millis(1000));
//!         println!("{:?}", job);
//!         Ok::<(), io::Error>(())
//!     })
//!     .with_rustls() // available on `rustls` feature only
//!     .connect()
//!     .await
//!     .unwrap();
//!
//! if let Err(e) = w.run(&["default"]).await {
//!     println!("worker failed: {}", e);
//! }
//! # });
//! ```
#[macro_use]
extern crate serde_derive;

pub use crate::error::Error;
pub use crate::proto::{Client, Job};
pub use crate::worker::{JobRunner, Worker, WorkerBuilder};
/// Constructs only available with the enterprise version of Faktory.
#[cfg(feature = "ent")]
pub mod ent;