//! Rust bindings for XLA (Accelerated Linear Algebra).
//!
//! [XLA](https://www.tensorflow.org/xla) is a compiler library for Machine Learning. It can be
//! used to run models efficiently on GPUs, TPUs, and on CPUs too.
//!
//! [`XlaOp`]s are used to build a computation graph. This graph can be built into an
//! [`XlaComputation`]. The computation can then be compiled into a [`PjRtLoadedExecutable`], and
//! this executable can be run on a [`PjRtClient`]. [`Literal`] values represent
//! tensors in host memory, while [`PjRtBuffer`]s represent views of tensors/memory on the
//! targeted device.
//!
//! The following example illustrates how to build and run a simple computation.
//! ```ignore
//! // Create a CPU client.
//! let client = xla::PjRtClient::cpu()?;
//!
//! // A builder object is used to store the graph of XlaOp.
//! let builder = xla::XlaBuilder::new("test-builder");
//!
//! // Build a simple graph summing two constants.
//! let cst20 = builder.constant_r0(20f32);
//! let cst22 = builder.constant_r0(22f32);
//! let sum = (cst20 + cst22)?;
//!
//! // Create a computation from the final node.
//! let sum = sum.build()?;
//!
//! // Compile this computation for the target device and then execute it.
//! let executable = client.compile(&sum)?;
//! let result = executable.execute::<xla::Literal>(&[])?;
//!
//! // Retrieve the resulting value.
//! let result = result[0][0].to_literal_sync()?.to_vec::<f32>()?;
//! ```
mod error;
mod wrappers;

pub use error::{Error, Result};
pub use wrappers::*;