//! Automatic differentiation for tensor operations.
//!
//! Requires Rust nightly.
//!
//! # Features
//!
//! - **Safe auto-grad** — Non-differentiable operations return a separate
//! type that cannot be back-propagated, revealing gaps in your computation graph
//! at compile time (see the sketch after this list).
//!
//! - **Broadcasting** — Tensors with differing but compatible shapes are
//! automatically broadcast to matching dimensions for most operations.
//!
//! - **Arbitrary inner types** — Tensors can store *almost* any data type, and
//! gradients can be computed for any inner type that satisfies [scalar::Real].
//!
//! - **Zero-copy views** — Tensors can be sliced, indexed, reshaped, transposed and
//! broadcast without copying any data in most situations.
//!
//! - **Graph recycling** — Computation graphs, created by tracing an eager computation,
//! can be reevaluated at a later time with new input data. They can also be serialized
//! and loaded elsewhere, without access to the original code.
//!
//! # Examples
//!
//! Evaluating and minimizing a non-linear function:
//! ```
//! use microtensor::{prelude::*, Tensor};
//!
//! // Create variables from tensors
//! let w = Tensor::randn(&[2, 16]).trained();
//! let b = Tensor::zeros(&[16]).trained();
//!
//! for _ in 0..100 {
//!   // Do some computation
//!   let x = Tensor::vec(&[1.0, 2.0]).tracked();
//!   let loss = ((x.mm(&w) + &b).sigmoid() - 0.5).sqr().mean(0);
//!
//!   // Compute gradients
//!   loss.backward();
//!
//!   // Nudge w and b in order to minimize loss
//!   for mut param in loss.parameters() {
//!     param -= param.grad().unwrap() * 0.01;
//!   }
//!
//!   // Reset gradients
//!   loss.reset();
//! }
//! ```
//!
//! Automatic broadcasting:
//! ```rust
//! use microtensor::{prelude::*, Tensor};
//!
//! let a = Tensor::arrange(&[2, 16], 0., 1.);
//! let b = Tensor::ones(&[2]);
//! let c = &a - b.unsqueeze(-1) + 1.;
//!
//! assert_eq!(a, c);
//! ```
//!
//! Generic return types:
//! ```rust
//! use microtensor::{prelude::*, Tensor};
//!
//! let t = Tensor::<f32>::randn(&[16]);
//! let _a: u8  = t.argmax(0).item();
//! let _b: u16 = t.argmax(0).item(); // argmax will produce a Tensor<u16> here
//! ```
//!
//! # Optional features
//!
//! Some features can be toggled in your `Cargo.toml`.
//!
//! - `unsafe` *(default)* — Accelerated matrix math using the [matrixmultiply] crate.
//! - `threading` *(default)* — Thread safety & multi-threaded operation over batch dimensions.
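//!
//! For example, the defaults can be overridden in the dependency declaration.
//! A sketch (substitute the actual version requirement you depend on):
//!
//! ```toml
//! [dependencies]
//! microtensor = { version = "*", default-features = false, features = ["threading"] }
//! ```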
//!
//! ## More examples
//! Check the `/examples` folder for more example code.
//!
// //! Generic inner types:
// //! ```rust
// //! use microtensor::{prelude::*, Tensor};
// //!
// //! let mask: Tensor<bool> = Tensor::randn(&[2, 16]).gt(&Tensor::scalar(1.0)).any(-1);
// //!
// //! assert_eq!(mask.shape().size(), 2);
// //!
// //! ```

#![feature(arc_unwrap_or_clone)]
#![feature(min_specialization)]

mod internal;
mod shape;
mod tensor;
mod variable;

pub mod ops;
pub mod scalar;
pub mod prelude;

pub use shape::Shape;
pub use tensor::Tensor;
pub use variable::{ Variable, Graph, UnaryOp, BinaryOp };