# microtensor
Automatic differentiation for tensor operations.
WIP: Don't use in production!
## Features
- Safe auto-grad — Non-differentiable operations return a separate type that cannot be back-propagated, revealing gaps in your computation graph at compile time. (A toy sketch of this idea follows the list.)
- Broadcasting — Tensors with differing but compatible shapes are broadcast to matching dimensions automatically for most operations.
- Arbitrary inner types — Tensors can store almost any data type and compute gradients for any inner type that satisfies [scalar::Real].
- Zero-copy views — Tensors may be sliced, indexed, reshaped, transposed and broadcast without actually copying any data in most situations. (See the shape-and-strides sketch after the list.)
- Graph recycling — Computation graphs, created by tracing an eager computation, can be reevaluated at a later time with new input data. They can also be serialized and loaded elsewhere, without access to the original code. (A minimal traced-graph sketch follows the list.)
## Examples
Evaluating and minimizing a non-linear function:
```rust
use microtensor::{ops::*, Tensor};

// Create variables from tensors
let w = Tensor::randn(&[2, 16]).trained();
let b = Tensor::zeros(&[16]).trained();

for _ in 0..100 {
  // evaluate the function here, back-propagate the loss,
  // and nudge `w` and `b` along their gradients
}
```
Automatic broadcasting:
```rust
use microtensor::{ops::*, Tensor};

let a = Tensor::arrange(&[2, 3], 0., 1.);
let b = Tensor::ones(&[3]);
let c = &a - b.unsqueeze(0) + 1.;

assert_eq!(c.shape().dims, vec![2, 3]);
```
Generic return types:
```rust
use microtensor::{ops::*, Tensor};

let t = Tensor::randn(&[16]);

let _a: u8 = t.argmax(0).item();
let _b: u16 = t.argmax(0).item(); // argmax will produce a Tensor<u16> here
```
### More examples

Check the /examples folder for more example code.
## License
MIT