native_neural_network 0.2.0

A `no_std`-compatible Rust library for native neural networks (`.rnn`)
# native_neural_network – Quickstart

## Purpose

This quickstart introduces the native_neural_network crate in a concise, approachable way. It explains what the crate is for and what it enables, and it is aimed at developers building custom inference runtimes or multi-language integrations. The guide is intentionally minimal: a short introduction to help you get started safely.

## Prerequisites

- Minimum Rust: any recent stable toolchain (the latest stable release is recommended).
- The crate supports no_std-compatible usage on appropriate targets.
- You need a model file in `.rnn` format to run inference (this quickstart does not provide built-in models).
- No UI is included.
- No pretrained models are bundled in this repository.

## Installation

- Add the crate to your `Cargo.toml` dependencies:

```toml
[dependencies]
native_neural_network = "0.2"
```

  (A path dependency such as `{ path = "." }` only applies when building inside the repository itself.)

- No special features are required for the minimal usage shown here. For no_std embedding, enable the appropriate target and features in your project configuration.
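For no_std targets, a common Cargo convention (assumed here, not confirmed for this crate's feature set) is to disable default features:

```toml
[dependencies]
# Hypothetical configuration: the exact feature names depend on the crate,
# but `default-features = false` is the usual pattern for no_std builds.
native_neural_network = { version = "0.2", default-features = false }
```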

## Minimal workflow

1. Load a `.rnn` model file into memory.
2. Initialize or decode the model metadata (inspect required buffer sizes).
3. Allocate caller-owned buffers and scratch memory per the required sizes.
4. Provide input tensors to the inference call.
5. Execute the inference step.
6. Read outputs from the output buffer.

## Access to metadata

- The crate exposes functions to inspect model topology and decoded counts (layers, weights, biases).
- You can access layer descriptors and raw weight/bias arrays after decoding.
- Kernel-level information (e.g., required scratch sizes, execution plan) is available to callers.
- Decoded structures can be exported or consumed programmatically for integration or analysis.

## What this guide does not cover

- Advanced training or training loops
- Optimizer tuning and low-level training optimizations
- Detailed profiling and performance tuning
- Quantization pipelines and tooling
- External visualization and tooling integration
- FFI integration details (see the main README for those topics)

## Philosophy

Modular, deterministic building blocks for inference.
Designed for predictable memory usage and easy wrapping by multi-language runtimes.
100% Clippy clean.