native_neural_network_std 0.0.1

Ergonomic std wrapper for the `native_neural_network` crate (no_std) — std-friendly re-exports and utilities.

# native_neural_network_std

native_neural_network_std is an ergonomic `std`-focused wrapper around the `native_neural_network` crate (which targets `no_std`). It provides owned, convenient APIs and small helper utilities for users working in the standard (`std`) Rust environment.

What this crate provides

- Ergonomic `Std` types and re-exports for common upstream functionality, for example: `ModelStd`, `TensorStd`, `NeuralNetworkStd`, `RnnStd`.
- Owned APIs (no public zero-copy) that simplify memory ownership and calls into the native engine.
- Safe conversion helpers between `Std` types and the internal native representations when calling upstream functions (temporary buffers, `as_native_plan` helpers, etc.).
- Convenient wrappers for model format (encode/decode), initializers, quantization helpers, inference engine calls, optimizers, crypto helpers, and more.

Detailed documentation

For full low-level details (the `.rnn` format, internal algorithms, memory invariants, and low-level usage examples), see the upstream crate's documentation and README:

- https://crates.io/crates/native_neural_network

Publishing and docs.rs

- This crate sets `readme = "README.md"` and `package.metadata.docs.rs` entries in `Cargo.toml`, so crates.io displays this README and docs.rs builds the documentation automatically when the crate is published.
- After `cargo publish`, `docs.rs` will build and host the API documentation at `https://docs.rs/native_neural_network_std`.
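For reference, a minimal `Cargo.toml` excerpt matching the setup described above might look like this (the `all-features` flag is illustrative; the crate's actual metadata may differ):

```toml
[package]
name = "native_neural_network_std"
readme = "README.md"

# docs.rs build configuration (illustrative):
[package.metadata.docs.rs]
all-features = true
```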

Quick start

Add to your `Cargo.toml`:

```toml
[dependencies]
native_neural_network_std = "0.0"
```

Example usage in code:

```rust
use native_neural_network_std::ModelStd;

// load, build or create a model using the Std wrappers and run inference
```

License

MIT OR Apache-2.0