Crate elizaos_plugin_local_ai

Expand description

elizaOS Local AI Plugin - Rust Implementation

Provides local LLM inference using GGUF models via llama.cpp bindings.

§Features

  • llm - Enable actual inference with llama_cpp_rs
  • cuda - Enable CUDA GPU acceleration
  • metal - Enable Metal GPU acceleration (macOS)

§Example

use elizaos_plugin_local_ai::LocalAIPlugin;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let plugin = LocalAIPlugin::from_env()?;
    let response = plugin.generate_text("Hello, world!").await?;
    println!("{}", response);
    Ok(())
}

Re-exports§

pub use error::LocalAIError;
pub use error::Result;
pub use types::*;

Modules§

error
types
xml_parser

Structs§

LocalAIConfig
Configuration for the Local AI plugin.
LocalAIPlugin
The main Local AI plugin struct.