elizaOS Local AI Plugin - Rust Implementation
Provides local LLM inference using GGUF models via llama.cpp bindings.
§Features
- llm - Enable actual inference with llama_cpp_rs
- cuda - Enable CUDA GPU acceleration
- metal - Enable Metal GPU acceleration (macOS)
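These feature flags are selected at build time through Cargo. A minimal sketch of a Cargo.toml dependency entry, assuming the package is published as elizaos-plugin-local-ai (inferred from the crate path elizaos_plugin_local_ai) and using a placeholder version:

```toml
[dependencies]
# Base dependency with the `llm` feature for actual inference via llama_cpp_rs.
# Package name and version are assumptions; verify against the registry entry.
elizaos-plugin-local-ai = { version = "0.1", features = ["llm"] }

# For GPU acceleration, add one backend feature alongside `llm`:
# elizaos-plugin-local-ai = { version = "0.1", features = ["llm", "cuda"] }   # NVIDIA CUDA
# elizaos-plugin-local-ai = { version = "0.1", features = ["llm", "metal"] }  # macOS Metal
```

Without the `llm` feature, inference calls are compiled out; `cuda` and `metal` are mutually independent backends layered on top of it.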
§Example
use elizaos_plugin_local_ai::{LocalAIPlugin, LocalAIConfig, TextGenerationParams};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    let plugin = LocalAIPlugin::from_env()?;
    let response = plugin.generate_text("Hello, world!").await?;
    println!("{}", response);
    Ok(())
}

Re-exports§
pub use error::LocalAIError;
pub use error::Result;
pub use types::*;
Modules§
Structs§
- LocalAIConfig - Configuration for the Local AI plugin.
- LocalAIPlugin - The main Local AI plugin struct.