§ggen-ai - LLM integration layer for ggen
Thin wrapper around genai for ggen with environment support and caching.
This crate provides a simplified LLM integration layer for ggen, focusing on:
- Environment-based configuration
- Response caching
- Template generation
- SPARQL query generation
- Ontology generation
- Code refactoring assistance
§Features
- Multi-provider LLM support: OpenAI, Anthropic, Ollama, Gemini, DeepSeek, xAI/Grok, Groq, Cohere (via genai)
- Environment-based configuration: Automatic API key detection and model selection
- Response caching: Reduce API costs and latency with intelligent caching
- Template generation: Natural language to ggen templates
- SPARQL query generation: Intent-based query construction
- Ontology generation: Domain descriptions to RDF/OWL
- Code refactoring: AI-assisted code improvement suggestions
- RDF-based CLI generation: Generate CLI projects from RDF ontologies
§Quick Start
use ggen_ai::{GenAiClient, LlmClient, LlmConfig};

// An async runtime is required; Tokio is assumed here.
#[tokio::main]
async fn main() -> ggen_ai::Result<()> {
    // Initialize client with default configuration
    let config = LlmConfig::default();
    let client = GenAiClient::new(config)?;

    // Generate a response and print it
    let response = client.complete("Explain Rust ownership").await?;
    println!("{}", response.content);
    Ok(())
}
§Module Organization
- cache - LLM response caching
- client - LLM client abstraction
- config - Configuration management
- generators - Specialized generators (templates, SPARQL, ontologies)
- providers - LLM provider implementations
- prompts - Prompt templates and builders
- rdf - RDF-based CLI generation
- security - API key masking and security
- streaming - Streaming response support
- types - Type definitions
Re-exports§
pub use cache::CacheConfig;
pub use cache::CacheStats;
pub use cache::LlmCache;
pub use client::GenAiClient;
pub use client::LlmChunk;
pub use client::LlmClient;
pub use client::LlmConfig;
pub use client::LlmResponse;
pub use client::UsageStats;
pub use config::get_global_config;
pub use config::init_global_config;
pub use config::AiConfig;
pub use config::GlobalLlmConfig;
pub use config::LlmProvider;
pub use error::GgenAiError;
pub use error::Result;
pub use generators::NaturalSearchGenerator;
pub use generators::OntologyGenerator;
pub use generators::QualityMetrics;
pub use generators::RefactorAssistant;
pub use generators::SparqlGenerator;
pub use generators::TemplateGenerator;
pub use generators::TemplateValidator;
pub use generators::ValidationIssue;
pub use providers::adapter::ollama_default_config;
pub use providers::adapter::ollama_ministral_3b_config;
pub use providers::adapter::ollama_qwen3_coder_config;
pub use providers::adapter::MockClient;
pub use rdf::Argument;
pub use rdf::ArgumentType;
pub use rdf::CliGenerator;
pub use rdf::CliProject;
pub use rdf::Dependency;
pub use rdf::Noun;
pub use rdf::QueryExecutor;
pub use rdf::RdfParser;
pub use rdf::TemplateRenderer;
pub use rdf::Validation;
pub use rdf::Verb;
pub use security::MaskApiKey;
pub use security::SecretString;
pub use streaming::StreamConfig;
pub use types::DecisionId;
pub use types::PolicyId;
pub use types::RequestId;
pub use types::RuleId;
Modules§
- cache
- LLM response caching with Moka
- client
- Simplified client interface using rust-genai
- config
- Configuration management for ggen-ai
- constants
- Constants for ggen-ai
- error
- Error types for ggen-ai
- error_utils
- Error handling utilities for ggen-ai
- generators
- AI-powered generators for ggen
- parsing_utils
- Code block parsing utilities for ggen-ai
- prompts
- Prompt engineering for AI-powered generators
- providers
- LLM provider implementations
- rdf
- RDF-based CLI project generation.
- security
- Security utilities for protecting sensitive data
- streaming
- LLM Streaming Support via rust-genai
- types
- Type-safe wrappers for ggen-ai
Constants§
- VERSION
- Version information
Functions§
- init_logging
- Initialize tracing for the ggen-ai crate