Crate sayr_engine

Rust-flavored building blocks for running AGNO-style agents.

The crate provides a minimal runtime with:

  • A language model abstraction (LanguageModel).
  • A simple tool interface (Tool and ToolRegistry).
  • An Agent that loops between the model and tools using structured JSON directives.
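The loop described above can be sketched in miniature. The trait and type names below mirror the crate's (`LanguageModel`, `Tool`, `StubModel`), but every signature is an assumption for illustration; the real API may differ, and the stub model emits a simple `tool:`/`final:` string where the crate would use structured JSON directives.

```rust
// Hypothetical, simplified shapes of the crate's core abstractions.
trait LanguageModel {
    fn complete(&self, prompt: &str) -> String;
}

trait Tool {
    fn name(&self) -> &str;
    fn call(&self, input: &str) -> String;
}

struct EchoTool;
impl Tool for EchoTool {
    fn name(&self) -> &str { "echo" }
    fn call(&self, input: &str) -> String { format!("echo: {input}") }
}

// A stub model: first requests the echo tool, then finishes.
struct StubModel;
impl LanguageModel for StubModel {
    fn complete(&self, prompt: &str) -> String {
        if prompt.contains("echo:") {
            "final:done".to_string()
        } else {
            "tool:echo:hello".to_string()
        }
    }
}

/// Alternate between the model and tools until a final answer is emitted.
fn run_agent(model: &dyn LanguageModel, tools: &[Box<dyn Tool>], task: &str) -> String {
    let mut transcript = task.to_string();
    loop {
        let directive = model.complete(&transcript);
        if let Some(answer) = directive.strip_prefix("final:") {
            return answer.to_string();
        }
        if let Some(rest) = directive.strip_prefix("tool:") {
            let (name, input) = rest.split_once(':').unwrap_or((rest, ""));
            if let Some(tool) = tools.iter().find(|t| t.name() == name) {
                // Append the tool result to the transcript and loop again.
                transcript.push('\n');
                transcript.push_str(&tool.call(input));
                continue;
            }
        }
        return directive; // unknown directive: surface it as-is
    }
}

fn main() {
    let tools: Vec<Box<dyn Tool>> = vec![Box::new(EchoTool)];
    let answer = run_agent(&StubModel, &tools, "say hello");
    println!("{answer}"); // prints "done"
}
```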

Modules§

guardrails
Guardrails module for input/output validation.
mcp
MCP (Model Context Protocol) client support for sayr-engine.
reasoning
Reasoning module for chain-of-thought agent orchestration.
tools
Tools module providing various toolkits for agents.


Structs§

AccessController
Agent
An AGNO-style agent that alternates between the LLM and registered tools.
AgentRuntime
AgentTask
Task that dispatches to an individual agent and stores the reply under a key.
AppConfig
Attachment
A non-textual payload that can accompany a message.
AwsBedrockClient
AWS Bedrock client. Currently optimized for Anthropic Claude 3 models on Bedrock.
AzureOpenAIClient
Azure OpenAI client for Azure-hosted models.
CohereClient
ConversationMemory
In-memory transcript storage.
DeploymentConfig
DeploymentPlan
Document
EvaluationReport
FallbackChain
FileConversationStore
A simple JSONL-based store that writes messages to disk.
FireworksClient
Fireworks AI client using their OpenAI-compatible API. Default model: accounts/fireworks/models/llama-v3p1-70b-instruct
FullMemoryStrategy
Keep all messages (the default, with no limiting).
FunctionTask
Wrap a plain async function as a workflow task.
GroqClient
Groq client - uses OpenAI-compatible API with Groq’s endpoint. Default model: llama-3.3-70b-versatile
InMemoryVectorStore
KnowledgeBase
Message
A single message in the conversation transcript.
MetricsTracker
MistralClient
Mistral AI client using their OpenAI-compatible API. Default model: mistral-large-latest
ModelCompletion
Result of a chat completion request.
ModelConfig
OllamaClient
Ollama client for local LLM inference. Default model: llama3.1
OpenAIClient
OpenAiEmbedder
Embedder that delegates to an OpenAI-compatible embedding client.
PersistentConversationMemory
A conversation memory that persists messages through a pluggable backend.
PgVectorStore
Adapter for Postgres/pgvector style databases.
Principal
PrivacyRule
ProviderConfig
QdrantStore
Adapter for Qdrant (or other HTTP/gRPC vector databases).
RetrievalConfig
RetrievalEvaluation
RetrievalOverrides
RetryPolicy
ScoredDocument
SearchParams
SecurityConfig
ServerConfig
SlidingWindowChunker
Token (word) based chunker with sliding-window overlap.
SqlConversationStore
Placeholder for SQL-based backends. The type compiles without requiring the database drivers and can be swapped out once the feature lands.
StubModel
SummarizedMemoryStrategy
Keep the first and last N messages and summarize the middle.
Team
A coordination surface for multiple agents that share context and a message bus.
TelemetryCollector
TelemetryConfig
TelemetryLabels
TelemetrySink
TogetherClient
Together AI client using their OpenAI-compatible API. Default model: meta-llama/Llama-3.3-70B-Instruct-Turbo
TokenLimitedMemoryStrategy
Token-based memory limiting (approximate).
ToolCall
A tool call generated by the language model.
ToolDescription
Static description of a tool that can be embedded in prompts.
ToolRegistry
ToolResult
A tool result message captured in the transcript.
TransformerEmbedder
Embedder that wraps a transformer runtime (e.g., candle, ort, ggml).
WhitespaceEmbedder
Basic whitespace tokenizer with hashed buckets for deterministic embeddings.
WindowedMemoryStrategy
Keep only the last N messages (sliding window).
Workflow
WorkflowContext
Shared state threaded through a workflow execution.
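The sliding-window chunking that SlidingWindowChunker describes can be illustrated with a self-contained sketch. The function name and parameters here are assumptions for illustration, not the crate's actual signature: chunks of at most `window` words are emitted, each overlapping the previous chunk by `overlap` words.

```rust
/// Split `text` into word chunks of at most `window` words, where each
/// chunk overlaps the previous one by `overlap` words.
fn chunk_words(text: &str, window: usize, overlap: usize) -> Vec<String> {
    assert!(overlap < window, "overlap must be smaller than the window");
    let words: Vec<&str> = text.split_whitespace().collect();
    let step = window - overlap; // how far the window advances each chunk
    let mut chunks = Vec::new();
    let mut start = 0;
    while start < words.len() {
        let end = (start + window).min(words.len());
        chunks.push(words[start..end].join(" "));
        if end == words.len() {
            break; // the final chunk reached the end of the text
        }
        start += step;
    }
    chunks
}

fn main() {
    // Windows of 3 words advancing by 2, so adjacent chunks share 1 word.
    let chunks = chunk_words("a b c d e f g", 3, 1);
    for c in &chunks {
        println!("{c}");
    }
    // prints "a b c", "c d e", "e f g"
}
```

The overlap preserves local context across chunk boundaries, which is the usual motivation for sliding-window chunking in retrieval pipelines.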

Enums§

Action
AgentDirective
Structured instructions the language model should emit.
AgnoError
AttachmentKind
Types of attachments supported by the runtime.
GovernanceRole
Role
Chat roles supported by the runtime.
SimilarityMetric
TeamEvent
Events emitted by the team bus.
WorkflowNode

Traits§

AgentHook
ConfirmationHandler
ConversationStore
Generic persistence contract for conversation state.
DocumentChunker
Embedder
LanguageModel
Minimal abstraction around a chat completion provider.
MemoryStrategy
Memory strategy trait for managing conversation context.
OpenAiEmbeddingClient
PgVectorClient
QdrantClient
Retriever
Tool
TransformerClient
VectorStore
WorkflowTask
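The MemoryStrategy trait above pairs with the concrete strategies in the struct list (FullMemoryStrategy, WindowedMemoryStrategy, and friends). A hedged sketch of the idea, with a trait shape and method name that are assumptions rather than the crate's actual API: a strategy takes the full transcript and returns the subset of messages to hand to the model.

```rust
// Simplified stand-in for the crate's Message struct.
#[derive(Clone, Debug, PartialEq)]
struct Message {
    role: String,
    content: String,
}

// Hypothetical shape of the MemoryStrategy trait.
trait MemoryStrategy {
    /// Reduce the transcript to the messages the model should see.
    fn select(&self, messages: &[Message]) -> Vec<Message>;
}

/// Keep only the last N messages, mirroring WindowedMemoryStrategy.
struct Windowed { n: usize }

impl MemoryStrategy for Windowed {
    fn select(&self, messages: &[Message]) -> Vec<Message> {
        // saturating_sub avoids underflow when the transcript is shorter than N.
        let start = messages.len().saturating_sub(self.n);
        messages[start..].to_vec()
    }
}

fn main() {
    let transcript: Vec<Message> = (0..5)
        .map(|i| Message { role: "user".into(), content: format!("msg {i}") })
        .collect();
    let context = Windowed { n: 2 }.select(&transcript);
    println!("{} messages kept", context.len()); // prints "2 messages kept"
}
```

A token-budgeted strategy (TokenLimitedMemoryStrategy) would follow the same contract, walking the transcript from the end and stopping once an approximate token count is exhausted.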

Functions§

basic_toolkit
current_span_attributes
flush_tracer
init_tracing
span_with_labels

Type Aliases§

Result