§Praxis
High-performance ReAct agent framework for building AI agents with LLM integration, tool execution, and persistence.
§Overview
Praxis is a comprehensive framework for building production-ready AI agents that can:
- Reason and respond using LLMs (OpenAI, Azure)
- Execute tools via MCP (Model Context Protocol)
- Persist conversations with MongoDB (or other backends)
- Manage context with automatic summarization
- Stream responses in real-time
§Quick Start
```rust
use praxis::prelude::*;
use std::sync::Arc;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Create LLM client
    let llm_client = Arc::new(OpenAIClient::new(
        std::env::var("OPENAI_API_KEY")?
    )?);

    // Create MCP executor
    let mcp_executor = Arc::new(MCPToolExecutor::new());

    // Build graph
    let graph = GraphBuilder::new()
        .with_llm_client(llm_client)
        .with_mcp_executor(mcp_executor)
        .build()?;

    // Create input
    let input = GraphInput::new(
        "conversation-123",
        vec![Message::Human {
            content: Content::text("Hello!"),
            name: None,
        }],
        LLMConfig::new("gpt-4o"),
    );

    // Execute and stream events
    let mut events = graph.spawn_run(input, None);
    while let Some(event) = events.recv().await {
        match event {
            StreamEvent::Message { content } => println!("{}", content),
            StreamEvent::Done { .. } => break,
            _ => {}
        }
    }

    Ok(())
}
```
§Architecture
Praxis is organized into focused crates:
- praxis-graph: ReAct agent orchestrator with graph execution
- praxis-llm: Provider-agnostic LLM client (OpenAI, Azure)
- praxis-mcp: Model Context Protocol client and executor
- praxis-persist: Persistence layer with MongoDB support
- praxis-context: Context management and summarization
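To pull the framework into a project, depend on the facade crate. A minimal Cargo.toml sketch, assuming the crate is published under the name praxis (the version numbers are placeholders; check the registry for current releases), together with the tokio and anyhow crates the Quick Start example uses:

```toml
[dependencies]
# Placeholder version; check crates.io for the current release.
praxis = "0.1"

# Used by the Quick Start example above.
tokio = { version = "1", features = ["full"] }
anyhow = "1"
```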
§Features
- ✅ Streaming: Real-time event streaming with zero-copy optimizations
- ✅ Persistence: Incremental saving with MongoDB backend
- ✅ Context Management: Automatic summarization and token counting
- ✅ Tool Execution: MCP-based tool integration
- ✅ Type Safety: Strong typing throughout the framework
- ✅ Async: Built on Tokio for high performance
§License
MIT
Modules§
- prelude - Prelude module for convenient imports
Structs§
- ChatOptions
- ChatRequest
- ContextWindow - Result of context retrieval
- DBMessage - Database-agnostic message model
- DefaultContextStrategy
- EventAccumulator - Observer that accumulates streaming events and detects type transitions
- Graph
- GraphBuilder - Builder for constructing a Graph with optional components
- GraphConfig
- GraphInput
- GraphState
- LLMConfig
- MCPClient - MCP client wrapper that manages connections to MCP servers
- MCPToolExecutor - Tool executor that delegates to MCP servers
- OpenAIClient - OpenAI client (HTTP direct, no SDK)
- PersistenceConfig - Configuration for optional persistence
- PersistenceContext - Context for persistence operations
- ReasoningConfig - Reasoning configuration
- ResponseOptions
- ResponseRequest
- Thread - Database-agnostic thread model
- ThreadMetadata
- ThreadSummary
- Tool - Tool/function definition (sent to OpenAI)
- ToolCall - Tool call made by the LLM (in assistant messages)
Enums§
- Content - Content that can be sent in messages. Designed to be extensible for multimodal content (images, audio, etc.)
- ContextPolicy
- GraphOutput - Graph output items from LLM execution
- Message - Praxis message types (high-level, provider-agnostic)
- MessageRole
- MessageType
- PersistError
- Provider
- ReasoningEffort - Reasoning effort level
- StreamEvent - Unified StreamEvent for Graph orchestration
- SummaryMode - Summary mode for reasoning
- ToolChoice - Tool choice parameter (how aggressively to use tools)
- ToolResponse - Response from tool execution
Traits§
- ChatClient - Trait for chat-based LLM interactions (GPT-4, etc.)
- ContextStrategy - Strategy for building a context window from conversation history
- LLMClient - Convenience trait for clients that support both chat and reasoning
- PersistenceClient - Trait for database persistence operations
- ReasoningClient - Trait for reasoning-based LLM interactions (o1 models)
- StreamEventExtractor - Trait for extracting information from stream events. This allows EventAccumulator to work with any event type