Crate praxis


§Praxis

High-performance ReAct agent framework for building AI agents with LLM integration, tool execution, and persistence.

§Overview

Praxis is a comprehensive framework for building production-ready AI agents that can:

  • Reason and respond using LLMs (OpenAI, Azure)
  • Execute tools via MCP (Model Context Protocol)
  • Persist conversations with MongoDB (or other backends)
  • Manage context with automatic summarization
  • Stream responses in real-time

§Quick Start

use praxis::prelude::*;
use std::sync::Arc;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Create LLM client
    let llm_client = Arc::new(OpenAIClient::new(
        std::env::var("OPENAI_API_KEY")?
    )?);

    // Create MCP executor
    let mcp_executor = Arc::new(MCPToolExecutor::new());

    // Build graph
    let graph = GraphBuilder::new()
        .with_llm_client(llm_client)
        .with_mcp_executor(mcp_executor)
        .build()?;

    // Create input
    let input = GraphInput::new(
        "conversation-123",
        vec![Message::Human {
            content: Content::text("Hello!"),
            name: None,
        }],
        LLMConfig::new("gpt-4o"),
    );

    // Execute and stream events
    let mut events = graph.spawn_run(input, None);
    while let Some(event) = events.recv().await {
        match event {
            StreamEvent::Message { content } => println!("{}", content),
            StreamEvent::Done { .. } => break,
            _ => {}
        }
    }

    Ok(())
}

§Architecture

Praxis is organized into focused crates:

  • praxis-graph: ReAct agent orchestrator with graph execution
  • praxis-llm: Provider-agnostic LLM client (OpenAI, Azure)
  • praxis-mcp: Model Context Protocol client and executor
  • praxis-persist: Persistence layer with MongoDB support
  • praxis-context: Context management and summarization

§Features

  • Streaming: Real-time event streaming with zero-copy optimizations
  • Persistence: Incremental saving with MongoDB backend
  • Context Management: Automatic summarization and token counting
  • Tool Execution: MCP-based tool integration
  • Type Safety: Strong typing throughout the framework
  • Async: Built on Tokio for high performance

§License

MIT

Modules§

prelude
Prelude module for convenient imports

Structs§

ChatOptions
ChatRequest
ContextWindow
Result of context retrieval
DBMessage
Database-agnostic message model
DefaultContextStrategy
EventAccumulator
Observer that accumulates streaming events and detects type transitions
Graph
GraphBuilder
Builder for constructing a Graph with optional components
GraphConfig
GraphInput
GraphState
LLMConfig
MCPClient
MCP Client wrapper that manages connection to MCP servers
MCPToolExecutor
Tool executor that delegates to MCP servers
OpenAIClient
OpenAI client (direct HTTP, no SDK)
PersistenceConfig
Configuration for optional persistence
PersistenceContext
Context for persistence operations
ReasoningConfig
Reasoning configuration
ResponseOptions
ResponseRequest
Thread
Database-agnostic thread model
ThreadMetadata
ThreadSummary
Tool
Tool/Function definition (sent to OpenAI)
ToolCall
Tool call made by the LLM (in assistant message)

Enums§

Content
Content that can be sent in messages. Designed to be extensible for multimodal content (images, audio, etc.)
ContextPolicy
GraphOutput
Graph output items from LLM execution
Message
Praxis message types (high-level, provider-agnostic)
MessageRole
MessageType
PersistError
Provider
ReasoningEffort
Reasoning effort level
StreamEvent
Unified StreamEvent for Graph orchestration
SummaryMode
Summary mode for reasoning
ToolChoice
Tool choice parameter (how aggressively to use tools)
ToolResponse
Response from tool execution

Traits§

ChatClient
Trait for chat-based LLM interactions (GPT-4, etc)
ContextStrategy
Strategy for building context window from conversation history
LLMClient
Convenience trait for clients that support both chat and reasoning
PersistenceClient
Trait for database persistence operations
ReasoningClient
Trait for reasoning-based LLM interactions (o1 models)
StreamEventExtractor
Trait for extracting information from stream events. This allows EventAccumulator to work with any event type