Intelligent Mock Behavior System
This module provides LLM-powered stateful mock behavior that maintains consistency across multiple API requests, simulating a real, thinking backend.
§Features
- Stateful Context Management: Tracks state across requests using sessions
- LLM-Powered Decision Making: Uses AI to generate intelligent, context-aware responses
- Vector Memory: Semantic search over past interactions for long-term memory
- Consistency Rules: Enforces logical behavior patterns (e.g., auth requirements)
- State Machines: Resources follow realistic lifecycle transitions (rules and state machines are sketched after this list)
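The rules module provides the types behind the last two bullets. The snippet below is a minimal, illustrative sketch only: the constructors and variant names it uses (ConsistencyRule::new, RuleAction::Reject, StateMachine::new, with_transition, StateTransition::new) are assumptions for illustration, not confirmed signatures from the crate.

```rust
use mockforge_core::intelligent_behavior::{ConsistencyRule, RuleAction, StateMachine, StateTransition};

// Hypothetical: reject unauthenticated writes so the mock behaves like a real backend.
// Constructor and variant names here are assumptions, not the crate's documented API.
let require_auth = ConsistencyRule::new(
    "require-auth",                               // rule name (assumed parameter)
    "method != 'GET' && !session.authenticated",  // condition expression (assumed DSL)
    RuleAction::Reject { status: 401 },           // action when the rule fires (assumed variant)
);

// Hypothetical lifecycle: orders move pending -> active -> cancelled.
let order_lifecycle = StateMachine::new("order") // assumed constructor
    .with_transition(StateTransition::new("pending", "active", "POST /orders/{id}/activate"))
    .with_transition(StateTransition::new("active", "cancelled", "DELETE /orders/{id}"));
```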
§Architecture
Request → Context Manager → Behavior Model → LLM + Vector Store → Response
                 ↓                 ↓                   ↓
           Session State   Consistency Rules   Past Interactions
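In code, this flow roughly amounts to handing the session's context to a BehaviorModel, letting it consult the LLM and vector store, and recording the outcome so later requests stay consistent. The sketch below assumes a generate_response method and errors convertible to Box<dyn Error>; only the record_interaction call mirrors the example in the next section.

```rust
use mockforge_core::intelligent_behavior::{BehaviorModel, StatefulAiContext};

// Sketch of the request pipeline; `generate_response` is an assumed method name.
async fn handle_request(
    model: &BehaviorModel,
    context: &mut StatefulAiContext,
    method: &str,
    path: &str,
    body: Option<serde_json::Value>,
) -> Result<serde_json::Value, Box<dyn std::error::Error>> {
    // Behavior Model: consults the LLM, applies consistency rules, and retrieves
    // semantically similar past interactions from the vector store.
    let response = model.generate_response(context, method, path, body.clone()).await?; // assumed API
    // Context Manager: persist the interaction into session state for later requests.
    context.record_interaction(method, path, body, Some(response.clone())).await?;
    Ok(response)
}
```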
§Example Usage
use mockforge_core::intelligent_behavior::{
    StatefulAiContext, BehaviorModel, IntelligentBehaviorConfig,
};

// Create a stateful context
let config = IntelligentBehaviorConfig::default();
let mut context = StatefulAiContext::new("session_123", config);

// Record an interaction
context.record_interaction(
    "POST",
    "/api/users",
    Some(serde_json::json!({"name": "Alice"})),
    Some(serde_json::json!({"id": "user_1", "name": "Alice"})),
).await?;

// Get current state
let state = context.get_state();
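Because record_interaction is asynchronous and the calls use ?, this snippet needs to run inside an async function that returns a Result (for example, under a Tokio runtime).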
Re-exports§
pub use behavior::BehaviorModel;
pub use config::IntelligentBehaviorConfig;
pub use context::StatefulAiContext;
pub use memory::VectorMemoryStore;
pub use rules::ConsistencyRule;
pub use rules::RuleAction;
pub use rules::StateMachine;
pub use rules::StateTransition;
pub use session::SessionManager;
pub use session::SessionTracking;
pub use spec_suggestion::EndpointSuggestion;
pub use spec_suggestion::OutputFormat;
pub use spec_suggestion::ParameterInfo;
pub use spec_suggestion::SpecSuggestionEngine;
pub use spec_suggestion::SuggestionConfig;
pub use spec_suggestion::SuggestionInput;
pub use spec_suggestion::SuggestionMetadata;
pub use spec_suggestion::SuggestionResult;
pub use types::BehaviorRules;
pub use types::InteractionRecord;
Modules§
- behavior: Behavior model for LLM-powered decision making
- cache: Response caching for intelligent behavior
- config: Configuration for the Intelligent Mock Behavior system
- context: Stateful AI context management
- embedding_client: Embedding client for vector memory
- llm_client: LLM client wrapper for intelligent behavior
- memory: Vector memory store for long-term semantic memory
- rules: Consistency rules and state machines for intelligent behavior
- session: Session management for tracking state across requests
- spec_suggestion: AI-powered specification suggestion and generation
- types: Core types for the Intelligent Mock Behavior system