Frame Trace - Shared Utilities for the Frame Ecosystem
Shared debugging and utility functions for the Frame ecosystem.
Features
🔍 Execution Tracing
CallGraph tracking for debugging, transparency, and performance analysis.
- Pipeline visualization: Track execution flow through complex systems
- Performance profiling: Measure duration of each step
- Debugging: Understand call chains and data flow
- Transparency: Export execution traces for analysis
Installation
Add to your Cargo.toml:
```toml
[dependencies]
frame-trace = "0.1.0"
```
Dependency Architecture
frame-trace is standalone with no Frame dependencies:
```
frame-trace
└── (no Frame dependencies)
```
Used by: All Frame subsystems for execution monitoring
Position in Frame ecosystem:
```
frame-trace (standalone monitoring)
        ↓
[All Frame subsystems use this for tracing]
```
Quick Start
Basic Execution Tracing
```rust
use frame_trace::{ExecutionTrace, StepType};
```
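A minimal sketch continuing from the import above; it uses only the methods listed in the API Reference below, and the step name is illustrative:

```rust
let mut trace = ExecutionTrace::new();

// Record one pipeline step with a type and a human-readable name
trace.start_step(StepType::SpeechToText, "Transcribe audio");
// ... do the actual work here ...
trace.end_step();

// Total wall-clock time recorded so far
println!("total: {} ms", trace.total_duration_ms());
```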
With Input/Output Data
```rust
use frame_trace::{ExecutionTrace, StepType};
use serde_json::json;
```
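A sketch of attaching JSON payloads with `start_step_with_data` and `end_step_with_data`; the payload contents and step name are illustrative:

```rust
let mut trace = ExecutionTrace::new();

// Record the request payload when the step starts
trace.start_step_with_data(
    StepType::LlmGeneration,
    "Generate response",
    json!({ "prompt": "What's on my calendar today?" }),
);

// ... run the model ...

// Record the result when the step ends
trace.end_step_with_data(json!({ "response": "You have two meetings." }));
```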
Performance Analysis
```rust
use frame_trace::ExecutionTrace;
```
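A sketch that walks the recorded steps and reports where the time went, assuming a `trace` populated as in the examples above:

```rust
// Per-step timings plus the overall total
fn report_timings(trace: &ExecutionTrace) {
    for step in trace.steps() {
        println!("{:<24} {:>6} ms", step.name, step.duration_ms);
    }
    println!("total: {} ms", trace.total_duration_ms());
}
```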
Step Types
The StepType enum defines common pipeline stages:
- `AudioCapture` - Audio input capture
- `VoiceActivity` - Voice activity detection
- `SpeechToText` - Speech-to-text transcription
- `Retrieval` - Knowledge/context retrieval
- `LlmGeneration` - LLM response generation
- `ToolExecution` - Tool/skill execution
- `TextToSpeech` - Text-to-speech synthesis
- `AudioPlayback` - Audio output playback
- `Error` - Error condition
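Taken together, the variants above amount to an enum along these lines (a sketch only; the derives shown are assumptions):

```rust
use serde::{Deserialize, Serialize};

#[derive(Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
pub enum StepType {
    AudioCapture,
    VoiceActivity,
    SpeechToText,
    Retrieval,
    LlmGeneration,
    ToolExecution,
    TextToSpeech,
    AudioPlayback,
    Error,
}
```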
Use Cases
1. Debugging Complex Pipelines
Track execution flow through multi-stage AI pipelines:
```rust
// Voice assistant pipeline (step names and helper functions are illustrative)
trace.start_step(StepType::AudioCapture, "Capture microphone input");
let audio = capture_audio();
trace.end_step();

trace.start_step(StepType::SpeechToText, "Transcribe audio");
let text = transcribe(&audio);
trace.end_step();

trace.start_step(StepType::Retrieval, "Retrieve context");
let context = retrieve_context(&text);
trace.end_step();

trace.start_step(StepType::LlmGeneration, "Generate response");
let response = llm.generate(&text, &context);
trace.end_step();
```
2. Performance Profiling
Identify bottlenecks in your application:
```rust
for step in trace.steps() {
    println!("{}: {} ms", step.name, step.duration_ms);
}
```
3. Transparency & Auditability
Export execution traces for review:
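Assuming `ExecutionTrace` implements serde's `Serialize` (the distributed-tracing example below relies on this), a trace can be written out as pretty-printed JSON for later review:

```rust
use frame_trace::ExecutionTrace;

// Dump the full trace (steps, timings, inputs/outputs) to a file for auditing
fn export_trace(trace: &ExecutionTrace, path: &str) -> std::io::Result<()> {
    let json = serde_json::to_string_pretty(trace)
        .expect("trace serialization should not fail");
    std::fs::write(path, json)
}
```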
4. Distributed Tracing
Pass traces between services for end-to-end visibility:
```rust
// Service A: run its part of the pipeline, then ship the trace along
let trace = execute_service_a();
let trace_json = serde_json::to_string(&trace)?;
send_to_service_b(&trace_json);

// Service B: pick the trace back up and keep appending steps
// (the step type and name here are illustrative)
let mut trace: ExecutionTrace = serde_json::from_str(&trace_json)?;
trace.start_step(StepType::ToolExecution, "Service B processing");
// ... continue trace ...
```
API Reference
ExecutionTrace
Main trace container.
Methods:
- `new()` - Create new trace
- `start_step(step_type, name)` - Start a new step
- `start_step_with_data(step_type, name, input)` - Start with input data
- `end_step()` - End current step
- `end_step_with_data(output)` - End with output data
- `steps()` - Get all steps
- `total_duration_ms()` - Total execution time
- `current_step_mut()` - Get mutable reference to current step
TraceStep
Individual step in execution trace.
Fields:
- `step_type: StepType` - Type of step
- `name: String` - Step description
- `start_time_ms: u64` - Unix timestamp (ms)
- `duration_ms: u64` - Duration in milliseconds
- `input: Option<Value>` - Input data (JSON)
- `output: Option<Value>` - Output data (JSON)
- `error: Option<String>` - Error message if failed
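A rough sketch of the struct implied by the field list above (`StepType` is the enum from the Step Types section; the derives are assumptions):

```rust
use serde::{Deserialize, Serialize};
use serde_json::Value;

#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct TraceStep {
    pub step_type: StepType,
    pub name: String,
    pub start_time_ms: u64, // Unix timestamp in milliseconds
    pub duration_ms: u64,
    pub input: Option<Value>,
    pub output: Option<Value>,
    pub error: Option<String>,
}
```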
StepType
Enum of common pipeline step types.
See Step Types section above.
Performance
- Overhead: ~1-2 microseconds per step start/end
- Memory: ~200 bytes per step
- Serialization: ~5-10ms for 100 steps to JSON
Minimal overhead suitable for production use.
Compatibility
- Rust Edition: 2021
- MSRV: 1.70+
- Platforms: All (platform-independent)
History
Extracted from the Frame project, where it provides execution tracing for the AI assistant pipeline.
License
MIT - See LICENSE for details.
Author
Magnus Trent magnus@blackfall.dev