⚠️ DEPRECATED - llm-toolkit-expertise
This crate has been archived and integrated into `llm-toolkit` core.
Migration: Use `llm-toolkit::agent::expertise` instead.
All functionality is now available in the main `llm-toolkit` crate under the `agent` feature. This crate will not receive further updates.
llm-toolkit-expertise (Archived)
Agent as Code: Graph-based composition system for LLM agent capabilities.
Migration Guide
Replace imports:
```diff
- use llm_toolkit_expertise::{Expertise, WeightedFragment, KnowledgeFragment};
- use llm_toolkit_expertise::{Priority, TaskHealth, ContextProfile};
+ use llm_toolkit::agent::expertise::{Expertise, WeightedFragment, KnowledgeFragment};
+ use llm_toolkit::context::{Priority, TaskHealth, ContextProfile};
```
All APIs remain identical - only import paths have changed.
Overview (Historical)
llm-toolkit-expertise provides a flexible, composition-based approach to defining LLM agent expertise through weighted knowledge fragments. Instead of rigid inheritance hierarchies, expertise is built by composing independent fragments with priorities and contextual activation rules.
Core Concepts
- 🧩 Composition over Inheritance: Build agents like equipment sets
- ⚖️ Weighted Fragments: Knowledge with priority levels (Critical/High/Normal/Low)
- 🎯 Context-Driven: Enable dynamic behavior based on TaskHealth and runtime context
- 📊 Visualization: Generate Mermaid graphs and tree views
- 🔧 JSON Schema: Full schema support for validation and tooling
Quick Start
Add to your Cargo.toml:
```toml
[dependencies]
llm-toolkit-expertise = "0.1.0"
```
Basic Example
```rust
use llm_toolkit_expertise::{Expertise, WeightedFragment, KnowledgeFragment, Priority};

// Create a code reviewer expertise
// (argument values are illustrative; they match the tree view shown below)
let expertise = Expertise::new("rust-reviewer", "1.0")
    .with_tag("lang:rust")
    .with_tag("role:reviewer")
    .with_fragment(
        WeightedFragment::new(KnowledgeFragment::Text(
            "Always run cargo check before approving".into(),
        ))
        .with_priority(Priority::Critical),
    )
    .with_fragment(
        WeightedFragment::new(security_logic) // a Logic fragment; see "Knowledge Fragment Types"
            .with_priority(Priority::High),
    );

// Generate prompt
println!("{}", expertise.to_prompt()?);

// Generate visualizations
println!("{}", expertise.to_tree());
println!("{}", expertise.to_mermaid());
```
Features
🎛️ Priority Levels
Control how strongly knowledge should be enforced:
- Critical: Absolute must-follow (violations = error)
- High: Recommended/emphasized (explicit instruction)
- Normal: Standard context (general guidance)
- Low: Reference information (background)
```rust
// Attach a priority to a fragment (the fragment value is illustrative)
WeightedFragment::new(fragment)
    .with_priority(Priority::Critical)
```
🔄 Context-Aware Activation
Fragments can be conditionally activated based on:
- Task Types: `"debug"`, `"security-review"`, `"refactor"`, etc.
- User States: `"beginner"`, `"expert"`, `"confused"`, etc.
- Task Health: `OnTrack`, `AtRisk`, `OffTrack`
A minimal sketch of a conditionally activated fragment (the builder method and condition shape are assumptions, not the exact API):

```rust
// Illustrative only: activate this fragment during security reviews
// when the task health has degraded to AtRisk.
WeightedFragment::new(fragment).with_condition(ActivationCondition::Conditional {
    task_types: vec!["security-review".to_string()],
    user_states: vec![],
    task_health: Some(TaskHealth::AtRisk),
});
```
📚 Knowledge Fragment Types
Five types of knowledge representation:
- Logic: Thinking procedures with Chain-of-Thought steps
- Guideline: Behavioral rules with positive/negative examples (Anchors)
- QualityStandard: Evaluation criteria and passing grades
- ToolDefinition: Tool interfaces (JSON format)
- Text: Free-form text knowledge
```rust
// Constructor shapes below are illustrative, not the exact API.

// Logic fragment: a thinking procedure with Chain-of-Thought steps
let logic = KnowledgeFragment::Logic {
    purpose: "Check for security issues".into(),
    steps: vec!["Identify untrusted inputs".into(), "Scan for unsafe blocks".into()],
};

// Guideline with anchoring: a rule plus positive/negative examples (Anchors)
let guideline = KnowledgeFragment::Guideline {
    rule: "Prefer iterators over index loops".into(),
    positive: vec!["items.iter().map(process)".into()],
    negative: vec!["manual index loops with unchecked arithmetic".into()],
};
```
📊 Visualization
Generate multiple visualization formats:
Tree View:
```rust
let tree = expertise.to_tree();
// Output:
// Expertise: rust-reviewer (v1.0)
// ├─ Tags: lang:rust, role:reviewer
// └─ Content:
//    ├─ [CRITICAL] Text: Always run cargo check...
//    └─ [HIGH] Logic: Check for security issues
//       └─ Health: ⚠️ At Risk
```
Mermaid Graph:
```rust
let mermaid = expertise.to_mermaid();
// Generates Mermaid syntax with color-coded priority nodes
```
🔗 llm-toolkit Integration
Enable the `integration` feature to use the `ToPrompt` trait:
```toml
[dependencies]
llm-toolkit-expertise = { version = "0.1.0", features = ["integration"] }
```
```rust
use llm_toolkit::ToPrompt; // trait import path may vary by llm-toolkit version

let expertise = Expertise::new("rust-reviewer", "1.0")
    .with_fragment(fragment);

let prompt_part = expertise.to_prompt()?;
```
🎨 Context-Aware Rendering (Phase 2)
Dynamically filter and render expertise based on runtime context:
```rust
use llm_toolkit_expertise::{Expertise, RenderContext, ContextualPrompt};

// Create expertise with conditional fragments
// (fragment values are illustrative)
let expertise = Expertise::new("tutor", "1.0")
    .with_fragment(base_fragment)
    .with_fragment(beginner_only_fragment) // active when user_state is "beginner"
    .with_fragment(expert_only_fragment);  // active when user_state is "expert"

// Method 1: Direct rendering with context
let beginner_context = RenderContext::new().with_user_state("beginner");
let beginner_prompt = expertise.to_prompt_with_render_context(&beginner_context);
// Contains: base fragment + beginner-specific guidance

// Method 2: ContextualPrompt wrapper (for DTO integration)
let expert_prompt = ContextualPrompt::from_expertise(&expertise)
    .with_user_state("expert")
    .to_prompt();
// Contains: base fragment + expert-specific guidance

// Method 3: DTO pattern integration
let request = AgentRequest { /* expertise + runtime context fields */ };
let final_prompt = request.to_prompt();
```
Key Features:
- Dynamic Filtering: Fragments are included/excluded based on runtime context
- Priority Ordering: Critical → High → Normal → Low in the output
- Multiple User States: Supports checking against multiple simultaneous user states (see the sketch below)
- DTO Integration: `ContextualPrompt` implements `to_prompt()` for seamless template usage
- Backward Compatible: Existing `to_prompt()` still works (uses an empty context)
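A minimal sketch of multiple simultaneous user states, assuming the `RenderContext` builder shown above:

```rust
// Fragments matching either user state are activated.
let ctx = RenderContext::new()
    .with_user_state("beginner")
    .with_user_state("confused");
let prompt = expertise.to_prompt_with_render_context(&ctx);
```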
📋 JSON Schema Generation
Generate JSON Schema for validation and tooling:
```rust
// Module path is an assumption; see the crate docs for the exact location.
use llm_toolkit_expertise::schema::{dump_expertise_schema, save_expertise_schema};

// Get schema as JSON
let schema = dump_expertise_schema();
println!("{}", schema);

// Save to file (path is illustrative)
save_expertise_schema("expertise.schema.json")?;
```
Examples
The crate includes several examples, each runnable with `cargo run --example <name>`:
- Basic expertise creation and usage
- JSON Schema generation
- Context-aware prompt generation
Architecture
Composition over Inheritance
Unlike traditional inheritance-based systems, llm-toolkit-expertise uses graph composition (see the sketch after this list):
- No fragile base class problem: Parent changes don't break children
- Flexible mixing: Combine arbitrary fragments with tags
- Conflict resolution: Higher priority always wins
- Dynamic assembly: Runtime context determines active fragments
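A minimal sketch of this equipment-set style, assuming the builder API from the Quick Start (the fragment variables are illustrative):

```rust
// One agent assembled from independently authored fragments.
let reviewer = Expertise::new("rust-reviewer", "1.0")
    .with_fragment(rust_style_fragment)        // shared by all Rust agents
    .with_fragment(security_review_fragment)   // shared by all reviewer agents
    .with_fragment(team_conventions_fragment); // project-specific
// If two fragments conflict, the one with the higher Priority wins.
```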
TaskHealth: Adaptive Behavior
The TaskHealth enum enables "gear shifting" based on task status:
- OnTrack: Speed mode (concise, confident)
- AtRisk: Careful mode (verify, clarify)
- OffTrack: Stop mode (reassess, consult)
This mirrors how senior engineers adjust their approach based on project health.
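A minimal, self-contained sketch of that gear shifting (a local stand-in enum, not the crate's actual type):

```rust
// Illustrative only: how an agent might shift behavior per TaskHealth.
enum TaskHealth {
    OnTrack,
    AtRisk,
    OffTrack,
}

fn response_style(health: &TaskHealth) -> &'static str {
    match health {
        TaskHealth::OnTrack => "Speed mode: be concise and confident.",
        TaskHealth::AtRisk => "Careful mode: verify assumptions, ask clarifying questions.",
        TaskHealth::OffTrack => "Stop mode: reassess the plan and consult the user.",
    }
}
```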
Roadmap
✅ Phase 2: Context-Aware Rendering (Completed)
- ✅ Dynamic System Prompt generation from weighted fragments
- ✅ Priority-based ordering (Critical first, Low last)
- ✅ Context-aware fragment selection engine
- ✅ `RenderContext` for runtime context management
- ✅ `ContextualPrompt` wrapper for DTO integration
- ✅ Backward compatible with legacy `to_prompt()`
Phase 3: State Analyzer
- Conversation history analysis
- TaskHealth and user_state inference
- Lightweight classifier for context detection
Phase 4: Registry System
- Expertise storage and versioning
- Tag-based search and discovery
- Composition recommendations
Design Philosophy
- Independence: Works standalone, integrates optionally
- Extensibility: Future Prompt Compiler/State Analyzer ready
- Type Safety: Rust types + JSON Schema validation
- Simplicity: Start simple, grow as needed
Contributing
Contributions welcome! This is an early-stage project exploring new patterns for agent capability composition.
License
MIT License - see LICENSE for details.
Related Projects
- llm-toolkit - Core LLM utilities and agent framework
- llm-toolkit-macros - Derive macros for LLM types