# ⚠️ DEPRECATED - llm-toolkit-expertise
> **This crate has been archived and integrated into `llm-toolkit` core.**
>
> **Migration:** Use `llm-toolkit::agent::expertise` instead.
>
> All functionality is now available in the main `llm-toolkit` crate under the `agent` feature.
> This crate will not receive further updates.
---
# llm-toolkit-expertise (Archived)
**Agent as Code**: Graph-based composition system for LLM agent capabilities.
[crates.io](https://crates.io/crates/llm-toolkit-expertise) · [docs.rs](https://docs.rs/llm-toolkit-expertise) · [MIT License](LICENSE)
## Migration Guide
Replace imports:
```diff
- use llm_toolkit_expertise::{Expertise, WeightedFragment, KnowledgeFragment};
- use llm_toolkit_expertise::{Priority, TaskHealth, ContextProfile};
+ use llm_toolkit::agent::expertise::{Expertise, WeightedFragment, KnowledgeFragment};
+ use llm_toolkit::context::{Priority, TaskHealth, ContextProfile};
```
All APIs remain identical - only import paths have changed.
## Overview (Historical)
`llm-toolkit-expertise` provides a flexible, composition-based approach to defining LLM agent expertise through **weighted knowledge fragments**. Instead of rigid inheritance hierarchies, expertise is built by composing independent fragments with priorities and contextual activation rules.
### Core Concepts
- 🧩 **Composition over Inheritance**: Build agents like equipment sets
- ⚖️ **Weighted Fragments**: Knowledge with priority levels (Critical/High/Normal/Low)
- 🎯 **Context-Driven**: Enable dynamic behavior based on TaskHealth and runtime context
- 📊 **Visualization**: Generate Mermaid graphs and tree views
- 🔧 **JSON Schema**: Full schema support for validation and tooling
## Quick Start
Add to your `Cargo.toml`:
```toml
[dependencies]
llm-toolkit-expertise = "0.1.0"
```
### Basic Example
```rust
use llm_toolkit_expertise::{
    Expertise, WeightedFragment, KnowledgeFragment, Priority, ContextProfile, TaskHealth,
};

// Create a code reviewer expertise
let expertise = Expertise::new("rust-reviewer", "1.0")
    .with_tag("lang:rust")
    .with_tag("role:reviewer")
    .with_fragment(
        WeightedFragment::new(KnowledgeFragment::Text(
            "Always run cargo check before reviewing".to_string(),
        ))
        .with_priority(Priority::Critical),
    )
    .with_fragment(
        WeightedFragment::new(KnowledgeFragment::Logic {
            instruction: "Check for security issues".to_string(),
            steps: vec![
                "Scan for unsafe code".to_string(),
                "Verify input validation".to_string(),
            ],
        })
        .with_priority(Priority::High)
        .with_context(ContextProfile::Conditional {
            task_types: vec!["security-review".to_string()],
            user_states: vec![],
            task_health: Some(TaskHealth::AtRisk),
        }),
    );

// Generate the prompt
println!("{}", expertise.to_prompt());

// Generate visualizations
println!("{}", expertise.to_tree());
println!("{}", expertise.to_mermaid());
```
## Features
### 🎛️ Priority Levels
Control how strongly knowledge should be enforced:
- **Critical**: Absolute must-follow (violations = error)
- **High**: Recommended/emphasized (explicit instruction)
- **Normal**: Standard context (general guidance)
- **Low**: Reference information (background)
```rust
WeightedFragment::new(fragment)
    .with_priority(Priority::Critical)
```
### 🔄 Context-Aware Activation
Fragments can be conditionally activated based on:
- **Task Types**: `"debug"`, `"security-review"`, `"refactor"`, etc.
- **User States**: `"beginner"`, `"expert"`, `"confused"`, etc.
- **Task Health**: `OnTrack`, `AtRisk`, `OffTrack`
```rust
ContextProfile::Conditional {
    task_types: vec!["debug".to_string()],
    user_states: vec!["beginner".to_string()],
    task_health: Some(TaskHealth::AtRisk),
}
```
### 📚 Knowledge Fragment Types
Five types of knowledge representation:
1. **Logic**: Thinking procedures with Chain-of-Thought steps
2. **Guideline**: Behavioral rules with positive/negative examples (Anchors)
3. **QualityStandard**: Evaluation criteria and passing grades
4. **ToolDefinition**: Tool interfaces (JSON format)
5. **Text**: Free-form text knowledge
```rust
// Logic fragment: a thinking procedure with explicit steps
KnowledgeFragment::Logic {
    instruction: "Systematic debugging".to_string(),
    steps: vec!["Reproduce", "Isolate", "Fix", "Verify"]
        .iter()
        .map(|s| s.to_string())
        .collect(),
}

// Guideline with anchoring: a rule plus positive/negative examples
KnowledgeFragment::Guideline {
    rule: "Prefer explicit error handling".to_string(),
    anchors: vec![Anchor {
        context: "Parsing user input".to_string(),
        positive: "parse().map_err(|e| Error::Parse(e))?".to_string(),
        negative: "parse().unwrap()".to_string(),
        reason: "Unwrap can panic on bad input".to_string(),
    }],
}
```
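
`Text` is the simplest of the five: the fragment is just a string, which pairs naturally with `Priority::Low` for background reference material. A minimal sketch using the same builder API as above (the content is illustrative):

```rust
// Text fragment: free-form background knowledge, weighted as reference info
WeightedFragment::new(KnowledgeFragment::Text(
    "The codebase targets the Rust 2021 edition".to_string(),
))
.with_priority(Priority::Low)
```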
### 📊 Visualization
Generate multiple visualization formats:
**Tree View**:
```rust
let tree = expertise.to_tree();
// Output:
// Expertise: rust-reviewer (v1.0)
// ├─ Tags: lang:rust, role:reviewer
// └─ Content:
//    ├─ [CRITICAL] Text: Always run cargo check...
//    └─ [HIGH] Logic: Check for security issues
//       └─ Health: ⚠️ At Risk
```
**Mermaid Graph**:
```rust
let mermaid = expertise.to_mermaid();
// Generates Mermaid syntax with color-coded priority nodes
```
### 🔗 llm-toolkit Integration
Enable the `integration` feature to use `ToPrompt` trait:
```toml
[dependencies]
llm-toolkit-expertise = { version = "0.1.0", features = ["integration"] }
```
```rust
use llm_toolkit::ToPrompt;
let expertise = Expertise::new("test", "1.0")
    .with_fragment(/* ... */);

let prompt_part = expertise.to_prompt()?;
```
### 🎨 Context-Aware Rendering (Phase 2)
Dynamically filter and render expertise based on runtime context:
```rust
use llm_toolkit::ToPrompt;
use llm_toolkit_expertise::{
    Expertise, WeightedFragment, KnowledgeFragment,
    RenderContext, ContextualPrompt, Priority, ContextProfile, TaskHealth,
};
use serde::Serialize;

// Create expertise with conditional fragments
let expertise = Expertise::new("rust-tutor", "1.0")
    .with_fragment(
        WeightedFragment::new(KnowledgeFragment::Text(
            "You are a Rust programming tutor".to_string(),
        ))
        .with_priority(Priority::High)
        .with_context(ContextProfile::Always),
    )
    .with_fragment(
        WeightedFragment::new(KnowledgeFragment::Text(
            "Provide detailed explanations with examples".to_string(),
        ))
        .with_context(ContextProfile::Conditional {
            task_types: vec![],
            user_states: vec!["beginner".to_string()],
            task_health: None,
        }),
    )
    .with_fragment(
        WeightedFragment::new(KnowledgeFragment::Text(
            "Focus on advanced patterns and optimizations".to_string(),
        ))
        .with_context(ContextProfile::Conditional {
            task_types: vec![],
            user_states: vec!["expert".to_string()],
            task_health: None,
        }),
    );

// Method 1: Direct rendering with context
let beginner_context = RenderContext::new().with_user_state("beginner");
let beginner_prompt = expertise.to_prompt_with_render_context(&beginner_context);
// Contains: base fragment + beginner-specific guidance

// Method 2: ContextualPrompt wrapper (for DTO integration)
let expert_prompt = ContextualPrompt::from_expertise(&expertise, RenderContext::new())
    .with_user_state("expert")
    .to_prompt();
// Contains: base fragment + expert-specific guidance

// Method 3: DTO pattern integration
#[derive(Serialize, ToPrompt)]
#[prompt(template = "# Expertise\n{{expertise}}\n\n# Task\n{{task}}")]
struct AgentRequest {
    expertise: String, // ContextualPrompt.to_prompt() result
    task: String,
}

let request = AgentRequest {
    expertise: ContextualPrompt::from_expertise(
        &expertise,
        RenderContext::new().with_user_state("beginner"),
    )
    .to_prompt(),
    task: "Explain ownership and borrowing".to_string(),
};

let final_prompt = request.to_prompt();
```
**Key Features:**
- **Dynamic Filtering**: Fragments are included/excluded based on runtime context
- **Priority Ordering**: Critical → High → Normal → Low in the output
- **Multiple User States**: Supports checking against multiple simultaneous user states (see the sketch after this list)
- **DTO Integration**: `ContextualPrompt` implements `to_prompt()` for seamless template usage
- **Backward Compatible**: Existing `to_prompt()` still works (uses empty context)
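
A sketch of the multi-state case, assuming `with_user_state` accumulates states rather than replacing the previous one (an assumption, not verified against the crate):

```rust
// Assumption: each with_user_state call adds another simultaneous state,
// so fragments gated on either "beginner" or "confused" would activate.
let context = RenderContext::new()
    .with_user_state("beginner")
    .with_user_state("confused");
let prompt = expertise.to_prompt_with_render_context(&context);
```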
### 📋 JSON Schema Generation
Generate JSON Schema for validation and tooling:
```rust
use llm_toolkit_expertise::{dump_expertise_schema, save_expertise_schema};
// Get schema as JSON
let schema = dump_expertise_schema();
println!("{}", serde_json::to_string_pretty(&schema)?);
// Save to file
save_expertise_schema("expertise-schema.json")?;
```
## Examples
The crate includes several examples:
```bash
# Basic expertise creation and usage
cargo run --example basic_expertise
# Generate JSON Schema
cargo run --example generate_schema
# Context-aware prompt generation
cargo run --example prompt_generation
```
## Architecture
### Composition over Inheritance
Unlike traditional inheritance-based systems, `llm-toolkit-expertise` uses **graph composition**:
- **No fragile base class problem**: Parent changes don't break children
- **Flexible mixing**: Combine arbitrary fragments with tags (sketched after this list)
- **Conflict resolution**: Higher priority always wins
- **Dynamic assembly**: Runtime context determines active fragments
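
A minimal sketch of this style, using only the builder API from the Quick Start (the fragments themselves are illustrative):

```rust
// Two independent concerns composed into one expertise, with no shared base class.
let expertise = Expertise::new("rust-debugger", "1.0")
    .with_tag("lang:rust")
    .with_tag("role:debugger")
    // Security concern: only relevant to security-review tasks
    .with_fragment(
        WeightedFragment::new(KnowledgeFragment::Text(
            "Treat all external input as untrusted".to_string(),
        ))
        .with_priority(Priority::High)
        .with_context(ContextProfile::Conditional {
            task_types: vec!["security-review".to_string()],
            user_states: vec![],
            task_health: None,
        }),
    )
    // Style concern: always active
    .with_fragment(
        WeightedFragment::new(KnowledgeFragment::Text(
            "Follow the project rustfmt configuration".to_string(),
        ))
        .with_context(ContextProfile::Always),
    );
```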
### TaskHealth: Adaptive Behavior
The `TaskHealth` enum enables "gear shifting" based on task status:
- **OnTrack**: Speed mode (concise, confident)
- **AtRisk**: Careful mode (verify, clarify)
- **OffTrack**: Stop mode (reassess, consult)
This mirrors how senior engineers adjust their approach based on project health.
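For example, a recovery procedure can be gated on `OffTrack` so it only enters the prompt once the task has derailed. A sketch built from the `ContextProfile` API shown earlier (the fragment text is illustrative):

```rust
// Gear shifting: this fragment is inert while the task is OnTrack,
// and activates only when the task is judged OffTrack.
let recovery = WeightedFragment::new(KnowledgeFragment::Logic {
    instruction: "Stop and reassess before continuing".to_string(),
    steps: vec![
        "Summarize what has been tried so far".to_string(),
        "List the assumptions that may be wrong".to_string(),
        "Ask the user before attempting another fix".to_string(),
    ],
})
.with_priority(Priority::Critical)
.with_context(ContextProfile::Conditional {
    task_types: vec![],
    user_states: vec![],
    task_health: Some(TaskHealth::OffTrack),
});
```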
## Roadmap
### ✅ Phase 2: Context-Aware Rendering (Completed)
- ✅ Dynamic System Prompt generation from weighted fragments
- ✅ Priority-based ordering (Critical first, Low last)
- ✅ Context-aware fragment selection engine
- ✅ `RenderContext` for runtime context management
- ✅ `ContextualPrompt` wrapper for DTO integration
- ✅ Backward compatible with legacy `to_prompt()`
### Phase 3: State Analyzer
- Conversation history analysis
- TaskHealth and user_state inference
- Lightweight classifier for context detection
### Phase 4: Registry System
- Expertise storage and versioning
- Tag-based search and discovery
- Composition recommendations
## Design Philosophy
1. **Independence**: Works standalone, integrates optionally
2. **Extensibility**: Future Prompt Compiler/State Analyzer ready
3. **Type Safety**: Rust types + JSON Schema validation
4. **Simplicity**: Start simple, grow as needed
## Contributing
Contributions welcome! This is an early-stage project exploring new patterns for agent capability composition.
## License
MIT License - see [LICENSE](LICENSE) for details.
## Related Projects
- [llm-toolkit](https://github.com/ynishi/llm-toolkit) - Core LLM utilities and agent framework
- [llm-toolkit-macros](https://crates.io/crates/llm-toolkit-macros) - Derive macros for LLM types