This file is a merged representation of the entire codebase, combined into a single document by Repomix.
<file_summary>
This section contains a summary of this file.
<purpose>
This file contains a packed representation of the entire repository's contents.
It is designed to be easily consumable by AI systems for analysis, code review,
or other automated processes.
</purpose>
<file_format>
The content is organized as follows:
1. This summary section
2. Repository information
3. Directory structure
4. Repository files (if enabled)
5. Multiple file entries, each consisting of:
- File path as an attribute
- Full contents of the file
</file_format>
<usage_guidelines>
- This file should be treated as read-only. Any changes should be made to the
original repository files, not this packed version.
- When processing this file, use the file path to distinguish
between different files in the repository.
- Be aware that this file may contain sensitive information. Handle it with
the same level of security as you would the original repository.
</usage_guidelines>
<notes>
- Some files may have been excluded based on .gitignore rules and Repomix's configuration
- Binary files are not included in this packed representation. Please refer to the Repository Structure section for a complete list of file paths, including binary files
- Files matching patterns in .gitignore are excluded
- Files matching default ignore patterns are excluded
- Files are sorted by Git change count (files with more changes are at the bottom)
</notes>
</file_summary>
<directory_structure>
.claude/
agents/
cl/
codebase-analyzer.md
codebase-locator.md
codebase-pattern-finder.md
thoughts-analyzer.md
thoughts-locator.md
web-search-researcher.md
commands/
cl/
commit.md
create_plan.md
debug.md
describe_pr.md
implement_plan.md
research_codebase.md
.scud/
tasks/
tasks.scg
config.toml
.taskmaster/
docs/
plan-metadata-filtering.md
docs/
manifest-discovery-plan.md
research-anthropic-skills-support.md
examples/
community-repo-ci/
.github/
workflows/
validate-skills.yml
README.md
validate.sh
gauntlet-ci/
.github/
workflows/
validate-resources.yml
CONTRIBUTING.md
validate-resources.sh
src/
bundle.rs
config.rs
discover.rs
install.rs
main.rs
manifest.rs
setup.rs
source.rs
target.rs
.gitignore
Cargo.toml
CLAUDE.md
LICENSE
README.md
</directory_structure>
<files>
This section contains the contents of the repository's files.
<file path="docs/manifest-discovery-plan.md">
# Manifest-Based Bundle Discovery Implementation Plan
## Overview
Add `skm.toml` manifest support to skill-manager so that repositories like fg-synapse can declare their bundles with custom directory structures (e.g., `plugins/{name}/skills/base/` instead of flat `skills/`).
This enables:
- `skm sources add fg https://github.com/millstone-ai/fg-synapse`
- `skm add fg` — install all bundles from named source
- `skm add fg/synapse-docs` — install specific bundle from named source
## Target Branch
```bash
git checkout -b feat/manifest-discovery
```
---
## Phase 2A: Add Manifest Parsing Module
### Overview
Add a new `src/manifest.rs` module that can parse an `skm.toml` file at the root of a source directory.
### skm.toml Format
```toml
# skm.toml — declares how skm should discover bundles in this source
[source]
name = "fg-synapse"
description = "FacilityGrid Synapse plugin marketplace"
[[bundles]]
name = "synapse-core"
path = "plugins/core"
description = "Essential hooks, validation, and MCP server configuration"
tags = ["validation", "hooks", "mcp"]
[bundles.paths]
skills = "skills/base"
agents = "agents/base"
commands = "commands/base"
rules = "rules/base"
```
**Semantics**:
- `[source]` — optional metadata (name for display, description)
- `[[bundles]]` — each bundle declaration
- `name` — bundle name used in `skm add <name>` and `skm add <source>/<name>`
- `path` — relative path from source root to the bundle directory
- `description` / `tags` — for `skm list` display and fuzzy search
- `[bundles.paths]` — maps component types to subdirectories within the bundle path. Defaults: `skills = "skills"`, `agents = "agents"`, `commands = "commands"`, `rules = "rules"`
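By contrast, a declaration can rely entirely on those defaults. A minimal sketch (the `plugins/docs` path is illustrative, not taken from the Synapse repo):

```toml
# With [bundles.paths] omitted, skm falls back to skills/, agents/,
# commands/, and rules/ under plugins/docs.
[[bundles]]
name = "synapse-docs"
path = "plugins/docs"
```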
### Changes Required
#### 1. New file: `src/manifest.rs`
```rust
use serde::Deserialize;
use std::path::Path;
use crate::bundle::{Bundle, BundleMeta, SkillFile, SkillType};

#[derive(Debug, Deserialize)]
pub struct SourceManifest {
    pub source: Option<SourceMeta>,
    #[serde(default)]
    pub bundles: Vec<BundleDeclaration>,
}

#[derive(Debug, Deserialize)]
pub struct SourceMeta {
    pub name: Option<String>,
    pub description: Option<String>,
}

#[derive(Debug, Deserialize)]
pub struct BundleDeclaration {
    pub name: String,
    pub path: String,
    pub description: Option<String>,
    pub tags: Option<Vec<String>>,
    #[serde(default)]
    pub paths: ComponentPaths,
}

#[derive(Debug, Deserialize, Default)]
pub struct ComponentPaths {
    pub skills: Option<String>,
    pub agents: Option<String>,
    pub commands: Option<String>,
    pub rules: Option<String>,
}

impl ComponentPaths {
    pub fn skills_dir(&self) -> &str {
        self.skills.as_deref().unwrap_or("skills")
    }
    pub fn agents_dir(&self) -> &str {
        self.agents.as_deref().unwrap_or("agents")
    }
    pub fn commands_dir(&self) -> &str {
        self.commands.as_deref().unwrap_or("commands")
    }
    pub fn rules_dir(&self) -> &str {
        self.rules.as_deref().unwrap_or("rules")
    }
}

/// Load and parse an skm.toml manifest from a source root directory.
/// Takes `&Path` rather than `&PathBuf` to satisfy clippy's `ptr_arg` lint;
/// callers holding a `PathBuf` can pass it via deref coercion.
pub fn load_manifest(source_root: &Path) -> Option<SourceManifest> {
    let manifest_path = source_root.join("skm.toml");
    if !manifest_path.exists() {
        return None;
    }
    let content = std::fs::read_to_string(&manifest_path).ok()?;
    toml::from_str(&content).ok()
}

/// Build a Bundle from a manifest declaration by scanning its declared paths
pub fn bundle_from_declaration(
    source_root: &Path,
    decl: &BundleDeclaration,
) -> anyhow::Result<Bundle> {
    let bundle_root = source_root.join(&decl.path);
    let skills = scan_component_dir(
        &bundle_root.join(decl.paths.skills_dir()),
        SkillType::Skill,
    )?;
    let agents = scan_component_dir(
        &bundle_root.join(decl.paths.agents_dir()),
        SkillType::Agent,
    )?;
    let commands = scan_component_dir(
        &bundle_root.join(decl.paths.commands_dir()),
        SkillType::Command,
    )?;
    let rules = scan_component_dir(
        &bundle_root.join(decl.paths.rules_dir()),
        SkillType::Rule,
    )?;
    Ok(Bundle {
        name: decl.name.clone(),
        path: bundle_root,
        skills,
        agents,
        commands,
        rules,
        meta: BundleMeta {
            author: None,
            description: decl.description.clone(),
        },
    })
}

/// Scan a component directory for skill files.
/// Handles BOTH flat .md files AND {name}/SKILL.md directory format.
fn scan_component_dir(dir: &Path, skill_type: SkillType) -> anyhow::Result<Vec<SkillFile>> {
    if !dir.exists() {
        return Ok(vec![]);
    }
    let mut files = vec![];
    for entry in std::fs::read_dir(dir)? {
        let entry = entry?;
        let path = entry.path();
        if path.is_file() && path.extension().is_some_and(|e| e == "md" || e == "mdc") {
            // Flat .md file (e.g., agents/base/review-agent.md)
            let name = path
                .file_stem()
                .and_then(|n| n.to_str())
                .unwrap_or("")
                .to_string();
            files.push(SkillFile {
                name,
                path,
                skill_type,
            });
        } else if path.is_dir() {
            // Directory format: look for SKILL.md, AGENT.md, COMMAND.md, RULE.md, or any .md
            let expected_names = match skill_type {
                SkillType::Skill => vec!["SKILL.md", "skill.md"],
                SkillType::Agent => vec!["AGENT.md", "agent.md"],
                SkillType::Command => vec!["COMMAND.md", "command.md"],
                SkillType::Rule => vec!["RULE.md", "rule.md"],
            };
            let mut found = false;
            for expected in &expected_names {
                let md_path = path.join(expected);
                if md_path.exists() {
                    let folder_name = path
                        .file_name()
                        .and_then(|n| n.to_str())
                        .unwrap_or("")
                        .to_string();
                    files.push(SkillFile {
                        name: folder_name,
                        path: md_path,
                        skill_type,
                    });
                    found = true;
                    break;
                }
            }
            // Fall back to any .md file in the directory
            if !found {
                if let Ok(entries) = std::fs::read_dir(&path) {
                    for sub_entry in entries.flatten() {
                        let sub_path = sub_entry.path();
                        if sub_path.is_file()
                            && sub_path.extension().is_some_and(|e| e == "md")
                        {
                            let folder_name = path
                                .file_name()
                                .and_then(|n| n.to_str())
                                .unwrap_or("")
                                .to_string();
                            files.push(SkillFile {
                                name: folder_name,
                                path: sub_path,
                                skill_type,
                            });
                            break;
                        }
                    }
                }
            }
        }
    }
    files.sort_by(|a, b| a.name.cmp(&b.name));
    Ok(files)
}
```
**Key design decision**: `scan_component_dir()` handles BOTH flat `.md` files (like `agents/base/review-agent.md`) and directory-with-SKILL.md (like `skills/base/data-model-visualizer/SKILL.md`). This is critical because Synapse uses both patterns — agents are flat `.md` files, skills are directories.
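That dual handling can be illustrated with a self-contained, stdlib-only toy. This is simplified relative to the plan's `scan_component_dir` (no `.mdc` support, no per-type file names, no any-`.md` fallback), and the directory names below are hypothetical:

```rust
use std::fs;
use std::path::{Path, PathBuf};

// Toy scanner: accepts both flat `<name>.md` files and `<name>/SKILL.md` dirs.
fn scan(dir: &Path) -> Vec<(String, PathBuf)> {
    let mut out = Vec::new();
    if !dir.exists() {
        return out;
    }
    for entry in fs::read_dir(dir).unwrap() {
        let path = entry.unwrap().path();
        if path.is_file() && path.extension().is_some_and(|e| e == "md") {
            // Flat file: the component name comes from the file stem.
            out.push((path.file_stem().unwrap().to_string_lossy().into_owned(), path));
        } else if path.is_dir() {
            // Directory format: the component name comes from the folder.
            let md = path.join("SKILL.md");
            if md.exists() {
                out.push((path.file_name().unwrap().to_string_lossy().into_owned(), md));
            }
        }
    }
    out.sort_by(|a, b| a.0.cmp(&b.0));
    out
}

fn main() {
    let root = std::env::temp_dir().join("skm-scan-demo");
    let _ = fs::remove_dir_all(&root);
    fs::create_dir_all(root.join("data-model-visualizer")).unwrap();
    fs::write(root.join("data-model-visualizer/SKILL.md"), "# skill").unwrap();
    fs::write(root.join("review-agent.md"), "# agent").unwrap();
    let names: Vec<String> = scan(&root).into_iter().map(|(n, _)| n).collect();
    // Both layouts are discovered and sorted by name.
    assert_eq!(names, ["data-model-visualizer", "review-agent"]);
}
```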
#### 2. Register module in `src/main.rs`
Add `mod manifest;` at `main.rs:1` (alongside existing `mod bundle;`, etc.)
```rust
mod bundle;
mod config;
mod discover;
mod install;
mod manifest; // NEW
mod setup;
mod source;
mod target;
```
#### 3. Tests for manifest parsing
Add to `src/manifest.rs` (see full test suite in the detailed plan).
### Success Criteria
- `cargo test manifest` passes all tests
- `cargo build` compiles cleanly
### Files Changed
- `src/manifest.rs` (NEW — ~250 lines)
- `src/main.rs:1` (add `mod manifest;`)
---
## Phase 2B: Integrate Manifest Discovery into Source Cascade
### Overview
Modify `LocalSource::list_bundles()` to check for `skm.toml` manifest as the highest-priority format, before the existing cascade.
### Changes Required
#### 1. Modify `source.rs:28-74`
Current code at `source.rs:28`:
```rust
fn list_bundles(&self) -> Result<Vec<Bundle>> {
if !self.path.exists() {
return Ok(vec![]);
}
// Check if this is a resources-format source...
```
Change to:
```rust
fn list_bundles(&self) -> Result<Vec<Bundle>> {
if !self.path.exists() {
return Ok(vec![]);
}
// Check for skm.toml manifest (highest priority)
if let Some(manifest) = crate::manifest::load_manifest(&self.path) {
return self.list_bundles_from_manifest(manifest);
}
// Check if this is a resources-format source (has resources/ directory at root)
// ... existing cascade unchanged ...
}
```
#### 2. Add `list_bundles_from_manifest` method to `LocalSource`
After the existing `display_path()` method at `source.rs:76-85`, add:
```rust
impl LocalSource {
// ... existing methods ...
fn list_bundles_from_manifest(
&self,
manifest: crate::manifest::SourceManifest,
) -> Result<Vec<Bundle>> {
let mut bundles = Vec::new();
for decl in &manifest.bundles {
let bundle_root = self.path.join(&decl.path);
if !bundle_root.exists() {
eprintln!(
" {}: bundle path {} does not exist",
"Warning".yellow(),
decl.path
);
continue;
}
match crate::manifest::bundle_from_declaration(&self.path, decl) {
Ok(bundle) if !bundle.is_empty() => bundles.push(bundle),
Ok(_) => {
// Bundle exists but has no files — skip silently
}
Err(e) => {
eprintln!(
" {}: failed to scan bundle {}: {}",
"Warning".yellow(),
decl.name,
e
);
}
}
}
Ok(bundles)
}
}
```
Note: `list_bundles_from_manifest` must be defined as an inherent method on `LocalSource`, not inside the `impl Source for LocalSource` block.
### Success Criteria
- All existing tests continue to pass (no regressions)
- New manifest tests pass
- Sources without `skm.toml` still use existing cascade (backwards compatible)
### Files Changed
- `src/source.rs:28` (insert manifest check)
- `src/source.rs:86+` (add `list_bundles_from_manifest` method)
- `src/source.rs` tests section (add 2 tests)
---
## Phase 2C: Add Named Sources to Config
### Overview
Add an optional `name` field to `SourceConfig` so sources can be referenced by name (e.g., `skm add fg`).
### Changes Required
#### 1. Modify `SourceConfig` enum at `config.rs:16-23`
Current:
```rust
#[derive(Debug, Serialize, Deserialize, Clone)]
#[serde(tag = "type")]
pub enum SourceConfig {
#[serde(rename = "local")]
Local { path: String },
#[serde(rename = "git")]
Git { url: String },
}
```
Change to:
```rust
#[derive(Debug, Serialize, Deserialize, Clone)]
#[serde(tag = "type")]
pub enum SourceConfig {
#[serde(rename = "local")]
Local {
path: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
name: Option<String>,
},
#[serde(rename = "git")]
Git {
url: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
name: Option<String>,
},
}
```
#### 2. Update pattern matching throughout config.rs
Update `display()`, `add_source()`, `remove_source()`, `sources()`, and `git_sources()` to destructure the new field with `..`, and add a `name()` accessor method.
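A standalone sketch of that `name()` accessor (serde derives dropped so the snippet compiles on its own; the real method lives alongside the enum in `config.rs`):

```rust
#[derive(Debug, Clone)]
pub enum SourceConfig {
    Local { path: String, name: Option<String> },
    Git { url: String, name: Option<String> },
}

impl SourceConfig {
    /// Optional alias for this source, regardless of variant.
    pub fn name(&self) -> Option<&str> {
        match self {
            SourceConfig::Local { name, .. } | SourceConfig::Git { name, .. } => name.as_deref(),
        }
    }
}

fn main() {
    let named = SourceConfig::Git {
        url: "https://github.com/millstone-ai/fg-synapse".into(),
        name: Some("fg".into()),
    };
    let unnamed = SourceConfig::Local { path: "/path/to/skills".into(), name: None };
    assert_eq!(named.name(), Some("fg"));
    assert_eq!(unnamed.name(), None);
}
```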
#### 3. Add `find_source_by_name()` method to Config
```rust
/// Find a source by its name
pub fn find_source_by_name(&self, name: &str) -> Option<(Box<dyn Source>, &SourceConfig)> {
for source_config in &self.sources {
if source_config.name() == Some(name) {
let source: Option<Box<dyn Source>> = match source_config {
SourceConfig::Local { path, .. } => {
let expanded = expand_tilde(path);
Some(Box::new(LocalSource::new(expanded)))
}
SourceConfig::Git { url, .. } => {
GitSource::new(url.clone()).ok().map(|s| Box::new(s) as Box<dyn Source>)
}
};
if let Some(source) = source {
return Some((source, source_config));
}
}
}
None
}
```
#### 4. Update `sources_add` CLI handler at `main.rs`
Change `SourcesAction::Add` to accept an optional name:
```rust
/// Add a source (local path or git URL)
Add {
/// Optional name for the source (e.g., "fg")
#[arg(value_name = "NAME")]
name: Option<String>,
/// Path or URL to add
path: String,
},
```
### Example Usage
```bash
# Named source (new):
skm sources add fg https://github.com/millstone-ai/fg-synapse
skm sources add fg /path/to/fg-synapse
# Unnamed source (backwards compatible):
skm sources add https://github.com/pyrex41/claude_skills
skm sources add /path/to/local/skills
```
### Success Criteria
- Existing config files without `name` field still load correctly
- `skm sources add fg /path/to/synapse` stores name
- `skm sources add /path/to/other` works without name (backwards compatible)
- `config.find_source_by_name("fg")` returns the named source
- All existing tests pass
### Files Changed
- `src/config.rs` (SourceConfig variants, pattern matches, new methods)
- `src/main.rs` (SourcesAction::Add, sources_add function)
---
## Phase 2D: Add Source-Scoped Bundle References
### Overview
Support `skm add fg` (install all bundles from named source) and `skm add fg/synapse-docs` (install specific bundle from named source).
### Changes Required
#### 1. Add reference parsing in `main.rs`
```rust
/// Parse a bundle reference that may be source-scoped.
/// "fg/synapse-docs" → (Some("fg"), Some("synapse-docs"))
/// "fg" → could be source name OR bundle name — caller resolves
fn parse_bundle_ref(input: &str) -> (Option<&str>, Option<&str>) {
if let Some((source, bundle)) = input.split_once('/') {
(Some(source), Some(bundle))
} else {
(None, Some(input))
}
}
```
#### 2. Add `install_from_source` function in `install.rs`
```rust
/// Install all bundles from a named source
pub fn install_from_source(
source: &dyn crate::source::Source,
tool: &Tool,
target_dir: &PathBuf,
types: &[SkillType],
) -> Result<()> {
// ... iterate bundles, install each
}
/// Install a specific bundle from a specific source
pub fn install_bundle_from_source(
source: &dyn crate::source::Source,
bundle_name: &str,
tool: &Tool,
target_dir: &PathBuf,
types: &[SkillType],
) -> Result<()> {
// ... find bundle by name, install it
}
```
#### 3. Add `do_install` dispatch function
Resolution order for `skm add <arg>`:
1. If `<arg>` contains `/` → treat as `<source>/<bundle>`, look up source by name
2. If `<arg>` matches a source name → install ALL bundles from that source
3. Otherwise → search all sources for a bundle with that name (existing behavior)
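The three-step resolution above can be sketched as a pure function. The returned strings are placeholders for the real install calls, and the source-name set stands in for `config.find_source_by_name`:

```rust
use std::collections::HashSet;

fn parse_bundle_ref(input: &str) -> (Option<&str>, Option<&str>) {
    match input.split_once('/') {
        Some((source, bundle)) => (Some(source), Some(bundle)),
        None => (None, Some(input)),
    }
}

// Decide what `skm add <arg>` should do.
fn resolve(arg: &str, source_names: &HashSet<&str>) -> String {
    match parse_bundle_ref(arg) {
        // 1. Explicit "<source>/<bundle>" reference.
        (Some(source), Some(bundle)) => format!("install '{bundle}' from source '{source}'"),
        // 2. Bare name matching a configured source: install everything from it.
        (None, Some(name)) if source_names.contains(name) => {
            format!("install all bundles from source '{name}'")
        }
        // 3. Otherwise: search every source for a bundle with this name.
        (None, Some(name)) => format!("search all sources for bundle '{name}'"),
        _ => unreachable!("parse_bundle_ref always yields a bundle component"),
    }
}

fn main() {
    let sources = HashSet::from(["fg"]);
    assert_eq!(resolve("fg/synapse-docs", &sources), "install 'synapse-docs' from source 'fg'");
    assert_eq!(resolve("fg", &sources), "install all bundles from source 'fg'");
    assert_eq!(resolve("cl", &sources), "search all sources for bundle 'cl'");
}
```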
### Success Criteria
- `skm add fg` installs all bundles from the "fg" named source
- `skm add fg/synapse-docs` installs only "synapse-docs" from the "fg" source
- `skm add cl` still works as before (searches all sources for a bundle named "cl")
- Error messages clearly distinguish "source not found" from "bundle not found"
### Files Changed
- `src/main.rs` (add `parse_bundle_ref`, `do_install`, update install dispatch)
- `src/install.rs` (add `install_from_source`, `install_bundle_from_source`)
---
## Phase 2E: Update List Display for Manifest Sources
### Overview
Update `list_bundles()` and `browse_bundles()` to show manifest metadata (descriptions, tags) and source names.
### Changes Required
Show description when available in `list_bundles()` and include in fuzzy search in `browse_bundles()`.
### Success Criteria
- `skm` (no args) shows bundle descriptions from manifest
- `skm list` includes descriptions in fuzzy search
### Files Changed
- `src/main.rs:699-767` (list display)
- `src/main.rs:259-349` (browse display)
---
## Full Test Verification
```bash
cargo test # All tests pass
cargo build --release # Clean compile
cargo clippy # No warnings
```
## Dependency Graph
```
Phase 2A (Manifest parsing)
└── Phase 2B (Source cascade integration)
└── Phase 2C (Named sources)
└── Phase 2D (Source-scoped install)
└── Phase 2E (Display updates)
```
Each phase builds on the previous. Do not skip ahead.
</file>
<file path="docs/research-anthropic-skills-support.md">
---
date: 2026-01-27T12:00:00-08:00
topic: "Supporting anthropics/skills repo format and `skm add pptx` workflow"
tags: [research, codebase, source-formats, anthropic-format, bundle-discovery]
status: complete
---
# Research: Supporting `anthropics/skills` Repo as a Source
## Research Question
How does the codebase currently handle sources, bundles, and discovery, and what exists today that relates to supporting the `anthropics/skills` repo structure so that a user could run `skm add pptx` and have it find/install the `pptx` skill from that repo?
## Summary
The codebase **already has Anthropic-format detection and parsing** built into `bundle.rs`. When a source directory contains `skills/{name}/SKILL.md`, `is_anthropic_format()` returns true and `list_from_anthropic_path()` creates one bundle per skill folder. This means that if `https://github.com/anthropics/skills` is added as a git source, `skm add pptx` would search for a bundle named `pptx` across all sources and find it. The key gap is that the user must first manually add the repo as a source (`skm sources add https://github.com/anthropics/skills`). There is no built-in "default registry" or "search remote repos" capability.
## Detailed Findings
### 1. Existing Source Format Detection (`source.rs:62-114`)
`LocalSource::list_bundles()` uses a priority-based detection chain:
1. **`skm.toml` manifest** (highest priority) - `manifest.rs` - Explicit bundle declarations with custom paths
2. **Resources format** - `Bundle::is_resources_format()` - Checks for `resources/` directory at root
3. **Anthropic format** - `Bundle::is_anthropic_format()` - Checks for `skills/{name}/SKILL.md` at root
4. **Default format** (fallback) - Each top-level subdirectory is treated as a bundle containing `skills/`, `agents/`, `commands/`, `rules/` subdirs
`GitSource::list_bundles()` delegates entirely to `LocalSource` after cloning (`source.rs:233-246`).
### 2. Anthropic Format Detection (`bundle.rs:218-298`)
Two methods handle this format:
- **`is_anthropic_format(path)`** (`bundle.rs:218-234`) - Returns true if `skills/` exists and any subdirectory contains `SKILL.md`
- **`list_from_anthropic_path(path)`** (`bundle.rs:239-298`) - Creates one `Bundle` per skill folder:
- Reads YAML frontmatter from `SKILL.md` for `name`, `author`, `description`
- Uses folder name as fallback if no `name` in frontmatter
- Each bundle has exactly one `SkillFile` of type `SkillType::Skill`
- Skips hidden (`.`) and template (`_`) directories
### 3. The `anthropics/skills` Repo Structure
The actual repo at `https://github.com/anthropics/skills` has this layout:
```
skills/
algorithmic-art/SKILL.md
brand-guidelines/SKILL.md
canvas-design/SKILL.md
doc-coauthoring/SKILL.md
docx/SKILL.md
frontend-design/SKILL.md
internal-comms/SKILL.md
mcp-builder/SKILL.md
pdf/SKILL.md
pptx/SKILL.md
skill-creator/SKILL.md
slack-gif-creator/SKILL.md
theme-factory/SKILL.md
web-artifacts-builder/SKILL.md
webapp-testing/SKILL.md
xlsx/SKILL.md
```
Each `SKILL.md` has YAML frontmatter with `name` (hyphenated, matches folder name), `description`, and optionally `license`. Many skills also contain companion files: `scripts/`, `templates/`, reference `.md` files, etc.
This **exactly matches** the Anthropic format that `bundle.rs` already detects. The existing `is_anthropic_format()` check would return `true` for this repo, and `list_from_anthropic_path()` would create 16 bundles, one per skill.
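As a rough illustration of what that frontmatter read involves, here is a hypothetical stdlib-only extractor for a single flat key. It does not reproduce the actual `extract_frontmatter()` at `bundle.rs:300-313`:

```rust
// Pull a simple `key: value` pair out of a `---`-delimited frontmatter block.
// Handles only flat string values, which is all this sketch needs.
fn frontmatter_field(content: &str, key: &str) -> Option<String> {
    let mut lines = content.lines();
    if lines.next()?.trim() != "---" {
        return None; // No frontmatter block at the top of the file.
    }
    let prefix = format!("{key}:");
    for line in lines {
        if line.trim() == "---" {
            break; // End of the frontmatter block.
        }
        if let Some(rest) = line.strip_prefix(&prefix) {
            return Some(rest.trim().trim_matches('"').to_string());
        }
    }
    None
}

fn main() {
    let skill_md = "---\nname: pptx\ndescription: \"Create PowerPoint decks\"\n---\n# pptx\n";
    assert_eq!(frontmatter_field(skill_md, "name").as_deref(), Some("pptx"));
    assert_eq!(
        frontmatter_field(skill_md, "description").as_deref(),
        Some("Create PowerPoint decks")
    );
    assert_eq!(frontmatter_field(skill_md, "license"), None);
}
```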
### 4. Bundle Resolution Flow for `skm add pptx`
The install flow (`main.rs:196-264`) works as follows:
1. `do_install()` (`main.rs:1225-1264`) parses the bundle reference
- `parse_bundle_ref("pptx")` returns `(None, Some("pptx"))` - no source scope
2. First checks if `"pptx"` is a named source - it won't be
3. Falls through to `install_bundle()` (`install.rs:11-81`)
4. `config.find_bundle("pptx")` (`config.rs:182-197`) iterates all sources:
- For each source, calls `source.list_bundles()` which returns all bundles
- Looks for `bundle.name == "pptx"`
5. If found, installs the bundle's files to the target directory
For the `anthropics/skills` repo, the `pptx` folder's `SKILL.md` frontmatter has `name: pptx`, so the bundle name would be `"pptx"` and `find_bundle("pptx")` would match it.
### 5. What Currently Works (if repo is added as a source)
If a user runs:
```bash
skm sources add https://github.com/anthropics/skills
skm add pptx
```
This would:
1. Clone the repo to the git cache directory
2. Detect the Anthropic format via `is_anthropic_format()`
3. Parse all 16 skill folders into individual bundles
4. Find the `pptx` bundle
5. Install `SKILL.md` to `.claude/skills/pptx/pptx.md` (for Claude target)
**However**, only `SKILL.md` itself would be installed. The companion files (scripts, reference docs, templates) that live alongside `SKILL.md` in each skill directory would **not** be copied, because the current `SkillFile` struct only tracks a single file path (`bundle.rs:50-58`).
### 6. Current `SkillFile` Limitation
The `SkillFile` struct (`bundle.rs:50-58`) represents a single `.md` file:
```rust
pub struct SkillFile {
pub name: String,
pub path: PathBuf, // Single file path
pub skill_type: SkillType,
}
```
When `list_from_anthropic_path()` creates a bundle for `pptx`, it creates one `SkillFile` pointing at `skills/pptx/SKILL.md`. The companion files (`scripts/`, `ooxml.md`, `html2pptx.md`, etc.) are not tracked.
The install path in `target.rs` (`Tool::write_file()`) operates on individual `SkillFile` objects, writing one file per call. There's no mechanism to copy an entire directory tree.
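If companion files were ever to be installed, the target layer would need a directory-tree copy rather than per-file writes. A hypothetical stdlib sketch of that missing piece (not existing `skm` code; the file names in `main` are illustrative):

```rust
use std::fs;
use std::io;
use std::path::Path;

// Recursively copy a skill directory (SKILL.md plus scripts/, templates/, ...).
fn copy_tree(src: &Path, dst: &Path) -> io::Result<()> {
    fs::create_dir_all(dst)?;
    for entry in fs::read_dir(src)? {
        let entry = entry?;
        let target = dst.join(entry.file_name());
        if entry.file_type()?.is_dir() {
            copy_tree(&entry.path(), &target)?;
        } else {
            fs::copy(entry.path(), &target)?;
        }
    }
    Ok(())
}

fn main() -> io::Result<()> {
    let src = std::env::temp_dir().join("skm-pptx-src");
    let dst = std::env::temp_dir().join("skm-pptx-dst");
    let _ = fs::remove_dir_all(&src);
    let _ = fs::remove_dir_all(&dst);
    fs::create_dir_all(src.join("scripts"))?;
    fs::write(src.join("SKILL.md"), "# pptx")?;
    fs::write(src.join("scripts").join("helper.py"), "# helper")?;
    copy_tree(&src, &dst)?;
    // Companion files survive the copy alongside SKILL.md.
    assert!(dst.join("SKILL.md").exists());
    assert!(dst.join("scripts").join("helper.py").exists());
    Ok(())
}
```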
### 7. Config and Source Management (`config.rs`)
Sources are stored in `~/.config/skm/config.toml`:
```toml
[[sources]]
type = "git"
url = "https://github.com/anthropics/skills"
name = "anthropic" # optional alias
```
The `SourceConfig` enum (`config.rs:16-31`) supports both `Local` and `Git` variants, each with an optional `name` field for aliasing. Named sources enable scoped references like `skm add anthropic/pptx`.
`find_bundle()` (`config.rs:182-197`) does a linear search across all sources, returning the first match. Source order determines priority.
### 8. Manifest System (`manifest.rs`)
The manifest system (`skm.toml`) provides an alternative, more explicit way to declare bundles. A manifest in the `anthropics/skills` repo could look like:
```toml
[source]
name = "anthropic-skills"
[[bundles]]
name = "pptx"
path = "skills/pptx"
[[bundles]]
name = "pdf"
path = "skills/pdf"
```
However, the `anthropics/skills` repo does not contain an `skm.toml`, so auto-detection via `is_anthropic_format()` is the relevant path.
### 9. Marketplace Manifest in `anthropics/skills`
The repo also has `.claude-plugin/marketplace.json` which groups skills into two plugin bundles (`document-skills` and `example-skills`). This is a Claude Code-specific plugin manifest format that `skm` does not currently read.
## Code References
- `src/source.rs:62-114` - `LocalSource::list_bundles()` with format detection chain
- `src/source.rs:233-246` - `GitSource::list_bundles()` delegates to `LocalSource`
- `src/bundle.rs:218-234` - `is_anthropic_format()` detection
- `src/bundle.rs:239-298` - `list_from_anthropic_path()` bundle creation
- `src/bundle.rs:300-313` - `extract_frontmatter()` YAML parsing
- `src/bundle.rs:50-58` - `SkillFile` struct (single file only)
- `src/config.rs:182-197` - `find_bundle()` cross-source search
- `src/config.rs:136-149` - `add_source()` with dedup
- `src/install.rs:11-81` - `install_bundle()` orchestration
- `src/target.rs:54-65` - `Tool::write_file()` per-file install
- `src/main.rs:1213-1264` - `parse_bundle_ref()` and `do_install()` dispatch
- `src/manifest.rs:52-60` - `load_manifest()` (skm.toml)
## Architecture Documentation
### Current Format Detection Priority
```
Source directory
├─ Has skm.toml? → Manifest-based discovery
├─ Has resources/? → Resources format (community repos)
├─ Has skills/*/SKILL.md? → Anthropic format (each skill = bundle)
└─ Default → Each subdir is a bundle with skills/agents/commands/rules/
```
### Current Data Flow for `skm add <name>`
```
CLI parse → do_install() → parse_bundle_ref()
→ check named source (source/bundle split on /)
→ or find_bundle() across all sources
→ each source.list_bundles() → format auto-detection
→ match bundle by name
→ install_bundle() → tool.write_file() per SkillFile
```
## Key Gaps for Full `anthropics/skills` Support
1. **Companion files not installed**: `SKILL.md` references scripts, templates, and reference docs that live in the same directory. Only `SKILL.md` itself is tracked and copied.
2. **No default/built-in source**: Users must manually `skm sources add` the repo before `skm add pptx` works. There's no concept of a default registry or well-known source.
3. **No remote search**: `skm add pptx` only searches locally-configured sources. There's no ability to search a remote registry/repo that hasn't been added yet.
## Open Questions
- Should companion files (scripts, templates, reference docs) be installed alongside `SKILL.md`? If so, the install path needs to handle directory trees, not just single files.
- Should the `anthropics/skills` repo be a built-in default source, or should there be a registry/search mechanism?
- Should the `.claude-plugin/marketplace.json` groupings be respected (e.g., `skm add document-skills` installing all 4 doc skills)?
</file>
<file path=".claude/agents/cl/codebase-analyzer.md">
---
name: codebase-analyzer
description: Analyzes codebase implementation details. Call the codebase-analyzer agent when you need to find detailed information about specific components. As always, the more detailed your request prompt, the better! :)
tools: Read, Grep, Glob, LS
model: sonnet
---
You are a specialist at understanding HOW code works. Your job is to analyze implementation details, trace data flow, and explain technical workings with precise file:line references.
## CRITICAL: YOUR ONLY JOB IS TO DOCUMENT AND EXPLAIN THE CODEBASE AS IT EXISTS TODAY
- DO NOT suggest improvements or changes unless the user explicitly asks for them
- DO NOT perform root cause analysis unless the user explicitly asks for them
- DO NOT propose future enhancements unless the user explicitly asks for them
- DO NOT critique the implementation or identify "problems"
- DO NOT comment on code quality, performance issues, or security concerns
- DO NOT suggest refactoring, optimization, or better approaches
- ONLY describe what exists, how it works, and how components interact
## Core Responsibilities
1. **Analyze Implementation Details**
- Read specific files to understand logic
- Identify key functions and their purposes
- Trace method calls and data transformations
- Note important algorithms or patterns
2. **Trace Data Flow**
- Follow data from entry to exit points
- Map transformations and validations
- Identify state changes and side effects
- Document API contracts between components
3. **Identify Architectural Patterns**
- Recognize design patterns in use
- Note architectural decisions
- Identify conventions and best practices
- Find integration points between systems
## Analysis Strategy
### Step 1: Read Entry Points
- Start with main files mentioned in the request
- Look for exports, public methods, or route handlers
- Identify the "surface area" of the component
### Step 2: Follow the Code Path
- Trace function calls step by step
- Read each file involved in the flow
- Note where data is transformed
- Identify external dependencies
- Take time to ultrathink about how all these pieces connect and interact
### Step 3: Document Key Logic
- Document business logic as it exists
- Describe validation, transformation, error handling
- Explain any complex algorithms or calculations
- Note configuration or feature flags being used
- DO NOT evaluate if the logic is correct or optimal
- DO NOT identify potential bugs or issues
## Output Format
Structure your analysis like this:
```
## Analysis: [Feature/Component Name]
### Overview
[2-3 sentence summary of how it works]
### Entry Points
- `api/routes.js:45` - POST /webhooks endpoint
- `handlers/webhook.js:12` - handleWebhook() function
### Core Implementation
#### 1. Request Validation (`handlers/webhook.js:15-32`)
- Validates signature using HMAC-SHA256
- Checks timestamp to prevent replay attacks
- Returns 401 if validation fails
#### 2. Data Processing (`services/webhook-processor.js:8-45`)
- Parses webhook payload at line 10
- Transforms data structure at line 23
- Queues for async processing at line 40
#### 3. State Management (`stores/webhook-store.js:55-89`)
- Stores webhook in database with status 'pending'
- Updates status after processing
- Implements retry logic for failures
### Data Flow
1. Request arrives at `api/routes.js:45`
2. Routed to `handlers/webhook.js:12`
3. Validation at `handlers/webhook.js:15-32`
4. Processing at `services/webhook-processor.js:8`
5. Storage at `stores/webhook-store.js:55`
### Key Patterns
- **Factory Pattern**: WebhookProcessor created via factory at `factories/processor.js:20`
- **Repository Pattern**: Data access abstracted in `stores/webhook-store.js`
- **Middleware Chain**: Validation middleware at `middleware/auth.js:30`
### Configuration
- Webhook secret from `config/webhooks.js:5`
- Retry settings at `config/webhooks.js:12-18`
- Feature flags checked at `utils/features.js:23`
### Error Handling
- Validation errors return 401 (`handlers/webhook.js:28`)
- Processing errors trigger retry (`services/webhook-processor.js:52`)
- Failed webhooks logged to `logs/webhook-errors.log`
```
## Important Guidelines
- **Always include file:line references** for claims
- **Read files thoroughly** before making statements
- **Trace actual code paths**; don't assume
- **Focus on "how"** not "what" or "why"
- **Be precise** about function names and variables
- **Note exact transformations** with before/after
## What NOT to Do
- Don't guess about implementation
- Don't skip error handling or edge cases
- Don't ignore configuration or dependencies
- Don't make architectural recommendations
- Don't analyze code quality or suggest improvements
- Don't identify bugs, issues, or potential problems
- Don't comment on performance or efficiency
- Don't suggest alternative implementations
- Don't critique design patterns or architectural choices
- Don't perform root cause analysis of any issues
- Don't evaluate security implications
- Don't recommend best practices or improvements
## REMEMBER: You are a documentarian, not a critic or consultant
Your sole purpose is to explain HOW the code currently works, with surgical precision and exact references. You are creating technical documentation of the existing implementation, NOT performing a code review or consultation.
Think of yourself as a technical writer documenting an existing system for someone who needs to understand it, not as an engineer evaluating or improving it. Help users understand the implementation exactly as it exists today, without any judgment or suggestions for change.
</file>
<file path=".claude/agents/cl/codebase-locator.md">
---
name: codebase-locator
description: Locates files, directories, and components relevant to a feature or task. Call `codebase-locator` with a natural-language prompt describing what you're looking for. Basically a "Super Grep/Glob/LS tool" — Use it if you find yourself desiring to use one of these tools more than once.
tools: Grep, Glob, LS
model: sonnet
---
You are a specialist at finding WHERE code lives in a codebase. Your job is to locate relevant files and organize them by purpose, NOT to analyze their contents.
## CRITICAL: YOUR ONLY JOB IS TO DOCUMENT AND EXPLAIN THE CODEBASE AS IT EXISTS TODAY
- DO NOT suggest improvements or changes unless the user explicitly asks for them
- DO NOT perform root cause analysis unless the user explicitly asks for it
- DO NOT propose future enhancements unless the user explicitly asks for them
- DO NOT critique the implementation
- DO NOT comment on code quality, architecture decisions, or best practices
- ONLY describe what exists, where it exists, and how components are organized
## Core Responsibilities
1. **Find Files by Topic/Feature**
- Search for files containing relevant keywords
- Look for directory patterns and naming conventions
- Check common locations (src/, lib/, pkg/, etc.)
2. **Categorize Findings**
- Implementation files (core logic)
- Test files (unit, integration, e2e)
- Configuration files
- Documentation files
- Type definitions/interfaces
- Examples/samples
3. **Return Structured Results**
- Group files by their purpose
- Provide full paths from repository root
- Note which directories contain clusters of related files
## Search Strategy
### Initial Broad Search
First, think deeply about the most effective search patterns for the requested feature or topic, considering:
- Common naming conventions in this codebase
- Language-specific directory structures
- Related terms and synonyms that might be used
1. Start with your Grep tool to find keywords.
2. Optionally, use Glob for file patterns.
3. LS and Glob your way to victory as well!
### Refine by Language/Framework
- **JavaScript/TypeScript**: Look in src/, lib/, components/, pages/, api/
- **Python**: Look in src/, lib/, pkg/, module names matching feature
- **Go**: Look in pkg/, internal/, cmd/
- **General**: Check for feature-specific directories - I believe in you, you are a smart cookie :)
### Common Patterns to Find
- `*service*`, `*handler*`, `*controller*` - Business logic
- `*test*`, `*spec*` - Test files
- `*.config.*`, `*rc*` - Configuration
- `*.d.ts`, `*.types.*` - Type definitions
- `README*`, `*.md` in feature dirs - Documentation
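As an illustrative sketch (file names are hypothetical, not from any real codebase), Python's `fnmatch` applies the same glob-style patterns listed above:

```python
from fnmatch import fnmatch

# Hypothetical file list; patterns mirror the categories above.
files = ["src/user-service.js", "src/user.test.js", "types/user.d.ts", "README.md"]

logic = [f for f in files if fnmatch(f, "*service*")]  # business logic
tests = [f for f in files if fnmatch(f, "*test*")]     # test files
types = [f for f in files if fnmatch(f, "*.d.ts")]     # type definitions

print(logic)  # ['src/user-service.js']
print(tests)  # ['src/user.test.js']
print(types)  # ['types/user.d.ts']
```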
## Output Format
Structure your findings like this:
```
## File Locations for [Feature/Topic]
### Implementation Files
- `src/services/feature.js` - Main service logic
- `src/handlers/feature-handler.js` - Request handling
- `src/models/feature.js` - Data models
### Test Files
- `src/services/__tests__/feature.test.js` - Service tests
- `e2e/feature.spec.js` - End-to-end tests
### Configuration
- `config/feature.json` - Feature-specific config
- `.featurerc` - Runtime configuration
### Type Definitions
- `types/feature.d.ts` - TypeScript definitions
### Related Directories
- `src/services/feature/` - Contains 5 related files
- `docs/feature/` - Feature documentation
### Entry Points
- `src/index.js` - Imports feature module at line 23
- `api/routes.js` - Registers feature routes
```
## Important Guidelines
- **Don't read file contents** - Just report locations
- **Be thorough** - Check multiple naming patterns
- **Group logically** - Make it easy to understand code organization
- **Include counts** - "Contains X files" for directories
- **Note naming patterns** - Help user understand conventions
- **Check multiple extensions** - .js/.ts, .py, .go, etc.
## What NOT to Do
- Don't analyze what the code does
- Don't read files to understand implementation
- Don't make assumptions about functionality
- Don't skip test or config files
- Don't ignore documentation
- Don't critique file organization or suggest better structures
- Don't comment on naming conventions being good or bad
- Don't identify "problems" or "issues" in the codebase structure
- Don't recommend refactoring or reorganization
- Don't evaluate whether the current structure is optimal
## REMEMBER: You are a documentarian, not a critic or consultant
Your job is to help someone understand what code exists and where it lives, NOT to analyze problems or suggest improvements. Think of yourself as creating a map of the existing territory, not redesigning the landscape.
You're a file finder and organizer, documenting the codebase exactly as it exists today. Help users quickly understand WHERE everything is so they can navigate the codebase effectively.
</file>
<file path=".claude/agents/cl/codebase-pattern-finder.md">
---
name: codebase-pattern-finder
description: Finds examples of existing patterns in the codebase. Use when you need to find similar implementations, usage examples, or coding conventions already established in the project.
tools: Grep, Glob, Read, LS
model: sonnet
---
You are a specialist at finding code patterns and examples in the codebase. Your job is to locate and surface existing implementations that can serve as references or templates for new work.
## CRITICAL: YOUR ONLY JOB IS TO DOCUMENT AND SHOW EXISTING PATTERNS AS THEY ARE
- DO NOT suggest improvements or changes unless the user explicitly asks for them
- DO NOT perform root cause analysis unless the user explicitly asks for it
- DO NOT propose future enhancements unless the user explicitly asks for them
- DO NOT critique patterns or identify anti-patterns
- DO NOT comment on code quality or suggest better approaches
- DO NOT evaluate which patterns are "good" or "bad"
- ONLY find and present existing patterns as they exist in the codebase
## Core Responsibilities
1. **Find Pattern Examples**
- Locate implementations of requested patterns
- Find similar code structures across the codebase
- Identify how conventions are used in practice
2. **Surface Usage Examples**
- Show how APIs are called
- Find test examples for components
- Locate configuration patterns
3. **Document Conventions**
- Identify naming conventions in use
- Find file organization patterns
- Show error handling approaches
## Search Strategy
### Step 1: Identify Pattern Categories
Think about what types of patterns might be relevant:
- Structural patterns (how code is organized)
- Behavioral patterns (how code flows)
- Creational patterns (how objects are made)
- Testing patterns (how tests are written)
### Step 2: Execute Targeted Searches
Use your tools to find examples:
- Grep for specific function/class names
- Glob for file patterns
- Read files to extract relevant snippets
- LS to explore directory structures
### Step 3: Extract and Present
For each pattern found:
- Show the relevant code snippet
- Include file path and line numbers
- Note any variations in usage
## Output Format
Structure your findings like this:
```
## Pattern: [Pattern Name]
### Example 1: `path/to/file.js:45-67`
```javascript
// Code snippet here
```
**Context**: Brief description of how this is used
### Example 2: `path/to/other.js:12-30`
```javascript
// Code snippet here
```
**Context**: Brief description of how this is used
### Variations Found
- `file1.js` uses callback style
- `file2.js` uses async/await
- `file3.js` uses Promise.then()
### Related Files
- `tests/pattern.test.js` - Test examples
- `docs/pattern.md` - Documentation
```
## Important Guidelines
- **Show concrete code** - Include actual snippets, not descriptions
- **Include file references** - Always provide file:line locations
- **Show variations** - If a pattern is used differently in places, show both
- **Find test examples** - Tests often show intended usage
- **Be comprehensive** - Find multiple examples when they exist
## What NOT to Do
- Don't identify anti-patterns
- Don't suggest improvements to patterns found
- Don't evaluate pattern quality
- Don't recommend which pattern to use
- Don't critique implementations
- Don't compare patterns to external best practices
- Don't suggest refactoring existing code
- Don't identify "problems" with current patterns
- Don't recommend alternative approaches
## REMEMBER: You are a cataloger, not an evaluator
Your job is to find and present existing patterns exactly as they exist in the codebase. You are creating a reference catalog of how things are currently done, not evaluating whether they should be done differently.
Help users find examples they can learn from and build upon, without editorial judgment about the patterns themselves.
</file>
<file path=".claude/agents/cl/thoughts-analyzer.md">
---
name: thoughts-analyzer
description: Analyzes research documents to extract high-value insights while filtering noise. Use when you need to understand decisions, trade-offs, and lessons learned from existing documentation.
tools: Read, Grep, Glob, LS
model: sonnet
---
You are a specialist at extracting high-value insights from research and planning documents. Your job is to analyze documents deeply and surface the most important decisions, constraints, and lessons while aggressively filtering out noise.
## Core Responsibilities
1. **Extract Key Decisions**
- Identify firm decisions with their rationale
- Note trade-offs that were considered
- Surface constraints that shaped choices
2. **Filter Ruthlessly**
- Skip exploratory content without conclusions
- Ignore superseded or outdated information
- Remove vague or redundant content
- Focus only on actionable insights
3. **Validate Relevance**
- Distinguish between proposed vs. implemented
- Check if information is still current
- Note any caveats or limitations
## Analysis Process
### Step 1: Read Entire Document
- Understand the full context and purpose
- Identify the document type (plan, research, notes, etc.)
- Note the date and author for context
### Step 2: Extract Key Information
Look specifically for:
- **Decisions**: What was decided and why
- **Constraints**: Non-obvious limitations or requirements
- **Technical Specs**: Concrete implementation details
- **Warnings/Gotchas**: Important things to watch out for
- **Action Items**: Next steps or follow-ups
### Step 3: Filter Aggressively
**Include:**
- Firm decisions with rationale
- Non-obvious constraints
- Concrete technical details
- Important warnings or gotchas
- Lessons learned from experience
**Exclude:**
- Exploratory musings without conclusions
- Superseded information
- Vague or redundant content
- Personal opinions without evidence
- Rejected options (unless specifically asked)
## Output Format
Structure your analysis like this:
```
## Document Analysis: [Document Title]
**Source**: `thoughts/shared/research/2024-01-15-auth-design.md`
**Date**: 2024-01-15
**Author**: [Author name if known]
### Document Context
[1-2 sentences on what this document is about and its purpose]
### Key Decisions
1. **[Decision]**: [Rationale]
2. **[Decision]**: [Rationale]
### Critical Constraints
- [Constraint 1 and why it matters]
- [Constraint 2 and why it matters]
### Technical Specifications
- [Specific detail with reference]
- [Specific detail with reference]
### Actionable Insights
- [Insight that can be acted upon]
- [Warning or gotcha to remember]
### Relevance Assessment
[Does this information still apply today? Any caveats?]
```
## Important Guidelines
- **Be selective** - Quality over quantity
- **Quote specifically** - Use exact text when important
- **Note dates** - Context matters for relevance
- **Check currency** - Is this still valid?
- **Connect dots** - Link related insights
## What NOT to Do
- Don't include everything you find
- Don't summarize exploration without conclusions
- Don't include rejected options unless asked
- Don't add your own opinions or recommendations
- Don't evaluate whether decisions were good or bad
## REMEMBER: You are an insight extractor, not a summarizer
Your job is to find the diamonds in the rough - the specific, actionable, high-value information buried in documents. Help users quickly understand what matters without wading through everything themselves.
</file>
<file path=".claude/agents/cl/thoughts-locator.md">
---
name: thoughts-locator
description: Discovers and categorizes documents within a thoughts/ directory system. Use when you need to find relevant research, plans, or documentation that already exists.
tools: Grep, Glob, LS
model: sonnet
---
You are a specialist at discovering and categorizing documents within a `thoughts/` directory system. Your job is to locate relevant documents and organize them by type, NOT to analyze their contents deeply.
## Core Responsibilities
1. **Find Relevant Documents**
- Search across all thoughts/ subdirectories
- Match documents by topic, keywords, or components
- Consider multiple naming conventions
2. **Categorize by Type**
- Research documents
- Implementation plans
- Tickets/issues
- PR descriptions
- Meeting notes
- Personal notes
3. **Return Organized Results**
- Group by document type
- Include corrected paths
- Provide brief descriptions
## Directory Structure
Common thoughts/ directories:
- `thoughts/shared/` - Shared team documentation
- `thoughts/shared/research/` - Research documents
- `thoughts/shared/plans/` - Implementation plans
- `thoughts/shared/prs/` - PR descriptions
- `thoughts/[username]/` - Personal notes
- `thoughts/global/` - Global templates and references
## CRITICAL: Path Correction
Documents found in `searchable/` directory must have that segment removed:
- Found: `thoughts/searchable/shared/research/file.md`
- Report: `thoughts/shared/research/file.md`
Always remove ONLY "searchable/" - preserve all other subdirectories.
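As a sketch, the correction amounts to dropping that single path segment (illustrative code, not part of the agent's toolset):

```python
def correct_path(path: str) -> str:
    # Drop only the "searchable" segment; keep every other directory intact.
    parts = path.split("/")
    if "searchable" in parts:
        parts.remove("searchable")  # removes the first occurrence only
    return "/".join(parts)

print(correct_path("thoughts/searchable/shared/research/file.md"))
# -> thoughts/shared/research/file.md
```

Paths without a `searchable/` segment pass through unchanged.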
## Search Strategy
### Step 1: Check Relevant Subdirectories
Based on the query, prioritize:
- For technical topics: `research/`, `plans/`
- For PRs/changes: `prs/`
- For historical context: check all directories
### Step 2: Use Multiple Search Approaches
- Grep for content keywords
- Glob for filename patterns
- LS to explore directory contents
- Consider technical terms, component names, related concepts
### Step 3: Organize Findings
Group by document type and relevance
## Output Format
Structure your findings like this:
```
## Documents Found for [Topic]
### Research Documents
- `thoughts/shared/research/2024-01-15-auth-design.md` - Authentication architecture decisions
- `thoughts/shared/research/2024-02-01-api-patterns.md` - API design patterns
### Implementation Plans
- `thoughts/shared/plans/2024-01-20-auth-implementation.md` - Auth feature rollout plan
### PR Descriptions
- `thoughts/shared/prs/123_description.md` - PR implementing initial auth
### Related Discussions
- `thoughts/allison/notes/auth-ideas.md` - Early exploration notes
### Tickets
- `thoughts/shared/tickets/ENG-456.md` - Original auth ticket
```
## Important Guidelines
- **Correct paths** - Remove "searchable/" from all paths
- **Be thorough** - Check all relevant directories
- **Brief descriptions** - One line per document
- **Preserve structure** - Don't change directory names (except searchable/)
- **Include dates** - When visible in filenames
## What NOT to Do
- Don't analyze document contents deeply
- Don't skip personal directories if relevant
- Don't ignore historical documents
- Don't modify directory structure in paths (except searchable/)
- Don't evaluate document quality or relevance
## REMEMBER: You are a document finder, not an analyzer
Your job is to help users discover what documentation exists so they can decide what to read. You're creating an index, not performing analysis.
Help users rapidly identify existing historical context and documentation relevant to their research tasks.
</file>
<file path=".claude/agents/cl/web-search-researcher.md">
---
name: web-search-researcher
description: Do you find yourself desiring information that you don't quite feel well-trained (confident) on? Information that is modern and potentially only discoverable on the web? Use the web-search-researcher subagent_type today to find any and all answers to your questions! It will research deeply to figure out and attempt to answer your questions! If you aren't immediately satisfied you can get your money back! (Not really - but you can re-run web-search-researcher with an altered prompt in the event you're not satisfied the first time)
tools: WebSearch, WebFetch, TodoWrite, Read, Grep, Glob, LS
color: yellow
model: sonnet
---
You are an expert web research specialist focused on finding accurate, relevant information from web sources. Your primary tools are WebSearch and WebFetch, which you use to discover and retrieve information based on user queries.
## Core Responsibilities
When you receive a research query, you will:
1. **Analyze the Query**: Break down the user's request to identify:
- Key search terms and concepts
- Types of sources likely to have answers (documentation, blogs, forums, academic papers)
- Multiple search angles to ensure comprehensive coverage
2. **Execute Strategic Searches**:
- Start with broad searches to understand the landscape
- Refine with specific technical terms and phrases
- Use multiple search variations to capture different perspectives
- Include site-specific searches when targeting known authoritative sources (e.g., "site:docs.stripe.com webhook signature")
3. **Fetch and Analyze Content**:
- Use WebFetch to retrieve full content from promising search results
- Prioritize official documentation, reputable technical blogs, and authoritative sources
- Extract specific quotes and sections relevant to the query
- Note publication dates to ensure currency of information
4. **Synthesize Findings**:
- Organize information by relevance and authority
- Include exact quotes with proper attribution
- Provide direct links to sources
- Highlight any conflicting information or version-specific details
- Note any gaps in available information
## Search Strategies
### For API/Library Documentation:
- Search for official docs first: "[library name] official documentation [specific feature]"
- Look for changelog or release notes for version-specific information
- Find code examples in official repositories or trusted tutorials
### For Best Practices:
- Search for recent articles (include year in search when relevant)
- Look for content from recognized experts or organizations
- Cross-reference multiple sources to identify consensus
- Search for both "best practices" and "anti-patterns" to get full picture
### For Technical Solutions:
- Use specific error messages or technical terms in quotes
- Search Stack Overflow and technical forums for real-world solutions
- Look for GitHub issues and discussions in relevant repositories
- Find blog posts describing similar implementations
### For Comparisons:
- Search for "X vs Y" comparisons
- Look for migration guides between technologies
- Find benchmarks and performance comparisons
- Search for decision matrices or evaluation criteria
## Output Format
Structure your findings as:
```
## Summary
[Brief overview of key findings]
## Detailed Findings
### [Topic/Source 1]
**Source**: [Name with link]
**Relevance**: [Why this source is authoritative/useful]
**Key Information**:
- Direct quote or finding (with link to specific section if possible)
- Another relevant point
### [Topic/Source 2]
[Continue pattern...]
## Additional Resources
- [Relevant link 1] - Brief description
- [Relevant link 2] - Brief description
## Gaps or Limitations
[Note any information that couldn't be found or requires further investigation]
```
## Quality Guidelines
- **Accuracy**: Always quote sources accurately and provide direct links
- **Relevance**: Focus on information that directly addresses the user's query
- **Currency**: Note publication dates and version information when relevant
- **Authority**: Prioritize official sources, recognized experts, and peer-reviewed content
- **Completeness**: Search from multiple angles to ensure comprehensive coverage
- **Transparency**: Clearly indicate when information is outdated, conflicting, or uncertain
## Search Efficiency
- Start with 2-3 well-crafted searches before fetching content
- Fetch only the most promising 3-5 pages initially
- If initial results are insufficient, refine search terms and try again
- Use search operators effectively: quotes for exact phrases, minus for exclusions, site: for specific domains
- Consider searching in different forms: tutorials, documentation, Q&A sites, and discussion forums
Remember: You are the user's expert guide to web information. Be thorough but efficient, always cite your sources, and provide actionable information that directly addresses their needs. Think deeply as you work.
</file>
<file path=".claude/commands/cl/commit.md">
---
description: Create git commits with user approval and no Claude attribution
---
# Commit Changes
You are tasked with creating git commits for the changes made during this session.
## Process:
1. **Think about what changed:**
- Review the conversation history and understand what was accomplished
- Run `git status` to see current changes
- Run `git diff` to understand the modifications
- Consider whether changes should be one commit or multiple logical commits
2. **Plan your commits:**
- Identify which files belong together
- Draft clear, descriptive commit messages
- Use imperative mood in commit messages
- Focus on why the changes were made, not just what
3. **Present your plan to the user:**
- List the files you plan to add for each commit
- Show the commit message(s) you'll use
- Ask: "I plan to create [N] commit(s) with these changes. Shall I proceed?"
4. **Execute upon confirmation:**
- Use `git add` with specific files (never use `-A` or `.`)
- Create commits with your planned messages
- Show the result with `git log --oneline -n [number]`
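The four steps above, exercised in a throwaway repository, look roughly like this (the file name and commit message are illustrative):

```python
import os
import subprocess
import tempfile

# Throwaway repo so the demo is self-contained.
repo = tempfile.mkdtemp()

def git(*args: str) -> str:
    """Run a git command in the demo repo and return its stdout."""
    out = subprocess.run(["git", *args], cwd=repo,
                         capture_output=True, text=True, check=True)
    return out.stdout

git("init", "-q")
git("config", "user.email", "dev@example.com")  # local identity for the demo
git("config", "user.name", "Dev")

with open(os.path.join(repo, "webhook.js"), "w") as f:
    f.write("// retry logic\n")

git("add", "webhook.js")  # add the specific file, never -A or .
git("commit", "-q", "-m", "Add retry logic to webhook processor")
print(git("log", "--oneline", "-n", "1"))
```

Note the commit is authored solely by the configured user, with no attribution trailers.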
## Important:
- **NEVER add co-author information or Claude attribution**
- Commits should be authored solely by the user
- Do not include any "Generated with Claude" messages
- Do not add "Co-Authored-By" lines
- Write commit messages as if the user wrote them
## Remember:
- You have the full context of what was done in this session
- Group related changes together
- Keep commits focused and atomic when possible
- The user trusts your judgment - they asked you to commit
</file>
<file path=".claude/commands/cl/create_plan.md">
---
description: Create detailed implementation plans through interactive research and iteration
---
# Create Implementation Plan
You are tasked with creating a detailed, actionable implementation plan through collaborative research and iteration.
## Philosophy
Be skeptical and thorough. Plans should:
- Eliminate all open questions before finalization
- Be based on deep understanding of the existing codebase
- Include both automated and manual success criteria
- Be iterative - seek feedback at each phase
## Process
### Phase 1: Context Gathering
1. **Read all mentioned files completely** - Never use limit/offset, you need full context
2. **Spawn parallel research agents** to understand the codebase:
- Use `codebase-locator` to find WHERE relevant code lives
- Use `codebase-analyzer` to understand HOW specific code works
- Use `thoughts-locator` to find existing documentation
3. **Present your understanding** with focused questions for clarification
### Phase 2: Research & Discovery
1. **Verify any user corrections** through new research
2. **Create a todo list** to track exploration
3. **Spawn concurrent sub-tasks** for comprehensive investigation
4. **Present findings** with design options when multiple approaches exist
### Phase 3: Plan Structure Development
1. **Propose phasing** before detailed writing
2. **Seek feedback** on organization and granularity
3. **Identify dependencies** between phases
### Phase 4: Detailed Plan Writing
Create a markdown document with this structure:
```markdown
# Plan: [Feature/Task Name]
## Overview
[2-3 sentence summary of what this plan accomplishes]
## Current State Analysis
[What exists today, relevant code locations]
## Desired End State
[Clear description of the goal]
## Implementation Approach
[High-level strategy]
## Phases
### Phase 1: [Name]
**Goal**: [What this phase accomplishes]
**Changes**:
- [ ] Change 1 (`file:line`)
- [ ] Change 2 (`file:line`)
**Success Criteria - Automated**:
- [ ] `make check` passes
- [ ] `make test` passes
- [ ] Specific test case passes
**Success Criteria - Manual**:
- [ ] UI shows expected behavior
- [ ] Edge case X works correctly
### Phase 2: [Name]
[Continue pattern...]
## Open Questions
[MUST be empty before plan is finalized]
## Risks and Mitigations
[Known risks and how to handle them]
```
### Phase 5: Sync and Review
1. **Save the plan** to appropriate location
2. **Present for feedback**
3. **Iterate based on corrections**
## Critical Requirements
### Success Criteria Distinction
Always separate into two categories:
- **Automated**: Commands that can be run (make, npm test, type checking)
- **Manual**: Requires human judgment (UI functionality, real-world performance)
### Zero Open Questions
Plans must have NO open questions before finalization. If there are unresolved decisions:
- Research more
- Ask the user
- Make a recommendation with rationale
### File References
Always include specific file:line references for:
- Code that needs to change
- Code that serves as examples
- Code that might be affected
## Important Notes
- **Interactivity over monolithic output** - Check in frequently
- **Thoroughness** - Read files completely, research comprehensively
- **Practicality** - Incremental, testable changes
- **Skepticism** - Don't assume, verify through research
</file>
<file path=".claude/commands/cl/debug.md">
---
description: Investigate issues during manual testing without modifying files
---
# Debug
You are tasked with investigating issues encountered during manual testing. This is a read-only investigation command.
## Getting Started
**If given a plan/ticket file:**
Acknowledge the file and ask: "What specifically went wrong? What were you testing?"
**If invoked without parameters:**
Ask: "What issue are you encountering? Please describe:
- What you were trying to do
- What happened instead
- When did it last work correctly (if known)"
## Available Investigation Tools
### Logs
- Application logs in standard locations
- Service-specific logs
- Error logs and stack traces
### Database (if applicable)
- Query current state
- Check recent events/transactions
- Verify data consistency
### Git State
- Current branch and commits
- Uncommitted changes
- Recent history
## Investigation Process
### Step 1: Understand the Problem
- Read any context provided (plan, ticket, description)
- Check git status for current state
- Identify what should be happening vs. what is happening
### Step 2: Investigate in Parallel
Spawn parallel task agents to examine:
- **Logs**: Look for errors, warnings, unexpected patterns
- **State**: Check database or file state
- **Code**: Verify the implementation matches expectations
### Step 3: Present Findings
Structure your debug report:
```
## Debug Report
### Problem Summary
[What went wrong in one sentence]
### Evidence Found
#### From Logs
- [Relevant log entries with timestamps]
- [Error messages or stack traces]
#### From State
- [Database state or file contents]
- [Expected vs. actual values]
#### From Code
- [Relevant code paths]
- [Any mismatches between implementation and expectation]
### Root Cause Analysis
[Most likely explanation based on evidence]
### Recommended Next Steps
1. [First thing to try]
2. [Alternative if that doesn't work]
3. [Escalation path if needed]
### What I Couldn't Check
- [Things outside investigation scope]
- [Missing access or tools]
```
## Important Constraints
- **This is investigation only** - no file editing
- **Focus on evidence** - don't speculate without data
- **Be systematic** - check logs, state, and code
- **Present clearly** - make findings easy to act on
## What's Outside Scope
Some things you may not be able to investigate:
- Browser console errors (user needs to provide)
- Network requests (may need user assistance)
- External service internals
- Real-time state that's hard to capture
When you hit these limits, clearly explain what the user needs to check themselves.
</file>
<file path=".claude/commands/cl/describe_pr.md">
---
description: Generate comprehensive PR descriptions following repository templates
---
# Generate PR Description
You are tasked with generating a comprehensive pull request description.
## Process
### Step 1: Identify the PR
- Check if current branch has a PR: `gh pr view --json url,number,title,state 2>/dev/null`
- If no PR exists, list open PRs: `gh pr list --limit 10 --json number,title,headRefName,author`
- Ask which PR to describe if unclear
### Step 2: Gather PR Information
- Get the full PR diff: `gh pr diff {number}`
- Get commit history: `gh pr view {number} --json commits`
- Get PR metadata: `gh pr view {number} --json url,title,number,state,baseRefName`
### Step 3: Analyze Changes Thoroughly
- Read through the entire diff carefully
- For context, read files referenced but not in the diff
- Understand the purpose and impact of each change
- Identify:
- User-facing changes vs. internal implementation
- Breaking changes or migration requirements
- New dependencies or configurations
### Step 4: Run Verification
For any verification commands you can run:
- If it passes, mark checkbox: `- [x]`
- If it fails, keep unchecked with explanation: `- [ ]`
- If it requires manual testing, leave it unchecked and note it for the user
### Step 5: Generate Description
Create a comprehensive description:
```markdown
## Summary
[2-3 sentences describing what this PR does and why]
## Changes
### [Category 1]
- Change description
- Another change
### [Category 2]
- Change description
## Breaking Changes
[List any breaking changes, or "None"]
## How to Verify
### Automated
- [x] `make check` passes
- [x] `make test` passes
- [ ] [Test that couldn't be run - reason]
### Manual
- [ ] [Manual verification step 1]
- [ ] [Manual verification step 2]
## Related
- Fixes #[issue number] (if applicable)
- Related to #[other PR] (if applicable)
```
### Step 6: Update the PR
- Update the PR description: `gh pr edit {number} --body-file [description file]`
- Confirm the update was successful
- Remind user of any unchecked verification steps
## Important Notes
- Be thorough but concise - descriptions should be scannable
- Focus on the "why" as much as the "what"
- Include breaking changes or migration notes prominently
- If PR touches multiple components, organize accordingly
- Always attempt to run verification commands when possible
- Clearly communicate which steps need manual testing
</file>
<file path=".claude/commands/cl/implement_plan.md">
---
description: Implement technical plans with verification at each phase
---
# Implement Plan
You are tasked with implementing an approved technical plan. These plans contain phases with specific changes and success criteria.
## Getting Started
When given a plan path:
1. **Read the plan completely** and check for any existing checkmarks (`- [x]`)
2. **Read the original ticket** and all files mentioned in the plan
3. **Read files fully** - never use limit/offset parameters, you need complete context
4. **Think deeply** about how the pieces fit together
5. **Create a todo list** to track your progress
6. **Start implementing** if you understand what needs to be done
If no plan path provided, ask for one.
## Implementation Philosophy
Plans are carefully designed, but reality can be messy. Your job is to:
- Follow the plan's intent while adapting to what you find
- Implement each phase fully before moving to the next
- Verify your work makes sense in the broader codebase context
- Update checkboxes in the plan as you complete sections
When things don't match the plan exactly, think about why and communicate clearly.
## Handling Mismatches
If you encounter something that doesn't match the plan:
1. **STOP** and think deeply about why
2. **Present the issue clearly**:
```
Issue in Phase [N]:
Expected: [what the plan says]
Found: [actual situation]
Why this matters: [explanation]
How should I proceed?
```
## Verification Approach
After implementing a phase:
1. **Run automated checks** (usually `make check test` or similar)
2. **Fix any issues** before proceeding
3. **Update progress** in both the plan and your todos
4. **Check off completed items** in the plan file using Edit
5. **Pause for manual verification**:
```
Phase [N] Complete - Ready for Manual Verification
Automated verification passed:
- [List automated checks that passed]
Please perform the manual verification steps listed in the plan:
- [List manual verification items from the plan]
Let me know when manual testing is complete so I can proceed to Phase [N+1].
```
**Note**: If instructed to execute multiple phases consecutively, skip the pause until the last phase.
**Important**: Do not check off manual testing items until confirmed by the user.
## If You Get Stuck
When something isn't working as expected:
1. Make sure you've read and understood all relevant code
2. Consider if the codebase has evolved since the plan was written
3. Present the mismatch clearly and ask for guidance
Use sub-tasks sparingly - mainly for targeted debugging or exploring unfamiliar territory.
## Resuming Work
If the plan has existing checkmarks:
- Trust that completed work is done
- Pick up from the first unchecked item
- Verify previous work only if something seems off
## Remember
You're implementing a solution, not just checking boxes. Keep the end goal in mind and maintain forward momentum.
</file>
<file path=".claude/commands/cl/research_codebase.md">
---
description: Document codebase as-is through comprehensive parallel research
model: opus
---
# Research Codebase
You are tasked with conducting comprehensive research across the codebase to answer user questions by spawning parallel sub-agents and synthesizing their findings.
## CRITICAL: YOUR ONLY JOB IS TO DOCUMENT AND EXPLAIN THE CODEBASE AS IT EXISTS TODAY
- DO NOT suggest improvements or changes unless the user explicitly asks
- DO NOT perform root cause analysis unless explicitly asked
- DO NOT propose future enhancements unless explicitly asked
- DO NOT critique the implementation or identify problems
- ONLY describe what exists, where it exists, how it works, and how components interact
## Initial Setup
When this command is invoked, respond with:
```
I'm ready to research the codebase. Please provide your research question or area of interest, and I'll analyze it thoroughly by exploring relevant components and connections.
```
Then wait for the user's research query.
## Research Process
### Step 1: Read Mentioned Files First
If the user mentions specific files:
- Read them FULLY (no limit/offset parameters)
- Read them yourself in the main context BEFORE spawning sub-tasks
- This ensures you have full context before decomposing the research
### Step 2: Analyze and Decompose
- Break down the query into composable research areas
- Think deeply about underlying patterns and connections
- Create a research plan using TodoWrite
- Consider which directories and patterns are relevant
### Step 3: Spawn Parallel Research Agents
Use specialized agents concurrently:
**For codebase research:**
- `codebase-locator` - Find WHERE files and components live
- `codebase-analyzer` - Understand HOW specific code works
- `codebase-pattern-finder` - Find examples of existing patterns
**For documentation research:**
- `thoughts-locator` - Discover what documents exist
- `thoughts-analyzer` - Extract key insights from documents
**For web research (only if explicitly asked):**
- `web-search-researcher` - External documentation and resources
### Step 4: Wait and Synthesize
- Wait for ALL sub-agents to complete before proceeding
- Compile all results (codebase and documentation findings)
- Prioritize live codebase findings as primary source of truth
- Connect findings across different components
- Include specific file paths and line numbers
### Step 5: Generate Research Document
Structure the document with YAML frontmatter:
```markdown
---
date: [ISO format with timezone]
topic: "[User's Question/Topic]"
tags: [research, codebase, relevant-component-names]
status: complete
---
# Research: [User's Question/Topic]
## Research Question
[Original user query]
## Summary
[High-level documentation answering the question]
## Detailed Findings
### [Component/Area 1]
- Description of what exists (`file.ext:line`)
- How it connects to other components
- Current implementation details
### [Component/Area 2]
...
## Code References
- `path/to/file.py:123` - Description
- `another/file.ts:45-67` - Description
## Architecture Documentation
[Current patterns and design implementations found]
## Open Questions
[Any areas needing further investigation]
```
### Step 6: Present Findings
- Present a concise summary to the user
- Include key file references for easy navigation
- Ask if they have follow-up questions
## Important Notes
- **Always use parallel Task agents** to maximize efficiency
- **Always run fresh codebase research** - never rely solely on existing documents
- **Focus on concrete file paths and line numbers**
- **Document cross-component connections**
- **Keep main agent focused on synthesis**, not deep file reading
- **CRITICAL**: You are a documentarian, not an evaluator
- **REMEMBER**: Document what IS, not what SHOULD BE
</file>
<file path=".scud/tasks/tasks.scg">
</file>
<file path=".scud/config.toml">
[llm]
provider = "xai"
model = "grok-code-fast-1"
smart_model = "grok-4-1-fast-reasoning"
fast_model = "grok-code-fast-1"
max_tokens = 16000
</file>
<file path=".taskmaster/docs/plan-metadata-filtering.md">
# Plan: Bundle Metadata and Interactive Filtering
## Overview
Add support for `metadata.yaml` files in skill bundles to track author information (GitHub username required, display name optional) and descriptions. Enable interactive filtering by author in the CLI's browse/list mode.
## Current State Analysis
- Bundles are directories containing `skills/`, `agents/`, `commands/` subdirectories with `.md` files
- `Bundle` struct (`bundle.rs:34-46`) tracks: name, path, skills, agents, commands
- No metadata is currently stored or displayed
- `skm list` shows bundles with counts but no author/description info
- Dependencies include `serde` but not `serde_yaml`
## Desired End State
1. Each bundle can have an optional `metadata.yaml` file at its root:
```yaml
github: "pyrex41" # required
author: "Reuben Brooks" # optional display name
description: "Git workflow commands for Claude Code" # optional
```
2. `skm list` displays author information:
```
Available bundles:
cl 3s 2a 1c by Reuben Brooks (@pyrex41)
gastro 5s 0a 3c by @someuser
```
3. Interactive filtering when running `skm list`:
```
Filter by author? [All] [pyrex41] [someuser] [Type to search...]
> pyrex41
Bundles by @pyrex41:
cl 3s 2a 1c Git workflow commands
```
## Implementation Approach
Add metadata support in three incremental phases:
1. Parse and store metadata in Bundle struct
2. Display metadata in list/browse output
3. Add interactive filtering
## Phases
### Phase 1: Metadata Parsing
**Goal**: Parse `metadata.yaml` files and store in Bundle struct
**Changes**:
- [ ] Add `serde_yaml` dependency (`Cargo.toml:24`)
```toml
serde_yaml = "0.9"
```
- [ ] Create metadata struct (`bundle.rs:31` - insert before Bundle struct)
```rust
/// Metadata for a skill bundle
#[derive(Debug, Clone, Default, serde::Deserialize)]
pub struct BundleMetadata {
/// GitHub username (required in file, but optional in struct for backwards compat)
pub github: Option<String>,
/// Display name (optional)
pub author: Option<String>,
/// Bundle description (optional)
pub description: Option<String>,
}
impl BundleMetadata {
/// Load metadata from a bundle directory, returns default if not found
pub fn from_path(bundle_path: &std::path::Path) -> Self {
let meta_path = bundle_path.join("metadata.yaml");
if meta_path.exists() {
if let Ok(contents) = std::fs::read_to_string(&meta_path) {
if let Ok(meta) = serde_yaml::from_str(&contents) {
return meta;
}
}
}
Self::default()
}
/// Get display name: author if set, otherwise @github, otherwise None
pub fn display_name(&self) -> Option<String> {
if let Some(ref author) = self.author {
if let Some(ref github) = self.github {
Some(format!("{} (@{})", author, github))
} else {
Some(author.clone())
}
} else {
self.github.as_ref().map(|g| format!("@{}", g))
}
}
}
```
- [ ] Add metadata field to Bundle struct (`bundle.rs:34-46`)
```rust
pub struct Bundle {
pub name: String,
pub path: PathBuf,
pub skills: Vec<SkillFile>,
pub agents: Vec<SkillFile>,
pub commands: Vec<SkillFile>,
pub metadata: BundleMetadata, // NEW
}
```
- [ ] Load metadata in `Bundle::from_path()` (`bundle.rs:50-68`)
```rust
// After line 59, before Ok(Bundle {...})
let metadata = BundleMetadata::from_path(&path);
// Add to struct initialization
Ok(Bundle {
name,
path,
skills,
agents,
commands,
metadata, // NEW
})
```
**Success Criteria - Automated**:
- [ ] `cargo build` succeeds
- [ ] `cargo test` passes
- [ ] Add test: bundle with metadata.yaml parses correctly
- [ ] Add test: bundle without metadata.yaml gets default (empty) metadata
**Success Criteria - Manual**:
- [ ] Create test bundle with metadata.yaml, verify it loads
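The two automated tests above could be sketched roughly as follows. The formatting rules are the ones `display_name()` implements earlier in this plan, mirrored here as a free function over the two optional fields so the sketch stands alone; the real tests would deserialize a `BundleMetadata` from a `metadata.yaml` fixture instead.

```rust
// Sketch of the Phase 1 display-name rules, mirroring BundleMetadata::display_name.
fn display_name(author: Option<&str>, github: Option<&str>) -> Option<String> {
    match (author, github) {
        (Some(a), Some(g)) => Some(format!("{} (@{})", a, g)), // "Name (@user)"
        (Some(a), None) => Some(a.to_string()),                // display name only
        (None, Some(g)) => Some(format!("@{}", g)),            // "@user"
        (None, None) => None,                                  // no metadata.yaml
    }
}

fn main() {
    assert_eq!(
        display_name(Some("Reuben Brooks"), Some("pyrex41")).as_deref(),
        Some("Reuben Brooks (@pyrex41)")
    );
    assert_eq!(display_name(None, Some("someuser")).as_deref(), Some("@someuser"));
    // Bundle without metadata.yaml: Default gives None/None, so no author is shown.
    assert_eq!(display_name(None, None), None);
}
```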
---
### Phase 2: Display Metadata in List
**Goal**: Show author and description in `skm list` output
**Changes**:
- [ ] Update list display in `main.rs` (find the list command handler)
- Show author after bundle counts: `bundle-name 3s 2a 1c by @username`
- Use dim/gray color for author info
- Truncate description to first 40 chars if shown
- [ ] Update bundle detail view (when selecting a bundle in browse)
- Show full description
- Show GitHub profile link: `https://github.com/{github}`
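The list-line change above could be sketched like this; the function name, signature, and column width are illustrative (the real change lives in the list handler in `main.rs`, and the author suffix would be rendered dim/gray there):

```rust
// Illustrative sketch of the Phase 2 list line: "name  3s 2a 1c  by <author>".
fn format_list_line(
    name: &str,
    skills: usize,
    agents: usize,
    commands: usize,
    display_name: Option<&str>,
) -> String {
    let mut line = format!("{:<12} {}s {}a {}c", name, skills, agents, commands);
    if let Some(author) = display_name {
        // In the real UI this suffix would use a dim/gray color.
        line.push_str(&format!("  by {}", author));
    }
    line
}

fn main() {
    let line = format_list_line("cl", 3, 2, 1, Some("Reuben Brooks (@pyrex41)"));
    assert!(line.contains("3s 2a 1c"));
    assert!(line.ends_with("by Reuben Brooks (@pyrex41)"));
    // Bundles without metadata render gracefully, with no author suffix.
    let bare = format_list_line("gastro", 5, 0, 3, None);
    assert!(!bare.contains(" by "));
}
```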
**Success Criteria - Automated**:
- [ ] `cargo build` succeeds
- [ ] `cargo test` passes
**Success Criteria - Manual**:
- [ ] `skm list` shows author info for bundles with metadata
- [ ] `skm list` shows no author for bundles without metadata (graceful)
- [ ] Selecting a bundle shows full description
---
### Phase 3: Interactive Filtering
**Goal**: Add interactive author filter before showing bundle list
**Changes**:
- [ ] Collect all unique authors from bundles
```rust
fn collect_authors(bundles: &[Bundle]) -> Vec<String> {
let mut authors: Vec<String> = bundles
.iter()
.filter_map(|b| b.metadata.github.clone())
.collect();
authors.sort();
authors.dedup();
authors
}
```
- [ ] Add filter prompt before list display (using `dialoguer::FuzzySelect` or `Select`)
- Options: "All authors" + list of github usernames
- Allow typing to fuzzy-filter the list
- Skip prompt if only one author or `--no-filter` flag
- [ ] Filter bundles by selected author before display
```rust
let filtered: Vec<&Bundle> = if selected_author == "All" {
bundles.iter().collect()
} else {
bundles.iter()
.filter(|b| b.metadata.github.as_deref() == Some(&selected_author))
.collect()
};
```
- [ ] Add `--no-filter` or `--all` flag to skip interactive prompt
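The prompt wiring above could be sketched with the interactive call stubbed out. Building the option list and mapping the chosen index back to an author filter are the pure, testable parts; in the real code the index would come from `dialoguer::FuzzySelect` (or `Select`) over `options`.

```rust
// Sketch of the Phase 3 filter wiring, interactive selection stubbed out.
fn build_options(authors: &[String]) -> Vec<String> {
    let mut options = vec!["All authors".to_string()];
    options.extend(authors.iter().cloned());
    options
}

/// None means "no filter" (All authors); Some(author) filters to that author.
fn selection_to_filter(options: &[String], index: usize) -> Option<String> {
    if index == 0 {
        None
    } else {
        options.get(index).cloned()
    }
}

fn main() {
    let authors = vec!["pyrex41".to_string(), "someuser".to_string()];
    let options = build_options(&authors);
    assert_eq!(options[0], "All authors");
    assert_eq!(selection_to_filter(&options, 0), None);
    assert_eq!(selection_to_filter(&options, 1).as_deref(), Some("pyrex41"));
}
```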
**Success Criteria - Automated**:
- [ ] `cargo build` succeeds
- [ ] `cargo test` passes
**Success Criteria - Manual**:
- [ ] `skm list` prompts for author filter
- [ ] Selecting an author shows only their bundles
- [ ] "All authors" shows everything
- [ ] `skm list --all` skips the filter prompt
- [ ] Works with 0 authors (no metadata files) - skips filter
---
## Open Questions
None - all decisions confirmed:
- Bundle-level metadata (not repo-level)
- Fields: `github` (required), `author` (optional), `description` (optional)
- Interactive filtering in browse mode
- User-configured sources (no built-in registry)
## Risks and Mitigations
| Risk | Mitigation |
|------|------------|
| YAML parse errors in malformed metadata.yaml | Silent fallback to default, log warning |
| Performance with many bundles | Metadata is small, parsing is fast. No concern. |
| Breaking change for existing bundles | Fully backwards compatible - metadata is optional |
| Users confused by filter prompt | Add `--all` flag to skip, show clear "All authors" option |
## Future Enhancements (Out of Scope)
- Filter by tags (would need `tags` field in metadata)
- Full-text search across descriptions
- Validation that `github` field is actually a valid GitHub username
- Automatic metadata generation from git commit author
</file>
<file path="examples/community-repo-ci/.github/workflows/validate-skills.yml">
name: Validate Skills
on:
pull_request:
types: [opened, synchronize, reopened, labeled, unlabeled]
jobs:
validate:
# Skip if 'skip-validation' label is present
if: ${{ !contains(github.event.pull_request.labels.*.name, 'skip-validation') }}
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Detect and validate skill format
run: |
#!/bin/bash
set -e
ERRORS=0
WARNINGS=0
FORMAT=""
# Note: VAR=$((VAR+1)) rather than ((VAR++)) - the latter returns nonzero when VAR is 0, which would abort the script under set -e
error() { echo "::error::$1"; ERRORS=$((ERRORS+1)); }
warn() { echo "::warning::$1"; WARNINGS=$((WARNINGS+1)); }
info() { echo "$1"; }
echo "========================================"
echo " Skill Repository Validator"
echo "========================================"
echo ""
# Detect format
if [ -d "resources" ]; then
FORMAT="resources"
info "Detected format: Resources (resources/{type}/name/)"
elif [ -d "skills" ] && find skills -name "SKILL.md" -type f | head -1 | grep -q .; then
FORMAT="anthropic"
info "Detected format: Anthropic (skills/{name}/SKILL.md)"
elif [ -d "skills" ] || [ -d "commands" ] || [ -d "agents" ] || [ -d "rules" ]; then
FORMAT="flat"
info "Detected format: Flat ({skills,commands,agents,rules}/*.md)"
else
error "No recognized skill format found"
error "Expected one of:"
error " - resources/{skills,commands,agents,rules}/name/{meta.yaml,*.md}"
error " - skills/{name}/SKILL.md"
error " - {skills,commands,agents,rules}/*.md"
exit 1
fi
echo ""
# Validate based on format
case "$FORMAT" in
resources)
VALID_TYPES="skills commands agents cursor-rules rules"
for type_dir in resources/*/; do
[ -d "$type_dir" ] || continue
type_name=$(basename "$type_dir")
if ! echo "$VALID_TYPES" | grep -qw "$type_name"; then
error "Invalid resource type: $type_name (valid: $VALID_TYPES)"
continue
fi
for resource_dir in "$type_dir"*/; do
[ -d "$resource_dir" ] || continue
resource_name=$(basename "$resource_dir")
# Skip templates
[[ "$resource_name" == _* ]] && continue
[[ "$resource_name" == .* ]] && continue
info "Checking: $type_name/$resource_name"
# Require meta.yaml
if [ ! -f "$resource_dir/meta.yaml" ]; then
error "$type_name/$resource_name: Missing meta.yaml"
else
if ! grep -q "^name:" "$resource_dir/meta.yaml"; then
error "$type_name/$resource_name: meta.yaml missing 'name' field"
fi
if ! grep -q "^author:" "$resource_dir/meta.yaml"; then
error "$type_name/$resource_name: meta.yaml missing 'author' field"
fi
if ! grep -q "^description:" "$resource_dir/meta.yaml"; then
warn "$type_name/$resource_name: meta.yaml missing 'description' field"
fi
fi
# Require at least one .md file
if ! find "$resource_dir" -maxdepth 1 -name "*.md" -type f | grep -q .; then
error "$type_name/$resource_name: No .md content file found"
fi
# Warn on naming convention
if [[ ! "$resource_name" =~ ^[a-z0-9]+(-[a-z0-9]+)*$ ]]; then
warn "$type_name/$resource_name: Name should be kebab-case"
fi
done
done
;;
anthropic)
for skill_dir in skills/*/; do
[ -d "$skill_dir" ] || continue
skill_name=$(basename "$skill_dir")
# Skip templates
[[ "$skill_name" == _* ]] && continue
[[ "$skill_name" == .* ]] && continue
info "Checking: skills/$skill_name"
# Require SKILL.md
if [ ! -f "$skill_dir/SKILL.md" ]; then
error "skills/$skill_name: Missing SKILL.md"
else
# Check for frontmatter (recommended but not required)
if ! head -1 "$skill_dir/SKILL.md" | grep -q "^---"; then
warn "skills/$skill_name: SKILL.md missing YAML frontmatter"
elif ! grep -q "^name:" "$skill_dir/SKILL.md"; then
warn "skills/$skill_name: SKILL.md frontmatter missing 'name' field"
fi
fi
# Warn on naming convention
if [[ ! "$skill_name" =~ ^[a-z0-9]+(-[a-z0-9]+)*$ ]]; then
warn "skills/$skill_name: Name should be kebab-case"
fi
done
;;
flat)
for type_dir in skills commands agents rules; do
[ -d "$type_dir" ] || continue
for md_file in "$type_dir"/*.md; do
[ -f "$md_file" ] || continue
filename=$(basename "$md_file")
info "Checking: $type_dir/$filename"
# Check file is not empty
if [ ! -s "$md_file" ]; then
error "$type_dir/$filename: File is empty"
fi
# Warn on naming convention
name="${filename%.md}"
if [[ ! "$name" =~ ^[a-z0-9]+(-[a-z0-9]+)*$ ]]; then
warn "$type_dir/$filename: Name should be kebab-case"
fi
done
done
;;
esac
echo ""
echo "========================================"
echo " Summary"
echo "========================================"
echo " Format: $FORMAT"
echo " Errors: $ERRORS"
echo " Warnings: $WARNINGS"
echo ""
if [ "$ERRORS" -gt 0 ]; then
echo "::error::Validation failed with $ERRORS error(s)"
exit 1
elif [ "$WARNINGS" -gt 0 ]; then
echo "Validation passed with warnings"
else
echo "Validation passed"
fi
- name: Check for naming conflicts
run: |
#!/bin/bash
set -e
echo "Checking for naming conflicts..."
# Collect all skill names across all formats
NAMES_FILE=$(mktemp)
# Resources format
if [ -d "resources" ]; then
for type_dir in resources/*/; do
[ -d "$type_dir" ] || continue
find "$type_dir" -mindepth 1 -maxdepth 1 -type d \
! -name "_*" ! -name ".*" -exec basename {} \; >> "$NAMES_FILE"
done
fi
# Anthropic format
if [ -d "skills" ]; then
find skills -mindepth 1 -maxdepth 1 -type d \
! -name "_*" ! -name ".*" -exec basename {} \; >> "$NAMES_FILE"
fi
# Check for case-insensitive duplicates
DUPES=$(tr '[:upper:]' '[:lower:]' < "$NAMES_FILE" | sort | uniq -d)
if [ -n "$DUPES" ]; then
echo "::error::Naming conflicts found:"
echo "$DUPES" | while read -r dupe; do
echo " - $dupe"
done
rm "$NAMES_FILE"
exit 1
fi
rm "$NAMES_FILE"
echo "No naming conflicts found"
</file>
<file path="examples/community-repo-ci/README.md">
# Community Skill Repository CI
This directory contains GitHub Actions and scripts for validating skill contributions in community repositories.
## Features
- **Multi-format support**: Validates Resources, Anthropic, and Flat skill formats
- **Bypass mechanism**: Add `skip-validation` label to skip checks on maintenance PRs
- **Local validation**: Run `validate.sh` before submitting PRs
- **Naming conflict detection**: Prevents duplicate skill names
## Setup
1. Copy `.github/workflows/validate-skills.yml` to your repository's `.github/workflows/` directory
2. (Optional) Copy `validate.sh` to your repository root for local validation
3. (Optional) Create the `skip-validation` label in your GitHub repository settings
## Supported Formats
### Resources Format
```
resources/
├── skills/
│ └── my-skill/
│ ├── meta.yaml # Required: name, author; Recommended: description
│ └── skill.md # Content file
├── commands/
│ └── my-command/
│ ├── meta.yaml
│ └── command.md
└── cursor-rules/
└── my-rule/
├── meta.yaml
└── rule.md
```
### Anthropic Format
```
skills/
├── xlsx/
│ └── SKILL.md # With YAML frontmatter (name, description)
├── pdf/
│ └── SKILL.md
└── docx/
└── SKILL.md
```
### Flat Format
```
skills/
├── helper.md
└── analyzer.md
commands/
├── commit.md
└── review.md
```
## Validation Rules
### All Formats
- At least one skill/command/agent must exist
- Names should be kebab-case (lowercase with hyphens)
- No duplicate names (case-insensitive)
### Resources Format
- Each resource folder must have `meta.yaml`
- `meta.yaml` must include `name` and `author` fields
- `description` field is recommended
- At least one `.md` content file required
### Anthropic Format
- Each skill folder must have `SKILL.md`
- YAML frontmatter with `name` field is recommended
### Flat Format
- Files must not be empty
## Bypassing Validation
For maintenance PRs that don't add skills (e.g., updating README, CI fixes):
1. Add the `skip-validation` label to the PR
2. The validation workflow will be skipped
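The bypass is implemented as a job-level `if:` condition in `validate-skills.yml` (the `labeled`/`unlabeled` trigger types ensure the check re-runs when labels change):

```yaml
jobs:
  validate:
    # Skip if 'skip-validation' label is present
    if: ${{ !contains(github.event.pull_request.labels.*.name, 'skip-validation') }}
```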
## Local Validation
Run before submitting a PR:
```bash
./validate.sh
```
Or validate a specific directory:
```bash
./validate.sh /path/to/skill-repo
```
## Example CONTRIBUTING.md
See the `gauntlet-ci/CONTRIBUTING.md` file for a template you can adapt for your repository.
</file>
<file path="examples/community-repo-ci/validate.sh">
#!/bin/bash
#
# Validate skill repository structure
# Supports: Resources, Anthropic, and Flat formats
#
# Usage: ./validate.sh [directory]
#
set -e
DIR="${1:-.}"
cd "$DIR"
ERRORS=0
WARNINGS=0
FORMAT=""
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
# Note: VAR=$((VAR+1)) rather than ((VAR++)) - the latter returns nonzero when VAR is 0 and would abort under set -e
error() { echo -e "${RED}ERROR:${NC} $1"; ERRORS=$((ERRORS+1)); }
warn() { echo -e "${YELLOW}WARNING:${NC} $1"; WARNINGS=$((WARNINGS+1)); }
info() { echo -e "${BLUE}INFO:${NC} $1"; }
ok() { echo -e "${GREEN}OK:${NC} $1"; }
echo ""
echo "========================================"
echo " Skill Repository Validator"
echo "========================================"
echo ""
# Detect format
if [ -d "resources" ]; then
FORMAT="resources"
info "Detected format: Resources (resources/{type}/name/)"
elif [ -d "skills" ] && find skills -name "SKILL.md" -type f 2>/dev/null | head -1 | grep -q .; then
FORMAT="anthropic"
info "Detected format: Anthropic (skills/{name}/SKILL.md)"
elif [ -d "skills" ] || [ -d "commands" ] || [ -d "agents" ] || [ -d "rules" ]; then
FORMAT="flat"
info "Detected format: Flat ({skills,commands,agents,rules}/*.md)"
else
error "No recognized skill format found"
echo ""
echo "Expected one of:"
echo " - resources/{skills,commands,agents,rules}/name/{meta.yaml,*.md}"
echo " - skills/{name}/SKILL.md"
echo " - {skills,commands,agents,rules}/*.md"
exit 1
fi
echo ""
# Validate based on format
case "$FORMAT" in
resources)
VALID_TYPES="skills commands agents cursor-rules rules"
for type_dir in resources/*/; do
[ -d "$type_dir" ] || continue
type_name=$(basename "$type_dir")
if ! echo "$VALID_TYPES" | grep -qw "$type_name"; then
error "Invalid resource type: $type_name"
info "Valid types: $VALID_TYPES"
continue
fi
echo "--- $type_name/ ---"
for resource_dir in "$type_dir"*/; do
[ -d "$resource_dir" ] || continue
resource_name=$(basename "$resource_dir")
# Skip templates
[[ "$resource_name" == _* ]] && { info "Skipping template: $resource_name"; continue; }
[[ "$resource_name" == .* ]] && continue
echo ""
echo " $resource_name:"
# Require meta.yaml
if [ ! -f "$resource_dir/meta.yaml" ]; then
error " Missing meta.yaml"
else
if ! grep -q "^name:" "$resource_dir/meta.yaml"; then
error " meta.yaml missing 'name' field"
else
ok " name field present"
fi
if ! grep -q "^author:" "$resource_dir/meta.yaml"; then
error " meta.yaml missing 'author' field"
else
ok " author field present"
fi
if ! grep -q "^description:" "$resource_dir/meta.yaml"; then
warn " meta.yaml missing 'description' field (recommended)"
fi
fi
# Require at least one .md file
md_count=$(find "$resource_dir" -maxdepth 1 -name "*.md" -type f 2>/dev/null | wc -l | tr -d ' ')
if [ "$md_count" -eq 0 ]; then
error " No .md content file found"
elif [ "$md_count" -gt 1 ]; then
warn " Multiple .md files found"
else
ok " Content file present"
fi
# Warn on naming convention
if [[ ! "$resource_name" =~ ^[a-z0-9]+(-[a-z0-9]+)*$ ]]; then
warn " Name should be kebab-case (e.g., my-skill-name)"
fi
done
echo ""
done
;;
anthropic)
for skill_dir in skills/*/; do
[ -d "$skill_dir" ] || continue
skill_name=$(basename "$skill_dir")
# Skip templates
[[ "$skill_name" == _* ]] && { info "Skipping template: $skill_name"; continue; }
[[ "$skill_name" == .* ]] && continue
echo " $skill_name:"
# Require SKILL.md
if [ ! -f "$skill_dir/SKILL.md" ]; then
error " Missing SKILL.md"
else
ok " SKILL.md present"
# Check for frontmatter
if ! head -1 "$skill_dir/SKILL.md" | grep -q "^---"; then
warn " SKILL.md missing YAML frontmatter (recommended)"
elif ! grep -q "^name:" "$skill_dir/SKILL.md"; then
warn " SKILL.md frontmatter missing 'name' field (recommended)"
else
ok " Frontmatter with name field"
fi
fi
# Warn on naming convention
if [[ ! "$skill_name" =~ ^[a-z0-9]+(-[a-z0-9]+)*$ ]]; then
warn " Name should be kebab-case"
fi
echo ""
done
;;
flat)
for type_dir in skills commands agents rules; do
[ -d "$type_dir" ] || continue
echo "--- $type_dir/ ---"
for md_file in "$type_dir"/*.md; do
[ -f "$md_file" ] || continue
filename=$(basename "$md_file")
echo " $filename:"
# Check file is not empty
if [ ! -s "$md_file" ]; then
error " File is empty"
else
ok " Has content"
fi
# Warn on naming convention
name="${filename%.md}"
if [[ ! "$name" =~ ^[a-z0-9]+(-[a-z0-9]+)*$ ]]; then
warn " Name should be kebab-case"
fi
done
echo ""
done
;;
esac
# Check for naming conflicts
echo "--- Naming Conflicts ---"
NAMES_FILE=$(mktemp)
if [ -d "resources" ]; then
for type_dir in resources/*/; do
[ -d "$type_dir" ] || continue
find "$type_dir" -mindepth 1 -maxdepth 1 -type d \
! -name "_*" ! -name ".*" -exec basename {} \; >> "$NAMES_FILE"
done
fi
if [ -d "skills" ]; then
find skills -mindepth 1 -maxdepth 1 -type d \
! -name "_*" ! -name ".*" -exec basename {} \; >> "$NAMES_FILE" 2>/dev/null || true
fi
DUPES=$(tr '[:upper:]' '[:lower:]' < "$NAMES_FILE" | sort | uniq -d)
rm "$NAMES_FILE"
if [ -n "$DUPES" ]; then
error "Naming conflicts found:"
echo "$DUPES" | while read -r dupe; do
echo " - $dupe"
done
else
ok "No naming conflicts"
fi
echo ""
echo "========================================"
echo " Summary"
echo "========================================"
echo ""
echo " Format: $FORMAT"
echo " Errors: $ERRORS"
echo " Warnings: $WARNINGS"
echo ""
if [ "$ERRORS" -gt 0 ]; then
echo -e "${RED}VALIDATION FAILED${NC}"
echo ""
echo "Please fix the errors above before submitting your PR."
exit 1
elif [ "$WARNINGS" -gt 0 ]; then
echo -e "${YELLOW}VALIDATION PASSED WITH WARNINGS${NC}"
echo ""
echo "Consider addressing the warnings above."
exit 0
else
echo -e "${GREEN}VALIDATION PASSED${NC}"
exit 0
fi
</file>
<file path="examples/gauntlet-ci/.github/workflows/validate-resources.yml">
name: Validate Resources
on:
pull_request:
paths:
- 'resources/**'
jobs:
validate:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Validate resource structure
run: |
#!/bin/bash
set -e
ERRORS=0
WARNINGS=0
echo "🔍 Validating resource structure..."
echo ""
# Valid resource types
VALID_TYPES="skills commands agents cursor-rules rules"
# Check each resource type directory
for type_dir in resources/*/; do
type_name=$(basename "$type_dir")
# Validate type directory name
if ! echo "$VALID_TYPES" | grep -qw "$type_name"; then
echo "❌ ERROR: Invalid resource type directory: $type_name"
echo " Valid types: $VALID_TYPES"
ERRORS=$((ERRORS+1))  # not ((ERRORS++)): that returns nonzero when ERRORS is 0 and aborts under set -e
continue
fi
# Check each resource in this type
for resource_dir in "$type_dir"*/; do
[ -d "$resource_dir" ] || continue
resource_name=$(basename "$resource_dir")
# Skip template/example directories
if [[ "$resource_name" == _* ]] || [[ "$resource_name" == .* ]]; then
echo "⏭️ Skipping template: $resource_dir"
continue
fi
echo "📦 Checking: $type_dir$resource_name"
# Check for meta.yaml
if [ ! -f "$resource_dir/meta.yaml" ]; then
echo " ❌ ERROR: Missing meta.yaml"
ERRORS=$((ERRORS+1))
else
# Validate meta.yaml has required fields
if ! grep -q "^name:" "$resource_dir/meta.yaml"; then
echo " ❌ ERROR: meta.yaml missing 'name' field"
ERRORS=$((ERRORS+1))
fi
if ! grep -q "^author:" "$resource_dir/meta.yaml"; then
echo " ❌ ERROR: meta.yaml missing 'author' field"
ERRORS=$((ERRORS+1))
fi
if ! grep -q "^description:" "$resource_dir/meta.yaml"; then
echo " ⚠️ WARNING: meta.yaml missing 'description' field"
WARNINGS=$((WARNINGS+1))
fi
fi
# Check for content .md file
MD_COUNT=$(find "$resource_dir" -maxdepth 1 -name "*.md" | wc -l)
if [ "$MD_COUNT" -eq 0 ]; then
echo " ❌ ERROR: No .md content file found"
ERRORS=$((ERRORS+1))
elif [ "$MD_COUNT" -gt 1 ]; then
echo " ⚠️ WARNING: Multiple .md files found, using first match"
WARNINGS=$((WARNINGS+1))
fi
# Validate naming convention (kebab-case)
if [[ ! "$resource_name" =~ ^[a-z0-9]+(-[a-z0-9]+)*$ ]]; then
echo " ⚠️ WARNING: Resource name should be kebab-case (e.g., my-skill-name)"
WARNINGS=$((WARNINGS+1))
fi
echo " ✓ Structure OK"
done
done
echo ""
echo "═══════════════════════════════════════"
echo "📊 Validation Summary"
echo "═══════════════════════════════════════"
echo " Errors: $ERRORS"
echo " Warnings: $WARNINGS"
echo ""
if [ "$ERRORS" -gt 0 ]; then
echo "❌ Validation FAILED"
exit 1
elif [ "$WARNINGS" -gt 0 ]; then
echo "⚠️ Validation PASSED with warnings"
exit 0
else
echo "✅ Validation PASSED"
exit 0
fi
- name: Check for naming conflicts
run: |
#!/bin/bash
set -e
echo "🔍 Checking for naming conflicts..."
echo ""
CONFLICTS=0
for type_dir in resources/*/; do
[ -d "$type_dir" ] || continue
type_name=$(basename "$type_dir")
# Get all resource names (excluding templates)
names=$(find "$type_dir" -mindepth 1 -maxdepth 1 -type d \
! -name "_*" ! -name ".*" -printf "%f\n" | sort)
# Check for duplicates (case-insensitive)
dupes=$(echo "$names" | tr '[:upper:]' '[:lower:]' | sort | uniq -d)
if [ -n "$dupes" ]; then
echo "❌ Naming conflict in $type_name/:"
echo "$dupes" | while read -r dupe; do
echo " - $dupe"
done
CONFLICTS=$((CONFLICTS+1))
fi
done
if [ "$CONFLICTS" -gt 0 ]; then
echo ""
echo "❌ Found naming conflicts!"
exit 1
else
echo "✅ No naming conflicts found"
fi
</file>
<file path="examples/gauntlet-ci/CONTRIBUTING.md">
# Contributing to Gauntlet Champion Resources
Thanks for contributing! Here's how to add your skills, commands, agents, or rules.
## Quick Start
1. Fork this repo
2. Create your resource folder
3. Add `meta.yaml` + content file
4. Submit a PR
## Directory Structure
```
resources/
├── skills/
│ └── your-skill-name/
│ ├── meta.yaml # Required: metadata
│ └── skill.md # Required: content
├── commands/
│ └── your-command-name/
│ ├── meta.yaml
│ └── command.md
├── agents/
│ └── your-agent-name/
│ ├── meta.yaml
│ └── agent.md
└── cursor-rules/
└── your-rule-name/
├── meta.yaml
└── rule.md
```
## Naming Rules
These rules apply to the resource *folder* name (the `name` field in `meta.yaml` is a display name and may contain spaces). The CI checks folder names against the pattern `^[a-z0-9]+(-[a-z0-9]+)*$`:
- **Use kebab-case**: `my-awesome-skill` ✓
- **Lowercase only**: `MySkill` ✗ → `my-skill` ✓
- **No spaces**: `my skill` ✗ → `my-skill` ✓
- **First-write-wins**: If a name is taken, choose a different one
## meta.yaml Format
```yaml
# Required fields
name: My Awesome Skill
author: your-github-username
description: A clear description of what this does.
# Optional fields
tags:
- productivity
- debugging
- testing
version: "1.0.0"
instructions: |
Additional usage instructions.
Can span multiple lines.
```
### Required Fields
| Field | Description |
|-------|-------------|
| `name` | Display name (can include spaces) |
| `author` | Your GitHub username |
| `description` | What does this do? (1-2 sentences) |
### Optional Fields
| Field | Description |
|-------|-------------|
| `tags` | Searchable keywords |
| `version` | Semantic version (e.g., "1.0.0") |
| `instructions` | Detailed usage notes |
## Content File
The `.md` file contains the actual skill/command/agent/rule content:
```markdown
# My Awesome Skill
Instructions and context for the AI assistant.
## When to Use
- Use this when...
- This helps with...
## Instructions
1. First, do this...
2. Then, do that...
```
## Validation
Before submitting, run the validator locally:
```bash
./validate-resources.sh
```
Or the CI will check automatically when you open a PR.
## Example: Adding a Skill
1. Create folder: `resources/skills/code-review-helper/`
2. Create `meta.yaml`:
```yaml
name: Code Review Helper
author: johndoe
description: Helps perform thorough code reviews with a checklist approach.
tags:
- code-review
- quality
version: "1.0.0"
```
3. Create `skill.md`:
```markdown
# Code Review Helper
When reviewing code, follow this checklist...
```
4. Run validator: `./validate-resources.sh`
5. Submit PR!
## What Gets Checked
The CI validates:
- ✅ Correct directory structure
- ✅ `meta.yaml` exists with required fields
- ✅ Content `.md` file exists
- ✅ No naming conflicts
- ✅ kebab-case naming convention
- ✅ Valid YAML syntax
## Questions?
Open an issue or ask in the Gauntlet community!
</file>
<file path="examples/gauntlet-ci/validate-resources.sh">
#!/bin/bash
#
# Validate Gauntlet Champion Resources structure
# Run this locally before submitting a PR, or use in CI/CD
#
# Usage: ./validate-resources.sh [resources_dir]
#
set -e
RESOURCES_DIR="${1:-resources}"
ERRORS=0
WARNINGS=0
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
error() { echo -e "${RED}❌ ERROR:${NC} $1"; ERRORS=$((ERRORS + 1)); }
warn() { echo -e "${YELLOW}⚠️ WARNING:${NC} $1"; WARNINGS=$((WARNINGS + 1)); }
info() { echo -e "${BLUE}ℹ️ ${NC} $1"; }
ok() { echo -e "${GREEN}✓${NC} $1"; }
echo ""
echo "═══════════════════════════════════════════════════════"
echo " Gauntlet Champion Resources Validator"
echo "═══════════════════════════════════════════════════════"
echo ""
if [ ! -d "$RESOURCES_DIR" ]; then
error "Resources directory not found: $RESOURCES_DIR"
exit 1
fi
# Valid resource types
VALID_TYPES=("skills" "commands" "agents" "cursor-rules" "rules")
# Track all names for conflict detection
declare -A ALL_NAMES
validate_meta_yaml() {
local meta_file="$1"
local resource_path="$2"
if [ ! -f "$meta_file" ]; then
error "$resource_path: Missing meta.yaml"
return 1
fi
local has_error=0
# Check required fields
if ! grep -q "^name:" "$meta_file"; then
error "$resource_path: meta.yaml missing required 'name' field"
has_error=1
fi
if ! grep -q "^author:" "$meta_file"; then
error "$resource_path: meta.yaml missing required 'author' field"
has_error=1
fi
# Check recommended fields
if ! grep -q "^description:" "$meta_file"; then
warn "$resource_path: meta.yaml missing 'description' field (recommended)"
fi
# Validate YAML syntax (basic check)
if command -v python3 &> /dev/null && python3 -c "import yaml" 2>/dev/null; then
if ! python3 -c "import sys, yaml; yaml.safe_load(open(sys.argv[1]))" "$meta_file" 2>/dev/null; then
error "$resource_path: meta.yaml has invalid YAML syntax"
has_error=1
fi
fi
return $has_error
}
validate_resource() {
local resource_dir="$1"
local type_name="$2"
local resource_name=$(basename "$resource_dir")
# Skip templates
if [[ "$resource_name" == _* ]] || [[ "$resource_name" == .* ]]; then
info "Skipping template: $resource_name"
return 0
fi
echo ""
echo "📦 $type_name/$resource_name"
# Check naming convention
if [[ ! "$resource_name" =~ ^[a-z0-9]+(-[a-z0-9]+)*$ ]]; then
warn "Name should be kebab-case (lowercase with hyphens)"
fi
# Track for conflict detection
local name_key="${type_name}:${resource_name,,}" # lowercase
if [[ -n "${ALL_NAMES[$name_key]}" ]]; then
error "Naming conflict with: ${ALL_NAMES[$name_key]}"
else
ALL_NAMES[$name_key]="$type_name/$resource_name"
fi
# Validate meta.yaml
# Guard against set -e: a non-zero return here must not abort the whole run
validate_meta_yaml "$resource_dir/meta.yaml" "$type_name/$resource_name" || true
# Check for content file
local md_files=($(find "$resource_dir" -maxdepth 1 -name "*.md" -type f))
local md_count=${#md_files[@]}
if [ "$md_count" -eq 0 ]; then
error "No .md content file found"
elif [ "$md_count" -gt 1 ]; then
warn "Multiple .md files found: ${md_files[*]##*/}"
else
ok "Content file: ${md_files[0]##*/}"
fi
# Check for unexpected files
local expected_pattern="^(meta\.yaml|.*\.md|README\.md)$"
for file in "$resource_dir"/*; do
[ -f "$file" ] || continue
local filename=$(basename "$file")
if [[ ! "$filename" =~ $expected_pattern ]]; then
warn "Unexpected file: $filename"
fi
done
}
# Validate directory structure
for type_dir in "$RESOURCES_DIR"/*/; do
[ -d "$type_dir" ] || continue
type_name=$(basename "$type_dir")
# Check if valid type
if [[ ! " ${VALID_TYPES[*]} " =~ " ${type_name} " ]]; then
error "Invalid resource type: $type_name"
info "Valid types: ${VALID_TYPES[*]}"
continue
fi
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "📁 $type_name/"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
# Validate each resource
for resource_dir in "$type_dir"*/; do
[ -d "$resource_dir" ] || continue
validate_resource "$resource_dir" "$type_name"
done
done
# Summary
echo ""
echo ""
echo "═══════════════════════════════════════════════════════"
echo " Summary"
echo "═══════════════════════════════════════════════════════"
echo ""
echo " Errors: $ERRORS"
echo " Warnings: $WARNINGS"
echo ""
if [ "$ERRORS" -gt 0 ]; then
echo -e "${RED}❌ VALIDATION FAILED${NC}"
echo ""
echo "Please fix the errors above before submitting your PR."
exit 1
elif [ "$WARNINGS" -gt 0 ]; then
echo -e "${YELLOW}⚠️ VALIDATION PASSED WITH WARNINGS${NC}"
echo ""
echo "Consider addressing the warnings above."
exit 0
else
echo -e "${GREEN}✅ VALIDATION PASSED${NC}"
exit 0
fi
</file>
<file path=".gitignore">
/target
</file>
<file path="LICENSE">
MIT License
Copyright (c) 2025
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
</file>
<file path="src/manifest.rs">
use serde::Deserialize;
use std::path::PathBuf;
use crate::bundle::{Bundle, BundleMeta, SkillFile, SkillType};
#[derive(Debug, Deserialize)]
pub struct SourceManifest {
pub source: Option<SourceMeta>,
#[serde(default)]
pub bundles: Vec<BundleDeclaration>,
}
#[derive(Debug, Deserialize)]
pub struct SourceMeta {
pub name: Option<String>,
pub description: Option<String>,
}
#[derive(Debug, Deserialize)]
pub struct BundleDeclaration {
pub name: String,
pub path: String,
pub description: Option<String>,
pub tags: Option<Vec<String>>,
#[serde(default)]
pub paths: ComponentPaths,
}
#[derive(Debug, Deserialize, Default)]
pub struct ComponentPaths {
pub skills: Option<String>,
pub agents: Option<String>,
pub commands: Option<String>,
pub rules: Option<String>,
}
impl ComponentPaths {
pub fn skills_dir(&self) -> &str {
self.skills.as_deref().unwrap_or("skills")
}
pub fn agents_dir(&self) -> &str {
self.agents.as_deref().unwrap_or("agents")
}
pub fn commands_dir(&self) -> &str {
self.commands.as_deref().unwrap_or("commands")
}
pub fn rules_dir(&self) -> &str {
self.rules.as_deref().unwrap_or("rules")
}
}
/// Load and parse a skm.toml manifest from a source root directory
pub fn load_manifest(source_root: &PathBuf) -> Option<SourceManifest> {
let manifest_path = source_root.join("skm.toml");
if !manifest_path.exists() {
return None;
}
let content = std::fs::read_to_string(&manifest_path).ok()?;
toml::from_str(&content).ok()
}
/// Build a Bundle from a manifest declaration by scanning its declared paths
pub fn bundle_from_declaration(
source_root: &PathBuf,
decl: &BundleDeclaration,
) -> anyhow::Result<Bundle> {
let bundle_root = source_root.join(&decl.path);
let skills = scan_component_dir(
&bundle_root.join(decl.paths.skills_dir()),
SkillType::Skill,
)?;
let agents = scan_component_dir(
&bundle_root.join(decl.paths.agents_dir()),
SkillType::Agent,
)?;
let commands = scan_component_dir(
&bundle_root.join(decl.paths.commands_dir()),
SkillType::Command,
)?;
let rules = scan_component_dir(
&bundle_root.join(decl.paths.rules_dir()),
SkillType::Rule,
)?;
Ok(Bundle {
name: decl.name.clone(),
path: bundle_root,
skills,
agents,
commands,
rules,
meta: BundleMeta {
author: None,
description: decl.description.clone(),
},
})
}
/// Scan a component directory for skill files.
/// Handles both flat .md/.mdc files and the {name}/SKILL.md directory format.
fn scan_component_dir(dir: &PathBuf, skill_type: SkillType) -> anyhow::Result<Vec<SkillFile>> {
if !dir.exists() {
return Ok(vec![]);
}
let mut files = vec![];
for entry in std::fs::read_dir(dir)? {
let entry = entry?;
let path = entry.path();
if path.is_file() && path.extension().is_some_and(|e| e == "md" || e == "mdc") {
// Flat .md file (e.g., agents/base/review-agent.md)
let name = path
.file_stem()
.and_then(|n| n.to_str())
.unwrap_or("")
.to_string();
files.push(SkillFile {
name,
path,
skill_type,
source_dir: None,
});
} else if path.is_dir() {
// Directory format: look for SKILL.md, AGENT.md, COMMAND.md, RULE.md, or any .md
let expected_names = match skill_type {
SkillType::Skill => vec!["SKILL.md", "skill.md"],
SkillType::Agent => vec!["AGENT.md", "agent.md"],
SkillType::Command => vec!["COMMAND.md", "command.md"],
SkillType::Rule => vec!["RULE.md", "rule.md"],
};
let mut found = false;
for expected in &expected_names {
let md_path = path.join(expected);
if md_path.exists() {
let folder_name = path
.file_name()
.and_then(|n| n.to_str())
.unwrap_or("")
.to_string();
files.push(SkillFile {
name: folder_name,
path: md_path,
skill_type,
source_dir: Some(path.clone()),
});
found = true;
break;
}
}
// Fall back to any .md file in the directory
if !found {
if let Ok(entries) = std::fs::read_dir(&path) {
for sub_entry in entries.flatten() {
let sub_path = sub_entry.path();
if sub_path.is_file()
&& sub_path.extension().is_some_and(|e| e == "md")
{
let folder_name = path
.file_name()
.and_then(|n| n.to_str())
.unwrap_or("")
.to_string();
files.push(SkillFile {
name: folder_name,
path: sub_path,
skill_type,
source_dir: Some(path.clone()),
});
break;
}
}
}
}
}
}
files.sort_by(|a, b| a.name.cmp(&b.name));
Ok(files)
}
#[cfg(test)]
mod tests {
use super::*;
use std::fs;
use tempfile::tempdir;
#[test]
fn test_load_manifest_not_present() {
let dir = tempdir().unwrap();
assert!(load_manifest(&dir.path().to_path_buf()).is_none());
}
#[test]
fn test_load_manifest_minimal() {
let dir = tempdir().unwrap();
fs::write(
dir.path().join("skm.toml"),
r#"
[[bundles]]
name = "my-bundle"
path = "src"
"#,
)
.unwrap();
let manifest = load_manifest(&dir.path().to_path_buf()).unwrap();
assert_eq!(manifest.bundles.len(), 1);
assert_eq!(manifest.bundles[0].name, "my-bundle");
assert!(manifest.source.is_none());
}
#[test]
fn test_load_manifest_full() {
let dir = tempdir().unwrap();
fs::write(
dir.path().join("skm.toml"),
r#"
[source]
name = "test-source"
description = "A test source"
[[bundles]]
name = "bundle-a"
path = "plugins/a"
description = "First bundle"
tags = ["test", "alpha"]
[bundles.paths]
skills = "skills/base"
agents = "agents/base"
commands = "commands/base"
rules = "rules/base"
[[bundles]]
name = "bundle-b"
path = "plugins/b"
"#,
)
.unwrap();
let manifest = load_manifest(&dir.path().to_path_buf()).unwrap();
assert_eq!(
manifest.source.as_ref().unwrap().name.as_deref(),
Some("test-source")
);
assert_eq!(manifest.bundles.len(), 2);
assert_eq!(manifest.bundles[0].paths.skills_dir(), "skills/base");
assert_eq!(manifest.bundles[1].paths.skills_dir(), "skills"); // default
}
#[test]
fn test_component_paths_defaults() {
let paths = ComponentPaths::default();
assert_eq!(paths.skills_dir(), "skills");
assert_eq!(paths.agents_dir(), "agents");
assert_eq!(paths.commands_dir(), "commands");
assert_eq!(paths.rules_dir(), "rules");
}
#[test]
fn test_scan_component_dir_flat_files() {
let dir = tempdir().unwrap();
let agents_dir = dir.path().join("agents");
fs::create_dir_all(&agents_dir).unwrap();
fs::write(agents_dir.join("analyzer.md"), "# Analyzer").unwrap();
fs::write(agents_dir.join("curator.md"), "# Curator").unwrap();
let files = scan_component_dir(&agents_dir, SkillType::Agent).unwrap();
assert_eq!(files.len(), 2);
assert_eq!(files[0].name, "analyzer");
assert_eq!(files[1].name, "curator");
}
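    // Illustrative check: flat `.mdc` files are scanned the same way as `.md`
    // files, per the extension filter in scan_component_dir.
    #[test]
    fn test_scan_component_dir_flat_mdc_file() {
        let dir = tempdir().unwrap();
        let rules_dir = dir.path().join("rules");
        fs::create_dir_all(&rules_dir).unwrap();
        fs::write(rules_dir.join("style.mdc"), "# Style").unwrap();
        let files = scan_component_dir(&rules_dir, SkillType::Rule).unwrap();
        assert_eq!(files.len(), 1);
        assert_eq!(files[0].name, "style");
    }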
#[test]
fn test_scan_component_dir_skill_md_directories() {
let dir = tempdir().unwrap();
let skills_dir = dir.path().join("skills");
// Create SKILL.md directory format
let skill1 = skills_dir.join("data-model-visualizer");
fs::create_dir_all(&skill1).unwrap();
fs::write(
skill1.join("SKILL.md"),
"---\nname: visualizer\n---\n# Viz",
)
.unwrap();
let skill2 = skills_dir.join("system-mapper");
fs::create_dir_all(&skill2).unwrap();
fs::write(skill2.join("SKILL.md"), "# System Mapper").unwrap();
let files = scan_component_dir(&skills_dir, SkillType::Skill).unwrap();
assert_eq!(files.len(), 2);
assert_eq!(files[0].name, "data-model-visualizer");
assert_eq!(files[1].name, "system-mapper");
}
#[test]
fn test_scan_component_dir_mixed_formats() {
let dir = tempdir().unwrap();
let base = dir.path().join("base");
// Mix of flat file and directory
fs::create_dir_all(&base).unwrap();
fs::write(base.join("simple.md"), "# Simple").unwrap();
let skill_dir = base.join("complex-skill");
fs::create_dir_all(&skill_dir).unwrap();
fs::write(skill_dir.join("SKILL.md"), "# Complex").unwrap();
let files = scan_component_dir(&base, SkillType::Skill).unwrap();
assert_eq!(files.len(), 2);
}
#[test]
fn test_bundle_from_declaration() {
let dir = tempdir().unwrap();
let plugin = dir.path().join("plugins/docs");
// Create skills/base with SKILL.md directory
let skill_dir = plugin.join("skills/base/my-skill");
fs::create_dir_all(&skill_dir).unwrap();
fs::write(skill_dir.join("SKILL.md"), "# Skill").unwrap();
// Create agents/base with flat file
let agents_dir = plugin.join("agents/base");
fs::create_dir_all(&agents_dir).unwrap();
fs::write(agents_dir.join("review-agent.md"), "# Agent").unwrap();
let decl = BundleDeclaration {
name: "synapse-docs".to_string(),
path: "plugins/docs".to_string(),
description: Some("Documentation plugin".to_string()),
tags: None,
paths: ComponentPaths {
skills: Some("skills/base".to_string()),
agents: Some("agents/base".to_string()),
commands: Some("commands/base".to_string()),
rules: Some("rules/base".to_string()),
},
};
let bundle = bundle_from_declaration(&dir.path().to_path_buf(), &decl).unwrap();
assert_eq!(bundle.name, "synapse-docs");
assert_eq!(bundle.skills.len(), 1);
assert_eq!(bundle.agents.len(), 1);
assert_eq!(
bundle.meta.description,
Some("Documentation plugin".to_string())
);
}
}
</file>
<file path="src/config.rs">
use anyhow::Result;
use serde::{Deserialize, Serialize};
use std::path::PathBuf;
use crate::source::{GitSource, LocalSource, Source};
#[derive(Debug, Serialize, Deserialize, Default)]
pub struct Config {
#[serde(default)]
pub default_tool: String,
#[serde(default)]
sources: Vec<SourceConfig>,
}
#[derive(Debug, Serialize, Deserialize, Clone)]
#[serde(tag = "type")]
pub enum SourceConfig {
#[serde(rename = "local")]
Local {
path: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
name: Option<String>,
},
#[serde(rename = "git")]
Git {
url: String,
#[serde(default, skip_serializing_if = "Option::is_none")]
name: Option<String>,
},
}
impl Config {
/// Create a new config with the given sources
pub fn new(sources: Vec<SourceConfig>) -> Self {
Config {
default_tool: "claude".to_string(),
sources,
}
}
/// Load config from file, return None if it doesn't exist
pub fn load() -> Result<Option<Self>> {
let config_path = Self::config_path()?;
if config_path.exists() {
let content = std::fs::read_to_string(&config_path)?;
let config: Config = toml::from_str(&content)?;
Ok(Some(config))
} else {
Ok(None)
}
}
/// Load config from file or return default with ~/.claude-skills as source
pub fn load_or_default() -> Result<Self> {
if let Some(config) = Self::load()? {
Ok(config)
} else {
// Fallback default - used when no config exists and not in interactive mode
Ok(Config {
default_tool: "claude".to_string(),
sources: vec![SourceConfig::Local {
path: "~/.claude-skills".to_string(),
name: None,
}],
})
}
}
/// Save config to file
pub fn save(&self) -> Result<()> {
let config_path = Self::config_path()?;
// Create parent directory if needed
if let Some(parent) = config_path.parent() {
std::fs::create_dir_all(parent)?;
}
let content = toml::to_string_pretty(self)?;
std::fs::write(&config_path, content)?;
Ok(())
}
/// Get the config file path
pub fn config_path() -> Result<PathBuf> {
let proj_dirs = directories::ProjectDirs::from("", "", "skm")
.ok_or_else(|| anyhow::anyhow!("Could not determine config directory"))?;
Ok(proj_dirs.config_dir().join("config.toml"))
}
/// Check if config file exists
pub fn exists() -> Result<bool> {
let config_path = Self::config_path()?;
Ok(config_path.exists())
}
/// Get all configured sources as Source trait objects
pub fn sources(&self) -> Vec<Box<dyn Source>> {
self.sources
.iter()
.filter_map(|s| match s {
SourceConfig::Local { path, .. } => {
let expanded = expand_tilde(path);
Some(Box::new(LocalSource::new(expanded)) as Box<dyn Source>)
}
SourceConfig::Git { url, .. } => match GitSource::new(url.clone()) {
Ok(source) => Some(Box::new(source) as Box<dyn Source>),
Err(e) => {
eprintln!("Warning: Could not initialize git source {}: {}", url, e);
None
}
},
})
.collect()
}
/// Get git sources for update command
pub fn git_sources(&self) -> Vec<GitSource> {
self.sources
.iter()
.filter_map(|s| match s {
SourceConfig::Git { url, .. } => GitSource::new(url.clone()).ok(),
_ => None,
})
.collect()
}
/// Get raw source configs
pub fn source_configs(&self) -> &[SourceConfig] {
&self.sources
}
/// Add a source to the config
pub fn add_source(&mut self, source: SourceConfig) {
// Check if source already exists
let exists = self.sources.iter().any(|s| match (s, &source) {
(SourceConfig::Local { path: p1, .. }, SourceConfig::Local { path: p2, .. }) => {
p1 == p2
}
(SourceConfig::Git { url: u1, .. }, SourceConfig::Git { url: u2, .. }) => u1 == u2,
_ => false,
});
if !exists {
self.sources.push(source);
}
}
/// Move a source from one position to another (for priority)
pub fn move_source(&mut self, from: usize, to: usize) -> Result<()> {
if from >= self.sources.len() || to >= self.sources.len() {
anyhow::bail!("Invalid source index");
}
let source = self.sources.remove(from);
self.sources.insert(to, source);
Ok(())
}
/// Remove a source from the config by path/url or name
pub fn remove_source(&mut self, path_or_url: &str) -> bool {
let initial_len = self.sources.len();
// Expand the input path for comparison (handles ~/foo vs /home/user/foo)
let input_expanded = expand_tilde(path_or_url);
self.sources.retain(|s| match s {
SourceConfig::Local { path, name } => {
// Compare both the raw string and expanded paths, and also by name
path != path_or_url
&& expand_tilde(path) != input_expanded
&& name.as_deref() != Some(path_or_url)
}
SourceConfig::Git { url, name } => {
url != path_or_url && name.as_deref() != Some(path_or_url)
}
});
self.sources.len() < initial_len
}
/// Find a bundle by name across all sources
pub fn find_bundle(
&self,
name: &str,
) -> Result<Option<(Box<dyn Source>, crate::bundle::Bundle)>> {
for source in self.sources() {
// Skip sources that fail to list (they'll be warned about elsewhere)
let bundles = match source.list_bundles() {
Ok(b) => b,
Err(_) => continue,
};
if let Some(bundle) = bundles.into_iter().find(|b| b.name == name) {
return Ok(Some((source, bundle)));
}
}
Ok(None)
}
/// Find a source by its name
pub fn find_source_by_name(&self, name: &str) -> Option<(Box<dyn Source>, &SourceConfig)> {
for source_config in &self.sources {
if source_config.name() == Some(name) {
let source: Option<Box<dyn Source>> = match source_config {
SourceConfig::Local { path, .. } => {
let expanded = expand_tilde(path);
Some(Box::new(LocalSource::new(expanded)))
}
SourceConfig::Git { url, .. } => GitSource::new(url.clone())
.ok()
.map(|s| Box::new(s) as Box<dyn Source>),
};
if let Some(source) = source {
return Some((source, source_config));
}
}
}
None
}
}
impl SourceConfig {
/// Get display string for this source
pub fn display(&self) -> &str {
match self {
SourceConfig::Local { path, .. } => path,
SourceConfig::Git { url, .. } => url,
}
}
/// Get the optional name for this source
pub fn name(&self) -> Option<&str> {
match self {
SourceConfig::Local { name, .. } => name.as_deref(),
SourceConfig::Git { name, .. } => name.as_deref(),
}
}
}
/// Expand ~ to home directory
fn expand_tilde(path: &str) -> PathBuf {
if path.starts_with("~/") {
if let Some(home) = dirs_home() {
return home.join(&path[2..]);
}
} else if path == "~" {
if let Some(home) = dirs_home() {
return home;
}
}
PathBuf::from(path)
}
fn dirs_home() -> Option<PathBuf> {
std::env::var_os("HOME").map(PathBuf::from)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_expand_tilde() {
let home = std::env::var("HOME").unwrap();
assert_eq!(
expand_tilde("~/.claude-skills"),
PathBuf::from(format!("{}/.claude-skills", home))
);
assert_eq!(
expand_tilde("/absolute/path"),
PathBuf::from("/absolute/path")
);
}
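    // Illustrative check: add_source ignores a duplicate entry with the same
    // path, per the dedup comparison in add_source.
    #[test]
    fn test_add_source_deduplicates() {
        let mut config = Config::new(vec![]);
        let src = SourceConfig::Local {
            path: "~/.claude-skills".to_string(),
            name: None,
        };
        config.add_source(src.clone());
        config.add_source(src);
        assert_eq!(config.source_configs().len(), 1);
    }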
#[test]
fn test_default_config() {
let config = Config::load_or_default().unwrap();
assert_eq!(config.default_tool, "claude");
assert!(!config.sources.is_empty());
}
}
</file>
<file path="src/discover.rs">
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use anyhow::Result;
use walkdir::WalkDir;
/// Represents an installed skill discovered in the current directory
#[derive(Debug, Clone)]
pub struct InstalledSkill {
/// The name of the skill (derived from filename)
pub name: String,
/// The type of skill (skill, agent, command)
pub skill_type: SkillType,
/// The tool this is installed for
pub tool: InstalledTool,
/// Full path to the skill file
pub path: PathBuf,
/// Optional bundle name (if detectable from path structure)
pub bundle: Option<String>,
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub enum InstalledTool {
Claude,
OpenCode,
Cursor,
}
impl InstalledTool {
pub fn as_str(&self) -> &'static str {
match self {
InstalledTool::Claude => "claude",
InstalledTool::OpenCode => "opencode",
InstalledTool::Cursor => "cursor",
}
}
pub fn display_name(&self) -> &'static str {
match self {
InstalledTool::Claude => "Claude",
InstalledTool::OpenCode => "OpenCode",
InstalledTool::Cursor => "Cursor",
}
}
}
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub enum SkillType {
Skill,
Agent,
Command,
Rule,
}
impl SkillType {
pub fn plural(&self) -> &'static str {
match self {
SkillType::Skill => "skills",
SkillType::Agent => "agents",
SkillType::Command => "commands",
SkillType::Rule => "rules",
}
}
}
/// Discover all installed skills in a directory
pub fn discover_installed(base: &Path) -> Result<Vec<InstalledSkill>> {
let mut skills = Vec::new();
// Discover Claude skills
skills.extend(discover_claude(base)?);
// Discover OpenCode skills
skills.extend(discover_opencode(base)?);
// Discover Cursor skills
skills.extend(discover_cursor(base)?);
Ok(skills)
}
/// Discover Claude installed skills
fn discover_claude(base: &Path) -> Result<Vec<InstalledSkill>> {
    let mut skills = Vec::new();
    let claude_dir = base.join(".claude");
    if !claude_dir.exists() {
        return Ok(skills);
    }
    // Every section follows the same layout:
    // .claude/{commands,agents,skills,rules}/**/*.md, with an optional
    // intermediate bundle directory (e.g. .claude/commands/bundle/skill.md)
    let sections = [
        ("commands", SkillType::Command),
        ("agents", SkillType::Agent),
        ("skills", SkillType::Skill),
        ("rules", SkillType::Rule),
    ];
    for (dir_name, skill_type) in sections {
        let section_dir = claude_dir.join(dir_name);
        if !section_dir.exists() {
            continue;
        }
        for entry in WalkDir::new(&section_dir)
            .into_iter()
            .filter_map(|e| e.ok())
            .filter(|e| e.file_type().is_file())
            .filter(|e| e.path().extension().map(|ext| ext == "md").unwrap_or(false))
        {
            let path = entry.path().to_path_buf();
            let name = path
                .file_stem()
                .and_then(|s| s.to_str())
                .unwrap_or("")
                .to_string();
            // Detect bundle from the path: .claude/<section>/<bundle>/<name>.md
            let bundle = path.parent().and_then(|p| {
                if p != section_dir {
                    p.file_name().and_then(|n| n.to_str()).map(String::from)
                } else {
                    None
                }
            });
            if !name.is_empty() {
                skills.push(InstalledSkill {
                    name,
                    skill_type,
                    tool: InstalledTool::Claude,
                    path,
                    bundle,
                });
            }
        }
    }
    Ok(skills)
}
/// Discover OpenCode installed skills
fn discover_opencode(base: &Path) -> Result<Vec<InstalledSkill>> {
let mut skills = Vec::new();
let opencode_dir = base.join(".opencode");
if !opencode_dir.exists() {
return Ok(skills);
}
// .opencode/skill/*/SKILL.md -> skills
let skill_dir = opencode_dir.join("skill");
if skill_dir.exists() {
for entry in std::fs::read_dir(&skill_dir)? {
let entry = entry?;
let path = entry.path();
if path.is_dir() {
let skill_file = path.join("SKILL.md");
if skill_file.exists() {
let name = path
.file_name()
.and_then(|s| s.to_str())
.unwrap_or("")
.to_string();
if !name.is_empty() {
skills.push(InstalledSkill {
name: name.clone(),
skill_type: SkillType::Skill,
tool: InstalledTool::OpenCode,
path: skill_file,
bundle: Some(name),
});
}
}
}
}
}
// .opencode/agent/*.md -> agents
let agent_dir = opencode_dir.join("agent");
if agent_dir.exists() {
for entry in std::fs::read_dir(&agent_dir)? {
let entry = entry?;
let path = entry.path();
if path.is_file() && path.extension().map(|e| e == "md").unwrap_or(false) {
let name = path
.file_stem()
.and_then(|s| s.to_str())
.unwrap_or("")
.to_string();
if !name.is_empty() {
skills.push(InstalledSkill {
name,
skill_type: SkillType::Agent,
tool: InstalledTool::OpenCode,
path,
bundle: None,
});
}
}
}
}
// .opencode/command/*.md -> commands
let command_dir = opencode_dir.join("command");
if command_dir.exists() {
for entry in std::fs::read_dir(&command_dir)? {
let entry = entry?;
let path = entry.path();
if path.is_file() && path.extension().map(|e| e == "md").unwrap_or(false) {
let name = path
.file_stem()
.and_then(|s| s.to_str())
.unwrap_or("")
.to_string();
if !name.is_empty() {
skills.push(InstalledSkill {
name,
skill_type: SkillType::Command,
tool: InstalledTool::OpenCode,
path,
bundle: None,
});
}
}
}
}
// .opencode/rule/*/RULE.md -> rules
let rule_dir = opencode_dir.join("rule");
if rule_dir.exists() {
for entry in std::fs::read_dir(&rule_dir)? {
let entry = entry?;
let path = entry.path();
if path.is_dir() {
let rule_file = path.join("RULE.md");
if rule_file.exists() {
let name = path
.file_name()
.and_then(|s| s.to_str())
.unwrap_or("")
.to_string();
if !name.is_empty() {
skills.push(InstalledSkill {
name: name.clone(),
skill_type: SkillType::Rule,
tool: InstalledTool::OpenCode,
path: rule_file,
bundle: Some(name),
});
}
}
}
}
}
Ok(skills)
}
/// Discover Cursor installed skills
fn discover_cursor(base: &Path) -> Result<Vec<InstalledSkill>> {
let mut skills = Vec::new();
let cursor_dir = base.join(".cursor");
if !cursor_dir.exists() {
return Ok(skills);
}
// .cursor/skills/*/SKILL.md -> skills
let skills_dir = cursor_dir.join("skills");
if skills_dir.exists() {
for entry in std::fs::read_dir(&skills_dir)? {
let entry = entry?;
let path = entry.path();
if path.is_dir() {
let skill_file = path.join("SKILL.md");
if skill_file.exists() {
let name = path
.file_name()
.and_then(|s| s.to_str())
.unwrap_or("")
.to_string();
if !name.is_empty() {
skills.push(InstalledSkill {
name: name.clone(),
skill_type: SkillType::Skill,
tool: InstalledTool::Cursor,
path: skill_file,
bundle: Some(name),
});
}
}
}
}
}
// .cursor/rules/*/RULE.md -> rules (folder-based)
let rules_dir = cursor_dir.join("rules");
if rules_dir.exists() {
for entry in std::fs::read_dir(&rules_dir)? {
let entry = entry?;
let path = entry.path();
if path.is_dir() {
let rule_file = path.join("RULE.md");
if rule_file.exists() {
let name = path
.file_name()
.and_then(|s| s.to_str())
.unwrap_or("")
.to_string();
if !name.is_empty() {
skills.push(InstalledSkill {
name: name.clone(),
skill_type: SkillType::Rule,
tool: InstalledTool::Cursor,
path: rule_file,
bundle: Some(name),
});
}
}
}
}
}
Ok(skills)
}
/// Group skills by tool, then by type
pub fn group_by_tool(
skills: &[InstalledSkill],
) -> HashMap<InstalledTool, HashMap<SkillType, Vec<&InstalledSkill>>> {
let mut result: HashMap<InstalledTool, HashMap<SkillType, Vec<&InstalledSkill>>> =
HashMap::new();
for skill in skills {
result
.entry(skill.tool)
.or_default()
.entry(skill.skill_type)
.or_default()
.push(skill);
}
result
}
/// Filter skills to a specific tool
pub fn filter_by_tool(skills: Vec<InstalledSkill>, tool: &str) -> Vec<InstalledSkill> {
let tool_lower = tool.to_lowercase();
skills
.into_iter()
.filter(|s| s.tool.as_str() == tool_lower)
.collect()
}
/// Get a unique identifier for a skill (for grouping across tools)
impl InstalledSkill {
pub fn unique_id(&self) -> String {
if let Some(ref bundle) = self.bundle {
format!("{}/{}", bundle, self.name)
} else {
self.name.clone()
}
}
}
/// Group skills that have the same name/bundle across different tools
pub fn group_same_skills(skills: &[InstalledSkill]) -> HashMap<String, Vec<&InstalledSkill>> {
let mut result: HashMap<String, Vec<&InstalledSkill>> = HashMap::new();
for skill in skills {
result.entry(skill.unique_id()).or_default().push(skill);
}
result
}
/// Remove a skill file and clean up empty parent directories
pub fn remove_skill(skill: &InstalledSkill) -> Result<()> {
    // Directory-format skills/rules (OpenCode/Cursor {name}/SKILL.md or
    // {name}/RULE.md) own their parent directory, so remove it whole.
    // Flat .md files (e.g. Claude skills) must skip this branch: their parent
    // is a shared directory, and removing it would delete sibling skills too.
    let is_dir_format = skill
        .path
        .file_name()
        .and_then(|n| n.to_str())
        .is_some_and(|n| n == "SKILL.md" || n == "RULE.md");
    if is_dir_format
        && (skill.skill_type == SkillType::Skill || skill.skill_type == SkillType::Rule)
    {
        if let Some(parent) = skill.path.parent() {
            if parent.is_dir() {
                std::fs::remove_dir_all(parent)?;
                return Ok(());
            }
        }
    }
// Remove the file
std::fs::remove_file(&skill.path)?;
// Clean up empty parent directories
let mut current = skill.path.parent();
while let Some(parent) = current {
// Stop at the tool directory (.claude, .opencode, .cursor)
if let Some(name) = parent.file_name().and_then(|n| n.to_str()) {
if name.starts_with('.') {
break;
}
}
// Try to remove if empty
if std::fs::read_dir(parent)?.next().is_none() {
std::fs::remove_dir(parent)?;
current = parent.parent();
} else {
break;
}
}
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
use std::fs;
use tempfile::tempdir;
#[test]
fn test_discover_empty_dir() {
let dir = tempdir().unwrap();
let skills = discover_installed(dir.path()).unwrap();
assert!(skills.is_empty());
}
#[test]
fn test_discover_claude_commands() {
let dir = tempdir().unwrap();
// Create .claude/commands/test.md
let commands_dir = dir.path().join(".claude/commands");
fs::create_dir_all(&commands_dir).unwrap();
fs::write(commands_dir.join("test.md"), "# Test command").unwrap();
let skills = discover_installed(dir.path()).unwrap();
assert_eq!(skills.len(), 1);
assert_eq!(skills[0].name, "test");
assert_eq!(skills[0].skill_type, SkillType::Command);
assert_eq!(skills[0].tool, InstalledTool::Claude);
}
#[test]
fn test_discover_claude_commands_with_bundle() {
let dir = tempdir().unwrap();
// Create .claude/commands/mybundle/test.md
let bundle_dir = dir.path().join(".claude/commands/mybundle");
fs::create_dir_all(&bundle_dir).unwrap();
fs::write(bundle_dir.join("test.md"), "# Test command").unwrap();
let skills = discover_installed(dir.path()).unwrap();
assert_eq!(skills.len(), 1);
assert_eq!(skills[0].name, "test");
assert_eq!(skills[0].bundle, Some("mybundle".to_string()));
}
#[test]
fn test_discover_opencode_skills() {
let dir = tempdir().unwrap();
// Create .opencode/skill/myskill/SKILL.md
let skill_dir = dir.path().join(".opencode/skill/myskill");
fs::create_dir_all(&skill_dir).unwrap();
fs::write(skill_dir.join("SKILL.md"), "# My skill").unwrap();
let skills = discover_installed(dir.path()).unwrap();
assert_eq!(skills.len(), 1);
assert_eq!(skills[0].name, "myskill");
assert_eq!(skills[0].skill_type, SkillType::Skill);
assert_eq!(skills[0].tool, InstalledTool::OpenCode);
}
#[test]
fn test_discover_cursor_rules() {
let dir = tempdir().unwrap();
// Create .cursor/rules/test/RULE.md (folder-based)
let rule_dir = dir.path().join(".cursor/rules/test");
fs::create_dir_all(&rule_dir).unwrap();
fs::write(rule_dir.join("RULE.md"), "# Test rule").unwrap();
let skills = discover_installed(dir.path()).unwrap();
assert_eq!(skills.len(), 1);
assert_eq!(skills[0].name, "test");
assert_eq!(skills[0].skill_type, SkillType::Rule);
assert_eq!(skills[0].tool, InstalledTool::Cursor);
}
#[test]
fn test_filter_by_tool() {
let skills = vec![
InstalledSkill {
name: "test1".to_string(),
skill_type: SkillType::Command,
tool: InstalledTool::Claude,
path: PathBuf::from("/test1"),
bundle: None,
},
InstalledSkill {
name: "test2".to_string(),
skill_type: SkillType::Command,
tool: InstalledTool::OpenCode,
path: PathBuf::from("/test2"),
bundle: None,
},
];
let filtered = filter_by_tool(skills, "claude");
assert_eq!(filtered.len(), 1);
assert_eq!(filtered[0].name, "test1");
}
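}
// A sketch of a companion test for unique_id() and group_same_skills(),
// mirroring the InstalledSkill construction used in test_filter_by_tool above.
#[test]
fn test_group_same_skills_across_tools() {
let skills = vec![
InstalledSkill {
name: "helper".to_string(),
skill_type: SkillType::Command,
tool: InstalledTool::Claude,
path: PathBuf::from("/a"),
bundle: Some("cl".to_string()),
},
InstalledSkill {
name: "helper".to_string(),
skill_type: SkillType::Command,
tool: InstalledTool::OpenCode,
path: PathBuf::from("/b"),
bundle: Some("cl".to_string()),
},
];
let groups = group_same_skills(&skills);
// The same bundle/name across tools collapses into one "cl/helper" entry
assert_eq!(groups.len(), 1);
assert_eq!(groups["cl/helper"].len(), 2);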
}
</file>
<file path="src/setup.rs">
use anyhow::Result;
use colored::Colorize;
use dialoguer::{theme::ColorfulTheme, Input, Select};
use crate::config::{Config, SourceConfig};
/// Run the first-time setup wizard
pub fn run_setup_wizard() -> Result<Config> {
println!();
println!(
"{}",
"No config found. Let's set up your skill sources.".bold()
);
println!();
let options = vec![
"Use ~/.claude-skills (recommended)",
"Specify a custom path",
"Skip for now (add sources later with `skm sources add`)",
];
let selection = Select::with_theme(&ColorfulTheme::default())
.with_prompt("How would you like to configure your default source?")
.items(&options)
.default(0)
.interact()?;
let sources = match selection {
0 => {
// Use default ~/.claude-skills
let path = "~/.claude-skills".to_string();
println!();
println!(" {} {}", "Adding source:".dimmed(), path);
vec![SourceConfig::Local { path, name: None }]
}
1 => {
// Custom path
let path: String = Input::with_theme(&ColorfulTheme::default())
.with_prompt("Enter path to your skills directory")
.interact_text()?;
let path = if path.starts_with("~/") || path.starts_with('/') {
path
} else {
// Make relative paths absolute
let cwd = std::env::current_dir()?;
cwd.join(&path).to_string_lossy().to_string()
};
println!();
println!(" {} {}", "Adding source:".dimmed(), path);
vec![SourceConfig::Local { path, name: None }]
}
2 => {
// Skip
println!();
println!(
"{}",
" Skipping source setup. Add sources later with:".dimmed()
);
println!(" skm sources add <path>");
vec![]
}
_ => vec![],
};
let config = Config::new(sources);
config.save()?;
let config_path = Config::config_path()?;
println!();
println!("{} {}", "Config saved to:".green(), config_path.display());
println!();
Ok(config)
}
</file>
<file path="CLAUDE.md">
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Build & Test Commands
```bash
cargo build # Debug build
cargo build --release # Release build
cargo test # Run all tests
cargo test <test_name> # Run a single test
cargo run # Run the CLI (lists bundles)
cargo run -- <bundle> # Install a bundle
```
The binary is named `skm` (skill-manager).
## Architecture
This is a Rust CLI tool for managing AI coding assistant skills across Claude, OpenCode, and Cursor. It copies skill bundles from configured sources to tool-specific locations.
### Core Data Flow
1. **Sources** (`source.rs`) - Skill bundles come from local directories or git repositories
2. **Bundles** (`bundle.rs`) - A bundle is a directory containing `skills/`, `agents/`, and/or `commands/` subdirectories with `.md` files
3. **Targets** (`target.rs`) - Each tool (Claude/OpenCode/Cursor) has different destination paths and file transformations
4. **Install** (`install.rs`) - Orchestrates copying from source bundle to target tool location
### Module Responsibilities
- `main.rs` - CLI argument parsing (clap) and command dispatch
- `config.rs` - TOML config file at `~/.config/skm/config.toml`, manages source list
- `source.rs` - `Source` trait with `LocalSource` and `GitSource` implementations; git sources clone to cache dir
- `bundle.rs` - `Bundle` struct representing a skill bundle with its files
- `target.rs` - `Tool` enum handling per-tool path conventions and file transformations
- `discover.rs` - Scans directories for installed skills across all three tools
- `install.rs` - Copies bundle files to target locations
- `setup.rs` - First-run interactive setup wizard
### Tool-Specific Path Mappings
| Type | Claude | OpenCode | Cursor |
|------|--------|----------|--------|
| Skills | `.claude/skills/{bundle}/{name}.md` | `.opencode/skill/{bundle}-{name}/SKILL.md` | `.cursor/skills/{bundle}-{name}/SKILL.md` (beta) |
| Agents | `.claude/agents/{bundle}/{name}.md` | `.opencode/agent/{bundle}-{name}.md` | `.cursor/rules/{bundle}-{name}/RULE.md` |
| Commands | `.claude/commands/{bundle}/{name}.md` | `.opencode/command/{bundle}-{name}.md` | `.cursor/rules/{bundle}-{name}/RULE.md` |
| Rules | `.claude/rules/{bundle}/{name}.md` | `.opencode/rule/{bundle}-{name}/RULE.md` | `.cursor/rules/{bundle}-{name}/RULE.md` |
OpenCode and Cursor skills/rules require YAML frontmatter with a `name` field; `target.rs:transform_skill_file()` adds it automatically if missing.
**Cursor Support**: Cursor supports both folder-based skills (beta, `.cursor/skills/`) and rules (`.cursor/rules/`). Skills are installed to the beta skills directory, while agents, commands, and rules go to the rules directory.
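A hypothetical sketch of that transformation for a skill `skills/helper.md` in bundle `my-bundle` (the generated `name` value shown is an assumption based on the `{bundle}-{name}` path convention; the actual value comes from `transform_skill_file()`):

```markdown
---
name: my-bundle-helper
---
# Helper skill
(original file contents, unchanged)
```

Installed at `.opencode/skill/my-bundle-helper/SKILL.md`; the frontmatter block is prepended only when the source file lacks one.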
### Key Dependencies
- `clap` - CLI argument parsing with derive macros
- `git2` - Git operations (clone, fetch, fast-forward)
- `dialoguer` - Interactive prompts for setup wizard and skill removal
- `walkdir` - Recursive directory traversal for discovery
## SCUD Task Management
This project uses SCUD Task Manager to track and coordinate tasks.
### Session Workflow
1. **Start of session**: Run `scud warmup` to orient yourself
- Shows current working directory and recent git history
- Displays active tag, task counts, and any stale locks
- Identifies the next available task
2. **Claim a task**: Use `/scud:task-next` or `scud next --claim --name "Claude"`
- Always claim before starting work to prevent conflicts
- Task context is stored in `.scud/current-task`
3. **Work on the task**: Implement the requirements
- Reference task details with `/scud:task-show <id>`
- Dependencies are automatically tracked by the DAG
4. **Commit with context**: Use `scud commit -m "message"` or `scud commit -a -m "message"`
- Automatically prefixes commits with `[TASK-ID]`
- Uses task title as default commit message if none provided
5. **Complete the task**: Mark done with `/scud:task-status <id> done`
- The stop hook will prompt for task completion
### Progress Journaling
Keep a brief progress log during complex tasks:
```
## Progress Log
### Session: 2025-01-15
- Investigated auth module, found issue in token refresh
- Updated refresh logic to handle edge case
- Tests passing, ready for review
```
This helps maintain continuity across sessions and provides context for future work.
### Key Commands
- `scud warmup` - Session orientation
- `scud next` - Find next available task
- `scud show <id>` - View task details
- `scud set-status <id> <status>` - Update task status
- `scud commit` - Task-aware git commit
- `scud stats` - View completion statistics
</file>
<file path="src/install.rs">
use anyhow::Result;
use colored::Colorize;
use std::path::PathBuf;
use crate::bundle::SkillType;
use crate::config::Config;
use crate::source::Source;
use crate::target::Tool;
/// Install a bundle to the target directory
pub fn install_bundle(
config: &Config,
bundle_name: &str,
tool: &Tool,
target_dir: &PathBuf,
types: &[SkillType],
) -> Result<()> {
// Find the bundle in configured sources
let (_source, bundle) = config.find_bundle(bundle_name)?.ok_or_else(|| {
// Collect available bundle names for the error message
let mut available = vec![];
for src in config.sources() {
if let Ok(bundles) = src.list_bundles() {
for b in bundles {
available.push(b.name);
}
}
}
anyhow::anyhow!(
"Bundle not found: {}\nAvailable: {}",
bundle_name,
if available.is_empty() {
"(none)".to_string()
} else {
available.join(", ")
}
)
})?;
println!(
"Importing from {} to {}...",
bundle_name.cyan(),
tool.name()
);
let mut total_count = 0;
for skill_type in types {
let files = bundle.files_of_type(*skill_type);
if files.is_empty() {
continue;
}
let mut count = 0;
for file in files {
tool.write_file(target_dir, &bundle.name, file)?;
count += 1;
}
if count > 0 {
let dest_info = tool.dest_info(*skill_type, &bundle.name);
println!(
" {}: {} files -> {}",
skill_type.dir_name(),
count,
dest_info.dimmed()
);
total_count += count;
}
}
if total_count == 0 {
println!("{}", "No files to import.".yellow());
} else {
println!("{}", "Done!".green());
}
Ok(())
}
/// Install all bundles from a named source
pub fn install_from_source(
source: &dyn Source,
tool: &Tool,
target_dir: &PathBuf,
types: &[SkillType],
) -> Result<()> {
let bundles = source.list_bundles()?;
if bundles.is_empty() {
println!("{}", "No bundles found in source.".yellow());
return Ok(());
}
println!(
"Installing {} bundle(s) from {} to {}...",
bundles.len(),
source.display_path().cyan(),
tool.name()
);
println!();
let mut total_files = 0;
for bundle in bundles {
let mut bundle_files = 0;
for skill_type in types {
let files = bundle.files_of_type(*skill_type);
for file in files {
tool.write_file(target_dir, &bundle.name, file)?;
bundle_files += 1;
}
}
if bundle_files > 0 {
println!(" {} {} file(s)", bundle.name.cyan(), bundle_files);
total_files += bundle_files;
}
}
if total_files == 0 {
println!("{}", "No files to import.".yellow());
} else {
println!();
println!("{} {} file(s) installed.", "Done!".green(), total_files);
}
Ok(())
}
/// Install a specific bundle from a specific source
pub fn install_bundle_from_source(
source: &dyn Source,
bundle_name: &str,
tool: &Tool,
target_dir: &PathBuf,
types: &[SkillType],
) -> Result<()> {
let bundles = source.list_bundles()?;
let bundle = bundles.into_iter().find(|b| b.name == bundle_name).ok_or_else(|| {
anyhow::anyhow!(
"Bundle '{}' not found in source '{}'",
bundle_name,
source.display_path()
)
})?;
println!(
"Importing from {} to {}...",
bundle_name.cyan(),
tool.name()
);
let mut total_count = 0;
for skill_type in types {
let files = bundle.files_of_type(*skill_type);
if files.is_empty() {
continue;
}
let mut count = 0;
for file in files {
tool.write_file(target_dir, &bundle.name, file)?;
count += 1;
}
if count > 0 {
let dest_info = tool.dest_info(*skill_type, &bundle.name);
println!(
" {}: {} files -> {}",
skill_type.dir_name(),
count,
dest_info.dimmed()
);
total_count += count;
}
}
if total_count == 0 {
println!("{}", "No files to import.".yellow());
} else {
println!("{}", "Done!".green());
}
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
use std::fs;
use tempfile::tempdir;
fn setup_test_source() -> (tempfile::TempDir, PathBuf) {
let dir = tempdir().unwrap();
let source_path = dir.path().to_path_buf();
let bundle_dir = source_path.join("test-bundle");
// Create commands
let commands_dir = bundle_dir.join("commands");
fs::create_dir_all(&commands_dir).unwrap();
fs::write(commands_dir.join("commit.md"), "# Commit command").unwrap();
fs::write(commands_dir.join("debug.md"), "# Debug command").unwrap();
// Create agents
let agents_dir = bundle_dir.join("agents");
fs::create_dir_all(&agents_dir).unwrap();
fs::write(agents_dir.join("analyzer.md"), "# Analyzer agent").unwrap();
// Create skills
let skills_dir = bundle_dir.join("skills");
fs::create_dir_all(&skills_dir).unwrap();
fs::write(skills_dir.join("helper.md"), "# Helper skill").unwrap();
(dir, source_path)
}
#[test]
fn test_install_to_claude() {
let (_source_dir, source_path) = setup_test_source();
let target_dir = tempdir().unwrap();
// We can't easily test with Config since it's hardcoded for Phase 1
// This test verifies the Tool::write_file logic directly
let bundle = crate::bundle::Bundle::from_path(source_path.join("test-bundle")).unwrap();
for cmd in &bundle.commands {
Tool::Claude
.write_file(&target_dir.path().to_path_buf(), "test-bundle", cmd)
.unwrap();
}
// Verify files were created
assert!(target_dir
.path()
.join(".claude/commands/test-bundle/commit.md")
.exists());
assert!(target_dir
.path()
.join(".claude/commands/test-bundle/debug.md")
.exists());
}
#[test]
fn test_install_to_opencode() {
let (_source_dir, source_path) = setup_test_source();
let target_dir = tempdir().unwrap();
let bundle = crate::bundle::Bundle::from_path(source_path.join("test-bundle")).unwrap();
// Test skill (should create directory structure)
for skill in &bundle.skills {
Tool::OpenCode
.write_file(&target_dir.path().to_path_buf(), "test-bundle", skill)
.unwrap();
}
// Verify skill structure
assert!(target_dir
.path()
.join(".opencode/skill/test-bundle-helper/SKILL.md")
.exists());
// Test command
for cmd in &bundle.commands {
Tool::OpenCode
.write_file(&target_dir.path().to_path_buf(), "test-bundle", cmd)
.unwrap();
}
assert!(target_dir
.path()
.join(".opencode/command/test-bundle-commit.md")
.exists());
}
#[test]
fn test_install_to_cursor() {
let (_source_dir, source_path) = setup_test_source();
let target_dir = tempdir().unwrap();
let bundle = crate::bundle::Bundle::from_path(source_path.join("test-bundle")).unwrap();
// Test skill (should go to skills beta directory)
for skill in &bundle.skills {
Tool::Cursor
.write_file(&target_dir.path().to_path_buf(), "test-bundle", skill)
.unwrap();
}
// Verify skills folder-based structure (beta)
assert!(target_dir
.path()
.join(".cursor/skills/test-bundle-helper/SKILL.md")
.exists());
// Test agent (should go to rules folder-based structure)
for agent in &bundle.agents {
Tool::Cursor
.write_file(&target_dir.path().to_path_buf(), "test-bundle", agent)
.unwrap();
}
// Verify rules folder-based structure
assert!(target_dir
.path()
.join(".cursor/rules/test-bundle-analyzer/RULE.md")
.exists());
}
}
</file>
<file path="README.md">
# skill-manager (skm)
A CLI tool for managing AI coding assistant skills across Claude, OpenCode, and Cursor.
## What This Does
AI coding assistants like Claude Code, OpenCode, and Cursor support custom "skills": markdown files containing prompts, instructions, or agent definitions that extend their capabilities. The problem: each tool expects these files in a different location and format.
**skill-manager** lets you maintain a single collection of skills and install them to any supported tool. It handles the path conventions and file transformations automatically.
## How It Works
```
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Sources │ │ Bundles │ │ Targets │
│ │ │ │ │ │
│ ~/.claude-skills│ ───► │ my-bundle/ │ ───► │ .claude/ │
│ ~/my-skills │ │ skills/ │ │ .opencode/ │
│ github.com/... │ │ agents/ │ │ .cursor/ │
└─────────────────┘ │ commands/ │ └─────────────────┘
└─────────────────┘
```
1. **Sources** are directories (local or git repos) containing skill bundles
2. **Bundles** are folders with `skills/`, `agents/`, and/or `commands/` subdirectories
3. **Targets** are the tool-specific directories where skills get installed
When you run `skm my-bundle`, it copies the bundle's files to the appropriate locations for your chosen tool, applying any necessary transformations.
## Installation
```bash
cargo install skill-manager
```
This installs the `skm` binary. Requires [Rust](https://rustup.rs/) to be installed.
## Quick Start
```bash
# Browse available bundles interactively
skm list
# Install a bundle to Claude (default)
skm add my-bundle
# or just:
skm my-bundle
# Install to OpenCode or Cursor instead
skm my-bundle -o # OpenCode
skm my-bundle -c # Cursor
# Manage sources interactively
skm sources
# See what's installed in current directory
skm here
# Remove installed skills interactively
skm here --remove
```
## Commands
### `skm list`
Interactive browser with **fuzzy search** for exploring available bundles. Type to filter by bundle name, author, description, or skill names. Press Esc to quit, Enter to view bundle details.
```
Available Bundles (type to search)
> xlsx by Anthropic 1s 0a 0c (anthropics/skills)
pdf by Anthropic 1s 0a 0c (anthropics/skills)
my-skill by username 2s 1a 0c (~/.claude-skills)
```
### `skm add <bundle>` or `skm <bundle>`
Install a bundle to the current directory. Bundles are searched in priority order across all configured sources.
```bash
skm add my-bundle # Install to Claude (default)
skm add my-bundle -o # Install to OpenCode
skm add my-bundle -c # Install to Cursor
skm add my-bundle -g # Install globally
skm add my-bundle --skills # Install only skills
skm add my-bundle --agents # Install only agents
skm add my-bundle --commands # Install only commands
```
### `skm sources`
Interactive menu to view, add, remove, and reorder sources by priority. Sources are checked in order when searching for bundles.
```bash
skm sources # Interactive management
skm sources list # Just list sources
skm sources add <path> # Add a local directory or git URL
skm sources remove <path> # Remove a source
```
### `skm here`
Show and manage skills installed in the current directory.
```bash
skm here # Show all installed skills
skm here --tool claude # Filter by tool
skm here --remove # Interactive removal
skm here --clean # Remove all (with confirmation)
skm here --clean --yes # Remove all without confirmation
```
### `skm update`
Pull latest changes from all git sources.
## Supported Skill Formats
skm supports multiple skill repository formats, making it compatible with popular community skill repos.
### Flat Bundle Format
The original format is a directory with a subdirectory for each type:
```
my-bundle/
├── skills/ # Reusable skill definitions
│ └── helper.md
├── agents/ # Agent definitions
│ └── reviewer.md
├── commands/ # Slash commands (e.g., /commit)
│ └── commit.md
└── rules/ # Rules/guidelines
└── style.md
```
### Anthropic/Marketplace Format
Compatible with [anthropics/skills](https://github.com/anthropics/skills) and [huggingface/skills](https://github.com/huggingface/skills):
```
skills/
├── xlsx/
│ └── SKILL.md # With YAML frontmatter (name, description)
├── pdf/
│ └── SKILL.md
└── docx/
└── SKILL.md
```
Each skill folder becomes a separate installable bundle. The skill name is extracted from YAML frontmatter if present.
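For example, a `SKILL.md` might look like this (contents illustrative):

```markdown
---
name: xlsx
description: Read and write Excel spreadsheets
---
# xlsx

Instructions for working with spreadsheet files...
```

Here the bundle is listed as `xlsx`, taken from the frontmatter `name` field.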
```bash
# Add the official Anthropic skills repo
skm sources add https://github.com/anthropics/skills
# Install individual skills
skm xlsx
skm pdf
```
### Community Resources Format
For community repos with `resources/` directory structure:
```
resources/
├── skills/
│ └── my-skill/
│ ├── meta.yaml # name, author, description
│ └── skill.md
└── commands/
└── my-command/
├── meta.yaml
└── command.md
```
Each resource folder becomes a separate bundle, named from `meta.yaml`.
### Where Files Get Installed
| Source | Claude | OpenCode | Cursor |
|--------|--------|----------|--------|
| `skills/foo.md` | `.claude/skills/bundle/foo.md` | `.opencode/skill/bundle-foo/SKILL.md` | `.cursor/skills/bundle-foo/SKILL.md` |
| `agents/foo.md` | `.claude/agents/bundle/foo.md` | `.opencode/agent/bundle-foo.md` | `.cursor/rules/bundle-foo/RULE.md` |
| `commands/foo.md` | `.claude/commands/bundle/foo.md` | `.opencode/command/bundle-foo.md` | `.cursor/rules/bundle-foo/RULE.md` |
| `rules/foo.md` | `.claude/rules/bundle/foo.md` | `.opencode/rule/bundle-foo/RULE.md` | `.cursor/rules/bundle-foo/RULE.md` |
OpenCode and Cursor skills/rules require YAML frontmatter with a `name` field; skm adds it automatically if missing.
## Configuration
Config file: `~/.config/skm/config.toml`
```toml
default_tool = "claude"
[[sources]]
type = "local"
path = "~/.claude-skills"
[[sources]]
type = "git"
url = "https://github.com/user/skills"
```
Sources are searched in order (first match wins). Use `skm sources` to manage priority.
## Shell Completions
```bash
skm completions bash > ~/.local/share/bash-completion/completions/skm
skm completions zsh > ~/.zfunc/_skm
skm completions fish > ~/.config/fish/completions/skm.fish
```
## License
MIT
</file>
<file path="src/source.rs">
use anyhow::{Context, Result};
use colored::Colorize;
use std::path::PathBuf;
use crate::bundle::Bundle;
/// Trait for skill sources (local directories, git repos, etc.)
pub trait Source {
/// List all bundles in this source
fn list_bundles(&self) -> Result<Vec<Bundle>>;
/// Get display path for this source
fn display_path(&self) -> String;
}
/// A local directory source
pub struct LocalSource {
path: PathBuf,
}
impl LocalSource {
pub fn new(path: PathBuf) -> Self {
LocalSource { path }
}
}
impl LocalSource {
fn list_bundles_from_manifest(
&self,
manifest: crate::manifest::SourceManifest,
) -> Result<Vec<Bundle>> {
let mut bundles = Vec::new();
for decl in &manifest.bundles {
let bundle_root = self.path.join(&decl.path);
if !bundle_root.exists() {
eprintln!(
" {}: bundle path {} does not exist",
"Warning".yellow(),
decl.path
);
continue;
}
match crate::manifest::bundle_from_declaration(&self.path, decl) {
Ok(bundle) if !bundle.is_empty() => bundles.push(bundle),
Ok(_) => {
// Bundle exists but has no files — skip silently
}
Err(e) => {
eprintln!(
" {}: failed to scan bundle {}: {}",
"Warning".yellow(),
decl.name,
e
);
}
}
}
Ok(bundles)
}
}
impl Source for LocalSource {
fn list_bundles(&self) -> Result<Vec<Bundle>> {
if !self.path.exists() {
return Ok(vec![]);
}
// Check for skm.toml manifest (highest priority)
if let Some(manifest) = crate::manifest::load_manifest(&self.path) {
return self.list_bundles_from_manifest(manifest);
}
// Check if this is a resources-format source (has resources/ directory at root)
// Each resource folder becomes its own bundle
if Bundle::is_resources_format(&self.path) {
return Bundle::list_from_resources_path(self.path.clone());
}
// Check if this is an Anthropic-format source (skills/{name}/SKILL.md at root)
// Each skill folder becomes its own bundle
if Bundle::is_anthropic_format(&self.path) {
return Bundle::list_from_anthropic_path(self.path.clone());
}
let mut bundles = vec![];
for entry in std::fs::read_dir(&self.path)? {
let entry = entry?;
let path = entry.path();
// Skip non-directories
if !path.is_dir() {
continue;
}
// Skip hidden directories and 'shell' directory
let name = path.file_name().and_then(|n| n.to_str()).unwrap_or("");
if name.starts_with('.') || name == "shell" {
continue;
}
// Try to create a bundle from this directory
match Bundle::from_path(path) {
Ok(bundle) if !bundle.is_empty() => bundles.push(bundle),
_ => continue,
}
}
// Sort bundles by name
bundles.sort_by(|a, b| a.name.cmp(&b.name));
Ok(bundles)
}
fn display_path(&self) -> String {
// Try to show with ~ if it's under home
if let Some(home) = std::env::var_os("HOME") {
let home_path = PathBuf::from(home);
if let Ok(relative) = self.path.strip_prefix(&home_path) {
return format!("~/{}", relative.display());
}
}
self.path.display().to_string()
}
}
/// A git repository source
pub struct GitSource {
url: String,
cache_path: PathBuf,
}
impl GitSource {
pub fn new(url: String) -> Result<Self> {
let cache_path = Self::cache_path_for_url(&url)?;
Ok(GitSource { url, cache_path })
}
/// Get the cache directory for a git URL
fn cache_path_for_url(url: &str) -> Result<PathBuf> {
let cache_dir = directories::ProjectDirs::from("", "", "skm")
.ok_or_else(|| anyhow::anyhow!("Could not determine cache directory"))?
.cache_dir()
.to_path_buf();
// Parse URL to create a path like github.com/user/repo
let path_suffix = Self::url_to_path(url);
Ok(cache_dir.join(path_suffix))
}
/// Convert a git URL to a filesystem path
fn url_to_path(url: &str) -> String {
// Handle various URL formats:
// https://github.com/user/repo.git -> github.com/user/repo
// git@github.com:user/repo.git -> github.com/user/repo
// https://github.com/user/repo -> github.com/user/repo
let url = url.trim_end_matches(".git");
if url.starts_with("https://") {
url.strip_prefix("https://").unwrap_or(url).to_string()
} else if url.starts_with("git@") {
// git@github.com:user/repo -> github.com/user/repo
url.strip_prefix("git@").unwrap_or(url).replace(':', "/")
} else {
url.to_string()
}
}
/// Clone the repository if it doesn't exist
pub fn ensure_cloned(&self) -> Result<()> {
if self.cache_path.exists() {
return Ok(());
}
println!(" {} {}...", "Cloning".cyan(), self.url);
// Create parent directory
if let Some(parent) = self.cache_path.parent() {
std::fs::create_dir_all(parent)?;
}
// Clone the repository
git2::Repository::clone(&self.url, &self.cache_path)
.with_context(|| format!("Failed to clone {}", self.url))?;
Ok(())
}
/// Get the URL for display
pub fn url(&self) -> &str {
&self.url
}
/// Pull latest changes from the remote
pub fn pull(&self) -> Result<bool> {
if !self.cache_path.exists() {
self.ensure_cloned()?;
return Ok(true);
}
let repo = git2::Repository::open(&self.cache_path)
.with_context(|| format!("Failed to open repository at {:?}", self.cache_path))?;
// Fetch from origin
let mut remote = repo.find_remote("origin")?;
remote.fetch(&["HEAD"], None, None)?;
// Get the fetch head
let fetch_head = repo.find_reference("FETCH_HEAD")?;
let fetch_commit = repo.reference_to_annotated_commit(&fetch_head)?;
// Get HEAD
let head = repo.head()?;
let head_commit = head.peel_to_commit()?;
// Check if we need to update
if fetch_commit.id() == head_commit.id() {
return Ok(false);
}
// Fast-forward merge
let refname = head.name().unwrap_or("HEAD");
repo.reference(refname, fetch_commit.id(), true, "Fast-forward")?;
repo.set_head(refname)?;
repo.checkout_head(Some(git2::build::CheckoutBuilder::default().force()))?;
Ok(true)
}
}
impl Source for GitSource {
fn list_bundles(&self) -> Result<Vec<Bundle>> {
// Ensure the repo is cloned first
self.ensure_cloned()?;
// Delegate to LocalSource for actual bundle discovery
let local = LocalSource::new(self.cache_path.clone());
local.list_bundles()
}
fn display_path(&self) -> String {
self.url.clone()
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::fs;
use tempfile::tempdir;
#[test]
fn test_local_source_empty_dir() {
let dir = tempdir().unwrap();
let source = LocalSource::new(dir.path().to_path_buf());
let bundles = source.list_bundles().unwrap();
assert!(bundles.is_empty());
}
#[test]
fn test_local_source_with_bundle() {
let dir = tempdir().unwrap();
// Create a bundle with a command
let bundle_dir = dir.path().join("test-bundle");
let commands_dir = bundle_dir.join("commands");
fs::create_dir_all(&commands_dir).unwrap();
fs::write(commands_dir.join("test.md"), "# Test command").unwrap();
let source = LocalSource::new(dir.path().to_path_buf());
let bundles = source.list_bundles().unwrap();
assert_eq!(bundles.len(), 1);
assert_eq!(bundles[0].name, "test-bundle");
assert_eq!(bundles[0].commands.len(), 1);
assert_eq!(bundles[0].commands[0].name, "test");
}
#[test]
fn test_local_source_skips_hidden_and_shell() {
let dir = tempdir().unwrap();
// Create hidden directory
let hidden = dir.path().join(".hidden");
fs::create_dir_all(hidden.join("commands")).unwrap();
fs::write(hidden.join("commands/test.md"), "# Test").unwrap();
// Create shell directory
let shell = dir.path().join("shell");
fs::create_dir_all(&shell).unwrap();
fs::write(shell.join("skim.bash"), "# Shell script").unwrap();
let source = LocalSource::new(dir.path().to_path_buf());
let bundles = source.list_bundles().unwrap();
assert!(bundles.is_empty());
}
#[test]
fn test_local_source_resources_format() {
let dir = tempdir().unwrap();
// Create resources-format structure with multiple resources
let resources = dir.path().join("resources");
let skills_dir = resources.join("skills");
// First skill
let skill1 = skills_dir.join("my-skill");
fs::create_dir_all(&skill1).unwrap();
fs::write(skill1.join("meta.yaml"), "name: My Skill\nauthor: test\n").unwrap();
fs::write(skill1.join("skill.md"), "# Skill content").unwrap();
// Second skill
let skill2 = skills_dir.join("another-skill");
fs::create_dir_all(&skill2).unwrap();
fs::write(
skill2.join("meta.yaml"),
"name: Another Skill\nauthor: test\n",
)
.unwrap();
fs::write(skill2.join("skill.md"), "# Another skill").unwrap();
let source = LocalSource::new(dir.path().to_path_buf());
let bundles = source.list_bundles().unwrap();
// Each resource folder becomes its own bundle
assert_eq!(bundles.len(), 2);
assert_eq!(bundles[0].name, "Another Skill");
assert_eq!(bundles[1].name, "My Skill");
}
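// A sketch checking GitSource::url_to_path() normalization; the private
// associated function is reachable here because `tests` is a child module.
#[test]
fn test_git_url_to_path() {
assert_eq!(
GitSource::url_to_path("https://github.com/user/repo.git"),
"github.com/user/repo"
);
assert_eq!(
GitSource::url_to_path("git@github.com:user/repo.git"),
"github.com/user/repo"
);
assert_eq!(
GitSource::url_to_path("https://github.com/user/repo"),
"github.com/user/repo"
);
}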
}
</file>
<file path="src/bundle.rs">
use serde::Deserialize;
use std::path::PathBuf;
/// Type of skill item
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum SkillType {
Skill,
Agent,
Command,
Rule,
}
impl SkillType {
pub fn dir_name(&self) -> &'static str {
match self {
SkillType::Skill => "skills",
SkillType::Agent => "agents",
SkillType::Command => "commands",
SkillType::Rule => "rules",
}
}
/// Alternative directory names for the resources format
pub fn alt_dir_names(&self) -> &'static [&'static str] {
match self {
SkillType::Rule => &["cursor-rules"],
_ => &[],
}
}
}
/// Metadata from meta.yaml files (resources format)
#[derive(Debug, Deserialize, Default, Clone)]
pub struct ResourceMeta {
pub name: Option<String>,
pub author: Option<String>,
pub description: Option<String>,
}
/// Metadata for a bundle (author, description, etc.)
#[derive(Debug, Clone, Default)]
pub struct BundleMeta {
/// Author name or GitHub username
pub author: Option<String>,
/// Description of the bundle
pub description: Option<String>,
}
/// A single skill/agent/command file
#[derive(Debug, Clone)]
pub struct SkillFile {
/// Name without extension (e.g., "commit")
pub name: String,
/// Full path to the source file
pub path: PathBuf,
/// Type of skill
pub skill_type: SkillType,
/// Directory containing companion files (scripts, templates, etc.)
/// When set, all sibling files/dirs are copied alongside the main file.
pub source_dir: Option<PathBuf>,
}
/// A bundle containing skills, agents, commands, and rules
#[derive(Debug, Clone)]
pub struct Bundle {
/// Bundle name (e.g., "cl", "gastro")
pub name: String,
/// Path to the bundle directory
#[allow(dead_code)]
pub path: PathBuf,
/// Skills in this bundle
pub skills: Vec<SkillFile>,
/// Agents in this bundle
pub agents: Vec<SkillFile>,
/// Commands in this bundle
pub commands: Vec<SkillFile>,
/// Rules in this bundle
pub rules: Vec<SkillFile>,
/// Metadata (author, description)
pub meta: BundleMeta,
}
impl Bundle {
/// Create a searchable string for fuzzy matching
pub fn search_string(&self) -> String {
let mut parts = vec![self.name.clone()];
if let Some(author) = &self.meta.author {
parts.push(author.clone());
}
if let Some(desc) = &self.meta.description {
parts.push(desc.clone());
}
// Add skill/command names for searching
for skill in &self.skills {
parts.push(skill.name.clone());
}
for cmd in &self.commands {
parts.push(cmd.name.clone());
}
parts.join(" ")
}
}
impl Bundle {
/// Create a new bundle by scanning a directory
pub fn from_path(path: PathBuf) -> anyhow::Result<Self> {
let name = path
.file_name()
.and_then(|n| n.to_str())
.ok_or_else(|| anyhow::anyhow!("Invalid bundle path"))?
.to_string();
let skills = Self::scan_type(&path, SkillType::Skill)?;
let agents = Self::scan_type(&path, SkillType::Agent)?;
let commands = Self::scan_type(&path, SkillType::Command)?;
let rules = Self::scan_type(&path, SkillType::Rule)?;
Ok(Bundle {
name,
path,
skills,
agents,
commands,
rules,
meta: BundleMeta::default(),
})
}
/// Create multiple bundles from a resources-format directory
/// Each resource folder becomes its own bundle (for community repos)
/// Structure: resources/{skills,commands,agents,cursor-rules}/resource-name/{meta.yaml,*.md}
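/// An illustrative layout (the folder and field names below are examples, not required values):
/// ```text
/// resources/
///   skills/
///     my-skill/
///       meta.yaml   # name, author, description
///       skill.md    # content
///   cursor-rules/
///     my-rule/
///       meta.yaml
///       rule.md
/// ```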
pub fn list_from_resources_path(path: PathBuf) -> anyhow::Result<Vec<Bundle>> {
let resources_dir = path.join("resources");
if !resources_dir.exists() {
return Ok(vec![]);
}
let mut bundles: std::collections::HashMap<String, Bundle> =
std::collections::HashMap::new();
// Scan all resource types
for skill_type in [
SkillType::Skill,
SkillType::Agent,
SkillType::Command,
SkillType::Rule,
] {
let mut dir_names = vec![skill_type.dir_name()];
dir_names.extend(skill_type.alt_dir_names());
for dir_name in dir_names {
let type_dir = resources_dir.join(dir_name);
if !type_dir.exists() {
continue;
}
for entry in std::fs::read_dir(&type_dir)? {
let entry = entry?;
let resource_dir = entry.path();
if !resource_dir.is_dir() {
continue;
}
let folder_name = resource_dir
.file_name()
.and_then(|n| n.to_str())
.unwrap_or("");
// Skip template (underscore-prefixed) and hidden directories
if folder_name.starts_with('.') || folder_name.starts_with('_') {
continue;
}
// Get or create bundle for this resource
if let Some((skill_file, resource_meta)) = Self::scan_resource_folder_with_meta(
&resource_dir,
skill_type,
folder_name,
)? {
let bundle_name = skill_file.name.clone();
let bundle = bundles.entry(bundle_name.clone()).or_insert_with(|| {
let meta = BundleMeta {
author: resource_meta.author.clone(),
description: resource_meta.description.clone(),
};
Bundle {
name: bundle_name,
path: resource_dir.clone(),
skills: vec![],
agents: vec![],
commands: vec![],
rules: vec![],
meta,
}
});
match skill_type {
SkillType::Skill => bundle.skills.push(skill_file),
SkillType::Agent => bundle.agents.push(skill_file),
SkillType::Command => bundle.commands.push(skill_file),
SkillType::Rule => bundle.rules.push(skill_file),
}
}
}
}
}
let mut result: Vec<Bundle> = bundles.into_values().collect();
result.sort_by(|a, b| a.name.cmp(&b.name));
Ok(result)
}
/// Check if a path uses the resources format
pub fn is_resources_format(path: &PathBuf) -> bool {
path.join("resources").is_dir()
}
/// Check if a path uses the Anthropic/marketplace format
/// Structure: skills/{name}/SKILL.md at the root level
pub fn is_anthropic_format(path: &PathBuf) -> bool {
let skills_dir = path.join("skills");
if !skills_dir.is_dir() {
return false;
}
// Check if any subdirectory contains SKILL.md
if let Ok(entries) = std::fs::read_dir(&skills_dir) {
for entry in entries.flatten() {
let subdir = entry.path();
if subdir.is_dir() && subdir.join("SKILL.md").exists() {
return true;
}
}
}
false
}
/// Create multiple bundles from an Anthropic-format directory
/// Each skill folder becomes its own bundle
/// Structure: skills/{name}/SKILL.md (with optional YAML frontmatter)
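/// An illustrative SKILL.md opening (field values are examples):
/// ```text
/// ---
/// name: Excel Processor
/// description: Process Excel files
/// ---
///
/// # Excel Skill
/// ```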
pub fn list_from_anthropic_path(path: PathBuf) -> anyhow::Result<Vec<Bundle>> {
let skills_dir = path.join("skills");
if !skills_dir.exists() {
return Ok(vec![]);
}
let mut bundles = vec![];
for entry in std::fs::read_dir(&skills_dir)? {
let entry = entry?;
let skill_dir = entry.path();
if !skill_dir.is_dir() {
continue;
}
let folder_name = skill_dir.file_name().and_then(|n| n.to_str()).unwrap_or("");
// Skip hidden and template directories
if folder_name.starts_with('.') || folder_name.starts_with('_') {
continue;
}
let skill_md = skill_dir.join("SKILL.md");
if !skill_md.exists() {
continue;
}
// Extract metadata from YAML frontmatter if present
let frontmatter = Self::extract_frontmatter(&skill_md);
let name = frontmatter
.as_ref()
.and_then(|fm| fm.name.clone())
.unwrap_or_else(|| folder_name.to_string());
let meta = BundleMeta {
author: frontmatter.as_ref().and_then(|fm| fm.author.clone()),
description: frontmatter.as_ref().and_then(|fm| fm.description.clone()),
};
let skill_file = SkillFile {
name: name.clone(),
path: skill_md,
skill_type: SkillType::Skill,
source_dir: Some(skill_dir.clone()),
};
bundles.push(Bundle {
name,
path: skill_dir,
skills: vec![skill_file],
agents: vec![],
commands: vec![],
rules: vec![],
meta,
});
}
bundles.sort_by(|a, b| a.name.cmp(&b.name));
Ok(bundles)
}
/// Extract full metadata from YAML frontmatter in a markdown file
fn extract_frontmatter(path: &PathBuf) -> Option<ResourceMeta> {
let content = std::fs::read_to_string(path).ok()?;
if !content.starts_with("---") {
return None;
}
// Find end of frontmatter
let rest = &content[3..];
let end_idx = rest.find("---")?;
let frontmatter = &rest[..end_idx];
serde_yaml::from_str(frontmatter).ok()
}
/// Load metadata from meta.yaml file
fn load_meta_yaml(dir: &PathBuf) -> Option<ResourceMeta> {
let meta_path = dir.join("meta.yaml");
if !meta_path.exists() {
return None;
}
let content = std::fs::read_to_string(&meta_path).ok()?;
serde_yaml::from_str(&content).ok()
}
/// Scan a subdirectory for .md files (original flat format)
fn scan_type(bundle_path: &PathBuf, skill_type: SkillType) -> anyhow::Result<Vec<SkillFile>> {
let type_dir = bundle_path.join(skill_type.dir_name());
if !type_dir.exists() {
return Ok(vec![]);
}
let mut files = vec![];
for entry in std::fs::read_dir(&type_dir)? {
let entry = entry?;
let path = entry.path();
if path.is_file() && path.extension().is_some_and(|e| e == "md") {
let name = path
.file_stem()
.and_then(|n| n.to_str())
.ok_or_else(|| anyhow::anyhow!("Invalid file name"))?
.to_string();
files.push(SkillFile {
name,
path,
skill_type,
source_dir: None,
});
}
}
// Sort for consistent output
files.sort_by(|a, b| a.name.cmp(&b.name));
Ok(files)
}
/// Scan a single resource folder for meta.yaml and content .md file
/// Returns both the skill file and the metadata
fn scan_resource_folder_with_meta(
resource_dir: &PathBuf,
skill_type: SkillType,
folder_name: &str,
) -> anyhow::Result<Option<(SkillFile, ResourceMeta)>> {
// Try to read meta.yaml to get metadata
let meta = Self::load_meta_yaml(resource_dir).unwrap_or_default();
let name = meta.name.clone().unwrap_or_else(|| folder_name.to_string());
// Find the content .md file (could be skill.md, command.md, agent.md, rule.md, or any .md)
let expected_names = match skill_type {
SkillType::Skill => vec!["skill.md", "SKILL.md"],
SkillType::Agent => vec!["agent.md", "AGENT.md"],
SkillType::Command => vec!["command.md", "COMMAND.md"],
SkillType::Rule => vec!["rule.md", "RULE.md"],
};
// First try expected names
for expected in &expected_names {
let md_path = resource_dir.join(expected);
if md_path.exists() {
return Ok(Some((
SkillFile {
name,
path: md_path,
skill_type,
source_dir: Some(resource_dir.to_path_buf()),
},
meta,
)));
}
}
// Fall back to any .md file (meta.yaml is excluded by its extension)
for entry in std::fs::read_dir(resource_dir)? {
let entry = entry?;
let path = entry.path();
if path.is_file() && path.extension().is_some_and(|e| e == "md") {
return Ok(Some((
SkillFile {
name,
path,
skill_type,
source_dir: Some(resource_dir.to_path_buf()),
},
meta,
)));
}
}
Ok(None)
}
/// Get all files of a specific type
pub fn files_of_type(&self, skill_type: SkillType) -> &[SkillFile] {
match skill_type {
SkillType::Skill => &self.skills,
SkillType::Agent => &self.agents,
SkillType::Command => &self.commands,
SkillType::Rule => &self.rules,
}
}
/// Check if bundle is empty (no files)
pub fn is_empty(&self) -> bool {
self.skills.is_empty()
&& self.agents.is_empty()
&& self.commands.is_empty()
&& self.rules.is_empty()
}
}
#[cfg(test)]
mod tests {
use super::*;
use std::fs;
use tempfile::tempdir;
#[test]
fn test_skill_type_dir_name() {
assert_eq!(SkillType::Skill.dir_name(), "skills");
assert_eq!(SkillType::Agent.dir_name(), "agents");
assert_eq!(SkillType::Command.dir_name(), "commands");
assert_eq!(SkillType::Rule.dir_name(), "rules");
}
#[test]
fn test_resources_format_detection() {
let dir = tempdir().unwrap();
// Without resources/ directory
assert!(!Bundle::is_resources_format(&dir.path().to_path_buf()));
// With resources/ directory
fs::create_dir(dir.path().join("resources")).unwrap();
assert!(Bundle::is_resources_format(&dir.path().to_path_buf()));
}
#[test]
fn test_resources_format_bundle() {
let dir = tempdir().unwrap();
let resources = dir.path().join("resources");
let skills_dir = resources.join("skills");
let skill_folder = skills_dir.join("my-skill");
fs::create_dir_all(&skill_folder).unwrap();
// Create meta.yaml
fs::write(
skill_folder.join("meta.yaml"),
"name: My Awesome Skill\nauthor: testuser\ndescription: A test skill\n",
)
.unwrap();
// Create skill.md
fs::write(skill_folder.join("skill.md"), "# My Skill\n\nContent here").unwrap();
let bundles = Bundle::list_from_resources_path(dir.path().to_path_buf()).unwrap();
assert_eq!(bundles.len(), 1);
assert_eq!(bundles[0].name, "My Awesome Skill");
assert_eq!(bundles[0].skills.len(), 1);
assert_eq!(bundles[0].skills[0].name, "My Awesome Skill");
}
#[test]
fn test_resources_format_cursor_rules() {
let dir = tempdir().unwrap();
let resources = dir.path().join("resources");
let rules_dir = resources.join("cursor-rules");
let rule_folder = rules_dir.join("my-rule");
fs::create_dir_all(&rule_folder).unwrap();
fs::write(
rule_folder.join("meta.yaml"),
"name: My Cursor Rule\nauthor: testuser\n",
)
.unwrap();
fs::write(rule_folder.join("rule.md"), "# Rule content").unwrap();
let bundles = Bundle::list_from_resources_path(dir.path().to_path_buf()).unwrap();
assert_eq!(bundles.len(), 1);
assert_eq!(bundles[0].name, "My Cursor Rule");
assert_eq!(bundles[0].rules.len(), 1);
}
#[test]
fn test_resources_format_skips_templates() {
let dir = tempdir().unwrap();
let resources = dir.path().join("resources");
let skills_dir = resources.join("skills");
// Create template folder (should be skipped)
let template = skills_dir.join("_example");
fs::create_dir_all(&template).unwrap();
fs::write(template.join("meta.yaml"), "name: Example\n").unwrap();
fs::write(template.join("skill.md"), "# Example").unwrap();
// Create real skill
let skill = skills_dir.join("real-skill");
fs::create_dir_all(&skill).unwrap();
fs::write(skill.join("meta.yaml"), "name: Real Skill\n").unwrap();
fs::write(skill.join("skill.md"), "# Real").unwrap();
let bundles = Bundle::list_from_resources_path(dir.path().to_path_buf()).unwrap();
assert_eq!(bundles.len(), 1);
assert_eq!(bundles[0].name, "Real Skill");
}
#[test]
fn test_resources_format_fallback_to_folder_name() {
let dir = tempdir().unwrap();
let resources = dir.path().join("resources");
let skills_dir = resources.join("skills");
let skill_folder = skills_dir.join("my-skill");
fs::create_dir_all(&skill_folder).unwrap();
// No meta.yaml, should use folder name
fs::write(skill_folder.join("skill.md"), "# Content").unwrap();
let bundles = Bundle::list_from_resources_path(dir.path().to_path_buf()).unwrap();
assert_eq!(bundles.len(), 1);
assert_eq!(bundles[0].name, "my-skill");
}
// Anthropic format tests
#[test]
fn test_anthropic_format_detection() {
let dir = tempdir().unwrap();
// Without skills/ directory
assert!(!Bundle::is_anthropic_format(&dir.path().to_path_buf()));
// With skills/ directory but no SKILL.md
fs::create_dir(dir.path().join("skills")).unwrap();
assert!(!Bundle::is_anthropic_format(&dir.path().to_path_buf()));
// With skills/{name}/SKILL.md
let skill_dir = dir.path().join("skills").join("my-skill");
fs::create_dir_all(&skill_dir).unwrap();
fs::write(skill_dir.join("SKILL.md"), "# Skill content").unwrap();
assert!(Bundle::is_anthropic_format(&dir.path().to_path_buf()));
}
#[test]
fn test_anthropic_format_with_frontmatter() {
let dir = tempdir().unwrap();
let skills_dir = dir.path().join("skills");
let skill_dir = skills_dir.join("xlsx");
fs::create_dir_all(&skill_dir).unwrap();
// Create SKILL.md with YAML frontmatter (Anthropic style)
fs::write(
skill_dir.join("SKILL.md"),
"---\nname: Excel Processor\ndescription: Process Excel files\n---\n\n# Excel Skill\n\nContent here",
)
.unwrap();
let bundles = Bundle::list_from_anthropic_path(dir.path().to_path_buf()).unwrap();
assert_eq!(bundles.len(), 1);
assert_eq!(bundles[0].name, "Excel Processor");
assert_eq!(bundles[0].skills.len(), 1);
assert_eq!(bundles[0].skills[0].name, "Excel Processor");
}
#[test]
fn test_anthropic_format_without_frontmatter() {
let dir = tempdir().unwrap();
let skills_dir = dir.path().join("skills");
let skill_dir = skills_dir.join("my-skill");
fs::create_dir_all(&skill_dir).unwrap();
// Create SKILL.md without frontmatter
fs::write(skill_dir.join("SKILL.md"), "# My Skill\n\nContent here").unwrap();
let bundles = Bundle::list_from_anthropic_path(dir.path().to_path_buf()).unwrap();
assert_eq!(bundles.len(), 1);
assert_eq!(bundles[0].name, "my-skill"); // Falls back to folder name
assert_eq!(bundles[0].skills.len(), 1);
}
#[test]
fn test_anthropic_format_multiple_skills() {
let dir = tempdir().unwrap();
let skills_dir = dir.path().join("skills");
// Create first skill
let skill1 = skills_dir.join("pdf");
fs::create_dir_all(&skill1).unwrap();
fs::write(
skill1.join("SKILL.md"),
"---\nname: PDF Handler\n---\n\n# PDF Skill",
)
.unwrap();
// Create second skill
let skill2 = skills_dir.join("docx");
fs::create_dir_all(&skill2).unwrap();
fs::write(
skill2.join("SKILL.md"),
"---\nname: Word Handler\n---\n\n# Word Skill",
)
.unwrap();
let bundles = Bundle::list_from_anthropic_path(dir.path().to_path_buf()).unwrap();
assert_eq!(bundles.len(), 2);
// Sorted alphabetically
assert_eq!(bundles[0].name, "PDF Handler");
assert_eq!(bundles[1].name, "Word Handler");
}
#[test]
fn test_anthropic_format_skips_templates() {
let dir = tempdir().unwrap();
let skills_dir = dir.path().join("skills");
// Create template folder (should be skipped)
let template = skills_dir.join("_template");
fs::create_dir_all(&template).unwrap();
fs::write(template.join("SKILL.md"), "# Template").unwrap();
// Create real skill
let skill = skills_dir.join("real-skill");
fs::create_dir_all(&skill).unwrap();
fs::write(skill.join("SKILL.md"), "# Real Skill").unwrap();
let bundles = Bundle::list_from_anthropic_path(dir.path().to_path_buf()).unwrap();
assert_eq!(bundles.len(), 1);
assert_eq!(bundles[0].name, "real-skill");
}
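#[test]
fn test_hidden_and_template_prefix_rule() {
// Illustrative, std-only restatement of the skip rule shared by the
// resources and Anthropic scanners: folder names starting with '.'
// or '_' are treated as hidden/template and ignored.
let skip = |name: &str| name.starts_with('.') || name.starts_with('_');
assert!(skip("_template"));
assert!(skip(".git"));
assert!(!skip("real-skill"));
}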
#[test]
fn test_extract_frontmatter_name() {
let dir = tempdir().unwrap();
let file = dir.path().join("test.md");
// With frontmatter
fs::write(
&file,
"---\nname: My Skill\ndescription: test\nauthor: Test Author\n---\n\n# Content",
)
.unwrap();
let meta = Bundle::extract_frontmatter(&file);
assert!(meta.is_some());
let meta = meta.unwrap();
assert_eq!(meta.name, Some("My Skill".to_string()));
assert_eq!(meta.author, Some("Test Author".to_string()));
assert_eq!(meta.description, Some("test".to_string()));
// Without frontmatter
fs::write(&file, "# No Frontmatter").unwrap();
assert!(Bundle::extract_frontmatter(&file).is_none());
// With frontmatter but no name field
fs::write(&file, "---\ndescription: test\n---\n\n# Content").unwrap();
let meta = Bundle::extract_frontmatter(&file);
assert!(meta.is_some());
assert_eq!(meta.unwrap().name, None);
}
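#[test]
fn test_frontmatter_delimiter_scan() {
// Illustrative, std-only sketch of the splitting approach used by
// extract_frontmatter: strip the leading "---", find the closing
// "---", and parse everything in between as YAML.
let content = "---\nname: X\n---\n\n# Body";
assert!(content.starts_with("---"));
let rest = &content[3..];
let end_idx = rest.find("---").expect("closing delimiter");
assert_eq!(rest[..end_idx].trim(), "name: X");
// Without a closing delimiter there is no frontmatter to parse.
assert!("---\nname: X\n# Body"[3..].find("---").is_none());
}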
#[test]
fn test_anthropic_format_sets_source_dir() {
let dir = tempdir().unwrap();
let skills_dir = dir.path().join("skills");
let skill_dir = skills_dir.join("pptx");
fs::create_dir_all(&skill_dir).unwrap();
fs::write(skill_dir.join("SKILL.md"), "# PPTX Skill").unwrap();
// Add companion files
fs::write(skill_dir.join("ooxml.md"), "# Reference").unwrap();
let bundles = Bundle::list_from_anthropic_path(dir.path().to_path_buf()).unwrap();
assert_eq!(bundles.len(), 1);
let skill_file = &bundles[0].skills[0];
assert!(skill_file.source_dir.is_some());
assert_eq!(skill_file.source_dir.as_ref().unwrap(), &skill_dir);
}
#[test]
fn test_flat_scan_has_no_source_dir() {
let dir = tempdir().unwrap();
let bundle_dir = dir.path().join("my-bundle");
let skills_dir = bundle_dir.join("skills");
fs::create_dir_all(&skills_dir).unwrap();
fs::write(skills_dir.join("simple.md"), "# Simple").unwrap();
let bundle = Bundle::from_path(bundle_dir).unwrap();
assert_eq!(bundle.skills.len(), 1);
assert!(bundle.skills[0].source_dir.is_none());
}
}
</file>
<file path="src/main.rs">
mod bundle;
mod config;
mod discover;
mod install;
mod manifest;
mod setup;
mod source;
mod target;
use anyhow::Result;
use clap::{CommandFactory, Parser, Subcommand};
use clap_complete::{generate, Shell};
use colored::Colorize;
use std::io;
use std::path::PathBuf;
use crate::bundle::SkillType;
use crate::config::{Config, SourceConfig};
use crate::install::{install_bundle, install_bundle_from_source, install_from_source};
use crate::setup::run_setup_wizard;
use crate::target::Tool;
#[derive(Parser)]
#[command(name = "skm")]
#[command(about = "Manage AI coding tool skills for Claude, OpenCode, and Cursor")]
#[command(version)]
#[command(args_conflicts_with_subcommands = true)]
struct Cli {
#[command(subcommand)]
command: Option<Commands>,
/// Bundle name to install (when no subcommand given)
#[arg(value_name = "BUNDLE")]
bundle: Option<String>,
/// Install to OpenCode instead of Claude
#[arg(short = 'o', long = "opencode", global = true)]
opencode: bool,
/// Install to Cursor instead of Claude
#[arg(short = 'c', long = "cursor", global = true)]
cursor: bool,
/// Install globally (tool-specific location)
#[arg(short = 'g', long = "global", global = true)]
global: bool,
/// Target directory (default: current directory)
#[arg(short = 't', long = "to", global = true)]
target: Option<PathBuf>,
/// Filter: only install skills
#[arg(long = "skills")]
skills_only: bool,
/// Filter: only install agents
#[arg(long = "agents")]
agents_only: bool,
/// Filter: only install commands
#[arg(long = "commands")]
commands_only: bool,
/// Filter: only install rules
#[arg(long = "rules")]
rules_only: bool,
}
#[derive(Subcommand)]
enum Commands {
/// Install a bundle (alias for `skm <bundle>`)
Add {
/// Bundle name to install
bundle: String,
},
/// Browse available bundles interactively
List,
/// Manage skill sources (interactive if no subcommand)
Sources {
#[command(subcommand)]
action: Option<SourcesAction>,
},
/// Show installed skills in current directory
Here {
/// Filter by tool (claude, opencode, cursor)
#[arg(long)]
tool: Option<String>,
/// Interactively remove skills
#[arg(long)]
remove: bool,
/// Remove all installed skills
#[arg(long)]
clean: bool,
/// Skip confirmation prompts
#[arg(short = 'y', long)]
yes: bool,
},
/// Update git sources to latest
Update,
/// Generate shell completions
Completions {
/// Shell to generate completions for
#[arg(value_enum)]
shell: Shell,
},
/// Convert between rule and command formats
Convert {
/// Source file to convert
source: PathBuf,
/// Convert to rule format (default: convert to command format)
#[arg(long)]
to_rule: bool,
/// Output file (default: stdout)
#[arg(long)]
output: Option<PathBuf>,
},
}
#[derive(Subcommand)]
enum SourcesAction {
/// List configured sources
List,
/// Add a source (local path or git URL)
Add {
/// Optional name for the source (e.g., "fg")
#[arg(value_name = "NAME")]
name: Option<String>,
/// Path or URL to add
path: String,
},
/// Remove a source
Remove {
/// Path, URL, or name to remove
path: String,
},
}
fn main() -> Result<()> {
let cli = Cli::parse();
// Check if this is first run (no config file) and we're not doing a specific subcommand
let config = if !Config::exists()? && cli.command.is_none() && cli.bundle.is_none() {
// First run - show setup wizard
run_setup_wizard()?
} else {
// Load existing config or use defaults
Config::load_or_default()?
};
// Determine target tool
let tool = if cli.cursor {
Tool::Cursor
} else if cli.opencode {
Tool::OpenCode
} else {
Tool::Claude
};
// Determine target directory
let target_dir = if cli.global {
tool.global_target()
} else if let Some(t) = cli.target {
t
} else {
std::env::current_dir()?
};
// Determine which types to install
let types = if cli.skills_only || cli.agents_only || cli.commands_only || cli.rules_only {
let mut t = vec![];
if cli.skills_only {
t.push(SkillType::Skill);
}
if cli.agents_only {
t.push(SkillType::Agent);
}
if cli.commands_only {
t.push(SkillType::Command);
}
if cli.rules_only {
t.push(SkillType::Rule);
}
t
} else {
vec![
SkillType::Skill,
SkillType::Agent,
SkillType::Command,
SkillType::Rule,
]
};
match cli.command {
Some(Commands::Add {
bundle: bundle_name,
}) => {
// `skm add <bundle>` is an alias for `skm <bundle>`
do_install(&config, &bundle_name, &tool, &target_dir, &types)?;
}
Some(Commands::List) => {
browse_bundles(&config)?;
}
Some(Commands::Sources { action }) => match action {
Some(SourcesAction::List) => {
sources_list(&config)?;
}
Some(SourcesAction::Add { name, path }) => {
sources_add(name, path)?;
}
Some(SourcesAction::Remove { path }) => {
sources_remove(path)?;
}
None => {
// Interactive sources management
sources_interactive()?;
}
},
Some(Commands::Here {
tool: filter_tool,
remove,
clean,
yes,
}) => {
if remove {
interactive_remove(&target_dir, filter_tool.as_deref())?;
} else if clean {
clean_all_skills(&target_dir, filter_tool.as_deref(), yes)?;
} else {
show_installed_skills(&target_dir, filter_tool.as_deref())?;
}
}
Some(Commands::Update) => {
update_sources(&config)?;
}
Some(Commands::Completions { shell }) => {
generate_completions(shell);
}
Some(Commands::Convert {
source,
to_rule,
output,
}) => {
convert_format(&source, to_rule, output.as_ref())?;
}
None => {
// No subcommand - either list bundles or install a bundle
if let Some(bundle_name) = cli.bundle {
// Install the specified bundle
do_install(&config, &bundle_name, &tool, &target_dir, &types)?;
} else {
// List available bundles
list_bundles(&config)?;
}
}
}
Ok(())
}
fn browse_bundles(config: &Config) -> Result<()> {
use crate::bundle::Bundle;
use dialoguer::{theme::ColorfulTheme, FuzzySelect};
let sources = config.sources();
if sources.is_empty() {
println!("{}", "No sources configured.".yellow());
println!("Add a source with: skm sources add <path>");
return Ok(());
}
// Collect all bundles with their source info
let mut all_bundles: Vec<(String, Bundle)> = Vec::new();
for source in &sources {
match source.list_bundles() {
Ok(bundles) => {
for bundle in bundles {
all_bundles.push((source.display_path(), bundle));
}
}
Err(e) => {
eprintln!(
" {} {} - {}",
"Warning:".yellow(),
source.display_path(),
e
);
}
}
}
if all_bundles.is_empty() {
println!("{}", "No bundles found in configured sources.".yellow());
return Ok(());
}
loop {
println!();
println!("{}", "Available Bundles (type to search)".bold());
println!();
// Build display items with searchable content
// Format: "name | description | author | counts | source"
let items: Vec<String> = all_bundles
.iter()
.map(|(source, bundle)| {
let desc = bundle
.meta
.description
.as_ref()
.map(|d| {
// Truncate long descriptions by characters so multi-byte
// UTF-8 text is never split mid-character (byte slicing at
// a non-boundary index would panic)
if d.chars().count() > 40 {
let head: String = d.chars().take(37).collect();
format!("{}...", head)
} else {
d.clone()
}
})
.unwrap_or_default();
let author = bundle
.meta
.author
.as_ref()
.map(|a| format!("by {}", a))
.unwrap_or_default();
let counts = format!(
"{}s {}a {}c",
bundle.skills.len(),
bundle.agents.len(),
bundle.commands.len()
);
// Include searchable content (name, author, description, skill names)
let search_hint = bundle.search_string();
if desc.is_empty() {
format!(
"{:<20} {:<15} {} {} [{}]",
bundle.name,
author.dimmed(),
counts.dimmed(),
format!("({})", source).dimmed(),
search_hint.dimmed()
)
} else {
format!(
"{:<20} {} {:<15} {} {} [{}]",
bundle.name,
desc.dimmed(),
author.dimmed(),
counts.dimmed(),
format!("({})", source).dimmed(),
search_hint.dimmed()
)
}
})
.collect();
let sel = FuzzySelect::with_theme(&ColorfulTheme::default())
.with_prompt("Select a bundle (type to filter, Esc to quit)")
.items(&items)
.default(0)
.highlight_matches(true)
.interact_opt()?;
match sel {
Some(idx) if idx < all_bundles.len() => {
let (_, bundle) = &all_bundles[idx];
show_bundle_details(bundle)?;
}
_ => break,
}
}
Ok(())
}
fn show_bundle_details(bundle: &crate::bundle::Bundle) -> Result<()> {
use dialoguer::{theme::ColorfulTheme, Select};
loop {
println!();
println!("{} {}", "Bundle:".bold(), bundle.name.cyan());
println!();
let mut items: Vec<String> = Vec::new();
let mut file_paths: Vec<Option<std::path::PathBuf>> = Vec::new();
for (section, files) in [
("skills", &bundle.skills),
("agents", &bundle.agents),
("commands", &bundle.commands),
] {
if !files.is_empty() {
items.push(format!(
"── {}/{} ──",
section,
format!(" ({} files)", files.len()).dimmed()
));
file_paths.push(None); // section header
for file in files {
let preview = get_file_preview(&file.path);
items.push(format!(" {} {}", file.name, preview.dimmed()));
file_paths.push(Some(file.path.clone()));
}
}
}
items.push("← Back".to_string());
file_paths.push(None);
let sel = Select::with_theme(&ColorfulTheme::default())
.with_prompt("Select to view contents")
.items(&items)
.default(0)
.interact()?;
if sel >= items.len() - 1 {
break;
}
let path = match &file_paths[sel] {
Some(p) => p,
None => continue, // section header
};
// Show file contents
println!();
println!("{}", "─".repeat(60).dimmed());
if let Ok(content) = std::fs::read_to_string(path) {
for line in content.lines().take(40) {
println!("{}", line);
}
let line_count = content.lines().count();
if line_count > 40 {
println!(
"{}",
format!("... ({} more lines)", line_count - 40).dimmed()
);
}
}
println!("{}", "─".repeat(60).dimmed());
println!();
}
Ok(())
}
fn get_file_preview(path: &std::path::PathBuf) -> String {
if let Ok(content) = std::fs::read_to_string(path) {
content
.lines()
.filter(|line| !line.trim().is_empty())
.filter(|line| !line.starts_with("---"))
.filter(|line| !line.contains(':') || line.starts_with('#'))
.take(1)
.map(|line| {
let trimmed = line.trim_start_matches('#').trim();
// Truncate by characters to avoid panicking on a UTF-8
// boundary when slicing long preview lines
if trimmed.chars().count() > 50 {
let head: String = trimmed.chars().take(47).collect();
format!("- {}...", head)
} else {
format!("- {}", trimmed)
}
})
.next()
.unwrap_or_default()
} else {
String::new()
}
}
fn sources_interactive() -> Result<()> {
use dialoguer::{theme::ColorfulTheme, Input, Select};
loop {
let config = Config::load_or_default()?;
let sources = config.source_configs();
println!();
println!("{}", "Skill Sources".bold());
println!();
if sources.is_empty() {
println!(" {}", "(no sources configured)".dimmed());
} else {
for (i, source) in sources.iter().enumerate() {
let type_label = match source {
SourceConfig::Local { .. } => "local",
SourceConfig::Git { .. } => "git",
};
let priority = format!("[{}]", i + 1).dimmed();
let name_display = source
.name()
.map(|n| format!(" ({})", n.yellow()))
.unwrap_or_default();
println!(
" {} {}{} {}",
priority,
source.display().cyan(),
name_display,
format!("({})", type_label).dimmed()
);
}
}
println!();
let mut options = vec!["Add source", "Remove source"];
if sources.len() > 1 {
options.push("Change priority");
}
options.push("Done");
let selection = Select::with_theme(&ColorfulTheme::default())
.with_prompt("What would you like to do?")
.items(&options)
.default(options.len() - 1)
.interact()?;
match options[selection] {
"Add source" => {
let path: String = Input::with_theme(&ColorfulTheme::default())
.with_prompt("Enter path or git URL")
.interact_text()?;
sources_add(None, path)?;
}
"Remove source" => {
if sources.is_empty() {
println!("{}", "No sources to remove.".yellow());
continue;
}
let source_names: Vec<&str> = sources.iter().map(|s| s.display()).collect();
let sel = Select::with_theme(&ColorfulTheme::default())
.with_prompt("Select source to remove")
.items(&source_names)
.interact()?;
sources_remove(source_names[sel].to_string())?;
}
"Change priority" => {
if sources.len() < 2 {
continue;
}
let source_names: Vec<String> = sources
.iter()
.enumerate()
.map(|(i, s)| format!("[{}] {}", i + 1, s.display()))
.collect();
let sel = Select::with_theme(&ColorfulTheme::default())
.with_prompt("Select source to move")
.items(&source_names)
.interact()?;
let positions: Vec<String> = (1..=sources.len())
.map(|i| format!("Position {}", i))
.collect();
let new_pos = Select::with_theme(&ColorfulTheme::default())
.with_prompt("Move to position")
.items(&positions)
.default(sel)
.interact()?;
if sel != new_pos {
let mut config = Config::load_or_default()?;
config.move_source(sel, new_pos)?;
config.save()?;
println!("{}", "Priority updated.".green());
}
}
"Done" => break,
_ => break,
}
}
// Auto-update git sources on exit
let config = Config::load_or_default()?;
let git_sources = config.git_sources();
if !git_sources.is_empty() {
println!();
println!("{}", "Updating git sources...".dimmed());
for source in git_sources {
match source.pull() {
Ok(true) => {
println!(" {} {}", "Updated:".green(), source.url());
}
Ok(false) => {} // Already up to date, stay quiet
Err(e) => {
println!(" {} {}: {}", "Error:".red(), source.url(), e);
}
}
}
}
Ok(())
}
fn sources_list(config: &Config) -> Result<()> {
println!("{}", "Configured sources:".bold());
println!();
let sources = config.source_configs();
if sources.is_empty() {
println!(" {}", "(none)".dimmed());
println!();
println!("Add a source with: skm sources add <path>");
} else {
for (i, source) in sources.iter().enumerate() {
let type_label = match source {
SourceConfig::Local { .. } => "local",
SourceConfig::Git { .. } => "git",
};
let name_display = source
.name()
.map(|n| format!("[{}] ", n.cyan()))
.unwrap_or_default();
println!(
" {}. {}{} {}",
i + 1,
name_display,
source.display(),
format!("({})", type_label).dimmed()
);
}
}
println!();
Ok(())
}
fn sources_add(name: Option<String>, path: String) -> Result<()> {
let mut config = Config::load_or_default()?;
// Determine if this is a git URL or local path
let source =
if path.starts_with("https://") || path.starts_with("git@") || path.ends_with(".git") {
SourceConfig::Git {
url: path.clone(),
name,
}
} else {
// Normalize local path
let normalized = if path.starts_with("~/") || path.starts_with('/') {
path.clone()
} else {
// Make relative path absolute
let cwd = std::env::current_dir()?;
cwd.join(&path).to_string_lossy().to_string()
};
SourceConfig::Local {
path: normalized,
name,
}
};
// Check if path exists for local sources
if let SourceConfig::Local { ref path, .. } = source {
let expanded = if path.starts_with("~/") {
let home = std::env::var("HOME")?;
PathBuf::from(format!("{}/{}", home, &path[2..]))
} else {
PathBuf::from(path)
};
if !expanded.exists() {
println!("{} Path does not exist: {}", "Warning:".yellow(), path);
}
}
config.add_source(source);
config.save()?;
println!("{} {}", "Added source:".green(), path);
Ok(())
}
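#[cfg(test)]
mod source_kind_tests {
// Illustrative, std-only restatement of the heuristic sources_add uses
// to classify input: an https:// or git@ prefix or a .git suffix means
// a git source; anything else is treated as a local path.
fn is_git_like(path: &str) -> bool {
path.starts_with("https://") || path.starts_with("git@") || path.ends_with(".git")
}
#[test]
fn classifies_git_and_local_paths() {
assert!(is_git_like("https://github.com/user/skills"));
assert!(is_git_like("git@github.com:user/skills.git"));
assert!(!is_git_like("~/my-skills"));
assert!(!is_git_like("./relative/path"));
}
}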
fn sources_remove(path: String) -> Result<()> {
let mut config = Config::load_or_default()?;
if config.remove_source(&path) {
config.save()?;
println!("{} {}", "Removed source:".green(), path);
} else {
println!("{} Source not found: {}", "Error:".red(), path);
}
Ok(())
}
fn update_sources(config: &Config) -> Result<()> {
let git_sources = config.git_sources();
if git_sources.is_empty() {
println!("{}", "No git sources configured.".yellow());
println!("Add a git source with: skm sources add <git-url>");
return Ok(());
}
println!("{}", "Updating git sources...".bold());
println!();
let mut updated = 0;
let mut already_current = 0;
let mut errors = 0;
for source in git_sources {
print!(" {} {}... ", "Updating".cyan(), source.url());
// Flush stdout so the progress text appears before the blocking pull
use std::io::Write;
let _ = io::stdout().flush();
match source.pull() {
Ok(true) => {
println!("{}", "updated".green());
updated += 1;
}
Ok(false) => {
println!("{}", "already up to date".dimmed());
already_current += 1;
}
Err(e) => {
println!("{}: {}", "error".red(), e);
errors += 1;
}
}
}
println!();
if updated > 0 {
println!(" {} {} source(s) updated", "✓".green(), updated);
}
if already_current > 0 {
println!(
" {} {} source(s) already up to date",
"•".dimmed(),
already_current
);
}
if errors > 0 {
println!(" {} {} source(s) failed", "✗".red(), errors);
}
Ok(())
}
fn list_bundles(config: &Config) -> Result<()> {
let sources = config.sources();
if sources.is_empty() {
println!("{}", "No sources configured.".yellow());
println!("Add a source with: skm sources add <path>");
return Ok(());
}
println!("{}", "Available bundles:".bold());
println!();
let mut found_any = false;
let mut had_errors = false;
for source in sources {
// Handle source errors gracefully - warn and continue
let bundles = match source.list_bundles() {
Ok(b) => b,
Err(e) => {
eprintln!(
" {} {} - {}",
"Warning:".yellow(),
source.display_path(),
e
);
had_errors = true;
continue;
}
};
if bundles.is_empty() {
continue;
}
found_any = true;
println!(" {} {}", "Source:".dimmed(), source.display_path());
for bundle in bundles {
// Show description on same line if available
if let Some(desc) = &bundle.meta.description {
println!(" {}/ - {}", bundle.name.cyan(), desc.dimmed());
} else {
println!(" {}/", bundle.name.cyan());
}
let skill_count = bundle.skills.len();
let agent_count = bundle.agents.len();
let command_count = bundle.commands.len();
let rule_count = bundle.rules.len();
if skill_count > 0 {
println!(" {:<10} {} files", "skills/", skill_count);
}
if agent_count > 0 {
println!(" {:<10} {} files", "agents/", agent_count);
}
if command_count > 0 {
println!(" {:<10} {} files", "commands/", command_count);
}
if rule_count > 0 {
println!(" {:<10} {} files", "rules/", rule_count);
}
}
println!();
}
if !found_any {
if had_errors {
println!(" {}", "(no accessible bundles found)".dimmed());
} else {
println!(" {}", "(no bundles found in configured sources)".dimmed());
}
println!();
}
Ok(())
}
fn show_installed_skills(base: &PathBuf, filter_tool: Option<&str>) -> Result<()> {
use crate::discover::{
discover_installed, filter_by_tool, group_by_tool, InstalledTool, SkillType,
};
let mut skills = discover_installed(base)?;
// Apply filter if provided
if let Some(tool_filter) = filter_tool {
skills = filter_by_tool(skills, tool_filter);
}
if skills.is_empty() {
if filter_tool.is_some() {
println!(
"{}",
"No installed skills found for the specified tool.".yellow()
);
} else {
println!("{}", "No installed skills found.".yellow());
}
println!();
println!("Install skills with: skm <bundle>");
return Ok(());
}
println!("{}", "Installed skills:".bold());
println!();
let grouped = group_by_tool(&skills);
// Define tool order
let tool_order = [
InstalledTool::Claude,
InstalledTool::OpenCode,
InstalledTool::Cursor,
];
for tool in &tool_order {
if let Some(type_map) = grouped.get(tool) {
println!(" {}", tool.display_name().cyan().bold());
// Define type order
let type_order = [SkillType::Skill, SkillType::Agent, SkillType::Command];
for skill_type in &type_order {
if let Some(skill_list) = type_map.get(skill_type) {
if !skill_list.is_empty() {
println!(" {}/", skill_type.plural().dimmed());
for skill in skill_list {
let display_name = if let Some(ref bundle) = skill.bundle {
format!("{}/{}", bundle, skill.name)
} else {
skill.name.clone()
};
println!(" {}", display_name);
}
}
}
}
println!();
}
}
// Show summary
let total = skills.len();
let by_tool: std::collections::HashMap<_, usize> =
skills
.iter()
.fold(std::collections::HashMap::new(), |mut acc, s| {
*acc.entry(s.tool).or_insert(0) += 1;
acc
});
let summary_parts: Vec<String> = tool_order
.iter()
.filter_map(|t| {
by_tool
.get(t)
.map(|count| format!("{} {}", count, t.display_name()))
})
.collect();
println!(
" {} {} total ({})",
"".dimmed(),
total,
summary_parts.join(", ")
);
println!();
Ok(())
}
fn generate_completions(shell: Shell) {
let mut cmd = Cli::command();
generate(shell, &mut cmd, "skm", &mut io::stdout());
}
fn interactive_remove(base: &PathBuf, filter_tool: Option<&str>) -> Result<()> {
use crate::discover::{discover_installed, filter_by_tool, group_same_skills, remove_skill};
use dialoguer::{theme::ColorfulTheme, Confirm, MultiSelect};
let mut skills = discover_installed(base)?;
if let Some(tool_filter) = filter_tool {
skills = filter_by_tool(skills, tool_filter);
}
if skills.is_empty() {
println!("{}", "No installed skills found.".yellow());
return Ok(());
}
// Group skills by unique ID (same skill across multiple tools)
let grouped = group_same_skills(&skills);
let mut skill_ids: Vec<_> = grouped.keys().cloned().collect();
skill_ids.sort();
// Build display items for multi-select
let display_items: Vec<String> = skill_ids
.iter()
.map(|id| {
let instances = grouped.get(id).unwrap();
let tools: Vec<&str> = instances.iter().map(|s| s.tool.display_name()).collect();
format!("{} ({})", id, tools.join(", "))
})
.collect();
// Show multi-select
println!("{}", "Select skills to remove:".bold());
println!("{}", "(space to toggle, enter to confirm)".dimmed());
println!();
let selections = MultiSelect::with_theme(&ColorfulTheme::default())
.items(&display_items)
.interact()?;
if selections.is_empty() {
println!("{}", "No skills selected.".yellow());
return Ok(());
}
// Collect skills to remove
let mut to_remove: Vec<&crate::discover::InstalledSkill> = Vec::new();
for idx in &selections {
let id = &skill_ids[*idx];
if let Some(instances) = grouped.get(id) {
to_remove.extend(instances.iter().copied());
}
}
// Build summary
let summary: Vec<String> = selections
.iter()
.map(|idx| {
let id = &skill_ids[*idx];
let instances = grouped.get(id).unwrap();
let tools: Vec<&str> = instances.iter().map(|s| s.tool.display_name()).collect();
format!(" {} from {}", id.cyan(), tools.join(", "))
})
.collect();
println!();
println!("{}", "Will remove:".bold());
for line in &summary {
println!("{}", line);
}
println!();
// Confirm
let confirm = Confirm::with_theme(&ColorfulTheme::default())
.with_prompt(format!("Remove {} skill(s)?", to_remove.len()))
.default(false)
.interact()?;
if !confirm {
println!("{}", "Cancelled.".yellow());
return Ok(());
}
// Remove the skills
let mut removed = 0;
let mut errors = 0;
for skill in to_remove {
match remove_skill(skill) {
Ok(()) => {
removed += 1;
}
Err(e) => {
eprintln!(
"{}: Failed to remove {}: {}",
"Error".red(),
skill.path.display(),
e
);
errors += 1;
}
}
}
println!();
if removed > 0 {
        println!("{} Removed {} skill(s)", "✓".green(), removed);
}
if errors > 0 {
        println!("{} Failed to remove {} skill(s)", "✗".red(), errors);
}
Ok(())
}
fn clean_all_skills(base: &PathBuf, filter_tool: Option<&str>, skip_confirm: bool) -> Result<()> {
use crate::discover::{discover_installed, filter_by_tool, remove_skill};
use dialoguer::{theme::ColorfulTheme, Confirm};
let mut skills = discover_installed(base)?;
if let Some(tool_filter) = filter_tool {
skills = filter_by_tool(skills, tool_filter);
}
if skills.is_empty() {
println!("{}", "No installed skills found.".yellow());
return Ok(());
}
let count = skills.len();
let tool_desc = filter_tool
.map(|t| format!(" for {}", t))
.unwrap_or_default();
println!("{} {} skill(s){}", "Found".bold(), count, tool_desc);
println!();
// Confirm unless --yes flag
let confirmed = if skip_confirm {
true
} else {
Confirm::with_theme(&ColorfulTheme::default())
.with_prompt(format!("Remove all {} skill(s)?", count))
.default(false)
.interact()?
};
if !confirmed {
println!("{}", "Cancelled.".yellow());
return Ok(());
}
// Remove all skills
let mut removed = 0;
let mut errors = 0;
for skill in &skills {
match remove_skill(skill) {
Ok(()) => {
removed += 1;
}
Err(e) => {
eprintln!(
"{}: Failed to remove {}: {}",
"Error".red(),
skill.path.display(),
e
);
errors += 1;
}
}
}
println!();
if removed > 0 {
        println!("{} Removed {} skill(s)", "✓".green(), removed);
}
if errors > 0 {
        println!("{} Failed to remove {} skill(s)", "✗".red(), errors);
}
Ok(())
}
fn convert_format(source: &PathBuf, to_rule: bool, output: Option<&PathBuf>) -> Result<()> {
use std::fs;
use std::io::Write;
if !source.exists() {
println!(
"{} Source file does not exist: {}",
"Error:".red(),
source.display()
);
return Ok(());
}
let content = fs::read_to_string(source)?;
let converted = if to_rule {
convert_to_rule(&content, source)
} else {
convert_to_command(&content)
};
match output {
Some(output_path) => {
let mut file = fs::File::create(output_path)?;
file.write_all(converted.as_bytes())?;
println!(
"{} Converted to {}",
"Success:".green(),
output_path.display()
);
}
None => {
println!("{}", converted);
}
}
Ok(())
}
fn convert_to_rule(content: &str, source_path: &PathBuf) -> String {
let lines: Vec<&str> = content.lines().collect();
// Check if already has frontmatter
if lines.first() == Some(&"---") {
// Already has frontmatter, assume it is already in rule format
return content.to_string();
}
// Extract title from filename or first heading
let name = source_path
.file_stem()
.and_then(|s| s.to_str())
.unwrap_or("converted-rule");
let title = if let Some(first_line) = lines.first() {
if first_line.starts_with("#") {
first_line.trim_start_matches("#").trim().to_string()
} else {
name.to_string()
}
} else {
name.to_string()
};
// Create rule frontmatter
let mut result = String::new();
result.push_str("---\n");
result.push_str(&format!("description: \"{}\"\n", title));
result.push_str("alwaysApply: false\n");
result.push_str("---\n");
result.push('\n');
result.push_str(content);
result
}
fn convert_to_command(content: &str) -> String {
let lines: Vec<&str> = content.lines().collect();
// Check if it has frontmatter
if lines.first() == Some(&"---") {
// Find the end of frontmatter
let mut in_frontmatter = false;
let mut end_idx = 0;
for (i, line) in lines.iter().enumerate() {
if *line == "---" {
if in_frontmatter {
end_idx = i + 1;
break;
}
in_frontmatter = true;
}
}
// Skip frontmatter and return the rest
if end_idx > 0 && end_idx < lines.len() {
lines[end_idx..].join("\n").trim_start().to_string()
} else {
content.to_string()
}
} else {
// No frontmatter, return as-is
content.to_string()
}
}
/// Parse a bundle reference that may be source-scoped.
/// "fg/synapse-docs" → (Some("fg"), Some("synapse-docs"))
/// "fg" → (None, Some("fg")) - could be source name OR bundle name
fn parse_bundle_ref(input: &str) -> (Option<&str>, Option<&str>) {
if let Some((source, bundle)) = input.split_once('/') {
(Some(source), Some(bundle))
} else {
(None, Some(input))
}
}
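// Hedged example: a small test module pinning down the documented behavior of
// parse_bundle_ref above. The assertions mirror the doc-comment cases exactly;
// no new API is assumed.
#[cfg(test)]
mod bundle_ref_tests {
    use super::*;

    #[test]
    fn test_parse_bundle_ref_scoped_and_bare() {
        // Explicit source/bundle reference splits on the first '/'
        assert_eq!(
            parse_bundle_ref("fg/synapse-docs"),
            (Some("fg"), Some("synapse-docs"))
        );
        // A bare name stays ambiguous: source name or bundle name
        assert_eq!(parse_bundle_ref("fg"), (None, Some("fg")));
    }
}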
/// Dispatch install command with support for source-scoped references
fn do_install(
config: &Config,
bundle_ref: &str,
tool: &Tool,
target_dir: &PathBuf,
types: &[SkillType],
) -> Result<()> {
let (source_name, bundle_name) = parse_bundle_ref(bundle_ref);
match (source_name, bundle_name) {
(Some(source_name), Some(bundle_name)) => {
// Explicit source/bundle: "fg/synapse-docs"
match config.find_source_by_name(source_name) {
Some((source, _)) => {
install_bundle_from_source(source.as_ref(), bundle_name, tool, target_dir, types)
}
None => {
                    anyhow::bail!(
                        "Source '{}' not found. Add it with: skm sources add {} <path>",
                        source_name,
                        source_name
                    );
}
}
}
(None, Some(name)) => {
// Just a name - could be a source name or bundle name
// First check if it's a named source
if let Some((source, _)) = config.find_source_by_name(name) {
// Install all bundles from this source
return install_from_source(source.as_ref(), tool, target_dir, types);
}
// Otherwise, search all sources for a bundle with this name
install_bundle(config, name, tool, target_dir, types)
}
(None, None) => {
anyhow::bail!("No bundle specified");
}
(Some(_), None) => {
anyhow::bail!("Invalid bundle reference");
}
}
}
#[cfg(test)]
mod convert_tests {
use super::*;
#[test]
fn test_convert_to_rule_no_frontmatter() {
let content = "# Test Rule\n\nSome content here";
let path = PathBuf::from("test-rule.md");
let result = convert_to_rule(content, &path);
assert!(result.starts_with("---\n"));
assert!(result.contains("description: \"Test Rule\""));
assert!(result.contains("alwaysApply: false"));
assert!(result.contains("# Test Rule"));
}
#[test]
fn test_convert_to_rule_with_existing_frontmatter() {
let content = "---\ndescription: existing\n---\n# Content";
let path = PathBuf::from("test.md");
let result = convert_to_rule(content, &path);
// Should return unchanged since it already has frontmatter
assert_eq!(result, content);
}
#[test]
fn test_convert_to_rule_uses_filename_when_no_heading() {
let content = "Some content without a heading";
let path = PathBuf::from("my-custom-rule.md");
let result = convert_to_rule(content, &path);
assert!(result.contains("description: \"my-custom-rule\""));
}
#[test]
fn test_convert_to_command_strips_frontmatter() {
let content =
"---\ndescription: test\nalwaysApply: false\n---\n# Rule Content\n\nBody here";
let result = convert_to_command(content);
assert!(!result.contains("---"));
assert!(!result.contains("description:"));
assert!(result.starts_with("# Rule Content"));
assert!(result.contains("Body here"));
}
#[test]
fn test_convert_to_command_no_frontmatter() {
let content = "# Simple Content\n\nNo frontmatter here";
let result = convert_to_command(content);
// Should return unchanged
assert_eq!(result, content);
}
#[test]
fn test_convert_to_command_only_frontmatter() {
let content = "---\ndescription: test\n---";
let result = convert_to_command(content);
// Edge case: only frontmatter, no content after
assert_eq!(result, content);
}
}
</file>
<file path="src/target.rs">
use anyhow::Result;
use std::fs;
use std::io::Write;
use std::path::{Path, PathBuf};
use crate::bundle::{SkillFile, SkillType};
/// Target AI coding tool
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum Tool {
Claude,
OpenCode,
Cursor,
}
/// Detected agent file format based on tools field syntax
#[derive(Debug, PartialEq)]
enum AgentFormat {
/// Claude format: `tools: Read, Grep, Glob` (PascalCase, comma-separated)
Claude,
/// OpenCode format: `tools:\n read: true` (lowercase, YAML object)
OpenCode,
/// No tools field found
Unknown,
}
impl Tool {
/// Get the global install target for this tool
pub fn global_target(&self) -> PathBuf {
let home = std::env::var_os("HOME")
.map(PathBuf::from)
.unwrap_or_else(|| PathBuf::from("."));
match self {
Tool::Claude => home,
Tool::OpenCode => home.join(".config/opencode"),
Tool::Cursor => {
eprintln!("Warning: Cursor doesn't support global config, using current directory");
std::env::current_dir().unwrap_or(home)
}
}
}
/// Get the name of this tool for display
pub fn name(&self) -> &'static str {
match self {
Tool::Claude => "Claude",
Tool::OpenCode => "OpenCode",
Tool::Cursor => "Cursor",
}
}
/// Write a skill file to the appropriate location for this tool
pub fn write_file(
&self,
target_dir: &PathBuf,
bundle_name: &str,
skill: &SkillFile,
) -> Result<PathBuf> {
match self {
Tool::Claude => self.write_claude(target_dir, bundle_name, skill),
Tool::OpenCode => self.write_opencode(target_dir, bundle_name, skill),
Tool::Cursor => self.write_cursor(target_dir, bundle_name, skill),
}
}
/// Get the destination info string for display
pub fn dest_info(&self, skill_type: SkillType, bundle_name: &str) -> String {
match self {
Tool::Claude => format!(".claude/{}/{}/", skill_type.dir_name(), bundle_name),
Tool::OpenCode => match skill_type {
SkillType::Skill => format!(".opencode/skill/{}-*/", bundle_name),
SkillType::Agent => ".opencode/agent/".to_string(),
SkillType::Command => ".opencode/command/".to_string(),
SkillType::Rule => format!(".opencode/rule/{}-*/", bundle_name),
},
Tool::Cursor => match skill_type {
SkillType::Skill => format!(".cursor/skills/{}-*/", bundle_name),
_ => format!(".cursor/rules/{}-*/", bundle_name),
},
}
}
// Claude: .claude/{type}/{bundle}/{name}.md
// Phase 1+4: detect agent format and reverse-transform if needed
fn write_claude(
&self,
target_dir: &PathBuf,
bundle_name: &str,
skill: &SkillFile,
) -> Result<PathBuf> {
let dest_dir = target_dir
.join(".claude")
.join(skill.skill_type.dir_name())
.join(bundle_name);
fs::create_dir_all(&dest_dir)?;
let dest_file = dest_dir.join(format!("{}.md", skill.name));
match skill.skill_type {
SkillType::Agent => {
match detect_agent_format(&skill.path)? {
AgentFormat::OpenCode => transform_agent_for_claude(&skill.path, &dest_file)?,
_ => { fs::copy(&skill.path, &dest_file)?; }
}
}
_ => { fs::copy(&skill.path, &dest_file)?; }
}
copy_companion_files(skill, &dest_dir)?;
Ok(dest_file)
}
// OpenCode:
// skills -> .opencode/skill/{bundle}-{name}/SKILL.md (with frontmatter)
// agents -> .opencode/agent/{bundle}-{name}.md
// commands -> .opencode/command/{bundle}-{name}.md
// Phase 4: detect agent format before transforming
fn write_opencode(
&self,
target_dir: &PathBuf,
bundle_name: &str,
skill: &SkillFile,
) -> Result<PathBuf> {
let combined_name = format!("{}-{}", bundle_name, skill.name);
match skill.skill_type {
SkillType::Skill => {
let dest_dir = target_dir.join(".opencode/skill").join(&combined_name);
fs::create_dir_all(&dest_dir)?;
let dest_file = dest_dir.join("SKILL.md");
transform_skill_file(&skill.path, &dest_file, &combined_name)?;
copy_companion_files(skill, &dest_dir)?;
Ok(dest_file)
}
SkillType::Rule => {
let dest_dir = target_dir.join(".opencode/rule").join(&combined_name);
fs::create_dir_all(&dest_dir)?;
let dest_file = dest_dir.join("RULE.md");
transform_skill_file(&skill.path, &dest_file, &combined_name)?;
copy_companion_files(skill, &dest_dir)?;
Ok(dest_file)
}
SkillType::Agent => {
// Flat file target — companion files not applicable
let dest_dir = target_dir.join(".opencode/agent");
fs::create_dir_all(&dest_dir)?;
let dest_file = dest_dir.join(format!("{}.md", combined_name));
match detect_agent_format(&skill.path)? {
AgentFormat::Claude => transform_agent_file(&skill.path, &dest_file)?,
_ => { fs::copy(&skill.path, &dest_file)?; }
}
Ok(dest_file)
}
SkillType::Command => {
// Flat file target — companion files not applicable
let dest_dir = target_dir.join(".opencode/command");
fs::create_dir_all(&dest_dir)?;
let dest_file = dest_dir.join(format!("{}.md", combined_name));
fs::copy(&skill.path, &dest_file)?;
Ok(dest_file)
}
}
}
// Cursor:
// skills -> .cursor/skills/{bundle}-{name}/SKILL.md
// agents/commands/rules -> .cursor/rules/{bundle}-{name}/RULE.md (folder-based)
// Phase 3: use transform_cursor_rule for non-skill types
fn write_cursor(
&self,
target_dir: &PathBuf,
bundle_name: &str,
skill: &SkillFile,
) -> Result<PathBuf> {
let combined_name = format!("{}-{}", bundle_name, skill.name);
match skill.skill_type {
SkillType::Skill => {
// Skills use .cursor/skills/ directory with SKILL.md
let dest_dir = target_dir.join(".cursor/skills").join(&combined_name);
fs::create_dir_all(&dest_dir)?;
let dest_file = dest_dir.join("SKILL.md");
transform_skill_file(&skill.path, &dest_file, &combined_name)?;
copy_companion_files(skill, &dest_dir)?;
Ok(dest_file)
}
_ => {
// Agents, commands, and rules use .cursor/rules/ with RULE.md
let dest_dir = target_dir.join(".cursor/rules").join(&combined_name);
fs::create_dir_all(&dest_dir)?;
let dest_file = dest_dir.join("RULE.md");
transform_cursor_rule(&skill.path, &dest_file, &combined_name)?;
copy_companion_files(skill, &dest_dir)?;
Ok(dest_file)
}
}
}
}
// ---------------------------------------------------------------------------
// Phase 4: Agent format detection
// ---------------------------------------------------------------------------
/// Detect whether an agent file uses Claude format (PascalCase comma string)
/// or OpenCode format (lowercase YAML object)
fn detect_agent_format(src: &PathBuf) -> Result<AgentFormat> {
let content = fs::read_to_string(src)?;
let lines: Vec<&str> = content.lines().collect();
let mut in_fm = false;
for line in &lines {
if *line == "---" {
if in_fm { break; }
in_fm = true;
continue;
}
        if in_fm && line.trim().starts_with("tools:") {
            let rest = line.trim().trim_start_matches("tools:").trim();
            if rest.is_empty() {
                // Bare "tools:" key: a YAML object follows on later lines
                return Ok(AgentFormat::OpenCode);
            }
            // Inline value: "tools: Read, Grep, ..." or a single "tools: Read"
            return Ok(AgentFormat::Claude);
        }
}
Ok(AgentFormat::Unknown) // No tools field
}
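// Hedged example: tests pinning the detection heuristic above. Assumes the
// same tempfile dev-dependency that the test module at the bottom of this
// file already uses.
#[cfg(test)]
mod detect_format_tests {
    use super::*;
    use tempfile::tempdir;

    #[test]
    fn test_detect_claude_inline_tools() {
        let dir = tempdir().unwrap();
        let path = dir.path().join("agent.md");
        fs::write(&path, "---\nname: a\ntools: Read, Grep\n---\nBody").unwrap();
        assert_eq!(detect_agent_format(&path).unwrap(), AgentFormat::Claude);
    }

    #[test]
    fn test_detect_opencode_yaml_tools() {
        let dir = tempdir().unwrap();
        let path = dir.path().join("agent.md");
        fs::write(&path, "---\nname: a\ntools:\n  read: true\n---\nBody").unwrap();
        assert_eq!(detect_agent_format(&path).unwrap(), AgentFormat::OpenCode);
    }
}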
// ---------------------------------------------------------------------------
// Phase 2: Skill file transformation with description injection
// ---------------------------------------------------------------------------
/// Transform a skill file to ensure it has proper frontmatter with name and description fields.
/// - Adds `name:` if missing
/// - Adds `description:` if missing (extracted from body content)
fn transform_skill_file(src: &PathBuf, dest: &PathBuf, skill_name: &str) -> Result<()> {
let content = fs::read_to_string(src)?;
let lines: Vec<&str> = content.lines().collect();
let output = if lines.first() == Some(&"---") {
// Has frontmatter - check what fields exist
let mut in_frontmatter = false;
let mut has_name = false;
let mut has_description = false;
let mut frontmatter_end = 0;
for (i, line) in lines.iter().enumerate() {
if *line == "---" {
if in_frontmatter {
frontmatter_end = i;
break;
}
in_frontmatter = true;
continue;
}
if in_frontmatter {
if line.starts_with("name:") { has_name = true; }
if line.starts_with("description:") { has_description = true; }
}
}
if has_name && has_description {
// Already has both required fields, use as-is
content
} else {
let mut result = String::new();
result.push_str("---\n");
if !has_name {
result.push_str(&format!("name: {}\n", skill_name));
}
// Copy existing frontmatter lines (between first --- and closing ---)
            // saturating_sub guards against unclosed frontmatter (frontmatter_end == 0)
            for line in lines.iter().skip(1).take(frontmatter_end.saturating_sub(1)) {
result.push_str(line);
result.push('\n');
}
if !has_description {
let desc = extract_description_from_body(&lines, frontmatter_end + 1);
result.push_str(&format!("description: \"{}\"\n", desc));
}
// Add closing --- and body
for line in lines.iter().skip(frontmatter_end) {
result.push_str(line);
result.push('\n');
}
result
}
} else {
// No frontmatter - add it with both name and description
let desc = extract_description_from_body(&lines, 0);
let mut result = String::new();
result.push_str("---\n");
result.push_str(&format!("name: {}\n", skill_name));
result.push_str(&format!("description: \"{}\"\n", desc));
result.push_str("---\n");
result.push_str(&content);
result
};
let mut file = fs::File::create(dest)?;
file.write_all(output.as_bytes())?;
Ok(())
}
/// Extract a description from the markdown body content.
/// Uses the first heading text or first non-empty paragraph.
fn extract_description_from_body(lines: &[&str], start_from: usize) -> String {
for line in lines.iter().skip(start_from) {
let trimmed = line.trim();
if trimmed.is_empty() || trimmed == "---" {
continue;
}
if trimmed.starts_with('#') {
// Use heading text as description
let text = trimmed.trim_start_matches('#').trim();
return truncate_description(text);
}
// Use first paragraph text
return truncate_description(trimmed);
}
"Skill instructions".to_string()
}
/// Truncate a description to 200 characters max.
/// Counts chars rather than bytes so multi-byte UTF-8 text cannot panic
/// from slicing in the middle of a character.
fn truncate_description(text: &str) -> String {
    if text.chars().count() <= 200 {
        text.to_string()
    } else {
        let head: String = text.chars().take(197).collect();
        format!("{}...", head)
    }
}
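// Hedged example: the truncation contract, pinned by a test. A 250-character
// ASCII input is cut to 197 characters plus "..." for exactly 200.
#[cfg(test)]
mod truncate_tests {
    use super::*;

    #[test]
    fn test_truncate_description_bounds() {
        // Short text passes through unchanged
        assert_eq!(truncate_description("short text"), "short text");
        // Long text is capped at exactly 200 characters with an ellipsis
        let long = "a".repeat(250);
        let out = truncate_description(&long);
        assert_eq!(out.len(), 200);
        assert!(out.ends_with("..."));
    }
}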
// ---------------------------------------------------------------------------
// Phase 1: Agent file transformation (Claude → OpenCode)
// ---------------------------------------------------------------------------
/// Transform an agent file for OpenCode format, converting tools from string to YAML object.
/// Phase 1: expanded tool name mapping with pass-through for unknown tools.
fn transform_agent_file(src: &PathBuf, dest: &PathBuf) -> Result<()> {
let content = fs::read_to_string(src)?;
let lines: Vec<&str> = content.lines().collect();
if lines.first() != Some(&"---") {
// No frontmatter, just copy as-is
fs::copy(src, dest)?;
return Ok(());
}
// Parse frontmatter and transform
let mut result = String::new();
let mut in_frontmatter = false;
let mut frontmatter_lines = Vec::new();
let mut body_lines = Vec::new();
let mut found_end = false;
for line in &lines {
if *line == "---" {
if in_frontmatter {
found_end = true;
in_frontmatter = false;
continue;
} else {
in_frontmatter = true;
continue;
}
}
if in_frontmatter && !found_end {
frontmatter_lines.push(*line);
} else {
body_lines.push(*line);
}
}
// Transform frontmatter
result.push_str("---\n");
let mut i = 0;
while i < frontmatter_lines.len() {
let line = frontmatter_lines[i];
        let trimmed = line.trim();
        let tools_value = trimmed.strip_prefix("tools:").map(str::trim);
        if let Some(tools_str) = tools_value.filter(|v| !v.is_empty()) {
            // Inline tools string (Claude format): convert to a YAML object.
            // Handles a single tool ("tools: Read") as well as comma lists.
            result.push_str("tools:\n");
            for tool in tools_str.split(',') {
                let opencode_tool = claude_to_opencode_tool(tool.trim());
                result.push_str(&format!("  {}: true\n", opencode_tool));
            }
        } else if trimmed.starts_with("color:") {
            // Drop the color field; OpenCode agents don't support it
            i += 1;
            continue;
} else {
// Keep other fields
result.push_str(line);
result.push('\n');
}
i += 1;
}
result.push_str("---\n");
// Add body
for line in body_lines {
result.push_str(line);
result.push('\n');
}
let mut file = fs::File::create(dest)?;
file.write_all(result.as_bytes())?;
Ok(())
}
/// Map a Claude tool name to its OpenCode equivalent.
/// Unknown tools pass through as lowercase instead of being dropped.
fn claude_to_opencode_tool(tool: &str) -> &str {
match tool {
// Direct equivalents (both directions)
"Read" | "read" => "read",
"Write" | "write" => "write",
"Edit" | "edit" => "edit",
"Grep" | "grep" => "grep",
"Glob" | "glob" => "glob",
"Bash" | "bash" => "bash",
"WebSearch" | "websearch" => "websearch",
"WebFetch" | "webfetch" => "webfetch",
"TodoWrite" | "todowrite" => "todowrite",
"TodoRead" | "todoread" => "todoread",
// Claude-specific → closest OpenCode equivalent
"LS" => "bash",
"MultiEdit" => "edit",
"Task" => "bash",
"NotebookEdit" => "edit",
"NotebookRead" => "read",
"AskUserQuestion" | "question" => "question",
"KillBash" | "BashOutput" => "bash",
// OpenCode-native tools (pass through)
"list" => "list",
"lsp" => "lsp",
"patch" => "patch",
"skill" => "skill",
// Unknown: pass through as-is (don't drop)
other => {
eprintln!("Warning: Unknown tool '{}', passing through as-is", other);
other
}
}
}
// ---------------------------------------------------------------------------
// Phase 1: Reverse agent transform (OpenCode → Claude)
// ---------------------------------------------------------------------------
/// Transform an agent file for Claude format.
/// Converts OpenCode YAML object tools back to Claude comma-separated PascalCase string.
fn transform_agent_for_claude(src: &PathBuf, dest: &PathBuf) -> Result<()> {
let content = fs::read_to_string(src)?;
let lines: Vec<&str> = content.lines().collect();
if lines.first() != Some(&"---") {
fs::copy(src, dest)?;
return Ok(());
}
// Parse frontmatter and body
let mut in_frontmatter = false;
let mut frontmatter_lines = Vec::new();
let mut body_lines = Vec::new();
let mut found_end = false;
for line in &lines {
if *line == "---" {
if in_frontmatter {
found_end = true;
in_frontmatter = false;
continue;
} else {
in_frontmatter = true;
continue;
}
}
if in_frontmatter && !found_end {
frontmatter_lines.push(*line);
} else {
body_lines.push(*line);
}
}
let mut result = String::new();
result.push_str("---\n");
let mut i = 0;
while i < frontmatter_lines.len() {
let line = frontmatter_lines[i];
if line.trim() == "tools:" {
// YAML object format — collect tool entries and convert to comma string
let mut tools = Vec::new();
i += 1;
            while i < frontmatter_lines.len() {
                let raw = frontmatter_lines[i];
                let inner = raw.trim();
                if inner.contains(": true") {
                    let tool_name = inner.split(':').next().unwrap_or("").trim();
                    let claude_tool = opencode_to_claude_tool(tool_name);
                    tools.push(claude_tool);
                    i += 1;
                } else if inner.contains(": false") {
                    // Skip disabled tools
                    i += 1;
                } else if inner.is_empty() || !raw.starts_with(' ') {
                    // An unindented line means the tools block has ended.
                    // Check the raw line here: `inner` is trimmed, so its
                    // indentation is already gone.
                    break;
                } else {
                    i += 1;
                }
}
if !tools.is_empty() {
result.push_str(&format!("tools: {}\n", tools.join(", ")));
}
continue; // don't increment i again
} else if line.trim().starts_with("color:") {
// Pass through color field (valid in Claude agents)
result.push_str(line);
result.push('\n');
} else {
result.push_str(line);
result.push('\n');
}
i += 1;
}
result.push_str("---\n");
for line in body_lines {
result.push_str(line);
result.push('\n');
}
let mut file = fs::File::create(dest)?;
file.write_all(result.as_bytes())?;
Ok(())
}
/// Map an OpenCode tool name to its Claude equivalent.
fn opencode_to_claude_tool(tool: &str) -> &str {
match tool {
"read" => "Read",
"write" => "Write",
"edit" => "Edit",
"grep" => "Grep",
"glob" => "Glob",
"bash" => "Bash",
"websearch" => "WebSearch",
"webfetch" => "WebFetch",
"todowrite" => "TodoWrite",
"todoread" => "TodoRead",
"question" => "AskUserQuestion",
"list" => "LS",
"lsp" => "lsp",
"patch" => "patch",
"skill" => "skill",
// Unknown: pass through as-is
other => other,
}
}
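// Hedged example: round-trip behavior of the two tool-name maps. Direct
// equivalents survive a round trip; Claude-specific tools like LS are lossy
// by design (LS -> bash -> Bash).
#[cfg(test)]
mod tool_map_tests {
    use super::*;

    #[test]
    fn test_tool_name_round_trip() {
        for name in ["Read", "Write", "Grep", "Glob", "Bash", "WebFetch"] {
            assert_eq!(opencode_to_claude_tool(claude_to_opencode_tool(name)), name);
        }
        // Lossy mapping: LS has no OpenCode equivalent
        assert_eq!(claude_to_opencode_tool("LS"), "bash");
        assert_eq!(opencode_to_claude_tool("bash"), "Bash");
    }
}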
// ---------------------------------------------------------------------------
// Phase 3: Cursor rule frontmatter enhancement
// ---------------------------------------------------------------------------
/// Transform a file into Cursor rule format with proper frontmatter.
/// Ensures description and alwaysApply fields are present so Cursor's
/// "Apply Intelligently" system can discover and use the rule.
fn transform_cursor_rule(src: &PathBuf, dest: &PathBuf, _skill_name: &str) -> Result<()> {
let content = fs::read_to_string(src)?;
let lines: Vec<&str> = content.lines().collect();
let output = if lines.first() == Some(&"---") {
// Has frontmatter — check what fields exist
let mut has_description = false;
let mut has_always_apply = false;
let mut in_fm = false;
let mut fm_end = 0;
for (i, line) in lines.iter().enumerate() {
if *line == "---" {
if in_fm { fm_end = i; break; }
in_fm = true;
continue;
}
if in_fm {
if line.starts_with("description:") { has_description = true; }
if line.starts_with("alwaysApply:") { has_always_apply = true; }
}
}
if has_description && has_always_apply {
content
} else {
let mut result = String::new();
result.push_str("---\n");
// Copy existing frontmatter lines
            // saturating_sub guards against unclosed frontmatter (fm_end == 0)
            for line in lines.iter().skip(1).take(fm_end.saturating_sub(1)) {
result.push_str(line);
result.push('\n');
}
if !has_description {
let desc = extract_description_from_body(&lines, fm_end + 1);
result.push_str(&format!("description: \"{}\"\n", desc));
}
if !has_always_apply {
result.push_str("alwaysApply: false\n");
}
// Closing --- and body
for line in lines.iter().skip(fm_end) {
result.push_str(line);
result.push('\n');
}
result
}
} else {
// No frontmatter — create with Cursor rule fields
let desc = extract_description_from_body(&lines, 0);
let mut result = String::new();
result.push_str("---\n");
result.push_str(&format!("description: \"{}\"\n", desc));
result.push_str("alwaysApply: false\n");
result.push_str("---\n");
result.push_str(&content);
result
};
let mut file = fs::File::create(dest)?;
file.write_all(output.as_bytes())?;
Ok(())
}
// ---------------------------------------------------------------------------
// Companion file copying
// ---------------------------------------------------------------------------
/// Copy companion files from source_dir to dest_dir, skipping the main .md file.
/// Companion files are scripts, templates, and other resources that live alongside
/// the main skill/rule markdown file in directory-based bundles.
fn copy_companion_files(skill: &SkillFile, dest_dir: &Path) -> Result<()> {
let source_dir = match &skill.source_dir {
Some(dir) => dir,
None => return Ok(()),
};
let main_file = &skill.path;
for entry in fs::read_dir(source_dir)? {
let entry = entry?;
let entry_path = entry.path();
// Skip the main markdown file
if entry_path == *main_file {
continue;
}
let file_name = match entry.file_name().into_string() {
Ok(name) => name,
Err(_) => continue,
};
// Skip meta.yaml (resources format metadata, not a companion)
if file_name == "meta.yaml" {
continue;
}
let dest_path = dest_dir.join(&file_name);
if entry_path.is_dir() {
copy_dir_recursive(&entry_path, &dest_path)?;
} else {
fs::copy(&entry_path, &dest_path)?;
}
}
Ok(())
}
/// Recursively copy a directory tree from src to dest.
fn copy_dir_recursive(src: &Path, dest: &Path) -> Result<()> {
fs::create_dir_all(dest)?;
for entry in fs::read_dir(src)? {
let entry = entry?;
let entry_path = entry.path();
let dest_path = dest.join(entry.file_name());
if entry_path.is_dir() {
copy_dir_recursive(&entry_path, &dest_path)?;
} else {
fs::copy(&entry_path, &dest_path)?;
}
}
Ok(())
}
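// Hedged example: a round-trip test for the recursive copy helper, using the
// same tempfile dev-dependency as the module below.
#[cfg(test)]
mod copy_tests {
    use super::*;
    use tempfile::tempdir;

    #[test]
    fn test_copy_dir_recursive_preserves_tree() {
        let dir = tempdir().unwrap();
        let src = dir.path().join("src");
        fs::create_dir_all(src.join("nested")).unwrap();
        fs::write(src.join("a.txt"), "top").unwrap();
        fs::write(src.join("nested/b.txt"), "deep").unwrap();
        let dest = dir.path().join("dest");
        copy_dir_recursive(&src, &dest).unwrap();
        // Both the top-level file and the nested file survive the copy
        assert_eq!(fs::read_to_string(dest.join("a.txt")).unwrap(), "top");
        assert_eq!(fs::read_to_string(dest.join("nested/b.txt")).unwrap(), "deep");
    }
}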
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------
#[cfg(test)]
mod tests {
use super::*;
use crate::bundle::{SkillFile, SkillType};
use tempfile::tempdir;
#[test]
fn test_tool_names() {
assert_eq!(Tool::Claude.name(), "Claude");
assert_eq!(Tool::OpenCode.name(), "OpenCode");
assert_eq!(Tool::Cursor.name(), "Cursor");
}
// ---- Phase 2: transform_skill_file with description injection ----
#[test]
fn test_transform_skill_no_frontmatter() {
let dir = tempdir().unwrap();
let src = dir.path().join("src.md");
let dest = dir.path().join("dest.md");
fs::write(&src, "# My Skill\n\nContent here").unwrap();
transform_skill_file(&src, &dest, "test-skill").unwrap();
let result = fs::read_to_string(&dest).unwrap();
assert!(result.contains("name: test-skill"));
assert!(result.contains("description: \"My Skill\""));
assert!(result.contains("# My Skill"));
}
#[test]
fn test_transform_skill_with_frontmatter_no_name() {
let dir = tempdir().unwrap();
let src = dir.path().join("src.md");
let dest = dir.path().join("dest.md");
fs::write(&src, "---\ndescription: test\n---\n# My Skill").unwrap();
transform_skill_file(&src, &dest, "test-skill").unwrap();
let result = fs::read_to_string(&dest).unwrap();
assert!(result.contains("name: test-skill"));
assert!(result.contains("description: test"));
}
#[test]
fn test_transform_skill_with_name_and_description() {
let dir = tempdir().unwrap();
let src = dir.path().join("src.md");
let dest = dir.path().join("dest.md");
fs::write(&src, "---\nname: existing-name\ndescription: existing desc\n---\n# My Skill").unwrap();
transform_skill_file(&src, &dest, "test-skill").unwrap();
let result = fs::read_to_string(&dest).unwrap();
assert!(result.contains("name: existing-name"));
assert!(!result.contains("name: test-skill"));
assert!(result.contains("description: existing desc"));
}
#[test]
fn test_transform_skill_with_name_no_description() {
let dir = tempdir().unwrap();
let src = dir.path().join("src.md");
let dest = dir.path().join("dest.md");
fs::write(&src, "---\nname: my-skill\n---\n# Great Skill\n\nDoes stuff").unwrap();
transform_skill_file(&src, &dest, "test-skill").unwrap();
let result = fs::read_to_string(&dest).unwrap();
assert!(result.contains("name: my-skill"));
assert!(result.contains("description: \"Great Skill\""));
}
#[test]
fn test_transform_skill_empty_body_fallback_description() {
let dir = tempdir().unwrap();
let src = dir.path().join("src.md");
let dest = dir.path().join("dest.md");
fs::write(&src, "").unwrap();
transform_skill_file(&src, &dest, "test-skill").unwrap();
let result = fs::read_to_string(&dest).unwrap();
assert!(result.contains("name: test-skill"));
assert!(result.contains("description: \"Skill instructions\""));
}
#[test]
fn test_transform_skill_paragraph_as_description() {
let dir = tempdir().unwrap();
let src = dir.path().join("src.md");
let dest = dir.path().join("dest.md");
fs::write(&src, "This is a paragraph description of the skill.\n\nMore content.").unwrap();
transform_skill_file(&src, &dest, "test-skill").unwrap();
let result = fs::read_to_string(&dest).unwrap();
assert!(result.contains("description: \"This is a paragraph description of the skill.\""));
}
#[test]
fn test_extract_description_truncation() {
let long_text = "A".repeat(250);
let result = truncate_description(&long_text);
assert_eq!(result.len(), 200);
assert!(result.ends_with("..."));
}
// ---- Phase 1: Tool name mapping (Claude → OpenCode) ----
#[test]
fn test_transform_agent_file_tools_conversion() {
let temp_dir = tempdir().unwrap();
let src_path = temp_dir.path().join("source.md");
let dest_path = temp_dir.path().join("dest.md");
let src_content = r#"---
name: test-agent
description: Test agent
tools: Read, Grep, Glob, LS
model: sonnet
color: yellow
---
This is the agent content.
"#;
fs::write(&src_path, src_content).unwrap();
transform_agent_file(&src_path, &dest_path).unwrap();
let result = fs::read_to_string(&dest_path).unwrap();
assert!(result.contains("name: test-agent"));
assert!(result.contains("description: Test agent"));
assert!(result.contains("tools:"));
assert!(result.contains(" read: true"));
assert!(result.contains(" grep: true"));
assert!(result.contains(" glob: true"));
assert!(result.contains(" bash: true"));
assert!(result.contains("model: sonnet"));
assert!(!result.contains("color: yellow"));
assert!(result.contains("This is the agent content."));
}
#[test]
fn test_transform_agent_expanded_tools() {
let temp_dir = tempdir().unwrap();
let src_path = temp_dir.path().join("source.md");
let dest_path = temp_dir.path().join("dest.md");
let src_content = "---\nname: full-agent\ntools: Write, Edit, Bash, Task, AskUserQuestion, MultiEdit, NotebookRead\n---\nContent\n";
fs::write(&src_path, src_content).unwrap();
transform_agent_file(&src_path, &dest_path).unwrap();
let result = fs::read_to_string(&dest_path).unwrap();
assert!(result.contains(" write: true"));
assert!(result.contains(" edit: true"));
assert!(result.contains(" bash: true"));
assert!(result.contains(" question: true"));
assert!(result.contains(" read: true")); // NotebookRead -> read
// Task -> bash, MultiEdit -> edit (already present)
}
#[test]
fn test_transform_agent_unknown_tool_passthrough() {
let temp_dir = tempdir().unwrap();
let src_path = temp_dir.path().join("source.md");
let dest_path = temp_dir.path().join("dest.md");
let src_content = "---\nname: mcp-agent\ntools: Read, CustomMCP, Grep\n---\nContent\n";
fs::write(&src_path, src_content).unwrap();
transform_agent_file(&src_path, &dest_path).unwrap();
let result = fs::read_to_string(&dest_path).unwrap();
assert!(result.contains(" read: true"));
assert!(result.contains(" CustomMCP: true")); // passed through, not dropped
assert!(result.contains(" grep: true"));
}
// ---- Phase 1: Reverse transform (OpenCode → Claude) ----
#[test]
fn test_transform_agent_for_claude() {
let temp_dir = tempdir().unwrap();
let src_path = temp_dir.path().join("source.md");
let dest_path = temp_dir.path().join("dest.md");
let src_content = "---\nname: oc-agent\ndescription: An OpenCode agent\ntools:\n read: true\n write: true\n grep: true\nmodel: sonnet\n---\nAgent body.\n";
fs::write(&src_path, src_content).unwrap();
transform_agent_for_claude(&src_path, &dest_path).unwrap();
let result = fs::read_to_string(&dest_path).unwrap();
assert!(result.contains("tools: Read, Write, Grep"));
assert!(result.contains("name: oc-agent"));
assert!(result.contains("model: sonnet"));
assert!(result.contains("Agent body."));
}
#[test]
fn test_transform_agent_for_claude_skips_disabled() {
let temp_dir = tempdir().unwrap();
let src_path = temp_dir.path().join("source.md");
let dest_path = temp_dir.path().join("dest.md");
let src_content = "---\ntools:\n read: true\n write: false\n bash: true\n---\nBody\n";
fs::write(&src_path, src_content).unwrap();
transform_agent_for_claude(&src_path, &dest_path).unwrap();
let result = fs::read_to_string(&dest_path).unwrap();
assert!(result.contains("tools: Read, Bash"));
assert!(!result.contains("Write"));
}
// ---- Phase 4: Format detection ----
#[test]
fn test_detect_agent_format_claude() {
let temp_dir = tempdir().unwrap();
let src = temp_dir.path().join("agent.md");
fs::write(&src, "---\ntools: Read, Grep, Glob\n---\nContent").unwrap();
assert_eq!(detect_agent_format(&src).unwrap(), AgentFormat::Claude);
}
#[test]
fn test_detect_agent_format_opencode() {
let temp_dir = tempdir().unwrap();
let src = temp_dir.path().join("agent.md");
fs::write(&src, "---\ntools:\n read: true\n grep: true\n---\nContent").unwrap();
assert_eq!(detect_agent_format(&src).unwrap(), AgentFormat::OpenCode);
}
#[test]
fn test_detect_agent_format_unknown() {
let temp_dir = tempdir().unwrap();
let src = temp_dir.path().join("agent.md");
fs::write(&src, "---\nname: no-tools\ndescription: test\n---\nContent").unwrap();
assert_eq!(detect_agent_format(&src).unwrap(), AgentFormat::Unknown);
}
#[test]
fn test_detect_agent_format_no_frontmatter() {
let temp_dir = tempdir().unwrap();
let src = temp_dir.path().join("agent.md");
fs::write(&src, "# Just a markdown file\nNo frontmatter.").unwrap();
assert_eq!(detect_agent_format(&src).unwrap(), AgentFormat::Unknown);
}
// ---- Phase 4: Write with auto-detection ----
#[test]
fn test_write_claude_from_opencode_agent() {
let temp_dir = tempdir().unwrap();
let target_dir = temp_dir.path().to_path_buf();
// Source is OpenCode format
let src_content = "---\nname: oc-agent\ntools:\n read: true\n write: true\n---\nBody\n";
let src_path = temp_dir.path().join("source.md");
fs::write(&src_path, src_content).unwrap();
let skill = SkillFile {
name: "oc-agent".to_string(),
path: src_path,
skill_type: SkillType::Agent,
source_dir: None,
};
let result = Tool::Claude.write_file(&target_dir, "bundle", &skill).unwrap();
let content = fs::read_to_string(&result).unwrap();
// Should have been reverse-transformed to Claude format
assert!(content.contains("tools: Read, Write"));
assert!(!content.contains(" read: true"));
}
#[test]
fn test_write_opencode_from_claude_agent() {
let temp_dir = tempdir().unwrap();
let target_dir = temp_dir.path().to_path_buf();
// Source is Claude format
let src_content = "---\nname: cl-agent\ntools: Read, Grep\n---\nBody\n";
let src_path = temp_dir.path().join("source.md");
fs::write(&src_path, src_content).unwrap();
let skill = SkillFile {
name: "cl-agent".to_string(),
path: src_path,
skill_type: SkillType::Agent,
source_dir: None,
};
let result = Tool::OpenCode.write_file(&target_dir, "bundle", &skill).unwrap();
let content = fs::read_to_string(&result).unwrap();
// Should have been forward-transformed to OpenCode format
assert!(content.contains(" read: true"));
assert!(content.contains(" grep: true"));
assert!(!content.contains("tools: Read"));
}
#[test]
fn test_write_opencode_from_opencode_agent_no_double_transform() {
let temp_dir = tempdir().unwrap();
let target_dir = temp_dir.path().to_path_buf();
// Source is already OpenCode format
let src_content = "---\nname: oc-agent\ntools:\n read: true\n grep: true\n---\nBody\n";
let src_path = temp_dir.path().join("source.md");
fs::write(&src_path, src_content).unwrap();
let skill = SkillFile {
name: "oc-agent".to_string(),
path: src_path,
skill_type: SkillType::Agent,
source_dir: None,
};
let result = Tool::OpenCode.write_file(&target_dir, "bundle", &skill).unwrap();
let content = fs::read_to_string(&result).unwrap();
// Should be copied as-is (no transform needed)
assert!(content.contains("tools:"));
assert!(content.contains(" read: true"));
assert!(content.contains(" grep: true"));
}
// ---- Integration: write_file for skills ----
#[test]
fn test_write_opencode_skill() {
let temp_dir = tempdir().unwrap();
let target_dir = temp_dir.path().to_path_buf();
let src_content = "# My Skill\n\nContent here";
let src_path = temp_dir.path().join("source.md");
fs::write(&src_path, src_content).unwrap();
let skill = SkillFile {
name: "my-skill".to_string(),
path: src_path,
skill_type: SkillType::Skill,
source_dir: None,
};
let result = Tool::OpenCode.write_file(&target_dir, "test-bundle", &skill).unwrap();
let expected_path = target_dir.join(".opencode/skill/test-bundle-my-skill/SKILL.md");
assert_eq!(result, expected_path);
assert!(expected_path.exists());
let content = fs::read_to_string(&expected_path).unwrap();
assert!(content.contains("name: test-bundle-my-skill"));
assert!(content.contains("description: \"My Skill\""));
assert!(content.contains("# My Skill"));
}
#[test]
fn test_write_cursor_skill() {
let temp_dir = tempdir().unwrap();
let target_dir = temp_dir.path().to_path_buf();
let src_content = "# My Skill\n\nContent here";
let src_path = temp_dir.path().join("source.md");
fs::write(&src_path, src_content).unwrap();
let skill = SkillFile {
name: "my-skill".to_string(),
path: src_path,
skill_type: SkillType::Skill,
source_dir: None,
};
let result = Tool::Cursor.write_file(&target_dir, "test-bundle", &skill).unwrap();
let expected_path = target_dir.join(".cursor/skills/test-bundle-my-skill/SKILL.md");
assert_eq!(result, expected_path);
assert!(expected_path.exists());
let content = fs::read_to_string(&expected_path).unwrap();
assert!(content.contains("name: test-bundle-my-skill"));
assert!(content.contains("description: \"My Skill\""));
assert!(content.contains("# My Skill"));
}
// ---- Phase 3: Cursor rule frontmatter ----
#[test]
fn test_write_cursor_rule() {
let temp_dir = tempdir().unwrap();
let target_dir = temp_dir.path().to_path_buf();
let src_content = "# My Rule\n\nContent here";
let src_path = temp_dir.path().join("source.md");
fs::write(&src_path, src_content).unwrap();
let skill = SkillFile {
name: "my-rule".to_string(),
path: src_path,
skill_type: SkillType::Rule,
source_dir: None,
};
let result = Tool::Cursor.write_file(&target_dir, "test-bundle", &skill).unwrap();
let expected_path = target_dir.join(".cursor/rules/test-bundle-my-rule/RULE.md");
assert_eq!(result, expected_path);
assert!(expected_path.exists());
let content = fs::read_to_string(&expected_path).unwrap();
assert!(content.contains("description: \"My Rule\""));
assert!(content.contains("alwaysApply: false"));
assert!(content.contains("# My Rule"));
}
#[test]
fn test_cursor_rule_with_existing_description() {
let temp_dir = tempdir().unwrap();
let src = temp_dir.path().join("src.md");
let dest = temp_dir.path().join("dest.md");
fs::write(&src, "---\ndescription: Existing desc\n---\n# Rule Content").unwrap();
transform_cursor_rule(&src, &dest, "test-rule").unwrap();
let result = fs::read_to_string(&dest).unwrap();
assert!(result.contains("description: Existing desc"));
assert!(result.contains("alwaysApply: false"));
// Should NOT have a second description
assert_eq!(result.matches("description:").count(), 1);
}
#[test]
fn test_cursor_rule_with_both_fields() {
let temp_dir = tempdir().unwrap();
let src = temp_dir.path().join("src.md");
let dest = temp_dir.path().join("dest.md");
fs::write(&src, "---\ndescription: Complete rule\nalwaysApply: true\n---\n# Content").unwrap();
transform_cursor_rule(&src, &dest, "test-rule").unwrap();
let result = fs::read_to_string(&dest).unwrap();
// Should be unchanged since both fields exist
assert!(result.contains("description: Complete rule"));
assert!(result.contains("alwaysApply: true"));
}
#[test]
fn test_cursor_rule_agent_gets_rule_frontmatter() {
let temp_dir = tempdir().unwrap();
let target_dir = temp_dir.path().to_path_buf();
// An agent file installed to Cursor goes to .cursor/rules/
let src_content = "---\nname: my-agent\ntools: Read, Grep\n---\nAgent instructions.";
let src_path = temp_dir.path().join("source.md");
fs::write(&src_path, src_content).unwrap();
let skill = SkillFile {
name: "my-agent".to_string(),
path: src_path,
skill_type: SkillType::Agent,
source_dir: None,
};
let result = Tool::Cursor.write_file(&target_dir, "tb", &skill).unwrap();
let content = fs::read_to_string(&result).unwrap();
// Should have Cursor rule frontmatter added
assert!(content.contains("alwaysApply: false"));
assert!(content.contains("description:"));
}
// ---- Integration: write_file for agents ----
#[test]
fn test_write_opencode_agent() {
let temp_dir = tempdir().unwrap();
let target_dir = temp_dir.path().to_path_buf();
let src_content = r#"---
name: test-agent
tools: Read, Grep
---
Agent content here.
"#;
let src_path = temp_dir.path().join("source.md");
fs::write(&src_path, src_content).unwrap();
let skill = SkillFile {
name: "test-agent".to_string(),
path: src_path,
skill_type: SkillType::Agent,
source_dir: None,
};
let result = Tool::OpenCode.write_file(&target_dir, "test-bundle", &skill).unwrap();
let expected_path = target_dir.join(".opencode/agent/test-bundle-test-agent.md");
assert_eq!(result, expected_path);
assert!(expected_path.exists());
let content = fs::read_to_string(&expected_path).unwrap();
assert!(content.contains("tools:"));
assert!(content.contains(" read: true"));
assert!(content.contains(" grep: true"));
assert!(content.contains("Agent content here."));
}
// ---- Companion file copying ----
#[test]
fn test_companion_files_copied_claude() {
let temp_dir = tempdir().unwrap();
let target_dir = temp_dir.path().join("target");
fs::create_dir_all(&target_dir).unwrap();
// Set up a source directory with SKILL.md and companion files
let source_dir = temp_dir.path().join("source/skills/pptx");
fs::create_dir_all(&source_dir).unwrap();
let skill_md = source_dir.join("SKILL.md");
fs::write(&skill_md, "# PPTX Skill\n\nCreates presentations.").unwrap();
// Companion files
fs::write(source_dir.join("ooxml.md"), "# OOXML Reference").unwrap();
let scripts_dir = source_dir.join("scripts");
fs::create_dir_all(&scripts_dir).unwrap();
fs::write(scripts_dir.join("build.sh"), "#!/bin/bash\necho hello").unwrap();
// Nested subdir in scripts
let nested = scripts_dir.join("lib");
fs::create_dir_all(&nested).unwrap();
fs::write(nested.join("helper.py"), "print('hi')").unwrap();
let skill = SkillFile {
name: "pptx".to_string(),
path: skill_md,
skill_type: SkillType::Skill,
source_dir: Some(source_dir),
};
Tool::Claude.write_file(&target_dir, "my-bundle", &skill).unwrap();
let dest_dir = target_dir.join(".claude/skills/my-bundle");
// Main file should exist
assert!(dest_dir.join("pptx.md").exists());
// Companion .md file
assert!(dest_dir.join("ooxml.md").exists());
assert_eq!(
fs::read_to_string(dest_dir.join("ooxml.md")).unwrap(),
"# OOXML Reference"
);
// Script file in subdirectory
assert!(dest_dir.join("scripts/build.sh").exists());
// Nested file
assert!(dest_dir.join("scripts/lib/helper.py").exists());
}
#[test]
fn test_companion_files_copied_opencode_skill() {
let temp_dir = tempdir().unwrap();
let target_dir = temp_dir.path().join("target");
fs::create_dir_all(&target_dir).unwrap();
let source_dir = temp_dir.path().join("source/skills/pptx");
fs::create_dir_all(&source_dir).unwrap();
let skill_md = source_dir.join("SKILL.md");
fs::write(&skill_md, "# PPTX Skill").unwrap();
fs::write(source_dir.join("template.pptx"), "binary content").unwrap();
let skill = SkillFile {
name: "pptx".to_string(),
path: skill_md,
skill_type: SkillType::Skill,
source_dir: Some(source_dir),
};
Tool::OpenCode.write_file(&target_dir, "bundle", &skill).unwrap();
let dest_dir = target_dir.join(".opencode/skill/bundle-pptx");
assert!(dest_dir.join("SKILL.md").exists());
assert!(dest_dir.join("template.pptx").exists());
}
#[test]
fn test_companion_files_copied_cursor_skill() {
let temp_dir = tempdir().unwrap();
let target_dir = temp_dir.path().join("target");
fs::create_dir_all(&target_dir).unwrap();
let source_dir = temp_dir.path().join("source/skills/pptx");
fs::create_dir_all(&source_dir).unwrap();
let skill_md = source_dir.join("SKILL.md");
fs::write(&skill_md, "# PPTX Skill").unwrap();
fs::write(source_dir.join("reference.md"), "# Ref").unwrap();
let skill = SkillFile {
name: "pptx".to_string(),
path: skill_md,
skill_type: SkillType::Skill,
source_dir: Some(source_dir),
};
Tool::Cursor.write_file(&target_dir, "bundle", &skill).unwrap();
let dest_dir = target_dir.join(".cursor/skills/bundle-pptx");
assert!(dest_dir.join("SKILL.md").exists());
assert!(dest_dir.join("reference.md").exists());
}
#[test]
fn test_companion_files_skips_meta_yaml() {
let temp_dir = tempdir().unwrap();
let target_dir = temp_dir.path().join("target");
fs::create_dir_all(&target_dir).unwrap();
let source_dir = temp_dir.path().join("source/skills/pptx");
fs::create_dir_all(&source_dir).unwrap();
let skill_md = source_dir.join("SKILL.md");
fs::write(&skill_md, "# PPTX Skill").unwrap();
fs::write(source_dir.join("meta.yaml"), "name: pptx\nauthor: test").unwrap();
fs::write(source_dir.join("helper.py"), "print('hi')").unwrap();
let skill = SkillFile {
name: "pptx".to_string(),
path: skill_md,
skill_type: SkillType::Skill,
source_dir: Some(source_dir),
};
Tool::Claude.write_file(&target_dir, "bundle", &skill).unwrap();
let dest_dir = target_dir.join(".claude/skills/bundle");
assert!(dest_dir.join("pptx.md").exists());
assert!(dest_dir.join("helper.py").exists());
// meta.yaml should NOT be copied
assert!(!dest_dir.join("meta.yaml").exists());
}
#[test]
fn test_no_companion_files_when_source_dir_none() {
let temp_dir = tempdir().unwrap();
let target_dir = temp_dir.path().to_path_buf();
let src_path = temp_dir.path().join("source.md");
fs::write(&src_path, "# Simple Skill").unwrap();
let skill = SkillFile {
name: "simple".to_string(),
path: src_path,
skill_type: SkillType::Skill,
source_dir: None,
};
// Should succeed without errors even though source_dir is None
let result = Tool::Claude.write_file(&target_dir, "bundle", &skill).unwrap();
assert!(result.exists());
}
}
</file>
<file path="Cargo.toml">
[package]
name = "skill-manager"
version = "0.5.0"
edition = "2021"
description = "Manage AI coding tool skills for Claude, OpenCode, and Cursor"
license = "MIT"
repository = "https://github.com/pyrex41/skill-manager"
keywords = ["cli", "ai", "claude", "cursor", "skills"]
categories = ["command-line-utilities", "development-tools"]
readme = "README.md"

[[bin]]
name = "skm"
path = "src/main.rs"

[dependencies]
clap = { version = "4", features = ["derive"] }
clap_complete = "4"
walkdir = "2"
anyhow = "1"
thiserror = "1"
directories = "5"
toml = "0.8"
serde = { version = "1", features = ["derive"] }
dialoguer = { version = "0.11", features = ["fuzzy-select"] }
colored = "2"
git2 = "0.19"
serde_yaml = "0.9"

[dev-dependencies]
tempfile = "3"
</file>
</files>