# Semantic Commands
A lightweight Rust framework for defining and executing semantic commands using text embeddings. Frontend‑agnostic and async‑first: route user phrases to your functions based on semantic similarity. Use it in CLI tools, services, web, or desktop applications.
## Features
- Define commands with multiple example phrases.
- Async executors with typed results (downcast at call site).
- Pluggable embeddings (implemented: OpenAI).
- Command recognition based on input similarity.
- Optional caching layer for embeddings (implemented: PostgreSQL, InMemoryCache).
- Context-aware execution.
- Easy integration with multiple interfaces (CLI, web, API, messaging bots).
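Recognition ranks commands by the similarity of their example-phrase embeddings to the input embedding. The sketch below is not the crate's internal code, just an illustration of the idea using cosine similarity over `f32` vectors:

```rust
// Cosine similarity between two embedding vectors.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 {
        0.0
    } else {
        dot / (norm_a * norm_b)
    }
}

// Pick the candidate command whose embedding is closest to the input.
fn best_match<'a>(input: &[f32], candidates: &'a [(&'a str, Vec<f32>)]) -> Option<&'a str> {
    candidates
        .iter()
        .max_by(|(_, a), (_, b)| {
            cosine_similarity(input, a)
                .partial_cmp(&cosine_similarity(input, b))
                .unwrap()
        })
        .map(|(name, _)| *name)
}
```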
## Usage
### Define Commands
```rust
// Constructor and closure shapes are illustrative; see the crate docs
// for the exact signatures.
let command = Command::new("greet", |_ctx| async move {
    // Async executor body: return whatever typed result you need.
    "Hello!".to_string()
});

// Example phrases that should trigger this command.
let inputs = vec!["hello", "hi there", "good morning"];
```
### Initialize SemanticCommands
```rust
// Constructor arguments are illustrative (an embedder plus a cache).
let mut semantic_commands = SemanticCommands::new(embedder, cache);
semantic_commands.add_command(command, inputs).await?;
```
### Execute a Command
```rust
let result = semantic_commands.execute("hi there").await?;
```

The result should then be downcast to whatever type your executor returned:

```rust
let greeting = result
    .downcast::<String>()
    .expect("executor returns a String");
println!("{greeting}");
```
## Caching Options
| Cache | Speed | Memory | Persistence | Use Case |
|---|---|---|---|---|
| `NoCache` | N/A | None | N/A | Testing, stateless |
| `InMemoryCache` | Fast | Unbounded | No | Services, bots |
| `PostgresCache` | Slow | DB-backed | Yes | Multi-instance |
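The idea behind the in-memory option can be illustrated with a minimal text-to-embedding map (the crate's `InMemoryCache` is backed by moka and adds LRU eviction, and the real cache API is async; this is just a sketch of the concept):

```rust
use std::collections::HashMap;

// Minimal embedding cache keyed by input text.
struct MemoryCache {
    entries: HashMap<String, Vec<f32>>,
}

impl MemoryCache {
    fn new() -> Self {
        Self { entries: HashMap::new() }
    }

    // Return the cached embedding, if any, so the embedder is only
    // called for texts that have not been seen before.
    fn get(&self, text: &str) -> Option<&[f32]> {
        self.entries.get(text).map(Vec::as_slice)
    }

    fn put(&mut self, text: &str, embedding: Vec<f32>) {
        self.entries.insert(text.to_string(), embedding);
    }
}
```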
## Cargo Features

- `openai` (default) - OpenAI embedding provider
- `in-memory-cache` (default) - Fast in-memory LRU cache based on moka
- `postgres` - PostgreSQL cache backend (implemented with sqlx)
- `full` - All features enabled
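For example, to opt out of the defaults and enable only what you need, select features in `Cargo.toml` (the crate name and version below are placeholders):

```toml
[dependencies]
semantic-commands = { version = "*", default-features = false, features = ["openai", "postgres"] }
```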
## Safety & Privacy
Using remote embedding providers (like OpenAI) sends input text to third‑party services. Do not embed secrets or private data you cannot share.
## Extensibility

You can implement:

- A custom `Embedder` (e.g. a local model)
- A custom `Cache`
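As a sketch of the first, a custom `Embedder` could wrap a local model. The trait shape below is illustrative (the crate's actual trait is async and its names and signatures may differ), and the "model" is a toy byte-frequency histogram, shown only to demonstrate the plumbing:

```rust
// Illustrative trait shape for a pluggable embedder.
trait Embedder {
    fn embed(&self, text: &str) -> Vec<f32>;
}

// A toy "local model": embeds text as a 256-bin byte histogram.
struct LocalEmbedder;

impl Embedder for LocalEmbedder {
    fn embed(&self, text: &str) -> Vec<f32> {
        let mut v = vec![0.0f32; 256];
        for b in text.bytes() {
            v[b as usize] += 1.0;
        }
        v
    }
}
```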