semantic-commands 0.1.0


Semantic Commands


A lightweight Rust framework for defining and executing semantic commands using text embeddings. Frontend‑agnostic and async‑first: route user phrases to your functions based on semantic similarity. Use it in CLI tools, services, web, or desktop applications.


Features

  • Define commands with multiple example phrases.
  • Async executors with typed results (downcast at the call site).
  • Pluggable embedding providers (implemented: OpenAI).
  • Command recognition based on input similarity.
  • Optional caching layer for embeddings (implemented: PostgresCache, InMemoryCache).
  • Context-aware execution.
  • Easy integration with multiple interfaces (CLI, web, API, messaging bots).
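The core routing idea can be sketched in a few lines of plain Rust. This is an illustration of the technique, not the crate's internal code: each command stores embeddings of its example phrases, and an input is routed to the command whose best example has the highest cosine similarity. The 2-d vectors stand in for real embedding-model output.

```rust
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if na == 0.0 || nb == 0.0 { 0.0 } else { dot / (na * nb) }
}

/// Pick the command whose best-matching example phrase is most similar
/// to the embedded input.
fn route<'a>(input: &[f32], commands: &'a [(&'a str, Vec<Vec<f32>>)]) -> Option<&'a str> {
    commands
        .iter()
        .map(|(name, examples)| {
            let best = examples
                .iter()
                .map(|e| cosine(input, e))
                .fold(f32::MIN, f32::max);
            (*name, best)
        })
        .max_by(|a, b| a.1.partial_cmp(&b.1).unwrap())
        .map(|(name, _)| name)
}

fn main() {
    // Toy 2-d "embeddings" instead of a real model's output.
    let commands = vec![
        ("get_date", vec![vec![1.0, 0.0]]),
        ("get_price", vec![vec![0.0, 1.0]]),
    ];
    let input = [0.9, 0.1]; // closest to get_date's example phrase
    println!("{:?}", route(&input, &commands)); // → Some("get_date")
}
```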

Usage

Define Commands

use std::sync::Arc;

// An executor is an async function that receives the shared context.
async fn get_date(_ctx: Arc<()>) -> String {
    "2025-11-05".to_string()
}

let command = Command {
    name: "get_date".to_string(),
    requires_confirmation: false,
    executor: async_executor(get_date),
};

// Example phrases whose embeddings are matched against user input.
let inputs = vec![
    Input::new("what's the date"),
];

Initialize SemanticCommands

let mut semantic_commands = SemanticCommands::new(
    OpenAIEmbedder, // OpenAIEmbedder, or implement your own.
    NoCache,        // PostgresCache | NoCache, or implement your own.
    AppContext      // your context, available inside command executors.
);
semantic_commands.add_command(command, inputs);

Execute a Command

let result = semantic_commands.execute("what is the date today?").await?;

The result can then be downcast to whatever type your executor returns:

println!("Date: {:?}", result.downcast::<anyhow::Result<String>>().unwrap().unwrap());
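The downcast itself is the standard `std::any` type-erasure pattern, independent of this crate. A minimal self-contained illustration:

```rust
use std::any::Any;

fn main() {
    // An executor's result is type-erased, e.g. as Box<dyn Any>.
    let result: Box<dyn Any> = Box::new("2025-11-05".to_string());

    // Downcast back to the concrete type the executor returned.
    match result.downcast::<String>() {
        Ok(date) => println!("Date: {date}"), // prints "Date: 2025-11-05"
        Err(_) => panic!("executor returned an unexpected type"),
    }
}
```

Downcasting to the wrong type does not panic on its own; `downcast` returns `Err` with the original box, so a mismatch can be handled gracefully.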

Caching Options

Cache           Speed   Memory          Persistence   Use Case
NoCache         N/A     None            N/A           Testing, stateless
InMemoryCache   Fast    Bounded (LRU)   No            Services, bots
PostgresCache   Slow    DB-backed       Yes           Multi-instance
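Conceptually, a cache only needs to map input text to a stored embedding so that repeated phrases skip the round trip to the embedding provider. The trait shape below is illustrative, not the crate's actual `Cache` trait:

```rust
use std::collections::HashMap;

/// Illustrative cache interface: look up an embedding by its source text.
trait EmbeddingCache {
    fn get(&self, text: &str) -> Option<&Vec<f32>>;
    fn put(&mut self, text: String, embedding: Vec<f32>);
}

struct SimpleCache {
    entries: HashMap<String, Vec<f32>>,
}

impl EmbeddingCache for SimpleCache {
    fn get(&self, text: &str) -> Option<&Vec<f32>> {
        self.entries.get(text)
    }
    fn put(&mut self, text: String, embedding: Vec<f32>) {
        self.entries.insert(text, embedding);
    }
}

fn main() {
    let mut cache = SimpleCache { entries: HashMap::new() };
    cache.put("what's the date".into(), vec![1.0, 0.0]);
    // A hit avoids a call to the embedding provider.
    assert!(cache.get("what's the date").is_some());
    assert!(cache.get("unseen input").is_none());
}
```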

Cargo Features

  • openai (default) - OpenAI embedding provider
  • in-memory-cache (default) - Fast in-memory LRU cache based on moka
  • postgres - PostgreSQL cache backend (implemented with sqlx)
  • full - All features enabled
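In a manifest, the default features can be kept or replaced; a typical entry might look like this (version pin illustrative):

```toml
[dependencies]
# Default features (openai + in-memory-cache) plus the Postgres cache backend:
semantic-commands = { version = "0.1.0", features = ["postgres"] }

# Or opt out of defaults and pick explicitly:
# semantic-commands = { version = "0.1.0", default-features = false, features = ["openai"] }
```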

Safety & Privacy

Using remote embedding providers (like OpenAI) sends input text to third‑party services. Do not embed secrets or private data you cannot share.


Extensibility

You can implement:

  • A custom Embedder (e.g. local model)
  • A custom Cache
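A custom embedder replaces the remote provider with your own model, which also keeps input text local (see Safety & Privacy above). The trait below is an illustrative synchronous stand-in (the crate's real `Embedder` is async); the byte-histogram "model" is a deterministic toy, where a real implementation would call a local embedding model:

```rust
/// Illustrative embedder interface, not the crate's exact trait.
trait Embedder {
    fn embed(&self, text: &str) -> Vec<f32>;
}

/// Toy local "model": a fixed-size bag-of-bytes histogram.
struct ByteHistogramEmbedder;

impl Embedder for ByteHistogramEmbedder {
    fn embed(&self, text: &str) -> Vec<f32> {
        let mut v = vec![0.0f32; 16];
        for b in text.bytes() {
            v[(b as usize) % 16] += 1.0;
        }
        v
    }
}

fn main() {
    let e = ByteHistogramEmbedder;
    let a = e.embed("hello");
    // Deterministic: the same text always maps to the same vector,
    // which is what makes caching embeddings by text safe.
    assert_eq!(a, e.embed("hello"));
    assert_eq!(a.len(), 16);
}
```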

License