Crate semantic_commands


§Semantic Commands


A lightweight Rust framework for defining and executing semantic commands using text embeddings. Frontend‑agnostic and async‑first: route user phrases to your functions based on semantic similarity. Use it in CLI tools, services, web, or desktop applications.


§Features

  • Define commands with multiple example phrases.
  • Async executors with typed results (downcast at the call site).
  • Pluggable embedders (implemented: OpenAI).
  • Command recognition based on input similarity.
  • Optional caching layer for embeddings (implemented: PostgreSQL, InMemoryCache).
  • Context-aware execution.
  • Easy integration with multiple interfaces (CLI, web, API, messaging bots).
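To make "command recognition based on input similarity" concrete, here is a self-contained sketch of the usual mechanism: each command's example phrases are embedded as vectors, and the input is routed to the command whose embedding is closest by cosine similarity. The names `cosine_similarity` and `best_match` are illustrative, not part of the `semantic_commands` API, and the 3-dimensional vectors stand in for real model output.

```rust
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm_a == 0.0 || norm_b == 0.0 {
        0.0
    } else {
        dot / (norm_a * norm_b)
    }
}

/// Pick the command whose example-phrase embedding is closest to the input.
fn best_match<'a>(input: &[f32], examples: &'a [(&'a str, Vec<f32>)]) -> Option<&'a str> {
    examples
        .iter()
        .max_by(|(_, a), (_, b)| {
            cosine_similarity(input, a)
                .partial_cmp(&cosine_similarity(input, b))
                .unwrap()
        })
        .map(|(name, _)| *name)
}

fn main() {
    // Toy embeddings standing in for real model output.
    let examples = vec![
        ("get_date", vec![1.0, 0.0, 0.0]),
        ("get_price", vec![0.0, 1.0, 0.0]),
    ];
    let input = vec![0.9, 0.1, 0.0]; // closest to the "get_date" example
    println!("{:?}", best_match(&input, &examples)); // prints Some("get_date")
}
```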

§Usage

Define Commands

use std::sync::Arc;
use semantic_commands::{async_executor, Command, Input};

async fn get_date(_ctx: Arc<()>) -> String {
	"2025-11-05".to_string()
}

let command = Command {
	name: "get_date".to_string(),
	requires_confirmation: false,
	executor: async_executor(get_date),
};
let inputs = vec![
	Input::new("what's the date"),
];

Initialize SemanticCommands

let mut semantic_commands = SemanticCommands::new(
	OpenAIEmbedder,  // or implement your own Embedder.
	NoCache,         // or PostgresCache, InMemoryCache, or implement your own Cache.
	AppContext,      // your own context type, available inside command executors.
);
semantic_commands.add_command(command, inputs);

Execute a Command

let result = semantic_commands.execute("what is today's date?").await?;

The result can then be downcast to whatever type your executor returns:

println!("Date: {:?}", result.downcast::<anyhow::Result<String>>().unwrap().unwrap());
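The downcast step works because the executor's return value is type-erased behind `std::any::Any`, and the caller recovers the concrete type it knows the executor produced. A self-contained sketch of that mechanism (the `erased_result` function here is a stand-in, not part of the crate; the exact type you downcast to depends on your executor):

```rust
use std::any::Any;

// Stand-in for a type-erased executor result.
fn erased_result() -> Box<dyn Any> {
    Box::new("2025-11-05".to_string())
}

fn main() {
    let result = erased_result();
    // downcast::<T>() returns Err(original box) when T is wrong, so a
    // mismatched type is caught at runtime instead of causing UB.
    match result.downcast::<String>() {
        Ok(date) => println!("Date: {date}"), // prints Date: 2025-11-05
        Err(_) => eprintln!("unexpected result type"),
    }
}
```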

§Caching Options

| Cache         | Speed | Memory    | Persistence | Use Case           |
|---------------|-------|-----------|-------------|--------------------|
| NoCache       | N/A   | None      | N/A         | Testing, stateless |
| InMemoryCache | Fast  | Unbounded | No          | Services, bots     |
| PostgresCache | Slow  | DB-backed | Yes         | Multi-instance     |
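The point of the caching layer is that embedding calls are slow (and, for remote providers, billed), so repeated inputs should be served from a local store. A minimal self-contained sketch of the idea; `TinyCache` is an illustration, not the crate's `Cache` trait:

```rust
use std::collections::HashMap;

// Toy embedding cache: look up by exact input text, compute on miss.
struct TinyCache {
    map: HashMap<String, Vec<f32>>,
    misses: usize,
}

impl TinyCache {
    fn new() -> Self {
        Self { map: HashMap::new(), misses: 0 }
    }

    /// Return the cached embedding, or compute and store it.
    fn get_or_embed(&mut self, text: &str, embed: impl Fn(&str) -> Vec<f32>) -> Vec<f32> {
        if let Some(v) = self.map.get(text) {
            return v.clone();
        }
        self.misses += 1;
        let v = embed(text);
        self.map.insert(text.to_string(), v.clone());
        v
    }
}

fn main() {
    let mut cache = TinyCache::new();
    let fake_embed = |t: &str| vec![t.len() as f32]; // stand-in for a real model
    cache.get_or_embed("what's the date", fake_embed);
    cache.get_or_embed("what's the date", fake_embed); // served from the cache
    println!("misses: {}", cache.misses); // prints misses: 1
}
```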

§Cargo Features

  • openai (default) - OpenAI embedding provider
  • in-memory-cache (default) - Fast in-memory LRU cache based on moka
  • postgres - PostgreSQL cache backend (implemented with sqlx)
  • full - All features enabled
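For example, selecting features in `Cargo.toml` (the version number below is illustrative):

```toml
[dependencies]
# Defaults enable "openai" and "in-memory-cache".
semantic_commands = "0.1"

# Or pick features explicitly, e.g. to use the PostgreSQL cache backend:
# semantic_commands = { version = "0.1", default-features = false, features = ["openai", "postgres"] }
```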

§Safety & Privacy

Using remote embedding providers (like OpenAI) sends input text to third‑party services. Do not embed secrets or private data you cannot share.


§Extensibility

You can implement:

  • A custom Embedder (e.g. local model)
  • A custom Cache
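As a hedged sketch of what a custom embedder might look like: the crate's actual `Embedder` trait signature is not shown on this page (and is likely async), so the trait below is a simplified stand-in defined locally to keep the example self-contained. The "local model" is a toy character-frequency vector, purely for illustration.

```rust
// Simplified stand-in for the crate's Embedder trait.
trait Embedder {
    fn embed(&self, text: &str) -> Vec<f32>;
}

/// A toy "local model": character-frequency vector over a-z.
struct CharFreqEmbedder;

impl Embedder for CharFreqEmbedder {
    fn embed(&self, text: &str) -> Vec<f32> {
        let mut v = vec![0.0f32; 26];
        for c in text.chars().filter(|c| c.is_ascii_lowercase()) {
            v[(c as u8 - b'a') as usize] += 1.0;
        }
        v
    }
}

fn main() {
    let embedder = CharFreqEmbedder;
    let v = embedder.embed("date");
    println!("{}", v.iter().sum::<f32>()); // 4 lowercase letters counted
}
```

A real implementation would call into a local model (e.g. an ONNX or llama.cpp sentence encoder) instead, keeping input text off third-party services.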

§License

Licensed under either of Apache License, Version 2.0 or MIT license at your option.
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in this crate by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.

Re-exports§

pub use embedders::openai::OpenAIEmbedder;
pub use cache::Cache;
pub use input::Input;

Modules§

cache
embedders
input

Structs§

Command
InMemoryCache
NoCache
SemanticCommands

Traits§

Embedder

Functions§

async_executor