pub struct IndexConfig {
    pub enabled: bool,
    pub search_enabled: bool,
    pub watch: bool,
    pub max_chunks: usize,
    pub score_threshold: f32,
    pub budget_ratio: f32,
    pub repo_map_tokens: usize,
    pub repo_map_ttl_secs: u64,
    pub mcp_enabled: bool,
    pub workspace_root: Option<PathBuf>,
    pub concurrency: usize,
    pub batch_size: usize,
    pub memory_batch_size: usize,
    pub max_file_bytes: usize,
    pub embed_provider: Option<ProviderName>,
    pub embed_concurrency: usize,
}
Code indexing and repo-map configuration, nested under [index] in TOML.
When enabled = true, the agent indexes source files into Qdrant for semantic
code search. The repo map is injected into the system prompt or served via
IndexMcpServer tool calls when mcp_enabled = true.
Example (TOML)
[index]
enabled = true
watch = false
max_chunks = 12
score_threshold = 0.25

Fields
enabled: bool
    Enable code indexing. Default: false.

search_enabled: bool
    Enable the semantic code search tool. Default: true (a no-op when enabled = false).
watch: bool
max_chunks: usize
score_threshold: f32
budget_ratio: f32
repo_map_tokens: usize
repo_map_ttl_secs: u64

mcp_enabled: bool
    Enable IndexMcpServer tools (symbol_definition, find_text_references, call_graph,
    module_summary). When true, static repo-map injection is skipped and the LLM uses
    on-demand tool calls instead.
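Switching from static repo-map injection to the on-demand MCP tools might look like this (a minimal sketch using only the keys documented on this page):

```toml
[index]
enabled = true
# Skip static repo-map injection; expose symbol_definition, find_text_references,
# call_graph, and module_summary as on-demand tool calls instead.
mcp_enabled = true
```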
workspace_root: Option<PathBuf>
    Root directory to index. When None, falls back to the current working directory
    at startup. Relative paths are resolved against the process working directory.
concurrency: usize
    Number of files to process concurrently during initial indexing. Default: 4.

batch_size: usize
    Maximum number of new chunks batched into a single Qdrant upsert per file.
    Default: 32.
memory_batch_size: usize
    Number of files to process per memory batch during initial indexing. After each
    batch the stream is dropped and the executor yields so the allocator can reclaim
    pages. Default: 32.

max_file_bytes: usize
    Maximum file size in bytes to index. Larger files are skipped, which protects
    against large generated files (e.g. lock files, minified JS). Default: 512 KiB.
embed_provider: Option<ProviderName>
    Name of a [[llm.providers]] entry to use exclusively for embedding calls during
    indexing. A dedicated provider prevents the indexer from contending with the
    guardrail at the API-server level (rate limits, Ollama's single-model lock).
    Falls back to the main agent provider when None.
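A dedicated embedding provider could be wired up roughly as follows. This is a hypothetical sketch: it assumes a [[llm.providers]] entry is identified by a name key that embed_provider references; the full provider schema is defined elsewhere in the configuration and is not shown here.

```toml
# Hypothetical dedicated entry for embeddings; fields beyond the name used
# for lookup depend on the [[llm.providers]] schema, documented separately.
[[llm.providers]]
name = "embeddings"

[index]
enabled = true
embed_provider = "embeddings"   # route indexing embed calls to the dedicated entry
```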
embed_concurrency: usize
    Maximum number of parallel embed_batch calls during indexing. Default: 2, to
    stay within provider TPM (tokens-per-minute) limits.