pub struct RunConfig {
pub streaming_mode: StreamingMode,
pub tool_confirmation_decisions: HashMap<String, ToolConfirmationDecision>,
pub cached_content: Option<String>,
pub transfer_targets: Vec<String>,
pub parent_agent: Option<String>,
pub auto_cache: bool,
}

Fields
streaming_mode: StreamingMode

tool_confirmation_decisions: HashMap<String, ToolConfirmationDecision>

Optional per-tool confirmation decisions for the current run. Keys are tool names.
cached_content: Option<String>

Optional cached content name for automatic prompt caching.
When set by the runner’s cache lifecycle manager, agents should attach
this name to their GenerateContentConfig so the LLM provider can
reuse cached system instructions and tool definitions.
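A minimal sketch of that hand-off, assuming only what the paragraph states. `GenerateContentConfig` is named in the docs, but giving it a `cached_content: Option<String>` field is an assumption for this example:

```rust
// Stand-in for the config type named in the docs.
#[derive(Debug, Default)]
struct GenerateContentConfig {
    cached_content: Option<String>,
}

// Hypothetical sketch: copy RunConfig::cached_content onto the request
// config so the provider can reuse the cached prefix.
fn attach_cache(run_cached_content: Option<&str>, cfg: &mut GenerateContentConfig) {
    if let Some(name) = run_cached_content {
        cfg.cached_content = Some(name.to_string());
    }
}

fn main() {
    let mut cfg = GenerateContentConfig::default();
    // "cachedContents/abc" is a made-up cache name for illustration.
    attach_cache(Some("cachedContents/abc"), &mut cfg);
    assert_eq!(cfg.cached_content.as_deref(), Some("cachedContents/abc"));
    println!("ok");
}
```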
transfer_targets: Vec<String>

Valid agent names this agent can transfer to (parent, peers, children).
Set by the runner when invoking agents in a multi-agent tree.
When non-empty, the transfer_to_agent tool is injected and validation
uses this list instead of only checking sub_agents.
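A minimal sketch of that validation rule, using only the behavior stated above. The function name, the `sub_agents` parameter shape, and the call sites are assumptions for illustration, not the crate's real API:

```rust
// Hypothetical helper: validate a transfer target against
// RunConfig::transfer_targets, falling back to the agent's own
// sub_agents only when the runner supplied no targets.
fn is_valid_transfer(target: &str, transfer_targets: &[String], sub_agents: &[String]) -> bool {
    if !transfer_targets.is_empty() {
        // Non-empty list from the runner: it is the sole source of truth.
        transfer_targets.iter().any(|t| t == target)
    } else {
        // Empty list: fall back to checking sub_agents.
        sub_agents.iter().any(|t| t == target)
    }
}

fn main() {
    let targets = vec!["parent".to_string(), "peer_a".to_string()];
    let subs = vec!["child_b".to_string()];
    assert!(is_valid_transfer("peer_a", &targets, &subs));
    // The runner-supplied list overrides sub_agents entirely.
    assert!(!is_valid_transfer("child_b", &targets, &subs));
    // With no runner list, sub_agents is consulted.
    assert!(is_valid_transfer("child_b", &[], &subs));
    println!("ok");
}
```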
parent_agent: Option<String>

The name of the parent agent, if this agent was invoked via transfer.
Used by the agent to apply disallow_transfer_to_parent filtering.
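One way that filtering could look, sketched from the field doc alone. The function shape and the `disallow_transfer_to_parent` flag's plumbing are assumptions; only the names come from the docs:

```rust
// Hypothetical sketch: drop the parent's name from the valid transfer
// targets when disallow_transfer_to_parent is set.
fn filter_targets(
    targets: Vec<String>,
    parent_agent: Option<&str>,
    disallow_transfer_to_parent: bool,
) -> Vec<String> {
    targets
        .into_iter()
        .filter(|t| !(disallow_transfer_to_parent && parent_agent == Some(t.as_str())))
        .collect()
}

fn main() {
    let targets = vec!["root".to_string(), "peer".to_string()];
    // "root" is the parent and transfers back to it are disallowed.
    let filtered = filter_targets(targets, Some("root"), true);
    assert_eq!(filtered, vec!["peer".to_string()]);
    println!("ok");
}
```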
auto_cache: bool

Enable automatic prompt caching for all providers that support it.
When true (the default), the runner enables provider-level caching:
- Anthropic: sets `prompt_caching = true` on the config
- Bedrock: sets `prompt_caching = Some(BedrockCacheConfig::default())`
- OpenAI / DeepSeek: no action needed (caching is automatic)
- Gemini: handled separately via `ContextCacheConfig`
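The per-provider branching above can be sketched as a single match. The `Provider` enum and `ProviderConfig` struct here are stand-ins invented for the example; only the field values assigned in each arm come from the docs:

```rust
// Stand-in for the real Bedrock cache config type named in the docs.
#[derive(Debug, Default)]
struct BedrockCacheConfig;

// Stand-in provider config holding the two caching knobs described above.
#[derive(Debug, Default)]
struct ProviderConfig {
    prompt_caching: bool,
    bedrock_prompt_caching: Option<BedrockCacheConfig>,
}

enum Provider {
    Anthropic,
    Bedrock,
    OpenAi,
    DeepSeek,
}

// Hypothetical sketch of what the runner does when auto_cache is true.
fn apply_auto_cache(provider: Provider, cfg: &mut ProviderConfig) {
    match provider {
        Provider::Anthropic => cfg.prompt_caching = true,
        Provider::Bedrock => cfg.bedrock_prompt_caching = Some(BedrockCacheConfig::default()),
        // OpenAI and DeepSeek cache automatically; Gemini is handled
        // separately via ContextCacheConfig and is not covered here.
        Provider::OpenAi | Provider::DeepSeek => {}
    }
}

fn main() {
    let mut cfg = ProviderConfig::default();
    apply_auto_cache(Provider::Anthropic, &mut cfg);
    assert!(cfg.prompt_caching);
    apply_auto_cache(Provider::Bedrock, &mut cfg);
    assert!(cfg.bedrock_prompt_caching.is_some());
    println!("ok");
}
```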