Shared types for the agent module.
Organized by domain:
- `string_utils`: UTF-8-safe string helpers
- `config`: Agent configuration
- `chat`: OpenAI-compatible chat types
- `feedback`: Execution feedback (EVO-1)
- `event_sink`: Event sink trait and implementations
- `task`: Task planning types
- `env_config`: Environment config helpers
Structs

- `AgentConfig` - Agent configuration.
- `AgentResult` - Agent loop result.
- `ChatMessage` - A chat message in OpenAI format.
- `ClarificationRequest` - Structured request asking the user for clarification before the agent stops.
- `ExecutionFeedback` - Structured feedback collected from each agent loop execution. Used by the evolution engine to evaluate rule/skill effectiveness.
- `FunctionCall` - Function call details.
- `FunctionDef` - Function definition within a tool.
- `PlanningRule` - A planning rule for task generation.
- `RunModeEventSink` - Event sink for unattended run mode: same output as `TerminalEventSink`, but auto-approves confirmation requests (`run_command`, L3 skill scan). Replan (`update_task_plan`) never waits; the agent continues immediately.
- `SilentEventSink` - Silent event sink for background operations (e.g. the pre-compaction memory flush). Swallows all output and auto-approves confirmation requests.
- `SourceEntry` - A single external information source entry.
- `SourceRegistry` - The full source registry.
- `Task` - A task in the task plan. Ported from the Python `TaskPlanner.task_list` dict structure.
- `TerminalEventSink` - Simple terminal event sink for CLI chat.
- `ToolCall` - A tool call from the LLM.
- `ToolDefinition` - OpenAI-compatible tool definition.
- `ToolExecDetail` - Per-tool execution outcome.
- `ToolResult` - Result from executing a tool.
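The chat types above mirror the OpenAI wire format. As a rough sketch of that shape (field names here follow the OpenAI chat API and are assumptions, not this crate's actual definitions), a message carrying a tool call might look like:

```rust
// Hypothetical sketch: field names follow the OpenAI chat API,
// not this crate's actual struct definitions.
#[derive(Debug, Clone)]
struct ToolCall {
    id: String,        // call id echoed back in the matching tool-result message
    name: String,      // function name, e.g. "run_command"
    arguments: String, // JSON-encoded argument string
}

#[derive(Debug, Clone)]
struct ChatMessage {
    role: String,              // "system" | "user" | "assistant" | "tool"
    content: Option<String>,   // None for pure tool-call messages
    tool_calls: Vec<ToolCall>, // empty unless the assistant calls tools
}

fn main() {
    // An assistant turn that invokes a tool instead of replying with text.
    let msg = ChatMessage {
        role: "assistant".to_string(),
        content: None,
        tool_calls: vec![ToolCall {
            id: "call_1".to_string(),
            name: "run_command".to_string(),
            arguments: r#"{"cmd":"ls"}"#.to_string(),
        }],
    };
    println!("{:?}", msg);
}
```

In this shape, the tool result is sent back as a follow-up message with `role: "tool"`, carrying the same call id.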
Enums

- `ClarificationResponse` - User's response to a clarification request.
- `FeedbackSignal` - User feedback signal classified from the next user message.
- `LongTextStrategy` - Long text selection strategy. `SKILLLITE_LONG_TEXT_STRATEGY`.
- `SkillAction` - Action type for skill evolution (generate a new skill or refine an existing one).
- `ToolFormat` - Supported LLM tool formats. Ported from the Python `core/tools.py` `ToolFormat` enum.
Traits

- `EventSink` - Event sink trait for different output targets (CLI, RPC, SDK).
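To make the sink variants above concrete, here is a minimal sketch of the event-sink idea; the method names are assumptions for illustration, not this crate's actual trait surface:

```rust
// Hypothetical sketch of an event-sink trait; method names are
// assumptions, not this crate's actual API.
trait EventSink {
    /// Emit a line of agent output to the target (terminal, RPC, etc.).
    fn emit(&mut self, text: &str);
    /// Ask for confirmation before a sensitive action; auto-approving
    /// sinks (silent or run-mode) simply return true.
    fn confirm(&mut self, prompt: &str) -> bool;
}

/// A sink that swallows output and auto-approves, mirroring the
/// described `SilentEventSink` behavior.
struct Silent;

impl EventSink for Silent {
    fn emit(&mut self, _text: &str) {}
    fn confirm(&mut self, _prompt: &str) -> bool {
        true
    }
}

fn main() {
    let mut sink = Silent;
    sink.emit("progress line"); // swallowed
    assert!(sink.confirm("run_command?")); // auto-approved
}
```

A terminal sink would implement the same trait by printing and prompting, which is what lets the agent loop stay agnostic of its output target.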
Functions

- `chunk_str` - Split a string into chunks of approximately `chunk_size` bytes, ensuring each split occurs at a valid UTF-8 char boundary.
- `classify_user_feedback` - Classify user feedback from the next message (simple keyword matching). ~80% accuracy is sufficient; evolution is gradual and tolerates noise.
- `get_chunk_size` - Chunk size for long text summarization (~1.5k tokens). `SKILLLITE_CHUNK_SIZE`.
- `get_compact_planning` - Whether to use the compact planning prompt (rule filtering + fewer examples).
- `get_compaction_keep_recent` - Number of recent messages to keep after compaction. `SKILLLITE_COMPACTION_KEEP_RECENT`.
- `get_compaction_threshold` - Compaction threshold: compact conversation history when the message count exceeds this. `SKILLLITE_COMPACTION_THRESHOLD`. Default 16 (~8 turns).
- `get_extract_top_k` - Number of chunks to select in extract mode. Uses the ratio, with the head+tail count as a floor.
- `get_head_chunks` - Number of head chunks for head+tail summarization. `SKILLLITE_HEAD_CHUNKS`.
- `get_long_text_strategy` - Long text selection strategy. `SKILLLITE_LONG_TEXT_STRATEGY`.
- `get_map_model` - Model for the Map stage in MapReduce summarization. `SKILLLITE_MAP_MODEL`. When set, Map (per-chunk summarization) uses this cheaper model while Reduce (merge) uses the main model, e.g. `qwen-plus` or `gemini-1.5-flash`. If unset, both stages use the main model.
- `get_max_output_chars` - Max output length for summarization (~2k tokens). `SKILLLITE_MAX_OUTPUT_CHARS`.
- `get_max_tokens` - Max output tokens for LLM completion. `SKILLLITE_MAX_TOKENS`. Higher values reduce `write_output`/`write_file` truncation when generating large content. Default 8192 to match common API limits (e.g. DeepSeek). Set higher if your API supports it.
- `get_memory_flush_enabled` - Whether to run the pre-compaction memory flush (OpenClaw-style). When enabled, a silent agent turn runs before compaction to remind the model to write durable memories. `SKILLLITE_MEMORY_FLUSH_ENABLED`. Default true.
- `get_memory_flush_threshold` - Memory flush threshold: run the memory flush when history approaches compaction. A lower value means more frequent flushes; falls back to the compaction threshold if unset. `SKILLLITE_MEMORY_FLUSH_THRESHOLD`. Default 12 (so the flush triggers ~4 messages before compaction at 16).
- `get_output_dir` - Output directory override. `SKILLLITE_OUTPUT_DIR`.
- `get_summarize_threshold` - Threshold above which chunked LLM summarization is used instead of simple truncation. `SKILLLITE_SUMMARIZE_THRESHOLD`. Default raised from 15000 to 30000 to avoid summarizing medium-sized HTML/code files (e.g. a 17KB website), which destroys content needed for downstream tasks.
- `get_tail_chunks` - Number of tail chunks for head+tail summarization. `SKILLLITE_TAIL_CHUNKS`.
- `get_tool_result_max_chars` - Max chars per tool result. `SKILLLITE_TOOL_RESULT_MAX_CHARS`. Default raised from 8000 to 12000 to better accommodate HTML/code tool results without triggering unnecessary truncation.
- `get_tool_result_recovery_max_chars` - Max chars for tool messages during context-overflow recovery. `SKILLLITE_TOOL_RESULT_RECOVERY_MAX_CHARS`.
- `get_user_input_max_chars` - Max chars for a single user input message before truncation/summarization. `SKILLLITE_USER_INPUT_MAX_CHARS`. Default 30000 (~7.5k tokens). Inputs shorter than this pass through unchanged; longer inputs are truncated (if ≤ `SKILLLITE_SUMMARIZE_THRESHOLD`) or LLM-summarized.
- `parse_claude_tool_calls` - Parse tool calls from a Claude native API response. Claude returns content blocks with type `tool_use`. Ported from the Python `ToolUseRequest.parse_from_claude_response`.
- `safe_slice_from` - Get a `&str` starting from approximately `start_pos`, adjusted forward to a safe UTF-8 boundary.
- `safe_truncate` - Truncate a string at a safe UTF-8 char boundary (from the start). Returns a `&str` of at most `max_bytes` bytes, never splitting a multi-byte character.
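The UTF-8 boundary safety that `chunk_str`, `safe_slice_from`, and `safe_truncate` promise can be illustrated with `str::is_char_boundary` from the standard library. This is a sketch following only the documented contract of `safe_truncate`; the crate's real implementation may differ:

```rust
/// Sketch of UTF-8-boundary-safe truncation, following the documented
/// contract (at most `max_bytes` bytes, never splitting a multi-byte
/// character). Not necessarily the crate's actual implementation.
fn safe_truncate(s: &str, max_bytes: usize) -> &str {
    if s.len() <= max_bytes {
        return s;
    }
    // Walk backward from `max_bytes` to the nearest char boundary,
    // so the slice below can never panic mid-character.
    let mut end = max_bytes;
    while end > 0 && !s.is_char_boundary(end) {
        end -= 1;
    }
    &s[..end]
}

fn main() {
    // "é" is 2 bytes, so a 2-byte budget must stop after "h".
    assert_eq!(safe_truncate("héllo", 2), "h");
    assert_eq!(safe_truncate("héllo", 3), "hé");
    // Short enough strings pass through unchanged.
    assert_eq!(safe_truncate("abc", 10), "abc");
}
```

A naive `&s[..max_bytes]` would panic on multi-byte input, which is exactly the failure mode these helpers exist to prevent.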