Module types

Shared types for the agent module.

Organized by domain:

  • string_utils: UTF-8 safe string helpers
  • config: Agent configuration
  • chat: OpenAI-compatible chat types
  • feedback: Execution feedback (EVO-1)
  • event_sink: Event sink trait and implementations
  • task: Task planning types
  • env_config: Environment config helpers

Structs§

AgentConfig
Agent configuration.
AgentResult
Agent loop result.
ChatMessage
A chat message in OpenAI format.
ClarificationRequest
Structured request asking the user for clarification before the agent stops.
ExecutionFeedback
Structured feedback collected from each agent loop execution. Used by the evolution engine to evaluate rule/skill effectiveness.
FunctionCall
Function call details.
FunctionDef
Function definition within a tool.
PlanningRule
A planning rule for task generation.
RunModeEventSink
Event sink for unattended run mode: same output as TerminalEventSink, but auto-approves confirmation requests (run_command, L3 skill scan). Replan (update_task_plan) never waits; the agent continues immediately.
SilentEventSink
Silent event sink for background operations (e.g. pre-compaction memory flush). Swallows all output and auto-approves confirmation requests.
SourceEntry
A single external information source entry.
SourceRegistry
The full source registry.
Task
A task in the task plan. Ported from Python TaskPlanner.task_list dict structure.
TerminalEventSink
Simple terminal event sink for CLI chat.
ToolCall
A tool call from the LLM.
ToolDefinition
OpenAI-compatible tool definition.
ToolExecDetail
Per-tool execution outcome.
ToolResult
Result from executing a tool.

Enums§

ClarificationResponse
User’s response to a clarification request.
FeedbackSignal
User feedback signal classified from the next user message.
LongTextStrategy
Long text selection strategy. SKILLLITE_LONG_TEXT_STRATEGY.
SkillAction
Action type for skill evolution (generate new or refine existing).
ToolFormat
Supported LLM tool formats. Ported from Python core/tools.py ToolFormat enum.
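Given parse_claude_tool_calls below and the OpenAI-compatible types above, ToolFormat plausibly distinguishes at least these two wire formats. The variant set here is an assumption; the real enum may carry more formats:

```rust
// Hypothetical ToolFormat variants, inferred from the surrounding items;
// the actual enum ported from Python core/tools.py may differ.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum ToolFormat {
    OpenAI, // tool calls arrive in the assistant message's tool_calls array
    Claude, // tool calls arrive as content blocks with type "tool_use"
}

/// Where each format carries tool calls in the response payload.
fn format_hint(fmt: ToolFormat) -> &'static str {
    match fmt {
        ToolFormat::OpenAI => "tool_calls",
        ToolFormat::Claude => "tool_use",
    }
}

fn main() {
    assert_eq!(format_hint(ToolFormat::Claude), "tool_use");
    assert_eq!(format_hint(ToolFormat::OpenAI), "tool_calls");
}
```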

Traits§

EventSink
Event sink trait for different output targets (CLI, RPC, SDK).
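The sink implementations listed above suggest the trait's shape: emit output and answer confirmation requests, with SilentEventSink swallowing output and auto-approving. A minimal sketch with assumed method names; the real trait likely has richer, per-event methods:

```rust
// Assumed minimal shape of the EventSink trait; method names are
// illustrative, not the crate's actual API.
trait EventSink {
    /// Deliver a piece of agent output to the target (CLI, RPC, SDK).
    fn emit(&self, text: &str);
    /// Ask the user to approve an action; returns true to proceed.
    fn confirm(&self, prompt: &str) -> bool;
}

/// Mirrors the documented SilentEventSink behavior: swallow all output
/// and auto-approve confirmation requests.
struct SilentSink;

impl EventSink for SilentSink {
    fn emit(&self, _text: &str) {} // swallowed
    fn confirm(&self, _prompt: &str) -> bool {
        true // auto-approve
    }
}

fn main() {
    let sink = SilentSink;
    sink.emit("background memory flush output"); // goes nowhere
    assert!(sink.confirm("run_command?"));
}
```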

Functions§

chunk_str
Split a string into chunks of approximately chunk_size bytes, ensuring each split occurs at a valid UTF-8 char boundary.
classify_user_feedback
Classify user feedback from the next message (simple keyword matching). ~80% accuracy is sufficient; evolution is gradual and tolerates noise.
get_chunk_size
Chunk size for long text summarization (~1.5k tokens). SKILLLITE_CHUNK_SIZE.
get_compact_planning
Whether to use compact planning prompt (rule filtering + fewer examples).
get_compaction_keep_recent
Number of recent messages to keep after compaction. SKILLLITE_COMPACTION_KEEP_RECENT.
get_compaction_threshold
Compaction threshold: compact conversation history when message count exceeds this. SKILLLITE_COMPACTION_THRESHOLD. Default 16 (~8 turns).
get_extract_top_k
Number of chunks to select in extract mode. Uses ratio or head+tail count as floor.
get_head_chunks
Number of head chunks for head+tail summarization. SKILLLITE_HEAD_CHUNKS.
get_long_text_strategy
Long text selection strategy (LongTextStrategy) read from the environment. SKILLLITE_LONG_TEXT_STRATEGY.
get_map_model
Model for Map stage in MapReduce summarization. SKILLLITE_MAP_MODEL. When set, Map (per-chunk summarization) uses this cheaper model; Reduce (merge) uses main model. E.g. qwen-plus, gemini-1.5-flash. If unset, both stages use main model.
get_max_output_chars
Max output length for summarization (~2k tokens). SKILLLITE_MAX_OUTPUT_CHARS.
get_max_tokens
Max output tokens for LLM completion. SKILLLITE_MAX_TOKENS. Higher values reduce write_output/write_file truncation when generating large content. Default 8192 to match common API limits (e.g. DeepSeek). Set higher if your API supports it.
get_memory_flush_enabled
Whether to run pre-compaction memory flush (OpenClaw-style). When enabled, before compacting we run a silent agent turn to remind the model to write durable memories. SKILLLITE_MEMORY_FLUSH_ENABLED. Default true.
get_memory_flush_threshold
Memory flush threshold: run the memory flush when history approaches compaction. A lower value means more frequent flushes; falls back to the compaction threshold if not set. SKILLLITE_MEMORY_FLUSH_THRESHOLD. Default 12 (so the flush triggers ~4 messages before compaction at the default threshold of 16).
get_output_dir
Output directory override. SKILLLITE_OUTPUT_DIR.
get_summarize_threshold
Threshold above which chunked LLM summarization is used instead of simple truncation. SKILLLITE_SUMMARIZE_THRESHOLD. Default raised from 15000→30000 to avoid summarizing medium-sized HTML/code files (e.g. 17KB website) which destroys content needed for downstream tasks.
get_tail_chunks
Number of tail chunks for head+tail summarization. SKILLLITE_TAIL_CHUNKS.
get_tool_result_max_chars
Max chars per tool result. SKILLLITE_TOOL_RESULT_MAX_CHARS. Default raised from 8000→12000 to better accommodate HTML/code tool results without triggering unnecessary truncation.
get_tool_result_recovery_max_chars
Max chars for tool messages during context-overflow recovery. SKILLLITE_TOOL_RESULT_RECOVERY_MAX_CHARS.
get_user_input_max_chars
Max chars for a single user input message before truncation/summarization. SKILLLITE_USER_INPUT_MAX_CHARS. Default 30000 (~7.5k tokens). Inputs shorter than this pass through unchanged; longer inputs are truncated (if ≤ SKILLLITE_SUMMARIZE_THRESHOLD) or LLM-summarized.
parse_claude_tool_calls
Parse tool calls from a Claude native API response. Claude returns content blocks with type “tool_use”. Ported from Python ToolUseRequest.parse_from_claude_response.
safe_slice_from
Get a &str starting from approximately start_pos, adjusted forward to a safe UTF-8 boundary.
safe_truncate
Truncate a string at a safe UTF-8 char boundary (from the start). Returns a &str of at most max_bytes bytes, never splitting a multi-byte character.
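The string_utils helpers above are well specified by their summaries. A self-contained sketch of how safe_truncate and chunk_str can be implemented on top of std's char-boundary checks; this is an assumed implementation for illustration, not the crate's actual code:

```rust
/// Assumed implementation of safe_truncate: back the cut point off
/// until it lands on a UTF-8 char boundary, so a multi-byte character
/// is never split. Returns at most max_bytes bytes.
fn safe_truncate(s: &str, max_bytes: usize) -> &str {
    if s.len() <= max_bytes {
        return s;
    }
    let mut end = max_bytes;
    while !s.is_char_boundary(end) {
        end -= 1;
    }
    &s[..end]
}

/// Assumed implementation of chunk_str: repeatedly take up to
/// chunk_size bytes, snapped back to a valid char boundary.
fn chunk_str(s: &str, chunk_size: usize) -> Vec<&str> {
    let mut chunks = Vec::new();
    let mut rest = s;
    while !rest.is_empty() {
        let mut head = safe_truncate(rest, chunk_size);
        if head.is_empty() {
            // chunk_size is smaller than the first char; take one full
            // char anyway to guarantee forward progress
            let n = rest.chars().next().map_or(1, char::len_utf8);
            head = &rest[..n];
        }
        chunks.push(head);
        rest = &rest[head.len()..];
    }
    chunks
}

fn main() {
    // 'é' occupies 2 bytes, so a 2-byte budget keeps only "h"
    assert_eq!(safe_truncate("héllo", 2), "h");
    let chunks = chunk_str("héllo wörld", 4);
    assert!(chunks.iter().all(|c| c.len() <= 4));
    assert_eq!(chunks.concat(), "héllo wörld");
}
```

safe_slice_from is the mirror image: instead of backing off, advance the start index forward with the same `is_char_boundary` test until it is safe to slice.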