aether-llm 0.1.9

Multi-provider LLM abstraction layer for the Aether AI agent framework
A tool that the LLM can invoke during generation.

Tools follow a request-response lifecycle:

1. **Define** -- Create [`ToolDefinition`]s with a name, description, and JSON Schema parameters, then pass them to the model via [`Context::set_tools`](crate::Context::set_tools).
2. **Request** -- The model emits a [`ToolCallRequest`] streamed as `ToolRequestStart` -> `ToolRequestArg` -> `ToolRequestComplete` in [`LlmResponse`](crate::LlmResponse).
3. **Execute** -- Run the requested tool and produce either a [`ToolCallResult`] success or a [`ToolCallError`] failure.
4. **Return** -- Feed the result back to the model as a [`ChatMessage::ToolCallResult`](crate::ChatMessage::ToolCallResult) in the next turn.

# Related types

- [`ToolCallRequest`] -- What the model asks for: tool `id`, `name`, and `arguments` (JSON string).
- [`ToolCallResult`] -- Successful execution: includes the `result` string.
- [`ToolCallError`] -- Failed execution: includes the `error` string. Construct from a request with [`ToolCallError::from_request`].

The optional `server` field on `ToolDefinition` tracks which MCP server originally provided the tool, if any.