# language-model-token-expander
This crate provides a high-level, batch-oriented token expansion system. By integrating with the batch-mode-batch-workflow tooling and a language model client (e.g., OpenAI), it streamlines processing tokens in batch, generating requests, and reconciling outputs.
## Features
- **`LanguageModelTokenExpander` struct**
  - Uses `#[derive(LanguageModelBatchWorkflow)]` to implement a comprehensive batch workflow.
  - Leverages the `CreateLanguageModelRequestsAtAgentCoordinate` trait to define how requests are formed.
  - Manages the workspace, client handles, and metadata for robust batch processing.
- **Modular error type**
  - `TokenExpanderError` consolidates a range of possible error variants (e.g., file I/O, reconciliation errors) into one convenient enum.
- **`ComputeLanguageModelRequests` integration**
  - Automatically extracts unseen tokens from the workspace and creates language model requests in an extensible, trait-driven manner.
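To illustrate the "one convenient enum" idea, here is a minimal, self-contained sketch of a consolidated error type. The variant names and `From` conversion are assumptions for illustration, not the crate's actual definition of `TokenExpanderError`:

```rust
use std::fmt;
use std::io;

// Illustrative sketch: one enum covering the error classes mentioned above
// (file I/O, reconciliation). Variant names are assumptions.
#[derive(Debug)]
pub enum TokenExpanderError {
    Io(io::Error),
    Reconciliation(String),
}

impl fmt::Display for TokenExpanderError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            TokenExpanderError::Io(e) => write!(f, "I/O error: {e}"),
            TokenExpanderError::Reconciliation(msg) => write!(f, "reconciliation error: {msg}"),
        }
    }
}

impl std::error::Error for TokenExpanderError {}

// With a `From` impl, `?` lifts io::Error into the consolidated enum.
impl From<io::Error> for TokenExpanderError {
    fn from(e: io::Error) -> Self {
        TokenExpanderError::Io(e)
    }
}

fn main() {
    let err = TokenExpanderError::from(io::Error::new(io::ErrorKind::NotFound, "missing seed file"));
    println!("{err}");
}
```

The payoff of this pattern is that every fallible step in the batch workflow can return the same `Result<_, TokenExpanderError>` type.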
## Usage
The import paths and async runtime below are reconstructed and may need adjustment for your workspace:

```rust
use language_model_token_expander::*;
use language_model_token_expander::CreateLanguageModelRequestsAtAgentCoordinate;
use std::sync::Arc;

// Assumes a tokio runtime; any async executor works.
#[tokio::main]
async fn main() {
    // Build your expander at an AgentCoordinate and run the batch workflow:
    // seed input -> request generation -> batch execution -> reconciled JSON output.
}
```
In this example, `LanguageModelTokenExpander` automatically handles workspace management and organizes the batch flow from seed input to final JSON output. You only need to define how to convert your tokens into `LanguageModelBatchAPIRequest` structures.
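The token-to-request conversion can be sketched with a self-contained mock. The types and method signature here are illustrative assumptions, not the crate's real `CreateLanguageModelRequestsAtAgentCoordinate` API:

```rust
// Mock stand-ins for the crate's types (names are assumptions).
#[derive(Debug, Clone, PartialEq)]
struct LanguageModelBatchAPIRequest {
    prompt: String,
}

#[derive(Debug, Clone, Copy)]
struct AgentCoordinate;

// Sketch of the trait-driven pattern: given a coordinate and the unseen
// tokens extracted from the workspace, produce one request per token.
trait CreateRequests {
    fn create_requests(&self, coord: AgentCoordinate, tokens: &[String]) -> Vec<LanguageModelBatchAPIRequest>;
}

struct MyExpander;

impl CreateRequests for MyExpander {
    fn create_requests(&self, _coord: AgentCoordinate, tokens: &[String]) -> Vec<LanguageModelBatchAPIRequest> {
        tokens
            .iter()
            .map(|t| LanguageModelBatchAPIRequest {
                prompt: format!("Expand the token: {t}"),
            })
            .collect()
    }
}

fn main() {
    let expander = MyExpander;
    let tokens = vec!["alpha".to_string(), "beta".to_string()];
    let requests = expander.create_requests(AgentCoordinate, &tokens);
    assert_eq!(requests.len(), 2);
    println!("{}", requests[0].prompt);
}
```

Because the conversion lives behind a trait, swapping prompt templates or request formats requires only a new implementation, leaving the surrounding batch machinery untouched.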