```rust
pub trait TextStream:
    Stream<Item = Result<String, Self::Error>>
    + Send
    + Unpin
    + IntoFuture<Output = Self::Item, IntoFuture: Send>
{
    type Error: Error + Send + Sync + 'static;
}
```
A trait for streaming text responses from language models.

`TextStream` provides a unified interface for handling streaming text data from AI models. It combines the functionality of `Stream` (for processing chunks as they arrive) and `IntoFuture` (for collecting the complete response into a single string).
§Key Features

- Dual Interface: Both streaming (`Stream`) and batch (`IntoFuture`) processing
- Error Handling: Type-safe error propagation throughout the stream
- Composable: Integrates easily with other async/streaming patterns (see the sketch after this list)
- Provider Agnostic: Works with any text streaming implementation
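For instance, because every `TextStream` is an ordinary `Stream`, the usual combinators apply. A minimal sketch of chunk-level adaptation, assuming only the `Stream` supertrait and `futures_lite` combinators (the `trim_chunks` helper is hypothetical, not part of the crate):

```rust
use ai_types::llm::TextStream;
use futures_lite::{Stream, StreamExt};

// Hypothetical adapter: trims trailing whitespace from every chunk while
// preserving the original error type. Any other `StreamExt` combinator
// (filter, take, etc.) composes the same way.
fn trim_chunks<S: TextStream>(stream: S) -> impl Stream<Item = Result<String, S::Error>> {
    stream.map(|chunk| chunk.map(|text| text.trim_end().to_owned()))
}
```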
§Usage Patterns
§Real-time Processing (Stream Interface)
Process text chunks as they arrive, useful for real-time display or immediate processing:
```rust
use ai_types::llm::{TextStream, LanguageModel};
use futures_lite::StreamExt;

async fn display_as_generated<S: TextStream>(mut stream: S) -> Result<String, S::Error> {
    let mut complete_text = String::new();

    while let Some(chunk) = stream.next().await {
        let text = chunk?;
        print!("{}", text); // Display immediately
        complete_text.push_str(&text);
    }

    Ok(complete_text)
}
```
§Batch Collection (IntoFuture Interface)

Collect the complete response when you need the full text:
```rust
use ai_types::llm::{TextStream, LanguageModel, Request, Message};

async fn get_complete_answer<M: LanguageModel>(model: M) -> ai_types::Result {
    let request = Request::new([Message::user("What is Rust?")]);
    let stream = model.respond(request);

    // Collect everything into a single string
    let answer = stream.await?;
    Ok(answer)
}
```
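Since `IntoFuture::Output` is `Result<String, Self::Error>` and the error type implements `std::error::Error`, a failed collection can be handled like any other fallible future. A minimal sketch (the `answer_or_default` helper is hypothetical):

```rust
use ai_types::llm::TextStream;

// Hypothetical helper: fall back to an empty string if the stream fails.
// `S::Error` implements `std::error::Error`, so it can be logged via `Display`.
async fn answer_or_default<S: TextStream>(stream: S) -> String {
    match stream.await {
        Ok(text) => text,
        Err(err) => {
            eprintln!("stream failed: {err}");
            String::new()
        }
    }
}
```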
§Implementation Notes

Types implementing `TextStream` should ensure that:

- Text chunks are delivered in order
- Empty chunks are handled gracefully
- The stream terminates properly on completion or error
- Buffer management is efficient for memory usage
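A minimal sketch of a conforming implementation, assuming a pre-buffered source and a hypothetical `ChunkError` type (a real implementation would poll an underlying connection instead of a `Vec`):

```rust
use std::{
    error::Error,
    fmt,
    future::{Future, IntoFuture},
    pin::Pin,
    task::{Context, Poll},
};

use futures_lite::{Stream, StreamExt};

// Hypothetical error type for the sketch.
#[derive(Debug)]
struct ChunkError;

impl fmt::Display for ChunkError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str("chunk error")
    }
}

impl Error for ChunkError {}

// Toy stream: yields pre-buffered chunks in order, then terminates with `None`.
struct BufferedStream {
    chunks: std::vec::IntoIter<String>,
}

impl Stream for BufferedStream {
    type Item = Result<String, ChunkError>;

    fn poll_next(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
        // Chunks come out in insertion order; `None` ends the stream.
        Poll::Ready(self.chunks.next().map(Ok))
    }
}

impl IntoFuture for BufferedStream {
    type Output = Result<String, ChunkError>;
    type IntoFuture = Pin<Box<dyn Future<Output = Self::Output> + Send>>;

    fn into_future(mut self) -> Self::IntoFuture {
        // Batch mode: drain the stream into one string, stopping at the first error.
        Box::pin(async move {
            let mut text = String::new();
            while let Some(chunk) = self.next().await {
                text.push_str(&chunk?);
            }
            Ok(text)
        })
    }
}

// With `Stream`, `Send`, `Unpin`, and `IntoFuture` in place, the trait
// implementation only needs to name the error type.
impl ai_types::llm::TextStream for BufferedStream {
    type Error = ChunkError;
}
```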