The Batches API: submit a batch of message requests, poll for completion, stream per-request results.
Anthropic’s batch endpoint is the cheapest way to run large fan-out workloads (50% off vs. per-request pricing) at the cost of higher latency. This module wraps the full surface:
- Batches::create – submit
- Batches::get – status (polling-friendly)
- Batches::list / Batches::list_all – enumerate
- Batches::cancel, Batches::delete
- Batches::wait_for – poller that returns once ended_at is set
- Batches::results / Batches::results_stream – decode the JSONL results body, eagerly into a Vec or lazily as a Stream (a streamed-consumption sketch follows the quick start)
The batch ID is the only state you need to durably persist; reattach later by calling Batches::get(id) or Batches::wait_for(id, _).
§Quick start
use claude_api::{Client, batches::{BatchRequest, BatchResultPayload, WaitOptions},
                 messages::CreateMessageRequest, types::ModelId};

let client = Client::new(std::env::var("ANTHROPIC_API_KEY").unwrap());

let requests = vec![
    BatchRequest::new("q1",
        CreateMessageRequest::builder()
            .model(ModelId::HAIKU_4_5)
            .max_tokens(32)
            .user("2 + 2?")
            .build()?),
];

let batch = client.batches().create(requests).await?;
let finished = client.batches()
    .wait_for(&batch.id, WaitOptions::default())
    .await?;

let items = client.batches().results(&finished.id).await?;
for item in &items {
    if let BatchResultPayload::Succeeded { message } = &item.result {
        println!("{}: {} tokens", item.custom_id, message.usage.output_tokens);
    }
}
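§Reattaching to a batch
Since only the batch ID needs to survive a restart, reattaching from another process looks like the sketch below. This is a minimal illustration, not part of the crate: the store handle and its save/load calls are hypothetical stand-ins for whatever persistence layer you already use.

// Submit, then persist nothing but the batch ID.
// `store` and its save/load methods are hypothetical placeholders.
let batch = client.batches().create(requests).await?;
store.save("pending_batch_id", &batch.id)?;

// Later, possibly in a different process: reload the ID and reattach.
let id: String = store.load("pending_batch_id")?;
let finished = client.batches()
    .wait_for(&id, WaitOptions::default())
    .await?;
println!("batch {} finished", finished.id);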
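§Streaming results
For very large batches you can avoid materializing the whole Vec by consuming Batches::results_stream instead of Batches::results. The sketch below is an assumption-laden illustration: it assumes the method is awaited like results, that the stream yields one decoded item at a time wrapped in a Result, and that the futures crate is available; check the actual signature and item type before copying it.

use futures::StreamExt;

// Assumption: `results_stream` yields `Result<BatchResultItem, _>` items as the
// JSONL body is decoded; drop the `?` if it yields plain items instead.
let stream = client.batches().results_stream(&finished.id).await?;
futures::pin_mut!(stream); // works whether or not the stream type is Unpin

while let Some(item) = stream.next().await {
    let item = item?;
    match item.result {
        BatchResultPayload::Succeeded { message } => {
            println!("{}: {} tokens", item.custom_id, message.usage.output_tokens);
        }
        _ => eprintln!("{}: request did not succeed", item.custom_id),
    }
}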
Re-exports§
pub use types::BatchDeleted;
pub use types::BatchRequest;
pub use types::BatchResultItem;
pub use types::BatchResultPayload;
pub use types::ListBatchesParams;
pub use types::MessageBatch;
pub use types::ProcessingStatus;
pub use types::RequestCounts;
pub use types::WaitOptions;
pub use api::Batches;