
§azure_ai_foundry_models


Model inference client for the Azure AI Foundry Rust SDK — chat completions, embeddings, audio, images, and the Responses API.

§Features

  • Chat Completions — Synchronous and streaming responses
  • Embeddings — Generate vector embeddings for text
  • Audio — Transcription (STT), translation, and text-to-speech (TTS)
  • Images — Image generation and editing
  • Responses — Unified Responses API (create, get, delete)
  • Streaming — SSE with optimized parsing and 1MB buffer protection
  • Builder Pattern — Type-safe request construction with parameter validation
  • Tracing — Full instrumentation with tracing spans
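
The builder pattern listed above, in miniature. This is a generic sketch of the idea — chained setters plus validation at `build()` time — and uses illustrative types, not this crate's actual request builders:

```rust
// Generic builder-pattern sketch: required fields are set through
// chained methods, and `build()` validates before producing a value.
// `Request` and `RequestBuilder` are illustrative, not crate types.
#[derive(Debug, PartialEq)]
struct Request {
    model: String,
    input: String,
}

#[derive(Default)]
struct RequestBuilder {
    model: Option<String>,
    input: Option<String>,
}

impl RequestBuilder {
    fn model(mut self, m: &str) -> Self {
        self.model = Some(m.to_string());
        self
    }

    fn input(mut self, i: &str) -> Self {
        self.input = Some(i.to_string());
        self
    }

    // Validation happens at build time: a missing field becomes an error
    // instead of a half-constructed request.
    fn build(self) -> Result<Request, &'static str> {
        Ok(Request {
            model: self.model.ok_or("model is required")?,
            input: self.input.ok_or("input is required")?,
        })
    }
}

fn main() {
    let ok = RequestBuilder::default().model("gpt-4o").input("hi").build();
    println!("{:?}", ok);

    let missing = RequestBuilder::default().input("hi").build();
    println!("{:?}", missing);
}
```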

§Installation

[dependencies]
azure_ai_foundry_core = "0.8"
azure_ai_foundry_models = "0.8"
tokio = { version = "1", features = ["full"] }

§Usage

§Chat Completions

use azure_ai_foundry_core::client::FoundryClient;
use azure_ai_foundry_core::auth::FoundryCredential;
use azure_ai_foundry_models::chat::{ChatCompletionRequest, Message};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = FoundryClient::builder()
        .endpoint("https://your-resource.services.ai.azure.com")
        .credential(FoundryCredential::api_key("your-key"))
        .build()?;

    let request = ChatCompletionRequest::builder()
        .model("gpt-4o")
        .message(Message::system("You are a helpful assistant."))
        .message(Message::user("What is Rust?"))
        .build();

    let response = azure_ai_foundry_models::chat::complete(&client, &request).await?;
    println!("{}", response.choices[0].message.content.as_deref().unwrap_or_default());
    Ok(())
}

§Streaming Chat Completions

use azure_ai_foundry_core::client::FoundryClient;
use azure_ai_foundry_core::auth::FoundryCredential;
use azure_ai_foundry_models::chat::{ChatCompletionRequest, Message, complete_stream};
use futures::StreamExt;

// Runs in an async context, with `client` built as in the Chat Completions example.
let request = ChatCompletionRequest::builder()
    .model("gpt-4o")
    .message(Message::user("Tell me a story"))
    .build();

let stream = complete_stream(&client, &request).await?;
let mut stream = std::pin::pin!(stream);

while let Some(chunk) = stream.next().await {
    let chunk = chunk?;
    // Guard against chunks that carry no choices (e.g. the final event).
    if let Some(content) = chunk.choices.first().and_then(|c| c.delta.content.as_deref()) {
        print!("{}", content);
    }
}
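
On the wire, streaming responses arrive as Server-Sent Events: each event carries a `data:` line holding a JSON chunk, and a `data: [DONE]` sentinel ends the stream. The sketch below illustrates that framing only — it is not the crate's internal parser:

```rust
// Illustrative SSE framing, not the crate's internals: extract the JSON
// payload from a `data:` line, treating `[DONE]` (and non-data lines,
// such as `:` keep-alive comments) as end-of-content.
fn parse_sse_data(frame: &str) -> Option<&str> {
    let payload = frame.strip_prefix("data:")?.trim();
    if payload == "[DONE]" {
        None
    } else {
        Some(payload)
    }
}

fn main() {
    let frames = [
        "data: {\"choices\":[{\"delta\":{\"content\":\"Hel\"}}]}",
        "data: {\"choices\":[{\"delta\":{\"content\":\"lo\"}}]}",
        "data: [DONE]",
    ];
    for frame in frames {
        match parse_sse_data(frame) {
            Some(json) => println!("payload: {}", json),
            None => println!("stream finished"),
        }
    }
}
```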

§Embeddings

use azure_ai_foundry_core::client::FoundryClient;
use azure_ai_foundry_core::auth::FoundryCredential;
use azure_ai_foundry_models::embeddings::{EmbeddingRequest, embed};

// Runs in an async context, with `client` built as in the Chat Completions example.
let request = EmbeddingRequest::builder()
    .model("text-embedding-ada-002")
    .input("The quick brown fox jumps over the lazy dog")
    .build();

let response = embed(&client, &request).await?;
println!("Embedding dimensions: {}", response.data[0].embedding.len());

§Multiple Embeddings

use azure_ai_foundry_core::client::FoundryClient;
use azure_ai_foundry_core::auth::FoundryCredential;
use azure_ai_foundry_models::embeddings::{EmbeddingRequest, embed};

// Runs in an async context, with `client` built as in the Chat Completions example.
let request = EmbeddingRequest::builder()
    .model("text-embedding-ada-002")
    .inputs(vec![
        "First document",
        "Second document",
        "Third document",
    ])
    .build();

let response = embed(&client, &request).await?;
for (i, item) in response.data.iter().enumerate() {
    println!("Document {}: {} dimensions", i, item.embedding.len());
}
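
Once generated, embeddings are typically compared with cosine similarity. A minimal, dependency-free sketch — the helper below is illustrative and not part of this crate's API:

```rust
// Cosine similarity between two embedding vectors of equal length.
// Assumes non-zero vectors; a zero vector would yield NaN.
fn cosine_similarity(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let norm_a = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let norm_b = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (norm_a * norm_b)
}

fn main() {
    let a = [1.0, 0.0, 1.0];
    let b = [1.0, 0.0, 1.0];
    let c = [0.0, 1.0, 0.0];
    // Identical vectors score ~1.0; orthogonal vectors score 0.0.
    println!("{:.2}", cosine_similarity(&a, &b));
    println!("{:.2}", cosine_similarity(&a, &c));
}
```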

§Audio Transcription

use azure_ai_foundry_core::client::FoundryClient;
use azure_ai_foundry_core::auth::FoundryCredential;
use azure_ai_foundry_models::audio::{TranscriptionRequest, transcribe};

// Runs in an async context, with `client` built as in the Chat Completions example.
let audio_data = std::fs::read("recording.wav")?;
let request = TranscriptionRequest::builder()
    .model("whisper-1")
    .filename("recording.wav")
    .data(audio_data)
    .language("en")
    .build();

let response = transcribe(&client, &request).await?;
println!("Transcription: {}", response.text);

§Text-to-Speech

use azure_ai_foundry_core::client::FoundryClient;
use azure_ai_foundry_core::auth::FoundryCredential;
use azure_ai_foundry_models::audio::{SpeechRequest, speak};

// Runs in an async context, with `client` built as in the Chat Completions example.
let request = SpeechRequest::builder()
    .model("tts-1")
    .input("Hello, world!")
    .voice("alloy")
    .build();

let audio = speak(&client, &request).await?;
std::fs::write("output.mp3", &audio)?;

§Image Generation

use azure_ai_foundry_core::client::FoundryClient;
use azure_ai_foundry_core::auth::FoundryCredential;
use azure_ai_foundry_models::images::{ImageGenerationRequest, ImageSize, generate};

// Runs in an async context, with `client` built as in the Chat Completions example.
let request = ImageGenerationRequest::builder()
    .model("dall-e-3")
    .prompt("A futuristic city at sunset")
    .size(ImageSize::S1024x1024)
    .build();

let response = generate(&client, &request).await?;
if let Some(url) = response.data.first().and_then(|d| d.url.as_ref()) {
    println!("Image: {}", url);
}

§Responses API

use azure_ai_foundry_core::client::FoundryClient;
use azure_ai_foundry_core::auth::FoundryCredential;
use azure_ai_foundry_models::responses::{CreateResponseRequest, create};

// Runs in an async context, with `client` built as in the Chat Completions example.
let request = CreateResponseRequest::builder()
    .model("gpt-4o")
    .input("What is Rust?")
    .build();

let response = create(&client, &request).await?;
if let Some(text) = response.output_text() {
    println!("{}", text);
}

§Modules

  • chat — Chat completions API with sync and streaming support
  • embeddings — Vector embeddings generation
  • audio — Transcription, translation, and text-to-speech
  • images — Image generation and editing
  • responses — Unified Responses API (create, get, delete)

§License

This project is licensed under the MIT License.
