§Rust AI Agents - Fastly Compute
Run AI agents on Fastly Compute@Edge.
This crate provides a Fastly-native implementation using:
- `cortexai-llm-client` for request/response logic
- the Fastly SDK for HTTP requests
§Usage
```rust
use cortexai_fastly::{FastlyAgent, FastlyAgentConfig};
use fastly::{Request, Response};

#[fastly::main]
fn main(_req: Request) -> Result<Response, fastly::Error> {
    // Configure the agent with a provider name, API key, and model.
    let config = FastlyAgentConfig::new(
        "openai",
        std::env::var("OPENAI_API_KEY").unwrap(),
        "gpt-4o-mini",
    );
    // "llm-backend" is the name of the Fastly backend for the LLM API.
    let mut agent = FastlyAgent::new(config, "llm-backend");
    let response = agent.chat("Hello!")?;
    Ok(Response::from_body(response.content))
}
```

Structs§
- `FastlyAgent` - AI Agent for Fastly Compute
- `FastlyAgentConfig` - Configuration for the Fastly agent
- `StreamingResponse` - Streaming response iterator
Enums§
- `FastlyAgentError` - Errors that can occur in the Fastly agent
Functions§
- `handle_chat_request` - Create a simple chat completion handler for Fastly
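For simple proxy-style services, `handle_chat_request` suggests a one-call alternative to constructing a `FastlyAgent` manually. A minimal sketch, assuming the helper takes the incoming `Request`, a `FastlyAgentConfig`, and a backend name and returns `Result<Response, FastlyAgentError>` (this signature and the error conversion are assumptions, not confirmed by these docs):

```rust
use cortexai_fastly::{handle_chat_request, FastlyAgentConfig};
use fastly::{Request, Response};

#[fastly::main]
fn main(req: Request) -> Result<Response, fastly::Error> {
    let config = FastlyAgentConfig::new(
        "openai",
        std::env::var("OPENAI_API_KEY").unwrap(),
        "gpt-4o-mini",
    );
    // Hypothetical call: parameter order is an assumption. The error is
    // converted via Display, since fastly::Error is a re-export of
    // anyhow::Error and FastlyAgentError's trait impls are not documented here.
    let resp = handle_chat_request(req, config, "llm-backend")
        .map_err(|e| fastly::Error::msg(e.to_string()))?;
    Ok(resp)
}
```

Check the function's actual signature on its item page before relying on this shape.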