Runtime Extensions for AWS Lambda in Rust

lambda-extension is a library that makes it easy to write AWS Lambda Runtime Extensions in Rust. It also helps with using the Lambda Logs and Telemetry APIs.
Example extensions
Simple extension
The code below creates a simple extension that is registered for every INVOKE and SHUTDOWN event.
use lambda_extension::{service_fn, Error, LambdaEvent, NextEvent};

async fn my_extension(event: LambdaEvent) -> Result<(), Error> {
    match event.next {
        NextEvent::Shutdown(_e) => {
            // do something with the shutdown event
        }
        NextEvent::Invoke(_e) => {
            // do something with the invoke event
        }
    }
    Ok(())
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    // Initialize tracing so the extension's own logs end up in CloudWatch.
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::INFO)
        // disable printing the name of the module in every log line
        .with_target(false)
        // CloudWatch adds the ingestion time, so timestamps can be omitted
        .without_time()
        .init();

    let func = service_fn(my_extension);
    lambda_extension::run(func).await
}
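lambda_extension::run registers the extension for both event types. If you only need one of them, the same kind of handler can be wired up through the Extension builder with an explicit event list. The sketch below assumes the builder exposes with_events and with_events_processor, so verify the exact method signatures against the crate documentation:
use lambda_extension::{service_fn, Error, Extension, LambdaEvent};

async fn on_shutdown(_event: LambdaEvent) -> Result<(), Error> {
    // Flush buffers, close connections, etc. before the execution environment shuts down.
    Ok(())
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    // Sketch: subscribe only to SHUTDOWN events (with_events is an assumed builder method).
    Extension::new()
        .with_events(&["SHUTDOWN"])
        .with_events_processor(service_fn(on_shutdown))
        .run()
        .await
}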
Log processor extension
The extension below subscribes to the Lambda Logs API and processes every batch of log records that the Lambda platform delivers to it.
use lambda_extension::{service_fn, Error, Extension, LambdaLog, LambdaLogRecord, SharedService};
use tracing::info;

async fn handler(logs: Vec<LambdaLog>) -> Result<(), Error> {
    for log in logs {
        match log.record {
            LambdaLogRecord::Function(record) => {
                // do something with the function log record
                info!("[function] {}", record);
            },
            LambdaLogRecord::Extension(record) => {
                // do something with the extension log record
                info!("[extension] {}", record);
            },
            _ => (),
        }
    }
    Ok(())
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    let logs_processor = SharedService::new(service_fn(handler));
    Extension::new()
        .with_logs_processor(logs_processor)
        .run()
        .await?;
    Ok(())
}
Telemetry processor extension
The extension below subscribes to the Lambda Telemetry API and processes function log lines and platform lifecycle events.
use lambda_extension::{service_fn, Error, Extension, LambdaTelemetry, LambdaTelemetryRecord, SharedService};
use tracing::info;

async fn handler(events: Vec<LambdaTelemetry>) -> Result<(), Error> {
    for event in events {
        match event.record {
            LambdaTelemetryRecord::Function(record) => {
                // do something with the function log record
                info!("[function] {}", record);
            },
            LambdaTelemetryRecord::PlatformInitStart {
                initialization_type: _,
                phase: _,
                runtime_version: _,
                runtime_version_arn: _,
            } => {
                // do something with the platform init start record
                info!("[platform] init start");
            },
            _ => (),
        }
    }
    Ok(())
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    let telemetry_processor = SharedService::new(service_fn(handler));
    Extension::new()
        .with_telemetry_processor(telemetry_processor)
        .run()
        .await?;
    Ok(())
}
Deployment
Lambda extensions can be added to your functions either by using Lambda layers or by including them in container images.
Regardless of how you deploy them, extensions MUST be compiled for the same architecture that your Lambda functions run on.
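If you are not sure which architecture an existing function uses, the AWS CLI can report it (my-function below is a placeholder name):
aws lambda get-function-configuration --function-name my-function --query Architectures
This prints ["x86_64"] or ["arm64"], which tells you whether you need the --arm64 flag described below.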
Building extensions
cargo lambda build --release --extension
If you want to run the extension on ARM processors, add the --arm64 flag to the previous command:
cargo lambda build --release --extension --arm64
The previous command generates a binary file called basic in target/lambda/extensions. When the extension is registered with the Runtime Extensions API, that is the name it will be registered with. If you want to register the extension under a different name, simply rename this binary file and deploy it with the new name.
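For example, to register it as my-extension instead (a purely illustrative name), rename the binary before deploying:
mv target/lambda/extensions/basic target/lambda/extensions/my-extension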
Deploying extensions
- Make sure you have the right credentials in your terminal by running the AWS CLI configure command:
aws configure
- Deploy the extension as a layer with:
cargo lambda deploy --extension
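Publishing the layer does not attach it to any function. One way to attach it, using the AWS CLI (my-function is a placeholder, and the layer version ARN is the one reported for your deployment):
aws lambda update-function-configuration --function-name my-function --layers <layer-version-arn>
Note that --layers replaces the function's entire layer list, so include any layers that are already attached.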