
Module lambda


AWS Lambda Inference Deployment

Deploy and manage ML models on AWS Lambda for serverless, pay-per-use inference.

§Features

  • Model packaging with Docker/OCI containers
  • Cold start optimization with provisioned concurrency
  • Automatic scaling with Lambda’s built-in capabilities
  • Integration with Pacha registry for model artifacts
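A minimal end-to-end sketch of how these pieces could fit together; the field names, the `LambdaDeployer::new` constructor, and the `deploy` signature below are illustrative assumptions, not this module's confirmed API:

```rust
use lambda::{LambdaArchitecture, LambdaConfig, LambdaDeployer, LambdaRuntime};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Assumed field names; LambdaConfig is also assumed to implement Default.
    let config = LambdaConfig {
        function_name: "sentiment-inference".to_string(),
        memory_mb: 2048,
        timeout_secs: 30,
        architecture: LambdaArchitecture::Arm64,  // assumed variant
        runtime: LambdaRuntime::ProvidedAl2,      // assumed variant (custom runtime)
        provisioned_concurrency: Some(2),         // cold start mitigation (see features)
        ..Default::default()
    };

    // Assumed constructor and deploy method; the artifact URI format is illustrative.
    let deployer = LambdaDeployer::new(config)?;
    let status = deployer.deploy("pacha://models/sentiment:v1")?;
    println!("deployment status: {status:?}");
    Ok(())
}
```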

§Toyota Way Principles

  • Muda Elimination: Pay only for actual inference compute
  • Heijunka: Automatic scaling levels inference load
  • Jidoka: Built-in error handling and retry logic

Structs§

  • DeploymentEstimate: Deployment estimate
  • InferenceRequest: Lambda inference request
  • InferenceResponse: Lambda inference response
  • LambdaClient: Lambda inference client
  • LambdaConfig: Lambda function configuration for inference
  • LambdaDeployer: Lambda deployment manager
  • VpcConfig: VPC configuration for Lambda
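
A hedged sketch of the invocation path, assuming a `LambdaClient::new` constructor, an async `invoke` method, and a JSON `payload` field on `InferenceRequest` (none of which are confirmed by this page):

```rust
use lambda::{InferenceRequest, InferenceResponse, LambdaClient};

async fn classify(text: &str) -> Result<InferenceResponse, Box<dyn std::error::Error>> {
    // Hypothetical constructor; the target function name is illustrative.
    let client = LambdaClient::new("sentiment-inference")?;

    let request = InferenceRequest {
        // Assumed field: the serialized model input.
        payload: serde_json::json!({ "text": text }),
        ..Default::default()
    };

    // Assumed async invoke; the real signature may differ.
    let response = client.invoke(request).await?;
    Ok(response)
}
```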

Enums§

  • ConfigError: Configuration error
  • DeploymentError: Deployment error
  • DeploymentStatus: Deployment status
  • LambdaArchitecture: Lambda architecture
  • LambdaRuntime: Lambda runtime environment
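
The status and error enums suggest a polling loop along these lines; the variant names and the `status` method are assumptions for illustration only:

```rust
use lambda::{DeploymentError, DeploymentStatus, LambdaDeployer};

fn wait_for_ready(deployer: &LambdaDeployer, function: &str) -> Result<(), DeploymentError> {
    loop {
        // Hypothetical status-query method; the real API may expose this differently.
        match deployer.status(function)? {
            // Assumed variants: Active means the function is serving traffic.
            DeploymentStatus::Active => return Ok(()),
            DeploymentStatus::Failed => {
                // Assumed error variant; surface the failure to the caller.
                return Err(DeploymentError::DeploymentFailed(function.to_string()));
            }
            // Any other status is treated as still in progress.
            _ => std::thread::sleep(std::time::Duration::from_secs(5)),
        }
    }
}
```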