# Maple Proxy
A lightweight OpenAI-compatible proxy server for Maple/OpenSecret's TEE infrastructure. Works with any OpenAI client library while providing the security and privacy benefits of Trusted Execution Environment (TEE) processing.
## Features

- **100% OpenAI Compatible** - Drop-in replacement for the OpenAI API
- **Secure TEE Processing** - All requests processed in secure enclaves
- **Streaming Support** - Full Server-Sent Events streaming for chat completions
- **Flexible Authentication** - Environment variables or per-request API keys
- **Zero Client Changes** - Works with existing OpenAI client code
- **Lightweight** - Minimal overhead, maximum performance
- **CORS Support** - Ready for web applications
## Installation

### As a Binary
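With a Rust toolchain available, one straightforward option is installing the binary directly from the repository (the exact install flow isn't spelled out here, so this assumes a standard Cargo binary crate):

```bash
# Standard cargo install from a Git repository (assumed layout)
cargo install --git https://github.com/opensecretcloud/maple-proxy
```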
### As a Library

Add to your `Cargo.toml`:

```toml
[dependencies]
maple-proxy = { git = "https://github.com/opensecretcloud/maple-proxy" }
# Or if published to crates.io:
# maple-proxy = "0.1.0"
```
## Configuration

Set environment variables or use command-line arguments:

```bash
# Server host (default: 127.0.0.1) and server port (default: 3000)
# are also configurable; see the server's --help output for the exact names.

# Maple backend URL (prod: https://enclave.trymaple.ai)
export MAPLE_BACKEND_URL=https://enclave.trymaple.ai

# Default API key (optional)
export MAPLE_API_KEY=your-api-key

# Enable debug logging
export RUST_LOG=debug

# Enable CORS
export MAPLE_ENABLE_CORS=true
```
Or use CLI arguments:
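The individual flag names aren't reproduced here; the binary's help output lists them (binary name assumed):

```bash
maple-proxy --help
```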
## Usage

### Using as a Binary

#### Start the Server
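A minimal way to launch the proxy from a checkout of the repository (the installed binary name is assumed):

```bash
# From the repository root
cargo run --release

# Or, with the binary installed
MAPLE_API_KEY=your-api-key maple-proxy
```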
You should see:

```text
Maple Proxy Server started successfully!
Available endpoints:
  GET  /health              - Health check
  GET  /v1/models           - List available models
  POST /v1/chat/completions - Create chat completions (streaming)
```
### API Endpoints

#### List Models
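For example (default host/port; the Bearer scheme matches the Authentication section):

```bash
curl http://localhost:3000/v1/models \
  -H "Authorization: Bearer your-maple-api-key"
```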
#### Chat Completions (Streaming)

**Note:** Maple currently only supports streaming responses.
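A sketch of a streaming request (model name taken from the Supported Models section; `"stream": true` is required since only streaming is supported):

```bash
curl http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-maple-api-key" \
  -d '{
    "model": "llama3-3-70b",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": true
  }'
```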
### Using as a Library

You can also embed Maple Proxy in your own Rust application. The crate's public API isn't shown here, so the entry point below is a hypothetical sketch; the `TcpListener` import follows the original snippet:

```rust
// `run_server` is a hypothetical entry point; consult the crate docs for the real API.
use maple_proxy::run_server;
use tokio::net::TcpListener;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Bind a local listener and hand it to the proxy (sketch only).
    let listener = TcpListener::bind("127.0.0.1:3000").await?;
    run_server(listener).await?;
    Ok(())
}
```
Run the example:
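Assuming the repository ships a Cargo example (the example name below is a placeholder):

```bash
cargo run --example server   # example name is hypothetical; check the examples/ directory
```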
## Client Examples

### Python (OpenAI Library)

Point the standard OpenAI client at the proxy (default host/port; model name from the Supported Models section):

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:3000/v1",
    api_key="your-maple-api-key",
)

# Streaming chat completion
stream = client.chat.completions.create(
    model="llama3-3-70b",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
)
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```
### JavaScript/Node.js

The same pattern with the official Node client:

```javascript
import OpenAI from 'openai';

const openai = new OpenAI({
  baseURL: 'http://localhost:3000/v1',
  apiKey: 'your-maple-api-key',
});

const stream = await openai.chat.completions.create({
  model: 'llama3-3-70b',
  messages: [{ role: 'user', content: 'Hello!' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? '');
}
```
### cURL

```bash
# Health check
curl http://localhost:3000/health

# List models
curl http://localhost:3000/v1/models \
  -H "Authorization: Bearer your-maple-api-key"

# Streaming chat completion
curl http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-maple-api-key" \
  -d '{
    "model": "llama3-3-70b",
    "messages": [{"role": "user", "content": "Hello!"}],
    "stream": true
  }'
```
## Authentication

Maple Proxy supports two authentication methods:

### 1. Environment Variable (Default)

Set `MAPLE_API_KEY` - all requests will use this key by default:
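```bash
# Requests without an Authorization header fall back to this key
export MAPLE_API_KEY=your-maple-api-key
```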
### 2. Per-Request Authorization Header

Override the default key or provide one if not set:
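```bash
# A per-request Bearer header takes precedence over MAPLE_API_KEY
curl http://localhost:3000/v1/models \
  -H "Authorization: Bearer another-maple-api-key"
```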
## CORS Support

Enable CORS for web applications:
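```bash
# Same variable as in the Docker examples below
export MAPLE_ENABLE_CORS=true
```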
## Docker Deployment

### Quick Start with Pre-built Image

Pull and run the official image from GitHub Container Registry:

```bash
# Pull the latest image
docker pull ghcr.io/opensecretcloud/maple-proxy:latest

# Run with your API key
docker run -p 3000:3000 -e MAPLE_API_KEY=your-api-key \
  ghcr.io/opensecretcloud/maple-proxy:latest
```
### Build from Source

```bash
# Build the image locally
docker build -t maple-proxy .

# Run the container
docker run -p 3000:3000 -e MAPLE_API_KEY=your-api-key maple-proxy
```
### Production Docker Setup

- Option A: Use the pre-built image from GHCR:

  ```yaml
  # In your docker-compose.yml, use:
  image: ghcr.io/opensecretcloud/maple-proxy:latest
  ```

- Option B: Build your own image with `docker-compose build`.

- Run with docker-compose:

  ```bash
  # Copy the example environment file (filename assumed)
  cp .env.example .env
  # Edit .env with your configuration
  # Start the service
  docker-compose up -d
  ```
### Security Note for Public Deployments

When deploying Maple Proxy on a public network:

- **DO NOT** set `MAPLE_API_KEY` in the container environment
- Instead, require clients to pass their API key with each request, as in this Python sketch:

```python
# Client-side authentication for public proxy
from openai import OpenAI

client = OpenAI(
    base_url="https://your-proxy.example.com/v1",  # your public proxy URL
    api_key="each-users-own-maple-api-key",        # supplied by the user, not the container
)
```

This ensures:
- Users' API keys remain private
- Multiple users can share the same proxy instance
- No API keys are exposed in container configurations
### Docker Commands

```bash
# Build image
docker build -t maple-proxy .

# Run interactively
docker run -it --rm -p 3000:3000 maple-proxy

# Run in background
docker run -d --name maple-proxy -p 3000:3000 maple-proxy

# View logs
docker logs -f maple-proxy

# Stop container
docker stop maple-proxy

# Use docker-compose
docker-compose up -d
```
### Container Configuration
The Docker image:
- Uses multi-stage builds for minimal size (~130MB)
- Runs as a non-root user for security
- Includes health checks
- Optimizes dependency caching with cargo-chef
- Supports both x86_64 and ARM architectures
### Environment Variables for Docker

```yaml
# docker-compose.yml environment section
environment:
  - MAPLE_BACKEND_URL=https://enclave.trymaple.ai  # Production backend
  - MAPLE_ENABLE_CORS=true                         # Enable for web apps
  - RUST_LOG=info                                  # Logging level
  # - MAPLE_API_KEY=xxx                            # Only for private deployments!
```
## Development

### Docker Images & CI/CD

#### Automated Builds (GitHub Actions)

- Every push to `master` automatically builds and publishes to `ghcr.io/opensecretcloud/maple-proxy:latest`
- Git tags (e.g., `v1.0.0`) trigger versioned releases
- Multi-platform images (linux/amd64, linux/arm64) are built automatically
- No manual intervention needed - just push your code!
#### Local Development (Justfile)

```bash
# For local testing and debugging; recipe names vary, so list them first
just --list
```

Use GitHub Actions for production releases and the Justfile for local development.
### Build from Source
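Standard Cargo build:

```bash
cargo build --release
```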
### Run with Debug Logging
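`RUST_LOG` controls verbosity, as elsewhere in this README:

```bash
RUST_LOG=debug cargo run
```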
### Run Tests
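The standard Cargo test runner applies:

```bash
cargo test
```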
## Supported Models

Maple Proxy supports all models available in the Maple/OpenSecret platform, including:

- `llama3-3-70b` - Llama 3.3 70B parameter model
- And many others - check the `/v1/models` endpoint for the current list
## Troubleshooting

### Common Issues

**"No API key provided"**
- Set the `MAPLE_API_KEY` environment variable or provide an `Authorization: Bearer <key>` header
"Failed to establish secure connection"
- Check your
MAPLE_BACKEND_URLis correct - Ensure your API key is valid
- Check network connectivity
**Connection refused**
- Make sure the server is running on the specified host/port
- Check firewall settings
### Debug Mode

Enable debug logging for detailed information:
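For example (binary name assumed; use `cargo run` from a checkout instead):

```bash
RUST_LOG=debug maple-proxy
```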
## Architecture

```text
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│  OpenAI Client  │────▶│   Maple Proxy   │────▶│  Maple Backend  │
│   (Python/JS)   │     │   (localhost)   │     │      (TEE)      │
└─────────────────┘     └─────────────────┘     └─────────────────┘
```
- Client makes standard OpenAI API calls to localhost
- Maple Proxy handles authentication and TEE handshake
- Requests are securely forwarded to Maple's TEE infrastructure
- Responses are streamed back to the client in OpenAI format
## License

MIT License - see LICENSE file for details.
## Contributing

Contributions welcome! Please feel free to submit a Pull Request.