# 🍁 Maple Proxy
A lightweight OpenAI-compatible proxy server for Maple/OpenSecret's TEE infrastructure. Works with **any** OpenAI client library while providing the security and privacy benefits of Trusted Execution Environment (TEE) processing.
## 🚀 Features
- **100% OpenAI Compatible** - Drop-in replacement for OpenAI API
- **Secure TEE Processing** - All requests processed in secure enclaves
- **Streaming Support** - Full Server-Sent Events streaming for chat completions
- **Flexible Authentication** - Environment variables or per-request API keys
- **Zero Client Changes** - Works with existing OpenAI client code (see the snippet below)
- **Lightweight** - Minimal overhead, maximum performance
- **CORS Support** - Ready for web applications
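
Because the proxy speaks the OpenAI wire format, pointing existing client code at it is a one-line change. A minimal sketch (the key and port are placeholders matching the examples later in this README):

```python
from openai import OpenAI

# Unchanged OpenAI client code: only the base URL (and your Maple key) differ.
client = OpenAI(api_key="YOUR_MAPLE_API_KEY", base_url="http://localhost:8080/v1")
```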
## 📦 Installation
### As a Binary
```bash
git clone https://github.com/opensecretcloud/maple-proxy
cd maple-proxy
cargo build --release
```
### As a Library
Add to your `Cargo.toml`:
```toml
[dependencies]
maple-proxy = { git = "https://github.com/opensecretcloud/maple-proxy" }
# Or if published to crates.io:
# maple-proxy = "0.1.0"
```
## ⚙️ Configuration
Set environment variables or use command-line arguments:
```bash
# Environment Variables
export MAPLE_HOST=127.0.0.1                     # Server host (default: 127.0.0.1)
export MAPLE_PORT=3000                          # Server port (default: 3000)
export MAPLE_BACKEND_URL=http://localhost:3000  # Maple backend URL (prod: https://enclave.trymaple.ai)
export MAPLE_API_KEY=your-maple-api-key         # Default API key (optional)
export MAPLE_DEBUG=true                         # Enable debug logging
export MAPLE_ENABLE_CORS=true                   # Enable CORS
```
Or use CLI arguments:
```bash
cargo run -- --host 0.0.0.0 --port 8080 --backend-url https://enclave.trymaple.ai
```
## 🛠️ Usage
### Using as a Binary
#### Start the Server
```bash
cargo run
```
You should see:
```
🚀 Maple Proxy Server started successfully!
📍 Available endpoints:
   GET  /health               - Health check
   GET  /v1/models            - List available models
   POST /v1/chat/completions  - Create chat completions (streaming)
```
### API Endpoints
#### List Models
```bash
curl http://localhost:8080/v1/models \
  -H "Authorization: Bearer YOUR_MAPLE_API_KEY"
```
#### Chat Completions (Streaming)
```bash
curl -N http://localhost:8080/v1/chat/completions \
  -H "Authorization: Bearer YOUR_MAPLE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3-3-70b",
    "messages": [
      {"role": "user", "content": "Write a haiku about technology"}
    ],
    "stream": true
  }'
```
**Note:** Maple currently supports streaming responses only, so chat completion requests should set `"stream": true`.
### Using as a Library
You can also embed Maple Proxy in your own Rust application:
```rust
use maple_proxy::{Config, create_app};
use tokio::net::TcpListener;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Initialize tracing
    tracing_subscriber::fmt::init();

    // Create config programmatically
    let config = Config::new(
        "127.0.0.1".to_string(),
        8081, // Custom port
        "https://enclave.trymaple.ai".to_string(),
    )
    .with_api_key("your-api-key-here".to_string())
    .with_debug(true)
    .with_cors(true);

    // Create the app
    let app = create_app(config.clone());

    // Start the server
    let addr = config.socket_addr()?;
    let listener = TcpListener::bind(addr).await?;
    println!("Maple proxy server running on http://{}", addr);
    axum::serve(listener, app).await?;

    Ok(())
}
```
Run the example:
```bash
cargo run --example library_usage
```
## 💻 Client Examples
### Python (OpenAI Library)
```python
import openai

client = openai.OpenAI(
    api_key="YOUR_MAPLE_API_KEY",
    base_url="http://localhost:8080/v1"
)

# Streaming chat completion
stream = client.chat.completions.create(
    model="llama3-3-70b",
    messages=[{"role": "user", "content": "Hello, world!"}],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
```
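
Since the backend is streaming-only (see the note above), there is no non-streaming call to fall back on. If you want the complete reply as a single string, accumulate the deltas yourself; a small sketch reusing the `client` from the example above:

```python
# Collect the full assistant message from the stream.
stream = client.chat.completions.create(
    model="llama3-3-70b",
    messages=[{"role": "user", "content": "Hello, world!"}],
    stream=True,
)
full_text = "".join(
    chunk.choices[0].delta.content or ""
    for chunk in stream
    if chunk.choices
)
print(full_text)
```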
### JavaScript/Node.js
```javascript
import OpenAI from 'openai';

const openai = new OpenAI({
  apiKey: 'YOUR_MAPLE_API_KEY',
  baseURL: 'http://localhost:8080/v1',
});

const stream = await openai.chat.completions.create({
  model: 'llama3-3-70b',
  messages: [{ role: 'user', content: 'Hello!' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '');
}
```
### cURL
```bash
# Health check
curl http://localhost:8080/health

# List models
curl http://localhost:8080/v1/models \
  -H "Authorization: Bearer YOUR_MAPLE_API_KEY"

# Streaming chat completion
curl -N http://localhost:8080/v1/chat/completions \
  -H "Authorization: Bearer YOUR_MAPLE_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3-3-70b",
    "messages": [{"role": "user", "content": "Tell me a joke"}],
    "stream": true
  }'
```
## 🔐 Authentication
Maple Proxy supports two authentication methods:
### 1. Environment Variable (Default)
Set `MAPLE_API_KEY` - all requests will use this key by default:
```bash
export MAPLE_API_KEY=your-maple-api-key
cargo run
```
### 2. Per-Request Authorization Header
Override the default key or provide one if not set:
```bash
curl -H "Authorization: Bearer different-api-key" ...
```
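
From client code, the per-request key is simply whatever key the client was constructed with. For example, two users can share one proxy instance while keeping separate Maple keys (the key values below are placeholders):

```python
from openai import OpenAI

# Each client sends its own Authorization: Bearer header, which the
# proxy forwards in place of any MAPLE_API_KEY default.
alice = OpenAI(api_key="alice-maple-api-key", base_url="http://localhost:8080/v1")
bob = OpenAI(api_key="bob-maple-api-key", base_url="http://localhost:8080/v1")
```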
## 🌐 CORS Support
Enable CORS for web applications:
```bash
export MAPLE_ENABLE_CORS=true
cargo run
```
## 🐳 Docker Deployment
### Quick Start with Pre-built Image
Pull and run the official image from GitHub Container Registry:
```bash
# Pull the latest image
docker pull ghcr.io/opensecretcloud/maple-proxy:latest
# Run the proxy against the production backend
# (clients supply their own API keys per request; see the security note below)
docker run -p 8080:8080 \
  -e MAPLE_BACKEND_URL=https://enclave.trymaple.ai \
  ghcr.io/opensecretcloud/maple-proxy:latest
```
### Build from Source
```bash
# Build the image locally
just docker-build
# Run the container
just docker-run
```
### Production Docker Setup
1. **Option A: Use pre-built image from GHCR**
```yaml
# In your docker-compose.yml service definition:
image: ghcr.io/opensecretcloud/maple-proxy:latest
```
2. **Option B: Build your own image**
```bash
docker build -t maple-proxy:latest .
```
3. **Run with docker-compose:**
```bash
cp .env.example .env
vim .env
docker-compose up -d
```
### 🔒 Security Note for Public Deployments
When deploying Maple Proxy on a public network:
- **DO NOT** set `MAPLE_API_KEY` in the container environment
- Instead, require clients to pass their API key with each request:
```python
from openai import OpenAI

# Client-side authentication for public proxy
client = OpenAI(
    base_url="https://your-proxy.example.com/v1",
    api_key="user-specific-maple-api-key"  # Each user provides their own key
)
```
This ensures:
- Users' API keys remain private
- Multiple users can share the same proxy instance
- No API keys are exposed in container configurations
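
On the client side, prefer reading the key from the user's own environment instead of hardcoding it; a sketch (the environment variable name is just a convention, and the proxy URL is a placeholder):

```python
import os
from openai import OpenAI

# The key lives in the user's environment, never in the proxy deployment.
client = OpenAI(
    api_key=os.environ["MAPLE_API_KEY"],
    base_url="https://your-proxy.example.com/v1",
)
```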
### Docker Commands
```bash
# Build image
just docker-build
# Run interactively
just docker-run
# Run in background
just docker-run-detached
# View logs
just docker-logs
# Stop container
just docker-stop
# Use docker-compose
just compose-up
just compose-logs
just compose-down
```
### Container Configuration
The Docker image:
- Uses multi-stage builds for minimal size (~130MB)
- Runs as non-root user for security
- Includes health checks
- Optimizes dependency caching with cargo-chef
- Supports both x86_64 and ARM architectures
### Environment Variables for Docker
```yaml
# docker-compose.yml environment section
environment:
  - MAPLE_BACKEND_URL=https://enclave.trymaple.ai  # Production backend
  - MAPLE_ENABLE_CORS=true                         # Enable for web apps
  - RUST_LOG=info                                  # Logging level
  # - MAPLE_API_KEY=xxx                            # Only for private deployments!
```
## 🔧 Development
### Docker Images & CI/CD
**Automated Builds (GitHub Actions)**
- Every push to `master` automatically builds and publishes to `ghcr.io/opensecretcloud/maple-proxy:latest`
- Git tags (e.g., `v1.0.0`) trigger versioned releases
- Multi-platform images (linux/amd64, linux/arm64) built automatically
- No manual intervention needed - just push your code!
**Local Development (Justfile)**
```bash
# For local testing and debugging
just docker-build      # Build locally
just docker-run        # Test locally
just ghcr-push v1.2.3  # Manual push (requires login)
```
Use GitHub Actions for production releases, Justfile for local development.
### Build from Source
```bash
cargo build
```
### Run with Debug Logging
```bash
export MAPLE_DEBUG=true
cargo run
```
### Run Tests
```bash
cargo test
```
## 📋 Supported Models
Maple Proxy supports all models available in the Maple/OpenSecret platform, including:
- `llama3-3-70b` - Llama 3.3 70B parameter model
- And many others - check the `/v1/models` endpoint for the current list, as shown below
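
Any OpenAI client can query that endpoint directly; for example, in Python (placeholder key, the example port used throughout this README):

```python
from openai import OpenAI

client = OpenAI(api_key="YOUR_MAPLE_API_KEY", base_url="http://localhost:8080/v1")

# Print the ID of every model the proxy currently exposes.
for model in client.models.list():
    print(model.id)
```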
## 🐛 Troubleshooting
### Common Issues
**"No API key provided"**
- Set `MAPLE_API_KEY` environment variable or provide `Authorization: Bearer <key>` header
**"Failed to establish secure connection"**
- Check your `MAPLE_BACKEND_URL` is correct
- Ensure your API key is valid
- Check network connectivity
**Connection refused**
- Make sure the server is running on the specified host/port
- Check firewall settings
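
For a quick scripted check of connectivity and credentials, something like the following helps narrow down which step fails (assumes the `requests` package and the example port 8080):

```python
import requests

BASE = "http://localhost:8080"

# Step 1: is the proxy itself reachable?
print("health:", requests.get(f"{BASE}/health", timeout=5).status_code)

# Step 2: does the backend accept your key? A 401/403 here usually
# means the key is missing or invalid rather than a network problem.
resp = requests.get(
    f"{BASE}/v1/models",
    headers={"Authorization": "Bearer YOUR_MAPLE_API_KEY"},
    timeout=5,
)
print("models:", resp.status_code, resp.text[:200])
```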
### Debug Mode
Enable debug logging for detailed information:
```bash
export MAPLE_DEBUG=true
cargo run
```
## 🏗️ Architecture
```
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│  OpenAI Client  │────▶│   Maple Proxy   │────▶│  Maple Backend  │
│   (Python/JS)   │     │   (localhost)   │     │      (TEE)      │
└─────────────────┘     └─────────────────┘     └─────────────────┘
```
1. **Client** makes standard OpenAI API calls to localhost
2. **Maple Proxy** handles authentication and TEE handshake
3. **Requests** are securely forwarded to Maple's TEE infrastructure
4. **Responses** are streamed back to the client in OpenAI format
## 📄 License
MIT License - see LICENSE file for details.
## 🤝 Contributing
Contributions welcome! Please feel free to submit a Pull Request.