# Halldyll Starter RunPod
A comprehensive Rust library for managing RunPod GPU pods with automatic provisioning, state management, and orchestration.
## Features
- REST API Client - Create, start, stop pods via RunPod REST API
- GraphQL Client - Full access to RunPod GraphQL API for advanced operations
- State Management - Persist pod state and compute idempotent action plans
- Orchestration - High-level pod management with automatic reconciliation
- Fully Configurable - All settings via environment variables (`.env` file)
- Strict Linting - Production-ready code with comprehensive lint rules
## Installation

### From crates.io (recommended)
```toml
[dependencies]
halldyll-starter = "0.1"
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
```
### From GitHub
```toml
[dependencies]
halldyll-starter = { git = "https://github.com/Mr-soloDev/halldyll-starter" }
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
```
## Configuration
Create a `.env` file in your project root:
```env
# Required
RUNPOD_API_KEY=your_api_key_here
RUNPOD_IMAGE_NAME=runpod/pytorch:2.1.0-py3.10-cuda11.8.0-devel
# Optional - Pod Configuration
RUNPOD_POD_NAME=my-gpu-pod
RUNPOD_GPU_TYPE_IDS=NVIDIA A40
RUNPOD_GPU_COUNT=1
RUNPOD_CONTAINER_DISK_GB=20
RUNPOD_VOLUME_GB=50
RUNPOD_VOLUME_MOUNT_PATH=/workspace
RUNPOD_PORTS=22/tcp,8888/http
# Optional - Timeouts
RUNPOD_HTTP_TIMEOUT_MS=30000
RUNPOD_READY_TIMEOUT_MS=300000
RUNPOD_POLL_INTERVAL_MS=5000
# Optional - API URLs
RUNPOD_REST_URL=https://rest.runpod.io/v1
RUNPOD_GRAPHQL_URL=https://api.runpod.io/graphql
# Optional - Behavior
RUNPOD_RECONCILE_MODE=reuse
```
### Environment Variables Reference

| Variable | Required | Default | Description |
|---|---|---|---|
| `RUNPOD_API_KEY` | ✓ | - | RunPod API key |
| `RUNPOD_IMAGE_NAME` | ✓ | - | Container image (e.g., `runpod/pytorch:2.1.0-py3.10-cuda11.8.0-devel`) |
| `RUNPOD_POD_NAME` | | `halldyll-pod` | Name for the pod |
| `RUNPOD_GPU_TYPE_IDS` | | `NVIDIA A40` | Comma-separated GPU types (e.g., `NVIDIA A40,NVIDIA RTX 4090`) |
| `RUNPOD_GPU_COUNT` | | `1` | Number of GPUs |
| `RUNPOD_CONTAINER_DISK_GB` | | `20` | Container disk size in GB |
| `RUNPOD_VOLUME_GB` | | `0` | Persistent volume size in GB (0 = no volume) |
| `RUNPOD_VOLUME_MOUNT_PATH` | | `/workspace` | Mount path for the persistent volume |
| `RUNPOD_PORTS` | | `22/tcp,8888/http` | Exposed ports (format: `port/protocol`) |
| `RUNPOD_HTTP_TIMEOUT_MS` | | `30000` | HTTP request timeout (ms) |
| `RUNPOD_READY_TIMEOUT_MS` | | `300000` | Pod ready timeout (ms) |
| `RUNPOD_POLL_INTERVAL_MS` | | `5000` | Poll interval for readiness checks (ms) |
| `RUNPOD_RECONCILE_MODE` | | `reuse` | `reuse` or `recreate` existing pods |
## Pod Naming & Multiple Pods
The orchestrator uses the pod name to identify and reuse existing pods:
- Same name → Reuses the existing pod (starts it if stopped)
- Different name → Creates a new pod
To run multiple pods simultaneously, simply use different names:
```env
# Development pod
RUNPOD_POD_NAME=dev-pod

# Production pod
RUNPOD_POD_NAME=prod-pod

# ML training pod
RUNPOD_POD_NAME=training-pod
```
Each unique name creates a separate pod on RunPod.
## Usage

### Quick Start with Orchestrator
The orchestrator provides the simplest way to get a ready-to-use pod:
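A minimal sketch of the flow; the `Orchestrator` type and its `from_env`/`ensure_ready` methods are illustrative stand-ins, so check the crate docs for the exact API:

```rust
use halldyll_starter::runpod_orchestrator::Orchestrator;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Load all settings (API key, image, GPU type, ...) from the .env file.
    let orchestrator = Orchestrator::from_env()?;

    // Reconcile: reuse the pod with the configured name if it exists,
    // create it otherwise, start it if stopped, and wait until it is ready.
    let pod = orchestrator.ensure_ready().await?;

    println!("Pod {} is ready", pod.id);
    Ok(())
}
```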
### Low-Level Provisioner
For direct pod creation:
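A sketch of direct creation, assuming a `Provisioner` type with a `create_pod` method (illustrative names):

```rust
use halldyll_starter::runpod_provisioner::Provisioner;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // The provisioner calls the RunPod REST API directly.
    let provisioner = Provisioner::from_env()?;

    // Always creates a new pod; unlike the orchestrator, it does not
    // look for an existing pod with the same name first.
    let pod = provisioner.create_pod().await?;

    println!("Created pod {}", pod.id);
    Ok(())
}
```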
### Pod Starter (Start/Stop)
For managing existing pods:
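A sketch assuming a `PodStarter` type with `start_pod`/`stop_pod` methods (illustrative names; the pod id comes from provisioning or from the RunPod console):

```rust
use halldyll_starter::runpod_starter::PodStarter;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let starter = PodStarter::from_env()?;

    // Start a stopped pod by its RunPod id, then stop it again.
    // Stopping keeps the pod (and any persistent volume) around for reuse.
    starter.start_pod("your-pod-id").await?;
    starter.stop_pod("your-pod-id").await?;

    Ok(())
}
```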
### GraphQL Client
For advanced operations:
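A sketch assuming a `GraphQlClient` type (an illustrative name); `list_gpu_types` is the call referenced in the GPU Types section below:

```rust
use halldyll_starter::runpod_client::GraphQlClient;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = GraphQlClient::from_env()?;

    // List every GPU type RunPod offers, with current availability.
    for gpu in client.list_gpu_types().await? {
        println!("{}", gpu.id);
    }

    Ok(())
}
```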
### State Management
For persistent state and reconciliation:
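A sketch assuming a `StateStore` type that persists the last-known pod locally (all names illustrative); this saved state is what lets the orchestrator compute an idempotent plan - reuse, start, or create:

```rust
use halldyll_starter::runpod_state::StateStore;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Persists the last-known pod (id, name, status) between runs so the
    // orchestrator can decide whether to reuse, start, or create a pod.
    let store = StateStore::from_env()?;

    match store.load()? {
        Some(state) => println!("Last known pod: {} ({})", state.pod_id, state.status),
        None => println!("No saved state yet"),
    }

    Ok(())
}
```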
## Modules

| Module | Description |
|---|---|
| `runpod_provisioner` | Create new pods via the REST API |
| `runpod_starter` | Start/stop existing pods via the REST API |
| `runpod_state` | State persistence and reconciliation |
| `runpod_client` | GraphQL client for advanced operations |
| `runpod_orchestrator` | High-level pod management |
## GPU Types
Common GPU types available on RunPod:
| GPU | ID |
|---|---|
| NVIDIA A40 | NVIDIA A40 |
| NVIDIA A100 80GB | NVIDIA A100 80GB PCIe |
| NVIDIA RTX 4090 | NVIDIA GeForce RTX 4090 |
| NVIDIA RTX 3090 | NVIDIA GeForce RTX 3090 |
| NVIDIA L40S | NVIDIA L40S |
Use `client.list_gpu_types()` to get the full list with availability.
## Running the Example
```bash
# Clone the project
git clone https://github.com/Mr-soloDev/halldyll-starter
cd halldyll-starter

# Create your .env file
# (see the Configuration section above)

# Edit .env with your API key and settings

# Run the example
cargo run
```
## Building
```bash
# Debug build
cargo build

# Release build
cargo build --release

# Check without building
cargo check

# Run with all lints
cargo clippy --all-targets -- -D warnings
```
## Contributing

- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
## License
MIT