# ObjectiveAI API Server

Score everything. Rank everything. Simulate anyone.

A self-hostable API server for ObjectiveAI - run the full ObjectiveAI platform locally, or use the library to build your own custom server.

Website | API | GitHub | Discord
## Overview

This crate provides two ways to use the ObjectiveAI API:

- **Run the server** - start a local instance of the ObjectiveAI API
- **Import as a library** - build your own server with custom authentication, routing, or middleware
## Running Locally

### Prerequisites

- Rust (latest stable)
- An OpenRouter API key (for LLM access)
- Optionally, an ObjectiveAI API key (for Profile Computation)
### Quick Start

```sh
# Clone the repository and enter it
git clone <repository-url>
cd <repository-directory>

# Create a .env file with your OpenRouter key
echo 'OPENROUTER_API_KEY=<your-key>' > .env

# Run the server
cargo run --release
```

The server starts on `http://localhost:5000` by default.
### Environment Variables

| Variable | Default | Description |
|---|---|---|
| `OPENROUTER_API_KEY` | (required) | Your OpenRouter API key |
| `OBJECTIVEAI_API_KEY` | (optional) | ObjectiveAI API key for caching and remote Functions |
| `OBJECTIVEAI_API_BASE` | `https://api.objective-ai.io` | ObjectiveAI API base URL |
| `OPENROUTER_API_BASE` | `https://openrouter.ai/api/v1` | OpenRouter API base URL |
| `ADDRESS` | `0.0.0.0` | Server bind address |
| `PORT` | `5000` | Server port |
| `USER_AGENT` | (optional) | User agent for upstream requests |
| `HTTP_REFERER` | (optional) | HTTP referer for upstream requests |
| `X_TITLE` | (optional) | `X-Title` header for upstream requests |
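For local development, these variables are typically supplied via a `.env` file. A minimal example (all values are placeholders):

```
# Required: OpenRouter key for LLM access
OPENROUTER_API_KEY=<your-openrouter-key>

# Optional: enables caching and remote Functions
# OBJECTIVEAI_API_KEY=<your-objectiveai-key>

# Bind to localhost only, on a custom port
ADDRESS=127.0.0.1
PORT=8080
```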
#### Backoff Configuration

| Variable | Default | Description |
|---|---|---|
| `CHAT_COMPLETIONS_BACKOFF_INITIAL_INTERVAL` | `100` | Initial retry interval (ms) |
| `CHAT_COMPLETIONS_BACKOFF_MAX_INTERVAL` | `1000` | Maximum retry interval (ms) |
| `CHAT_COMPLETIONS_BACKOFF_MAX_ELAPSED_TIME` | `40000` | Maximum total retry time (ms) |
| `CHAT_COMPLETIONS_BACKOFF_MULTIPLIER` | `1.5` | Backoff multiplier |
| `CHAT_COMPLETIONS_BACKOFF_RANDOMIZATION_FACTOR` | `0.5` | Randomization factor |
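With these defaults, retries start at 100 ms, grow by 1.5x per attempt up to a 1000 ms cap, and stop once roughly 40 seconds of retrying have elapsed. A rough sketch of the resulting schedule (the jitter applied by the 0.5 randomization factor is omitted for clarity):

```python
def backoff_schedule(initial=100.0, max_interval=1000.0,
                     max_elapsed=40000.0, multiplier=1.5):
    """Deterministic retry intervals (ms) implied by the defaults above.

    The real client also applies a randomization factor of 0.5,
    jittering each interval by +/-50%; that is omitted here.
    """
    schedule, elapsed, interval = [], 0.0, initial
    while elapsed + interval <= max_elapsed:
        schedule.append(interval)
        elapsed += interval
        interval = min(interval * multiplier, max_interval)
    return schedule

sched = backoff_schedule()
# First attempts: 100, 150, 225, 337.5, ... then capped at 1000 ms
```

Raising `CHAT_COMPLETIONS_BACKOFF_MAX_ELAPSED_TIME` lengthens the tail of 1000 ms retries; raising the multiplier reaches the cap sooner.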
## Using as a Library

Add the crate to your `Cargo.toml` (the package name below is a placeholder - use the crate's published name):

```toml
[dependencies]
# Replace with the actual published crate name
objectiveai-api-server = "0.1.0"
```
### Example: Custom Server

The type and constructor names below are illustrative placeholders - consult the crate documentation for the exact paths:

```rust
use std::sync::Arc;

// Create your HTTP client
let http_client = reqwest::Client::new();

// Create the ObjectiveAI HTTP client (name illustrative)
let objectiveai_client = Arc::new(ObjectiveAiClient::new(http_client));

// Build the component stack (names illustrative)
let ensemble_llm_fetcher = EnsembleLlmFetcher::new(objectiveai_client.clone());
let chat_client = ChatCompletionsClient::new(ensemble_llm_fetcher);

// Use `chat_client` in your own Axum/Actix/Warp routes
```
## Architecture

### Modules

| Module | Description |
|---|---|
| `auth` | Authentication and API key management |
| `chat` | Chat completions with Ensemble LLMs |
| `vector` | Vector completions for scoring and ranking |
| `functions` | Function execution and Profile management |
| `ensemble` | Ensemble management and caching |
| `ensemble_llm` | Ensemble LLM management and caching |
| `ctx` | Request context for dependency injection |
| `error` | Error response handling |
| `util` | Utilities for streaming and indexing |
### Component Stack

```text
Request
│
▼
┌─────────────────────────────────────────────────┐
│ Functions Client                                │
│ - Executes Function pipelines                   │
│ - Handles Profile weights                       │
└─────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────┐
│ Vector Completions Client                       │
│ - Runs ensemble voting                          │
│ - Combines votes into scores                    │
└─────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────┐
│ Chat Completions Client                         │
│ - Sends prompts to individual LLMs              │
│ - Handles retries and backoff                   │
└─────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────┐
│ Upstream Client (OpenRouter)                    │
│ - Actual LLM API calls                          │
└─────────────────────────────────────────────────┘
```
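As a rough illustration of the Vector Completions layer's role, combining per-model votes into a score can be thought of as a weighted mean. This is only a sketch - the real weighting scheme lives in the `vector` module and is not reproduced here:

```python
def combine_votes(votes):
    """votes: list of (score, weight) pairs -> weighted mean score.

    Hypothetical simplification of ensemble vote aggregation;
    the actual combination logic belongs to the `vector` module.
    """
    total_weight = sum(w for _, w in votes)
    if total_weight == 0:
        return 0.0
    return sum(s * w for s, w in votes) / total_weight

# Two models vote 1.0, one votes 0.0; the first carries double weight
score = combine_votes([(1.0, 2.0), (0.0, 1.0), (1.0, 1.0)])  # 0.75
```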
### Customization Points

Each layer uses traits for dependency injection:

- **Fetchers** - implement custom caching or data sources for Ensembles, Functions, and Profiles
- **Usage Handlers** - track usage, billing, or analytics
- **Context Extensions** - add per-request state (authentication, BYOK keys, etc.)
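To make the injection pattern concrete, here is a hypothetical in-memory fetcher. The trait name and signature are invented for illustration - the crate's real fetcher traits are async and differ in shape:

```rust
use std::collections::HashMap;

// Hypothetical trait shape; the crate's actual fetcher traits
// are async and have different signatures.
trait EnsembleFetcher {
    fn fetch(&self, id: &str) -> Option<String>;
}

// A custom in-memory source, e.g. a test double or a caching
// layer placed in front of the ObjectiveAI API.
struct InMemoryEnsembles {
    ensembles: HashMap<String, String>,
}

impl EnsembleFetcher for InMemoryEnsembles {
    fn fetch(&self, id: &str) -> Option<String> {
        self.ensembles.get(id).cloned()
    }
}
```

Swapping such an implementation into the component stack changes where Ensembles come from without touching the layers above it.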
## API Endpoints

### Chat Completions

- `POST /chat/completions` - Create a chat completion
### Vector Completions

- `POST /vector/completions` - Create a vector completion
- `POST /vector/completions/{id}` - Get completion votes
- `POST /vector/completions/cache` - Get a cached vote
### Functions

- `GET /functions` - List functions
- `GET /functions/{owner}/{repo}` - Get a function
- `POST /functions/{owner}/{repo}` - Execute a remote function with an inline profile
### Profiles

- `GET /functions/profiles` - List profiles
- `GET /functions/profiles/{owner}/{repo}` - Get a profile
- `POST /functions/{owner}/{repo}/profiles/{owner}/{repo}` - Execute a remote function with a remote profile
- `POST /functions/profiles/compute` - Train a profile
### Ensembles

- `GET /ensembles` - List ensembles
- `GET /ensembles/{id}` - Get an ensemble
## License

MIT