> [!WARNING]
> Hadrian is experimental, alpha, vibe-coded software and is not ready for production use. The API, configuration format, and database schema are subject to breaking changes that will lead to data loss. Hadrian has not undergone a security audit. Do not expose it to untrusted networks or use it to handle sensitive data. We are not accepting pull requests at this time, but issues and discussions are welcome.
Why Hadrian?
- Single binary, single config. No complex deployments. Works on a Raspberry Pi or global cloud infrastructure.
- All features included. Multi-tenancy, SSO, RBAC, guardrails, semantic caching, cost forecasting. Everything is free.
- Production ready. Budget enforcement, rate limiting, circuit breakers, fallback chains, observability.
- Multi-model chat UI. Compare responses from multiple models side-by-side with 14 interaction modes.
- Built-in RAG. OpenAI-compatible Vector Stores API with document processing, chunking, and search.
- Studio. Image generation, TTS, transcription, and translation with multi-model execution.
Quick Start
See the Getting Started guide for a full walkthrough, or follow the steps below.
Download the latest binary from GitHub Releases and run it:
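For example, on Linux or macOS (the binary name is assumed to match the project name; adjust for your platform):

```shell
# Make the downloaded release binary executable and start the gateway
chmod +x hadrian
./hadrian
```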
Or use Docker:
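A minimal sketch; the image path below is an assumption, so check the project's releases for the published registry location:

```shell
# Run the published image and expose the default port
docker run -p 8080:8080 ghcr.io/hadrian/hadrian:latest
```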
To customize the configuration, create a hadrian.toml and mount it:
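For instance (both the image path and the in-container config path are assumptions; see the configuration docs for the real location):

```shell
# Mount a local hadrian.toml into the container
docker run -p 8080:8080 \
  -v "$(pwd)/hadrian.toml:/etc/hadrian/hadrian.toml" \
  ghcr.io/hadrian/hadrian:latest
```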
Or build from source (just required):
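Roughly (the repository URL and just recipe name are assumptions; substitute the real ones):

```shell
# Clone the repository and build with just
git clone <hadrian-repo-url>
cd hadrian
just build
```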
Or install from crates.io (find the latest version with cargo search hadrian):
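For example (assuming the binary installs under the crate name):

```shell
# Install the published crate and run it
cargo install hadrian
hadrian
```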
The gateway starts at http://localhost:8080 with the chat UI. No database required for basic use. Running without arguments creates ~/.config/hadrian/hadrian.toml with sensible defaults, uses SQLite, and opens the browser.
Configuration
# Minimal -- just add a provider
[providers.openai]
type = "open_ai"
api_key = "${OPENAI_API_KEY}"
# Multiple providers with fallback
[providers.anthropic]
type = "anthropic"
api_key = "${ANTHROPIC_API_KEY}"
fallback = ["openai"]

[providers.openai]
type = "open_ai"
api_key = "${OPENAI_API_KEY}"
Supports OpenAI, Anthropic, AWS Bedrock, Google Vertex AI, Azure OpenAI, and any OpenAI-compatible API (OpenRouter, Ollama, etc.). See the provider docs for details.
Features
- Providers -- OpenAI, Anthropic, Bedrock, Vertex, Azure, plus any OpenAI-compatible API. Fallback chains, circuit breakers, health checks.
- Multi-tenancy -- Organizations, teams, projects, users. Scoped providers, budgets, and rate limits at every level.
- Auth -- API keys, OIDC/OAuth, per-org SSO, SAML, SCIM, reverse proxy auth, CEL-based RBAC, Sovereignty enforcement for data residency.
- Guardrails -- Blocklist, PII detection, content moderation (OpenAI, Bedrock, Azure). Blocking, concurrent, and post-response modes.
- Caching -- Exact match and semantic similarity caching with pgvector or Qdrant.
- Knowledge Bases -- File upload, text extraction, OCR, chunking, vector search, re-ranking. OpenAI-compatible Vector Stores API.
- Cost tracking -- Microcent precision, time-series forecasting, budget enforcement with atomic reservation.
- Observability -- Prometheus metrics, OTLP tracing, structured logging, usage export.
- Web UI -- Multi-model chat with 14 modes, web search, frontend tools (Python, JS, SQL, charts), MCP support, admin panel.
- Studio -- Image generation, text-to-speech, transcription, and translation across providers.
- Secrets -- External secrets managers (AWS Secrets Manager, GCP Secret Manager, Azure Key Vault, HashiCorp Vault) for credential storage.
API
OpenAI-compatible. Point any OpenAI SDK at Hadrian:
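For example, with curl against a locally running gateway (the model name and API key are placeholders; the /v1 path follows the OpenAI API shape):

```shell
curl http://localhost:8080/v1/chat/completions \
  -H "Authorization: Bearer $HADRIAN_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```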
An interactive API reference is available at /api/docs while the gateway is running.
Deployment
Available as a single binary, Docker image, or Helm chart.
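A rough sketch of both paths (the compose file and chart location are assumptions; the deployment docs have the canonical commands):

```shell
# Docker Compose (production)
docker compose up -d

# Kubernetes, using the Helm chart
helm install hadrian ./charts/hadrian
```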
See the deployment docs for Docker Compose configurations, Helm chart options, and production recommendations.
Development
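A sketch of a typical loop, assuming a standard Cargo workspace and an npm-based frontend (directory and script names are assumptions):

```shell
# Backend
cargo build
cargo test

# Frontend
cd ui && npm install && npm run dev

# E2E tests
npm run test:e2e
```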
License
Dual-licensed under Apache 2.0 and MIT.