ATI — Agent Tools Interface
Let your agents cook.
One binary. Any agent framework. Every tool your agent needs.
ATI gives AI agents secure access to APIs, MCP servers, OpenAPI services, and local CLIs — through one unified interface. No custom tool wrappers. No per-SDK plumbing. Your agent calls ati run <tool> --arg value and ATI handles auth, protocol bridging, and response formatting.
Install
Pre-built binary (recommended)
Download the latest release for your platform:
# macOS (Apple Silicon)
# macOS (Intel)
# Linux (x86_64, static musl binary)
# Linux (ARM64, static musl binary)
From source
# With cargo
# Or clone and build
# Binary at target/release/ati
Quick start
# Initialize ATI (creates ~/.ati/)
# Add a free API — zero config, no API key needed
# Try it
See It Work
Import an API from its OpenAPI spec
Finnhub publishes an OpenAPI spec with 110 endpoints — stock quotes, company financials, insider transactions, market news. One command turns it into tools:
# Import the spec — ATI auto-derives provider name, auth, endpoints
# → Saved manifest to ~/.ati/manifests/finnhub.toml
# → Imported 85 operations from "Finnhub — Real-time stock quotes..."
# Store your API key
# 85 tools, instantly available
# ┌──────────────────────┬──────────┬────────────────────────────────┐
# │ DESCRIPTION ┆ PROVIDER ┆ TOOL │
# ╞══════════════════════╪══════════╪════════════════════════════════╡
# │ Symbol Lookup ┆ finnhub ┆ finnhub__symbol-search │
# │ Company Profile ┆ finnhub ┆ finnhub__company-profile2 │
# │ Quote ┆ finnhub ┆ finnhub__quote │
# │ Insider Transactions ┆ finnhub ┆ finnhub__insider-transactions │
# │ Basic Financials ┆ finnhub ┆ finnhub__company-basic-... │
# └──────────────────────┴──────────┴────────────────────────────────┘
Every operation in the spec is now a tool. No --name, no TOML to write, no code to generate.
Explore what you just added
# Agent asks: "research Apple stock — price, insider activity, and sentiment"
# Here are the exact commands to research Apple (AAPL) stock:
#
# 1. Current Price
# ati run finnhub__quote --symbol AAPL
#
# 2. Insider Transactions
# ati run finnhub__insider-transactions --symbol AAPL
#
# 3. News Sentiment
# ati run finnhub__news-sentiment --symbol AAPL
ati assist answers like a knowledgeable colleague — which tools, what order, key params, gotchas — with commands you can run immediately.
Run it
# c: 262.52 ← current price
# d: -1.23 ← change
# dp: -0.4664 ← percent change
# h: 266.15 ← day high
# l: 261.43 ← day low
# o: 264.65 ← open
# pc: 263.75 ← previous close
# data: [{name: "COOK TIMOTHY D", transactionCode: "S",
# change: -59751, share: 3280295, transactionPrice: 257.57,
# filingDate: "2025-10-03"}, ...]
The agent doesn't write HTTP requests. It doesn't parse JSON responses. It calls ati run and gets structured data back — real Apple stock price, real Tim Cook insider sells.
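From the harness side, each of those calls is just a subprocess invocation plus JSON parsing. A minimal Python sketch of that wrapper, with hypothetical helper names (the `--output json` flag is from the CLI reference; its position relative to the subcommand is an assumption):

```python
import json
import subprocess

def build_ati_command(tool: str, output: str = "json", **args) -> list[str]:
    """Build an `ati run` invocation from a tool name and keyword args."""
    cmd = ["ati", "--output", output, "run", tool]
    for key, value in args.items():
        cmd += [f"--{key}", str(value)]
    return cmd

def run_tool(tool: str, **args) -> dict:
    """Execute the tool and parse the structured JSON result."""
    result = subprocess.run(
        build_ati_command(tool, **args),
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

# e.g. run_tool("finnhub__quote", symbol="AAPL") would return the quote dict
```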
Now add an MCP server — same pattern, zero install
# DeepWiki is a free MCP server — no API key needed
# → Saved manifest to ~/.ati/manifests/deepwiki.toml
# See the auto-discovered tools
# ┌─────────────────────────────────────────────────────┬──────────┬───────────────────────────────┐
# │ DESCRIPTION ┆ PROVIDER ┆ TOOL │
# ╞═════════════════════════════════════════════════════╪══════════╪═══════════════════════════════╡
# │ Get a list of documentation topics for a GitHub ... ┆ deepwiki ┆ deepwiki__read_wiki_structure │
# │ View documentation about a GitHub repository. ┆ deepwiki ┆ deepwiki__read_wiki_contents │
# │ Ask any question about a GitHub repository ... ┆ deepwiki ┆ deepwiki__ask_question │
# └─────────────────────────────────────────────────────┴──────────┴───────────────────────────────┘
# Ask a question about any repo
# Tool dispatch in Claude Code involves a dynamic system that handles both
# built-in and external Model Context Protocol (MCP) tools.
#
# 1. **Tool Name Check**: The system first checks if the tool name starts
# with `mcp__`. If not, it's routed to the built-in tool pipeline.
# 2. **MCP Tool Resolution**: Server and tool names are extracted from the
# `mcp__<servername>__<toolname>` format, the server is connected, and
# the tool is invoked using a `call_tool` RPC.
# 3. **Output Handling**: Text output is returned; images are handled
# during streaming.
MCP tools are namespaced as <provider>__<tool_name>. ATI handles JSON-RPC framing, session management, and auth injection.
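The convention is mechanical enough to sketch. This illustrative Python helper (not ATI's internal code) splits at the first `__`, so tool names that themselves contain underscores still parse:

```python
def split_tool_name(name: str) -> tuple[str, str]:
    """Split '<provider>__<tool_name>' at the first '__'."""
    provider, sep, tool = name.partition("__")
    if not sep or not provider or not tool:
        raise ValueError(f"not a namespaced tool name: {name!r}")
    return provider, tool
```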
Built by Agents, for Agents
The examples above show a human typing commands. But ATI is designed so agents do all of this themselves — init, discover, store secrets, search across providers, and execute — with zero human intervention.
1. Agent discovers APIs and MCP servers
The agent can import any OpenAPI spec by URL — name auto-derived — and connect to MCP servers. No human has to write config files. ATI auto-creates ~/.ati/ on first use.
# Import APIs from their specs
# Connect MCP servers
2. Agent stores secrets
# cerebras_api_key csk-...tj3k
# finnhub_api_key sk-...r-key
# github_token ghs-...O6RE
# linear_api_key lin-...c123
Keys are masked on output. The agent never sees raw values after storing them.
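The masking format shown above (first four characters, an ellipsis, then the last four) can be sketched in a few lines of Python; the exact rule ATI applies is an assumption here:

```python
def mask_secret(value: str) -> str:
    """Render a stored key as '<first 4>...<last 4>', never the raw value."""
    if len(value) <= 8:
        return "..."  # too short to reveal any prefix or suffix safely
    return f"{value[:4]}...{value[-4:]}"
```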
3. Agent searches across everything
This is the key part. The agent now has dozens of providers and hundreds of tools. It doesn't need to know which provider has what — it just asks.
# Yes, we have several tools for stock prices:
#
# **For current/latest prices:**
# - `financial_datasets__getStockPriceSnapshot` — latest price snapshot
# - `finnhub__quote` — real-time quote data (US stocks)
#
# **For historical data:**
# - `financial_datasets__getStockPrices` — OHLCV data with date ranges
#
# ati run financial_datasets__getStockPriceSnapshot --ticker AAPL
# ati run finnhub__quote --symbol AAPL
# ati run financial_datasets__getStockPrices --ticker AAPL \
# --start_date 2024-12-01 --end_date 2024-12-31 --interval day
# PROVIDER TOOL DESCRIPTION
# complyadvantage ca_business_sanctions_search Search sanctions lists for businesses
# complyadvantage ca_person_sanctions_search Search sanctions lists for individuals
ati assist answers naturally — which tools, why, and exact commands — like asking a colleague. ati tool search is instant and offline. The agent picks the right tool and runs it — no human in the loop.
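A toy version of that offline search, scoring tools by query-term hits across their metadata (illustrative only, not ATI's ranking algorithm):

```python
def search_tools(query: str, catalog: list[dict]) -> list[str]:
    """Rank tools by how many query terms appear in their searchable fields."""
    terms = query.lower().split()
    scored = []
    for tool in catalog:
        haystack = " ".join(
            [tool["tool"], tool["provider"], tool["description"]]
            + tool.get("tags", [])
        ).lower()
        score = sum(term in haystack for term in terms)
        if score:
            scored.append((score, tool["tool"]))
    # highest score first
    return [name for _, name in sorted(scored, key=lambda s: -s[0])]
```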
4. It works with CLIs too
Wrap any CLI. The agent calls ati run, ATI spawns the subprocess with credentials injected. The agent never sees the raw token.
# Agent asks how to use it
# Use the `pr create` subcommand:
# ati run gh -- pr create --title "Fix login bug" --body "Resolves #123"
# Your branch must be pushed first. Add --draft to open as draft,
# or --base main --head feature/new-auth to target specific branches.
# Agent runs it
# name: claude-code
# stargazerCount: 73682
5. Security scales with your threat model
Three tiers — same ati run interface, different credential exposure:
| | Dev Mode | Local Mode | Proxy Mode |
|---|---|---|---|
| Credentials | Plaintext file | Encrypted keyring | Not in sandbox |
| Key exposure | Readable on disk | mlock'd memory | Never enters sandbox |
| Setup | `ati key set` | Keyring + session key | `ATI_PROXY_URL` |
| Use case | Local dev | Sandboxed agents | Untrusted sandboxes |
In proxy mode, the agent's sandbox has zero credentials. All calls route through ati proxy, which holds the keys, validates JWTs, and enforces per-tool scopes:
# Orchestrator issues a scoped token
# Agent's sandbox — only has the binary and a JWT
# Same commands, routed through proxy, scoped to allowed tools
Five Provider Types
Every provider type produces the same interface: ati run <tool> --arg value. The agent doesn't know or care what's behind it.
OpenAPI Specs — Auto-discovered from any spec
Point ATI at an OpenAPI 3.0 spec URL or file. It downloads the spec, discovers every operation, and registers each as a tool with auto-generated schemas.
# Preview what's in a spec before importing
# Import it — name derived from the URL (or pass --name to override)
# If the API needs auth, ATI tells you what key to set
Supports tag/operation filtering (--include-tags, --exclude-tags) and an operation cap (openapi_max_operations) for large APIs.
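Conceptually, the import walks the spec's `paths` and emits one namespaced tool per operation. A Python sketch under that assumption (the `operationId` fallback naming is illustrative):

```python
def tools_from_spec(provider: str, spec: dict):
    """Yield (tool_name, summary) for each operation in an OpenAPI document."""
    methods = {"get", "post", "put", "patch", "delete"}
    for path, ops in spec.get("paths", {}).items():
        for method, op in ops.items():
            if method.lower() not in methods:
                continue  # skip non-operation keys like 'parameters'
            op_id = op.get("operationId") or f"{method}{path.replace('/', '-')}"
            yield f"{provider}__{op_id}", op.get("summary", "")
```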
3 OpenAPI specs ship out of the box: ClinicalTrials.gov, Finnhub, and Crossref. Additional specs are available in contrib/specs/.
MCP Servers — Auto-discovered via protocol
Any MCP server — stdio subprocess or remote HTTP — gets its tools auto-discovered. No hand-written tool definitions.
# Remote MCP server (HTTP transport)
# Local MCP server (stdio transport)
# Store the key, then use the tools
Skills — Install a skill, get tools automatically
A skill is a SKILL.md that teaches agents how to use an API — endpoints, auth patterns, parameters, workflows. On ati skill install, ATI reads the SKILL.md and uses a fast LLM call (Cerebras) to extract a full provider manifest automatically. No hand-written TOML, no OpenAPI spec needed. The SKILL.md is the only source of truth.
# From a git URL — ATI clones, reads SKILL.md, generates the manifest
# From a local directory — same thing, ATI copies it into ~/.ati/skills/
# Either way, tools are immediately available
# Set the key and cook
The skill creator writes the SKILL.md — that's it. ATI + Cerebras extracts everything: base URL, auth type, endpoints, parameters, HTTP methods. Local paths get copied into ~/.ati/skills/, git URLs get cloned — either way the manifest is auto-generated and cached in ~/.ati/manifests/.
Local CLIs — Wrap any command with credential injection
Run gh, gcloud, kubectl, or any CLI through ATI. The agent calls ati run, ATI spawns the subprocess with a curated environment, and credentials never leak to the agent.
# Wrap the GitHub CLI
# Wrap gcloud with a credential file
# Use them
The ${key} syntax injects a keyring secret as an env var. The @{key} syntax materializes it as a temporary file (0600 permissions, wiped on process exit) — for CLIs that need a credential file path.
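The @{key} behavior can be approximated with the standard library. This sketch uses hypothetical helper names; `mkstemp` already creates the file with 0600 permissions, and the chmod is kept explicit:

```python
import os
import tempfile

def materialize_secret(value: str) -> str:
    """Write a secret to a file only the owner can read; caller wipes it."""
    fd, path = tempfile.mkstemp(prefix="ati-cred-")
    try:
        os.fchmod(fd, 0o600)  # mkstemp default, made explicit
        os.write(fd, value.encode())
    finally:
        os.close(fd)
    return path

def wipe(path: str) -> None:
    """Overwrite the contents, then unlink (mirrors 'wiped on process exit')."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(b"\0" * size)
    os.unlink(path)
```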
CLI providers get a curated environment (only PATH, HOME, TMPDIR, LANG, USER, TERM from the host). The subprocess can't see your shell's full environment.
Example: Google Workspace — 25 APIs, one CLI. ATI ships a pre-built manifest for gws (Google Workspace CLI). It covers Drive, Gmail, Calendar, Sheets, Docs, Slides, Chat, Admin, and 18 more services — all auto-discovered from Google's Discovery Service.
# Install gws, store your service account, go
# Agent asks how to create a presentation
# Create a new presentation with:
#
# ati run google_workspace -- slides presentations create \
# --json '{"title": "My Presentation"}'
#
# --json is required for the request body (POST). title is the only required field.
# Returns the presentation ID and URL.
#
# Check the schema first for all available fields:
# ati run google_workspace -- schema slides.presentations.create
# Agent runs it
# List Drive files, search Gmail, check calendar
ATI materializes the service account JSON as a temp file (0600, wiped on exit) via the @{key} syntax — the agent never sees raw credentials. In proxy mode, service accounts create files in their own invisible Drive. To have files appear in a real user's Drive, enable domain-wide delegation and set the impersonated user:
# Now slides, docs, sheets are created in the analyst's Drive
HTTP Tools — Hand-written TOML for full control
For APIs where you want precise control over endpoints, parameters, and response extraction, write TOML manifests directly:
# ~/.ati/manifests/pubmed.toml
[provider]
name = "pubmed"
description = "PubMed medical literature search"
base_url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"
auth = "none"

[[tools]]
name = "medical_search"
description = "Search PubMed for medical research articles"
path = "/esearch.fcgi"
method = "GET"

[tools.parameters]
type = "object"
required = ["term"]

[tools.parameters.properties.term]
type = "string"
description = "Search term (e.g. 'CRISPR gene therapy')"

[tools.parameters.properties.retmax]
type = "integer"
description = "Max results"
default = 20

[tools.response]
extract = "$.esearchresult"
format = "json"
Auth types: bearer, header, query, basic, oauth2, none.
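A sketch of what per-type auth injection might look like. The default header and query parameter names are illustrative, and oauth2 is omitted because it needs a token flow:

```python
import base64

def apply_auth(auth: dict, headers: dict, params: dict) -> None:
    """Inject a credential into a pending request per the manifest's auth type."""
    kind, key = auth["type"], auth.get("key", "")
    if kind == "bearer":
        headers["Authorization"] = f"Bearer {key}"
    elif kind == "header":
        headers[auth.get("name", "X-API-Key")] = key
    elif kind == "query":
        params[auth.get("name", "api_key")] = key
    elif kind == "basic":
        raw = f"{auth['user']}:{auth['password']}".encode()
        headers["Authorization"] = "Basic " + base64.b64encode(raw).decode()
    elif kind == "none":
        pass  # no credential; oauth2 is out of scope for this sketch
```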
Manifests — Your Provider Catalog
Every provider is a .toml file in ~/.ati/manifests/. The ati provider commands generate these for you, but you can also edit them directly for full control.
# What you've got
# Add via CLI
# Or edit manifests directly
# Remove
ATI ships with 8 curated manifests covering the main provider types — HTTP, OpenAPI, MCP, and CLI. Use them as-is or as templates for your own. Additional manifests are available in contrib/.
Tool Discovery
Three tiers of finding the right tool.
Search — Offline, Instant
Fuzzy search across tool names, descriptions, providers, categories, tags, and hints.
Inspect — Full Schema
Assist — Like Asking a Colleague
ati assist answers naturally — which tools, what order, gotchas — not a numbered command list.
# Broad — searches all tools
# For sanctions screening, use `ca_person_sanctions_search`:
#
# ati run ca_person_sanctions_search --search_term "John Smith" --fuzziness 0.6
#
# Set fuzziness between 0.4–0.8 depending on how strict you need matching.
# You'll also want to check PEP lists in the same pass:
#
# ati run ca_person_pep_search --search_term "John Smith" --fuzziness 0.6
# Scoped to a provider — captures --help output for CLIs
# Use the `pr create` subcommand:
#
# ati run gh -- pr create --title "Fix login bug" --body "Resolves #123"
#
# Your branch must be pushed to the remote first. Use `--base main` to
# target a specific branch, or `--draft` to open as a draft PR.
# Scoped to a tool — uses the full schema for precise commands
Security
Three tiers of credential protection, matched to your threat model. All use the same ati run interface.
Dev Mode — Plaintext Credentials
Quick, no ceremony. For local development.
Stored at ~/.ati/credentials with 0600 permissions. Keys can also be supplied through environment variables using the ATI_KEY_ prefix.
Local Mode — Encrypted Keyring
AES-256-GCM encrypted keyring. The orchestrator provisions a one-shot session key to /run/ati/.key (deleted after first read). Keys held in mlock'd memory, zeroized on drop.
┌─────────────────────────────────────────────────────┐
│ Sandbox │
│ │
│ ┌──────────┐ ati run my_tool ┌──────────┐ │
│ │ Agent │ ────────────────────────▶│ ATI │ │
│ │ │ │ binary │ │
│ │ │◀────────────────────────│ │ │
│ └──────────┘ structured result └────┬─────┘ │
│ │ │
│ reads encrypted keyring ───┘ │
│ injects auth headers │
│ enforces scopes │
│ │
│ /run/ati/.key (session key, deleted after read) │
└─────────────────────────────────────────────────────┘
Proxy Mode — Zero Credentials in the Sandbox
ATI forwards all calls to a central proxy server holding the real keys. The sandbox never touches credentials.
┌──────────────────────────┐ ┌────────────────────────────┐
│ Sandbox │ │ Proxy Server (ati proxy) │
│ │ │ │
│ Agent → ATI binary ─────│── POST ─│──▶ keyring + MCP servers │
│ │ /call │ injects auth │
│ No keys. No keyring. │ /mcp │ routes by tool name │
│ Only manifests + JWT. │ │ calls upstream APIs │
└──────────────────────────┘ └────────────────────────────┘
Switch modes with one env var — the agent never changes its commands:
# Local mode (default)
# Proxy mode — same command, routed to proxy
| | Dev Mode | Local Mode | Proxy Mode |
|---|---|---|---|
| Credentials | Plaintext file | Encrypted keyring | Not in sandbox |
| Key exposure | Readable on disk | mlock'd memory | Never enters sandbox |
| Setup | `ati key set` | Keyring + session key | `ATI_PROXY_URL` |
| Use case | Local dev | Sandboxed agents | Untrusted sandboxes |
JWT Scoping
Each agent session gets a JWT with identity, permissions, and an expiry. The proxy validates on every request — agents only access what they're granted.
| Scope | Grants |
|---|---|
| `tool:web_search` | One specific tool |
| `tool:github__*` | Wildcard — all GitHub MCP tools |
| `help` | Access to ati assist |
| `skill:compliance-screening` | A specific skill |
| `*` | Everything (dev/testing only) |
# Generate signing keys
# Issue a scoped token
# Inspect / validate
A compliance agent gets tool:ca_* and skill:compliance-screening. A research agent gets tool:arxiv_* and tool:deepwiki__*. Neither can access the other's tools.
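Wildcard scopes behave like globs, so scope checking can be sketched with `fnmatch` (illustrative, not the proxy's actual matcher):

```python
from fnmatch import fnmatchcase

def scope_allows(granted: list[str], tool: str) -> bool:
    """True if any granted scope covers the tool (glob-style wildcards)."""
    target = f"tool:{tool}"
    return any(s == "*" or fnmatchcase(target, s) for s in granted)
```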
Skills
Skills are methodology documents that teach agents how to approach a task — when to use which tools, how to interpret results, what workflow to follow.
Tools provide data access. Skills provide workflow.
~/.ati/skills/compliance-screening/
├── skill.toml # Metadata: tool bindings, keywords, dependencies
└── SKILL.md # The methodology document
Skills auto-activate based on the agent's tool scope. If an agent has access to ca_person_sanctions_search, ATI automatically loads the compliance-screening skill because its tools binding includes that tool. Resolution walks: tool → provider → category → depends_on transitively.
When an agent calls ati assist, skills are automatically loaded into the LLM's context — the agent gets methodology-aware recommendations, not just raw command syntax.
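A simplified sketch of that resolution, covering only the tool-binding and `depends_on` steps (the provider and category hops are elided):

```python
def resolve_skills(scoped_tools: set[str], skills: dict) -> set[str]:
    """Activate skills whose tool bindings intersect the agent's scope,
    then pull in depends_on dependencies transitively."""
    active = {name for name, s in skills.items()
              if scoped_tools & set(s.get("tools", []))}
    frontier = list(active)
    while frontier:
        for dep in skills.get(frontier.pop(), {}).get("depends_on", []):
            if dep not in active:
                active.add(dep)
                frontier.append(dep)
    return active
```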
End-to-End Example: Image → Voice → Lip-Sync Video
This is a real workflow an agent ran using ATI — three fal.ai models chained together, guided by skills. The agent generated an image, synthesized speech, and produced a lip-synced talking head video.
The result: fal-lipsync.mp4
Step 1 — Generate an image with Flux
# request_id: 1d491d8e-5c22-417b-a62b-471aa7f380e3
# status: IN_QUEUE
# images: [{url: "https://v3b.fal.media/files/.../streamer.jpg"}]
Step 2 — Generate speech with ElevenLabs (via fal)
# request_id: f9b24972-9ea9-47bd-9e6c-1fc8f48c70c5
# audio: {url: "https://v3b.fal.media/files/.../output.mp3"}
Step 3 — Lip-sync with VEED Fabric
# request_id: 1c7bdab9-3572-45fe-829d-c5c87071e7d9
# video: {url: "https://v3b.fal.media/files/.../lipsync.mp4"}
The agent didn't know any of this workflow. It asked ati assist:
# To create a lip-synced talking head video, you'll use the VEED Fabric 1.0
# model on fal.ai. This requires:
# - A face image URL (portrait/headshot)
# - An audio URL (speech)
#
# Step 1: Generate speech with ElevenLabs TTS...
# Step 2: Submit the lip-sync job to veed/fabric-1.0...
# Step 3: Poll status and get result...
#
# Best Practices:
# Face Image: Front-facing, good lighting, neutral expression
# Audio: Clean speech, no background noise
# Duration: Keep under 60 seconds per segment
ati assist loaded skills for fal-generate, elevenlabs-tts-api, and veed-fabric-lip-sync — giving the agent model-specific best practices, not just raw command syntax.
Declaring Skills in Manifests
Providers can declare associated skills in their manifest. When an agent imports a provider, it can install the skills in one command:
# ~/.ati/manifests/fal.toml
[provider]
name = "fal"
base_url = "https://queue.fal.run"
skills = [
  "https://github.com/org/ati-skills#fal-generate",
  "https://github.com/org/ati-skills#veed-fabric-lip-sync",
]
# See what skills are declared
# Skills: 2 declared (1 installed, 1 not installed)
# Install: ati provider install-skills fal
# Install all declared skills
Works on Any Agent Harness
If your framework has a shell tool, ATI works. The pattern is always the same — system prompt + shell access. No custom tool wrappers, no SDK-specific adapters.
# Claude Agent SDK
# OpenAI Agents SDK
# LangChain
| SDK | Shell Mechanism | Example |
|---|---|---|
| Claude Agent SDK | Built-in Bash tool | ~100 lines |
| OpenAI Agents SDK | `@function_tool` async shell | ~100 lines |
| Google ADK | `run_shell()` function tool | ~120 lines |
| LangChain | ShellTool (zero-config) | ~90 lines |
| Codex CLI | Built-in shell agent | ~60 lines |
| Pi | Built-in bashTool | ~100 lines |
Every example uses free, no-auth tools so you can run them immediately with just an LLM API key. See examples/.
Proxy Server
For production, run ati proxy as a central server holding secrets:
| Endpoint | Method | Description |
|---|---|---|
| `/health` | GET | Status — tool/provider/skill counts |
| `/call` | POST | Execute tool — `{tool_name, args}` |
| `/mcp` | POST | MCP JSON-RPC pass-through |
| `/help` | POST | LLM-powered tool guidance |
| `/skills` | GET | List/search skills |
| `/skills/:name` | GET | Skill content and metadata |
| `/skills/resolve` | POST | Resolve skills for scopes |
| `/.well-known/jwks.json` | GET | JWKS public key |
All endpoints except /health and JWKS require Authorization: Bearer <JWT> when JWT is configured.
Python SDK
The ati-client Python package provides orchestrator provisioning and JWT token utilities for integrating ATI into Python orchestrators.
Orchestrator Provisioning
# One call — returns env vars to inject into the sandbox
# env_vars = {"ATI_PROXY_URL": "...", "ATI_SESSION_TOKEN": "eyJ..."}
Token Utilities
# "agent-7"
# ["tool:web_search", "tool:finnhub_quote"]
Tokens are HS256-signed JWTs fully compatible with the Rust proxy — tested bidirectionally. See ati-client/python/ for full docs.
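Since the tokens are plain HS256 JWTs, a stdlib-only sketch shows the shape of issue and verify (illustrative, not the `ati-client` API):

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    """Base64url without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def issue_token(claims: dict, secret: bytes) -> str:
    """Build an HS256 JWT: base64url(header).base64url(claims).signature."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_token(token: str, secret: bytes) -> dict:
    """Check the signature in constant time, then decode the claims."""
    header, payload, sig = token.split(".")
    expected = _b64url(
        hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    )
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```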
CLI Reference
ati — Agent Tools Interface
COMMANDS:
init Initialize ATI directory structure (~/.ati/)
run Execute a tool by name
tool List, inspect, search, and discover tools
provider Add, list, remove, inspect, and import providers
skill Manage skills (methodology docs for agents)
assist Ask which tools to use and how (LLM-powered)
key Manage API keys in the credentials store
token JWT token management (keygen, issue, inspect, validate)
auth Show authentication and scope information
proxy Run ATI as a proxy server
version Print version information
OPTIONS:
--output <FORMAT> json, table, text [default: text]
--verbose Enable debug output
Provider Management
Key & Token Management
Output Formats
Building
Project Structure
ati/
├── Cargo.toml
├── manifests/ # 8 curated provider manifests (HTTP, MCP, OpenAPI, CLI)
├── specs/ # 3 OpenAPI specs for curated providers
├── contrib/ # 35+ additional manifests and specs (gitignored)
├── skills/ # Skill methodology documents
├── examples/ # 6 SDK integrations (Claude, OpenAI, ADK, LangChain, Codex, Pi)
├── ati-client/python/ # Python SDK (pip install ati-client)
├── scripts/ # E2E test scripts
├── docs/
│ ├── SECURITY.md # Threat model and security design
│ └── IDEAS.md # Future directions
├── src/
│ ├── main.rs # CLI entry point (clap)
│ ├── lib.rs # Library crate
│ ├── cli/ # Command handlers
│ ├── core/ # Registry, MCP client, OpenAPI parser, HTTP executor,
│ │ # keyring, JWT, scopes, skills, response processing
│ ├── proxy/ # Client + server (axum)
│ ├── security/ # mlock/madvise/zeroize, sealed key file
│ └── output/ # JSON, table, text formatters
└── tests/ # Unit, integration, e2e, live MCP
License
Apache-2.0