cargo-ai™
Build AI-powered CLI tools locally from a single JSON definition.
Define declarative agents in JSON, hatch native executables locally, and share them in minutes.
Cargo AI is an open-source CLI for building auditable AI-powered CLI tools from a single JSON definition. Define inputs, schema, and actions once; run the JSON directly with `cargo ai run --config`, or hatch a native executable with `cargo ai hatch`; then inspect, run, and share it on your terms.
Cargo AI keeps agent behavior readable, auditable, and understandable through a single JSON definition.
Why Cargo AI
- Declarative by Design: define exactly what the agent does, what actions it can take, and keep the behavior easy to inspect.
- Open Source and Fully Auditable: inspect the generated code, understand what ships, and keep control of the runtime.
- Handles Real Inputs: work with text, images, URLs, and common files.
- Supports Advanced Logic: add conditions and follow-up behavior without hand-building a custom app.
- Real Actions, Not Just Prompts: run local commands, call child agents, pass command-line arguments, and send email follow-ups.
- Choose Your Own AI: use OpenAI models today or open-source models through Ollama, with room for more providers over time.
- You Own the Output: hatch a local executable and generated code that you can keep, modify, and run wherever you want.
- Portable Across macOS, Linux, and Windows: keep one readable agent definition and hatch it for the systems you care about.
- Easy to Share Through cargo-ai.org: create a free account to publish definitions in minutes so other people can hatch them locally on their own machines.
- No Extra Token Plumbing Required: use your existing Codex workflow when it fits, or bring your own model access when you want direct provider control.
- Built for AI-Assisted Iteration: keep the agent readable, diffable, and easy to improve with tools like Codex.
- Built to Grow With You: start with one clear definition, then add commands, email actions, and shared definitions as your workflow expands.
A concise JSON definition keeps the agent easy to read, review, diff, and improve without losing trust in what it does.
Quick Start
0. Install Cargo AI
The preferred install path today is Cargo-based.
If you do not already have Rust and Cargo, install them with rustup first using the official guide:
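```sh
# Official rustup installer (macOS/Linux; Windows uses rustup-init.exe from rustup.rs)
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
```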
Then install Cargo AI:
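```sh
# Crate name assumed to match the project name
cargo install cargo-ai
```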
Full install guidance, PATH details, and current platform posture live under docs/install. The step-by-step Cargo workflow is here: Install with Cargo.
By default, Cargo AI stores config, credentials, and internal workspaces under `~/.cargo/.cargo-ai` (or `$CARGO_HOME/.cargo-ai`). Set `CARGO_AI_HOME` if you want Cargo AI to use a different root directory. See Cargo AI Home for the full resolution order, stored state, and first-run behavior.
1. Choose your model setup
Option A: recommended if you use ChatGPT Plus or above
ChatGPT Plus and higher plans include Codex at no additional cost, which makes this the easiest path today. cargo-ai uses your Codex login, so no separate API key is required.
If you do not already have Codex installed, get it here: Codex CLI setup
Option B: direct provider control
Use this path if you want an explicit model profile with direct provider credentials and no Codex dependency.
Option C: open-source models with Ollama
Use this path if you want to run cargo-ai without ChatGPT or OpenAI at all.
Install Ollama here: Get Ollama
Then pull a model such as mistral and add a local profile:
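The Ollama side of that looks like this (the Cargo AI profile subcommand itself is not spelled out here):

```sh
ollama pull mistral
```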
2. Run a sample agent directly
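For example, with a local sample definition saved as `adder_test.json`:

```sh
cargo ai run --config ./adder_test.json
```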
The positional form `cargo ai run ./adder_test.json` works too. For inline or scripted flows, use `cargo ai run --json '<agent-definition-json>'` or `cat ./adder_test.json | cargo ai run --stdin`.
3. Hatch the same sample as a standalone executable
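Using the same definition (the hatch form mirrors the check examples later in this README):

```sh
cargo ai hatch adder_test --config ./adder_test.json
./adder_test
```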
On Windows, run adder_test.exe or just adder_test.
4. Register an account
Register with cargo-ai.org to define agent email alerts and manage your agents in one place. Keep them private, or share them instantly with anyone in the world.
Optional: set a custom public handle
If you want a specific public handle, set it here. Otherwise, cargo-ai.org assigns one automatically, and you can change it later.
Once registered, you can push an agent definition to your account repository and then either run it directly through Cargo AI or hatch it locally:
The Core Mental Model
> [!TIP]
> You do not need to author this by hand. The fastest path is to tell Codex exactly what kind of agent you want and let it update the file for you. Read this section so the structure is easy to recognize, then review the result and verify exactly what the agent does. When you're ready for that loop, jump to Best First Workflow in Codex.
Cargo AI keeps the authoring model intentionally small:
- optional `inputs` - Ordered model-facing input such as `text`, `url`, or `image`.
- optional `runtime_vars` - Typed caller-supplied values that can control action logic, `when`, and selected run-step fields at invocation time.
- `agent_schema` - The typed response you expect back.
- `actions` - What to do after the response is validated, including the ordered `run` steps inside each action.
The next section expands those same pieces from minimal snippets into richer patterns.
A minimal agent looks like this:
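A hedged sketch of that shape, saved as `my_agent.json` (the `inputs`, `agent_schema.properties`, and `actions` pieces are documented above; the `text` value-field spelling is an assumption):

```json
{
  "inputs": [
    { "type": "text", "text": "Add 2 and 3." }
  ],
  "agent_schema": {
    "properties": {
      "sum": { "type": "integer", "description": "The computed sum." }
    }
  },
  "actions": []
}
```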
That JSON can run directly through Cargo AI:
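```sh
cargo ai run --config ./my_agent.json
```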
Or it can become a compiled local executable through:
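```sh
cargo ai hatch my_agent --config ./my_agent.json
```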
Inline and stdin definition sources work there too:
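```sh
cargo ai run --json '<agent-definition-json>'
cat ./my_agent.json | cargo ai run --stdin
```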
For Windows users, run my_agent.exe or just my_agent.
You can also override or inject runtime input without editing the JSON. Generated agents accept flags such as `--input-text`, `--input-url`, and `--input-file`. By default, runtime input flags replace the baked `inputs` array for that run. Use `--input-mode append` to keep baked inputs first, or `--input-mode prepend` to place runtime inputs before the baked inputs. If `agent_schema.properties` is empty, those model-facing runtime input flags are invalid because Cargo AI skips the initial model call in that structural action-only shape.
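For example:

```sh
./my_agent --input-text "Use this text instead of the baked inputs"
./my_agent --input-url https://example.com/article --input-mode append
```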
Top-level inputs may also declare an optional `name`. Named inputs stay regular inputs for schema-backed agents, but they also become reusable bindings for child-agent steps and targeted runtime replacement with repeatable `--input-override NAME=VALUE`.

As a rule of thumb, prefer `name` when an input is part of the workflow contract, reusable by child steps, or likely to be operator-overrideable. Leave one-off root-model context unnamed when it does not need that extra identity.

For readability, order named input object fields as `name`, then `type`, then the value field. Keep unnamed literal inputs as `type`, then the value field.
You can also declare typed runtime variables for action control and step-local settings. Define them under top-level `runtime_vars`, pass values with repeatable `--run-var name=value`, and reference them in JSON as `runtime.<name>`. Quote `--run-var` values when your shell would otherwise split them, for example `--run-var subject="Quarterly Review"`.
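Combining both surfaces on one invocation (`report_title` here is a hypothetical named input):

```sh
./my_agent --run-var subject="Quarterly Review" --input-override report_title="Q3 results"
```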
You can also author a structural action-only worker by leaving agent_schema.properties empty. In that shape, Cargo AI skips the initial model pass and starts directly at action logic, which can read declared runtime.* values. Top-level named inputs are still allowed there as reusable parent-owned inputs for child forwarding.
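A hedged sketch of that shape (`logic` and `run` follow the actions section below; the `runtime_vars` declaration layout and the `exec` step spelling are assumptions):

```json
{
  "agent_schema": { "properties": {} },
  "runtime_vars": {
    "greeting": { "type": "string" }
  },
  "actions": [
    {
      "name": "say_hello",
      "logic": { "!!": [{ "var": "runtime.greeting" }] },
      "run": [
        {
          "kind": "exec",
          "exec": { "cmd": "echo", "args": [{ "var": "runtime.greeting" }] }
        }
      ]
    }
  ]
}
```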
Start Simple, Then Expand
Use these snippets to recognize how inputs, agent_schema, and actions grow as the agent becomes more capable.
Click linked labels to open full runnable examples.
Inputs
Use the input types that fit the job.
URL input:
Image input:
File input:
Named input:
Multiple inputs with related scoring:
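Sketches of those shapes (value-field spellings such as `url`, `path`, and `text` are assumptions; the name/type/value field order follows the rule above):

```json
"inputs": [
  { "type": "url", "url": "https://example.com/menu" },
  { "type": "image", "path": "./menu.png" },
  { "type": "file", "path": "./report.pdf" },
  { "name": "menu_image", "type": "image", "path": "./menu.png" },
  { "type": "text", "text": "Score the menu against the report." }
]
```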
You can override the baked inputs any time you run the generated agent. By default, runtime input flags replace the configured inputs for that execution, and the runtime input order is preserved exactly as you pass it on the command line. Use --input-mode append to keep baked inputs first, or --input-mode prepend to keep runtime inputs first. When you need to target one declared named input specifically, use repeatable --input-override NAME=VALUE.
agent_schema
The agent_schema is the output contract for the agent. Start simple, then add more structure as the agent becomes more capable.
Minimal output contract:
Add clearer field meaning with descriptions:
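For example (the wrapper shape is assumed from the `agent_schema.properties` references elsewhere in this README):

```json
"agent_schema": {
  "properties": {
    "summary": { "type": "string", "description": "One-paragraph summary of the input." },
    "score": { "type": "integer", "description": "Quality score from 1 to 10." }
  }
}
```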
agent_schema can include any number of top-level string, integer, number, and boolean fields, plus optional description, string enum, and numeric bounds where supported.
It may also include top-level array and object fields for structured tool consumption.
The narrow structured-data rule is:
- arrays must be homogeneous
- objects must declare their shape explicitly
- arrays may contain supported scalar item types or declared-shape object items
- object properties inside structured tool-bound fields may be scalar or `scalar | null`
- structured top-level fields may flow only into tool params as raw JSON
- nullable support is limited to `scalar | null` object properties inside those structured payloads
- scalar-first surfaces such as `logic`, `when`, `exec.args`, string-part interpolation, `email_me`, and child `run_vars` reject structured field references
Then expand into richer constraints and exact output choices:
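For example (`enum` is named above; the numeric-bound keywords are assumptions):

```json
"agent_schema": {
  "properties": {
    "verdict": { "type": "string", "enum": ["approve", "revise", "reject"] },
    "confidence": { "type": "number", "minimum": 0, "maximum": 1 }
  }
}
```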
actions
actions define what the agent is allowed to do after it produces the top-level structured output.
Action logic uses JSON Logic.
Within an action, run steps execute in order after the action's JSON Logic condition evaluates true. That logic can read both top-level model output fields and declared runtime.* values.
By default, a failed step stops the rest of that action's run list unless you set failure_mode: "continue", but later eligible top-level actions still run and Cargo AI aggregates top-level failures at the end. If a step is truly fatal for the whole invocation, use failure_mode: "abort" to stop scheduling new work, let already-running work settle, and fail the run with an explicit abort summary.
Start with one simple local action:
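A sketch (`logic` and `run` are the documented fields; the `exec` step spelling is an assumption):

```json
"actions": [
  {
    "name": "flag_low_score",
    "logic": { "<": [{ "var": "score" }, 5] },
    "run": [
      { "kind": "exec", "exec": { "cmd": "echo", "args": ["Low score; review needed."] } }
    ]
  }
]
```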
Then expand into multiple action types:
You can keep actions simple or mix local executables, email alerts, child-agent handoffs, and generated image artifacts in the same agent definition. The next section shows how to sequence multiple run steps and control them with when.
Top-level actions run sequentially by default. If you want matching top-level actions to overlap, add:
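The exact key is an assumption here, mirroring the `--action-execution` flag:

```json
"action_execution": "parallel"
```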
That only changes scheduling across top-level actions. Each individual action still keeps its own run list in order, and a hard failure in one top-level action no longer prevents later eligible top-level actions from running. Cargo AI aggregates those top-level hard failures after all eligible actions finish.
Cargo AI prints one root `using:` line near run start that shows the effective profile, auth, server, and model for that invocation. When a profile seeds the invocation, it also prints `loaded profile: ...`, and when CLI flags replace profile-sourced values, it prints `applied overrides: ...` before the final `using:` line. It only adds `url=...` when the effective URL is custom or materially different from the standard transport.

Cargo AI also prints one run-level mode header before actions start. When output is redirected, piped, or running in simpler terminals, it prefixes parent-visible action output with deterministic labels such as `[Action 1: first_action]`; long-running steps emit a step-start liveness line such as `step 2/2 generate_image started; waiting for provider response...`; and terminal lane summaries plus the root run footer include wall-clock durations such as `completed · 31s` and `Run complete · 32s total`. Short runs stay millisecond-aware instead of collapsing to `0s`, and the root completion footer is separated from action lanes by a blank line so it reads as a run-level summary instead of another action row.

When attached directly to an interactive terminal, Cargo AI switches to a compact live dashboard that groups each action by label, running or terminal status with elapsed time, terminal step marker/current step, and the last high-level lifecycle message only. Child-agent steps stay minimal in the parent view, with start/completion or exit summaries instead of recursively inlined child detail.
Use `--render-mode auto|live|append-only` to control that behavior explicitly:

- `auto` preserves the current terminal-sensitive default
- `append-only` forces incremental labeled output even in an interactive terminal
- `live` forces the dashboard when supported and otherwise falls back to append-only with a short notice
If you need a safety/testing pass, invoke a parallel-capable agent with --action-execution sequential. That runtime override forces the whole invocation tree down to sequential scheduling for that run, including child-agent handoffs.
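For example:

```sh
./my_agent --action-execution sequential
```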
run
run is the ordered step list inside an action.
Start with one simple step:
Then expand into a multi-step workflow:
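A sketch (`when` and `failure_mode` are documented here; the `exec` shape and the `output_variable` capture spelling on an exec step are assumptions):

```json
"run": [
  {
    "kind": "exec",
    "exec": { "cmd": "./scripts/export_report.sh" },
    "output_variable": "export_result",
    "failure_mode": "continue"
  },
  {
    "kind": "exec",
    "exec": { "cmd": "echo", "args": ["Export finished:", { "var": "export_result" }] },
    "when": { "!!": [{ "var": "export_result" }] }
  }
]
```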
Use run to sequence multiple side effects in order. exec steps can capture output, status, or errors for later steps, generate_image can write a single local image artifact, and when lets later steps react to success or failure without leaving the agent definition.
generate_image.model is optional. If omitted, Cargo AI falls back to the effective invocation model resolved from the current profile and any --model CLI override. If neither the step nor the invocation provides a model, the run fails clearly instead of guessing. When the image step should use a different model from the main invocation, set generate_image.model explicitly as either a literal string or a single variable reference. Prefer a runtime-backed string such as { "var": "runtime.hero_image_model" } when the operator should choose the image model at invocation time. Top-level string schema fields may also drive generate_image.model, but captured step variables may not.
generate_image and child agent steps also accept an optional step-level profile. Use it when one step should resolve its provider/model/url/token context differently from the parent invocation. For generate_image, explicit model still wins, then the step-profile model, then the parent invocation model. That means a parent agent may stay on OpenAI while one generate_image step switches to an Ollama profile. For child agent steps, the resolved profile is forwarded to the child as --profile <name>. Use artifact: "./child_reporter" for a direct child executable or artifact: "./child_reporter.json" to run that child through Cargo AI.
Cargo AI always prints one root using: line near run start. In append-only output, it also prints another action-prefixed using: line when a provider-backed or child-agent step changes the effective profile, auth, server, or model. Interactive live mode keeps the parent dashboard at the orchestration level and does not surface child or step-level using: lines there.
For the default OpenAI account transport, use a tool-capable mainline model such as gpt-5.2. For a direct OpenAI API token and URL, prefer GPT Image models such as gpt-image-1.5 or gpt-image-1-mini. Official OpenAI docs list gpt-image-1.5 as the latest GPT Image model, and the image-generation guide lists gpt-image-1.5, gpt-image-1, and gpt-image-1-mini for direct image generation. Verified: 2026-03-28. For Ollama's experimental OpenAI-compatible /v1/images/generations endpoint, use an Ollama image model such as x/flux2-klein:4b on a step-level Ollama profile. The current Cargo AI compatibility slice uses Ollama's documented b64_json response path, so Ollama-backed generate_image steps currently require a .png output path.
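A hedged step sketch (the runtime-backed `model` reference and step-level `profile` come from the text above; the `prompt` and output-path field spellings are assumptions, and the `.png` path matches the Ollama note):

```json
{
  "kind": "generate_image",
  "generate_image": {
    "model": { "var": "runtime.hero_image_model" },
    "profile": "ollama-images",
    "prompt": "A watercolor fox reading a newspaper",
    "output_path": "./hero.png"
  }
}
```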
You can also target individual run steps to specific runtime platforms:
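The `platform` key name and values here are assumptions:

```json
{ "kind": "exec", "exec": { "cmd": "pwsh", "args": ["-Command", "Get-Date"] }, "platform": "windows" }
```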
Or target multiple platforms with an array:
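Same assumed key, array-valued:

```json
{ "kind": "exec", "exec": { "cmd": "date" }, "platform": ["macos", "linux"] }
```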
Child agents
Use child agents when one agent needs to hand work to another agent.
- Point to a child agent that lives next to the parent file, such as `./child_reporter`.
- By default, an agent can call child agents up to `5` levels deep. Override that with `--max-agent-depth`.
- By default, the parent plus any child agents share a total runtime budget of `600` seconds. Override that with `--max-runtime-in-sec`.
- A parent can pass inputs to a child and record whether the child succeeded or failed.
- A parent can also reuse one declared named top-level input explicitly inside child `inputs` with `{ "input": "<name>" }`.
- Child `agent` steps may set `run_vars` to pass child runtime vars the same way the CLI uses repeatable `--run-var NAME=VALUE`.
- Child `agent` steps may set `input_overrides` to target the child's declared named inputs directly.
- Child `agent` steps may still provide anonymous child `inputs`.
- Child `agent` steps may set `input_mode` to `replace`, `append`, or `prepend` when they also provide child `inputs`.
- Named child-input reuse is explicit only. Cargo AI does not automatically inherit every named parent input into the child.
- If a middle agent wants to pass the same named input to its own child, it should declare the same named top-level input locally first.
- `run_vars`, `input_overrides`, `inputs`, and `input_mode` mirror the CLI mental model:
  - `run_vars` is the child-step equivalent of `--run-var NAME=VALUE`
  - `input_overrides` is the child-step equivalent of `--input-override NAME=VALUE`
  - `inputs` is the child-step equivalent of anonymous runtime `--input-*`
  - `input_mode` applies only to child `inputs`, not to `input_overrides`
- Prefer `input_overrides` when targeting declared named child inputs. Use child `inputs` for extra anonymous context.
- If the target is another Cargo AI agent, prefer a native `kind: "agent"` step instead of a Python or shell wrapper that only launches the child.
- Use wrapper programs only when the task truly needs extra non-Cargo-AI behavior around that child call.
- A parent cannot automatically pull the child's structured return fields back into its own output.
Assume the parent definition also declares { "name": "menu_image", "type": "image" } at top level.
Example:
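A sketch (`artifact`, `run_vars`, `input_overrides`, `inputs`, and `input_mode` are the documented fields; the `text` value-field spelling is an assumption):

```json
{
  "kind": "agent",
  "artifact": "./child_reporter.json",
  "run_vars": {
    "year": { "var": "runtime.year" },
    "month": "08",
    "generate_images": true
  },
  "input_overrides": {
    "menu_image": { "input": "menu_image" },
    "review_reason": "monthly menu audit"
  },
  "inputs": [
    { "type": "text", "text": "Extra anonymous context for the child." }
  ],
  "input_mode": "append"
}
```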
That child step behaves like a structured CLI invocation:
- `run_vars.year` is equivalent to `--run-var year=...`
- `run_vars.month` is equivalent to `--run-var month=08`
- `run_vars.generate_images` is equivalent to `--run-var generate_images=true`
- `input_overrides.menu_image` is equivalent to `--input-override menu_image=...`
- `input_overrides.review_reason` is equivalent to `--input-override review_reason=...`
- child `inputs` stays the anonymous extra-input list
- child `input_mode` still controls only that anonymous `inputs` list

Use these child-step value shapes:

- `run_vars.<name>`: string, number, boolean, or `{ "var": "..." }`
- `input_overrides.<name>`: string, `{ "var": "..." }`, or `{ "input": "<name>" }`
For schema-backed agents, --input-override and anonymous runtime inputs operate at different layers. This is valid:
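For example:

```sh
./parent_agent \
  --input-text "Focus on the lunch specials for this run" \
  --input-override menu_image=./updated_menu.png
```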
In that case, the root model input list is replaced by the runtime text, but child steps that use { "input": "menu_image" } still receive the named override.
Build In Any Editor
You can build a cargo-ai agent in any editor you want. If you want the fastest execution loop while editing, run the JSON directly:
The supported definition-source options are:
- `cargo ai run --config <path>`
- `cargo ai run <path>` (positional shorthand)
- `cargo ai run --json '<agent-definition-json>'`
- `cat <path> | cargo ai run --stdin`
If you want to check whether the definition is valid before exporting a binary, run:
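```sh
cargo ai hatch my_agent --config ./my_agent.json --check
```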
Those same definition-source options also work with hatch:
- `cargo ai hatch <name> --config <path>`
- `cargo ai hatch <name> --json '<agent-definition-json>'`
- `cat <path> | cargo ai hatch <name> --stdin`
If your config file already matches the agent name, the shorthand works too:
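```sh
# Assumed shorthand: the config is discovered from the matching file name
cargo ai hatch my_agent
```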
When the file checks cleanly, use the Codex workflow below for the fastest iteration loop.
Best First Workflow in Codex
If you want the fastest authoring loop, start in a new folder and let Codex build the agent definition with you.
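A sketch of that flow (`cargo ai new` is referenced later in this README; the exact argument shape is assumed):

```sh
cargo ai new my_agent_project
cd my_agent_project
cargo ai add guidance --style codex
```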
This creates the Cargo AI project boundary first, then installs AGENTS.md plus the helper files under .cargo-ai/guidance/ so Codex knows the Cargo AI contract.
If you already have a folder, use cargo ai init first, then cargo ai add guidance --style codex.
Then tell Codex: I want to build a Cargo AI agent. Describe what the agent should do, what inputs it should accept, what structured output it should return, and any follow-up actions you want.
Ask Codex to:
- build the JSON definition
- run `cargo ai hatch my_agent --config ./my_agent.json --check`
- update the JSON until the check passes
Then review the generated JSON yourself to make sure it matches your intent.
Cargo AI works best when the definition stays small, understandable, and easy to verify as you iterate.
Local Project Tools
Cargo AI can also scaffold project-local tools that agents call through kind: "tool".
When an agent needs new project-local executable code and you have Cargo available, prefer a Rust tool created with cargo ai add tool <name>. Use ad hoc Python, Node, or shell helper scripts only when you explicitly want that shape or the task does not fit the current tool contract.
Tools are normal Rust crates, so they may use crates.io dependencies when needed. Keep dependency choices conservative: prefer stable, focused, actively maintained crates, enable only the features required, avoid Git/path dependencies unless intentional, and keep the tool's Cargo.lock. Before treating a tool as complete, review it as trusted local executable code: validate params, keep errors clear, document filesystem/network/subprocess/credential behavior in the resource profile, and run dependency checks such as cargo tree -e features, cargo audit, or cargo deny check when practical.
This is the current local workflow:
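A sketch (the project-creation argument shape is assumed; `cargo ai add tool <name>` is documented above):

```sh
cargo ai new my_tool_project
cd my_tool_project
cargo ai add tool hello_tool
```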
If you are already inside an existing folder, run cargo ai init first. Add cargo ai add guidance --style codex when you want the Codex guidance bundle.
If you want a project to refuse machine/global tool fallback, set this in .cargo-ai/project.toml:
```toml
[tools]
allow_global_fallback = false
```
If allow_global_fallback is missing, Cargo AI treats that as project-only lookup.
When a project also wants an explicit assembled build root, keep that in the same file under a build profile:
```toml
version = 1 # top-level schema-version key; exact key name assumed

[project]
name = "my_tool_project"
version = "0.1.0"

[tools]
allow_global_fallback = true

[runtime.defaults]
inference_timeout_in_sec = 600
max_runtime_in_sec = 600
max_agent_depth = 5

[build.default]
agent_definitions = ["agents/research.json"]
hatched_agents = ["agents/report.json"]
tools = ["hello_tool"]
assets = ["assets/prompts/"]
```
Use that build section as a direct-edit contract:
- `agent_definitions` - JSON/config files copied into the build output as source definitions
- `hatched_agents` - JSON/config entrypoints hatched into target-specific binaries
- `tools` - project-attached tools that should be rebuilt and packaged into the build output
- `assets` - project-relative files or directories copied into the build output
Keep the lists explicit. Cargo AI does not infer tools from agents during cargo ai build, and the same agent path may appear in both agent_definitions and hatched_agents when you want both the JSON definition and the compiled binary in the assembled output.
[runtime.defaults] is optional. When present, it sets project-level defaults for repeated cargo ai run workflows:
- `inference_timeout_in_sec` - CLI override first, then project default, then selected profile timeout, then built-in default
- `max_runtime_in_sec` - CLI override first, then project default, then built-in default
- `max_agent_depth` - CLI override first, then project default, then built-in default
max_runtime_in_sec and max_agent_depth still cascade to child agents as invocation-tree guardrails. inference_timeout_in_sec stays invocation-local unless you explicitly set a different child profile or child invocation timeout.
That creates:
- `.cargo-ai/project.toml` - Cargo AI project metadata and tool-resolution policy
  - includes a top-level `[project]` section for project/package identity
  - `cargo ai new`/`init` writes `[tools] allow_global_fallback = true` by default
- `.gitignore` - generated artifact ignore rules when VCS is enabled
- `AGENTS.md` plus `.cargo-ai/guidance/` - Codex guidance when you run `cargo ai add guidance --style codex`
  - `tool-authoring.md` stays the workflow overview, while detailed contract, child-agent, and hardening rules live in adjacent guidance files
- `tools/hello_tool/` - normal Rust source for the tool crate, with custom behavior isolated in `src/tool.rs`
  - Cargo AI-owned child-agent helper code isolated in `src/agent_bridge.rs`
- `.cargo-ai/tools/hello_tool/tool.json` - Cargo AI-managed metadata pointing back to the source crate
After you implement the tool's metadata and invoke behavior in tools/hello_tool/src/tool.rs, build and inspect it with:
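```sh
cargo ai tools build hello_tool
cargo ai tools lint hello_tool
```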
cargo ai tools build <name> is a project-local authoring/build step. It materializes the managed artifact inside the current project only. Reusable machine-scope installs are reserved for a later package-backed install flow rather than direct promotion from a local project tool.
cargo ai tools lint <name> is the static source/scaffold check for project-local source-backed tools. It checks Cargo AI-managed metadata linkage plus scaffold/layout expectations without executing the tool's business logic. Machine-only or binary-only tools are currently out of scope for linting.
The tool describe result schema must be a nullable string. A step that sets output_variable still requires the actual invoke response to contain a non-null string result. For UI or background-process tools, keep rendering/artifact creation testable without launching the UI when practical, expose a smoke-test control such as open_window=false, and declare UI/process behavior in the tool resource_profile.
Tool params may declare string, boolean, integer, number, array, or object. For array / object params, Cargo AI validates only the top-level kind before invocation and passes the resolved value through as raw JSON. The tool owns deeper item/object-shape deserialization and validation.
When a parent agent calls a kind: "tool" step, new scaffolded tools also receive a Cargo AI-owned child-agent helper in src/agent_bridge.rs. That helper is available through the InvocationContext argument passed to src/tool.rs, so tool-authored Rust code can call one or more same-project child agents without hand-rolling subprocess flags, depth handling, or runtime-budget propagation. Tool execution itself does not consume an extra agent-depth hop; child-agent calls made from the tool consume depth exactly as if the parent had called those children directly.
For validation, use Cargo AI surfaces first: cargo test only for crate-local Rust logic, then cargo ai tools lint, build, check, and hatch --check, with live leaf runtime checks before live parent orchestration and real side effects last. Treat ps or kill as exceptional cleanup for a specific long-lived child process left behind by your own live test run, not as a normal part of authoring-time validation.
Treat .cargo-ai/tools/... and .cargo-ai/agents/... as Cargo AI-owned generated state, not as author-owned scratch space. Do not manually copy, move, symlink, or delete files there during debugging. If you do touch managed state by hand, stop using that workspace as proof of a Cargo AI artifact bug and rerun the repro from a fresh workspace or freshly regenerated managed state instead. When a workflow mixes deterministic fan-out logic with live sources, prove the hardcoded-input path first and add URL/provider behavior only after the local orchestration path is already green.
Then wire it into your agent JSON:
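A sketch (`kind: "tool"`, `params`, and `output_variable` are documented here; the `tool` name-field spelling is an assumption):

```json
{
  "kind": "tool",
  "tool": "hello_tool",
  "params": { "greeting": "hello from the agent" },
  "output_variable": "hello_result"
}
```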
Validate the pairing with:
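```sh
cargo ai hatch my_agent --config ./my_agent.json --check
```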
By default, run, hatch --check, and hatch perform an upfront tool audit against the tool describe contract. They resolve tools from the current Cargo AI project first and then from Cargo AI Home only when .cargo-ai/project.toml allows global fallback. Use --ignore-tools only when you intentionally want to skip that startup audit and accept failure later if a tool step is actually reached.
Ordinary cargo ai hatch exports only the binary. It does not copy tool artifacts next to the output. When you run a hatched binary from inside a Cargo AI project, it uses the same project-first lookup contract. Outside a project context, it can use machine-installed tools but not project-only tools.
When you want an explicit assembled local package root instead of a single exported binary, use:
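```sh
cargo ai build
# add --output-dir <dir> to override the default target/cargo-ai/build/... root
```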
cargo ai build reads .cargo-ai/project.toml, selects a build profile (defaults to default), and assembles a target-specific build root under target/cargo-ai/build/<profile>/<target>/ unless you override it with --output-dir.
Phase 2 build rules are intentionally strict:
- only project-attached tools listed in `[build.<profile>].tools` are eligible
- machine-only tools are not pulled into the build automatically
- if a listed tool exists only in Cargo AI Home, `cargo ai build` fails and tells you to attach/install it into the project first
- build outputs get their own generated `.cargo-ai/project.toml`, `.cargo-ai/tools/...`, copied agent definitions/assets, and root-level hatched binaries so the assembled folder is inspectable and runnable as a package root
When you want a portable source package instead of a target-specific runnable build root, use:
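```sh
cargo ai package
# add --output-dir <dir> to override target/cargo-ai/package/<profile>/
```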
cargo ai package also reads .cargo-ai/project.toml, reuses the selected [build.<profile>] section directly, and assembles a source-portable package root under target/cargo-ai/package/<profile>/ unless you override it with --output-dir.
Phase 3A package rules stay narrow on purpose:
- `package` does not invent a second selector; it reuses `agent_definitions`, `hatched_agents`, `tools`, and `assets` from the build profile
- both `agent_definitions` and `hatched_agents` are copied into the package as JSON source definitions
- listed tools must already be project-attached and source-backed; machine-only tools are rejected with attach/install guidance
- packaged tools keep source metadata under `.cargo-ai/tools/...` and source crates under their project-relative paths, but they do not include built binaries
- package outputs get their own generated `.cargo-ai/project.toml` plus `cargo-ai-package.toml` so the folder is inspectable and can be treated as a portable project snapshot
- when the source project declares `[project].name` and `[project].version`, package output carries those values into both generated manifests for later publish/pull identity
Account-Backed Flows
After registration, you can use Cargo AI as more than a local hatching tool:
- store and retrieve agent definitions through your account
- run hosted definitions directly through the interpreted runtime
- hatch from your own hosted definitions
- hatch public definitions from another owner's handle
- use account-aware email workflows
Examples:
```sh
# Run your own hosted definition directly
# Hatch your own hosted definition
# Run a public definition from another handle
# Validate scaffold and compile path without exporting a binary
# Hatch a public definition from another handle
```
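In the same order as those labels, a hedged sketch (the hosted-identity form `<handle>/<definition>` and its positional placement are assumptions; `run`, `hatch`, and `--check` are documented):

```sh
cargo ai run <your-handle>/<definition>
cargo ai hatch <agent-name> <your-handle>/<definition>
cargo ai run <other-handle>/<public-definition>
cargo ai hatch <agent-name> <other-handle>/<public-definition> --check
cargo ai hatch <agent-name> <other-handle>/<public-definition>
```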
Project packages use a separate account surface:
```sh
# List your published projects
# List another owner's public projects
# Publish the current project package (developer-tools build)
# Pull the latest published package from another owner
```
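In the same order, a hedged sketch (the `cargo ai project` stem is an assumption; `publish`, `list`, `pull`, `--owner-handle`, and `--version` are named below):

```sh
cargo ai project list
cargo ai project list --owner-handle <handle>
cargo ai project publish
cargo ai project pull --owner-handle <handle>
```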
Account-project rules are intentionally different from account agents:
- `publish` packages the current project first, then uploads the resulting package archive
- published project identity comes from `.cargo-ai/project.toml` `[project].name` and `[project].version`
- `list` with `--owner-handle <handle>` only returns that owner's public projects
- `pull` defaults to the latest published version unless you pass `--version <semver>`
- pulled packages restore a project-shaped folder locally; they do not expose agent-style definition-path identities in the backend
- after `pull`, `.cargo-ai/project.toml` remains the working project config and the pulled package receipt is preserved under `.cargo-ai/origin/cargo-ai-package.toml`
- pulled tools are restored as source-backed project content; materialize a needed tool with `cargo ai tools build <tool-name>` or assemble the runnable build root with `cargo ai build`
- the current publish path works best when the final package stays at or below about 5.5 MiB; keep packaged assets minimal and avoid large sample inputs unless they are required in the package itself
- if you add non-trivial assets to `[build.<profile>].assets`, run `cargo ai package` and inspect the reported package, archive, and request sizes before treating the project as publish-ready
Where To Go Next
When you want deeper details, use these files:
- Versioning and releases:
- Examples:
- JSON/schema reference:
- Actions and authoring patterns:
- Hatch/check workflow:
- Troubleshooting:
Notes
- `cargo ai hatch --check` validates scaffold and compile behavior with `cargo check` without exporting a binary.
- Generated binaries use your configured/default profile unless you override runtime flags.
- Standalone recipients do not need Cargo AI installed if they run the binary with explicit runtime flags such as `--server`, `--model`, optional `--url`, optional `--token`, and optional `--render-mode`.
- `--profile <name>` is strict for generated binaries: if the named profile is missing, the run fails closed instead of falling back to another profile or to profileless auth.
- For the standalone OpenAI account path, run the generated binary with `--server openai --model <model>` and no `--token`; if a local Codex session is available, the binary reuses it automatically.
- On machines without Cargo AI installed/configured, `./my_agent version` treats local sync comparison as not checked and points users to `./my_agent inspect` for embedded provenance.
- Scheduling is not built into Cargo AI today. To run an agent on a schedule, use your operating system scheduler, such as `cron` on macOS/Linux or Task Scheduler on Windows. We know scheduling matters and expect this area to expand over time.
- Cargo AI recommends manual upgrade via:
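```sh
# Presumably mirrors the install step; crate name assumed
cargo install cargo-ai
```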
License
MIT. See LICENSE.