autogpt 0.3.2

πŸ¦€ A Pure Rust Framework For Building AGIs.
# πŸ“¦ Installation

Welcome! AutoGPT can be installed and run with either Cargo or Docker.

## πŸ“¦ Install From Registry

### βš“ Using Cargo

To install AutoGPT CLI via Cargo, execute the following command:

```sh
cargo install autogpt --all-features
```

To install with specific features (e.g., MoP and Gemini):

```sh
cargo install autogpt --features "cli,gem,mop"
```

### 🐳 Using Docker

To install and run the AutoGPT CLI via Docker, use the following command:

```sh
docker run -it \
  -e GEMINI_API_KEY=<your_gemini_api_key> \
  -e PINECONE_API_KEY=<Your_Pinecone_API_Key> \
  -e PINECONE_INDEX_URL=<Your_Pinecone_Index_URL> \
  --rm --name autogpt kevinrsdev/autogpt man
```

To install and run the OrchGPT CLI via Docker, use the following command:

```sh
docker run -it \
  -e GEMINI_API_KEY=<your_gemini_api_key> \
  -e PINECONE_API_KEY=<Your_Pinecone_API_Key> \
  -e PINECONE_INDEX_URL=<Your_Pinecone_Index_URL> \
  --rm --name orchgpt kevinrsdev/orchgpt
```

## πŸ“¦ Build From Source

Fork/Clone The Repo:

```sh
git clone https://github.com/wiseaidotdev/autogpt.git
```

Navigate to the core autogpt directory:

```sh
cd autogpt/autogpt
```

### βš“ Using Cargo

To run OrchGPT CLI via Cargo, execute:

```sh
cargo run --all-features --bin orchgpt
```

To run AutoGPT CLI via Cargo, execute:

```sh
cargo run --all-features --bin autogpt
```

To run with Mixture of Providers:

```sh
cargo run --features "cli,gem,oai,mop" --bin autogpt -- --mixture
```

### 🐳 Using Docker

Install the [docker buildx plugin](https://docs.docker.com/build/concepts/overview/):

```sh
sudo apt-get update
sudo apt-get install docker-buildx-plugin
```

Once installed, build the `orchgpt` Docker container using BuildKit:

```sh
docker buildx build -f Dockerfile.orchgpt -t orchgpt .
```

Build the `autogpt` Docker container:

```sh
docker buildx build -f Dockerfile.autogpt -t autogpt .
```

Run the `orchgpt` container:

```sh
docker run -i \
  -e GEMINI_API_KEY=<your_gemini_api_key> \
  -e PINECONE_API_KEY=<Your_Pinecone_API_Key> \
  -e PINECONE_INDEX_URL=<Your_Pinecone_Index_URL> \
  -t orchgpt:latest
```

Run the `autogpt` container:

```sh
docker run -i \
  -e GEMINI_API_KEY=<your_gemini_api_key> \
  -e PINECONE_API_KEY=<Your_Pinecone_API_Key> \
  -e PINECONE_INDEX_URL=<Your_Pinecone_Index_URL> \
  -t autogpt:latest
```

Now, you can attach to the container:

```sh
$ docker ps
CONTAINER ID   IMAGE            COMMAND                  CREATED         STATUS         PORTS     NAMES
95bf85357513   autogpt:latest   "/usr/local/bin/auto…"   9 seconds ago   Up 8 seconds             autogpt

$ docker exec -it 95bf85357513 /bin/sh
~ $ ls
workspace
~ $ tree
.
└── workspace
    β”œβ”€β”€ architect
    β”‚   └── diagram.py
    β”œβ”€β”€ backend
    β”‚   β”œβ”€β”€ main.py
    β”‚   └── template.py
    β”œβ”€β”€ designer
    └── frontend
        β”œβ”€β”€ main.py
        └── template.py
```

To stop the running container, open a new terminal and run:

```sh
$ docker stop $(docker ps -q)
```

### 🚒 Using Compose V2

This project uses [**Docker Compose V2**](https://github.com/docker/compose) to define and manage two services:

- `autogpt` - an AutoGPT instance
- `orchgpt` - an orchestrator that interacts with the AutoGPT container

These services are built from separate custom Dockerfiles and run in isolated containers. Docker Compose sets up networking automatically, enabling communication between `autogpt` and `orchgpt` as if they were on the same local network.

#### πŸš€ Build and Run

To build and start both services:

```sh
docker compose up --build
```

This will:

- Build both `autogpt` and `orchgpt` images using their respective Dockerfiles.
- Create and start the containers.
- Allow `autogpt` to communicate with `orchgpt`.

---

## 🧰 SDK Usage

The SDK offers a simple and flexible API for building and running intelligent agents in your applications. Before getting started, **make sure to configure the required environment variables**. For detailed setup, refer to the [Environment Variables Setup](#environment-variables-setup) section.

Once the environment is ready, you can quickly spin up an agent like so:

```rust
use autogpt::prelude::*;

#[tokio::main]
async fn main() {
    let persona = "Lead UX/UI Designer";
    let behavior = "Generate a diagram for a simple web application running on Kubernetes.";

    let agent = ArchitectGPT::new(persona, behavior).await;

    let autogpt = AutoGPT::default()
        .with(agents![agent])
        .build()
        .expect("Failed to build AutoGPT");

    match autogpt.run().await {
        Ok(response) => {
            println!("{}", response);
        }
        Err(err) => {
            eprintln!("Agent error: {:?}", err);
        }
    }
}
```

### πŸ’‘ Example Use Cases

Below are a few example patterns to help you integrate agents for various tasks:

#### πŸ› οΈ Backend API Generator

```rust
use autogpt::prelude::*;

#[tokio::main]
async fn main() {
    let persona = "Backend Developer";
    let behavior = "Develop a weather backend API in Rust using Axum.";

    let agent = BackendGPT::new(persona, behavior, "rust").await;

    let autogpt = AutoGPT::default()
        .with(agents![agent])
        .build()
        .expect("Failed to build AutoGPT");

    match autogpt.run().await {
        Ok(response) => {
            println!("{}", response);
        }
        Err(err) => {
            eprintln!("Agent error: {:?}", err);
        }
    }
}
```

#### 🎨 Frontend UI Designer

```rust
use autogpt::prelude::*;

#[tokio::main]
async fn main() {
    let persona = "UX/UI Designer";
    let behavior = "Generate UI for a weather app using React JS.";

    let agent = FrontendGPT::new(persona, behavior, "javascript").await;

    let autogpt = AutoGPT::default()
        .with(agents![agent])
        .build()
        .expect("Failed to build AutoGPT");

    match autogpt.run().await {
        Ok(response) => {
            println!("{}", response);
        }
        Err(err) => {
            eprintln!("Agent error: {:?}", err);
        }
    }
}
```

#### 🧠 Custom General Purpose Agent

```rust
use autogpt::prelude::*;

/// To be compatible with AutoGPT, an agent must implement the `Agent`,
/// `Functions`, and `AsyncFunctions` traits.
/// These traits can be automatically derived using the `Auto` macro.
/// The agent struct must contain at least the following fields.
#[derive(Debug, Default, Auto)]
pub struct CustomAgent {
    pub agent: AgentGPT,
    pub client: ClientType,
}

#[async_trait]
impl Executor for CustomAgent {
    async fn execute<'a>(
        &'a mut self,
        task: &'a mut Task,
        execute: bool,
        browse: bool,
        max_tries: u64,
    ) -> Result<()> {
        // Custom agent logic to interact with `client` (e.g. OpenAI, Gemini, XAI, etc).

        // Use the `generate` method to send the agent's behavior as a prompt
        // to the configured AI client (e.g., OpenAI, Gemini, Claude). This abstracts
        // over the client implementation and returns a model-generated response.
        let behavior = self.agent.behavior().clone();
        let response = self.generate(behavior.as_ref()).await?;

        // (Optional) Store the result in the task or agent state
        self.agent.add_message(Message {
            role: "assistant".into(),
            content: response.clone().into(),
        });

        // (Optional) Store the result in the vector DB (e.g. pinecone)
        let _ = self
            .save_ltm(Message {
                role: "assistant".into(),
                content: response.clone().into(),
            })
            .await;
        Ok(())
    }
}

#[tokio::main]
async fn main() {
    let persona = "General Purpose Agent";
    let behavior = "Can do anything.";

    let agent = CustomAgent::new(persona.into(), behavior.into());

    let autogpt = AutoGPT::default()
        .with(agents![agent])
        .build()
        .expect("Failed to build AutoGPT");

    match autogpt.run().await {
        Ok(response) => {
            println!("{}", response);
        }
        Err(err) => {
            eprintln!("Agent error: {:?}", err);
        }
    }
}
```

## πŸ› οΈ CLI Usage

The CLI provides a convenient means to interact with the code generation ecosystem. The `autogpt` crate bundles two binaries in a single package:

- `orchgpt` - Launches the orchestrator that manages agents.
- `autogpt` - Launches an agent.

Before utilizing the CLI, you need to **set up environment variables**. These are essential for establishing a secure connection with the orchestrator using the IAC protocol.

### Environment Variables Setup

To configure the CLI and/or SDK environment, follow these steps:

1. **Define Orchestrator Bind Address (Required If Using CLI)**: The orchestrator listens for incoming agent requests over a secure TLS connection. By default, it binds to `0.0.0.0:8443`. You can override this behavior by setting the `ORCHESTRATOR_ADDRESS` environment variable:

   ```sh
   export ORCHESTRATOR_ADDRESS=127.0.0.1:9443
   ```

   This tells the orchestrator to bind to `127.0.0.1` on port `9443` instead of the default.

1. **Define Workspace Path**: GenericGPT defaults to the **current directory** where the CLI is launched as its workspace. Set this to an explicit path when you want generated files scoped to a fixed location:

   ```sh
   export AUTOGPT_WORKSPACE=workspace/
   ```

   For the classic multi-agent workflow (BackendGPT, FrontendGPT, etc.), agents write to subdirectories of the configured root:

   ```sh
   <AUTOGPT_WORKSPACE>/
   β”œβ”€β”€ architect/
   β”œβ”€β”€ backend/
   β”œβ”€β”€ frontend/
   └── designer/
   ```

1. **AI Provider Selection**: You can control which AI client is initialized at runtime using the `AI_PROVIDER` environment variable.
   - `gemini` - Initializes the Gemini client (**requires** the `gem` feature). This is the **default** if `AI_PROVIDER` is not set.
   - `openai` - Initializes the OpenAI client (**requires** the `oai` feature).
   - `anthropic` - Initializes the Anthropic Claude client (**requires** the `cld` feature).
   - `xai` - Initializes the XAI Grok client (**requires** the `xai` feature).
   - `cohere` - Initializes the Cohere client (**requires** the `co` feature).

   ```sh
   # Use Gemini (default, requires `--features gem`)
   export AI_PROVIDER=gemini

   # Use OpenAI (requires `--features oai`)
   export AI_PROVIDER=openai

   # Use Anthropic Claude (requires `--features cld`)
   export AI_PROVIDER=anthropic

   # Use XAI Grok (requires `--features xai`)
   export AI_PROVIDER=xai

   # Use Cohere (requires `--features co`)
   export AI_PROVIDER=cohere
   ```

   Make sure to enable the corresponding Cargo features (`gem`, `oai`, `xai`, `cld`, `co`, or `mop`) when building your project.
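
The selection rule above amounts to an environment lookup with a `gemini` fallback. The sketch below is illustrative only, assuming a hypothetical `provider_from_env` helper; it is not AutoGPT's actual implementation:

```rust
use std::env;

/// Hypothetical helper (not AutoGPT's real API): resolve the provider
/// name, defaulting to "gemini" when `AI_PROVIDER` is unset.
fn provider_from_env(raw: Option<&str>) -> String {
    raw.map(|s| s.to_lowercase())
        .unwrap_or_else(|| "gemini".to_string())
}

fn main() {
    // Read the real environment variable and normalize it.
    let provider = provider_from_env(env::var("AI_PROVIDER").ok().as_deref());
    println!("initializing client for: {provider}");
}
```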

### πŸ”€ Mixture of Providers (MoP) Configuration

When using the `--mixture` flag, AutoGPT will attempt to fan out prompts to **every** provider that is compiled in (via feature flags) and has its corresponding API key set in the environment.

Example: If you have `GEMINI_API_KEY` and `OPENAI_API_KEY` set, and build with `--features gem,oai,mop`, running with `--mixture` will automatically use both providers for every query.
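
The fan-out set can be thought of as "providers whose API key is present" (the feature-flag check is omitted here). A minimal sketch under that assumption; `available_providers` and the key table are illustrative, not the crate's API:

```rust
use std::env;

/// Hypothetical helper: list providers whose API key env var is set.
/// AutoGPT additionally requires the matching Cargo feature to be
/// compiled in; that part is not modeled here.
fn available_providers() -> Vec<&'static str> {
    [
        ("gemini", "GEMINI_API_KEY"),
        ("openai", "OPENAI_API_KEY"),
        ("anthropic", "ANTHROPIC_API_KEY"),
        ("xai", "XAI_API_KEY"),
        ("cohere", "COHERE_API_KEY"),
    ]
    .into_iter()
    .filter(|(_, key)| env::var(key).is_ok())
    .map(|(name, _)| name)
    .collect()
}

fn main() {
    println!("mixture would fan out to: {:?}", available_providers());
}
```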

1. **API Key Configuration**: Set the API key for your chosen provider:

   ```sh
   # Gemini (default)
   export GEMINI_API_KEY=<your_gemini_api_key>

   # OpenAI
   export OPENAI_API_KEY=<your_openai_api_key>

   # Anthropic Claude
   export ANTHROPIC_API_KEY=<your_anthropic_api_key>

   # XAI Grok
   export XAI_API_KEY=<your_xai_api_key>

   # Cohere
   export COHERE_API_KEY=<your_cohere_api_key>
   ```

   Obtain a Gemini API key from [Google AI Studio](https://aistudio.google.com/app/apikey).

1. **Model Override (Optional)**: Override the default model for any provider using provider-specific env vars or the global fallback:

   ```sh
   export GEMINI_MODEL=gemini-2.5-pro-preview-05-06
   export OPENAI_MODEL=gpt-4o
   ```

1. **DesignerGPT Setup (Optional)**: To enable DesignerGPT, you will need to set up the following environment variable:

   ```sh
   export GETIMG_API_KEY=<your_getimg_api_key>
   ```

   Generate an API key from your [GetImg Dashboard](https://dashboard.getimg.ai/api-keys).

1. **MailerGPT Setup (Optional)**: To enable MailerGPT, you will also need to set the following environment variables:

   ```sh
   export NYLAS_SYSTEM_TOKEN=<Your_Nylas_System_Token>
   export NYLAS_CLIENT_ID=<Your_Nylas_Client_ID>
   export NYLAS_CLIENT_SECRET=<Your_Nylas_Client_Secret>
   ```

   Follow [this tutorial](NYLAS.md) for a guide on how to obtain these values.

1. **Pinecone Setup (Optional)**: To persist agents' memory in a vector database, you will need to set up these environment variables:

   ```sh
   export PINECONE_API_KEY=<Your_Pinecone_API_Key>
   export PINECONE_INDEX_URL=<Your_Pinecone_Index_URL>
   ```

   Follow [this tutorial](PINECONE.md) for a guide on how to obtain these values.

### πŸš€ Running the Orchestrator

To launch the orchestrator and start listening for incoming agent connections over TLS, simply run:

```sh
orchgpt
```

### 🧠 Running Agents

#### πŸ€– Interactive Mode (Default)

To launch the **GenericGPT interactive shell**, simply run `autogpt` with no arguments:

```sh
autogpt
```

This opens a conversational AI shell where you can:

- Type any prompt to get an immediate response from the active agent.
- Switch providers at runtime with `/provider`.
- Browse and switch models with `/models`.
- List and resume past sessions with `/sessions`.
- Check current status with `/status`.
- Press `ESC` to interrupt a running generation.
- Type `exit` or `quit` to save the session and close.

#### ⚑ Direct Prompt Mode

For a quick one-shot non-interactive prompt:

```sh
autogpt -p "Explain what a Rust lifetime is"
```

#### 🌐 Networking Mode

To connect to an orchestrator and interact with specialized agents (ArchitectGPT, BackendGPT, etc.), first start the orchestrator, then run:

```sh
autogpt --net
```

This command connects to the orchestrator over a secure TLS connection using the configured address.

Once connected, you can interact with agents using:

```sh
/<agent_name> <action> <input> | <language>
```

For example, to instruct the orchestrator to **create** a new agent, send a command like:

```sh
/arch create "fastapi app" | python
```

This will send a message to the orchestrator with:

- `msg_type`: `"create"`
- `to`: `"ArchitectGPT"`
- `payload_json`: `"fastapi app"`
- `language`: `"python"`

The orchestrator will then initialize and register an `ArchitectGPT` agent ready to perform tasks.
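
The slash-command syntax maps mechanically onto those message fields. Here is a hedged sketch of that mapping; the `ParsedCommand` type, `parse_command` function, and agent-alias table are illustrative assumptions (only `/arch` β†’ `ArchitectGPT` is shown in the docs), not the crate's actual types:

```rust
/// Illustrative only: the fields mirror the message description above.
#[derive(Debug, PartialEq)]
struct ParsedCommand {
    to: String,
    msg_type: String,
    payload_json: String,
    language: String,
}

/// Parse `/<agent_name> <action> <input> | <language>` into a command.
fn parse_command(line: &str) -> Option<ParsedCommand> {
    let line = line.strip_prefix('/')?;
    let (head, language) = line.split_once('|')?;
    let mut parts = head.trim().splitn(3, ' ');
    let agent = parts.next()?;
    let msg_type = parts.next()?;
    let payload = parts.next()?.trim().trim_matches('"');
    // Hypothetical alias table; the real orchestrator may use other names.
    let to = match agent {
        "arch" => "ArchitectGPT",
        "back" => "BackendGPT",
        "front" => "FrontendGPT",
        other => other,
    };
    Some(ParsedCommand {
        to: to.to_string(),
        msg_type: msg_type.to_string(),
        payload_json: payload.to_string(),
        language: language.trim().to_string(),
    })
}

fn main() {
    let cmd = parse_command(r#"/arch create "fastapi app" | python"#).unwrap();
    println!("{cmd:?}");
}
```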

You can also run OrchGPT CLI using Docker:

```sh
docker run -i \
  -e GEMINI_API_KEY=<your_gemini_api_key> \
  -e PINECONE_API_KEY=<Your_Pinecone_API_Key> \
  -e PINECONE_INDEX_URL=<Your_Pinecone_Index_URL> \
  -t kevinrsdev/orchgpt
```

You can also run AutoGPT CLI using Docker:

```sh
docker run -i \
  -e GEMINI_API_KEY=<your_gemini_api_key> \
  -e PINECONE_API_KEY=<Your_Pinecone_API_Key> \
  -e PINECONE_INDEX_URL=<Your_Pinecone_Index_URL> \
  --rm --name autogpt kevinrsdev/autogpt
```

---

Β© 2026 Wise AI Foundation