autogpt 0.1.15

πŸ¦€ A Pure Rust Framework For Building AGIs.
# πŸ“¦ Installation

Welcome! AutoGPT offers seamless integration with both Cargo and Docker for easy installation and usage.

## πŸ“¦ Install From Registry

### βš“ Using Cargo

To install AutoGPT CLI via Cargo, execute the following command:

```sh
cargo install autogpt --all-features
```

### 🐳 Using Docker

To install and run the AutoGPT CLI via Docker, use the following command:

```sh
docker run -it \
  -e GEMINI_API_KEY=<your_gemini_api_key> \
  -e PINECONE_API_KEY=<Your_Pinecone_API_Key> \
  -e PINECONE_INDEX_URL=<Your_Pinecone_Index_URL> \
  --rm --name autogpt kevinrsdev/autogpt:0.1.14 man
```

To install and run the OrchGPT CLI via Docker, use the following command:

```sh
docker run -it \
  -e GEMINI_API_KEY=<your_gemini_api_key> \
  -e PINECONE_API_KEY=<Your_Pinecone_API_Key> \
  -e PINECONE_INDEX_URL=<Your_Pinecone_Index_URL> \
  --rm --name orchgpt kevinrsdev/orchgpt:0.1.14
```

## πŸ“¦ Build From Source

Fork/Clone The Repo:

```sh
git clone https://github.com/kevin-rs/autogpt.git
```

Navigate to the core autogpt directory:

```sh
cd autogpt/autogpt
```

### βš“ Using Cargo

To run OrchGPT CLI via Cargo, execute:

```sh
cargo run --all-features --bin orchgpt
```

To run AutoGPT CLI via Cargo, execute:

```sh
cargo run --all-features --bin autogpt
```

### 🐳 Using Docker

Build the `orchgpt` Docker container:

```sh
docker build -f Dockerfile.orchgpt -t orchgpt .
```

Build the `autogpt` Docker container:

```sh
docker build -f Dockerfile.autogpt -t autogpt .
```

Run the `orchgpt` container:

```sh
docker run -i \
  -e GEMINI_API_KEY=<your_gemini_api_key> \
  -e PINECONE_API_KEY=<Your_Pinecone_API_Key> \
  -e PINECONE_INDEX_URL=<Your_Pinecone_Index_URL> \
  -t orchgpt:latest
```

Run the `autogpt` container:

```sh
docker run -i \
  -e GEMINI_API_KEY=<your_gemini_api_key> \
  -e PINECONE_API_KEY=<Your_Pinecone_API_Key> \
  -e PINECONE_INDEX_URL=<Your_Pinecone_Index_URL> \
  -t autogpt:latest
```

Now, you can attach to the container:

```sh
$ docker ps
CONTAINER ID   IMAGE            COMMAND                  CREATED         STATUS         PORTS     NAMES
95bf85357513   autogpt:latest   "/usr/local/bin/auto…"   9 seconds ago   Up 8 seconds             autogpt

$ docker exec -it 95bf85357513 /bin/sh
~ $ ls
workspace
~ $ tree
.
└── workspace
    β”œβ”€β”€ architect
    β”‚   └── diagram.py
    β”œβ”€β”€ backend
    β”‚   β”œβ”€β”€ main.py
    β”‚   └── template.py
    β”œβ”€β”€ designer
    └── frontend
        β”œβ”€β”€ main.py
        └── template.py
```

To stop the running container, open a new terminal and run:

```sh
$ docker stop $(docker ps -q)
```

### 🚒 Using Compose V2

This project uses [**Docker Compose V2**](https://github.com/docker/compose) to define and manage two services:

- `autogpt` - an AutoGPT instance
- `orchgpt` - an orchestrator that interacts with the AutoGPT container

These services are built from separate custom Dockerfiles and run in isolated containers. Docker Compose sets up networking automatically, enabling communication between `autogpt` and `orchgpt` as if they were on the same local network.

#### πŸš€ Build and Run

To build and start both services:

```sh
docker compose up --build
```

This will:

- Build both `autogpt` and `orchgpt` images using their respective Dockerfiles.
- Create and start the containers.
- Allow `autogpt` to communicate with `orchgpt`.
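A minimal compose file for this setup might look like the following sketch. The service names and Dockerfile names match the repository, but the build contexts and environment wiring are assumptions you may need to adapt to your checkout:

```yaml
services:
  orchgpt:
    build:
      context: .
      dockerfile: Dockerfile.orchgpt
    environment:
      - GEMINI_API_KEY=${GEMINI_API_KEY}
  autogpt:
    build:
      context: .
      dockerfile: Dockerfile.autogpt
    environment:
      - GEMINI_API_KEY=${GEMINI_API_KEY}
    depends_on:
      - orchgpt
```

Because both services share the default Compose network, `autogpt` can reach the orchestrator by the service name `orchgpt` rather than a hard-coded IP.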

---

## 🧰 SDK Usage

The SDK offers a simple and flexible API for building and running intelligent agents in your applications. Before getting started, **make sure to configure the required environment variables**. For detailed setup, refer to the [Environment Variables Setup](#environment-variables-setup) section.

Once the environment is ready, you can quickly spin up an agent like so:

```rust
use autogpt::prelude::*;

#[tokio::main]
async fn main() {
    let position = "Lead UX/UI Designer";

    let prompt = "Generate a diagram for a simple web application running on Kubernetes.";

    let agent = ArchitectGPT::new(prompt, position).await;

    let autogpt = AutoGPT::default()
        .with(agents![agent])
        .build()
        .expect("Failed to build AutoGPT");

    match autogpt.run().await {
        Ok(response) => {
            println!("{}", response);
        }
        Err(err) => {
            eprintln!("Agent error: {:?}", err);
        }
    }
}
```

### πŸ’‘ Example Use Cases

Below are a few example patterns to help you integrate agents for various tasks:

#### πŸ› οΈ Backend API Generator

```rust
use autogpt::prelude::*;

#[tokio::main]
async fn main() {
    let position = "Backend Developer";

    let prompt = "Develop a weather backend API in Rust using Axum.";

    let agent = BackendGPT::new(prompt, position, "rust").await;

    let autogpt = AutoGPT::default()
        .with(agents![agent])
        .build()
        .expect("Failed to build AutoGPT");

    match autogpt.run().await {
        Ok(response) => {
            println!("{}", response);
        }
        Err(err) => {
            eprintln!("Agent error: {:?}", err);
        }
    }
}
```

#### 🎨 Frontend UI Designer

```rust
use autogpt::prelude::*;

#[tokio::main]
async fn main() {
    let position = "UX/UI Designer";

    let prompt = "Generate UI for a weather app using React JS.";

    let agent = FrontendGPT::new(prompt, position, "javascript").await;

    let autogpt = AutoGPT::default()
        .with(agents![agent])
        .build()
        .expect("Failed to build AutoGPT");

    match autogpt.run().await {
        Ok(response) => {
            println!("{}", response);
        }
        Err(err) => {
            eprintln!("Agent error: {:?}", err);
        }
    }
}
```

#### 🧠 Custom General Purpose Agent

```rust
use autogpt::prelude::*;

/// To be compatible with AutoGPT, an agent must implement the `Agent`,
/// `Functions`, and `AsyncFunctions` traits.
/// These traits can be automatically derived using the `Auto` macro.
/// The agent struct must contain at least the following fields.
#[derive(Debug, Default, Auto)]
pub struct CustomAgent {
    objective: Cow<'static, str>,
    position: Cow<'static, str>,
    status: Status,
    agent: AgentGPT,
    client: ClientType,
    memory: Vec<Communication>,
}

#[async_trait]
impl Executor for CustomAgent {
    async fn execute<'a>(
        &'a mut self,
        tasks: &'a mut Task,
        execute: bool,
        browse: bool,
        max_tries: u64,
    ) -> Result<()> {
        // Custom agent logic to interact with `client` (e.g. OpenAI, Gemini, XAI, etc).

        // Use the `generate` method to send the agent's objective as a prompt
        // to the configured AI client (e.g., OpenAI, Gemini, Claude). This abstracts
        // over the client implementation and returns a model-generated response.
        let prompt = self.agent.objective().clone();
        let response = self.generate(prompt.as_ref()).await?;

        // (Optional) Store the result in the task or agent state
        self.agent.add_communication(Communication {
            role: "assistant".into(),
            content: response.clone().into(),
        });

        // (Optional) Store the result in the vector DB (e.g. pinecone)
        let _ = self
            .save_ltm(Communication {
                role: "assistant".into(),
                content: response.clone().into(),
            })
            .await;
        Ok(())
    }
}

#[tokio::main]
async fn main() {
    let position = "General Purpose Agent";

    let prompt = "Can do anything.";

    let agent = CustomAgent::new(prompt.into(), position.into());

    let autogpt = AutoGPT::default()
        .with(agents![agent])
        .build()
        .expect("Failed to build AutoGPT");

    match autogpt.run().await {
        Ok(response) => {
            println!("{}", response);
        }
        Err(err) => {
            eprintln!("Agent error: {:?}", err);
        }
    }
}
```

## πŸ› οΈ CLI Usage

The CLI provides a convenient means to interact with the code generation ecosystem. The `autogpt` crate bundles two binaries in a single package:

- `orchgpt` - Launches the orchestrator that manages agents.
- `autogpt` - Launches an agent.

Before utilizing the CLI, you need to **set up environment variables**. These are essential for establishing a secure connection with the orchestrator using the IAC protocol.

### Environment Variables Setup

To configure the CLI and/or SDK environment, follow these steps:

1. **Define Orchestrator Bind Address (Required If Using CLI)**: The orchestrator listens for incoming agent requests over a secure TLS connection. By default, it binds to `0.0.0.0:8443`. You can override this behavior by setting the `ORCHESTRATOR_ADDRESS` environment variable:

   ```sh
   export ORCHESTRATOR_ADDRESS=127.0.0.1:9443
   ```

   This tells the orchestrator to bind to `127.0.0.1` on port `9443` instead of the default.

1. **Define Workspace Path**: Set up the paths for designer, backend, frontend, and architect workspaces by setting the following environment variable:

   ```sh
   export AUTOGPT_WORKSPACE=workspace/
   ```

   This variable guides the agents on where to generate code within your project structure.

1. **AI Provider Selection**: You can control which AI client is initialized at runtime using the `AI_PROVIDER` environment variable.

   - `xai` - Initializes the XAI Grok client (**requires** the `xai` feature).
   - `openai` - Initializes the OpenAI client (**requires** the `oai` feature).
   - `anthropic` - Initializes the Anthropic Claude client (**requires** the `cld` feature).
   - `gemini` - Initializes the Gemini client (**requires** the `gem` feature). This is the **default** if `AI_PROVIDER` is not set.

   ```sh
   # Use OpenAI (requires `--features oai`)
   export AI_PROVIDER=openai

   # Use Gemini (requires `--features gem`)
   export AI_PROVIDER=gemini

   # Use Anthropic Claude (requires `--features cld`)
   export AI_PROVIDER=anthropic

   # Use XAI Grok (requires `--features xai`)
   export AI_PROVIDER=xai
   ```

   Make sure to enable the corresponding Cargo features (`oai`, `xai`, `cld`, or `gem`) when building your project.

1. **API Key Configuration**: If you use the default Gemini provider, you also need to set the Gemini API key via the following environment variable:

   ```sh
   export GEMINI_API_KEY=<your_gemini_api_key>
   ```

   To obtain your API key, navigate to [Google AI Studio](https://aistudio.google.com/app/apikey) and generate it there. This key allows autogpt to communicate with the Gemini API.

1. **DesignerGPT Setup (Optional)**: To enable DesignerGPT, you will need to set up the following environment variable:

   ```sh
   export GETIMG_API_KEY=<your_getimg_api_key>
   ```

   Generate an API key from your [GetImg Dashboard](https://dashboard.getimg.ai/api-keys).

1. **MailerGPT Setup (Optional)**: To enable MailerGPT, you will also need to set the following environment variables:

   ```sh
   export NYLAS_SYSTEM_TOKEN=<Your_Nylas_System_Token>
   export NYLAS_CLIENT_ID=<Your_Nylas_Client_ID>
   export NYLAS_CLIENT_SECRET=<Your_Nylas_Client_Secret>
   ```

   Follow [this tutorial](NYLAS.md) for a guide on how to obtain these values.

1. **Pinecone Setup (Optional)**: To persist agents' memory in a vector database, you will need to set up these environment variables:

   ```sh
   export PINECONE_API_KEY=<Your_Pinecone_API_Key>
   export PINECONE_INDEX_URL=<Your_Pinecone_Index_URL>
   ```

   Follow [this tutorial](PINECONE.md) for a guide on how to obtain these values.
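For convenience, the variables above can be collected into a single snippet (or an `.env` file) and sourced before launching either binary. The values below are placeholders to replace with your own:

```shell
# Placeholder values - replace with your own before use.
export ORCHESTRATOR_ADDRESS=127.0.0.1:9443   # optional, CLI only
export AUTOGPT_WORKSPACE=workspace/
export AI_PROVIDER=gemini                    # gemini | openai | anthropic | xai
export GEMINI_API_KEY=your_gemini_api_key

# Optional integrations:
export PINECONE_API_KEY=your_pinecone_api_key
export PINECONE_INDEX_URL=your_pinecone_index_url
```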

### πŸš€ Running the Orchestrator

To launch the orchestrator and start listening for incoming agent connections over TLS, simply run:

```sh
orchgpt
```

### 🧠 Running Agents

To start an agent and establish a connection with the orchestrator (either locally or on a remote machine), run:

```sh
autogpt
```

This command launches the agent and connects it to the orchestrator over a secure TLS connection using the configured address.

Once the agent is running, you can interact with it using simple command syntax:

```sh
/<agent_name> <action> <input> | <language>
```

For example, to instruct the orchestrator to **create** a new agent, send a command like:

```sh
/ArchitectGPT create "fastapi app" | python
```

This will send a message to the orchestrator with:

- `msg_type`: `"create"`
- `to`: `"ArchitectGPT"`
- `payload_json`: `"fastapi app"`
- `language`: `"python"`

The orchestrator will then initialize and register an `ArchitectGPT` agent ready to perform tasks.
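As a rough illustration of how such a command line maps onto the four message fields, the sketch below parses the `/<agent_name> <action> <input> | <language>` syntax using only the standard library. This is not the crate's actual parser, just a standalone sketch of the mapping:

```rust
/// Hypothetical parser: splits "/<agent> <action> <input> | <language>"
/// into (to, msg_type, payload_json, language). Not the crate's real parser.
fn parse_command(line: &str) -> Option<(String, String, String, String)> {
    // Commands start with a leading slash.
    let line = line.strip_prefix('/')?;
    // The language, if present, follows the last '|'.
    let (head, language) = match line.rsplit_once('|') {
        Some((h, l)) => (h.trim(), l.trim().to_string()),
        None => (line.trim(), String::new()),
    };
    // First token is the target agent, second is the action,
    // and the remainder is the payload.
    let mut parts = head.splitn(3, ' ');
    let to = parts.next()?.to_string();
    let msg_type = parts.next()?.to_string();
    let payload = parts.next().unwrap_or("").trim().trim_matches('"').to_string();
    Some((to, msg_type, payload, language))
}

fn main() {
    let cmd = r#"/ArchitectGPT create "fastapi app" | python"#;
    let (to, msg_type, payload, language) = parse_command(cmd).unwrap();
    println!("to={to} msg_type={msg_type} payload={payload} language={language}");
}
```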

You can also run OrchGPT CLI using Docker:

```sh
docker run -i \
  -e GEMINI_API_KEY=<your_gemini_api_key> \
  -e PINECONE_API_KEY=<Your_Pinecone_API_Key> \
  -e PINECONE_INDEX_URL=<Your_Pinecone_Index_URL> \
  -t kevinrsdev/orchgpt:0.1.14
```

You can also run AutoGPT CLI using Docker:

```sh
docker run -i \
  -e GEMINI_API_KEY=<your_gemini_api_key> \
  -e PINECONE_API_KEY=<Your_Pinecone_API_Key> \
  -e PINECONE_INDEX_URL=<Your_Pinecone_Index_URL> \
  --rm --name autogpt kevinrsdev/autogpt:0.1.14
```

---