# vespe: Text as a Canvas for LLM Collaboration and Automation

[![License: AGPL v3](https://img.shields.io/badge/License-AGPL_v3-blue.svg)](https://www.gnu.org/licenses/agpl-3.0)
[![Rust](https://img.shields.io/badge/rust-1.70+-orange.svg)](https://www.rust-lang.org)

`vespe` is a powerful command-line interface (CLI) tool designed to facilitate a seamless, collaborative mind-mapping experience between users and Large Language Models (LLMs). It enables you to co-create, refine, and expand your ideas by treating your documents as dynamic, interactive canvases.

## How it Works


![Demo in VSCode](doc/demo_in_vscode_2.gif)

At its core, `vespe` operates on a collection of textual documents, referred to as "contexts." These documents are managed within the `.vespe` folder, which acts as a sidecar to your main project. These are standard Markdown files augmented with special custom commands (tags) that allow for direct interaction with LLMs and dynamic content generation. This approach transforms static documents into living, evolving knowledge bases.

## Why vespe?


`vespe` addresses key challenges in leveraging LLMs effectively:

*   **Context is King**: LLM answers are only as good as the query context provided. `vespe` gives you precise control over what information the LLM receives.
*   **User-Centric Focus**: You, the user, are the expert on your project and its current focus. `vespe` empowers you to guide the LLM with your domain knowledge.
*   **Fine-Grained Control**: `vespe` offers granular control over the context provided to the LLM, allowing you to craft highly relevant and effective prompts.


## Philosophy


`vespe` is built on a few core principles:

*   **Text as a Canvas**: Your documents are not just static files; they are interactive canvases for collaboration with AI. `vespe` empowers you to weave AI-generated content directly into your notes, drafts, and mind maps.
*   **Seamless Integration**: The tool is designed to be unobtrusive. The tag-based syntax is simple and integrates naturally with Markdown, so you can focus on your ideas, not on the tooling.
*   **You are in Control**: `vespe` gives you fine-grained control over the interaction with LLMs. You can choose the provider, shape the prompts, and direct the output.
*   **Iterative and Dynamic**: The `watch` mode and the tag system encourage an iterative workflow. You can refine your prompts, re-run contexts, and evolve your documents in real-time.
*   **Local First**: `vespe` is a local tool that works with your local files. It can be used with local LLMs (like Ollama) for a completely offline experience.

## Table of Contents

- [How it Works](#how-it-works)
- [Why vespe?](#why-vespe)
- [Philosophy](#philosophy)
- [Installation](#installation)
  - [Prerequisites](#prerequisites)
  - [Install `vespe`](#install-vespe)
- [Getting Started: A Quick Glimpse](#getting-started-a-quick-glimpse)
- [Syntax](#syntax)
  - [Example](#example)
- [Core Tags](#core-tags)
  - [@answer](#answer)
  - [@include](#include)
  - [@set](#set)
  - [@forget](#forget)
  - [@repeat](#repeat)
  - [@answer Advanced](#answer-advanced)
  - [@inline](#inline)
  - [@task / @done](#task--done)
- [Templating with Handlebars](#templating-with-handlebars)
  - [Special Variables](#special-variables)
- [Examples](#examples)
- [CLI Usage](#cli-usage)
  - [`vespe init`](#vespe-init)
  - [`vespe context new`](#vespe-context-new)
  - [`vespe context run`](#vespe-context-run)
  - [`vespe context analyze`](#vespe-context-analyze)
  - [`vespe watch`](#vespe-watch)
  - [`vespe project add-aux-path`](#vespe-project-add-aux-path)
  - [`vespe project remove-aux-path`](#vespe-project-remove-aux-path)
  - [`vespe project list-aux-paths`](#vespe-project-list-aux-paths)
- [Piping Data into Contexts](#piping-data-into-contexts)
- [License](#license)
- [NO-WARRANTY](#no-warranty)
- [Contributing](#contributing)

## Installation


To use `vespe`, you'll need to have **Rust** and its package manager, **Cargo**, installed on your system.

### Prerequisites


*   **Rust & Cargo**: Required to build and install the `vespe` CLI. You can get them from the [official Rust website](https://www.rust-lang.org/tools/install).
*   **A Command-Line LLM**: `vespe` works by calling an external command to interact with a Large Language Model. You need to have at least one LLM accessible from your shell.

    Here are a few examples of how you can set this up:

    *   **Google Gemini**: You can use the `gemini` CLI. Make sure it's installed and available in your system's PATH. You can get it from the [gemini-cli repository](https://github.com/google-gemini/gemini-cli). The `@answer` provider command would look like `{ provider: "gemini -y" }`.
    *   **Ollama**: If you run models locally with [Ollama](https://ollama.com/), you can use its CLI. The provider command might be `{ provider: "ollama run mistral" }`.

    Essentially, any command that can take a prompt from standard input and return a response to standard output will work as a `provider`.
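
To see what `vespe` expects of a provider, you can sketch a stand-in: any executable that reads the prompt from stdin and writes a reply to stdout. The script name `my-provider.sh` is illustrative:

```shell
# Create a minimal stand-in provider (hypothetical script name).
# It reads the whole prompt from stdin and writes a canned reply to stdout,
# which is all a vespe provider command needs to do.
cat > my-provider.sh <<'EOF'
#!/bin/sh
prompt=$(cat)   # read the entire prompt from stdin
echo "echo-provider received: $prompt"
EOF
chmod +x my-provider.sh

echo "Tell me something nice!" | ./my-provider.sh
# prints: echo-provider received: Tell me something nice!
```

Such a script could then be wired in as `{ provider: "./my-provider.sh" }`, which is handy for testing contexts without burning API calls.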

### Install `vespe`


Use Cargo to install the `vespe` command-line tool. This will compile it and place the `vespe` executable in your Cargo bin directory, making it available globally in your shell's PATH.

```shell
cargo install vespe
```

If you encounter any issues, ensure your Cargo bin directory is in your system's PATH. You can usually find instructions for this in the Rust installation guide.

After these steps, you should be able to run `vespe` from any directory in your terminal.

## Getting Started: A Quick Glimpse


Let's dive right in with a simple example.

1.  **Initialize your project:**

    Open your terminal, navigate to your project and run:

    ```shell
    vespe init
    ```

    This creates a `.vespe` directory in your project, where all your contexts will be stored.

2.  **Create your first context:**

    A "context" is just a Markdown file where you can interact with the AI. Let's create one called `hello`:

    ```shell
    vespe context new hello
    ```

    This will create a file named `hello.md` inside the `.vespe/contexts` directory.

3.  **Add a prompt and an AI command:**

    Open `hello.md` in your favorite editor and add the following lines:

    ```markdown
    Tell me something nice!

    @answer { provider: "gemini -y -m gemini-2.5-flash" }
    ```

4.  **Run `vespe` to get a response:**

    Now, execute the context:

    ```shell
    vespe context run hello
    ```

    `vespe` will process the file, send the prompt to the Gemini model, and inject the answer directly into your `hello.md` file. It will look something like this:

    ```markdown
    Tell me something nice!

    <!-- answer-a98dc897-1e4b-4361-b530-5c602f358cef:begin { provider: "gemini -y -m gemini-2.5-flash" } -->
    You are an amazing person, and you're capable of achieving wonderful things!
    <!-- answer-a98dc897-1e4b-4361-b530-5c602f358cef:end -->
    ```

And that's it! You've just used `vespe` to collaborate with an LLM on a document.

## Syntax


The power of `vespe` lies in its simple yet powerful tag syntax. Each tag follows a consistent structure that is easy to read and write.

The general syntax for a tag is:

```
@tag_name {key1: "value1", key2: value2} positional_argument
```

Let's break it down:

*   **`@tag_name`**: This is the command you want to execute (e.g., `@answer`, `@include`). It always starts with an `@` symbol.
*   **`{...}` (Parameters)**: This is a JSON-like object containing key-value pairs that configure the tag's behavior. `vespe` uses a more flexible version of JSON for convenience:
    *   Quotes around keys are optional (e.g., `provider` is the same as `"provider"`).
    *   Quotes around string values are optional if the value doesn't contain spaces or special characters.
*   **Positional Arguments**: Some tags can also take additional arguments after the parameter block. For example, `@include` takes a file path.
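
As an illustration of the relaxed syntax, the following two parameter blocks are equivalent (the model name is illustrative):

```markdown
@answer { provider: "ollama run mistral", dynamic: true }

@answer { "provider": "ollama run mistral", "dynamic": true }
```

Keys may be quoted or not, and `true` needs no quotes because it contains no spaces or special characters; the provider string does, so it must be quoted.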

### `%%` Comments


You can use `%%` at the start of a line to add single-line comments anywhere in your context files. These comments are ignored by `vespe` and are not sent to the LLM.

**Usage:**

```markdown
%% This is a single-line comment that vespe will ignore.
Tell me a fact about {{topic}}. 

@answer { provider: "gemini -y" }
```

### Example


Consider the `@answer` tag from our "Getting Started" example:

```markdown
@answer { provider: "gemini -y -m gemini-2.5-flash" }
```

*   **Tag Name**: `@answer`
*   **Parameters**: `{ provider: "gemini -y -m gemini-2.5-flash" }`
    *   The key is `provider`.
    *   The value is `"gemini -y -m gemini-2.5-flash"`. In this case, quotes are necessary because the value contains spaces.

This flexible syntax makes writing `vespe` commands feel natural and unobtrusive within your Markdown files.

## Core Tags


`vespe` provides a set of powerful tags to control the interaction with LLMs and manage your content.

### @answer


The `@answer` tag is the primary way to interact with an LLM. It sends the content preceding it as a prompt and injects the model's response into the document.

**Usage:**
```markdown
What is the capital of France?

@answer { provider: "gemini -y" }
```

### @include


The `@include` tag statically inserts the content of another context file. This is useful for reusing prompts or structuring complex contexts.
File lookup happens in the `.vespe` directory and in any auxiliary paths specified in the project settings or on the command line.

**Usage:**
```markdown
@include my_common_prompts/preamble.md

Now, do something specific.

@answer { provider: "gemini -y" }
```

You can also pass data to the included file, which can be used for templating with [Handlebars](https://handlebarsjs.com/) syntax.

**`data-example.md`:**
```markdown
Hello, {{name}}!
```

**`main.md`:**
```markdown
@include data-example.md { data: { name: "World" } }
```
This will resolve to "Hello, World!".

### @set


The `@set` tag defines default parameters for all subsequent tags in the current context. This helps to avoid repetition.

**Usage:**
```markdown
@set { provider: "gemini -y -m gemini-2.5-pro" }

What is the meaning of life?
@answer
<!-- This will use the provider set above -->


Tell me a joke.
@answer
<!-- This will also use the same provider -->

```
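
A sketch of how defaults and explicit parameters might combine; this assumes per-tag parameters take precedence over `@set` defaults, and the model names are illustrative:

```markdown
@set { provider: "gemini -y" }

Summarize this document.
@answer
<!-- Uses the default provider from @set -->

Translate the summary to French.
@answer { provider: "ollama run mistral" }
<!-- Explicit parameters override the defaults -->
```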

### @forget


The `@forget` tag clears all the context that came before it. This is like starting a fresh conversation with the LLM within the same file.

**Usage:**
```markdown
--- First Conversation ---
Prompt for the first question.
@answer { provider: "gemini -y" }

@forget

--- Second Conversation ---
This prompt is sent without the context of the first one.
@answer { provider: "gemini -y" }
```

### @repeat


The `@repeat` tag forces the re-execution of the dynamic anchor it is placed within (such as `@answer` or `@inline`). The surrounding context is re-read, so you can correct the query before re-running. `@repeat` can also modify the parameters of the anchor it repeats: the repeated anchor inherits the parameters given to the `@repeat` tag.

**Usage:**
```markdown
<!-- answer-some-uuid:begin { provider: "gemini -y" } -->

Initial answer from the model.

@repeat { provider: "ollama run qwen2.5:1.5b" }
<!-- answer-some-uuid:end -->

```
On the next run, the `@answer` block will be executed again with the new parameters, re-reading the surrounding context.


### @answer Advanced


This section details advanced parameters for the `@answer` tag, allowing for fine-tuned control over prompt construction and output management.

**Multiple Choices:**
Force the LLM to choose from a predefined set of options. `vespe` will insert the content associated with the chosen option.

```markdown
What is the best programming language for my project?

@answer {
  provider: "gemini -y",
  choose: {
    "Rust": "Rust is the best because of its safety and performance.",
    "Python": "Python is the best for its simplicity and vast libraries."
  }
}
```

**Dynamic Answers:**
You can make an answer dynamic, so it automatically updates if the input context changes.

```markdown
@answer { provider: "gemini -y", dynamic: true }
```

**Advanced Prompt Control:**

You can fine-tune the prompt sent to the LLM and manage the output using a set of advanced parameters for `input`, `prefix`, and `postfix`. These parameters give you powerful control over context composition.

**Input Redirection (`input`)**

The `input` parameter replaces the current prompt (the content before the `@answer` tag) with content from one or more external files.

It can be specified in three ways:

1.  **A single file path**:

    ```markdown
    input: "path/to/your/context.md"
    ```

2.  **A file with template data**:

    If your context file is a Handlebars template, you can provide data to it.

    ```markdown
    input: {
      context: "path/to/template.md",
      data: { topic: "Rust", year: 2025 }
    }
    ```

3.  **A list of contexts**:

    You can concatenate multiple files and templates into a single prompt.

    ```markdown
    input: [
      "prompts/intro.md",
      {
        context: "prompts/question-template.md",
        data: { question: "What is the capital of France?" }
      },
      "prompts/outro.md"
    ]
    ```

*Example:*

Let's say you have a file `my_prompts/question.md`:

```markdown
Tell me about {{topic}}.
```

You can use it in another file like this:

```markdown
@answer {
  provider: "gemini -y",
  input: {
    context: "my_prompts/question.md",
    data: { topic: "Rust" }
  }
}
```

This will send "Tell me about Rust." to the LLM.

**Prompt Augmentation (`prefix` and `postfix`)**

The `prefix` and `postfix` parameters allow you to add content before or after the main prompt. This is ideal for adding system messages, instructions, or formatting rules without cluttering your main context.

Both `prefix` and `postfix` support the exact same formats as the `input` parameter (a single file path, a file with template data, or a list of contexts).

-   `prefix`: Prepends content to the prompt as a system message. Often used for system instructions or to set a persona for the LLM.
-   `postfix`: Appends content to the prompt. Useful for adding constraints, output formatting instructions, or final reminders.

*Example:*

`my_prompts/persona.md`:
```markdown
You are a helpful assistant that speaks like a pirate.
```

`my_prompts/format.md`:
```markdown
Please answer in less than 20 words.
```

`main.md`:
```markdown
What is the capital of France?

@answer {
  provider: "gemini -y",
  prefix: "my_prompts/persona.md",
  postfix: "my_prompts/format.md"
}
```

This constructs a prompt where the LLM is first instructed to act like a pirate, then given the question, and finally told to keep the answer short.

**Output Redirection:**

-   `output: "path/to/file"`: Redirects the LLM's response to a specified context instead of injecting it back into the current document. The content of the `@answer` tag will be cleared.

*Example:*

```markdown
Summarize the following text and save it to a file.

@answer {
  provider: "gemini -y",
  output: "output/summary.txt"
}
```

The LLM's summary will be saved in `.vespe/contexts/output/summary.txt`.

**Agent Persona and Conversation Flow:**

-   `with_agent_names: true`: When a conversation history involves multiple agents (answers with different `prefix`/`prefix_data`), this option assigns a unique, consistent name to each agent (e.g., "Agent-A", "Agent-B"). This helps the LLM maintain a coherent persona for each participant. The system prompt is also prefixed with "You are <agent_name>" to reinforce the current agent's identity.
-   `with_invitation: true`: Appends "Assistant:" (or "Assistant <agent_name>:" if `with_agent_names` is active) at the end of the prompt. This serves as a clear signal for the LLM to begin its response, guiding the turn-taking in the conversation.

### @inline


The `@inline` tag dynamically includes content from another file. Unlike `@include`, it creates a dynamic anchor, and the file's content is inlined into the current context. The anchor can be re-executed by a `@repeat` tag, which makes `@inline` useful for instantiating templates.

**Usage:**
```markdown
@inline path/to/template
```

Like `@include`, it also supports passing `data` for templating.
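
Assuming `@inline` accepts the same `data` parameter syntax as `@include` (the template path and variable below are illustrative), instantiating a template might look like:

```markdown
@inline checklists/review-template { data: { component: "parser" } }
```

Because `@inline` creates a dynamic anchor, the rendered template can later be refreshed with a `@repeat` tag placed inside it.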

### @task / @done


The `@task` and `@done` tags work together to manage sequential tasks, like following a plan or a list of steps. They allow you to execute a large task one step at a time, ensuring the LLM only focuses on the current action while preserving the history of what's already been completed.

**How it Works:**

1.  **`@task`**: Place this tag at the beginning of your sequential plan. It creates a dynamic anchor that will wrap the completed steps. The content before the `@task` tag acts as the main instruction or goal for the entire sequence.
2.  **`@done`**: Place this tag immediately after the `@answer` for the step you want to execute.

When `vespe` runs, the `@done` tag does two things:
*   It signals that the current step is complete.
*   It moves the completed step (including the prompt and the LLM's answer) *inside* the `@task` anchor.

The effect is that for the next run, the completed step is "hidden" from the LLM's context. The LLM will only see the main instruction and the *next* step in the sequence, preventing confusion and keeping the focus on the current action.

**Usage Example:**

Imagine a file where you are working through a checklist.

**Initial setup (before the first run):**
```markdown
I'm going to make coffee by following these steps. I will tell you when I'm done with a step.

@task

1. Open the moka pot.

2. Clean the moka pot.

3. Fill the base with water.
```

**After the first run, the file becomes:**
```markdown
I'm going to make coffee by following these steps. I will tell you when I'm done with a step.

<!-- task-some-uuid:begin -->
<!-- task-some-uuid:end -->

1. Open the moka pot.

2. Clean the moka pot.

3. Fill the base with water.
```

Now we're ready to answer the first step:

```markdown
I'm going to make coffee by following these steps. I will tell you when I'm done with a step.

<!-- task-some-uuid:begin -->

<!-- task-some-uuid:end -->


1. Open the moka pot.
@answer

2. Clean the moka pot.

3. Fill the base with water.
```

**After the second run, the file becomes:**
```markdown
I'm going to make coffee by following these steps. I will tell you when I'm done with a step.

<!-- task-some-uuid:begin -->
<!-- task-some-uuid:end -->

1. Open the moka pot.
<!-- answer-another-uuid:begin -->
Okay, the moka pot is open.
<!-- answer-another-uuid:end -->

2. Clean the moka pot.

3. Fill the base with water.
```

Now that the first step is done, we can archive it in the task anchor using `@done`:
```markdown
I'm going to make coffee by following these steps. I will tell you when I'm done with a step.

<!-- task-some-uuid:begin -->

<!-- task-some-uuid:end -->


1. Open the moka pot.
<!-- answer-another-uuid:begin -->
Okay, the moka pot is open.
<!-- answer-another-uuid:end -->
@done

2. Clean the moka pot.

3. Fill the base with water.
```

**After the third run, the file becomes:**
```markdown
I'm going to make coffee by following these steps. I will tell you when I'm done with a step.

<!-- task-some-uuid:begin -->

1. Open the moka pot.
<!-- answer-another-uuid:begin -->
Okay, the moka pot is open.
<!-- answer-another-uuid:end -->
<!-- task-some-uuid:end -->

2. Clean the moka pot.

3. Fill the base with water.
```

For the next execution, you would move the `@answer` and `@done` tags to be after step 2. The LLM would be prompted with the main instruction and "2. Clean the moka pot.", but it would not see the context from step 1.

## Templating with Handlebars


All contexts in `vespe` are processed as [Handlebars](https://handlebarsjs.com/) templates. This means you can use Handlebars syntax to create dynamic and reusable content within your Markdown files. You can inject values using the `data` parameter within `input`, `prefix`, `postfix` blocks, or with the `data` parameter on an `@include` or `@inline` tag.

### Special Variables


`vespe` provides several special variables that can be used within your Handlebars templates:

*   `{{$1}}`, `{{$2}}`, ... `{{$n}}`: Represent the positional command-line arguments passed to `vespe context run`. For example, if you run `vespe context run my-context arg1 arg2`, `{{$1}}` will be `arg1` and `{{$2}}` will be `arg2`.
*   `{{$args}}`: Represents all positional command-line arguments as a space-separated string. Using the previous example, `{{$args}}` would be `arg1 arg2`.
*   `{{$stdin}}`: Represents the input received from `stdin`. If you pipe content into `vespe context run`, this variable holds that content.
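
Putting these together, a context file (call it `greet.md`; the name and provider are illustrative) can combine positional arguments and piped input:

```markdown
Write a short {{$1}} greeting for the following names:

{{$stdin}}

@answer { provider: "gemini -y" }
```

Invoked as `echo "Ada, Grace" | vespe context run greet formal`, `{{$1}}` resolves to `formal` and `{{$stdin}}` to the piped names.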

## Examples


You can find many examples in the `examples/` directory.

## CLI Usage


`vespe` provides a simple yet powerful command-line interface to manage your projects and contexts.

### `vespe init`


Initializes a new `vespe` project in the current directory or a specified path. This command creates the `.vespe` directory where all your contexts and configurations are stored. Git integration is enabled for the project if the specified path is inside a git repository.

**Usage:**

```shell
vespe init [--project-root <PATH>]
```

*   `--project-root <PATH>`: (Optional) The path to the directory where the project should be initialized. Defaults to the current directory.

### `vespe context new`


Creates a new context file within your project.

**Usage:**

```shell
vespe context new [NAME] [--today] [--context-template <FILE>]
```

*   `[NAME]`: The name for the new context (e.g., `my-feature/story`). This will create a corresponding Markdown file.
*   `--today`: A convenient flag to create a diary-style context for the current date (e.g., `diary/2025-11-10.md`).
*   `--context-template <FILE>`: (Optional) Path to a custom Handlebars template file to use for the new context.

### `vespe context run`


Executes a context file. `vespe` processes the tags within the file and outputs the resulting context once all tags have been executed.

**Usage:**

```shell
# Execute a context by name

vespe context run [NAME] [--today] [-D <KEY>=<VALUE>]... [-I <PATH>]... [-O <PATH>] [ARGS]...

# Pipe content into a context

cat my-data.txt | vespe context run [NAME]
```

*   `[NAME]`: The name of the context to execute.
*   `--today`: A flag to execute the context for the current date.
*   `-D <KEY>=<VALUE>`: (Optional) Defines a variable that can be used within the context via Handlebars syntax (e.g., `{{$KEY}}`). This is useful for passing dynamic values to your templates. For example, running with `-D name=World` allows you to use `{{$name}}` in your context. This option can be specified multiple times.
*   `-I <PATH>`: (Optional) Adds an auxiliary directory path to search for input files (e.g., for `@include`, `@inline`, or the `input`/`prefix`/`postfix` parameters of `@answer`). When resolving a file, `vespe` first checks the project's root path and then searches the specified auxiliary paths in order. This allows you to organize and reuse context files from shared locations. This option can be specified multiple times.
*   `-O <PATH>`, `--output-path <PATH>`: (Optional) Specifies a directory where output files should be written. When an `@answer` tag uses the `output:` parameter, the resulting file will be created in this directory instead of the default `.vespe/contexts` location. This is useful for directing generated content to a specific folder.
*   `[ARGS]...`: (Optional) A list of string arguments that can be accessed within the context file using Handlebars syntax (e.g., `{{$1}}` for the first argument, `{{$2}}` for the second, and so on; `{{$args}}` for all arguments as a space-separated string).
*   **Piped Input**: The `run` command can also receive text from `stdin`. This input is available within the context via the `{{$stdin}}` Handlebars variable.

### `vespe context analyze`


Analyzes a specified context file and generates a report on the status of all its anchors (`@answer`, `@inline`, `@task`, etc.). This is useful for inspecting the state of dynamic content, such as viewing the query and reply for an `@answer` anchor.

**Usage:**

```shell
vespe context analyze <NAME> [--filter-uuid <UUID_PREFIX>]
```

*   `<NAME>`: The name of the context to analyze.
*   `--filter-uuid <UUID_PREFIX>`: (Optional) Filters the report to show only the anchors whose UUID starts with the specified prefix. This is useful for focusing on a specific anchor.

### `vespe watch`


Starts a watcher that monitors your context files for any changes. When a file is modified, `vespe` automatically re-executes it, providing a live-editing experience.

**Usage:**

```shell
vespe watch [--project-root <PATH>]
```

This is very useful for iterative development, allowing you to see the results of your changes in real-time.

### `vespe project add-aux-path`


Adds a persistent auxiliary search path to the project's configuration. This is useful for permanently linking shared directories of contexts or templates.

**Usage:**

```shell
vespe project add-aux-path <PATH>
```

*   `<PATH>`: The directory path to add to the project's auxiliary search paths.

### `vespe project remove-aux-path`


Removes a persistent auxiliary search path from the project's configuration.

**Usage:**

```shell
vespe project remove-aux-path <PATH>
```

*   `<PATH>`: The directory path to remove from the project's auxiliary search paths.

### `vespe project list-aux-paths`


Lists all the persistent auxiliary search paths configured for the project.

**Usage:**

```shell
vespe project list-aux-paths
```

## Piping Data into Contexts


You can pipe data directly into a context:
```shell
cat logs/error.log | vespe context run analyze-errors
```

**analyze-errors.md:**
```markdown
Analyze these error logs and suggest fixes:

{{$stdin}}

@answer { provider: "gemini -y" }
```

This is powerful for processing command output, log files, or any text data.

## License


This project is licensed under the AGPL-3.0 license. See the [LICENSE-AGPL3.md](LICENSE-AGPL3.md) file for details.

## NO-WARRANTY


THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

Please be aware that this software interacts with Large Language Models (LLMs) in an unsupervised manner ("yolo mode"). The output generated by LLMs can be unpredictable, inaccurate, or even harmful. By using this software, you acknowledge and accept full responsibility for any consequences arising from its use, including but not limited to the content generated by LLMs and any actions taken based on that content. Exercise caution and critical judgment when interpreting and utilizing the output.

## Contributing


Contributions are welcome! If you want to contribute to `vespe`, please feel free to open an issue or submit a pull request.