do-memory-mcp 0.1.31

Model Context Protocol (MCP) server for AI agents
# Batch Operations for MCP Server

## Overview

Batch operations allow you to execute multiple MCP tool calls in a single request, dramatically improving performance for complex workflows. This feature provides:

- **3-5x faster execution** for multi-tool workflows
- **Dependency management** with automatic DAG validation
- **Parallel execution** of independent operations
- **Partial failure handling** - continue on errors
- **Flexible execution modes** - parallel, sequential, or fail-fast

## Quick Start

### Basic Parallel Execution

Execute multiple independent operations concurrently:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "batch/execute",
  "params": {
    "operations": [
      {
        "id": "query1",
        "tool": "query_memory",
        "arguments": {
          "query": "authentication patterns",
          "domain": "web-api",
          "limit": 5
        }
      },
      {
        "id": "metrics1",
        "tool": "get_metrics",
        "arguments": {
          "metric_type": "performance"
        }
      },
      {
        "id": "health1",
        "tool": "health_check",
        "arguments": {}
      }
    ],
    "mode": "parallel",
    "max_parallel": 10
  }
}
```

**Result**: All 3 operations execute concurrently, completing in ~100ms instead of ~300ms.
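
On the client side, the envelope above can be assembled with a small helper. This is a hypothetical convenience function (`buildBatchRequest` is not part of the server API); it just wraps a list of operations in the JSON-RPC 2.0 envelope the `batch/execute` method expects:

```javascript
// Hypothetical client-side helper: wraps operations in the JSON-RPC 2.0
// envelope for the batch/execute method shown above.
function buildBatchRequest(id, operations, { mode = "parallel", maxParallel = 10 } = {}) {
  return {
    jsonrpc: "2.0",
    id,
    method: "batch/execute",
    params: { operations, mode, max_parallel: maxParallel },
  };
}

// Example: two independent operations, sent as one request.
const request = buildBatchRequest(1, [
  { id: "query1", tool: "query_memory", arguments: { query: "authentication patterns", domain: "web-api", limit: 5 } },
  { id: "health1", tool: "health_check", arguments: {} },
]);
console.log(JSON.stringify(request, null, 2));
```

How the request is transported (stdio, HTTP, etc.) depends on your MCP client; the helper only builds the payload.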

### Operations with Dependencies

Create complex workflows with dependency chains:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "batch/execute",
  "params": {
    "operations": [
      {
        "id": "fetch",
        "tool": "query_memory",
        "arguments": {
          "query": "error handling patterns",
          "domain": "api"
        }
      },
      {
        "id": "analyze",
        "tool": "analyze_patterns",
        "arguments": {
          "task_type": "error_recovery",
          "min_success_rate": 0.7
        },
        "depends_on": ["fetch"]
      },
      {
        "id": "recommend",
        "tool": "recommend_patterns",
        "arguments": {
          "task_description": "implement retry logic",
          "domain": "api"
        },
        "depends_on": ["analyze"]
      }
    ],
    "mode": "parallel"
  }
}
```

**Result**: The operations run in order (fetch → analyze → recommend). Even in parallel mode, the system sequences them automatically based on their declared dependencies, so no manual ordering is needed.


## Execution Modes

### Parallel Mode (Default)

Executes independent operations concurrently while respecting dependencies.

```json
{
  "mode": "parallel",
  "max_parallel": 10
}
```

- **Best for**: Maximum throughput
- **Behavior**: Independent operations run concurrently
- **Errors**: Continues executing remaining operations
- **Use case**: Most workflows benefit from this mode

### Sequential Mode

Executes all operations one after another in insertion order.

```json
{
  "mode": "sequential"
}
```

- **Best for**: Operations with side effects or strict ordering requirements
- **Behavior**: Executes operations in the exact order provided
- **Errors**: Continues executing all operations
- **Use case**: When order matters and parallelism isn't safe

### Fail-Fast Mode

Stops execution on the first error encountered.

```json
{
  "mode": "failfast"
}
```

- **Best for**: Validation workflows or critical operation chains
- **Behavior**: Stops immediately when any operation fails
- **Errors**: Returns partial results up to the failure point
- **Use case**: Pre-flight checks, validation pipelines

## Response Format

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "results": [
      {
        "id": "query1",
        "success": true,
        "result": { "content": [{ "type": "text", "text": "..." }] },
        "duration_ms": 45
      },
      {
        "id": "metrics1",
        "success": true,
        "result": { "content": [{ "type": "text", "text": "..." }] },
        "duration_ms": 23
      },
      {
        "id": "health1",
        "success": false,
        "error": {
          "code": -32000,
          "message": "Service unavailable"
        },
        "duration_ms": 12
      }
    ],
    "total_duration_ms": 48,
    "success_count": 2,
    "failure_count": 1,
    "stats": {
      "total_operations": 3,
      "parallel_executed": 3,
      "sequential_executed": 0,
      "avg_duration_ms": 26.7
    }
  }
}
```
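
The summary fields in the response can be derived from the per-operation results. As a sketch (this mirrors, but is not, the server's implementation), `success_count`, `failure_count`, and `avg_duration_ms` fall out of a single pass over `results`:

```javascript
// Derive the response summary fields from the per-operation results array.
function summarizeResults(results) {
  const successCount = results.filter((r) => r.success).length;
  const totalMs = results.reduce((sum, r) => sum + r.duration_ms, 0);
  return {
    success_count: successCount,
    failure_count: results.length - successCount,
    // Round to one decimal place, matching the stats block above.
    avg_duration_ms: results.length ? +(totalMs / results.length).toFixed(1) : 0,
  };
}

const summary = summarizeResults([
  { id: "query1", success: true, duration_ms: 45 },
  { id: "metrics1", success: true, duration_ms: 23 },
  { id: "health1", success: false, duration_ms: 12 },
]);
// summary: { success_count: 2, failure_count: 1, avg_duration_ms: 26.7 }
```

Note that `total_duration_ms` (48 in the example) is wall-clock time, not the sum of the individual durations, because the operations overlapped.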

## Advanced Features

### Complex Dependency Graphs

Create workflows with multiple parallel branches:

```json
{
  "operations": [
    {
      "id": "init",
      "tool": "configure_embeddings",
      "arguments": { "provider": "openai" }
    },
    {
      "id": "query_code",
      "tool": "query_memory",
      "arguments": { "domain": "code_generation" },
      "depends_on": ["init"]
    },
    {
      "id": "query_debug",
      "tool": "query_memory",
      "arguments": { "domain": "debugging" },
      "depends_on": ["init"]
    },
    {
      "id": "merge_results",
      "tool": "analyze_patterns",
      "arguments": { "task_type": "combined" },
      "depends_on": ["query_code", "query_debug"]
    }
  ],
  "mode": "parallel"
}
```

**Execution flow**:
```
       init
      /    \
 query_code  query_debug
      \    /
   merge_results
```

### Partial Failure Handling

By default, batch operations continue on failure:

```json
{
  "operations": [
    {"id": "op1", "tool": "query_memory", "arguments": {...}},
    {"id": "op2", "tool": "invalid_tool", "arguments": {...}},
    {"id": "op3", "tool": "get_metrics", "arguments": {...}}
  ],
  "mode": "parallel"
}
```

**Result**: Operations 1 and 3 succeed, operation 2 fails, but all results are returned.

### Rate Limiting with max_parallel

Control concurrency to avoid overwhelming the system:

```json
{
  "operations": [...],  // 20 operations
  "mode": "parallel",
  "max_parallel": 5     // Only 5 execute at once
}
```

## Performance Characteristics

| Workflow Type | Sequential Time | Batch Time | Speedup |
|--------------|----------------|------------|---------|
| 3 independent queries | ~300ms | ~100ms | **3x** |
| 5 analysis operations | ~500ms | ~100ms | **5x** |
| Complex DAG (8 ops) | ~800ms | ~200ms | **4x** |

**Real-world example**:
- Traditional: 5 separate requests = 5 round-trips + 5× latency
- Batch: 1 request = 1 round-trip + parallel execution

## Error Handling

### Validation Errors

Returned before any execution:

```json
{
  "error": {
    "code": -32602,
    "message": "Invalid batch request params",
    "data": {
      "details": "Circular dependency detected: op1 -> op2 -> op1"
    }
  }
}
```

### Execution Errors

Returned with partial results:

```json
{
  "results": [
    {"id": "op1", "success": true, ...},
    {"id": "op2", "success": false, "error": {...}}
  ],
  "success_count": 1,
  "failure_count": 1
}
```

## Validation Rules

1. **Unique IDs**: Each operation must have a unique ID
2. **Valid Dependencies**: All `depends_on` IDs must exist
3. **Acyclic Graph**: No circular dependencies allowed
4. **Valid Tools**: All tool names must be recognized
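
The first three rules can be sketched as a client-side pre-check (the server performs its own validation; this is an illustrative reimplementation, with cycles detected by a recursive depth-first search):

```javascript
// Check unique IDs, known dependencies, and acyclicity.
// Returns an error message, or null if the batch passes these checks.
function validateBatch(operations) {
  const ids = new Set();
  for (const op of operations) {
    if (ids.has(op.id)) return `Duplicate operation ID: ${op.id}`;
    ids.add(op.id);
  }
  for (const op of operations) {
    for (const dep of op.depends_on ?? []) {
      if (!ids.has(dep)) return `Unknown dependency: ${dep}`;
    }
  }
  const deps = new Map(operations.map((op) => [op.id, op.depends_on ?? []]));
  const state = new Map(); // id -> "visiting" | "done"
  const visit = (id, path) => {
    if (state.get(id) === "done") return null;
    if (state.get(id) === "visiting") {
      return `Circular dependency detected: ${[...path, id].join(" -> ")}`;
    }
    state.set(id, "visiting");
    for (const dep of deps.get(id)) {
      const err = visit(dep, [...path, id]);
      if (err) return err;
    }
    state.set(id, "done");
    return null;
  };
  for (const op of operations) {
    const err = visit(op.id, []);
    if (err) return err;
  }
  return null;
}
```

A cycle such as `op1 -> op2 -> op1` is reported in the same format as the validation error example above.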

## Best Practices

### 1. Use Meaningful Operation IDs

```json
// Good
{"id": "fetch_user_data", ...}
{"id": "validate_permissions", ...}

// Bad
{"id": "op1", ...}
{"id": "op2", ...}
```

### 2. Group Related Operations

Batch operations that are part of the same logical workflow:

```json
// Good: Related workflow
["fetch_patterns", "analyze_effectiveness", "recommend_best"]

// Bad: Unrelated operations
["random_query1", "unrelated_metric", "different_domain"]
```

### 3. Set Appropriate max_parallel

```json
// Light operations: higher parallelism
{"max_parallel": 20}

// Heavy operations (embeddings, analysis): lower parallelism
{"max_parallel": 3}
```

### 4. Use Dependencies Wisely

Only specify dependencies when truly needed:

```json
// Good: Real dependency
{
  "id": "analyze",
  "depends_on": ["fetch_data"]  // Needs data first
}

// Bad: Unnecessary dependency
{
  "id": "independent_query",
  "depends_on": ["unrelated_op"]  // Slows down execution
}
```

### 5. Handle Failures Gracefully

Check individual operation success:

```javascript
const response = await executeBatch(...);

for (const result of response.results) {
  if (result.success) {
    processResult(result.result);
  } else {
    console.error(`Operation ${result.id} failed:`, result.error.message);
  }
}
```

## Use Cases

### 1. Dashboard Data Loading

Load all dashboard metrics in one request:

```json
{
  "operations": [
    {"id": "episode_count", "tool": "get_metrics", "arguments": {"metric_type": "episodes"}},
    {"id": "performance", "tool": "get_metrics", "arguments": {"metric_type": "performance"}},
    {"id": "health", "tool": "health_check", "arguments": {}},
    {"id": "recent_patterns", "tool": "search_patterns", "arguments": {"query": "recent", "limit": 5}}
  ],
  "mode": "parallel",
  "max_parallel": 10
}
```

### 2. Multi-Source Query

Query multiple domains simultaneously:

```json
{
  "operations": [
    {"id": "web_patterns", "tool": "query_memory", "arguments": {"domain": "web-api"}},
    {"id": "cli_patterns", "tool": "query_memory", "arguments": {"domain": "cli"}},
    {"id": "db_patterns", "tool": "query_memory", "arguments": {"domain": "database"}}
  ],
  "mode": "parallel"
}
```

### 3. Pipeline Workflow

Execute a multi-stage pipeline:

```json
{
  "operations": [
    {"id": "configure", "tool": "configure_embeddings", "arguments": {...}},
    {"id": "query", "tool": "query_semantic_memory", "arguments": {...}, "depends_on": ["configure"]},
    {"id": "analyze", "tool": "advanced_pattern_analysis", "arguments": {...}, "depends_on": ["query"]},
    {"id": "recommend", "tool": "recommend_patterns", "arguments": {...}, "depends_on": ["analyze"]}
  ],
  "mode": "parallel"
}
```

### 4. Pre-Flight Validation

Validate multiple conditions before proceeding:

```json
{
  "operations": [
    {"id": "check_health", "tool": "health_check", "arguments": {}},
    {"id": "test_embeddings", "tool": "test_embeddings", "arguments": {}},
    {"id": "verify_storage", "tool": "get_metrics", "arguments": {"metric_type": "system"}}
  ],
  "mode": "failfast"  // Stop on first failure
}
```

## Limitations

1. **Maximum Operations**: Recommended limit of 50 operations per batch
2. **Timeout**: Individual operations subject to normal timeout rules
3. **No Streaming**: Results returned after all operations complete
4. **Memory Usage**: All results held in memory until completion

## Comparison with JSON-RPC Batch

MCP batch operations are **different** from standard JSON-RPC 2.0 batch requests:

| Feature | JSON-RPC Batch | MCP Batch Operations |
|---------|----------------|---------------------|
| Dependency Management | ❌ No | ✅ Yes |
| Parallel Execution | ❌ Sequential | ✅ Parallel |
| Partial Failure | ✅ Yes | ✅ Yes |
| Execution Modes | ❌ One mode | ✅ Three modes |
| Performance Gains | 🟡 Moderate | ✅ 3-5x |

## Examples

See `do-memory-mcp/examples/batch_operations_demo.rs` for runnable examples.

## Testing

Run batch operation tests:

```bash
cargo test --package do-memory-mcp --test batch_operations_test
```

The suite contains 11 tests covering:
- Parallel execution
- Dependency management
- Error handling
- Execution modes
- Performance characteristics

## Future Enhancements

Planned features:

- **Streaming results**: Return results as they complete
- **Conditional execution**: Skip operations based on prior results
- **Result interpolation**: Pass results between operations
- **Transaction support**: All-or-nothing semantics
- **Progress callbacks**: Real-time execution updates

## Support

For issues or questions:
- GitHub Issues: [memory system repository]
- Documentation: `do-memory-mcp/README.md`
- Examples: `do-memory-mcp/examples/`