# Me And My Friends
A Rust TUI that orchestrates conversations between AI "advisors" (CFO, CTO, CMO, etc.) backed by Ollama, Gemini, OpenAI, or Claude CLI. Get diverse perspectives through council discussions, one-on-one chats, and deep research mode, with RAG-powered knowledge using Qdrant.

## Features
- Multi-Advisor Council — Multiple AI personas discuss your question from different angles, then a chairman synthesizes the best answer
- Provider Flexibility — Mix and match models: Ollama for local privacy, Gemini for speed, Groq for cost, Claude for research
- Knowledge Base (RAG) — Index your documents so advisors have context about your domain
- Living Council — Advisors debate and iterate until they reach consensus
- Beautiful TUI — Full terminal interface with streaming responses, history, and settings
## Quick Demo: Cat Advisory Council
Try MAMF with a fun demo featuring a council of cat experts.
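One plausible way to launch it relies on MAMF preferring a local `./mamf.yaml` config (see Configuration); this assumes the demo directory ships such a file:

```shell
# Hypothetical invocation; demos/cat-council/README.md has the real steps
cd demos/cat-council
mamf tui
```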
The council includes:
- 🐱 Whiskers McPurrface — Obsessive cat lover
- 🐕 Rex Barkington — Reluctant dog person convert
- 🩺 Dr. Mittens DVM — Evidence-based veterinarian
- 🤧 Sneezey Johnson — Allergic but found workarounds
- 🎓 Professor Meowington PhD — Chairman who synthesizes all views
See `demos/cat-council/README.md` for full setup.
## Installation

Install from crates.io (recommended), or build from source.
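A sketch of both install paths, assuming the crate is published under the name `mamf` (unverified):

```shell
# From crates.io (recommended); crate name assumed to be `mamf`
cargo install mamf

# Or from source, inside a clone of the repository
cargo install --path .
```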
### Prerequisites

- Ollama (recommended for local inference): install from https://ollama.ai
- Qdrant (for the knowledge base)
- Claude CLI (optional, for deep research)
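Assuming a typical local setup (Docker for Qdrant, npm for the Claude CLI), the pieces can be brought up roughly like this:

```shell
# Ollama: install from https://ollama.ai, then start the server
ollama serve
ollama pull phi4:14b            # a chat model used in the example config
ollama pull nomic-embed-text    # default embedding model for RAG

# Qdrant via Docker; 6334 is the gRPC port the config expects
docker run -p 6333:6333 -p 6334:6334 qdrant/qdrant

# Optional: Claude CLI for deep research
npm install -g @anthropic-ai/claude-code
```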
## Quick Start

Initialize a config file, then launch the TUI or use the CLI directly.
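A sketch of that flow; only `mamf tui` is documented elsewhere in this README, so the `init` and `ask` subcommands are assumptions:

```shell
# Initialize config file (hypothetical subcommand)
mamf init

# Launch the TUI
mamf tui

# Or use the CLI directly (hypothetical subcommand)
mamf ask "How should we price the new plan?"
```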
## How It Works

### 1. Define Your Advisors
Each advisor has a persona, expertise area, and backing LLM:
    advisors:
      cfo:
        name: "Chief Financial Officer"
        emoji: "💰"
        model: "llama-3.3-70b-versatile"
        provider: groq
        temperature: 0.3
        system_prompt: "You are a seasoned CFO focused on financial sustainability..."
### 2. Ask Your Question
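For example, using the hypothetical `ask` subcommand (the real invocation may differ):

```shell
mamf ask "Should we raise a Series A now, or bootstrap for another year?"
```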
### 3. Get Diverse Perspectives
Each advisor responds from their expertise:
- CFO → Runway analysis, burn rate, dilution concerns
- CTO → Technical debt impact, team scaling needs
- Investor → Market timing, valuation expectations
- Wildcard → Unconventional alternatives
### 4. Chairman Synthesizes
The chairman weighs all perspectives and provides a balanced recommendation.
## Supported Providers

| Provider | Type | Setup |
|---|---|---|
| Ollama | Local/Self-hosted | `ollama serve` at `localhost:11434` |
| Google Gemini | Cloud | `GOOGLE_API_KEY` or config |
| OpenAI | Cloud | `OPENAI_API_KEY` or config |
| OpenRouter | Cloud (Multi-model) | `OPENROUTER_API_KEY` or config |
| Groq | Cloud (Fast) | `GROQ_API_KEY` or config |
| Claude CLI | Local CLI | `npm install -g @anthropic-ai/claude-code` |
### Embedding Providers (for RAG)

| Provider | Model | Setup |
|---|---|---|
| Ollama | `nomic-embed-text` | Default, local |
| Voyage AI | `voyage-2` | `VOYAGE_API_KEY` |
| Google Gemini | `text-embedding-004` | `GOOGLE_API_KEY` |
## TUI Interface

Launch with `mamf tui`. Navigate between screens using function keys.
### Dashboard (F1)

### Council Discussion (F2)
Start a discussion and watch advisors respond in real-time:

View completed responses with RAG source references:

Expand individual advisor responses for detailed reading:

See which knowledge base documents informed the response:

### One-on-One Chat (F3)
Have a private conversation with a single advisor:

### Session History (F4)
Review and continue past discussions:

### Settings (F5)
Configure providers, advisors, and RAG settings:

### Knowledge Base (F6)
Index documents and query your knowledge base:

### Navigation Keys

| Key | Screen | Description |
|---|---|---|
| F1 | Dashboard | Overview and quick actions |
| F2 | Discussion | Council discussions |
| F3 | Advisor | One-on-one chat with single advisor |
| F4 | History | Past sessions |
| F5 | Settings | Configuration |
| F6 | Knowledge | RAG knowledge base |
| q | — | Quit (from dashboard) |
| Ctrl+C | — | Force quit |
### Discussion Screen Controls

| Key | Action |
|---|---|
| Ctrl+1 | Council mode (all advisors) |
| Ctrl+2 | Focus mode (filtered advisors) |
| Ctrl+3 | Synthesis mode (quick summary) |
| Ctrl+4 | Deep mode (multiple rounds) |
| +/- | Adjust rounds (in Deep mode) |
| Tab | Switch focus |
| Enter | Send message |
| Esc | Cancel/Back |
### Navigation

| Key | Action |
|---|---|
| j/↓ | Move down |
| k/↑ | Move up |
| PgUp/PgDn | Scroll pages |
## Discussion Modes

### Council Mode (Default)

All advisors respond sequentially, and the chairman synthesizes at the end.

### Focus Mode

Filter the council down to the advisors relevant to a topic.

### Living Council

Advisors discuss with each other until they reach consensus.

### Deep Mode

Multiple rounds of discussion for complex topics.

### One-on-One

Chat with a single advisor.

### Research Mode

Deep research using the Claude CLI with MCP tools.
## Knowledge Base (RAG)

Index your documents so advisors have context about your domain. From the CLI you can index a directory of markdown files, query the knowledge base, and view statistics.
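A hypothetical shape for the knowledge-base commands (the subcommand names are assumptions):

```shell
# Index a directory of markdown files
mamf kb index ./docs

# Query the knowledge base
mamf kb query "What is our pricing strategy?"

# View statistics
mamf kb stats
```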
RAG context is automatically injected into advisor prompts when `auto_inject: true` (the default).
### Knowledge Base Structure

Organize your docs with YAML frontmatter:

    ---
    title: Pricing Strategy
    category: business
    tags: [pricing, revenue, monetization]
    ---

    Content here...
## Configuration

Config file location: `~/.config/mamf/config.yaml` or `./mamf.yaml` (the local file takes priority).
    providers:
      ollama:
        base_url: "http://localhost:11434"
      google:
        api_key: "AIza..."  # Or use GOOGLE_API_KEY env var
      openai:
        api_key: "sk-..."  # Or use OPENAI_API_KEY env var
      groq:
        api_key: "gsk_..."  # Or use GROQ_API_KEY env var
      claude_cli:
        timeout_secs: 600
        model: "claude-sonnet-4-5-20250929"

    # Assign models to advisors
    advisors:
      cfo:
        model: "llama-3.3-70b-versatile"
        provider: groq
        temperature: 0.3
        order: 1
      cto:
        model: "phi4:14b"
        provider: ollama
        temperature: 0.4
        order: 2
      chairman:
        model: "gemini-2.5-flash"
        provider: google
        temperature: 0.5
        order: 100  # Always last (synthesis)

    # RAG configuration
    rag:
      qdrant_url: "http://localhost:6334"  # gRPC port
      collection: "mamf_docs"
      embedding_provider: ollama
      embedding_model: "nomic-embed-text"
      auto_inject: true
      min_relevance: 0.5
      top_k: 5

    defaults:
      timeout_secs: 120
      max_tokens: 4096
      stream: true
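As an illustration of what `min_relevance` and `top_k` mean in practice, here is a minimal Rust sketch (not MAMF's actual implementation) of filtering retrieved chunks by score and keeping only the highest-scoring k:

```rust
/// Hypothetical sketch of RAG context selection implied by the
/// `min_relevance` and `top_k` settings; not MAMF's actual code.
fn select_context(mut hits: Vec<(String, f32)>, min_relevance: f32, top_k: usize) -> Vec<String> {
    // Drop chunks below the relevance threshold.
    hits.retain(|(_, score)| *score >= min_relevance);
    // Sort by score, highest first.
    hits.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    // Keep at most `top_k` chunks, discarding the scores.
    hits.into_iter().take(top_k).map(|(text, _)| text).collect()
}

fn main() {
    let hits = vec![
        ("pricing.md: ...".to_string(), 0.82),
        ("roadmap.md: ...".to_string(), 0.41), // below min_relevance 0.5, dropped
        ("funding.md: ...".to_string(), 0.67),
    ];
    // Same thresholds as the example config: min_relevance 0.5, top_k 5.
    println!("{:?}", select_context(hits, 0.5, 5));
}
```

Chunks that survive this filter would then be prepended to the advisor's prompt as context.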
## Built-in Advisors

| ID | Role | Expertise | Default Temp |
|---|---|---|---|
| `cfo` | Chief Financial Officer | Finance, funding, runway, pricing | 0.3 |
| `cto` | Chief Technology Officer | Technical, architecture, security | 0.4 |
| `cmo` | Chief Marketing Officer | Marketing, branding, growth | 0.7 |
| `coo` | Chief Operations Officer | Operations, execution, process | 0.4 |
| `cpo` | Chief Product Officer | Product strategy, roadmap, UX | 0.5 |
| `chro` | Chief HR Officer | People, culture, hiring | 0.6 |
| `legal` | General Counsel | Legal, compliance, contracts | 0.2 |
| `investor` | Board Advisor | Investment, valuation, exits | 0.6 |
| `strategy` | Strategy Consultant | Long-term planning, market analysis | 0.5 |
| `innovation` | R&D Lead | Unconventional ideas, disruption | 0.8 |
| `customer` | Customer Advocate | User needs, feedback, satisfaction | 0.5 |
| `wildcard` | Devil's Advocate | Contrarian views, challenge assumptions | 0.95 |
| `chairman` | Board Chairman | Synthesis, balanced summary | 0.5 |
## Session Management

You can list recent sessions, continue a past session, or export a session to markdown.
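Hypothetical command shapes for session management (the subcommand names and the `<id>` argument are assumptions):

```shell
# List recent sessions
mamf sessions

# Continue a session
mamf continue <id>

# Export to markdown
mamf export <id>
```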
## Creating Custom Councils
MAMF is flexible—create councils for any domain:
| Use Case | Example Advisors |
|---|---|
| Startup | CFO, CTO, Investor, Legal |
| Game Dev | Designer, Programmer, Artist, QA |
| Writing | Editor, Critic, Fan, Publisher |
| Health | Doctor, Nutritionist, Trainer, Patient |
| Cat Care | Vet, Breeder, Shelter Worker, Cat Lover |
See `demos/cat-council/` for a complete example.
## Architecture

    src/
    ├── config/      # YAML configuration loading
    ├── providers/   # LLM provider abstraction
    │   ├── ollama.rs
    │   ├── google.rs
    │   ├── openai.rs
    │   ├── openrouter.rs
    │   ├── groq.rs
    │   └── claude_cli.rs
    ├── advisors/    # Advisor personas and registry
    ├── session/     # Conversation orchestration
    ├── rag/         # Knowledge base (Qdrant + embeddings)
    ├── storage/     # SQLite persistence
    ├── cli/         # Command-line interface
    └── tui/         # Ratatui terminal UI
        └── screens/ # Dashboard, Discussion, Advisor, etc.
## Development

    ./build.sh                 # Build (uses build.sh for correct linker config)
    cargo test                 # Run tests
    RUST_LOG=debug cargo run   # Debug logging
    cargo fmt && cargo clippy  # Format and lint
## License
MIT