# Mermaid - Open Source AI Pair Programmer
An open-source AI pair programmer CLI that provides an interactive chat interface with full agentic coding capabilities. Uses local Ollama models for fast, private, and efficient coding assistance.
## Features
- Native Tool Calling: Uses Ollama's tool calling API for structured, reliable actions (v0.2.0+)
- Local Model Support: Use Ollama for fast, private code assistance
- Multiple Local Models: Switch between different Ollama models mid-session without losing context
- Project Aware: Automatically loads and understands your entire project context
- True Agency: Can read, write, execute commands, and manage git
- Privacy First: Run 100% locally with Ollama - your code never leaves your machine
- Interactive TUI: Beautiful terminal interface with Claude Code-inspired aesthetics
- Real-time Streaming: See responses as they're generated
- Smart Context: Respects .gitignore and intelligently manages token limits
- Web Search: Integrated local Searxng for documentation and current information
- Rootless Containers: Secure Podman deployment with no daemon overhead
## Quick Start

### Prerequisites

- Rust toolchain (required for building from source)
- Ollama (required for running local AI models)
- Podman (optional, for web search via Searxng)
## Installation

### Quick Install (Recommended)

If you already have Rust:

If starting from scratch (installs everything):
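A sketch of the two paths (the crate name `mermaid-cli` and the installer URL below are assumptions, not confirmed by this README; check the project repository for the exact commands):

```shell
# Already have Rust: install straight from crates.io
# (crate name is an assumption)
cargo install mermaid-cli

# Starting from scratch: run the project's installer script
# (placeholder URL -- see the repository for the real one)
curl -fsSL https://example.com/mermaid/install.sh | sh
```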
This one-liner installs:
- Rust (if needed)
- Ollama (if needed)
- Mermaid CLI from crates.io
- llama3.1:8b model (4.7GB, tool calling compatible)
- Configures your PATH
After installation, just run `mermaid`.
### Manual Install (Advanced)

Step-by-step installation with full control:

1. Install Rust
2. Install Ollama
3. Install Mermaid from crates.io
4. Download a compatible model
5. Run Mermaid
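Sketched out, the five steps look roughly like this (the `rustup` and Ollama installer commands are the standard upstream ones; the crate name `mermaid-cli` is an assumption):

```shell
# 1. Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# 2. Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# 3. Install Mermaid from crates.io (crate name assumed)
cargo install mermaid-cli

# 4. Download a compatible model
ollama pull llama3.1:8b

# 5. Run Mermaid
mermaid
```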
### Updating Mermaid

To update to the latest version, reinstall from crates.io, or re-run the one-liner installer (which also updates Ollama if needed).
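A likely shape for the crates.io route (crate name `mermaid-cli` is an assumption):

```shell
# Reinstall the latest published version over the current one
cargo install mermaid-cli --force
```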
Basic Usage
# Start Mermaid with default model
# Use a specific model
# List available models
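For example (the `--model` flag follows the provider/model format used throughout this README; `ollama list` shows what is installed in Ollama):

```shell
# Start Mermaid with the default model
mermaid

# Use a specific model
mermaid --model ollama/qwen2.5-coder:7b

# List models installed in Ollama
ollama list
```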
### Interactive Commands

Once in the chat interface:

- `i` - Enter insert mode (type your message)
- `Enter` - Send message (in insert mode)
- `Esc` - Return to normal mode
- `:` - Enter command mode
- `Tab` - Toggle file sidebar
- `Ctrl+C` - Quit
### Command Mode

- `:help` - Show all commands
- `:model <name>` - Switch to a different model
- `:clear` - Clear chat history
- `:sidebar` - Toggle file tree
- `:quit` - Exit Mermaid
## Configuration
### Environment Variables (.env file)

Set your default model configuration:

```
MERMAID_DEFAULT_MODEL=ollama/tinyllama
```
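To create the file from the shell:

```shell
# Write a project-local .env with the default model setting
cat > .env <<'EOF'
MERMAID_DEFAULT_MODEL=ollama/tinyllama
EOF
```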
### Application Configuration

Located at ~/.config/mermaid/config.toml:

```toml
[model]
default = "ollama/deepseek-coder:33b" # provider/model format
temperature = 0.7
max_tokens = 4096

[ui]
theme = "dark"
streaming = true

[context]
max_files = 100
max_tokens = 75000
```
### Project Configuration
Create .mermaid/config.toml in your project root to override global settings.
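For example, to pin a different model for one project (the `[model]` table and `default` key here are assumptions about the config schema):

```shell
# Create a per-project override in the repository root
mkdir -p .mermaid
cat > .mermaid/config.toml <<'EOF'
# Hypothetical override -- key names assume the global config schema
[model]
default = "ollama/qwen2.5-coder:7b"
EOF
```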
## Model Compatibility
Mermaid uses Ollama for local model support with native tool calling (v0.2.0+).
### Verified Compatible Models (Tool Calling Support)
Models with native Ollama tool calling support that can execute file operations, commands, and git actions:
**Recommended for Coding:**

- `llama3.1:8b` - Fast, excellent tool calling (4.7GB)
- `llama3.1:70b` - Best quality, slower (40GB)
- `qwen2.5-coder:7b` - Optimized for code (4.7GB)
- `qwen2.5-coder:14b` - Excellent coding (9.0GB)
- `qwen2.5-coder:32b` - Elite coding (19GB)
- `mistral-nemo:12b` - Balanced performance (7.1GB)
**Other Compatible Models:**

- `llama3.2:1b` - Ultra-fast, limited capabilities (1.3GB)
- `llama3.2:3b` - Fast, decent quality (2.0GB)
- `firefunction-v2:70b` - Specialized for function calling (40GB)
### Models Without Tool Calling

These models can chat but cannot execute actions (a text-based fallback is coming in v0.2.1):

- `deepseek-coder:33b` - Excellent for code, no tool support
- `codellama` - Good for code, no tool support
- `tinyllama` - Ultra-fast, no tool support
- Most other Ollama models
### Installing Models

Pull a compatible model with Ollama, check what is installed, then point Mermaid at it.
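For example:

```shell
# Install a compatible model
ollama pull llama3.1:8b

# List installed models
ollama list

# Use with Mermaid
mermaid --model ollama/llama3.1:8b
```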
### Cloud Models (Ollama Cloud)

Access massive models on datacenter hardware:

- `qwen3-coder:480b-cloud` - 480B params, elite coding
- `kimi-k2-thinking:cloud` - 1T params, advanced reasoning
- `deepseek-v3.1:671b-cloud` - 671B params, largest

Note: Cloud models require an API key from ollama.com/cloud
## Example Workflows

### Code Generation

```
You: Create a REST API endpoint for user authentication

Mermaid: I'll create a REST API endpoint for user authentication. Let me set up a basic auth endpoint with JWT tokens.

[Creates files, shows code, explains implementation]
```

### Code Review

```
You: Review my changes in src/main.rs

Mermaid: I'll review the changes in src/main.rs. Let me check the diff first.

[Analyzes code, suggests improvements, identifies issues]
```

### Debugging

```
You: The tests are failing, can you help?

Mermaid: I'll help you debug the failing tests. Let me first run them to see the errors.

[Runs tests, analyzes errors, fixes issues]
```

### Refactoring

```
You: Refactor this function to use async/await

Mermaid: I'll refactor this function to use the async/await pattern.

[Shows original code, explains changes, implements refactoring]
```
## Features in Action

### Agent Capabilities (Native Tool Calling)
Mermaid uses Ollama's native tool calling API for structured, reliable actions:
**Available Tools:**

- `read_file` - Read any file (text, PDF, images with vision models)
- `write_file` - Create or update files in the project
- `delete_file` - Delete a file from the project directory
- `create_directory` - Create a new directory
- `execute_command` - Execute shell commands and see output
- `git_status` - Check git working tree status
- `git_diff` - View changes in files
- `git_commit` - Create commits with proper messages
- `web_search` - Search the web via local Searxng
**How It Works:**

1. The model receives tool definitions as JSON Schema
2. The model calls tools when needed (structured function calls)
3. Mermaid executes the tool and returns the results
4. The model continues with the results in its context
5. All tool calls are shown in the UI with clear summaries
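Under the hood this is Ollama's standard tool-calling request shape. A minimal curl sketch against a local Ollama server (the tool schema here is illustrative; Mermaid's real definitions may differ):

```shell
# Requires a running Ollama server and a tool-calling model already pulled.
curl -s http://localhost:11434/api/chat -d '{
  "model": "llama3.1:8b",
  "messages": [{"role": "user", "content": "What files changed?"}],
  "stream": false,
  "tools": [{
    "type": "function",
    "function": {
      "name": "git_status",
      "description": "Check git working tree status",
      "parameters": {"type": "object", "properties": {}, "required": []}
    }
  }]
}'
# A capable model replies with message.tool_calls instead of plain text;
# the caller runs the tool and sends the result back as a "tool" role message.
```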
### Project Context

Mermaid automatically:

- Scans your project directory
- Respects `.gitignore` patterns
- Loads relevant source files
- Understands project structure (Cargo.toml, package.json, etc.)
- Manages token limits intelligently
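The `.gitignore`-respecting part of that scan can be approximated from the shell with plain git (an illustration, not Mermaid's actual implementation):

```shell
# Set up a toy project with an ignore rule
git init -q demo && cd demo
printf 'target/\n*.log\n' > .gitignore
mkdir -p src target
touch src/main.rs target/build.log debug.log

# Tracked and untracked files, minus anything .gitignore excludes:
# lists .gitignore and src/main.rs, skips debug.log and target/build.log
git ls-files --cached --others --exclude-standard
```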
## Development

### Building from Source

Clone the repository, then use the standard Cargo workflow: build a debug version, run the tests, and build an optimized release.
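In commands (the repository URL is a placeholder; see the project page for the real one):

```shell
# Clone the repository (placeholder URL)
git clone https://example.com/mermaid.git && cd mermaid

# Build debug version
cargo build

# Run tests
cargo test

# Build optimized release
cargo build --release
```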
### Architecture

```
┌─────────────┐      ┌──────────────┐
│   Mermaid   │─────▶│    Ollama    │
│     CLI     │      │ Local Server │
└─────────────┘      └──────────────┘
       │                    │
       └─────────┬──────────┘
                 ▼
            ┌─────────┐
            │  Local  │
            │ Context │
            └─────────┘
```
**Key Components:**

- `models/ollama_direct.rs` - Direct Ollama connection
- `agents/` - File system, command execution, git operations
- `context/` - Project analysis and context loading
- `tui/` - Terminal user interface with Ratatui
- `app/` - Configuration and application state
**Privacy First:**
- All processing happens locally
- Your code never leaves your machine
- No external API calls or cloud dependencies
## Comparison
| Feature | Mermaid | Aider | Claude Code | GitHub Copilot |
|---|---|---|---|---|
| Open Source | Yes | Yes | No | No |
| Local Models Only | Yes | Yes | No | No |
| Model Support | Ollama | Multiple | Claude only | OpenAI only |
| Privacy | Full | Full | No | No |
| File Operations | Yes | Yes | Yes | Limited |
| Command Execution | Yes | Yes | Yes | No |
| Git Integration | Yes | Yes | Yes | Yes |
| Streaming UI | Yes | Yes | Yes | N/A |
| Rootless Containers | Yes (Podman) | No | No | No |
| Cost | Completely Free | Completely Free | $20/mo | $10/mo |
## FAQ

### Can I use this with my proprietary code?

Yes! With local models (Ollama), your code never leaves your machine.

### Does it work offline?

Yes, with Ollama and local models.

### Can I add support for other models?

Mermaid uses Ollama for model support. To use additional models:

1. Pull the model with Ollama: `ollama pull model-name`
2. Use it with Mermaid: `mermaid --model ollama/model-name`
3. Check available models at ollama.ai
## License
Licensed under either of:
- Apache License, Version 2.0 (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)
- MIT license (LICENSE-MIT or http://opensource.org/licenses/MIT)
at your option.
### Contribution
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be dual licensed as above, without any additional terms or conditions.
## Acknowledgments
- Built with Ratatui for the TUI
- Uses Ollama for local model support
- Inspired by Aider, Gemini-CLI, and Claude Code
## Community
- GitHub Issues: Report bugs or request features
Note: This project is under active development. Expect breaking changes until v1.0.
Made with love by the open source community