🤖 LogAI
AI-powered log analysis - Parse, group, and understand your logs with AI.
What is LogAI?
LogAI is a CLI tool that analyzes application logs, groups similar errors, and provides intelligent suggestions for fixing issues. Stop manually searching through massive log files and let LogAI do the detective work.
Features
✅ Multiple log formats - JSON, plain text, Apache, Nginx, Syslog
✅ Auto-detect log format - Automatically identifies format
✅ Group similar errors intelligently - Pattern-based grouping
✅ Deduplicate repeated errors
✅ Beautiful terminal output
✅ Track error frequency and timing
✅ AI-powered error explanations (OpenAI, Claude, Gemini, Ollama, AWS Bedrock)
✅ Parallel AI analysis - Process multiple errors concurrently (5x faster)
✅ Automatic retry - Exponential backoff for transient failures
✅ Solution suggestions with code examples
✅ Response caching to reduce API costs
✅ Configuration file - Customize analysis behavior
✅ MCP (Model Context Protocol) integration - Connect external tools and data sources
Coming Soon
🚧 Built-in MCP tools (search_docs, check_metrics, search_code)
🚧 Watch mode for real-time analysis
🚧 HTML reports
🚧 Additional log formats (Docker, Kubernetes, custom formats)
Quick Start
Installation
Quick Install (macOS/Linux)
Homebrew (macOS/Linux)
Cargo (All platforms)
Pre-built Binaries
Download from GitHub Releases:
- macOS (Intel & Apple Silicon)
- Linux (x86_64 & ARM64)
  - Standard: logai-linux-x86_64.tar.gz (Ubuntu 22.04+, RHEL 9+, AL2023)
  - Musl: logai-linux-x86_64-musl.tar.gz (Amazon Linux 2, Ubuntu 20.04+, CentOS 7+, any Linux)
- Windows (x86_64)
Amazon Linux 2:
From Source
Usage
Analyze a log file:
Analyze multiple files:
Pipe logs from stdin:
Limit output:
JSON output:
Interactive HTML report:
# With AI analysis
Enable verbose/debug logging:
AI-Powered Analysis
Analyze with OpenAI:
Analyze with Claude:
Analyze with Gemini:
Analyze with Ollama (local, free):
# Make sure Ollama is running: ollama serve
Analyze with AWS Bedrock:
# With region flag (recommended)
# With specific model
# Or set region via environment variable
Disable caching (force fresh analysis):
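The cache keys AI responses by the error content, so a repeated run over the same log never re-bills the provider. A minimal in-memory sketch of that lookup pattern, assuming nothing about LogAI's internals (the `ResponseCache` type and its methods are illustrative, and the real cache persists to disk):

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

/// Illustrative in-memory response cache keyed by a hash of the error text.
struct ResponseCache {
    entries: HashMap<u64, String>,
}

impl ResponseCache {
    fn new() -> Self {
        Self { entries: HashMap::new() }
    }

    fn key(error: &str) -> u64 {
        let mut h = DefaultHasher::new();
        error.hash(&mut h);
        h.finish()
    }

    /// Return the cached explanation, or compute (and store) it once.
    fn get_or_insert_with<F: FnOnce() -> String>(&mut self, error: &str, f: F) -> &String {
        self.entries.entry(Self::key(error)).or_insert_with(f)
    }
}

fn main() {
    let mut cache = ResponseCache::new();
    let mut calls = 0;
    for _ in 0..3 {
        cache.get_or_insert_with("Connection failed to database", || {
            calls += 1; // stands in for an expensive AI request
            "check DB host, port, and credentials".to_string()
        });
    }
    assert_eq!(calls, 1); // only the first lookup reaches the provider
    println!("AI called {calls} time(s) for 3 identical errors");
}
```

Disabling the cache simply bypasses the lookup, forcing every group through the provider again.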
Parallel Analysis
LogAI processes error groups in parallel for faster analysis. Control concurrency:
# Default: 5 concurrent requests
# High concurrency (faster, more resources)
# Low concurrency (slower, less resources)
# Sequential processing
Performance comparison (100 error groups):
- Sequential (concurrency=1): ~25 minutes
- Default (concurrency=5): ~5 minutes
- High (concurrency=15): ~2 minutes
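The speedup comes from fanning requests out across workers while capping how many run at once. A simplified std-thread sketch of bounded fan-out that processes jobs in waves of at most `concurrency` at a time (`process_parallel` is illustrative; the real scheduler also handles retries and progress tracking):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

/// Process `jobs` with at most `concurrency` threads running at once,
/// returning one result per job. Wave-based for simplicity.
fn process_parallel(jobs: Vec<String>, concurrency: usize) -> Vec<String> {
    let (tx, rx) = mpsc::channel();
    for chunk in jobs.chunks(concurrency) {
        let mut handles = Vec::new();
        for job in chunk {
            let tx = tx.clone();
            let job = job.clone();
            handles.push(thread::spawn(move || {
                // Stand-in for a slow AI request.
                thread::sleep(Duration::from_millis(10));
                tx.send(format!("analyzed: {job}")).unwrap();
            }));
        }
        for h in handles {
            h.join().unwrap();
        }
    }
    drop(tx); // close the channel so the drain below terminates
    let mut results = Vec::with_capacity(jobs.len());
    while let Ok(r) = rx.recv() {
        results.push(r);
    }
    results
}

fn main() {
    let jobs: Vec<String> = (0..12).map(|i| format!("group-{i}")).collect();
    let results = process_parallel(jobs, 5);
    assert_eq!(results.len(), 12);
    println!("processed {} groups with concurrency 5", results.len());
}
```

With request latency dominating, wall-clock time scales roughly as `groups / concurrency`, which matches the ~5x gap between sequential and the default above.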
Configuration File
Create ~/.logai/config.toml to set defaults:
```toml
# AI Settings
[ai]
provider = "ollama"             # Default AI provider

# Analysis settings
[analysis]
concurrency = 5                 # Concurrent AI requests (1-20)
retry = true                    # Retry failed requests
max_retries = 3                 # Maximum retry attempts
initial_retry_delay_ms = 1000   # Initial retry delay
max_retry_delay_ms = 30000      # Maximum retry delay
cache = true                    # Cache AI responses
max_message_length = 2000       # Max message length

# Provider configurations
[ai.ollama]
enabled = true
model = "llama3.2"
base_url = "http://localhost:11434"

[ai.openai]
enabled = false
# api_key = "sk-..."            # Or use OPENAI_API_KEY env var
# model = "gpt-4"
```
Configuration examples:
High-performance (self-hosted Ollama):

```toml
[analysis]
concurrency = 15
max_retries = 2
initial_retry_delay_ms = 500
```

Conservative (API rate limits):

```toml
[analysis]
concurrency = 2
max_retries = 5
initial_retry_delay_ms = 2000
max_retry_delay_ms = 60000
```

Fast-fail (development):

```toml
[analysis]
concurrency = 10
retry = false
```
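The retry settings combine into classic exponential backoff: each attempt doubles the previous delay, capped at the maximum. A small sketch of the delay calculation, assuming an initial delay of 1000 ms and a 30000 ms cap as in the defaults above (`backoff_delay_ms` is an illustrative name, not LogAI's API):

```rust
/// Delay before retry `attempt` (1-based): initial_ms * 2^(attempt - 1),
/// capped at max_ms. Saturates instead of overflowing for large attempts.
fn backoff_delay_ms(attempt: u32, initial_ms: u64, max_ms: u64) -> u64 {
    let factor = 1u64
        .checked_shl(attempt.saturating_sub(1))
        .unwrap_or(u64::MAX);
    initial_ms.saturating_mul(factor).min(max_ms)
}

fn main() {
    // With the defaults: 1000, 2000, 4000 ms across three attempts.
    for attempt in 1u32..=3 {
        println!(
            "retry {attempt}: wait {} ms",
            backoff_delay_ms(attempt, 1000, 30000)
        );
    }
}
```

The cap matters for the conservative profile: with a 60000 ms maximum, long retry chains settle at one attempt per minute instead of growing unboundedly.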
MCP Integration (Advanced)
LogAI supports Model Context Protocol (MCP) to connect external tools and data sources during analysis.
Create ~/.logai/mcp.toml:
```toml
timeout = 30

[[servers]]
name = "filesystem"
enabled = true

[servers.transport]
type = "Stdio"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
```
Use with MCP tools:
Disable MCP:
See MCP Integration Guide for more details.
Example Output
🤖 LogAI Analysis Report
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
📊 Summary
Errors found: 3 unique patterns (9 occurrences)
Time range: 2025-11-17 10:30:00 - 2025-11-17 10:35:00
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔴 Critical: Connection failed to database (3 occurrences)
First seen: 5 minutes ago | Last seen: 4 minutes ago
📋 Example:
Connection failed to database
📍 Location: db.rs:42
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
🔴 Critical: Timeout waiting for response from <DYNAMIC> (3 occurrences)
First seen: 1 minute ago | Last seen: 30 seconds ago
📋 Example:
Timeout waiting for response from api.example.com
Supported Log Formats
- JSON logs - Structured logs with fields like level, message, timestamp
- Plain text logs - Traditional text logs with timestamps and severity levels
- Apache logs - Apache HTTP server access and error logs (Common and Combined formats)
- Nginx logs - Nginx web server access and error logs
- Syslog - System logs in RFC3164 and RFC5424 formats
- Auto-detection - Automatically detects format from log content
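Auto-detection can be approximated by inspecting the shape of the first non-empty line. A deliberately naive sketch of the idea (the real detector samples more lines and recognizes Apache/Nginx patterns as well; `detect_format` and `LogFormat` are illustrative names):

```rust
#[derive(Debug, PartialEq)]
enum LogFormat {
    Json,
    Syslog,
    Plain,
}

/// Guess the format from the first non-empty line of a sample.
fn detect_format(sample: &str) -> LogFormat {
    let line = sample
        .lines()
        .find(|l| !l.trim().is_empty())
        .unwrap_or("")
        .trim();
    if line.starts_with('{') && line.ends_with('}') {
        LogFormat::Json
    } else if line.starts_with('<')
        && line.contains('>')
        && line[1..].chars().take_while(|c| *c != '>').all(|c| c.is_ascii_digit())
    {
        // RFC3164/RFC5424 messages begin with a <PRI> field, e.g. "<34>".
        LogFormat::Syslog
    } else {
        LogFormat::Plain
    }
}

fn main() {
    assert_eq!(detect_format(r#"{"level":"error","message":"boom"}"#), LogFormat::Json);
    assert_eq!(detect_format("<34>Oct 11 22:14:15 host app: failed"), LogFormat::Syslog);
    assert_eq!(detect_format("2025-11-17 10:30:00 ERROR boom"), LogFormat::Plain);
    println!("all samples classified");
}
```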
Development
Build:
Run tests:
Run with sample logs:
Supported AI Providers
| Provider | Models | Cost | Speed | Setup |
|---|---|---|---|---|
| OpenAI | GPT-4, GPT-4o-mini | Paid | Fast | API key required |
| Claude | Claude 3.5 Sonnet/Haiku | Paid | Fast | API key required |
| Gemini | Gemini 1.5 Flash/Pro | Paid | Fast | API key required |
| Bedrock | Claude, Llama, Titan | Paid | Fast | AWS credentials |
| Ollama | Llama 3.2, Mistral, etc. | Free | Medium | Local install |
How It Works
- Parse - Automatically detects the log format (JSON, plain text, Apache, Nginx, Syslog)
- Group - Clusters similar errors by normalizing dynamic values
- Deduplicate - Shows unique patterns with occurrence counts
- Analyze - Uses AI to explain errors and suggest fixes (optional)
- Processes multiple error groups in parallel (configurable concurrency)
- Automatic retry with exponential backoff for transient failures
- Real-time progress tracking with throughput and ETA
- Cache - Stores AI responses locally to reduce costs
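Step 2's normalization can be sketched by replacing tokens that look dynamic (numbers, host-like names) with a placeholder, so "Timeout waiting for response from api.example.com" and the same message for another host land in the one `<DYNAMIC>` group shown in the report above. The `normalize`/`group` helpers here are illustrative, not LogAI's actual implementation:

```rust
use std::collections::HashMap;

/// Replace dynamic-looking tokens with a placeholder so similar
/// messages normalize to the same pattern.
fn normalize(message: &str) -> String {
    message
        .split_whitespace()
        .map(|tok| {
            let dynamic = tok.chars().any(|c| c.is_ascii_digit())
                || tok.matches('.').count() >= 2; // e.g. api.example.com
            if dynamic { "<DYNAMIC>" } else { tok }
        })
        .collect::<Vec<_>>()
        .join(" ")
}

/// Count occurrences of each normalized pattern.
fn group(messages: &[&str]) -> HashMap<String, usize> {
    let mut groups = HashMap::new();
    for m in messages {
        *groups.entry(normalize(m)).or_insert(0) += 1;
    }
    groups
}

fn main() {
    let logs = [
        "Timeout waiting for response from api.example.com",
        "Timeout waiting for response from db.internal.net",
        "Timeout waiting for response from cache-01",
    ];
    let groups = group(&logs);
    assert_eq!(groups["Timeout waiting for response from <DYNAMIC>"], 3);
    println!("{} unique pattern(s)", groups.len());
}
```

Deduplication then falls out for free: each map entry is one unique pattern with its occurrence count.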
Roadmap
- Core parsing and grouping
- AI integration (OpenAI, Claude, Gemini, Ollama)
- Response caching
- MCP (Model Context Protocol) integration
- Built-in MCP tools (search_docs, check_metrics, search_code, query_logs)
- Watch mode for real-time analysis
- HTML reports
- Advanced log format support (Apache, Nginx, Syslog)
- Anomaly detection and trend analysis
Documentation
Getting Started
- Quick Start Guide - Get up and running in 5 minutes
- Usage Guide - Comprehensive usage examples
- Examples - Sample logs and real-world scenarios
- FAQ - Frequently asked questions
For Developers
- Architecture - System design and architecture
- API Documentation - Using LogAI as a library
- Development Guide - Setting up development environment
- Contributing - How to contribute to the project
Operations
- Deployment Guide - Production deployment strategies
- Troubleshooting - Common issues and solutions
- Security Policy - Security best practices and reporting
Reference
- Compatibility - Supported log formats
- Changelog - Version history
- MCP Integration - Model Context Protocol guide
Community
- Contributors - Recognition for contributors
- Maintainers - Project maintainers and governance
Contributing
Contributions are welcome! Please read our Contributing Guide and Code of Conduct.
Future Plans
See GitHub Issues for planned features and known issues.
License
MIT License - see LICENSE file
Author
Built with ❤️ by Ranjan Mohanty
Acknowledgments
- Inspired by the need for better log debugging tools
- Thanks to all AI providers for making this possible
- Built with Rust 🦀
Star History
If you find LogAI useful, please consider giving it a star ⭐