Lightweight, Real-time Debugging for AI Agents
Debug your Agents in Real Time. Trace, analyze, and optimize instantly. Seamless with LangChain, Google ADK, OpenAI, and all major frameworks.
Quick Start
First, install Homebrew if you haven't already, then install vLLora:
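The exact formula name isn't given here; a plausible install step, assuming the project publishes a `vllora` Homebrew formula (possibly via a tap), would be:

```shell
# Install vLLora via Homebrew (formula name assumed; check the project's
# install docs for the actual tap/formula name)
brew install vllora
```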
Start vLLora:
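Assuming the installed binary is named `vllora` and starts both the API server and the UI on their default ports, launching it might look like:

```shell
# Start the vLLora server (API on :9090, UI on :9091 by default)
vllora
```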
The server will start on http://localhost:9090 and the UI will be available at http://localhost:9091.
vLLora exposes an OpenAI-compatible chat completions API, so when your AI agents make calls through vLLora, it automatically collects traces and debugging information for every interaction.
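Because the API is OpenAI-compatible, existing agents can usually be redirected at vLLora by overriding the client's base URL. A sketch, assuming vLLora serves the API under a `/v1` prefix like OpenAI does:

```shell
# Point any OpenAI-compatible client at vLLora instead of api.openai.com
# (the /v1 path is an assumption; adjust if vLLora uses a different prefix)
export OPENAI_BASE_URL="http://localhost:9090/v1"
```

Most official OpenAI SDKs read this environment variable (or accept an equivalent `base_url` parameter), so agent code typically needs no changes.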

Send Your First Request
- Configure API keys: visit http://localhost:9091 to configure your AI provider API keys through the UI
- Make a request to see debugging in action:
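A minimal request sketch using curl, assuming vLLora mirrors OpenAI's `/v1/chat/completions` endpoint and forwards to the provider you configured (the model name here is illustrative):

```shell
# Send a chat completion through vLLora; the resulting trace
# should appear in the UI at http://localhost:9091
curl http://localhost:9090/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Hello from vLLora!"}]
  }'
```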
Features
Real-time Tracing - Monitor AI agent interactions as they happen, with live observability of model calls, tool invocations, and agent workflows. See exactly what your agents are doing in real time.

MCP Support - Full support for Model Context Protocol (MCP) servers, enabling seamless integration with external tools by connecting to MCP servers over HTTP and SSE.

Development
To get started with development:
- Clone the repository:
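The `target/release/` path below suggests a Rust/Cargo build. A sketch of the clone-and-build steps, assuming a standard Cargo layout (the repository URL is a placeholder):

```shell
# Clone the repository (URL is a placeholder) and build a release binary
git clone https://github.com/<org>/vllora.git
cd vllora
cargo build --release
```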
The binary will be available at target/release/vlora.
- Run tests:
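Assuming the same Cargo workspace, the test suite would be run with:

```shell
# Run the full test suite
cargo test
```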
Contributing
We welcome contributions! Please check out our Contributing Guide for guidelines on:
- How to submit issues
- How to submit pull requests
- Code style conventions
- Development workflow
- Testing requirements
License
This project is released under the Apache License 2.0. See the license file for more information.