# vex-llm
LLM provider integrations for the VEX Protocol.
## Supported Providers
- OpenAI - GPT-4, GPT-3.5, etc.
- Ollama - Local LLM inference
- DeepSeek - DeepSeek models
- Mistral - Mistral AI models
- Mock - Testing provider
## Features

- Secure WASM Sandbox - Isolate tool execution with wasmtime
- OOM Protection - Strict 10MB output memory limits
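The OOM protection above amounts to capping how many bytes of tool output are accepted. The helper below is a hypothetical sketch, not the actual vex-llm API; it only illustrates enforcing a 10 MiB limit:

```rust
/// Maximum bytes of tool output accepted from the sandbox (10 MiB).
const MAX_OUTPUT_BYTES: usize = 10 * 1024 * 1024;

/// Hypothetical helper mirroring the crate's OOM protection:
/// reject any output larger than the configured limit.
fn check_output(output: &[u8]) -> Result<&[u8], String> {
    if output.len() > MAX_OUTPUT_BYTES {
        Err(format!(
            "output of {} bytes exceeds the 10 MiB limit",
            output.len()
        ))
    } else {
        Ok(output)
    }
}

fn main() {
    // A small output passes through unchanged.
    assert!(check_output(&vec![0u8; 1024]).is_ok());
    // One byte over the cap is rejected.
    assert!(check_output(&vec![0u8; MAX_OUTPUT_BYTES + 1]).is_err());
    println!("limits enforced");
}
```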
## Installation
```toml
[dependencies]
vex-llm = "1.3.0"

# With OpenAI support
vex-llm = { version = "0.1", features = ["openai"] }
```
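Equivalently, the dependency can be added from the command line with `cargo add` (the `openai` feature name is taken from the snippet above):

```shell
# Add vex-llm with the OpenAI provider feature enabled
cargo add vex-llm --features openai
```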
## Quick Start

A minimal example using the mock provider (type and method names below are illustrative; see the crate documentation for the exact API):

```rust
use vex_llm::MockProvider;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // The mock provider is intended for testing; swap in
    // OpenAI, Ollama, DeepSeek, or Mistral for real inference.
    let provider = MockProvider::new();
    let reply = provider.complete("Hello, VEX!").await?;
    println!("{reply}");
    Ok(())
}
```
## License
Apache-2.0 License - see LICENSE for details.