# onetool

Sandboxed Lua runtime for LLM tool use.
## The Problem
LLM agents typically need dozens of specialized tools (calculator, date formatter, string manipulator, JSON parser, base64 encoder, hash generator, etc.). Each tool requires a round-trip to the LLM provider, and you pay for every token exchanged. Tools don't compose well, and you're always limited by what you thought to create.
## The Solution
onetool provides a sandboxed Lua REPL that LLMs can use as a single tool.
LLMs are already trained on programming languages. By giving them code execution instead of specialized tools, you reduce token costs (one tool call instead of many) while increasing flexibility. State persists between calls for multi-step reasoning. It's safe by design with comprehensive sandboxing.
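As an illustrative sketch (not code from the crate), a single Lua call can do the work of several single-purpose tools at once. The snippet below combines number extraction, arithmetic, and string formatting, three jobs that might otherwise each be a separate tool call:

```lua
-- One eval replaces hypothetical "extract numbers", "sum", and
-- "format" tools in a single round-trip.
local total = 0
for n in string.gmatch("12, 7, 42", "%d+") do
  total = total + tonumber(n)
end
result = string.format("sum = %d", total) -- "sum = 61"
```

Because the pieces compose inside one snippet, intermediate values never have to travel back through the LLM provider.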
## Installation
Basic REPL only (no LLM framework):
```toml
[dependencies]
onetool = "0.0.1-alpha.7"
```
With a framework adapter:
```toml
# Pick one (or more):
onetool = { version = "0.0.1-alpha.7", features = ["genai"] }
onetool = { version = "0.0.1-alpha.7", features = ["mistralrs"] }
onetool = { version = "0.0.1-alpha.7", features = ["rig"] }
onetool = { version = "0.0.1-alpha.7", features = ["aisdk"] }
onetool = { version = "0.0.1-alpha.7", features = ["mcp"] }
```
Feature flags:
| Feature | Description |
|---|---|
| `genai` | genai adapter |
| `mistralrs` | mistral.rs adapter |
| `rig` | rig-core `Tool` implementation |
| `aisdk` | aisdk integration |
| `mcp` | MCP server via rmcp |
| `json_schema` | JSON Schema generation (included by all above) |
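Since the adapter features are additive, they can also be combined in a single dependency line; for example (a hypothetical combination, pick whichever adapters you need):

```toml
[dependencies]
onetool = { version = "0.0.1-alpha.7", features = ["genai", "mcp"] }
```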
Note: Currently in alpha - API may change.
## Quick Start
```rust
use onetool::Repl;

// Create the sandboxed Lua runtime
let repl = Repl::new()?;

// Execute Lua code
let response = repl.eval("return 2 + 2")?;

// Access results
println!("{}", response.result); // "4"
println!("{}", response.output); // (print() output)
```
The REPL maintains state between calls, so variables and functions persist:
```rust
repl.eval("x = 10")?;
repl.eval("function double(n) return n * 2 end")?;
let result = repl.eval("return double(x) + x")?; // "30"
```
## Real Example
Here's an actual interaction from the included example:
User: "What's the sum of the 10 first prime numbers?"
The LLM calls `lua_repl` with:
```json
{
  "source_code": "
    local primes = {}
    local num = 2
    while #primes < 10 do
      local is_prime = true
      for i = 2, math.sqrt(num) do
        if num % i == 0 then
          is_prime = false
          break
        end
      end
      if is_prime then
        table.insert(primes, num)
      end
      num = num + 1
    end
    local sum = 0
    for _, p in ipairs(primes) do
      sum = sum + p
    end
    return sum
  "
}
```
Response:

```json
{
  "result": "129",
  "output": ""
}
```
LLM: "The sum of the first 10 prime numbers is 129."
The LLM wrote a complete algorithm, executed it safely, and got the answer - all without needing a specialized "prime number calculator" tool.
## Why Lua?
These were the criteria for choosing the execution language:
- Interpreted: We can't depend on a compile-eval loop
- Easy to embed: The runtime needs to live inside the host application
- Easy to sandbox: Giving too much power to an LLM can be dangerous
- Simple and expressive: LLMs need to write small, correct snippets
- Strong standard library: Especially for string manipulation
- Mature and well-known: Editor plugins, documentation, familiarity
Lua checks all these boxes. It's widespread enough (neovim config language, game scripting) that LLMs are well-trained on it.
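To give a flavor of that standard library strength, the built-in `string` library alone covers the casing, searching, and substitution work that might otherwise need dedicated tools (an illustrative snippet, not crate code):

```lua
-- Common string tasks covered by Lua's standard library
local s = "Hello, World"
upper = s:upper()          -- "HELLO, WORLD"
pos = s:find("World")      -- 8 (Lua string indices are 1-based)
swapped = s:gsub("o", "0") -- "Hell0, W0rld"
```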
## Running the Examples
- Interactive REPL (no API key required)
- Custom Rust functions in the runtime
- Interactive notebook demo (requires an API key)
- LLM framework examples (require API keys where noted)
## Project Status
This is still a toy project. Use with care - everything may break, and I might decide to change everything tomorrow.
- Version: 0.0.1-alpha.7
- API Stability: Expect breaking changes
- Production Ready: No
The core concept is stable (sandboxed Lua REPL for LLMs), but the implementation and API surface are experimental.
## License & Contributing
License: MIT - Copyright 2026 Caio Augusto Araujo Oliveira
Contributing:
- Early stage project - feedback welcome!
- Issues and PRs appreciated
Built with mlua.