§librlm
Implementation of the Recursive Language Models (RLM) algorithm as described in “Recursive Language Models” (Zhang, Kraska, Khattab — MIT CSAIL, Jan 2026).
RLM enables LLMs to handle arbitrarily long prompts by treating the prompt as part of an external environment. The LLM interacts with it through a persistent Lua REPL, writing code to peek at the prompt, decompose it, and recursively invoke sub-LLMs over manageable chunks.
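The decompose-and-recurse idea can be sketched in plain Rust. This is a conceptual illustration only, not librlm's API: the model call is stubbed out, and the split assumes ASCII input for simplicity.

```rust
// Conceptual sketch of recursive decomposition (not librlm's API):
// if the prompt fits the context budget, answer it directly (stubbed);
// otherwise split it and recurse over the halves, combining partial answers.
fn answer(prompt: &str, max_len: usize) -> String {
    if prompt.len() <= max_len {
        // Base case: small enough for a single (stubbed) model call.
        return format!("[answer over {} chars]", prompt.len());
    }
    // Recursive case: split roughly in half and recurse.
    // Assumes ASCII input; real code would split on char boundaries.
    let mid = prompt.len() / 2;
    let (left, right) = prompt.split_at(mid);
    format!("{} + {}", answer(left, max_len), answer(right, max_len))
}

fn main() {
    println!("{}", answer("abcdefgh", 4));
}
```

In librlm itself this loop is driven by the LLM writing Lua code in the REPL rather than by a fixed splitting rule.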
§Quick Start
```rust
use librlm::Rlm;

// `completion` is async, so an async runtime (e.g. tokio) is required.
#[tokio::main]
async fn main() -> Result<(), librlm::RlmError> {
    let rlm = Rlm::builder()
        .root_model("gpt-5")
        .root_api_key("sk-...")
        .sub_model("gpt-5-mini")
        .max_iterations(30)
        .build()?;

    let result = rlm.completion("very long prompt...", Some("What is X?")).await?;
    println!("{}", result.response);
    Ok(())
}
```

Structs§
- CodeBlock - A code block extracted from LLM output.
- CompletionResponse - Response from an LLM completion call.
- Message - A single message in a conversation.
- OpenAiBackend - OpenAI-compatible API backend.
- ReplResult - Result of executing code in the REPL.
- Rlm - The main RLM (Recursive Language Model) engine.
- RlmBuilder - Builder for constructing an Rlm instance.
- RlmCompletion - The final result of an RLM completion.
- RlmConfig - Configuration for the RLM algorithm.
- UsageInfo - Token usage info from an LLM response.
Enums§
- FinalAnswer - How the final answer was signaled.
- RlmError - All errors that can occur in the RLM library.
- Role - Role in a conversation message.
Traits§
- LlmBackend - Trait for LLM backends. Implement this for custom providers.
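Custom providers plug in by implementing LlmBackend. The exact trait signature is not reproduced on this page, so the sketch below uses a hypothetical stand-in trait of a similar shape purely to show the pattern; the real trait's method names and types may differ.

```rust
// Hypothetical stand-in for librlm's `LlmBackend` trait (an assumption,
// not the real signature): a backend turns (role, content) messages into
// a completion string or an error.
trait LlmBackend {
    fn complete(&self, messages: &[(String, String)]) -> Result<String, String>;
}

// A trivial custom provider that echoes the last message's content back.
struct EchoBackend;

impl LlmBackend for EchoBackend {
    fn complete(&self, messages: &[(String, String)]) -> Result<String, String> {
        messages
            .last()
            .map(|(_role, content)| format!("echo: {content}"))
            .ok_or_else(|| "empty conversation".to_string())
    }
}

fn main() {
    let backend = EchoBackend;
    let msgs = vec![("user".to_string(), "hello".to_string())];
    println!("{}", backend.complete(&msgs).unwrap());
}
```

A real backend would perform an HTTP call to the provider's completion endpoint in `complete` and map transport failures into the library's error type.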