
Crate librlm

§librlm

Implementation of the Recursive Language Models (RLM) algorithm as described in “Recursive Language Models” (Zhang, Kraska, Khattab — MIT CSAIL, Jan 2026).

RLM enables LLMs to handle arbitrarily long prompts by treating them as part of an external environment. The LLM interacts with the prompt through a persistent Lua REPL, writing code to peek at, decompose, and recursively invoke sub-LLMs over manageable chunks.
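The decompose-and-recurse idea can be illustrated with a small self-contained sketch. This is not librlm's actual internals; `sub_query` and `recursive_answer` are hypothetical stand-ins for a sub-LLM call and the outer decomposition loop, shown only to make the shape of the algorithm concrete.

```rust
// Conceptual sketch (not librlm's real implementation): split an oversized
// prompt into chunks, run a sub-query over each chunk, and collect the
// per-chunk summaries for the root model to reason over.

fn sub_query(chunk: &str) -> String {
    // Stand-in for a recursive sub-LLM call: here we just count 'X' mentions.
    format!("{} match(es)", chunk.matches('X').count())
}

fn recursive_answer(prompt: &str, chunk_size: usize) -> Vec<String> {
    prompt
        .as_bytes()
        .chunks(chunk_size)
        .map(|c| sub_query(std::str::from_utf8(c).unwrap_or("")))
        .collect()
}

fn main() {
    let long_prompt = "aaXbbXcc".repeat(4); // pretend this is megabytes of context
    let summaries = recursive_answer(&long_prompt, 8);
    assert_eq!(summaries.len(), 4);
    println!("{:?}", summaries);
}
```

In the real algorithm the chunking strategy is chosen by the LLM itself, by writing code in the Lua REPL rather than following a fixed byte-length split as above.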

§Quick Start

use librlm::Rlm;

// `build()?` and `.await?` need a fallible async context;
// shown here with tokio as the assumed async runtime.
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let rlm = Rlm::builder()
        .root_model("gpt-5")
        .root_api_key("sk-...")
        .sub_model("gpt-5-mini")
        .max_iterations(30)
        .build()?;

    let result = rlm.completion("very long prompt...", Some("What is X?")).await?;
    println!("{}", result.response);
    Ok(())
}

Structs§

CodeBlock
A code block extracted from LLM output.
CompletionResponse
Response from an LLM completion call.
Message
A single message in a conversation.
OpenAiBackend
OpenAI-compatible API backend.
ReplResult
Result of executing code in the REPL.
Rlm
The main RLM (Recursive Language Model) engine.
RlmBuilder
Builder for constructing an Rlm instance.
RlmCompletion
The final result of an RLM completion.
RlmConfig
Configuration for the RLM algorithm.
UsageInfo
Token usage info from an LLM response.

Enums§

FinalAnswer
How the final answer was signaled.
RlmError
All errors that can occur in the RLM library.
Role
Role in a conversation message.

Traits§

LlmBackend
Trait for LLM backends. Implement this for custom providers.
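A custom provider might look like the sketch below. The real `LlmBackend` trait is likely async and built around librlm's `Message` and `CompletionResponse` types; since the exact signatures are not shown on this page, the sketch defines simplified stand-in types and a synchronous trait of the assumed shape, so it is self-contained and only illustrative.

```rust
// Hypothetical sketch: simplified stand-ins for librlm's Message and
// CompletionResponse, plus a trait mirroring the assumed shape of LlmBackend.
// The real trait is likely async and richer than this.

struct Message {
    role: String,
    content: String,
}

struct CompletionResponse {
    text: String,
}

trait LlmBackend {
    fn complete(&self, messages: &[Message]) -> Result<CompletionResponse, String>;
}

/// A trivial echo backend, useful for testing without network access.
struct EchoBackend;

impl LlmBackend for EchoBackend {
    fn complete(&self, messages: &[Message]) -> Result<CompletionResponse, String> {
        let last = messages.last().ok_or("empty conversation")?;
        Ok(CompletionResponse {
            text: format!("echo: {}", last.content),
        })
    }
}

fn main() {
    let backend = EchoBackend;
    let msgs = vec![Message {
        role: "user".into(),
        content: "hi".into(),
    }];
    let resp = backend.complete(&msgs).unwrap();
    assert_eq!(resp.text, "echo: hi");
    println!("{}", resp.text);
}
```

A mock backend like this is the usual way to exercise the rest of the engine deterministically in tests, without spending tokens on a live API.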