
RLM Runtime


Sandboxed Code Execution for LLMs


RLM Runtime enables LLMs to recursively decompose tasks, execute real code in isolated environments, and retrieve context on demand. Instead of simulating computation in tokens, the model runs actual code, which is cheaper, more reliable, and auditable. An MCP server is included for Claude Desktop and Claude Code integration.

Key Features

Everything you need to get the most out of RLM Runtime.

Recursive Completion

LLMs can spawn sub-calls, execute code, and aggregate results.
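The decompose-then-aggregate loop can be sketched in plain Python. Everything below is illustrative, not RLM Runtime's actual API: `recursive_complete` and the toy solver stand in for real LLM sub-calls, which would replace the stub callables.

```python
# Illustrative sketch of recursive completion. The names below are
# hypothetical; in RLM Runtime the solve/decompose steps would be LLM calls.
from typing import Callable, List, Optional

def recursive_complete(task: str,
                       solve: Callable[[str], Optional[str]],
                       decompose: Callable[[str], List[str]],
                       aggregate: Callable[[str, List[str]], str],
                       depth: int = 0, max_depth: int = 3) -> str:
    """Try to solve a task directly; otherwise split it into subtasks,
    solve each one recursively, and aggregate the sub-results."""
    direct = solve(task)
    if direct is not None:
        return direct
    if depth >= max_depth:
        return "unresolved: " + task
    subtasks = decompose(task)
    results = [recursive_complete(t, solve, decompose, aggregate,
                                  depth + 1, max_depth) for t in subtasks]
    return aggregate(task, results)

# Toy demo: "solve" only handles short strings; longer tasks are split in half.
def toy_solve(t: str) -> Optional[str]:
    return t.upper() if len(t) <= 4 else None

def toy_decompose(t: str) -> List[str]:
    mid = len(t) // 2
    return [t[:mid], t[mid:]]

def toy_aggregate(task: str, parts: List[str]) -> str:
    return "".join(parts)

result = recursive_complete("abcdefgh", toy_solve, toy_decompose, toy_aggregate)
# result == "ABCDEFGH": the task was split into "abcd" and "efgh",
# each solved directly, then re-joined by the aggregator.
```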

Sandboxed REPL

Local RestrictedPython or Docker isolation for secure code execution.
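To illustrate the idea of process-level isolation, here is a minimal stdlib-only sketch that runs code in a fresh Python subprocess with a wall-clock timeout. This is an assumption-laden stand-in: RLM Runtime's actual sandbox uses RestrictedPython or Docker, not this helper.

```python
# Minimal sketch of subprocess isolation with a timeout; illustrative only,
# not RLM Runtime's RestrictedPython/Docker sandbox.
import subprocess
import sys

def run_sandboxed(code: str, timeout: float = 5.0):
    """Execute code in a fresh Python subprocess and capture its output.

    Returns (stdout, stderr, returncode). The -I flag runs the interpreter
    in isolated mode, ignoring environment variables and user site-packages.
    """
    proc = subprocess.run(
        [sys.executable, "-I", "-c", code],
        capture_output=True, text=True, timeout=timeout,
    )
    return proc.stdout, proc.stderr, proc.returncode

out, err, rc = run_sandboxed("print(2 ** 10)")
# out == "1024\n", rc == 0
```

A subprocess gives crash and state isolation but not resource or filesystem isolation, which is why container-level sandboxing (Docker) is the stronger option of the two.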

MCP Server

Claude Desktop/Code integration with multi-project support.
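Claude Desktop discovers MCP servers through its `claude_desktop_config.json` file. The entry below follows that standard format, but the server name, command, and arguments are illustrative assumptions, not RLM Runtime's documented invocation:

```json
{
  "mcpServers": {
    "rlm-runtime": {
      "command": "rlm-runtime",
      "args": ["serve", "--project", "/path/to/project"]
    }
  }
}
```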

Multi-Provider

OpenAI, Anthropic, and 100+ providers via LiteLLM.
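LiteLLM's core idea is a single `completion()` entry point that dispatches on a provider-prefixed model string (e.g. `"anthropic/..."`). The pure-Python sketch below illustrates that dispatch pattern only; the handlers are stubs, and the model names are placeholders rather than real model IDs.

```python
# Sketch of LiteLLM-style routing: one completion() call dispatches on the
# provider prefix of the model string. Handlers are stubs; real ones would
# call each provider's API.
def _openai_handler(model, messages):
    return f"[openai:{model}] ok"

def _anthropic_handler(model, messages):
    return f"[anthropic:{model}] ok"

PROVIDERS = {
    "openai": _openai_handler,
    "anthropic": _anthropic_handler,
}

def completion(model: str, messages: list) -> str:
    # Split "provider/model-name" into its two parts.
    provider, _, name = model.partition("/")
    if provider not in PROVIDERS:
        raise ValueError(f"unknown provider: {provider}")
    return PROVIDERS[provider](name, messages)

reply = completion("anthropic/claude-sonnet", [{"role": "user", "content": "hi"}])
# reply == "[anthropic:claude-sonnet] ok"
```

Adding a provider is then just registering another handler in the table, which is how a single interface scales to 100+ backends.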