Recursive Language Model agent inspired by MIT CSAIL research
CA: 3ynxgnXhmbZR7kinwU3NEryExkXEmRXhoNL4qV9sBAGS
Website • X / Twitter • GitHub
Agent RLM is an experimental conversational agent built around the idea of Recursive Language Models (RLMs).
The project is inspired by the Recursive Language Models paper by Alex Zhang and collaborators at MIT CSAIL, and is based on a fork of the official open-source RLM repository. The goal here is not to reproduce the full research framework, but to adapt the core inference ideas into a lightweight, interactive agent.
In particular, this project explores how recursive refinement, context-as-state, and convergence-focused responses can be applied in a real-time chat setting.
The original RLM framework proposes an inference paradigm in which language models can programmatically inspect, decompose, and recursively reason over their input context, rather than relying on a single monolithic prompt.
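As a rough illustration only (not the paper's actual implementation), "recursively reasoning over the input context" can be pictured as a divide-and-combine loop. Here `call_model` is a stand-in for a real LLM call (e.g. Gemini); it just truncates text so the sketch runs without an API key.

```python
# Toy sketch of recursive reduction over a long context.
# `call_model` is a placeholder for a real LLM call; here it crudely
# "summarizes" by truncating, so the example is runnable offline.

def call_model(prompt: str) -> str:
    """Placeholder for an LLM call; returns a crude 'summary'."""
    return prompt[:80]

def recursive_answer(context: str, question: str, chunk_size: int = 200) -> str:
    # Base case: the context fits in one prompt, so answer directly.
    if len(context) <= chunk_size:
        return call_model(f"Context: {context}\nQuestion: {question}")
    # Recursive case: split the context, reduce each half, then combine.
    mid = len(context) // 2
    left = recursive_answer(context[:mid], question, chunk_size)
    right = recursive_answer(context[mid:], question, chunk_size)
    return call_model(f"Combine: {left} | {right}\nQuestion: {question}")

print(recursive_answer("x" * 1000, "What is the context about?"))
```

The real framework is far richer (programmatic context inspection, REPL environments), but the split/reduce/combine shape above captures the basic recursion.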
If you are interested in the full research implementation, benchmarks, or sandboxed REPL environments, please refer to the original work:
- Paper: https://arxiv.org/abs/2512.24601
- Blog: https://alexzhang13.github.io/blog/2025/rlm/
- Repository: https://github.com/alexzhang13/rlm
This repository should be viewed as an agent-oriented adaptation, not a drop-in replacement for the research codebase.
This repository is a fork and adaptation of the original MIT CSAIL RLM codebase, with several structural and architectural changes made to support a deployable, agent-oriented system.
- **Agent-oriented backend**
  A new `rlm-backend/` directory has been added, containing a lightweight Python backend that exposes RLM as a conversational service (e.g. via HTTP). This backend wraps the core RLM logic and is designed for real-time interaction rather than offline benchmarking.
- **Integration with Eliza-style agent configuration**
  The agent's personality, tone, and conversational behavior are informed by an Eliza-style `character.json` configuration. Rather than replacing RLM, this configuration is injected into the RLM system prompt, allowing recursive inference to operate underneath a consistent, human-facing agent persona.
- **Simplified execution model**
  Many research-oriented components (e.g. multi-environment sandboxing, heavy logging, benchmark tooling) are left untouched but are not required to run the agent. The focus here is on:
  - fast iteration
  - deployability (e.g. Railway)
  - conversational UX
- **Frontend-friendly architecture**
  The backend is designed to be consumed by a web frontend or agent framework, making it suitable for use with Eliza Cloud, custom UIs, or other agent orchestration layers.
These changes are intentionally minimal and additive: the original RLM core remains intact, while the surrounding structure adapts it for agent use.
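To make the persona-injection idea concrete, here is a minimal sketch of how an Eliza-style `character.json` might be folded into the RLM system prompt. The field names (`name`, `bio`, `style.tone`) are assumptions for illustration; the actual schema and prompt wiring in this repository may differ.

```python
import json

# Hypothetical character.json contents; the real schema may differ.
character = {
    "name": "Agent RLM",
    "bio": "A recursive reasoning agent.",
    "style": {"tone": "concise, friendly"},
}

def build_system_prompt(char: dict) -> str:
    """Prepend persona details to the RLM-style system instructions."""
    persona = (
        f"You are {char['name']}. {char['bio']} "
        f"Tone: {char['style']['tone']}."
    )
    rlm_instructions = "Recursively decompose the user's request before answering."
    return persona + "\n" + rlm_instructions

print(build_system_prompt(character))
```

The key design point is additivity: the persona text wraps the recursive-inference instructions rather than replacing them, so the RLM core stays untouched.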
You can run the Agent RLM backend on your own machine with a standard Python setup.
- Python 3.10+
- A Google Gemini API key
Set your API key as an environment variable:

```bash
export GEMINI_API_KEY="your_api_key_here"
```

Clone the repository:

```bash
git clone https://github.com/matomoniwano/AgentRLM.git
cd AgentRLM
```

Create and activate a virtual environment:

```bash
python -m venv .venv
source .venv/bin/activate   # macOS / Linux
.venv\Scripts\Activate      # Windows
```

Install dependencies from the backend directory:

```bash
cd rlm-backend
pip install -r requirements.txt
```

This installs both the backend dependencies and the local RLM package from the repository.
### Running the Backend
Start the backend server:
```bash
uvicorn app:app --reload
```

By default, the server will be available at `http://localhost:8000`.
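The request/response contract of the `/chat` endpoint (field names taken from the curl example below) can be sketched in plain Python. This is a stub of the shape only; the real backend wires it into the HTTP server and runs recursive inference instead of echoing.

```python
# Sketch of the /chat contract: a JSON body with "history" (prior turns)
# and "message" (the new user message). The reply here is a stub; the
# real handler runs RLM inference under the agent persona.

def handle_chat(payload: dict) -> dict:
    history = payload.get("history", [])   # prior turns, possibly empty
    message = payload["message"]           # required user message
    reply = f"(stub reply to: {message})"  # placeholder for RLM output
    return {"reply": reply, "turns": len(history) + 1}
```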
You can test it with a simple request:

```bash
curl -X POST http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"history": [], "message": "Explain recursion in simple terms"}'
```

NEW: AgentRLM now includes a feature that converts academic research papers into executable Jupyter notebooks!
The Paper Decomposer:
- 📄 Extracts structure and experiments from PDF papers or arXiv URLs
- 🤖 Uses RLM to generate runnable notebook cells
- 🐳 Executes notebooks safely in a Docker REPL
- 🔄 Iteratively fixes errors until notebooks run successfully
- 📊 Creates visualizations and reproduces key results
Quick Start:

```bash
# Install with paper decomposer support
pip install -e ".[paper_decomposer]"

# Convert a paper to notebook
python -m paper_decomposer.controller paper.pdf --experiment 0 --toy

# Or from arXiv
python -m paper_decomposer.controller https://arxiv.org/abs/2301.12345
```

Documentation:

- Full Guide: paper_decomposer/docs/README.md
- Quick Start: paper_decomposer/QUICKSTART.md
- Feature Overview: paper_decomposer/FEATURE_OVERVIEW.md
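The "iteratively fixes errors until notebooks run" step above is, in spirit, an execute-and-repair loop. The sketch below is generic and not the paper decomposer's actual code: `execute` and `repair` are stand-ins for running notebook cells in Docker and asking RLM to rewrite a failing cell.

```python
# Generic execute-and-repair loop. Both helpers are placeholders:
# the real pipeline runs cells in a Docker REPL and uses the model
# to rewrite failing code instead of this canned string fix.

def execute(code: str):
    """Run a code cell; return None on success or the error message."""
    try:
        exec(code, {})
        return None
    except Exception as e:
        return str(e)

def repair(code: str, error: str) -> str:
    """Placeholder 'fix': a real version would prompt the model."""
    return code.replace("1/0", "1/1")

def run_until_success(code: str, max_attempts: int = 3) -> bool:
    for _ in range(max_attempts):
        error = execute(code)
        if error is None:
            return True
        code = repair(code, error)
    return False

print(run_until_success("x = 1/0"))  # a broken cell gets 'repaired'
```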
This project is experimental and evolving. Design decisions prioritize clarity, interaction, and deployment simplicity over completeness or benchmark performance.

