First RLM Agent on Solana · Built on MIT CSAIL research · Thinks in loops, not prompts

Agent RLM

Recursive Language Model agent inspired by MIT CSAIL research

CA: 3ynxgnXhmbZR7kinwU3NEryExkXEmRXhoNL4qV9sBAGS



Overview

Agent RLM is an experimental conversational agent built around the idea of Recursive Language Models (RLMs).

The project is inspired by the Recursive Language Models paper by Alex Zhang and collaborators at MIT CSAIL, and is based on a fork of the official open-source RLM repository. The goal here is not to reproduce the full research framework, but to adapt the core inference ideas into a lightweight, interactive agent.

In particular, this project explores how recursive refinement, context-as-state, and convergence-focused responses can be applied in a real-time chat setting.

Relationship to the Original RLM Work

The original RLM framework proposes an inference paradigm in which language models can programmatically inspect, decompose, and recursively reason over their input context, rather than relying on a single monolithic prompt.
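As a rough illustration of this paradigm (a toy sketch, not the research implementation), recursive inference can be pictured as a function that splits an oversized context, recurses on the pieces, and synthesizes the partial answers with one more call. `call_model` below is a hypothetical stand-in for a real LLM API call:

```python
# Toy sketch of recursive inference over a long context. A "root" call
# splits the context when it exceeds a per-call budget, recurses on each
# half, then synthesizes the partial answers. Not the research codebase.

MAX_CHARS = 200  # hypothetical per-call context budget

def call_model(prompt: str, context: str) -> str:
    # Placeholder for a real model call (e.g. Gemini); here we just
    # return the first sentence of the context as a mock "answer".
    return context.split(".")[0].strip()

def recursive_answer(prompt: str, context: str, depth: int = 0) -> str:
    # Base case: the context fits in one call (or we hit a depth limit).
    if len(context) <= MAX_CHARS or depth >= 3:
        return call_model(prompt, context)
    # Recursive case: decompose, solve sub-contexts, then synthesize.
    mid = len(context) // 2
    left = recursive_answer(prompt, context[:mid], depth + 1)
    right = recursive_answer(prompt, context[mid:], depth + 1)
    return call_model(prompt, f"{left}. {right}")
```

The key property is that no single call ever sees the full context; the model reasons over its input piecewise, which is the idea this agent adapts.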

If you are interested in the full research implementation, benchmarks, or sandboxed REPL environments, please refer to the original RLM repository and paper.

This repository should be viewed as an agent-oriented adaptation, not a drop-in replacement for the research codebase.

Project Structure & Modifications

This repository is a fork and adaptation of the original MIT CSAIL RLM codebase, with several structural and architectural changes made to support a deployable, agent-oriented system.

Key Modifications


  • Agent-oriented backend
    A new rlm-backend/ directory has been added, containing a lightweight Python backend that exposes RLM as a conversational service (e.g. via HTTP). This backend wraps the core RLM logic and is designed for real-time interaction rather than offline benchmarking.

  • Integration with Eliza-style agent configuration
    The agent’s personality, tone, and conversational behavior are informed by an Eliza-style character.json configuration.
    Rather than replacing RLM, this configuration is injected into the RLM system prompt, allowing recursive inference to operate underneath a consistent, human-facing agent persona.

  • Simplified execution model
    Many research-oriented components (e.g. multi-environment sandboxing, heavy logging, benchmark tooling) are left untouched but are not required to run the agent. The focus here is on:

    • fast iteration
    • deployability (e.g. Railway)
    • conversational UX
  • Frontend-friendly architecture
    The backend is designed to be consumed by a web frontend or agent framework, making it suitable for use with Eliza Cloud, custom UIs, or other agent orchestration layers.

These changes are intentionally minimal and additive: the original RLM core remains intact, while the surrounding structure adapts it for agent use.
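The persona-injection step described above can be sketched in a few lines. This is a hedged illustration: the field names (`name`, `bio`, `style`) are assumptions about the Eliza-style schema, not the exact `character.json` layout used by this repository.

```python
import json

# Fold an Eliza-style character definition into the RLM system prompt,
# so recursive inference runs underneath a consistent persona.
# Field names here are illustrative assumptions.

def build_system_prompt(character: dict, base_prompt: str) -> str:
    persona = (
        f"You are {character.get('name', 'an assistant')}. "
        f"{' '.join(character.get('bio', []))} "
        f"Style: {', '.join(character.get('style', []))}"
    )
    return f"{persona.strip()}\n\n{base_prompt}"

character = json.loads(
    '{"name": "Agent RLM",'
    ' "bio": ["Thinks in loops, not prompts."],'
    ' "style": ["concise", "technical"]}'
)
prompt = build_system_prompt(character, "Answer using recursive refinement.")
```

Because the persona is prepended rather than interleaved, the recursive machinery stays untouched; only the outermost prompt changes.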


Running Agent RLM Locally

You can run the Agent RLM backend on your own machine with a standard Python setup.

Prerequisites

  • Python 3.10+
  • A Google Gemini API key

Set your API key as an environment variable:

export GEMINI_API_KEY="your_api_key_here"

Installation

Clone the repository:

git clone https://github.com/matomoniwano/AgentRLM.git
cd AgentRLM

Create and activate a virtual environment:

python -m venv .venv
source .venv/bin/activate   # macOS / Linux
.venv\Scripts\activate      # Windows

Install dependencies from the backend directory:

cd rlm-backend
pip install -r requirements.txt

This installs both the backend dependencies and the local RLM package from the repository.


Running the Backend

Start the backend server:

uvicorn app:app --reload

By default, the server will be available at:

http://localhost:8000

You can test it with a simple request:

curl -X POST http://localhost:8000/chat \
  -H "Content-Type: application/json" \
  -d '{"history": [], "message": "Explain recursion in simple terms"}'
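The same request can be built from Python with only the standard library. The `history`/`message` field names come from the curl example above; the reply shape is not shown here because it is an implementation detail of the backend.

```python
import json
import urllib.request

# Build (but do not yet send) a POST request matching the curl example.
def build_chat_request(base_url: str, history: list, message: str):
    body = json.dumps({"history": history, "message": message}).encode()
    return urllib.request.Request(
        f"{base_url}/chat",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request(
    "http://localhost:8000", [], "Explain recursion in simple terms"
)
# With the server running, urllib.request.urlopen(req) sends the request.
```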

Features

Paper Decomposer & Notebook Builder

NEW: AgentRLM now includes a feature that converts academic research papers into executable Jupyter notebooks!

The Paper Decomposer:

  • 📄 Extracts structure and experiments from PDF papers or arXiv URLs
  • 🤖 Uses RLM to generate runnable notebook cells
  • 🐳 Executes notebooks safely in DockerREPL
  • 🔄 Iteratively fixes errors until notebooks run successfully
  • 📊 Creates visualizations and reproduces key results
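The "iteratively fixes errors" step can be sketched as a retry loop: execute the notebook, and on failure ask the model for a patched version, up to a retry budget. `run_notebook` and `ask_model_to_fix` below are hypothetical stand-ins for the DockerREPL execution and the RLM repair call.

```python
# Toy sketch of the run-and-repair loop. Placeholders simulate a cell
# failure ("BUG") and a model-generated patch; the real system executes
# cells in Docker and asks RLM to rewrite failing ones.

def run_notebook(cells: list) -> None:
    # Placeholder execution: fails on any cell containing "BUG".
    for cell in cells:
        if "BUG" in cell:
            raise RuntimeError(f"cell failed: {cell}")

def ask_model_to_fix(cells: list, error: Exception) -> list:
    # Placeholder repair: the real system would send the error back
    # to the model and receive corrected cells.
    return [cell.replace("BUG", "fixed") for cell in cells]

def fix_until_green(cells: list, max_attempts: int = 3):
    for attempt in range(max_attempts):
        try:
            run_notebook(cells)
            return cells, attempt  # success: return cells + retry count
        except RuntimeError as err:
            cells = ask_model_to_fix(cells, err)
    raise RuntimeError("notebook still failing after retries")

cells, attempts = fix_until_green(["print('ok')", "x = BUG"])
```

Bounding the loop with `max_attempts` matters: without it, a notebook the model cannot repair would retry forever.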

Quick Start:

# Install with paper decomposer support
pip install -e ".[paper_decomposer]"

# Convert a paper to notebook
python -m paper_decomposer.controller paper.pdf --experiment 0 --toy

# Or from arXiv
python -m paper_decomposer.controller https://arxiv.org/abs/2301.12345

Documentation:


Notes

This project is experimental and evolving. Design decisions prioritize clarity, interaction, and deployment simplicity over completeness or benchmark performance.
